LEARNED DENSITY ESTIMATION WITH IMPLICIT MANIFOLDS

    Publication number: US20230385693A1

    Publication date: 2023-11-30

    Application number: US18202450

    Application date: 2023-05-26

    CPC classification number: G06N20/00 G06N7/01

    Abstract: Probability density modeling, such as for generative modeling, of data on a manifold of a high-dimensional space is performed with an implicitly-defined manifold, such that the points belonging to the manifold form the zero set of a manifold-defining function. An energy model is trained to learn an energy function that, evaluated on the manifold, describes a probability density over the manifold. As such, the relevant portions of the energy function are “filtered through” the defined manifold during training and in application. The combined energy function and manifold-defining function provide an “energy-based implicit manifold” that can more effectively model probability densities on a manifold in the high-dimensional space. Because the manifold-defining function and the energy function are defined across the whole high-dimensional space, they may more effectively learn geometries and avoid the distortions due to changes in dimension that occur for models that represent the manifold in a lower-dimensional space.
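
    To make the construction concrete, below is a minimal NumPy sketch of an energy-based implicit manifold. The toy circle manifold, the fixed (rather than learned) energy, and all function names are illustrative assumptions rather than the patent's implementation; the sketch only shows how a density is evaluated on the zero set of a manifold-defining function.

        import numpy as np

        def manifold_defining_fn(x):
            # Implicitly defines the manifold as its zero set.
            # Toy example: the unit circle in R^2, {x : ||x||^2 - 1 = 0}.
            return np.sum(x ** 2, axis=-1) - 1.0

        def energy_fn(x):
            # Stand-in for a learned energy over the full ambient space:
            # lower energy (hence higher density) toward x0 = -1.
            return 2.0 * x[..., 0]

        def on_manifold_density(x, tol=1e-6):
            # Unnormalized density exp(-E(x)), "filtered through" the
            # manifold: nonzero only where manifold_defining_fn vanishes.
            on_manifold = np.abs(manifold_defining_fn(x)) < tol
            return np.where(on_manifold, np.exp(-energy_fn(x)), 0.0)

        # Evaluate the density at points parameterized along the circle.
        angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
        points = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
        print(on_manifold_density(points))  # largest near angle = pi (x0 = -1)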

    IDENTIFYING AND MITIGATING DISPARATE GROUP IMPACT IN DIFFERENTIAL-PRIVACY MACHINE-LEARNED MODELS

    Publication number: US20230385444A1

    Publication date: 2023-11-30

    Application number: US18202440

    Application date: 2023-05-26

    CPC classification number: G06F21/6245

    Abstract: A model evaluation system evaluates the extent to which privacy-aware training processes affect the direction of training gradients for different groups. A modified differential-privacy (“DP”) training process provides per-sample gradient adjustments with parameters that may be adaptively modified for different data batches. Per-sample gradients are modified with respect to a reference bound and a clipping bound. A scaling factor may be determined for each per-sample gradient as the larger of the reference bound and the magnitude of that per-sample gradient. Each per-sample gradient may then be adjusted by the ratio of the clipping bound to its scaling factor. A relative privacy cost between groups may be determined as excess training risk, based on the difference in a group's gradient direction relative to the unadjusted batch gradient versus the batch gradient adjusted according to the privacy-aware training.
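
    As a concrete illustration, below is a minimal NumPy sketch of the per-sample gradient adjustment and the group direction comparison. The bound values, the group slice, and the Gaussian-noise step layered on top are illustrative assumptions in the style of standard DP-SGD, not the patent's exact procedure.

        import numpy as np

        def adjust_per_sample_gradients(grads, reference_bound, clipping_bound):
            # Scaling factor: the larger of the reference bound and each
            # per-sample gradient's magnitude; each gradient is then rescaled
            # by the ratio of the clipping bound to its scaling factor.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            scale = np.maximum(reference_bound, norms)
            return grads * (clipping_bound / scale)

        def group_direction_shift(group_grads, raw_batch_grad, adj_batch_grad):
            # Difference in a group's gradient alignment with the unadjusted
            # versus the adjusted batch gradient -- one ingredient of the
            # relative-privacy-cost comparison described above.
            g = group_grads.mean(axis=0)
            cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            return cos(g, raw_batch_grad) - cos(g, adj_batch_grad)

        rng = np.random.default_rng(0)
        grads = rng.normal(size=(32, 10))   # per-sample gradients for a batch
        adjusted = adjust_per_sample_gradients(grads, reference_bound=0.5,
                                               clipping_bound=1.0)
        # Gaussian noise calibrated to the clipping bound (DP-SGD style).
        noisy = adjusted.mean(axis=0) + rng.normal(0.0, 1.1, size=10) / len(grads)
        print(group_direction_shift(grads[:8], grads.mean(axis=0), noisy))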

    SHARED MODEL TRAINING WITH PRIVACY PROTECTIONS

    Publication number: US20230153461A1

    Publication date: 2023-05-18

    Application number: US17987761

    Application date: 2022-11-15

    CPC classification number: G06F21/6245

    Abstract: A model training system protects against leakage of private data in a federated learning environment by training a private model in conjunction with a proxy model. The proxy model is trained with protections for the private data and may be shared with other participants. Proxy models from other participants are used to train the private model, enabling the private model to benefit from parameters based on other participants’ private data without privacy leakage. The proxy model may be trained with a differentially private algorithm that quantifies a privacy cost for the proxy model, enabling a participant to measure the potential exposure of its private data and drop out of training accordingly. Each iteration may include training the proxy and private models and then mixing the proxy models with other participants. The mixing may include updating and applying a bias to account for the weights of other participants in the received proxy models.
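
    A minimal sketch of one such federated round appears below. The averaging-based “mixing”, the pull-toward-peers private update, and all names are illustrative assumptions, since the abstract leaves these mechanisms general; the per-participant bias correction is omitted for brevity.

        import numpy as np

        class Participant:
            def __init__(self, rng, dim=4):
                self.private = rng.normal(size=dim)  # private model; never shared
                self.proxy = np.copy(self.private)   # proxy; shared after DP training

            def train_proxy_dp(self, rng, clip=1.0, noise_mult=1.0):
                # Clipped, noised update (DP-SGD style); the noise scale is what
                # a privacy accountant would translate into a quantified privacy
                # cost, letting a participant decide whether to drop out.
                grad = self.private - self.proxy              # toy "gradient"
                grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
                noise = rng.normal(0.0, noise_mult * clip, size=grad.shape)
                self.proxy += 0.5 * (grad + noise)

            def absorb_peer_proxies(self, peer_proxies, lr=0.1):
                # Train the private model using peers' proxies, benefiting from
                # their private data without ever observing it directly.
                self.private += lr * (np.mean(peer_proxies, axis=0) - self.private)

        rng = np.random.default_rng(1)
        peers = [Participant(rng) for _ in range(3)]
        for _ in range(5):                       # federated iterations
            for p in peers:
                p.train_proxy_dp(rng)            # local DP training of each proxy
            shared = [p.proxy for p in peers]    # mixing step: proxies exchanged
            for p in peers:
                p.absorb_peer_proxies([q for q in shared if q is not p.proxy])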
