Interpretability Framework for Differentially Private Deep Learning

    Publication No.: US20220138348A1

    Publication Date: 2022-05-05

    Application No.: US17086244

    Filing Date: 2020-10-30

    Applicant: SAP SE

    Abstract: Data is received that specifies a bound for an adversarial posterior belief ρc, which corresponds to the likelihood of re-identifying data points from a dataset based on a differentially private function output. Privacy parameters ε, δ, which govern a differential privacy (DP) algorithm to be applied to a function evaluated over the dataset, are then calculated based on the received data. The calculation is based on a ratio of probability distributions of different observations, which is bounded by the posterior belief ρc as applied to the dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset. Related apparatus, systems, techniques, and articles are also described.
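    The sketch below is not the patented implementation; it only illustrates the kind of conversion the abstract describes, under the simplifying assumptions of a pure ε-DP mechanism and an adversary with a uniform prior over two neighboring datasets (so the posterior belief after one observation is at most e^ε / (1 + e^ε)). The function name and example values are illustrative only.

```python
import math


def epsilon_from_posterior_belief(rho_c: float) -> float:
    """Convert a posterior-belief bound rho_c into an epsilon value.

    Assumption (not taken from the patent text): uniform prior over two
    neighboring datasets and a pure epsilon-DP mechanism, so the adversary's
    posterior belief is bounded by e^eps / (1 + e^eps). Inverting that bound
    gives eps = ln(rho_c / (1 - rho_c)).
    """
    if not 0.5 < rho_c < 1.0:
        raise ValueError("rho_c must lie strictly between 0.5 and 1.0")
    return math.log(rho_c / (1.0 - rho_c))


if __name__ == "__main__":
    # Example: cap the adversary's re-identification belief at 75%.
    rho_c = 0.75
    eps = epsilon_from_posterior_belief(rho_c)
    print(f"rho_c = {rho_c} -> epsilon = {eps:.4f}")  # ln(3) ~= 1.0986
```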

    Interpretability framework for differentially private deep learning

    Publication No.: US12147577B2

    Publication Date: 2024-11-19

    Application No.: US18581254

    Filing Date: 2024-02-19

    Applicant: SAP SE

    Abstract: Data is received that specifies a bound for an adversarial posterior belief ρc, which corresponds to the likelihood of re-identifying data points from a dataset based on a differentially private function output. Privacy parameters ε, δ, which govern a differential privacy (DP) algorithm to be applied to a function evaluated over the dataset, are then calculated based on the received data. The calculation is based on a ratio of probability distributions of different observations, which is bounded by the posterior belief ρc as applied to the dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset. Related apparatus, systems, techniques, and articles are also described.

    INTERPRETABILITY FRAMEWORK FOR DIFFERENTIALLY PRIVATE DEEP LEARNING

    Publication No.: US20250036811A1

    Publication Date: 2025-01-30

    Application No.: US18904462

    Filing Date: 2024-10-02

    Applicant: SAP SE

    Abstract: Data is received that specifies a bound for an adversarial posterior belief ρc, which corresponds to the likelihood of re-identifying data points from a dataset based on a differentially private function output. Privacy parameters ε, δ, which govern a differential privacy (DP) algorithm to be applied to a function evaluated over the dataset, are then calculated based on the received data. The calculation is based on a ratio of probability distributions of different observations, which is bounded by the posterior belief ρc as applied to the dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset. Related apparatus, systems, techniques, and articles are also described.
