-
Publication No.: US11138778B2
Publication Date: 2021-10-05
Application No.: US16619111
Filing Date: 2018-06-28
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Dinesh Mysore Siddu, Krishnamoorthy Palanisamy, Sudipta Chaudhury, Anshul Jain, Pravin Pawar, Nagaraju Bussa
Abstract: There is provided a computer-implemented method (200) for obscuring one or more facial features of a subject in an image. A head of the subject is detected in the image (202) and a location of one or more facial features of the subject is identified in the image (204). A region of the image to modify is determined based on the location of the one or more facial features (206). The determined region comprises a part of the head on which the one or more facial features are located. The image within the determined region is modified to obscure the one or more facial features (208).
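The determine-and-modify steps of the method could be sketched as follows. This is a minimal illustration, assuming the image is a NumPy array and the located facial features are given as a simple `(x, y, w, h)` bounding box; all names are hypothetical and not from the patent:

```python
import numpy as np

def obscure_region(image, feature_box, margin=8, block=4):
    """Pixelate the part of the head on which the facial features sit.

    feature_box is a hypothetical (x, y, w, h) box around the detected
    facial features; the region to modify extends it by `margin` pixels.
    """
    x, y, w, h = feature_box
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1 = min(image.shape[1], x + w + margin)
    y1 = min(image.shape[0], y + h + margin)
    out = image.copy()
    # Replace each small block inside the region with its mean value,
    # which obscures the facial features while keeping coarse structure.
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            ey, ex = min(by + block, y1), min(bx + block, x1)
            out[by:ey, bx:ex] = image[by:ey, bx:ex].mean(axis=(0, 1))
    return out
```

Pixels outside the determined region are untouched, matching the claim that only the region containing the facial features is modified.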
-
Publication No.: US20200074101A1
Publication Date: 2020-03-05
Application No.: US16549712
Filing Date: 2019-08-23
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Eric Thomas Carlson, Mohammad Shahed Sorower, Sreramkumar Sitaraman Viswanathan, Manakkaparambil Sivanandan Sreekanth, Anshul Jain, Sunil Ranjan Khuntia, Ze He
IPC: G06F21/62
Abstract: The present disclosure is directed to centralized de-identification of protected data associated with subjects in multiple modalities based on a hierarchical taxonomy of policies and handlers. In various embodiments, data set(s) associated with subject(s) may be received. Each of the data set(s) may contain data points associated with a respective subject. The data points associated with the respective subject may include multiple data types, at least some of which are usable to identify the respective subject. For each respective subject: a classification of each of the data points may be determined in accordance with a hierarchical taxonomy; based on the classifications, respective handlers for the data points may be identified; and each data point of the plurality of data points may be processed using a respective identified handler, thereby de-identifying the plurality of data points associated with the respective subject.
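The classify-then-dispatch flow described above might look like the following sketch. The taxonomy entries, handler names, and classification rules are illustrative assumptions, not the patent's actual policy set:

```python
# Sketch of hierarchy-driven de-identification: classify each data
# point, then dispatch it to the handler its class maps to.
# Taxonomy entries, handlers, and rules are illustrative only.

TAXONOMY = {
    "identifier.name": "redact",   # direct identifiers are removed
    "identifier.date": "shift",    # dates are shifted by a fixed offset
    "measurement": "keep",         # clinical measurements pass through
}

HANDLERS = {
    "redact": lambda value: "[REDACTED]",
    "shift": lambda value: value + 30,
    "keep": lambda value: value,
}

def classify(field_name):
    """Hypothetical rule-based classifier over field names."""
    if field_name in ("name", "mrn"):
        return "identifier.name"
    if field_name in ("dob", "admit_day"):
        return "identifier.date"
    return "measurement"

def deidentify(record):
    """Process every data point with the handler its class selects."""
    return {field: HANDLERS[TAXONOMY[classify(field)]](value)
            for field, value in record.items()}
```

Keeping the taxonomy and handler registry as data rather than code is one way a centralized service could apply the same policies across modalities.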
-
Publication No.: US20230394320A1
Publication Date: 2023-12-07
Application No.: US18032838
Filing Date: 2021-10-14
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Shreya Anand, Anshul Jain, Shiva Moorthy Pookala Vittal, Aleksandr Bukharev, Richard Vdovjak
IPC: G06N3/098
CPC classification number: G06N3/098
Abstract: Some embodiments are directed to a federated learning system. A federated model is trained on respective local training datasets of respective multiple edge devices. In an iteration, an edge device obtains a current federated model, determines a model update for the current federated model based on the local training dataset, and sends out the model update. The edge device determines the model update by applying the current federated model to a training input to obtain at least a model output for the training input; if the model output does not match a training output corresponding to the training input, including the training input in a subset of filtered training inputs to be used in the iteration; and training the current federated model on only that subset of filtered training inputs.
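The edge-device step could be sketched for a toy linear classifier as follows; the perceptron-style update rule and all names are illustrative assumptions, not the patent's method:

```python
import numpy as np

def edge_model_update(weights, inputs, targets, lr=0.1):
    """One federated iteration on an edge device (sketch).

    Applies the current federated model to the local training inputs,
    keeps only those it gets wrong (the filtered subset), trains on
    that subset, and returns the resulting weight delta as the model
    update, along with the subset size.
    """
    preds = (inputs @ weights > 0).astype(int)
    wrong = preds != targets           # filter: mismatched outputs only
    delta = np.zeros_like(weights)
    for x, y in zip(inputs[wrong], targets[wrong]):
        # Perceptron-style correction on the filtered subset.
        delta += lr * (y - (x @ (weights + delta) > 0)) * x
    return delta, int(wrong.sum())
```

Training on only the misclassified subset shrinks the per-iteration workload on the edge device, which is the point of the filtering.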
-
Publication No.: US20210240853A1
Publication Date: 2021-08-05
Application No.: US17267523
Filing Date: 2019-08-23
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Eric Thomas Carlson, Mohammad Shahed Sorower, Sreramkumar Sitaraman Viswanathan, Sreekanth Manakkaparambil Sivanandan, Anshul Jain, Sunil Ranjan Khuntia, Ze He
Abstract: The present disclosure is directed to methods and apparatus for centralized de-identification of protected data associated with subjects. In various embodiments, de-identified data may be received (1102) that includes de-identified data set(s) associated with subject(s) that is generated from raw data set(s) associated with the subjects. Each of the raw data set(s) may include identifying feature(s) that are usable to identify the respective subject. At least some of the identifying feature(s) may be absent from or obfuscated in the de-identified data. Labels associated with each of the de-identified data sets may be determined (1104). At least some of the de-identified data sets may be applied (1108) as input across a trained machine learning model to generate respective outputs, which may be compared (1110) to the labels to determine a measure of vulnerability of the de-identified data to re-identification.
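The compare-outputs-to-labels measure might be sketched as below, treating the trained model as an opaque callable. The function name and the toy attack model in the test are assumptions for illustration:

```python
def reidentification_vulnerability(model, deidentified_sets, labels):
    """Fraction of de-identified records a trained model still maps
    back to the correct subject label (a sketch of the vulnerability
    measure): higher means the de-identification is weaker.
    """
    hits = sum(1 for record, label in zip(deidentified_sets, labels)
               if model(record) == label)
    return hits / len(labels)
```

A score near 1.0 would indicate that the supposedly de-identified data still carries enough signal for the model to re-identify subjects.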
-