-
Publication number: US10339311B2
Publication date: 2019-07-02
Application number: US15435961
Filing date: 2017-02-17
Applicant: SAP SE
Inventor: Martin Haerterich , Martin Johns
Abstract: Various examples are directed to detecting anomalous modifications to a software component. For example, a computing device may receive, from a version control system, version metadata describing properties of a plurality of commits for the software component. The computing device may generate a plurality of commit clusters based, at least in part, on the properties of the plurality of commits. The computing device may determine a first anomalous commit of the plurality of commits and generate an alert message indicating a first code segment modified by the first anomalous commit.
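Illustrative sketch only (not the patented method): one way to cluster commit metadata and flag commits that fall outside every cluster is density-based clustering, where noise points are treated as anomalous. The feature set, thresholds, and example commits below are assumptions for illustration.

```python
# Minimal sketch: cluster commit metadata with DBSCAN and flag noise points
# (label -1) as anomalous commits. Features and parameters are illustrative.
from dataclasses import dataclass
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN


@dataclass
class Commit:
    commit_id: str
    lines_added: int
    lines_removed: int
    files_touched: int
    hour_of_day: int  # hour the commit was authored


def find_anomalous_commits(commits: list[Commit]) -> list[Commit]:
    features = [[c.lines_added, c.lines_removed, c.files_touched, c.hour_of_day]
                for c in commits]
    scaled = StandardScaler().fit_transform(features)
    # DBSCAN assigns -1 to points that belong to no dense cluster.
    labels = DBSCAN(eps=0.9, min_samples=3).fit_predict(scaled)
    return [c for c, label in zip(commits, labels) if label == -1]


# Usage: a large night-time change stands out from routine daytime commits.
commits = [
    Commit("c1", 10, 2, 1, 10), Commit("c2", 12, 3, 1, 11),
    Commit("c3", 8, 1, 2, 9),   Commit("c4", 11, 2, 1, 10),
    Commit("c5", 9, 4, 2, 11),  Commit("c6", 10, 2, 1, 10),
    Commit("c7", 900, 400, 35, 3),
]
for c in find_anomalous_commits(commits):
    print(f"ALERT: anomalous commit {c.commit_id}")
```

In a real system the alert would also name the code segment modified by the flagged commit, taken from the commit's diff.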
-
Publication number: US12147577B2
Publication date: 2024-11-19
Application number: US18581254
Filing date: 2024-02-19
Applicant: SAP SE
Inventor: Daniel Bernau , Philip-William Grassal , Hannah Keller , Martin Haerterich
IPC: G06F21/62 , G06F17/18 , G06F18/214 , G06N20/00
Abstract: Data is received that specifies a bound for an adversarial posterior belief ρc, which corresponds to the likelihood of re-identifying data points from the dataset based on a differentially private function output. Privacy parameters ε, δ, which govern a differential privacy (DP) algorithm to be applied to a function evaluated over a dataset, are then calculated based on the received data. The calculation is based on a ratio of probability distributions of different observations, which is bounded by the posterior belief ρc as applied to the dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset. Related apparatus, systems, techniques and articles are also described.
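Illustrative sketch only (not the patented derivation): one well-known relationship between a posterior-belief bound and a privacy parameter holds under pure ε-DP (δ = 0) with a uniform prior over two neighbouring datasets, where the posterior is bounded by e^ε / (1 + e^ε), so ε = ln(ρc / (1 - ρc)). The snippet below uses that relationship and a standard Laplace mechanism; the specific values are placeholders.

```python
# Minimal sketch: derive epsilon from a posterior-belief bound rho_c under
# pure DP with a uniform prior over two neighbouring datasets, then release
# a noisy value with the Laplace mechanism.
import math
import random


def epsilon_from_posterior_belief(rho_c: float) -> float:
    # rho <= e^eps / (1 + e^eps)  =>  eps = ln(rho / (1 - rho))
    if not 0.5 < rho_c < 1.0:
        raise ValueError("rho_c must lie in (0.5, 1) for a non-trivial bound")
    return math.log(rho_c / (1.0 - rho_c))


def laplace_mechanism(true_value: float, sensitivity: float, eps: float) -> float:
    # Laplace(0, b) noise, b = sensitivity / eps, drawn as an exponential
    # variate with a random sign.
    scale = sensitivity / eps
    return true_value + random.expovariate(1.0 / scale) * random.choice([-1, 1])


# Usage: bound the adversary's posterior belief at 0.9 and release a noisy count.
eps = epsilon_from_posterior_belief(0.9)          # ~2.197
noisy_count = laplace_mechanism(1234.0, 1.0, eps)
print(eps, noisy_count)
```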
-
Publication number: US20240184892A1
Publication date: 2024-06-06
Application number: US18079665
Filing date: 2022-12-12
Applicant: SAP SE
Inventor: Tom Ganz , Martin Haerterich , Philipp Rall
CPC classification number: G06F21/577 , G06F11/3664 , G06F21/563 , G06F2221/033
Abstract: Applications may contain vulnerabilities to attack via malicious inputs. Machine-learning models may be trained to detect these vulnerabilities by accepting source code as input and outputting a probability that each of a set of vulnerabilities exists in the source code. Explanation methods may identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified. The results of the directed fuzzing are used to select between explanations generated by multiple explanation methods, to provide additional training data to a machine-learning model, to provide additional training data to an explanation method, or any suitable combination thereof.
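Illustrative sketch only (not the patented procedure): one way to use directed-fuzzing results to select between explanations is to prefer the explanation method whose highlighted source lines overlap most with the lines the fuzzer actually drove to failure. The fuzzing harness, inputs, and explanation line sets below are hypothetical.

```python
# Minimal sketch: collect failing line numbers from a fuzzing run, then pick
# the explanation whose highlighted lines best match the observed failures.
import traceback
from typing import Callable, Iterable, Set


def run_directed_fuzzer(target: Callable[[bytes], None],
                        seed_inputs: Iterable[bytes]) -> Set[int]:
    """Run the target on each input; collect line numbers where it failed."""
    crashing_lines: Set[int] = set()
    for data in seed_inputs:
        try:
            target(data)
        except Exception as exc:
            frames = traceback.extract_tb(exc.__traceback__)
            crashing_lines.add(frames[-1].lineno)
    return crashing_lines


def select_explanation(explanations: dict[str, Set[int]],
                       crashing_lines: Set[int]) -> str:
    # Prefer the method whose highlighted lines overlap most with the
    # lines the fuzzer actually drove to failure.
    return max(explanations, key=lambda m: len(explanations[m] & crashing_lines))


# Usage: explanation line sets would come from the explainers and crash lines
# from the fuzzer; here explainer_A agrees with the fuzzer and is selected.
candidates = {"explainer_A": {17, 18}, "explainer_B": {42}}
print(select_explanation(candidates, crashing_lines={18, 19}))
```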
-
Publication number: US20240184891A1
Publication date: 2024-06-06
Application number: US18079611
Filing date: 2022-12-12
Applicant: SAP SE
Inventor: Tom Ganz , Martin Haerterich , Philipp Rall
IPC: G06F21/57
CPC classification number: G06F21/577 , G06F2221/033
Abstract: Applications may contain vulnerabilities to attack via malicious inputs. Machine-learning models may be trained to detect these vulnerabilities by accepting source code as input and outputting a probability that each of a set of vulnerabilities exists in the source code. Explanation methods may identify one or more locations within the source code that are likely to cause the vulnerability. Directed fuzzing provides a range of inputs to source code. The inputs that cause the source code to fail are detected and the portions of the source code that were vulnerable are identified. The results of the directed fuzzing are used to select between explanations generated by multiple explanation methods, to provide additional training data to a machine-learning model, to provide additional training data to an explanation method, or any suitable combination thereof.
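Illustrative sketch only, complementing the fuzzing sketch above: the machine-learning side of the abstract can be pictured as a multi-label classifier that takes source code as input and outputs one probability per vulnerability class. The label set, training snippets, and model choice below are assumptions, not the patented model.

```python
# Minimal sketch: bag-of-tokens multi-label classifier mapping source code to
# a probability per vulnerability class. Data and labels are tiny placeholders.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

VULN_CLASSES = ["sql_injection", "buffer_overflow"]   # assumed label set

train_code = [
    'cursor.execute("SELECT * FROM t WHERE id=" + user_input)',
    "strcpy(dst, src);",
]
train_labels = np.array([[1, 0],     # one indicator column per class
                         [0, 1]])

model = make_pipeline(
    CountVectorizer(token_pattern=r"\w+"),
    OneVsRestClassifier(LogisticRegression()),
)
model.fit(train_code, train_labels)

# predict_proba yields one probability per vulnerability class per sample.
probs = model.predict_proba(["memcpy(buf, data, n);"])[0]
for name, p in zip(VULN_CLASSES, probs):
    print(f"{name}: {p:.2f}")
```

In the described pipeline, an explanation method would then point to the code locations driving these probabilities, and directed fuzzing would check whether those locations actually fail.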
-
Publication number: US11366982B2
Publication date: 2022-06-21
Application number: US16140022
Filing date: 2018-09-24
Applicant: SAP SE
Inventor: Martin Haerterich , Benjamin Hilprecht , Daniel Bernau
Abstract: Various examples are directed to systems and methods for detecting training data for a generative model. A computer system may access generative model sample data comprising a plurality of generative model samples, together with a first test sample. The computer system may determine whether a first generative model sample and a second generative model sample of the plurality of generative model samples are each within a threshold distance of the first test sample. Based at least in part on those determinations, the computer system may determine that the probability that the generative model was trained with the first test sample is greater than or equal to a threshold probability.
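Illustrative sketch only (not the exact patented procedure): a membership-inference check of this kind can be pictured as counting how many generated samples fall within a distance threshold of the test sample and comparing that fraction to a probability threshold. The distance metric, thresholds, and toy data below are assumptions.

```python
# Minimal sketch: decide whether a test point was likely in the generative
# model's training set from the fraction of generated samples near it.
import numpy as np


def likely_training_member(generated: np.ndarray,
                           test_sample: np.ndarray,
                           distance_threshold: float,
                           probability_threshold: float) -> bool:
    # Fraction of generated samples within the distance threshold.
    dists = np.linalg.norm(generated - test_sample, axis=1)
    closeness = float(np.mean(dists <= distance_threshold))
    return closeness >= probability_threshold


# Usage with toy data: 1000 generated points in 2-D.
rng = np.random.default_rng(0)
generated = rng.normal(size=(1000, 2))
print(likely_training_member(generated, np.array([0.1, -0.2]),
                             distance_threshold=0.25,
                             probability_threshold=0.02))
```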
-
Publication number: US20180196939A1
Publication date: 2018-07-12
Application number: US15403603
Filing date: 2017-01-11
Applicant: SAP SE
Inventor: Christoph Haefner , Martin Johns , Martin Haerterich
IPC: G06F21/53
CPC classification number: G06F21/53 , G06F2221/033
Abstract: Systems and methods are provided herein for establishing a protection framework for a component. Identified assets of a component requiring protection from a potential attack are received. A list of assets is generated based on the identified assets. A protection framework is configured to include at least one defensive pattern to protect the list of assets against the potential attack. The protection framework is executed to establish a hardened boundary between the component and an attack surface of the component.
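Illustrative sketch only (not the patented framework): the flow in the abstract can be pictured as building a protection layer from defensive patterns for a list of identified assets and placing it between callers and the component. The pattern (a toy input-validation wrapper) and asset names below are assumptions.

```python
# Minimal sketch: wrap a component behind defensive patterns chosen for a
# list of identified assets, forming a hardened boundary around the component.
from typing import Callable, Iterable


def input_validation(call: Callable[[str], str]) -> Callable[[str], str]:
    def guarded(payload: str) -> str:
        if any(ch in payload for ch in "<>;"):   # toy check, not a real filter
            raise ValueError("rejected potentially malicious input")
        return call(payload)
    return guarded


def build_protection_framework(assets: Iterable[str],
                               component: Callable[[str], str]) -> Callable[[str], str]:
    # A real framework would pick one defensive pattern per asset; this
    # sketch always applies input validation as the hardened boundary.
    protected = component
    for _asset in assets:
        protected = input_validation(protected)
    return protected


# Usage: inputs are checked at the boundary before reaching the component.
component = lambda payload: f"processed {payload}"
hardened = build_protection_framework(["user database", "session tokens"], component)
print(hardened("hello"))        # passes the boundary
# hardened("<script>") would raise ValueError at the boundary
```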