-
Publication Number: US11921911B2
Publication Date: 2024-03-05
Application Number: US17374942
Filing Date: 2021-07-13
Applicant: Microsoft Technology Licensing, LLC
Inventor: Stavros Volos , David Thomas Chisnall , Saurabh Mohan Kulkarni , Kapil Vaswani , Manuel Costa , Samuel Alexander Webster , Cédric Alain Marie Fournet , Richard Osborne , Daniel John Pelham Wilkinson , Graham Bernard Cunningham
CPC classification number: G06F21/85 , G06F21/602 , H04L9/30 , H04L9/3265
Abstract: A peripheral device, for use with a host, comprises one or more compute elements, a security module, and at least one encryption unit. The security module is configured to form a trusted execution environment on the peripheral device for processing sensitive data using sensitive code. The sensitive data and sensitive code are provided by a trusted computing entity which is in communication with the host computing device. The at least one encryption unit is configured to encrypt and decrypt data transferred between the trusted execution environment and the trusted computing entity via the host computing device. The security module is configured to compute and send an attestation to the trusted computing entity to attest that the sensitive code is in the trusted execution environment.
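The attestation flow described above can be illustrated with a minimal sketch. This is not the patented mechanism: real devices use asymmetric keys provisioned at manufacture and a certificate chain (note the CPC class H04L9/3265), whereas this toy uses a symmetric key, and all names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared attestation key; a real security module would hold a
# device-unique asymmetric key endorsed by a certificate chain.
ATTESTATION_KEY = b"device-secret"

def compute_attestation(sensitive_code: bytes) -> bytes:
    """Security module side: measure the code loaded into the TEE and sign it."""
    measurement = hashlib.sha256(sensitive_code).digest()
    return hmac.new(ATTESTATION_KEY, measurement, hashlib.sha256).digest()

def verify_attestation(expected_code: bytes, attestation: bytes) -> bool:
    """Trusted-computing-entity side: check the TEE runs exactly the expected code."""
    expected = hmac.new(
        ATTESTATION_KEY, hashlib.sha256(expected_code).digest(), hashlib.sha256
    ).digest()
    return hmac.compare_digest(expected, attestation)

code = b"sensitive model-serving code"
att = compute_attestation(code)
assert verify_attestation(code, att)            # matching code verifies
assert not verify_attestation(b"tampered", att)  # any other code fails
```

The key point the sketch captures is that the verifier learns a measurement of the code, not the code's behaviour: attestation proves which code is in the TEE, and the encryption units then protect the data flowing to it through the untrusted host.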
-
Publication Number: US11126757B2
Publication Date: 2021-09-21
Application Number: US16166047
Filing Date: 2018-10-19
Applicant: Microsoft Technology Licensing, LLC
Inventor: Stavros Volos , David Thomas Chisnall , Saurabh Mohan Kulkarni , Kapil Vaswani , Manuel Costa , Samuel Alexander Webster , Cédric Alain Marie Fournet
Abstract: A peripheral device, for use with a host, comprises one or more compute elements, a security module, and at least one encryption unit. The security module is configured to form a trusted execution environment on the peripheral device for processing sensitive data using sensitive code. The sensitive data and sensitive code are provided by a trusted computing entity which is in communication with the host computing device. The at least one encryption unit is configured to encrypt and decrypt data transferred between the trusted execution environment and the trusted computing entity via the host computing device. The security module is configured to compute and send an attestation to the trusted computing entity to attest that the sensitive code is in the trusted execution environment.
-
Publication Number: US11526613B2
Publication Date: 2022-12-13
Application Number: US16503455
Filing Date: 2019-07-03
Applicant: Microsoft Technology Licensing, LLC
Inventor: David Thomas Chisnall , Cédric Alain Marie Fournet , Manuel Costa , Samuel Alexander Webster , Sylvan Clebsch , Kapil Vaswani
Abstract: A computer system has a separation mechanism which enforces separation between at least two execution environments such that one execution environment is a gatekeeper which interposes on all communications of the other execution environment. The computer system has an attestation mechanism which enables the gatekeeper to attest to properties of the at least two execution environments. A first one of the execution environments runs application specific code which may contain security vulnerabilities. The gatekeeper is configured to enforce an input output policy on the first execution environment by interposing on all communication to and from the first execution environment by forwarding, modifying or dropping individual ones of the communications according to the policy. The gatekeeper provides evidence of attestation both for the application specific code and the policy.
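The gatekeeper's forward/modify/drop behaviour can be sketched as a pure function over messages. This is an illustrative simplification, not the patented system: the policy function and names below are hypothetical, and a real gatekeeper would be a separate attested execution environment interposing on an I/O channel.

```python
from typing import Callable, Optional

# Hypothetical policy type: maps a message to a (possibly modified) message
# to forward, or None to drop it.
Policy = Callable[[str], Optional[str]]

def gatekeeper(messages: list[str], policy: Policy) -> list[str]:
    """Interpose on every message crossing the untrusted environment's boundary."""
    forwarded = []
    for msg in messages:
        result = policy(msg)
        if result is not None:  # None means the policy drops the message
            forwarded.append(result)
    return forwarded

# Example policy: drop anything mentioning "secret", redact a user identifier.
def example_policy(msg: str) -> Optional[str]:
    if "secret" in msg:
        return None
    return msg.replace("user42", "<redacted>")

out = gatekeeper(["hello", "secret token", "hi user42"], example_policy)
assert out == ["hello", "hi <redacted>"]
```

Because the gatekeeper attests to both the application code and the policy, a remote party can trust the policy is enforced even when the application code itself may contain vulnerabilities.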
-
Publication Number: US11288575B2
Publication Date: 2022-03-29
Application Number: US15599058
Filing Date: 2017-05-18
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ryota Tomioka , Matthew Alastair Johnson , Daniel Stefan Tarlow , Samuel Alexander Webster , Dimitrios Vytiniotis , Alexander Lloyd Gaunt , Maik Riechert
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
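The control-node/worker-node arrangement can be sketched as follows. This is a sequential simulation of the asynchronous fan-out, with a single scalar weight standing in for each worker's subgraph parameters; all class names and the learning rule are illustrative assumptions, not the patented apparatus.

```python
import random

class Worker:
    """Holds the parameters of one subgraph (here: a single scalar weight)."""
    def __init__(self, weight: float = 0.0):
        self.weight = weight

    def on_message(self, x: float, target: float, lr: float = 0.1):
        # Local gradient step on this worker's own parameters only --
        # no synchronisation barrier with the other workers.
        grad = (self.weight * x - target) * x
        self.weight -= lr * grad

class ControlNode:
    """Sends training instances into the worker network."""
    def __init__(self, workers):
        self.workers = workers

    def send(self, instances):
        # Messages fan out to workers independently, so each worker's
        # parameter updates happen asynchronously with respect to the rest.
        for x, target in instances:
            random.choice(self.workers).on_message(x, target)

random.seed(0)  # fixed seed so the simulation is repeatable
workers = [Worker() for _ in range(4)]
ControlNode(workers).send([(1.0, 2.0)] * 200)
# Each worker's weight drifts toward the target value 2.0 as messages arrive.
```

The sketch shows the essential property: there is no global parameter server step; each worker converges from the stream of messages it happens to receive.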
-
Publication Number: US12210831B2
Publication Date: 2025-01-28
Application Number: US17493819
Filing Date: 2021-10-04
Applicant: Microsoft Technology Licensing, LLC
Inventor: Elena Pochernina , John Winn , Matteo Venanzi , Ivan Korostelev , Pavel Myshkov , Samuel Alexander Webster , Yordan Kirilov Zaykov , Nikita Voronkov , Dmitriy Meyerzon , Marius Alexandru Bunescu , Alexander Armin Spengler , Vladimir Gvozdev , Thomas P. Minka , Anthony Arnold Wieser , Sanil Rajput , John Guiver
IPC: G06F40/30 , G06F16/901 , G06F16/903
Abstract: In various examples there is a computer-implemented method of database construction. The method comprises storing a knowledge graph comprising nodes connected by edges, each node representing a topic, and accessing a topic type hierarchy comprising a plurality of types of topics, the topic type hierarchy having been computed from a corpus of text documents. One or more text documents are accessed, and the method involves labelling a plurality of the nodes with one or more labels, each label denoting a topic type from the topic type hierarchy, either by using a deep language model, or, for an individual one of the nodes representing a given topic, by searching the accessed text documents for matches to at least one template, the template being a sequence of words containing the given topic and a placeholder for a topic type. The knowledge graph comprising the plurality of labelled nodes is stored.
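The template-matching path can be sketched concretely. The template below ("<topic> is a <type>") and all names are illustrative assumptions, not templates from the patent; the idea is simply that a candidate type is kept only when it appears in the known topic type hierarchy.

```python
import re

def label_topic(topic: str, documents: list[str], topic_types: set[str]) -> set[str]:
    """Label one topic by matching a 'X is a Y' template against documents.

    Illustrative sketch: the template is a word sequence containing the topic
    and a placeholder for the topic type, filled by the regex capture group.
    """
    pattern = re.compile(rf"{re.escape(topic)}\s+is\s+an?\s+(\w+)", re.IGNORECASE)
    labels = set()
    for doc in documents:
        for match in pattern.finditer(doc):
            candidate = match.group(1).lower()
            if candidate in topic_types:  # keep only types from the hierarchy
                labels.add(candidate)
    return labels

docs = [
    "Python is a language used widely.",
    "Python is an animal in some texts.",
]
assert label_topic("Python", docs, {"language", "animal", "tool"}) == {"language", "animal"}
```

A deep language model can label nodes the templates miss, while template matches against real documents give labels that are directly evidenced in text; the method stores the resulting labelled knowledge graph.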
-
Publication Number: US12099927B2
Publication Date: 2024-09-24
Application Number: US17706586
Filing Date: 2022-03-28
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ryota Tomioka , Matthew Alastair Johnson , Daniel Stefan Tarlow , Samuel Alexander Webster , Dimitrios Vytiniotis , Alexander Lloyd Gaunt , Maik Riechert
Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.