-
Publication number: US11049006B2
Publication date: 2021-06-29
Application number: US15510356
Filing date: 2014-09-12
Applicant: Microsoft Technology Licensing, LLC
Inventor: John Langford , Gang Li , Frank Torsten Bernd Seide , James Droppo , Dong Yu
Abstract: Techniques and constructs can reduce the time required to determine solutions to optimization problems such as training of neural networks. Modifications to a computational model can be determined by a plurality of nodes operating in parallel. Quantized modification values can be transmitted between the nodes to reduce the volume of data to be transferred. The quantized values can be as small as one bit each. Quantization-error values can be stored and used in quantizing subsequent modifications. The nodes can operate in parallel and overlap computation and data transfer to further reduce the time required to determine solutions. The quantized values can be partitioned and each node can aggregate values for a corresponding partition.
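The abstract above describes quantizing model modifications down to one bit each, while storing the quantization error and folding it into the next quantization step. A minimal NumPy sketch of that error-feedback idea follows; the function name, the sign-based encoding, and the mean-magnitude reconstruction values are illustrative choices, not details taken from the patent:

```python
import numpy as np

def one_bit_quantize(gradient, error):
    """Quantize a modification vector to one bit per value, carrying the
    quantization error forward so it is compensated in later steps."""
    compensated = gradient + error            # fold in residual from the last step
    sign = np.where(compensated >= 0, 1.0, -1.0)   # the one-bit payload
    # Reconstruction values: mean magnitude of each sign bucket (one common choice).
    pos = compensated[compensated >= 0]
    neg = compensated[compensated < 0]
    pos_val = pos.mean() if pos.size else 0.0
    neg_val = neg.mean() if neg.size else 0.0
    quantized = np.where(sign > 0, pos_val, neg_val)
    new_error = compensated - quantized       # stored and reused next iteration
    return sign, quantized, new_error
```

Only `sign` (one bit per value) plus the two reconstruction scalars need to be transmitted between nodes; `new_error` stays local and is passed back in as `error` on the next call.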
-
Publication number: US10375163B2
Publication date: 2019-08-06
Application number: US15011176
Filing date: 2016-01-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: Gang Li , Larry Jin , Erin Honeycutt , Mark Rubinstein , Jesus Barcons Palau
Abstract: Systems, methods, and computer-readable media for providing cross device messaging and enhanced synchronization of messages. In some configurations, an endpoint computing device can receive input of a mobile operator message. The endpoint computing device can process the message, and send a signal with the message to a relay computing device for delivery to a recipient computing device. In some configurations, the signal with the message may be sent to the relay device via a reference user profile in a distributed service platform (e.g., the Cloud). The endpoint computing device can send the signal with the message to a single relay computing device, or to multiple relay computing devices, for delivery. In some configurations, the endpoint computing device can send the signal with the message to a first relay device, which can then send the signal with the message to a second relay device for delivery to a recipient device.
-
Publication number: US20160026914A1
Publication date: 2016-01-28
Application number: US14873166
Filing date: 2015-10-01
Applicant: Microsoft Technology Licensing, LLC
Inventor: Dong Yu , Li Deng , Frank Torsten Bernd Seide , Gang Li
Abstract: Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively.
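The abstract above lays out a greedy layer-by-layer procedure: train a one-hidden-layer network discriminatively with back-propagation, discard its output layer, stack a new randomly initialized hidden layer plus a fresh output layer, and retrain, repeating until the target depth is reached. A structural sketch of that loop is below; `train_with_bp` is a hypothetical placeholder for the discriminative BP step, and all dimensions and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_with_bp(layers, data, labels):
    # Hypothetical placeholder: a real implementation would update the
    # weight matrices in `layers` by discriminative error back-propagation.
    return layers

def discriminative_pretrain(input_dim, hidden_dim, output_dim, n_hidden,
                            data=None, labels=None):
    """Greedy discriminative pretraining: grow the stack of hidden layers
    one at a time, discarding the output layer after each training pass."""
    hidden = [rng.standard_normal((input_dim, hidden_dim)) * 0.01]
    for _ in range(n_hidden):
        out = rng.standard_normal((hidden_dim, output_dim)) * 0.01
        model = train_with_bp(hidden + [out], data, labels)
        hidden = model[:-1]                  # discard the output layer
        if len(hidden) < n_hidden:           # stack a new random hidden layer
            hidden.append(rng.standard_normal((hidden_dim, hidden_dim)) * 0.01)
    return hidden  # pretrained hidden layers, ready for fine-tuning
```

The returned hidden stack would then be topped with a final output layer and fine-tuned end to end, which is where the "high gradient" property claimed in the abstract pays off.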
-
Publication number: US10325200B2
Publication date: 2019-06-18
Application number: US14873166
Filing date: 2015-10-01
Applicant: Microsoft Technology Licensing, LLC
Inventor: Dong Yu , Li Deng , Frank Torsten Bernd Seide , Gang Li
Abstract: Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively.
-
Publication number: US20170222968A1
Publication date: 2017-08-03
Application number: US15011176
Filing date: 2016-01-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: Gang Li , Larry Jin , Erin Honeycutt , Mark Rubinstein , Jesus Barcons Palau
CPC classification number: H04L67/1095 , H04L51/043 , H04L51/10 , H04L51/14 , H04L51/34 , H04L67/24 , H04L67/306 , H04W4/12
Abstract: Systems, methods, and computer-readable media for providing cross device messaging and enhanced synchronization of messages. In some configurations, an endpoint computing device can receive input of a mobile operator message. The endpoint computing device can process the message, and send a signal with the message to a relay computing device for delivery to a recipient computing device. In some configurations, the signal with the message may be sent to the relay device via a reference user profile in a distributed service platform (e.g., the Cloud). The endpoint computing device can send the signal with the message to a single relay computing device, or to multiple relay computing devices, for delivery. In some configurations, the endpoint computing device can send the signal with the message to a first relay device, which can then send the signal with the message to a second relay device for delivery to a recipient device.
-
Publication number: US11522954B2
Publication date: 2022-12-06
Application number: US16532348
Filing date: 2019-08-05
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Gang Li , Larry Jin , Erin Honeycutt , Mark Rubinstein , Jesus Barcons Palau
IPC: H04L67/1095 , H04L67/306 , H04W4/12 , H04L51/043 , H04L51/214 , H04L67/54 , H04L51/10 , H04L51/234
Abstract: Systems, methods, and computer-readable media for providing cross device messaging and enhanced synchronization of messages. In some configurations, a distributed service platform may store a user profile. The user profile may include device information (i) indicating active devices associated with the user profile and capabilities of the active devices and (ii) one or more potential relay devices for relaying messages to the active devices. The user profile may be provided to a first device among the active devices referenced in the device information. A signal may be received from the first device that includes a message designating one or more recipient devices from the active devices. One or more relay devices may be selected from the one or more potential relay devices in response to receiving the signal, and the signal may be sent to the one or more relay devices.
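The abstract above describes a relay-selection flow: the platform stores a user profile listing active devices and potential relay devices, receives a signal designating recipient devices, and picks one or more relays to carry the message. A small sketch of one way such a selection could work is below; the profile field names (`potential_relays`, `reaches`, `id`) are illustrative assumptions, not fields defined by the patent:

```python
def select_relays(user_profile, recipients):
    """Greedily pick relay devices until every designated recipient
    device is covered by at least one selected relay."""
    remaining = list(recipients)
    selected = []
    for relay in user_profile["potential_relays"]:
        reachable = set(relay["reaches"]) & set(remaining)
        if reachable:
            selected.append(relay["id"])
            remaining = [r for r in remaining if r not in reachable]
        if not remaining:
            break
    return selected
```

The signal carrying the message would then be sent to each relay in `selected`, which forwards it on to the recipient devices it can reach.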
-