-
Publication No.: US12056604B2
Publication Date: 2024-08-06
Application No.: US16024369
Filing Date: 2018-06-29
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Vivek Seshadri , Amar Phanishayee , Deepak Narayanan , Aaron Harlap , Nikhil Devanur Rangarajan
Abstract: Layers of a deep neural network (DNN) are partitioned into stages using a profile of the DNN. Each stage includes one or more layers of the DNN. The partitioning can be optimized in various ways: to minimize training time, to minimize data communication between the worker computing devices used to train the DNN, or to ensure that the worker computing devices perform approximately equal amounts of processing. The stages are assigned to the worker computing devices, which process batches of training data using a scheduling policy that causes the workers to alternate between forward and backward processing of the batches. The stages can be configured for model-parallel or data-parallel processing.
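The profile-guided partitioning described in this abstract can be illustrated with a small dynamic program. The sketch below is hypothetical (the patent supplies no code): it assumes profiled per-layer compute times as input and splits the layers into contiguous stages so that the slowest stage, which bounds pipeline throughput, is as fast as possible.

```python
def partition_layers(layer_times, num_stages):
    """Split profiled per-layer times into contiguous stages,
    minimizing the maximum per-stage time (the pipeline bottleneck)."""
    n = len(layer_times)
    # prefix[i] = total time of layers 0..i-1
    prefix = [0.0]
    for t in layer_times:
        prefix.append(prefix[-1] + t)

    INF = float("inf")
    # cost[s][i] = best achievable bottleneck using s stages
    # for the first i layers; split[s][i] = where stage s starts.
    cost = [[INF] * (n + 1) for _ in range(num_stages + 1)]
    split = [[0] * (n + 1) for _ in range(num_stages + 1)]
    cost[0][0] = 0.0
    for s in range(1, num_stages + 1):
        for i in range(1, n + 1):
            for j in range(s - 1, i):
                cand = max(cost[s - 1][j], prefix[i] - prefix[j])
                if cand < cost[s][i]:
                    cost[s][i] = cand
                    split[s][i] = j

    # Recover the stage boundaries.
    stages, i = [], n
    for s in range(num_stages, 0, -1):
        j = split[s][i]
        stages.append(list(range(j, i)))
        i = j
    stages.reverse()
    return stages, cost[num_stages][n]
```

For example, layers with times [4, 2, 2, 4] split into two stages as [[0, 1], [2, 3]], with a bottleneck stage time of 6.0.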
-
Publication No.: US11868880B2
Publication Date: 2024-01-09
Application No.: US16276250
Filing Date: 2019-02-14
Applicant: Microsoft Technology Licensing, LLC
Inventor: Nikhil Devanur Rangarajan , Jorgen Thelin , Amar Phanishayee , Guanhua Wang , Shivaram Venkataraman
IPC: G06N3/08 , G06F13/42 , G06N3/02 , G06F15/163
CPC classification number: G06N3/08 , G06F13/4221 , G06F15/163 , G06N3/02 , G06F2213/0026 , G06F2213/0062
Abstract: An interconnect topology for communication between GPUs in a computing system is determined. A set of directed spanning trees is generated over the interconnect topology and packed for transmitting data between the GPUs. The directed spanning trees define which connections between GPUs are to be used for the transmission and how much data is to be transmitted on each connection. Program code is generated to implement the data transfer defined by the directed spanning trees. When the program code is executed, the directed spanning trees are used to pipeline the transmission of chunks of data, such as model parameters used during data-parallel DNN training, between the GPUs. The program code can also determine an optimal chunk size for the data to be transferred.
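One piece of the scheme described here is splitting a transfer across the packed spanning trees in proportion to each tree's capacity. The function below is a hypothetical sketch of that load division (the rates, chunk size, and largest-remainder rounding are assumptions, not details from the patent):

```python
def split_among_trees(num_bytes, tree_rates, chunk_size):
    """Assign fixed-size chunks of a buffer to spanning trees in
    proportion to each tree's packed rate, so faster trees carry
    more of the transfer."""
    total_rate = sum(tree_rates)
    num_chunks = -(-num_bytes // chunk_size)  # ceiling division
    exact = [num_chunks * r / total_rate for r in tree_rates]
    assignment = [int(e) for e in exact]
    leftover = num_chunks - sum(assignment)
    # Hand the remaining chunks to the trees with the largest remainders.
    by_remainder = sorted(range(len(exact)),
                          key=lambda i: exact[i] - assignment[i],
                          reverse=True)
    for i in by_remainder[:leftover]:
        assignment[i] += 1
    return assignment
```

With this split, each tree can then pipeline its own chunks hop by hop, which is what lets the transfer overlap across connections.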
-
Publication No.: US10061791B2
Publication Date: 2018-08-28
Application No.: US14340514
Filing Date: 2014-07-24
Applicant: Microsoft Technology Licensing, LLC
Inventor: Amar Phanishayee , Ratul Mahajan , Rayman Preet Singh , Trinabh Gupta , Jaeyeon Jung
IPC: G06F17/30 , G11B27/034 , G11B27/11
CPC classification number: G06F16/2228 , G06F16/24562 , G11B27/034 , G11B27/11
Abstract: Techniques and constructs to facilitate data management can provide improved response time and space efficiency for time-series data such as from connected devices. The constructs may enable receiving a stream of time-series data comprising a plurality of objects and a time identification associated with each of the objects. One or more tags are associated with the objects. The constructs may also chunk the stream into a plurality of contiguous chunks, each including a plurality of objects, create an index associating the time identification and the one or more tags, transmit the chunks to a first, remote storage, and then store the index.
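The chunk-and-index construct described above can be sketched in a few lines. The code below is an illustration only, assuming records of the form (timestamp, tags, payload) and a list standing in for remote chunk storage; the record shape and index fields are my assumptions:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    chunk_id: int
    objects: list

def chunk_and_index(stream, chunk_size):
    """Group a time-ordered stream of (timestamp, tags, payload)
    records into contiguous chunks, and build an index mapping each
    chunk to its time span and the tags it contains, so queries can
    fetch only the relevant chunks from remote storage."""
    chunks, index = [], []
    for start in range(0, len(stream), chunk_size):
        objs = stream[start:start + chunk_size]
        cid = len(chunks)
        chunks.append(Chunk(cid, objs))
        index.append({
            "chunk_id": cid,
            "t_min": objs[0][0],
            "t_max": objs[-1][0],
            "tags": set().union(*(set(o[1]) for o in objs)),
        })
    return chunks, index

def lookup(index, t_lo, t_hi, tag):
    """Return ids of chunks whose time span overlaps [t_lo, t_hi]
    and that contain the given tag."""
    return [e["chunk_id"] for e in index
            if e["t_min"] <= t_hi and e["t_max"] >= t_lo
            and tag in e["tags"]]
```

The point of the design is that the (small) index is kept close at hand while the (large) chunks go to remote storage; a query touches the index first and retrieves only matching chunks.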
-
Publication No.: US12165038B2
Publication Date: 2024-12-10
Application No.: US16276395
Filing Date: 2019-02-14
Applicant: Microsoft Technology Licensing, LLC
Inventor: Daniel Lo , Bita Darvish Rouhani , Eric S. Chung , Yiren Zhao , Amar Phanishayee , Ritchie Zhao
Abstract: Apparatus and methods for training a neural network accelerator using quantized precision data formats are disclosed, and, in particular, for adjusting floating-point formats used to store activation values during training. In certain examples of the disclosed technology, a computing system includes processors, memory, and a floating-point compressor in communication with the memory. The computing system is configured to produce a neural network comprising activation values expressed in a first floating-point format, select a second floating-point format for the neural network based on a performance metric, convert at least one of the activation values to the second floating-point format, and store the compressed activation values in the memory. Aspects of the second floating-point format that can be adjusted include the number of bits used to express mantissas, exponent format, use of non-uniform mantissas, and/or use of outlier values to express some of the mantissas.
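The format-selection idea in this abstract (pick a narrower floating-point format based on a performance metric) can be illustrated with a toy sketch. Everything below is hypothetical: mantissa rounding is simulated per value, and mean relative error stands in for whatever metric a real system would measure:

```python
import math

def quantize_mantissa(values, mantissa_bits):
    """Round each value to the given number of fractional mantissa
    bits, simulated by snapping to a grid scaled by the value's
    binary exponent."""
    out = []
    for v in values:
        if v == 0.0:
            out.append(0.0)
            continue
        e = math.floor(math.log2(abs(v)))
        scale = 2.0 ** (e - mantissa_bits)
        out.append(round(v / scale) * scale)
    return out

def select_format(values, candidate_bits, max_rel_error):
    """Pick the narrowest mantissa width whose mean relative error
    stays within the bound -- a stand-in for selecting a format
    from a measured performance metric."""
    for bits in sorted(candidate_bits):
        q = quantize_mantissa(values, bits)
        err = sum(abs(a - b) / abs(a) for a, b in zip(values, q)
                  if a != 0.0) / len(values)
        if err <= max_rel_error:
            return bits
    return max(candidate_bits)
```

Narrower formats halve activation storage per dropped bit, which is why the selection trades precision against the metric rather than always keeping full precision.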
-
Publication No.: US12045724B2
Publication Date: 2024-07-23
Application No.: US16237202
Filing Date: 2018-12-31
Applicant: Microsoft Technology Licensing, LLC
Inventor: Daniel Lo , Amar Phanishayee , Eric S. Chung , Yiren Zhao , Ritchie Zhao
CPC classification number: G06N3/084 , G06F7/49915 , G06F9/30025 , G06F9/5027 , G06N5/046 , G06N20/00
Abstract: Apparatus and methods for training a neural network accelerator using quantized-precision data formats having outlier values are disclosed, and in particular for storing activation values from a neural network in a compressed format for use during forward- and backward-propagation training of the neural network. In certain examples of the disclosed technology, a computing system is configured to perform forward propagation for a layer of a neural network to produce first activation values in a first block floating-point format. In some examples, activation values generated by forward propagation are converted by the compressor to a second block floating-point format having a narrower numerical precision than the first. Outlier values, comprising additional bits of mantissa and/or exponent, are stored in ancillary storage for a subset of the activation values. The compressed activation values are stored in memory, where they can be retrieved for use during back propagation.
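The block floating-point compression with ancillary outlier storage described here can be sketched as follows. This is an illustrative toy, not the patent's implementation: the shared-exponent choice, the relative-error test for declaring an outlier, and storing outliers as full-precision floats in a dict are all assumptions.

```python
import math

def to_bfp_with_outliers(block, mantissa_bits, rel_tol):
    """Compress a block of floats to a shared-exponent block
    floating-point format with narrow signed integer mantissas.
    Values that reconstruct too poorly are kept at full precision
    in an ancillary outlier map instead."""
    max_abs = max(abs(v) for v in block)
    if max_abs == 0.0:
        return [0] * len(block), 0, {}
    shared_exp = math.floor(math.log2(max_abs)) + 1
    scale = 2.0 ** shared_exp / 2 ** (mantissa_bits - 1)
    limit = 2 ** (mantissa_bits - 1) - 1
    mantissas, outliers = [], {}
    for i, v in enumerate(block):
        m = max(-limit - 1, min(limit, round(v / scale)))
        if v != 0.0 and abs(m * scale - v) / abs(v) > rel_tol:
            outliers[i] = v  # ancillary storage: extra precision
            m = 0
        mantissas.append(m)
    return mantissas, shared_exp, outliers

def from_bfp(mantissas, shared_exp, outliers, mantissa_bits):
    """Decompress, preferring ancillary outlier values where present."""
    scale = 2.0 ** shared_exp / 2 ** (mantissa_bits - 1)
    return [outliers.get(i, m * scale) for i, m in enumerate(mantissas)]
```

The shared exponent makes small values in a block with one large value round to zero; the outlier map is exactly the escape hatch for those values.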
-
Publication No.: US20200210839A1
Publication Date: 2020-07-02
Application No.: US16237202
Filing Date: 2018-12-31
Applicant: Microsoft Technology Licensing, LLC
Inventor: Daniel Lo , Amar Phanishayee , Eric S. Chung , Yiren Zhao , Ritchie Zhao
Abstract: Apparatus and methods for training a neural network accelerator using quantized-precision data formats having outlier values are disclosed, and in particular for storing activation values from a neural network in a compressed format for use during forward- and backward-propagation training of the neural network. In certain examples of the disclosed technology, a computing system is configured to perform forward propagation for a layer of a neural network to produce first activation values in a first block floating-point format. In some examples, activation values generated by forward propagation are converted by the compressor to a second block floating-point format having a narrower numerical precision than the first. Outlier values, comprising additional bits of mantissa and/or exponent, are stored in ancillary storage for a subset of the activation values. The compressed activation values are stored in memory, where they can be retrieved for use during back propagation.
-
Publication No.: US10187292B2
Publication Date: 2019-01-22
Application No.: US15130787
Filing Date: 2016-04-15
Applicant: Microsoft Technology Licensing, LLC
Inventor: Monia Ghobadi , Ratul Mahajan , Amar Phanishayee , Danyang Zhuo , Xuan Kelvin Zou
IPC: G06F15/16 , H04L12/721 , H04L12/751 , H04L12/713 , H04L12/24
Abstract: Techniques and architectures may be used to generate data center network topologies that use less reliable and less expensive links mixed with links of higher reliability. Such topologies may be categorized into reliability classes, where each class corresponds to a bound(s) on reliability of paths that include the links. A topology class may be selected for use by an application based, at least in part, on the degree of reliability demanded by the application.
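The reliability-class selection described above rests on a simple model: a path works only if every link on it works, so (assuming independent failures) path reliability is the product of link reliabilities, and a class is a lower bound on that product. The class names and bounds below are invented for illustration:

```python
def path_reliability(link_reliabilities):
    """Reliability of a path whose links fail independently:
    the product of the per-link reliabilities."""
    r = 1.0
    for x in link_reliabilities:
        r *= x
    return r

def reliability_class(path_links, class_bounds):
    """Assign a path to the most demanding class whose lower bound
    it still meets. class_bounds maps class name -> minimum path
    reliability, e.g. {"gold": 0.999, "silver": 0.99, "bronze": 0.9}."""
    r = path_reliability(path_links)
    eligible = [(bound, name) for name, bound in class_bounds.items()
                if r >= bound]
    return max(eligible)[1] if eligible else None
```

An application then asks for the cheapest class that meets its own reliability demand, which is what lets the topology mix cheap, less reliable links with expensive, more reliable ones.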
-
Publication No.: US10084868B2
Publication Date: 2018-09-25
Application No.: US15256563
Filing Date: 2016-09-03
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ranveer Chandra , Ashish Kapoor , Sudipta Sinha , Amar Phanishayee , Deepak Vasisht , Xinxin Jin , Madhusudhan Gumbalapura Sudarshan
CPC classification number: H04L67/18 , G01C11/02 , H04L12/66 , H04L41/0896 , H04L47/762 , H04L67/10 , H04L67/12 , H04L67/2828 , H04L67/322 , H04N5/23238 , H04N7/181
Abstract: A gateway that may be implemented in a local network and that communicates with a cloud network to provide efficient services in a weakly connected setting is disclosed. The gateway may be configured to enable services that efficiently utilize resources in both the gateway and the cloud network, and to provide a desired quality of service while operating in a weakly connected setting. The gateway may provide data collection and processing and local network services, and may enable cloud services that utilize the data it collects and processes. The local network may include one or more sensors and/or video cameras that provide data to the gateway. In a further implementation, the gateway may determine an allocation of one or more tasks of a service between itself and the cloud network based on a desired service latency.
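The latency-based task allocation at the end of this abstract can be illustrated as choosing a cut point in a pipeline of service stages. The model below is entirely my own sketch: each stage has assumed gateway and cloud execution times, and an assumed upload cost for shipping the intermediate result over the weak link at each possible cut.

```python
def allocate_tasks(tasks, upload_costs):
    """Choose how many pipeline stages to run on the gateway before
    handing off to the cloud. tasks[i] = (gateway_time, cloud_time);
    upload_costs[k] = time to ship the intermediate result if the
    first k stages run locally. Returns (k, latency) minimizing
    end-to-end latency."""
    n = len(tasks)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):
        latency = (sum(g for g, _ in tasks[:k]) + upload_costs[k]
                   + sum(c for _, c in tasks[k:]))
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency
```

Running early stages locally often shrinks the data (e.g. extracting features from video) so the upload over the weak link gets cheaper, which is why a middle cut can beat both all-local and all-cloud.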
-
Publication No.: US12277502B2
Publication Date: 2025-04-15
Application No.: US18415159
Filing Date: 2024-01-17
Applicant: Microsoft Technology Licensing, LLC
Inventor: Daniel Lo , Amar Phanishayee , Eric S. Chung , Yiren Zhao
Abstract: Apparatus and methods for training a neural network accelerator using quantized-precision data formats are disclosed, and in particular for storing activation values from a neural network in a compressed format having lossy or non-uniform mantissas for use during forward- and backward-propagation training of the neural network. In certain examples of the disclosed technology, a computing system includes processors, memory, and a compressor in communication with the memory. The computing system is configured to perform forward propagation for a layer of a neural network to produce first activation values in a first block floating-point format. In some examples, activation values generated by forward propagation are converted by the compressor to a second block floating-point format having a non-uniform and/or lossy mantissa. The compressed activation values are stored in the memory, where they can be retrieved for use during back propagation.
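The non-uniform mantissa idea can be illustrated with codebook quantization: instead of evenly spaced mantissa levels, magnitudes snap to a codebook that spends more levels where values cluster. The codebook and nearest-neighbor rule below are illustrative assumptions, not the patent's encoding:

```python
def quantize_nonuniform(values, codebook):
    """Quantize each value's magnitude to the nearest codebook entry
    (sign kept separately). A non-uniform codebook places more
    levels where values cluster, unlike a fixed-width mantissa."""
    out = []
    for v in values:
        sign = -1.0 if v < 0 else 1.0
        nearest = min(codebook, key=lambda c: abs(c - abs(v)))
        out.append(sign * nearest)
    return out
```

A codebook of 2^k entries needs only k bits per stored value, so the choice of where to place the levels is what makes the format "non-uniform" rather than simply narrower.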
-
Publication No.: US10740195B2
Publication Date: 2020-08-11
Application No.: US16141269
Filing Date: 2018-09-25
Applicant: Microsoft Technology Licensing, LLC
Abstract: This document relates to data storage techniques. One example can buffer write commands and cause the write commands to be committed to storage in flush epoch order. Another example can maintain a persistent log of write commands that are arranged in the persistent log in flush epoch order. Both examples may provide a prefix consistent state in the event of a crash.
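Committing buffered writes in flush-epoch order can be sketched with a small in-memory model. The class below is an illustration under assumed names (a dict stands in for the disk, and epochs are committed one at a time); the point is that persisted state is always a prefix of what was flushed:

```python
class FlushEpochBuffer:
    """Buffer writes and commit them to storage in flush-epoch order:
    all writes issued before a flush() land before any write issued
    after it, so a crash always leaves a prefix-consistent state."""
    def __init__(self, storage):
        self.storage = storage     # dict standing in for a disk
        self.current_epoch = []    # writes since the last flush()
        self.pending = []          # sealed epochs, oldest first

    def write(self, key, value):
        self.current_epoch.append((key, value))

    def flush(self):
        """Seal the current epoch; it may be committed lazily later."""
        self.pending.append(self.current_epoch)
        self.current_epoch = []

    def commit_next_epoch(self):
        """Persist the oldest sealed epoch, preserving flush order."""
        if self.pending:
            for key, value in self.pending.pop(0):
                self.storage[key] = value
```

Because epochs are drained strictly oldest-first, stopping after any commit (a stand-in for a crash) leaves storage reflecting exactly the writes up to some flush point and nothing newer.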