-
Publication Number: US20190180170A1
Publication Date: 2019-06-13
Application Number: US15839301
Filing Date: 2017-12-12
Applicant: Amazon Technologies, Inc.
Inventor: Randy Huang , Ron Diamant
Abstract: Provided are systems, methods, and integrated circuits for a neural network processing system. In various implementations, the system can include a first array of processing engines coupled to a first set of memory banks and a second array of processing engines coupled to a second set of memory banks. The first and second sets of memory banks can store all the weight values for a neural network, where the weight values are stored before any input data is received. Upon receiving input data, the system performs a task defined for the neural network. Performing the task can include computing an intermediate result using the first array of processing engines, copying the intermediate result to the second set of memory banks, and computing a final result using the second array of processing engines, where the final result corresponds to an outcome of performing the task.
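The two-stage dataflow the abstract describes can be sketched in software; the following is a minimal illustrative model, not the patented hardware. The class and function names (`ProcessingArray`, `run_task`, `matvec`) and the tiny two-layer example are assumptions made for illustration.

```python
def matvec(weights, x):
    """Multiply a weight matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

class ProcessingArray:
    """Stand-in for one array of processing engines plus its memory banks."""
    def __init__(self):
        self.banks = None  # memory banks local to this array

    def preload(self, weights):
        # Weights are stored before any input data is received.
        self.banks = weights

    def compute(self, x):
        return matvec(self.banks, x)

def run_task(first_array, second_array, input_data):
    # Stage 1: the first array computes the intermediate result.
    intermediate = first_array.compute(input_data)
    # The intermediate result is copied to the second set of memory banks,
    # then stage 2 produces the final result.
    return second_array.compute(intermediate)

first, second = ProcessingArray(), ProcessingArray()
first.preload([[1, 0], [0, 1]])    # illustrative identity layer
second.preload([[2, 0], [0, 3]])   # illustrative scaling layer
result = run_task(first, second, [4, 5])  # -> [8, 15]
```

Because both sets of weights are resident before input arrives, no weight traffic occurs on the critical path of the task.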
-
Publication Number: US20190179795A1
Publication Date: 2019-06-13
Application Number: US15839157
Filing Date: 2017-12-12
Applicant: Amazon Technologies, Inc.
Inventor: Randy Huang , Ron Diamant , Jindrich Zejda , Drazen Borkovic
Abstract: Provided are systems, methods, and integrated circuits for a neural network processor that can execute a fast context switch between one neural network and another. In various implementations, a neural network processor can include a plurality of memory banks storing a first set of weight values for a first neural network. When the neural network processor receives first input data, the neural network processor can compute a first result using the first set of weight values and the first input data. While computing the first result, the neural network processor can store, in the memory banks, a second set of weight values for a second neural network. When the neural network processor receives second input data, the neural network processor can compute a second result using the second set of weight values and the second input data, where the computation occurs upon completion of computation of the first result.
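The context-switch scheme can be modeled as staging a second network's weights into spare banks while the first network is still computing. The sketch below is an illustrative software analogue, not the patented circuit; the class name, context identifiers, and the per-element "network" are assumptions for the example.

```python
class ContextSwitchingProcessor:
    """Toy model: weight loads for one context overlap compute for another."""
    def __init__(self):
        self.banks = {}  # memory banks, keyed by context (network) id

    def load_weights(self, context, weights):
        # In hardware this store would proceed while another context
        # computes, hiding the weight-load latency.
        self.banks[context] = weights

    def compute(self, context, x):
        # Trivial "network": scale the input by each stored weight.
        return [w * x for w in self.banks[context]]

proc = ContextSwitchingProcessor()
proc.load_weights("net1", [1, 2, 3])
r1 = proc.compute("net1", 10)       # first result, first network
proc.load_weights("net2", [4, 5])   # staged during the first computation
r2 = proc.compute("net2", 10)       # begins once the first result completes
```

The fast switch comes from the second load finishing before the first computation does, so the processor never idles between networks.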
-
Publication Number: US10303621B1
Publication Date: 2019-05-28
Application Number: US15452117
Filing Date: 2017-03-07
Applicant: AMAZON TECHNOLOGIES, INC.
Inventor: Ron Diamant , Alex Levin , Barak Wasserstrom
Abstract: An electronic system includes a secret value (e.g., an encryption key) which is used for its intended purpose after which the address translations in the system's memory management unit are modified to prevent further access to the secret value. The address translation modifications also include modification of a translation for the memory management unit itself thereby preventing further modification of the address translations. The secret value cannot again be accessed until the system is reinitialized, but the address translations are modified during each system initialization so that the secret value is only usable for its intended purpose during the initialization process. In other implementations, the system modifies mappings between physical addresses and hardware components to preclude further access to the secret value.
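The lockdown sequence can be illustrated with a toy MMU model: use the secret once, then delete both the translation to the secret and the translation to the MMU's own control region, so the change cannot be undone until reinitialization. Everything below (class name, addresses, the dict-as-page-table) is an illustrative assumption, not the patented design.

```python
class Mmu:
    """Toy MMU: a dict of virtual-to-physical address translations."""
    def __init__(self, mappings):
        self.mappings = dict(mappings)

    def read(self, vaddr, memory):
        return memory[self.mappings[vaddr]]

SECRET_VADDR = 0x1000     # hypothetical address of the secret value
MMU_CTRL_VADDR = 0x2000   # hypothetical address of the MMU's own registers

def use_secret_then_lock(mmu, memory):
    # The secret (e.g. an encryption key) is used for its intended
    # purpose during initialization only.
    key = mmu.read(SECRET_VADDR, memory)
    # Remove the translation to the secret, then the translation to the
    # MMU itself, preventing any further modification of the translations.
    del mmu.mappings[SECRET_VADDR]
    del mmu.mappings[MMU_CTRL_VADDR]
    return key

memory = {0x80: "KEY", 0x90: "CTRL"}
mmu = Mmu({SECRET_VADDR: 0x80, MMU_CTRL_VADDR: 0x90})
key = use_secret_then_lock(mmu, memory)
# Any later read of SECRET_VADDR now fails until the system reinitializes.
```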
-
Publication Number: US10210083B1
Publication Date: 2019-02-19
Application Number: US15669346
Filing Date: 2017-08-04
Applicant: AMAZON TECHNOLOGIES, INC.
Inventor: Noam Efraim Bashari , Ron Diamant , Yaniv Shapira , Barak Wasserstrom
IPC: G06F12/08 , G06F12/0802 , G06F12/02
Abstract: An apparatus such as a system-on-a-chip includes memory that is distributed through one or more functional hardware circuits. Each functional hardware circuit includes memory, and each functional hardware circuit can be configured to have its memory used either by the respective functional hardware circuit or by the apparatus' master device (e.g., main processor). For those functional hardware circuits that are not needed for a given application, their memories can be repurposed for use by the master device. Related methods are also disclosed.
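The repurposing idea reduces to a simple accounting rule: memory belonging to functional blocks the current application does not need is handed to the master device. The sketch below is a hypothetical model of that policy; the class, field names, and sizes are assumptions.

```python
class FunctionalBlock:
    """One functional hardware circuit with its own embedded memory."""
    def __init__(self, name, mem_kib, needed):
        self.name = name
        self.mem_kib = mem_kib   # size of this block's local memory
        self.needed = needed     # is the block used by this application?

def repurpose_unused_memory(blocks):
    # Memory of blocks not needed for the given application is made
    # available to the master device (e.g. the main processor).
    return sum(b.mem_kib for b in blocks if not b.needed)

blocks = [
    FunctionalBlock("video_codec", 64, needed=False),
    FunctionalBlock("network", 32, needed=True),
    FunctionalBlock("crypto", 16, needed=False),
]
extra = repurpose_unused_memory(blocks)  # KiB gained by the master device
```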
-
Publication Number: US10187081B1
Publication Date: 2019-01-22
Application Number: US15633506
Filing Date: 2017-06-26
Applicant: Amazon Technologies, Inc.
Inventor: Ron Diamant , Ori Weber
Abstract: Disclosed herein are techniques for improving compression ratio for dictionary-based data compression. A method includes receiving a data block to be compressed, selecting an initial compression dictionary from a plurality of initial compression dictionaries based on a characteristic of the data block, loading the initial compression dictionary into an adaptive compression dictionary in a buffer, and compressing the data block using the adaptive compression dictionary. The method also includes updating the adaptive compression dictionary based on data in the data block that has been compressed, while compressing the data block.
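The method's two ideas, picking a preset dictionary that fits the data and then updating it during compression, can be sketched with a token-level toy codec. This is an illustrative stand-in, not the patented scheme; the coverage heuristic, the `("ref", i)`/`("lit", tok)` encoding, and all names are assumptions.

```python
def select_initial_dictionary(data, dictionaries):
    # Illustrative stand-in for "a characteristic of the data block":
    # pick the preset whose entries cover the most input bytes.
    def coverage(d):
        return sum(data.count(entry) * len(entry) for entry in d)
    return max(dictionaries, key=coverage)

def compress(tokens, initial_dict):
    # The selected preset is loaded into an adaptive dictionary buffer...
    adaptive = list(initial_dict)
    out = []
    for tok in tokens:
        if tok in adaptive:
            out.append(("ref", adaptive.index(tok)))  # dictionary hit
        else:
            out.append(("lit", tok))  # literal: emit and...
            adaptive.append(tok)      # ...update the dictionary mid-stream
    return out, adaptive

presets = [["the", "and"], ["int", "def"]]
best = select_initial_dictionary("the cat and the dog", presets)
encoded, final_dict = compress(["the", "cat", "the"], ["the"])
```

Starting from a well-matched preset means early tokens already hit the dictionary, which is where most of the compression-ratio gain over a cold-start dictionary comes from.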
-
Publication Number: US10168909B1
Publication Date: 2019-01-01
Application Number: US15084013
Filing Date: 2016-03-29
Applicant: Amazon Technologies, Inc.
Inventor: Ron Diamant , Svetlana Kantorovych , Georgy Machulsky , Ori Weber , Nafea Bshara
Abstract: Described herein are techniques for providing data compression and decompression within the bounds of hardware constraints. In some embodiments, the disclosure provides that a processing entity may load a portion of a data stream into a memory buffer. In some embodiments, the size of the portion of data loaded into the memory buffer may be determined based on a capacity of the memory buffer. The processing entity may determine whether the portion of data loaded into the memory buffer includes matching data segments. Upon determining that the portion of data does not include matching data segments, the processing entity may generate a sequence that includes uncompressed data and an indication that the sequence contains no matching data segments.
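The buffer-bounded flow and the incompressible-data fallback can be sketched as follows. This is a hypothetical illustration of the idea (similar in spirit to a literal run in LZ-style formats), not the disclosed implementation; the minimum match length, buffer capacity, and dict-based sequence format are assumptions.

```python
def has_matching_segments(block, min_len=4):
    """Naive scan for any repeated substring of min_len bytes."""
    seen = set()
    for i in range(len(block) - min_len + 1):
        seg = block[i:i + min_len]
        if seg in seen:
            return True
        seen.add(seg)
    return False

def encode_block(block, buffer_capacity=64):
    # Load only as much data as the hardware memory buffer can hold.
    portion = block[:buffer_capacity]
    if not has_matching_segments(portion):
        # No matching segments: emit a sequence of uncompressed data plus
        # an indication that the sequence contains no matches.
        return {"compressed": False, "data": portion}
    return {"compressed": True, "data": portion}  # real match coding omitted

incompressible = encode_block(b"abcdefgh")   # no repeats -> literal sequence
compressible = encode_block(b"abcdabcd")     # repeated segment detected
```

Emitting an explicit "no matches" sequence keeps the output well-formed even when the data defeats the matcher, at a small fixed overhead.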
-
Publication Number: US20180375528A1
Publication Date: 2018-12-27
Application Number: US15976312
Filing Date: 2018-05-10
Applicant: Amazon Technologies, Inc.
Inventor: Ori Weber , Ron Diamant , Yair Sandberg
Abstract: The following description is directed to decompression using cascaded history buffers. In one example, an apparatus can include a decompression pipeline configured to decompress compressed data comprising code words that reference a history of decompressed data generated from the compressed data. The apparatus can include a first-level history buffer configured to store a more recent history of the decompressed data received from the decompression pipeline. The apparatus can include a second-level history buffer configured to store a less recent history of the decompressed data received from the first-level history buffer.
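The two-level history can be modeled with a small recent buffer that spills its oldest bytes into a larger, less recent buffer, with back-references resolved against the first level before falling through to the second. The sketch below is an illustrative software model, not the hardware pipeline; the code-word format and buffer sizes are assumptions.

```python
from collections import deque

class CascadedHistory:
    def __init__(self, l1_size):
        self.l1 = deque(maxlen=l1_size)  # more recent decompressed bytes
        self.l2 = []                     # less recent bytes evicted from L1

    def append(self, byte):
        if len(self.l1) == self.l1.maxlen:
            self.l2.append(self.l1[0])   # oldest L1 byte spills into L2
        self.l1.append(byte)

    def lookback(self, distance):
        # distance 1 = most recent byte; check L1 first, then fall
        # through to the second-level (less recent) history.
        if distance <= len(self.l1):
            return self.l1[-distance]
        return self.l2[-(distance - len(self.l1))]

def decompress(code_words, l1_size=4):
    hist, out = CascadedHistory(l1_size), []
    for op, val in code_words:
        b = val if op == "lit" else hist.lookback(val)  # "copy" references history
        hist.append(b)
        out.append(b)
    return out

out = decompress([("lit", "a"), ("lit", "b"), ("copy", 2)])  # -> a, b, a
```

In hardware, keeping the first-level buffer small lets the common short-distance references hit fast storage, while the cascade preserves correctness for long-distance references.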
-
Publication Number: US10140096B1
Publication Date: 2018-11-27
Application Number: US15379103
Filing Date: 2016-12-14
Applicant: AMAZON TECHNOLOGIES, INC.
Inventor: Ron Diamant
Abstract: A device includes parallel connected ring oscillators, a pseudo random number generator (PRNG), and a configuration circuit. The parallel connected ring oscillators include a first and second ring oscillator. The PRNG is configured to generate pseudo random bits at every cycle. The configuration circuit is configured to receive and parse the pseudo random bits to generate and distribute a first configuration value and second configuration value based on the pseudo random bits. The first ring oscillator is configured according to the first configuration value. The second ring oscillator is configured according to the second configuration value.
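The configuration circuit's parse-and-distribute step can be illustrated by splitting a PRNG bit stream into two per-oscillator configuration values. This is a hypothetical software analogue; the bit width, the use of `random.Random` in place of a hardware PRNG, and what the values configure (e.g. delay-stage selection) are all assumptions.

```python
import random

def configure_ring_oscillators(seed, bits_per_config=4):
    # Stand-in for the hardware PRNG that emits pseudo-random bits
    # at every cycle.
    prng = random.Random(seed)
    bits = [prng.getrandbits(1) for _ in range(2 * bits_per_config)]

    def to_value(chunk):
        # Pack a chunk of bits into one configuration value.
        v = 0
        for b in chunk:
            v = (v << 1) | b
        return v

    # Parse the stream into a first and second configuration value,
    # one distributed to each parallel-connected ring oscillator.
    first = to_value(bits[:bits_per_config])
    second = to_value(bits[bits_per_config:])
    return first, second

cfg1, cfg2 = configure_ring_oscillators(seed=1)
```

Reconfiguring each oscillator from fresh pseudo-random bits varies its behavior cycle to cycle, which is the usual motivation for PRNG-driven oscillator configuration.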
-
Publication Number: US10063422B1
Publication Date: 2018-08-28
Application Number: US14982505
Filing Date: 2015-12-29
Applicant: Amazon Technologies, Inc.
Inventor: Ron Diamant , Leah Shalev , Nafea Bshara , Erez Izenberg
IPC: G06F15/173 , G06F15/16 , H04L12/24 , G06F3/06 , H04L29/06
CPC classification number: G06F3/0604 , G06F3/0607 , G06F3/0638 , G06F3/0661 , G06F3/067 , H04L67/1097 , H04L67/42
Abstract: Technologies for performing controlled bandwidth expansion are described. For example, a storage server can receive a request from a client to read compressed data. The storage server can obtain individual storage units of the compressed data. The storage server can also obtain a compressed size and an uncompressed size for each of the storage units. The storage server can generate network packet content comprising the storage units and associated padding such that the size of the padding for a given storage unit is based on the uncompressed and compressed sizes of the given storage unit. The storage server can send the network packet content to the client in one or more network packets. The client can receive the network packets, discard the padding, and decompress the compressed data from the storage units.
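One natural reading of the padding rule, illustrated below, is that each unit is padded out to its uncompressed size, so on-wire bandwidth matches what the client will consume after decompression. This is an assumed interpretation for illustration; the zero-byte padding and the `(compressed, uncompressed_size)` tuple format are not from the source.

```python
def build_packet_content(storage_units):
    """storage_units: list of (compressed_bytes, uncompressed_size) pairs."""
    content = []
    for compressed, uncompressed_size in storage_units:
        # Padding size is derived from the compressed and uncompressed
        # sizes: here, pad each unit up to its uncompressed size.
        padding = b"\x00" * (uncompressed_size - len(compressed))
        content.append(compressed + padding)
    return content

units = [(b"\x1f\x8b", 8), (b"abc", 3)]      # second unit is incompressible
packets = build_packet_content(units)
# Every unit now occupies exactly its uncompressed size on the wire;
# the client strips the padding before decompressing.
```

Padding this way makes network load independent of how well the data compressed, which smooths bandwidth and avoids leaking compressibility to observers.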
-
Publication Number: US10020819B1
Publication Date: 2018-07-10
Application Number: US15718669
Filing Date: 2017-09-28
Applicant: AMAZON TECHNOLOGIES, INC.
Inventor: Ron Diamant , Michael Baranchik , Ori Weber
CPC classification number: H03M7/6005 , H03M7/3084 , H03M7/4037 , H03M7/60 , H03M7/6017 , H03M7/6023
Abstract: A computing system includes a network interface, a processor, and a decompression circuit. In response to a compression request from the processor, the decompression circuit compresses data to produce compressed data and transmits the compressed data through the network interface. In response to a decompression request from the processor for compressed data, the decompression circuit retrieves the requested compressed data, speculatively detects codewords in each of a plurality of overlapping bit windows within the compressed data, selects valid codewords from some, but not all, of the overlapping bit windows, decodes the selected valid codewords to generate decompressed data, and provides the decompressed data to the processor.
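Speculative detection across overlapping bit windows can be illustrated with a tiny prefix code: a codeword is detected starting at every bit offset (most of which are not real boundaries), then only the windows lying on the actual codeword chain are selected. The toy code table and serial selection loop are assumptions; in hardware the detection across windows would happen in parallel.

```python
# Hypothetical toy prefix code: '0' -> 'a', '10' -> 'b', '11' -> 'c'
CODE = {"0": "a", "10": "b", "11": "c"}
MAX_LEN = 2  # longest codeword in bits

def speculative_decode(bits):
    # Speculatively detect a codeword starting at every bit position
    # (overlapping windows), boundary or not.
    candidates = {}
    for i in range(len(bits)):
        for ln in range(1, MAX_LEN + 1):
            word = bits[i:i + ln]
            if word in CODE:
                candidates[i] = (ln, CODE[word])
                break
    # Select valid codewords: walk from offset 0, keeping only the
    # windows that fall on true codeword boundaries (some, not all).
    out, i = [], 0
    while i < len(bits):
        ln, sym = candidates[i]
        out.append(sym)
        i += ln
    return "".join(out)

decoded = speculative_decode("01011")  # '0' + '10' + '11'
```

The payoff of speculating on every window is that codeword detection no longer waits on knowing the previous codeword's length, which is the serial bottleneck in variable-length decoding.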
-