MULTI-MEMORY ON-CHIP COMPUTATIONAL NETWORK
    211.
    Invention Application

    Publication Number: US20190180170A1

    Publication Date: 2019-06-13

    Application Number: US15839301

    Application Date: 2017-12-12

    Abstract: Provided are systems, methods, and integrated circuits for a neural network processing system. In various implementations, the system can include a first array of processing engines coupled to a first set of memory banks and a second array of processing engines coupled to a second set of memory banks. The first and second sets of memory banks can store all the weight values for a neural network, where the weight values are stored before any input data is received. Upon receiving input data, the system performs a task defined for the neural network. Performing the task can include computing an intermediate result using the first array of processing engines, copying the intermediate result to the second set of memory banks, and computing a final result using the second array of processing engines, where the final result corresponds to an outcome of performing the task.
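
    A minimal software sketch of this dataflow is given below (NumPy, illustration only: the layer shapes, the ReLU activation, and the bank layout are invented assumptions, not the patented hardware). All weights are preloaded into two per-array bank sets, the first engine array produces an intermediate result, that result is copied into the second array's banks, and the second array produces the final result.

```python
# Sketch of the two-array dataflow; all names, shapes, and layers are illustrative.
import numpy as np

class EngineArray:
    def __init__(self, weights):
        # "Memory banks" for this array: weights are stored before any input arrives.
        self.banks = {"weights": weights, "activations": None}

    def compute(self, x):
        # Toy processing-engine work: one matrix multiply plus ReLU.
        return np.maximum(x @ self.banks["weights"], 0.0)

rng = np.random.default_rng(0)
first_array = EngineArray(rng.standard_normal((8, 16)))     # first set of banks
second_array = EngineArray(rng.standard_normal((16, 4)))    # second set of banks

def run_task(input_data):
    intermediate = first_array.compute(input_data)           # intermediate result
    second_array.banks["activations"] = intermediate.copy()  # copy to second banks
    return second_array.compute(second_array.banks["activations"])  # final result

print(run_task(rng.standard_normal((1, 8))).shape)           # -> (1, 4)
```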

    FAST CONTEXT SWITCHING FOR COMPUTATIONAL NETWORKS

    Publication Number: US20190179795A1

    Publication Date: 2019-06-13

    Application Number: US15839157

    Application Date: 2017-12-12

    Abstract: Provided are systems, methods, and integrated circuits for a neural network processor that can execute a fast context switch between one neural network and another. In various implementations, a neural network processor can include a plurality of memory banks storing a first set of weight values for a first neural network. When the neural network processor receives first input data, the neural network processor can compute a first result using the first set of weight values and the first input data. While computing the first result, the neural network processor can store, in the memory banks, a second set of weight values for a second neural network. When the neural network processor receives second input data, the neural network processor can compute a second result using the second set of weight values and the second input data, where computation of the second result begins upon completion of computation of the first result.
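
    A rough software analogue of the context switch is sketched below (Python threading stands in for the hardware's concurrent weight-loading path; network sizes and names are invented). The second network's weights are staged into the memory banks while the first result is being computed, so the second computation can start as soon as the first completes.

```python
# Illustrative only: a thread stands in for loading weights during computation.
import threading
import numpy as np

rng = np.random.default_rng(1)
memory_banks = {"net_a": rng.standard_normal((8, 4)), "net_b": None}

def load_second_weights():
    # Runs concurrently with the first computation.
    memory_banks["net_b"] = np.ones((8, 4))

def infer(weights, x):
    return np.maximum(x @ weights, 0.0)

first_input = np.arange(8, dtype=float).reshape(1, 8)
second_input = np.ones((1, 8))

loader = threading.Thread(target=load_second_weights)
loader.start()                                             # stage the second network...
first_result = infer(memory_banks["net_a"], first_input)   # ...while the first runs
loader.join()                                              # weights in place: switch
second_result = infer(memory_banks["net_b"], second_input)
print(first_result.shape, second_result.shape)
```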

    Data protection through address modification

    Publication Number: US10303621B1

    Publication Date: 2019-05-28

    Application Number: US15452117

    Application Date: 2017-03-07

    Abstract: An electronic system includes a secret value (e.g., an encryption key) that is used for its intended purpose, after which the address translations in the system's memory management unit are modified to prevent further access to the secret value. The address translation modifications also include modification of a translation for the memory management unit itself, thereby preventing further modification of the address translations. The secret value cannot be accessed again until the system is reinitialized, but the address translations are modified during each system initialization, so the secret value is only usable for its intended purpose during the initialization process. In other implementations, the system modifies mappings between physical addresses and hardware components to preclude further access to the secret value.
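
    The sketch below is a toy model of the idea (pure Python, with made-up page names, addresses, and a hash standing in for real key use): the secret is reachable only while the initialization routine runs, and the last step also unmaps the window through which the memory management unit itself is programmed, so the translations cannot be restored until the next reset.

```python
# Toy MMU model; page names, addresses, and the "derived" value are illustrative.
class ToyMMU:
    def __init__(self):
        # The MMU's own register window starts mapped so software can program it.
        self.translations = {"mmu_regs": 0x200}      # virtual page -> physical page

    def map(self, vpage, ppage):
        # Re-programming requires the MMU register window itself to be mapped.
        if "mmu_regs" not in self.translations:
            raise PermissionError("MMU registers unmapped; translations are frozen")
        self.translations[vpage] = ppage

    def unmap(self, vpage):
        self.translations.pop(vpage, None)

    def read(self, vpage, memory):
        return memory[self.translations[vpage]]

physical_memory = {0x100: b"secret-key-bytes", 0x200: b"mmu-register-block"}

def initialize():
    mmu = ToyMMU()
    mmu.map("secret", 0x100)                             # secret reachable only now
    derived = hash(mmu.read("secret", physical_memory))  # use it for its purpose
    mmu.unmap("secret")                                  # secret no longer addressable
    mmu.unmap("mmu_regs")                                # MMU can't be re-programmed
    return mmu, derived

mmu, _ = initialize()
try:
    mmu.map("secret", 0x100)                             # later remapping attempts fail
except PermissionError as exc:
    print("remapping blocked:", exc)
```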

    Dictionary preload for data compression

    Publication Number: US10187081B1

    Publication Date: 2019-01-22

    Application Number: US15633506

    Application Date: 2017-06-26

    Abstract: Disclosed herein are techniques for improving compression ratio for dictionary-based data compression. A method includes receiving a data block to be compressed, selecting an initial compression dictionary from a plurality of initial compression dictionaries based on a characteristic of the data block, loading the initial compression dictionary into an adaptive compression dictionary in a buffer, and compressing the data block using the adaptive compression dictionary. The method also includes updating the adaptive compression dictionary based on data in the data block that has been compressed, while compressing the data block.
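
    Python's zlib exposes the same preloading mechanism through its zdict parameter, which makes for a compact (if software-only) illustration; the two preset dictionaries and the content sniff used to choose between them below are invented examples, not ones from the patent.

```python
# Dictionary preload with zlib; dictionaries and the selection rule are made up.
import zlib

INITIAL_DICTIONARIES = {
    "json": b'{"id": "name": "value": "timestamp": true false null',
    "log":  b"ERROR WARN INFO DEBUG GET POST HTTP/1.1 200 404 500",
}

def pick_dictionary(block: bytes) -> bytes:
    # "Characteristic of the data block": a crude content sniff.
    key = "json" if block.lstrip().startswith(b"{") else "log"
    return INITIAL_DICTIONARIES[key]

def compress_block(block: bytes) -> bytes:
    zdict = pick_dictionary(block)
    comp = zlib.compressobj(level=9, zdict=zdict)    # preload the initial dictionary
    # DEFLATE's sliding window then adapts to the block's own data as it compresses.
    return comp.compress(block) + comp.flush()

def decompress_block(payload: bytes, zdict: bytes) -> bytes:
    decomp = zlib.decompressobj(zdict=zdict)         # same dictionary must be preloaded
    return decomp.decompress(payload) + decomp.flush()

block = b'{"id": 1, "name": "sensor", "value": 42, "timestamp": 1700000000}'
payload = compress_block(block)
assert decompress_block(payload, pick_dictionary(block)) == block
print(len(block), "->", len(payload), "bytes")
```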

    Compression hardware acceleration
    216.
    Invention Grant

    Publication Number: US10168909B1

    Publication Date: 2019-01-01

    Application Number: US15084013

    Application Date: 2016-03-29

    Abstract: Described herein are techniques for providing data compression and decompression within the bounds of hardware constraints. In some embodiments, the disclosure provides that a processing entity may load a portion of a data stream into a memory buffer. In some embodiments, the size of the portion of data loaded into the memory buffer may be determined based on a capacity of the memory buffer. The processing entity may determine whether the portion of data loaded into the memory buffer includes matching data segments. Upon determining that the portion of data does not include matching data segments, the processing entity may generate a sequence that includes uncompressed data and an indication that the sequence contains no matching data segments.
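
    The no-match path can be illustrated with a short software sketch (the buffer size, the four-byte match test, and the one-byte flag format below are assumptions for the example, with zlib standing in for the real compressor): each buffer-sized portion is scanned for repeated segments, and a portion with none is emitted as an uncompressed sequence marked as containing no matching data segments.

```python
# Buffer-by-buffer sequence emission; flag and sizes are illustrative assumptions.
import zlib

BUFFER_SIZE = 256            # tiny stand-in for the hardware memory-buffer capacity

def has_matching_segments(portion: bytes, min_match: int = 4) -> bool:
    # Naive check: does any min_match-byte segment occur more than once?
    seen = set()
    for i in range(len(portion) - min_match + 1):
        segment = portion[i:i + min_match]
        if segment in seen:
            return True
        seen.add(segment)
    return False

def emit_sequences(stream: bytes):
    # Yield (flag, payload) sequences; flag 0 marks an uncompressed sequence that
    # carries the data as-is because no matching segments were found in it.
    for offset in range(0, len(stream), BUFFER_SIZE):
        portion = stream[offset:offset + BUFFER_SIZE]   # fits the memory buffer
        if has_matching_segments(portion):
            yield 1, zlib.compress(portion)             # stand-in for real compression
        else:
            yield 0, portion

data = bytes(range(256)) + b"abcdabcd" * 64             # unique bytes, then repeats
for flag, payload in emit_sequences(data):
    print("flag", flag, "payload bytes", len(payload))
```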

    DECOMPRESSION USING CASCADED HISTORY WINDOWS
    217.
    Invention Application

    Publication Number: US20180375528A1

    Publication Date: 2018-12-27

    Application Number: US15976312

    Application Date: 2018-05-10

    Abstract: The following description is directed to decompression using cascaded history buffers. In one example, an apparatus can include a decompression pipeline configured to decompress compressed data comprising code words that reference a history of decompressed data generated from the compressed data. The apparatus can include a first-level history buffer configured to store a more recent history of the decompressed data received from the decompression pipeline. The apparatus can include a second-level history buffer configured to store a less recent history of the decompressed data received from the first-level history buffer.
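
    A simplified software model of the cascade is shown below (LZ77-style literal/copy tokens and tiny buffer sizes are assumed for the example): bytes leaving the decompression loop enter the small first-level buffer, bytes evicted from it spill into the larger second-level buffer, and back-references are resolved against whichever level still holds the referenced byte.

```python
# Two-level history model; token format and buffer sizes are illustrative.
from collections import deque

L1_SIZE, L2_SIZE = 16, 64                     # first-level / second-level capacities

class CascadedHistory:
    def __init__(self):
        self.l1 = deque(maxlen=L1_SIZE)       # more recent decompressed history
        self.l2 = deque(maxlen=L2_SIZE)       # less recent decompressed history

    def append(self, byte):
        if len(self.l1) == L1_SIZE:
            self.l2.append(self.l1[0])        # evict the oldest L1 byte into L2
        self.l1.append(byte)

    def lookup(self, distance):
        # distance 1 = most recently written byte; deeper references spill into L2.
        if distance <= len(self.l1):
            return self.l1[-distance]
        return self.l2[-(distance - len(self.l1))]

def decompress(tokens):
    # Tokens are ('lit', byte) or ('copy', distance, length), as in LZ77.
    history, out = CascadedHistory(), bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
            history.append(tok[1])
        else:
            _, distance, length = tok
            for _ in range(length):
                byte = history.lookup(distance)
                out.append(byte)
                history.append(byte)
    return bytes(out)

tokens = [("lit", b) for b in b"abcdefghijklmnopqrst"] + [("copy", 20, 20)]
print(decompress(tokens))   # the 20 literal bytes followed by a copy of themselves
```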

    Preventing ring oscillator phase-lock

    Publication Number: US10140096B1

    Publication Date: 2018-11-27

    Application Number: US15379103

    Application Date: 2016-12-14

    Inventor: Ron Diamant

    Abstract: A device includes parallel-connected ring oscillators, a pseudo-random number generator (PRNG), and a configuration circuit. The parallel-connected ring oscillators include a first and a second ring oscillator. The PRNG is configured to generate pseudo-random bits every cycle. The configuration circuit is configured to receive and parse the pseudo-random bits to generate and distribute a first configuration value and a second configuration value based on the pseudo-random bits. The first ring oscillator is configured according to the first configuration value. The second ring oscillator is configured according to the second configuration value.
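
    A behavioral sketch of the scheme follows (a toy model, not RTL: the stage counts, the three configuration bits per oscillator, and the period formula are invented). Each cycle the PRNG produces a word of pseudo-random bits, the configuration circuit parses the word into one value per oscillator, and each value perturbs that oscillator's delay so the two oscillators do not settle into a locked phase relationship.

```python
# Toy behavioral model of randomized ring-oscillator configuration.
import random

NUM_OSCILLATORS = 2
CONFIG_BITS = 3                              # configuration bits per oscillator

class RingOscillator:
    def __init__(self, base_stages=5):
        self.base_stages = base_stages
        self.stages = base_stages

    def configure(self, value):
        # The configuration value selects 0..7 extra delay stages for this cycle.
        self.stages = self.base_stages + value

    def period(self):
        return 2 * self.stages               # toy model: period ~ twice the stage delay

prng = random.Random(0xD1A)                  # software stand-in for the hardware PRNG
oscillators = [RingOscillator() for _ in range(NUM_OSCILLATORS)]

for cycle in range(4):
    word = prng.getrandbits(CONFIG_BITS * NUM_OSCILLATORS)   # pseudo-random bits
    for i, osc in enumerate(oscillators):                    # parse and distribute
        value = (word >> (i * CONFIG_BITS)) & ((1 << CONFIG_BITS) - 1)
        osc.configure(value)
    print(f"cycle {cycle}: periods =", [osc.period() for osc in oscillators])
```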

    Speculative data decompression
    220.
    Invention Grant

    Publication Number: US10020819B1

    Publication Date: 2018-07-10

    Application Number: US15718669

    Application Date: 2017-09-28

    Abstract: A computing system includes a network interface, a processor, and a decompression circuit. In response to a compression request from the processor, the decompression circuit compresses data to produce compressed data and transmits the compressed data through the network interface. In response to a decompression request from the processor for compressed data, the decompression circuit retrieves the requested compressed data, speculatively detects codewords in each of a plurality of overlapping bit windows within the compressed data, selects valid codewords from some, but not all, of the overlapping bit windows, decodes the selected valid codewords to generate decompressed data, and provides the decompressed data to the processor.
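
    The speculation can be mimicked in software with a toy prefix code (the codebook, the window width, and the sequential loop below are all invented; hardware would evaluate the windows in parallel): a codeword is tentatively detected at every bit offset in a small window, and once the previous codeword's length fixes the true boundary, only the result from that offset is kept.

```python
# Toy prefix code; speculative detection at overlapping bit offsets.
CODEBOOK = {"0": "A", "10": "B", "110": "C", "111": "D"}

def encode(text):
    reverse = {symbol: code for code, symbol in CODEBOOK.items()}
    return "".join(reverse[ch] for ch in text)

def detect_codeword(bits, offset):
    # Speculative detection: try to match a codeword starting at this bit offset.
    for code, symbol in CODEBOOK.items():
        if bits.startswith(code, offset):
            return symbol, len(code)
    return None

def decompress(bits, window=4):
    out, pos = [], 0
    while pos < len(bits):
        # Speculate on every offset of the overlapping window (parallel in hardware).
        speculative = {off: detect_codeword(bits, off)
                       for off in range(pos, min(pos + window, len(bits)))}
        symbol, length = speculative[pos]    # keep only the correctly aligned result
        out.append(symbol)
        pos += length
    return "".join(out)

bits = encode("BADCAB")
assert decompress(bits) == "BADCAB"
print(decompress(bits))
```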
