    11.
    Invention Application
    EFFICIENT DATA COMPRESSION FOR SOLID-STATE MEMORY (Granted)

    Publication No.: US20160283159A1

    Publication Date: 2016-09-29

    Application No.: US14671929

    Application Date: 2015-03-27

    Abstract: Compression and decompression technology within a solid-state device (SSD) is disclosed that provides a good compression ratio while taking up less on-chip area. An input interface receives an input stream to be compressed. An output interface provides a compressed stream. A history buffer has a fixed size that is a fraction of the size of a data buffer. Processing logic encodes element types, literals, and pointers into the compressed stream; the pointers reference copies of data found elsewhere within the history buffer during compression. The history buffer may be multiple banks wide, with data loaded from the input stream sequentially across the rows of the banks. The decompression side may be similarly designed, optionally with a different number of banks. The pointers may be a fixed two bytes, including four bits for the length and eleven bits for the offset of the back reference to a copy (or another combination).

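    Purely as an illustration of the scheme the abstract describes, below is a minimal Python sketch of an LZ77-style encoder with a small fixed history window and two-byte pointers carrying a 4-bit length and an 11-bit offset. The minimum match length, the element-type bit, and the exact bit packing are assumptions made for the sketch, not details taken from the patent.

    # Minimal LZ77-style sketch loosely following the abstract: a fixed-size
    # history window, literal/pointer element types, and 2-byte pointers packed
    # as 1 type bit + 4 length bits + 11 offset bits. All field choices beyond
    # "4 bits length, 11 bits offset" are illustrative assumptions.

    HISTORY_SIZE = 1 << 11          # 11-bit offsets address a 2 KiB window
    MIN_MATCH = 3                   # assumed minimum match worth encoding
    MAX_MATCH = MIN_MATCH + 15      # 4-bit length field stores (length - MIN_MATCH)

    def compress(data: bytes) -> list:
        """Return a list of elements: ('lit', byte) or ('ptr', offset, length)."""
        out, i = [], 0
        while i < len(data):
            best_len, best_off = 0, 0
            window_start = max(0, i - HISTORY_SIZE)
            for j in range(window_start, i):            # naive match search
                length = 0
                while (length < MAX_MATCH and i + length < len(data)
                       and data[j + length] == data[i + length]):
                    length += 1
                if length > best_len:
                    best_len, best_off = length, i - j
            if best_len >= MIN_MATCH:
                out.append(('ptr', best_off, best_len))
                i += best_len
            else:
                out.append(('lit', data[i]))
                i += 1
        return out

    def pack_pointer(offset: int, length: int) -> bytes:
        """Pack a back reference into 2 bytes: [type=1][len:4][offset:11]."""
        word = (1 << 15) | ((length - MIN_MATCH) << 11) | (offset - 1)
        return word.to_bytes(2, 'big')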

    METHOD AND APPARATUS FOR EFFICIENT DEFLATE DECOMPRESSION USING CONTENT-ADDRESSABLE DATA STRUCTURES

    Publication No.: US20220200623A1

    Publication Date: 2022-06-23

    Application No.: US17133609

    Application Date: 2020-12-23

    Abstract: An apparatus and method for efficient compression block decoding using a content-addressable structure for header processing. For example, one embodiment of an apparatus comprises: a header parser to extract a sequence of tokens and corresponding length values from a header of a compression block, the tokens and corresponding length values associated with the type of compression used to compress a payload of the compression block; and a content-addressable data structure builder to construct a content-addressable data structure based on the tokens and length values, writing entries that each comprise a length value and a count value, the count value indicating the number of times that length value was previously written to an entry in the content-addressable data structure.
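
    As a rough software analogue of the (length, count) entries the abstract describes, the sketch below records, for each token parsed from a block header, its code length and how many times that length has already been written; assuming canonical Huffman coding as used in DEFLATE (RFC 1951), those two values suffice to reconstruct each token's code. The entry layout and helper names are illustrative, not the patented hardware structure.

    # Illustrative sketch of the (length, count) bookkeeping described in the
    # abstract: as each token's code length is recorded, the entry also stores
    # how many times that length was seen before it. With canonical Huffman
    # coding (as in DEFLATE) this is enough to derive each token's code.
    from collections import defaultdict

    def build_entries(code_lengths):
        """code_lengths: iterable of (token, length) pairs parsed from a block header."""
        seen = defaultdict(int)          # times each length has been written so far
        entries = []
        for token, length in code_lengths:
            if length == 0:              # unused symbol, no code assigned
                continue
            entries.append({'token': token, 'length': length, 'count': seen[length]})
            seen[length] += 1
        return entries

    def assign_codes(entries):
        """Derive canonical Huffman codes from (length, count) entries (RFC 1951 style)."""
        max_len = max(e['length'] for e in entries)
        bl_count = [0] * (max_len + 1)
        for e in entries:
            bl_count[e['length']] += 1
        next_code, code = [0] * (max_len + 1), 0
        for bits in range(1, max_len + 1):
            code = (code + bl_count[bits - 1]) << 1
            next_code[bits] = code
        return {e['token']: next_code[e['length']] + e['count'] for e in entries}

    For example, tokens A, B, C, D with code lengths 2, 1, 3, 3 produce the canonical codes B=0, A=10, C=110, D=111, matching the worked example in RFC 1951.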

    HARDWARE ACCELERATORS AND METHODS FOR OFFLOAD OPERATIONS

    Publication No.: US20180095750A1

    Publication Date: 2018-04-05

    Application No.: US15282372

    Application Date: 2016-09-30

    CPC classification number: G06F9/50 G06F9/5044

    Abstract: Methods and apparatuses relating to offload operations are described. In one embodiment, a hardware processor includes a core to execute a thread and offload an operation, and first and second hardware accelerators to execute the operation, wherein the first and second hardware accelerators are coupled to: shared buffers to store output data from the first hardware accelerator and provide that output data as input data to the second hardware accelerator; an input buffer descriptor array of the second hardware accelerator with an entry for each respective shared buffer; an input buffer response descriptor array of the second hardware accelerator with a corresponding response entry for each respective shared buffer; an output buffer descriptor array of the first hardware accelerator with an entry for each respective shared buffer; and an output buffer response descriptor array of the first hardware accelerator with a corresponding response entry for each respective shared buffer.
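
    The following Python sketch is only a conceptual software model of the descriptor arrangement described above, not the hardware design: two chained accelerator stages share a set of buffers, each side keeps a descriptor array naming those buffers, and a parallel response descriptor array records per-buffer completion. Class and field names are invented for illustration.

    # Conceptual software model (not the hardware) of the abstract's layout:
    # shared buffers between two accelerators, with descriptor and response
    # descriptor arrays on each side, one entry per shared buffer.
    from dataclasses import dataclass

    @dataclass
    class SharedBuffer:
        data: bytes = b''

    @dataclass
    class Descriptor:
        buffer_index: int

    @dataclass
    class ResponseDescriptor:
        buffer_index: int
        done: bool = False
        bytes_valid: int = 0

    class ChainedAccelerators:
        def __init__(self, num_buffers: int):
            self.buffers = [SharedBuffer() for _ in range(num_buffers)]
            # First accelerator's view: where to put output, and completion status.
            self.out_desc = [Descriptor(i) for i in range(num_buffers)]
            self.out_resp = [ResponseDescriptor(i) for i in range(num_buffers)]
            # Second accelerator's view: where to take input, and completion status.
            self.in_desc = [Descriptor(i) for i in range(num_buffers)]
            self.in_resp = [ResponseDescriptor(i) for i in range(num_buffers)]

        def run_stage1(self, chunks, stage1):
            """First accelerator: process chunks into the shared buffers (one chunk per buffer)."""
            for i, chunk in enumerate(chunks):
                buf = self.buffers[self.out_desc[i].buffer_index]
                buf.data = stage1(chunk)
                self.out_resp[i].done = True
                self.out_resp[i].bytes_valid = len(buf.data)

        def run_stage2(self, stage2):
            """Second accelerator: consume the buffers stage 1 has completed."""
            results = []
            for i, desc in enumerate(self.in_desc):
                if not self.out_resp[i].done:
                    continue                      # buffer not ready yet
                buf = self.buffers[desc.buffer_index]
                results.append(stage2(buf.data[:self.out_resp[i].bytes_valid]))
                self.in_resp[i].done = True       # signal the buffer can be reused
            return results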

    16.
    Invention Application
    INSTRUCTION FOR ACCELERATING SNOW 3G WIRELESS SECURITY ALGORITHM (Under examination, published)

    Publication No.: US20170048699A1

    Publication Date: 2017-02-16

    Application No.: US15238698

    Application Date: 2016-08-16

    Abstract: Vector instructions for performing SNOW 3G wireless security operations are received and executed by the execution circuitry of a processor. The execution circuitry receives a first operand of the first instruction specifying a first vector register that stores the current state of a finite state machine (FSM). The execution circuitry also receives a second operand of the first instruction specifying a second vector register that stores the data elements of a linear feedback shift register (LFSR) that are needed for updating the FSM. The execution circuitry executes the first instruction to produce an updated state of the FSM and an output of the FSM in a destination operand of the first instruction.

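    For context, the FSM update such an instruction performs is small. The sketch below follows the SNOW 3G FSM clocking from the 3GPP specification: the output word F is (s15 + R1) xor R2, the new R1 is R2 + (R3 xor s5) modulo 2^32, the new R2 is S1 of the old R1, and the new R3 is S2 of the old R2. Placeholder functions stand in for the spec's S1/S2 32-bit permutations; this is a software illustration only, not the patented vector instruction.

    # Software sketch of one SNOW 3G FSM update: it consumes the current FSM
    # state (R1, R2, R3) plus the two LFSR taps it needs (s5 and s15) and
    # produces the updated state and the FSM output word F. The S1/S2 32-bit
    # permutations are defined in the 3GPP SNOW 3G spec; placeholders are used.
    MASK32 = 0xFFFFFFFF

    def S1(w: int) -> int:   # placeholder for the AES S-box based permutation
        return w

    def S2(w: int) -> int:   # placeholder for the Q-box based permutation
        return w

    def clock_fsm(r1: int, r2: int, r3: int, s5: int, s15: int):
        """One FSM clock: returns (new_r1, new_r2, new_r3, output_word)."""
        f = ((s15 + r1) & MASK32) ^ r2          # FSM output word
        r = (r2 + (r3 ^ s5)) & MASK32           # value that becomes the new R1
        new_r3 = S2(r2)
        new_r2 = S1(r1)
        return r, new_r2, new_r3, f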

    17.
    Invention Application
    METHOD AND APPARATUS FOR SPECULATIVE DECOMPRESSION (Granted)

    Publication No.: US20160321076A1

    Publication Date: 2016-11-03

    Application No.: US14698486

    Application Date: 2015-04-28

    Abstract: An apparatus and method for performing parallel decoding of prefix codes such as Huffman codes. For example, one embodiment of an apparatus comprises: a first decompression module to perform non-speculative decompression of a first portion of a prefix code payload comprising a first plurality of symbols; and a second decompression module to perform speculative decompression of a second portion of the prefix code payload comprising a second plurality of symbols, concurrently with the non-speculative decompression performed by the first decompression module.

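    A toy software sketch of the speculation idea follows: one decoder works from the start of the payload, a second starts from a guessed bit offset in the second half, and the speculative symbols are committed only if the first decoder lands exactly on that guess. The prefix code table and the mid-point guess are illustrative assumptions; real DEFLATE streams and the patented hardware are more involved.

    # Toy illustration of speculative parallel prefix-code decoding. The code
    # table and payload format are made up for the example, not DEFLATE itself.
    CODE_TABLE = {'0': 'A', '10': 'B', '110': 'C', '111': 'D'}   # prefix code

    def decode(bits: str, start: int, stop: int):
        """Decode from bit offset `start`; stop at the first symbol boundary >= stop."""
        out, i, cur = [], start, ''
        while i < len(bits) and (i < stop or cur):
            cur += bits[i]; i += 1
            if cur in CODE_TABLE:
                out.append(CODE_TABLE[cur]); cur = ''
        return out, i                     # i is the bit offset actually reached

    def speculative_decode(bits: str):
        guess = len(bits) // 2            # speculative start for the second half
        spec_syms, _ = decode(bits, guess, len(bits))       # runs concurrently in HW
        syms1, reached = decode(bits, 0, guess)             # non-speculative pass
        if reached == guess:              # guess was on a symbol boundary: commit
            return syms1 + spec_syms
        rest, _ = decode(bits, reached, len(bits))          # misspeculation: redo
        return syms1 + rest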

    CIRCUITRY AND METHODS FOR LOW-LATENCY EFFICIENT CHAINED DECRYPTION AND DECOMPRESSION ACCELERATION

    Publication No.: US20220309190A1

    Publication Date: 2022-09-29

    Application No.: US17214820

    Application Date: 2021-03-27

    Inventor: VINODH GOPAL

    Abstract: Systems, methods, and apparatuses for low-latency, page-efficient chained decryption and decompression acceleration are described. In one embodiment, a processor comprises a hardware processor core and an accelerator circuit coupled to the hardware processor core. In response to a descriptor from the hardware processor core comprising an indication of a hash key and encrypted data to be decrypted, the accelerator circuit determines whether the encrypted data is to be read in the encrypted order or in the reverse of the encrypted order. If the encrypted data is to be read in the reverse order, the accelerator circuit generates a resultant authentication tag in that reverse order, based at least in part on the hash key, without reordering the reverse-order encrypted data into the encrypted order; if the encrypted data is to be read in the encrypted order, it generates the resultant authentication tag in the encrypted order, again based at least in part on the hash key.
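
    To see why reading the ciphertext in reverse need not require reordering it before authentication, the toy sketch below uses a polynomial MAC over the integers modulo a prime; GHASH in AES-GCM has the same algebraic shape but works over GF(2^128), which this sketch deliberately does not implement. The tag b1*K^n + b2*K^(n-1) + ... + bn*K can be accumulated front-to-back with Horner's rule or back-to-front by walking up the powers of the hash key K.

    # Toy modular-arithmetic analogue of a GHASH-style tag, illustrative only.
    P = (1 << 61) - 1                     # toy prime modulus, not GCM arithmetic

    def tag_forward(blocks, key):
        acc = 0
        for b in blocks:                  # Horner's rule in encrypted order
            acc = (acc + b) * key % P
        return acc

    def tag_reverse(blocks, key):
        acc, power = 0, key
        for b in reversed(blocks):        # reverse order, no reordering of data
            acc = (acc + b * power) % P
            power = power * key % P
        return acc

    blocks = [7, 42, 1234, 999]
    key = 123456789
    assert tag_forward(blocks, key) == tag_reverse(blocks, key)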

    ACCELERATOR APPARATUS AND METHOD FOR DECODING AND DE-SERIALIZING BIT-PACKED DATA

    Publication No.: US20200004535A1

    Publication Date: 2020-01-02

    Application No.: US16024815

    Application Date: 2018-06-30

    Abstract: An apparatus and method for loading and storing multiple sets of packed data elements. For example, one embodiment of a processor comprises: a decoder to decode a multiple load instruction to generate a decoded multiple load instruction comprising a plurality of operations, the multiple load instruction including an opcode, source operands, and at least one destination operand; a first source register to store N packed index values; a second source register to store a base address value; execution circuitry to execute the operations of the decoded multiple load instruction, the execution circuitry comprising: parallel address generation circuitry to combine the base address from the second source register with each of the N packed index values to generate N system memory addresses; data load circuitry to cause N sets of data elements to be retrieved from the N system memory addresses, the data load circuitry to store the N sets of data elements in N vector destination registers identified by the at least one destination operand.
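
    A small Python model of the address generation and load steps is shown below, with a flat byte buffer standing in for system memory; the element size, the number of elements per set, and the little-endian layout are assumptions made for the illustration.

    # Software model of the multiple-load operation in the abstract: N packed
    # index values are combined with a base address to form N addresses, and
    # the data at each address is loaded into its own destination "register".
    ELEMENT_SIZE = 4                      # assume 32-bit elements
    SET_SIZE = 4                          # assume 4 elements loaded per index

    def multiple_load(memory: bytes, base: int, indices, scale: int = ELEMENT_SIZE):
        """Return N lists of data elements, one per packed index (one per dest reg)."""
        destinations = []
        for idx in indices:                       # parallel address generation in HW
            addr = base + idx * scale
            elements = []
            for k in range(SET_SIZE):             # data load: one set per address
                off = addr + k * ELEMENT_SIZE
                elements.append(int.from_bytes(memory[off:off + ELEMENT_SIZE], 'little'))
            destinations.append(elements)
        return destinations

    # Example: 16 consecutive 32-bit words, gather sets starting at indices 0, 4, 8.
    memory = b''.join(i.to_bytes(4, 'little') for i in range(16))
    print(multiple_load(memory, base=0, indices=[0, 4, 8]))
    # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]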

    TECHNIQUES FOR RANDOM OPERATIONS ON COMPRESSED DATA

    Publication No.: US20180373808A1

    Publication Date: 2018-12-27

    Application No.: US15634444

    Application Date: 2017-06-27

    Abstract: Techniques and apparatus for discrete compression and decompression processes are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to determine a compression configuration to compress source data, generate discrete compressed data comprising at least one high-level block comprising a header and at least one discrete block based on the compression configuration, and generate index information for accessing the at least one discrete block. Other embodiments are described and claimed.
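
    As a loose software analogue of the discrete blocks and index information described above, the sketch below compresses fixed-size blocks independently with zlib and keeps an index of compressed offsets, so a random read decompresses only the blocks covering the requested range. The block size, index fields, and the use of zlib are assumptions for the sketch, not the patented format.

    # Block-wise compression with an index for random access, illustrative only.
    import zlib

    BLOCK_SIZE = 64 * 1024                # uncompressed bytes per discrete block

    def compress_discrete(src: bytes):
        blocks, index, off = [], [], 0
        for start in range(0, len(src), BLOCK_SIZE):
            comp = zlib.compress(src[start:start + BLOCK_SIZE])
            index.append({'uncompressed_start': start,
                          'compressed_offset': off,
                          'compressed_length': len(comp)})
            blocks.append(comp)
            off += len(comp)
        return b''.join(blocks), index    # payload plus index for random access

    def read_range(payload: bytes, index, start: int, length: int) -> bytes:
        out = b''
        for entry in index:               # decompress only the covering blocks
            block_start = entry['uncompressed_start']
            if block_start + BLOCK_SIZE <= start or block_start >= start + length:
                continue
            comp = payload[entry['compressed_offset']:
                           entry['compressed_offset'] + entry['compressed_length']]
            block = zlib.decompress(comp)
            lo = max(start - block_start, 0)
            hi = min(start + length - block_start, len(block))
            out += block[lo:hi]
        return out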
