LOW LATENCY SYNCHRONIZATION FOR OPERATION CACHE AND INSTRUCTION CACHE FETCHING AND DECODING INSTRUCTIONS

    Publication Number: US20190391813A1

    Publication Date: 2019-12-26

    Application Number: US16014715

    Application Date: 2018-06-21

    Abstract: The techniques described herein provide an instruction fetch and decode unit having an operation cache, with low latency when switching between fetching decoded operations from the operation cache and fetching and decoding instructions with a decode unit. This low latency is achieved through a synchronization mechanism that allows work to flow through both the operation cache path and the instruction cache path until that work must stop to wait for output from the opposite path. Decoupling buffers in the operation cache path and the instruction cache path allow work to be held until it is cleared to proceed. Other improvements, such as a specially configured operation cache tag array that allows multiple hits to be detected in a single cycle, further reduce latency by, for example, increasing the speed at which entries are consumed from a prediction queue that stores predicted address blocks.
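    The switching scheme described in the abstract lends itself to a small behavioral sketch. The Python model below is an illustrative assumption only, not the patented design: the class `DualPathFrontEnd`, its buffer names, and its dispatch rule are hypothetical, and the sketch only shows how per-path decoupling buffers let both paths keep producing work while the oldest block, fixed at prediction time, decides when either path must wait.

```python
from collections import deque

# Illustrative model only (names and structure are assumptions, not taken from
# the patent): two decode paths feed per-path decoupling buffers, and blocks are
# released in the program order recorded at prediction time. A path stalls only
# when the next-oldest block belongs to the opposite path and is not ready yet.

class DualPathFrontEnd:
    def __init__(self):
        self.oc_buffer = deque()       # decoupling buffer for the operation cache path
        self.ic_buffer = deque()       # decoupling buffer for the instruction cache / decode path
        self.dispatch_order = deque()  # which path owns each block, oldest first

    def predict(self, path):
        """Record, at prediction time, which path will service the next block."""
        self.dispatch_order.append(path)

    def produce(self, path, uops):
        """A block finishes on 'oc' or 'ic'; park its micro-ops in that path's buffer."""
        (self.oc_buffer if path == "oc" else self.ic_buffer).append(uops)

    def dispatch(self):
        """Release blocks in program order; stop only when the oldest block's
        path has not produced it yet (the synchronization point)."""
        released = []
        while self.dispatch_order:
            buf = self.oc_buffer if self.dispatch_order[0] == "oc" else self.ic_buffer
            if not buf:
                break
            released.append(buf.popleft())
            self.dispatch_order.popleft()
        return released

# Example: the younger instruction-cache block is ready first, but dispatch still
# waits for the older operation-cache block, then drains both with no extra gap.
fe = DualPathFrontEnd()
fe.predict("oc"); fe.predict("ic")
fe.produce("ic", ["add", "load"])
print(fe.dispatch())              # [] -- oldest ("oc") block not ready yet
fe.produce("oc", ["mul"])
print(fe.dispatch())              # [['mul'], ['add', 'load']]
```

    The design point the sketch tries to capture is that neither path ever waits speculatively: work accumulates in the decoupling buffers, and the only stall is the one program order already requires.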

    2. Efficient lossless data compression system, data compressor, and method therefor
    Invention Grant (In Force)

    Publication Number: US09219496B1

    Publication Date: 2015-12-22

    Application Number: US14462243

    Application Date: 2014-08-18

    CPC classification number: H03M7/30 G06F8/41 H03M7/6005

    Abstract: A data compressor for a lossless data compression system includes a hardware aware encoder and a key signal processor. The hardware aware encoder encodes a data value signal into a key signal according to a key assignment formed by: determining the number of data values in a value space, where each data value comprises a plurality of bits; determining the size of the key needed to encode that number of data values; grouping the data values into a plurality of groups based on the fewest bit differences between data values in each group; and assigning fragments of the key based on the fewest bits that can differentiate the groups using the remaining bits of the data values. The key signal processor has an output, adapted to be coupled to a medium, for providing a representation of the key signal at the output.
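    The grouping and key-assignment steps listed in the abstract can be illustrated with a short sketch. Everything below is an assumption made for illustration: `group_by_bit_similarity` is a greedy stand-in for the "fewest bit differences" criterion, and the group-prefix / member-suffix split is one hypothetical way to assign key fragments; neither is the patented method.

```python
from math import ceil, log2

# Illustrative sketch only -- a hypothetical reading of the steps named in the
# abstract, not the patented algorithm.

def key_size(num_values):
    """Smallest key width (in bits) that can distinguish every value in the value space."""
    return max(1, ceil(log2(num_values)))

def hamming(a, b):
    """Number of differing bits between two data values."""
    return bin(a ^ b).count("1")

def group_by_bit_similarity(values, num_groups):
    """Greedy grouping that keeps values with few differing bits together
    (a stand-in for the 'fewest number of bit differences' criterion)."""
    groups = [[seed] for seed in values[:num_groups]]
    for v in values[num_groups:]:
        closest = min(groups, key=lambda g: min(hamming(v, member) for member in g))
        closest.append(v)
    return groups

def assign_keys(values, num_groups=2):
    """Build a value -> key table: one key fragment selects the group and the
    remaining bits distinguish members inside it."""
    values = sorted(set(values))
    k = key_size(len(values))                        # key size for the whole value space
    groups = group_by_bit_similarity(values, num_groups)
    group_bits = max(1, ceil(log2(len(groups))))
    # Keep the key at k bits when groups are balanced; widen only if a group overflows.
    member_bits = max(k - group_bits, max(1, ceil(log2(max(len(g) for g in groups)))))
    return {
        value: (g_idx << member_bits) | m_idx
        for g_idx, group in enumerate(groups)
        for m_idx, value in enumerate(group)
    }

# Example: four 4-bit values compress to unique 2-bit keys.
print(assign_keys([0b0000, 0b0001, 0b1110, 0b1111]))
```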


    Low latency synchronization for operation cache and instruction cache fetching and decoding instructions

    Publication Number: US10896044B2

    Publication Date: 2021-01-19

    Application Number: US16014715

    Application Date: 2018-06-21

    Abstract: The techniques described herein provide an instruction fetch and decode unit having an operation cache, with low latency when switching between fetching decoded operations from the operation cache and fetching and decoding instructions with a decode unit. This low latency is achieved through a synchronization mechanism that allows work to flow through both the operation cache path and the instruction cache path until that work must stop to wait for output from the opposite path. Decoupling buffers in the operation cache path and the instruction cache path allow work to be held until it is cleared to proceed. Other improvements, such as a specially configured operation cache tag array that allows multiple hits to be detected in a single cycle, further reduce latency by, for example, increasing the speed at which entries are consumed from a prediction queue that stores predicted address blocks.
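    The tag-array improvement mentioned at the end of this abstract can also be sketched. The model below is purely illustrative: `OpCacheEntry`, the start/end entry layout, and the containment matching rule are assumptions rather than the patented tag-array design. It only shows the effect of detecting every operation cache entry covered by a predicted address block at once, so each prediction-queue entry is consumed in a single step instead of one hit at a time.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative model only; the entry layout and matching rule are assumptions,
# not the patented tag-array design.

@dataclass
class OpCacheEntry:
    start: int        # address of the first instruction the entry covers
    end: int          # address of the last instruction the entry covers (inclusive)
    uops: List[str]   # decoded operations stored in the entry

def lookup_block(tag_array: List[OpCacheEntry],
                 block: Tuple[int, int]) -> List[OpCacheEntry]:
    """One parallel tag compare: every entry that lies inside the predicted
    address block 'hits' in the same cycle, returned in address order."""
    lo, hi = block
    return sorted((e for e in tag_array if lo <= e.start and e.end <= hi),
                  key=lambda e: e.start)

def consume_prediction_queue(tag_array, prediction_queue):
    """Retire each predicted block in a single step when the operation cache
    covers it, rather than spending one cycle per individual tag hit."""
    dispatched = []
    for block in prediction_queue:
        for entry in lookup_block(tag_array, block):
            dispatched.extend(entry.uops)
    return dispatched

# Example: one predicted block spans two op-cache entries; both hit at once.
tags = [OpCacheEntry(0x100, 0x10F, ["mov", "add"]),
        OpCacheEntry(0x110, 0x11F, ["cmp", "jne"])]
print(consume_prediction_queue(tags, [(0x100, 0x11F)]))   # ['mov', 'add', 'cmp', 'jne']
```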
