Retire queue compression
    11.
    Granted Patent

    Publication (Announcement) No.: US12204911B2

    Publication (Announcement) Date: 2025-01-21

    Application No.: US17497572

    Filing Date: 2021-10-08

    Abstract: Systems, apparatuses, and methods for compressing multiple instruction operations together into a single retire queue entry are disclosed. A processor includes at least a scheduler, a retire queue, one or more execution units, and control logic. When the control logic detects a given instruction operation being dispatched by the scheduler to an execution unit, the control logic determines if the given instruction operation meets one or more conditions for being compressed with one or more other instruction operations into a single retire queue entry. If the one or more conditions are met, two or more instruction operations are stored together in a single retire queue entry. By compressing multiple instruction operations together into an individual retire queue entry, the retire queue is used more efficiently, and the processor can speculatively execute more instructions without the retire queue exhausting its supply of available entries.
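    The compression scheme described in the abstract can be sketched in software. This is an illustrative model, not the patented hardware: the eligibility conditions (no fault, no flag writes) and the two-ops-per-entry limit are assumptions chosen for the example.

```python
# Hypothetical software model of retire-queue compression: consecutive
# micro-ops that meet simple eligibility conditions share one queue entry.
from dataclasses import dataclass, field

@dataclass
class RetireEntry:
    ops: list = field(default_factory=list)   # micro-ops sharing this entry

class RetireQueue:
    def __init__(self, capacity, max_ops_per_entry=2):
        self.capacity = capacity
        self.max_ops = max_ops_per_entry
        self.entries = []

    def _compressible(self, op):
        # Assumed conditions: op cannot fault and does not write flags.
        return not op.get("may_fault") and not op.get("writes_flags")

    def dispatch(self, op):
        # Try to pack the op into the newest entry before allocating a new one.
        if (self.entries
                and len(self.entries[-1].ops) < self.max_ops
                and self._compressible(op)
                and all(self._compressible(o) for o in self.entries[-1].ops)):
            self.entries[-1].ops.append(op)
            return True
        if len(self.entries) < self.capacity:
            self.entries.append(RetireEntry([op]))
            return True
        return False  # queue full: dispatch stalls

rq = RetireQueue(capacity=2)
for name in ("add", "sub", "mul"):
    rq.dispatch({"name": name, "may_fault": False, "writes_flags": False})
# Three ops occupy two entries instead of three, freeing a slot for
# further speculative execution.
```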

    Data Reuse Cache
    12.
    Invention Publication
    Status: Pending (Published)

    Publication (Announcement) No.: US20240111674A1

    Publication (Announcement) Date: 2024-04-04

    Application No.: US17955618

    Filing Date: 2022-09-29

    CPC classification number: G06F12/0811 G06F12/0875 G06F12/0884

    Abstract: Data reuse cache techniques are described. In one example, a load instruction is generated by an execution unit of a processor unit. In response to the load instruction, data is loaded by a load-store unit for processing by the execution unit and is also stored to a data reuse cache communicatively coupled between the load-store unit and the execution unit. Upon receipt of a subsequent load instruction for the data from the execution unit, the data is loaded from the data reuse cache for processing by the execution unit.
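    The fill-then-hit flow in the abstract can be modeled with a small dictionary-backed cache. The names (`DataReuseCache`, `load`) and the single-level lookup are assumptions for illustration, not the patent's structure.

```python
# Minimal sketch of a data reuse cache sitting between the load-store
# unit and the execution unit: the first load fills the reuse cache,
# and a repeated load for the same address hits it directly.
class DataReuseCache:
    def __init__(self):
        self.lines = {}              # address -> data

    def lookup(self, addr):
        return self.lines.get(addr)  # None on miss

    def fill(self, addr, data):
        self.lines[addr] = data

MEMORY = {0x100: 42}                 # stand-in for the load-store unit's view

reuse = DataReuseCache()

def load(addr):
    hit = reuse.lookup(addr)
    if hit is not None:
        return hit, "reuse-cache hit"
    data = MEMORY[addr]              # would go through the load-store unit
    reuse.fill(addr, data)           # stored alongside delivery to execution
    return data, "load-store unit"

first = load(0x100)    # misses the reuse cache, fills it
second = load(0x100)   # served by the reuse cache
```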

    SCHEDULER QUEUE ASSIGNMENT
    13.
    Invention Application

    Publication (Announcement) No.: US20220206798A1

    Publication (Announcement) Date: 2022-06-30

    Application No.: US17698955

    Filing Date: 2022-03-18

    Abstract: Systems, apparatuses, and methods for implementing scheduler queue assignment logic are disclosed. A processor includes at least a decode unit, scheduler queue assignment logic, scheduler queues, pickers, and execution units. The assignment logic receives a plurality of operations from a decode unit in each clock cycle. The assignment logic includes a separate logical unit for each different type of operation which is executable by the different execution units of the processor. For each different type of operation, the assignment logic determines which of the possible assignment permutations are valid for assigning different numbers of operations to scheduler queues in a given clock cycle. The assignment logic receives an indication of how many operations to assign in the given clock cycle, and then the assignment logic selects one of the valid assignment permutations for the number of operations specified by the indication.
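    The permutation-filtering step in the abstract can be sketched as follows. This is an illustrative enumeration, not the patented circuit; the per-queue free-slot model and the pick-first selection rule are assumptions.

```python
# Illustrative sketch: enumerate the ways n ops of one type can be
# assigned to scheduler queues, keep only the permutations that respect
# per-queue free slots, then pick one for the op count indicated this cycle.
from itertools import product

def valid_permutations(n_ops, free_slots):
    """All queue-index tuples of length n_ops that fit within free_slots."""
    queues = range(len(free_slots))
    valid = []
    for perm in product(queues, repeat=n_ops):
        needed = [perm.count(q) for q in queues]
        if all(needed[q] <= free_slots[q] for q in queues):
            valid.append(perm)
    return valid

free = [1, 2]                   # queue 0 has 1 free slot, queue 1 has 2
options = valid_permutations(2, free)
# (0, 0) is invalid: queue 0 lacks a second slot; (0, 1) is valid.
chosen = options[0]             # real hardware would apply a fixed priority
```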

    Scheduler queue assignment
    14.
    Granted Patent

    Publication (Announcement) No.: US11294678B2

    Publication (Announcement) Date: 2022-04-05

    Application No.: US15991088

    Filing Date: 2018-05-29

    Abstract: Systems, apparatuses, and methods for implementing scheduler queue assignment logic are disclosed. A processor includes at least a decode unit, scheduler queue assignment logic, scheduler queues, pickers, and execution units. The assignment logic receives a plurality of operations from a decode unit in each clock cycle. The assignment logic includes a separate logical unit for each different type of operation which is executable by the different execution units of the processor. For each different type of operation, the assignment logic determines which of the possible assignment permutations are valid for assigning different numbers of operations to scheduler queues in a given clock cycle. The assignment logic receives an indication of how many operations to assign in the given clock cycle, and then the assignment logic selects one of the valid assignment permutations for the number of operations specified by the indication.

    SHARED RESOURCE ALLOCATION IN A MULTI-THREADED MICROPROCESSOR
    15.

    Publication (Announcement) No.: US20210096920A1

    Publication (Announcement) Date: 2021-04-01

    Application No.: US16585424

    Filing Date: 2019-09-27

    Abstract: An approach is provided for allocating a shared resource to threads in a multi-threaded microprocessor based upon the usefulness of the shared resource to each of the threads. The usefulness of a shared resource to a thread is determined based upon the number of entries in the shared resource that are allocated to the thread and the number of active entries that the thread has in the shared resource. Threads that are allocated a large number of entries but have only a small number of active entries, which indicates a low level of parallelism, can operate efficiently with fewer entries and therefore have their allocation limit in the shared resource reduced.
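    The usefulness heuristic above can be sketched as a simple limit-adjustment function. The thresholds, step size, and floor are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the usefulness heuristic: a thread holding many
# entries but keeping few of them active shows low parallelism, so its
# allocation limit is reduced to free entries for other threads.
def adjust_limit(allocated, active, limit,
                 many_threshold=16, few_threshold=4, step=4, floor=8):
    # "Many allocated, few active" = low usefulness of extra entries.
    low_parallelism = allocated >= many_threshold and active <= few_threshold
    if low_parallelism:
        return max(floor, limit - step)   # shrink, but never below the floor
    return limit

# Thread A: 20 entries allocated, only 2 active -> limit shrinks.
# Thread B: 20 allocated, 18 active -> limit unchanged.
lim_a = adjust_limit(allocated=20, active=2, limit=24)
lim_b = adjust_limit(allocated=20, active=18, limit=24)
```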

    PRECHARGE DISABLE USING PREDECODED ADDRESS
    16.
    Invention Application
    Status: In Force

    Publication (Announcement) No.: US20150058575A1

    Publication (Announcement) Date: 2015-02-26

    Application No.: US13970735

    Filing Date: 2013-08-20

    Inventor: Matthew T. Sobel

    CPC classification number: G06F12/121 G11C7/12 G11C11/419

    Abstract: A memory can be a sum addressed memory (SAM) that receives, for each read access, two address values (e.g. a base address and an offset) having a sum that indicates the entry of the memory to be read (the read entry). A decoder adds the two address values to identify the read entry. Concurrently, a predecode module predecodes the two address values to identify a set of entries (e.g. two different entries) at the memory, whereby the set includes the entry to be read. The predecode module generates a precharge disable signal to terminate precharging at the set of entries which includes the entry to be read. Because the precharge disable signal is based on predecoded address information, it can be generated without waiting for a full decode of the read address.
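    Why the predecoder can run ahead of the full add: the index field of base+offset differs from the carry-free sum of the two index fields by at most the carry propagating in from the low-order bits, so a two-entry candidate set always contains the true read entry. The field widths below are illustrative assumptions.

```python
# Sketch of sum-addressed predecode: the row index of base+offset is
# either the carry-free sum of the index fields or that sum plus one,
# so a two-entry candidate set can be named before the carry is known.
def predecode_candidates(base, offset, index_bits=4, index_shift=2):
    mask = (1 << index_bits) - 1
    b_idx = (base >> index_shift) & mask
    o_idx = (offset >> index_shift) & mask
    no_carry = (b_idx + o_idx) & mask         # carry-in of 0 from low bits
    with_carry = (b_idx + o_idx + 1) & mask   # carry-in of 1 from low bits
    return {no_carry, with_carry}

def true_index(base, offset, index_bits=4, index_shift=2):
    """The fully decoded row index, available only after the full add."""
    mask = (1 << index_bits) - 1
    return ((base + offset) >> index_shift) & mask

base, offset = 0x37, 0x1D
cands = predecode_candidates(base, offset)
# Precharge is disabled only at the candidate entries, which are
# guaranteed to include the entry actually being read.
```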

