Instruction error handling
    Invention Grant

    Publication Number: US11740973B2

    Publication Date: 2023-08-29

    Application Number: US17173093

    Application Date: 2021-02-10

    Abstract: An instruction storage circuit within a processor includes an instruction memory and a memory control circuit. The instruction memory is configured to store instructions of a program for the processor. The memory control circuit is configured to receive a particular instruction from the instruction memory, detect a data integrity error in the particular instruction, and generate and store a corrected version of the particular instruction in an error storage circuit within the instruction memory. A flush of an execution pipeline may be performed in response to the error. In response to a refetch of the particular instruction after the pipeline flush, the instruction storage circuit may be configured to cause the particular instruction to be provided from the error storage circuit to the execution pipeline to permit forward progress of the processor.
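
    A rough behavioral sketch of that flow in Python follows; the class and method names, the parity check, and the per-address error buffer are illustrative assumptions rather than details taken from the patent.

    class InstructionStore:
        """Toy model: instruction memory plus a small error storage buffer."""

        def __init__(self, instructions):
            self.mem = dict(instructions)    # address -> (instruction word, stored parity)
            self.error_store = {}            # corrected words awaiting refetch

        def fetch(self, addr):
            # After a pipeline flush, the refetch is served from the error store,
            # which is what lets the processor make forward progress.
            if addr in self.error_store:
                return self.error_store.pop(addr), False
            word, parity = self.mem[addr]
            if self._parity(word) != parity:        # data integrity error detected
                self.error_store[addr] = self._correct(word)
                return None, True                   # signal: flush the pipeline, then refetch
            return word, False

        @staticmethod
        def _parity(word):
            return bin(word).count("1") & 1

        @staticmethod
        def _correct(word):
            return word                             # placeholder for real ECC correction

    store = InstructionStore({0x0: (0b1011, 1), 0x4: (0b1100, 1)})   # entry 0x4 fails its parity check
    word, needs_flush = store.fetch(0x4)    # error detected
    if needs_flush:
        word, _ = store.fetch(0x4)          # refetch served from the error storage circuit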

    Load-store unit dual tags and replays

    Publication Number: US11983538B2

    Publication Date: 2024-05-14

    Application Number: US17659569

    Application Date: 2022-04-18

    CPC classification number: G06F9/3834 G06F12/0855

    Abstract: Techniques are disclosed relating to a processor load-store unit. In some embodiments, the load-store unit is configured to execute load/store instructions in parallel using first and second pipelines and first and second tag memory arrays. In tag write conflict situations, the load-store unit may arbitrate between the first and second pipelines to ensure the first and second tag memory array contents remain identical. In some embodiments, a data cache tag replay scheme is utilized. In some embodiments, executing load/store instructions in parallel with fills, probes, and store-updates, using separate but identical tag memory arrays, may advantageously improve performance.
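
    A minimal Python sketch of the dual-tag arrangement; the fixed "pipe 0 wins" arbitration and the replay list are illustrative assumptions.

    class DualTagArrays:
        """Two identical tag copies so two load/store pipes can look up tags in parallel."""

        def __init__(self):
            self.tags = [dict(), dict()]     # tags[0] read by pipe 0, tags[1] by pipe 1

        def lookup(self, pipe, set_index, tag):
            # Each pipeline reads only its own copy, so lookups never conflict.
            return self.tags[pipe].get(set_index) == tag

        def write(self, requests):
            # requests: (pipe, set_index, tag) tag writes arriving in the same cycle.
            # On a conflict for the same set, arbitrate (assumed policy: pipe 0 wins)
            # and have the loser replay, so the two copies never diverge.
            replays, written = [], set()
            for pipe, set_index, tag in sorted(requests):
                if set_index in written:
                    replays.append((pipe, set_index, tag))
                    continue
                for copy in self.tags:       # every accepted write updates BOTH copies
                    copy[set_index] = tag
                written.add(set_index)
            return replays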

    Technique for Overriding Memory Attributes
    Invention Publication

    Publication Number: US20230350805A1

    Publication Date: 2023-11-02

    Application Number: US17661427

    Application Date: 2022-04-29

    CPC classification number: G06F12/0837 G06F12/0877 G06F9/30138

    Abstract: Techniques are disclosed relating to an apparatus that includes a plurality of memory access control registers that are programmable with respective address ranges within an address space. The apparatus further includes a memory access circuit configured to receive a command for performing a memory access, the command specifying an address corresponding to a location in a memory circuit. In response to the address being located within an address range of a particular one of the plurality of memory access control registers, the memory access circuit is configured to perform the command using override memory parameters that have been programmed into the particular memory access control register instead of a default set of attributes for the address space.
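
    The range-check-then-override behavior reads naturally as a small lookup; in the Python sketch below, the attribute names and the inclusive base/limit encoding are assumptions made for illustration.

    from dataclasses import dataclass

    @dataclass
    class OverrideRegister:
        base: int
        limit: int
        attributes: dict        # override memory parameters, e.g. {"cacheable": False}

    class MemoryAccessCircuit:
        def __init__(self, default_attributes, override_registers):
            self.defaults = default_attributes
            self.overrides = override_registers

        def attributes_for(self, address):
            # Use the override parameters programmed into a matching register;
            # otherwise fall back to the default attributes for the address space.
            for reg in self.overrides:
                if reg.base <= address <= reg.limit:
                    return reg.attributes
            return self.defaults

    circuit = MemoryAccessCircuit(
        default_attributes={"cacheable": True},
        override_registers=[OverrideRegister(0x8000_0000, 0x8000_FFFF, {"cacheable": False})],
    )
    assert circuit.attributes_for(0x8000_0010) == {"cacheable": False}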

    Managing Multiple Cache Memory Circuit Operations

    Publication Number: US20230342296A1

    Publication Date: 2023-10-26

    Application Number: US17660775

    Application Date: 2022-04-26

    CPC classification number: G06F12/0811 G06F2212/62

    Abstract: A cache memory circuit capable of dealing with multiple conflicting requests to a given cache line is disclosed. In response to receiving an acquire request for the given cache line from a particular lower-level cache memory circuit, the cache memory circuit sends probe requests regarding the given cache line to other lower-level cache memory circuits. In situations where a different lower-level cache memory circuit is simultaneously trying to evict the given cache line while the particular lower-level cache memory circuit is trying to obtain a copy of the cache line, the cache memory circuit performs a series of operations to service both requests and ensure that the particular lower-level cache memory circuit receives a copy of the given cache line that includes any changes in the evicted copy of the given cache line.
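
    A toy Python model of the acquire/evict race described above; the two-level structure, the names, and the string payloads are assumptions for illustration only.

    class LowerLevelCache:
        def __init__(self):
            self.dirty = {}                  # address -> modified data awaiting eviction

        def probe(self, address):
            # If this cache is mid-eviction for the line, hand back its modified copy
            # instead of silently dropping it.
            return self.dirty.pop(address, None)

    class HigherLevelCache:
        def __init__(self, contents):
            self.lines = dict(contents)

        def acquire(self, requester, address, lower_caches):
            # Probe the other lower-level caches; if one was concurrently evicting the
            # line, fold its changes in so the requester receives the up-to-date copy.
            for cache in lower_caches:
                if cache is requester:
                    continue
                evicted = cache.probe(address)
                if evicted is not None:
                    self.lines[address] = evicted
            return self.lines[address]

    shared = HigherLevelCache({0x40: "old"})
    a, b = LowerLevelCache(), LowerLevelCache()
    b.dirty[0x40] = "new"                              # b is evicting its modified copy
    assert shared.acquire(a, 0x40, [a, b]) == "new"    # a still sees b's changes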

    Program thread selection between a plurality of execution pipelines

    Publication Number: US11531550B2

    Publication Date: 2022-12-20

    Application Number: US17173067

    Application Date: 2021-02-10

    Abstract: Techniques are disclosed relating to an apparatus that includes a plurality of execution pipelines including first and second execution pipelines, a shared circuit that is shared by the first and second execution pipelines, and a decode circuit. The first and second execution pipelines are configured to concurrently perform operations for respective instructions. The decode circuit is configured to assign a first program thread to the first execution pipeline and a second program thread to the second execution pipeline. In response to determining that respective instructions from the first and second program threads that utilize the shared circuit are concurrently available for dispatch, the decode circuit is further configured to select between the first program thread and the second program thread.
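
    A small Python sketch of the selection step; the abstract does not say how the decode circuit chooses, so the round-robin tie-break below is purely an assumption.

    class DecodeCircuit:
        """Chooses which thread may dispatch to the circuit shared by both pipelines."""

        def __init__(self):
            self.last_winner = 1             # so thread 0 wins the first tie

        def select(self, thread0_ready, thread1_ready):
            # thread*_ready: that thread has an instruction needing the shared circuit.
            if thread0_ready and thread1_ready:
                winner = 1 - self.last_winner    # both ready: alternate (assumed policy)
            elif thread0_ready:
                winner = 0
            elif thread1_ready:
                winner = 1
            else:
                return None                      # neither thread needs the shared circuit
            self.last_winner = winner
            return winner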

    Queue Circuit For Controlling Access To A Memory Circuit

    Publication Number: US20230350605A1

    Publication Date: 2023-11-02

    Application Number: US17661402

    Application Date: 2022-04-29

    CPC classification number: G06F3/0659 G06F3/0604 G06F3/0679

    Abstract: A queue circuit that manages access to a memory circuit in a computer system includes multiple sets of entries for storing access requests. The entries in one set of entries are assigned to corresponding sources that generate access requests to the memory circuit. The entries in the other set of entries are floating entries that can be used to store requests from any of the sources. Upon receiving a request from a particular source, the queue circuit checks the entry assigned to the particular source and, if the entry is unoccupied, the queue circuit stores the request in the entry. If, however, the entry assigned to the particular source is occupied, the queue circuit stores the request in one of the floating entries.
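
    The dedicated-plus-floating entry scheme maps onto a simple data structure; in the Python sketch below, the retry-when-full behavior is an assumption.

    class RequestQueue:
        def __init__(self, sources, num_floating):
            self.dedicated = {src: None for src in sources}   # one reserved entry per source
            self.floating = [None] * num_floating             # shared by every source

        def enqueue(self, source, request):
            # Prefer the entry reserved for this source; spill to a floating entry
            # only if the reserved one is already occupied.
            if self.dedicated[source] is None:
                self.dedicated[source] = request
                return True
            for i, slot in enumerate(self.floating):
                if slot is None:
                    self.floating[i] = (source, request)
                    return True
            return False      # queue full; the source retries later (assumed behavior)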

    Load-Store Unit Dual Tags and Replays
    Invention Publication

    Publication Number: US20230333856A1

    Publication Date: 2023-10-19

    Application Number: US17659569

    Application Date: 2022-04-18

    CPC classification number: G06F3/0655 G06F3/0673 G06F3/0604

    Abstract: Techniques are disclosed relating to a processor load-store unit. In some embodiments, the load-store unit is configured to execute load/store instructions in parallel using first and second pipelines and first and second tag memory arrays. In tag write conflict situations, the load-store unit may arbitrate between the first and second pipelines to ensure the first and second tag memory array contents remain identical. In some embodiments, a data cache tag replay scheme is utilized. In some embodiments, executing load/store instructions in parallel with fills, probes, and store-updates, using separate but identical tag memory arrays, may advantageously improve performance.

    Circuit for fast interrupt handling

    Publication Number: US11507414B2

    Publication Date: 2022-11-22

    Application Number: US17173108

    Application Date: 2021-02-10

    Abstract: A circuit for fast interrupt handling is disclosed. An apparatus includes a processor circuit having an execution pipeline and a table configured to store a plurality of pointers that correspond to interrupt routines stored in a memory circuit. The apparatus further includes an interrupt redirect circuit configured to receive a plurality of interrupt requests. The interrupt redirect circuit may select a first interrupt request among a plurality of interrupt requests of a first type. The interrupt redirect circuit retrieves a pointer from the table using information associated with the request. Using the pointer, the execution pipeline retrieves a first program instruction from the memory circuit to execute a particular interrupt routine.
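
    A minimal Python sketch of the redirect step; the priority-based choice and the (interrupt id, priority) request format are assumptions, since the abstract only says one request of the first type is selected.

    class InterruptRedirect:
        def __init__(self, vector_table):
            self.vector_table = vector_table     # interrupt id -> interrupt routine address

        def select(self, requests):
            # requests: pending (interrupt_id, priority) requests of the first type.
            # Pick one (here: highest priority, an assumed policy) and return the
            # routine's entry point for the execution pipeline to fetch from.
            if not requests:
                return None
            interrupt_id, _ = max(requests, key=lambda r: r[1])
            return self.vector_table[interrupt_id]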

    Queue circuit for controlling access to a memory circuit

    Publication Number: US12141474B2

    Publication Date: 2024-11-12

    Application Number: US17661402

    Application Date: 2022-04-29

    Abstract: A queue circuit that manages access to a memory circuit in a computer system includes multiple sets of entries for storing access requests. The entries in one set of entries are assigned to corresponding sources that generate access requests to the memory circuit. The entries in the other set of entries are floating entries that can be used to store requests from any of the sources. Upon receiving a request from a particular source, the queue circuit checks the entry assigned to the particular source and, if the entry is unoccupied, the queue circuit stores the request in the entry. If, however, the entry assigned to the particular source is occupied, the queue circuit stores the request in one of the floating entries.

    Managing multiple cache memory circuit operations

    Publication Number: US11960400B2

    Publication Date: 2024-04-16

    Application Number: US17660775

    Application Date: 2022-04-26

    CPC classification number: G06F12/0811 G06F2212/62

    Abstract: A cache memory circuit capable of dealing with multiple conflicting requests to a given cache line is disclosed. In response to receiving an acquire request for the given cache line from a particular lower-level cache memory circuit, the cache memory circuit sends probe requests regarding the given cache line to other lower-level cache memory circuits. In situations where a different lower-level cache memory circuit is simultaneously trying to evict the given cache line while the particular lower-level cache memory circuit is trying to obtain a copy of the cache line, the cache memory circuit performs a series of operations to service both requests and ensure that the particular lower-level cache memory circuit receives a copy of the given cache line that includes any changes in the evicted copy of the given cache line.
