Cache memory that supports tagless addressing

    Publication No.: US12124382B2

    Publication Date: 2024-10-22

    Application No.: US17992443

    Filing Date: 2022-11-22

    Applicant: Rambus Inc.

    Abstract: The disclosed embodiments relate to a computer system with a cache memory that supports tagless addressing. During operation, the system receives a request to perform a memory access, wherein the request includes a virtual address. In response to the request, the system performs an address-translation operation, which translates the virtual address into both a physical address and a cache address. Next, the system uses the physical address to access one or more levels of physically addressed cache memory, wherein accessing a given level of physically addressed cache memory involves performing a tag-checking operation based on the physical address. If the access to the one or more levels of physically addressed cache memory fails to hit on a cache line for the memory access, the system uses the cache address to directly index a cache memory, wherein directly indexing the cache memory does not involve performing a tag-checking operation and eliminates the tag storage overhead.
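    A minimal sketch of the lookup flow this abstract describes. The types and names (TranslationResult, PhysCacheLevel, TaglessCache, memory_access) are illustrative assumptions, not structures from the patent, and the translation and lookup bodies are placeholders.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical output of the combined address-translation step: one virtual
// address yields both a physical address and a cache address.
struct TranslationResult {
    uint64_t physical_addr;
    uint64_t cache_addr;   // directly indexes the tagless cache level
};

// A conventional physically addressed cache level: every lookup performs a
// tag-checking operation against the tag derived from the physical address.
struct PhysCacheLevel {
    std::optional<std::vector<uint8_t>> lookup(uint64_t /*physical_addr*/) {
        // Placeholder: a real level would index a set, compare tags, and
        // return the matching line; here we always miss.
        return std::nullopt;
    }
};

// The tagless cache level: the cache address is the index itself, so no tags
// are stored and no tag comparison is needed.
struct TaglessCache {
    std::vector<std::vector<uint8_t>> lines{1024, std::vector<uint8_t>(64)};
    std::vector<uint8_t>& direct_index(uint64_t cache_addr) {
        return lines[cache_addr % lines.size()];
    }
};

std::vector<uint8_t> memory_access(uint64_t virtual_addr,
                                   std::vector<PhysCacheLevel>& phys_levels,
                                   TaglessCache& tagless) {
    // A single translation produces both addresses (placeholder mapping).
    TranslationResult tr{virtual_addr, virtual_addr >> 6};

    // First try each physically addressed level; each access tag-checks.
    for (auto& level : phys_levels) {
        if (auto line = level.lookup(tr.physical_addr)) {
            return *line;
        }
    }
    // On a miss in all of those levels, index the tagless cache directly,
    // with no tag check and no tag storage overhead.
    return tagless.direct_index(tr.cache_addr);
}
```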

    System for Memory Resident Data Movement Offload and Associated Methods

    Publication No.: US20240289281A1

    Publication Date: 2024-08-29

    Application No.: US18115607

    Filing Date: 2023-02-28

    Applicant: IntelliProp, Inc.

    Abstract: A memory access engine is configured to receive a request comprising a command and to determine whether the command comprises an atomic command. If the command comprises the atomic command, the memory access engine determines whether the command includes a virtual address or a physical address. Based on determining that the command includes a virtual address, the memory access engine translates the virtual address to a corresponding physical address. The memory access engine determines an opcode included in the command and, based on the opcode, adds the command and the physical address to a particular queue of a plurality of queues. While a central processing unit (CPU) performs processing tasks, the memory access engine, based on the command, operates a memory fabric and, after receiving a message from the memory fabric indicating that the command has been completed, updates a status associated with the command to a completed status.
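    A hedged sketch of the command-handling path described above. The Command fields, the per-opcode queue map, and the identity translate() helper are assumptions for illustration; the patent does not specify these structures.

```cpp
#include <cstdint>
#include <queue>
#include <unordered_map>
#include <utility>

// Illustrative command layout; the field names are assumptions, not the
// publication's actual encoding.
struct Command {
    bool     is_atomic;
    bool     addr_is_virtual;
    uint64_t addr;
    uint32_t opcode;
    enum class Status { Pending, Completed } status = Status::Pending;
};

class MemoryAccessEngine {
public:
    void receive(Command& cmd) {
        if (!cmd.is_atomic) {
            return;  // handling of non-atomic commands is not sketched here
        }
        // Resolve the physical address, translating only if the command
        // carries a virtual address.
        uint64_t phys = cmd.addr_is_virtual ? translate(cmd.addr) : cmd.addr;

        // The opcode selects which of the plurality of queues receives the
        // command together with its physical address.
        queues_[cmd.opcode].push({&cmd, phys});
    }

    // Invoked when the memory fabric reports completion; the CPU has been
    // free to run other processing tasks in the meantime.
    void on_fabric_complete(Command& cmd) {
        cmd.status = Command::Status::Completed;
    }

private:
    // Placeholder virtual-to-physical translation (identity mapping).
    uint64_t translate(uint64_t virtual_addr) { return virtual_addr; }

    std::unordered_map<uint32_t,
                       std::queue<std::pair<Command*, uint64_t>>> queues_;
};
```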

    EMBEDDED CONFIGURABLE ENGINE
    Invention Publication

    Publication No.: US20240281377A1

    Publication Date: 2024-08-22

    Application No.: US18443756

    Filing Date: 2024-02-16

    Applicant: XILINX, INC.

    Abstract: Embodiments herein describe a configurable engine that is embedded into the cache hierarchy of a processor. The configurable engine can enable efficient data sharing between the main memory, cache memories, and the core. The configurable engine can perform operations that are done more efficiently in the cache hierarchy. In one embodiment, the configurable engine is controlled (or configured) by software (e.g., the operating system (OS)), adapting to each application domain. That is, the OS can configure the engine according to a data flow profile of a particular application being executed by the processor.
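    A small sketch of what an OS-driven configuration step could look like. The DataFlowProfile fields and the engine settings (prefetch depth, block size, core-to-core forwarding) are purely illustrative assumptions about how a data-flow profile might be mapped onto the engine; they are not taken from the publication.

```cpp
#include <cstddef>

// Illustrative data-flow profile the OS might derive for an application.
struct DataFlowProfile {
    std::size_t typical_block_bytes;   // granularity of the data the app moves
    bool        streaming;             // mostly sequential access pattern?
    bool        producer_consumer;     // data shared between cores?
};

// Sketch of the software-visible configuration interface of an engine
// embedded in the cache hierarchy.
class ConfigurableEngine {
public:
    // The OS maps the application's profile onto engine settings so data
    // movement between main memory, the caches, and the core fits the app.
    void configure(const DataFlowProfile& p) {
        prefetch_depth_        = p.streaming ? 8 : 2;
        block_bytes_           = p.typical_block_bytes;
        forward_between_cores_ = p.producer_consumer;
    }

private:
    int         prefetch_depth_        = 0;
    std::size_t block_bytes_           = 0;
    bool        forward_between_cores_ = false;
};
```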

    Dynamically foldable and unfoldable instruction fetch pipeline

    Publication No.: US12014180B2

    Publication Date: 2024-06-18

    Application No.: US17835409

    Filing Date: 2022-06-08

    Abstract: A dynamically-foldable instruction fetch pipeline receives a first fetch request that includes a fetch virtual address and includes first, second and third sub-pipelines that respectively include a translation lookaside buffer (TLB) that translates the fetch virtual address into a fetch physical address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a set index that selects a set of tag RAM tags for comparison with a tag portion of the fetch physical address to determine a correct way of the instruction cache, and a data RAM of the instruction cache that receives the set index and a way number that together specify a data RAM entry from which to fetch an instruction block. When a control signal indicates a folded mode, the sub-pipelines operate in a parallel manner. When the control signal indicates an unfolded mode, the sub-pipelines operate in a sequential manner.
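    A software sketch of the three sub-pipelines and the folded/unfolded distinction. The index and tag bit positions, cache geometry, and cycle counts are assumptions chosen to keep the example small; the hardware parallelism of folded mode is modeled only as a cycle count.

```cpp
#include <cstdint>
#include <vector>

// Illustrative structures; sizes and bit positions are assumptions.
struct FetchRequest { uint64_t fetch_virtual_addr; };

struct Tlb {
    uint64_t translate(uint64_t va) { return va; }           // placeholder mapping
};
struct TagRam {
    int find_way(uint64_t /*set_index*/, uint64_t /*tag*/) { return 0; }
};
struct DataRam {
    std::vector<uint8_t> read(uint64_t /*set_index*/, int /*way*/) {
        return std::vector<uint8_t>(32);                      // placeholder block
    }
};

struct FetchResult {
    std::vector<uint8_t> instruction_block;
    int                  cycles_taken;
};

FetchResult fetch(const FetchRequest& req, bool folded,
                  Tlb& tlb, TagRam& tags, DataRam& data) {
    // Assume the set index comes from address bits below the page offset,
    // so it is available before translation completes (64-byte lines, 64 sets).
    uint64_t set_index = (req.fetch_virtual_addr >> 6) & 0x3F;

    // Sub-pipeline 1: the TLB translates the fetch virtual address.
    uint64_t fetch_phys = tlb.translate(req.fetch_virtual_addr);
    uint64_t tag        = fetch_phys >> 12;

    // Sub-pipeline 2: the tag RAM compares the physical tag against the
    // selected set's tags to determine the correct way.
    int way = tags.find_way(set_index, tag);

    // Sub-pipeline 3: the data RAM is read with the set index and way number.
    std::vector<uint8_t> block = data.read(set_index, way);

    // Folded mode overlaps the three sub-pipelines in the same cycle;
    // unfolded mode runs them one after another, one cycle each.
    return {block, folded ? 1 : 3};
}
```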