Non-stalling, non-blocking translation lookaside buffer invalidation

    Publication number: US12210459B2

    Publication date: 2025-01-28

    Application number: US18303183

    Filing date: 2023-04-19

    Inventor: Daniel Brad Wu

    Abstract: A method includes receiving, by a memory management unit (MMU) for a processor core, an address translation request from the processor core and providing the address translation request to a translation lookaside buffer (TLB) of the MMU; generating, by matching logic of the TLB, an address transaction that indicates whether a virtual address specified by the address translation request hits the TLB; providing the address transaction to a general purpose transaction buffer; and receiving, by the MMU, an address invalidation request from the processor core and providing the address invalidation request to the TLB. The method also includes, responsive to a virtual address specified by the address invalidation request hitting the TLB, generating, by the matching logic, an invalidation match transaction and providing the invalidation match transaction to either the general purpose transaction buffer or a dedicated invalidation buffer.
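
The buffering scheme described in this abstract can be illustrated with a minimal Python sketch. This is not the patented implementation; all names (`TinyTLB`, `general_buf`, `inval_buf`) are illustrative, and the point is only how lookups and invalidation hits produce transactions routed to the two kinds of buffers without stalling the requester.

```python
from collections import deque

class TinyTLB:
    """Toy model of the transaction-buffering scheme in the abstract.
    All names here are illustrative, not taken from the patent."""
    def __init__(self, use_dedicated_inval_buffer=True):
        self.entries = {}            # virtual page -> physical page
        self.general_buf = deque()   # general purpose transaction buffer
        self.inval_buf = deque()     # dedicated invalidation buffer
        self.use_dedicated = use_dedicated_inval_buffer

    def translate(self, vpage):
        hit = vpage in self.entries
        # matching logic emits an address transaction indicating hit/miss
        self.general_buf.append(("addr", vpage, hit))
        return self.entries.get(vpage)

    def invalidate(self, vpage):
        if vpage in self.entries:    # invalidation request hits the TLB
            del self.entries[vpage]
            txn = ("inval_match", vpage)
            # route the invalidation match transaction to one of the
            # two buffers; the requesting core is never stalled
            dest = self.inval_buf if self.use_dedicated else self.general_buf
            dest.append(txn)
```

Queuing the invalidation match as just another buffered transaction is what lets the invalidation proceed without blocking concurrent translation requests.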

    LINEFILL DELEGATION IN A CACHE HIERARCHY

    Publication number: US20250021480A1

    Publication date: 2025-01-16

    Application number: US18350217

    Filing date: 2023-07-11

    Applicant: Arm Limited

    Abstract: Apparatuses, methods, systems, and chip-containing products are disclosed, which relate to an arrangement comprising a level N cache level and a level M cache level, where M is greater than N. The level N cache level comprises a plurality of linefill slots and performs a slot allocation procedure in response to a lookup miss. The slot allocation procedure comprises allocation of an available slot of the plurality of slots to a pending linefill request generated in response to the lookup miss. In dependence on a linefill slot occupancy criterion, the level N cache level modifies the slot allocation procedure and, when the criterion is fulfilled, instructs a linefill delegation action to the level M cache level.
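
A minimal sketch of the delegation decision may clarify the mechanism. Assumptions here are mine: the occupancy criterion is modeled simply as "all slots busy", and the class and method names are illustrative rather than taken from the filing.

```python
class LevelNCache:
    """Illustrative sketch of linefill delegation: a level-N cache with a
    fixed pool of linefill slots. When the slot occupancy criterion is
    fulfilled (here, modeled as all slots busy), the pending linefill is
    delegated to the level-M cache instead of being allocated a slot."""
    def __init__(self, num_slots=4):
        self.num_slots = num_slots
        self.slots = []        # linefill slots occupied by pending requests
        self.delegated = []    # requests handed down to the level-M cache

    def handle_lookup_miss(self, addr):
        if len(self.slots) >= self.num_slots:
            # occupancy criterion fulfilled: instruct a linefill
            # delegation action to the level-M cache
            self.delegated.append(addr)
            return "delegated"
        self.slots.append(addr)    # allocate an available linefill slot
        return "allocated"
```

The benefit of this shape is that a burst of misses cannot deadlock or serialize on the level-N slot pool; overflow work moves down the hierarchy instead.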

    Latency reduction using stream cache

    Publication number: US12182024B2

    Publication date: 2024-12-31

    Application number: US18508141

    Filing date: 2023-11-13

    Abstract: A system and method for a memory sub-system to reduce latency by prefetching data blocks and preloading them into host memory of a host system. An example system includes a memory device and a processing device, operatively coupled with the memory device, to perform operations including: receiving a request of a host system to access a data block in the memory device; determining that the data block, stored in a first buffer in host memory, is related to a set of one or more data blocks stored at the memory device; and storing the set of one or more data blocks in a second buffer in the host memory, wherein the first buffer is controlled by the host system and the second buffer is controlled by the memory sub-system.
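
The two-buffer flow can be sketched as follows. This is a hypothetical model, not the claimed implementation: how "related" blocks are determined is abstracted into a lookup table, and all names are illustrative.

```python
class MemorySubsystem:
    """Sketch of the stream-cache prefetch flow from the abstract.
    Class and field names are illustrative, not from the patent."""
    def __init__(self, device_blocks, related):
        self.device = device_blocks  # block id -> data on the memory device
        self.related = related       # block id -> ids of related blocks
        self.host_buf = {}           # first buffer, controlled by the host
        self.stream_buf = {}         # second buffer, controlled by the sub-system

    def read(self, block_id):
        if block_id not in self.host_buf:
            self.host_buf[block_id] = self.device[block_id]
        # preload the related set into the second buffer so that later
        # accesses are served from host memory, reducing latency
        for rid in self.related.get(block_id, []):
            self.stream_buf[rid] = self.device[rid]
        return self.host_buf[block_id]
```

The design point is the split of control: the host manages the first buffer as usual, while the sub-system speculatively fills the second buffer it controls, so subsequent related reads never cross back to the device.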

    PREFETCHING BY LOGICAL ADDRESS OWNER IN RESPONSE TO PEER NODE MAPPING REQUEST

    Publication number: US20240411696A1

    Publication date: 2024-12-12

    Application number: US18207364

    Filing date: 2023-06-08

    Abstract: In one embodiment, processing can include: receiving, at a first node, a read I/O to read content C1 of logical address LA, wherein the first node does not own LA; and responsive to determining that a hash table of the first node does not include a matching entry for LA, performing processing including: sending a request from the first node to a second node that owns LA; sending, to the first node, a response including an address hint for LA and additional address hints for other logical addresses, wherein LA and the other logical addresses fall within a logical address subrange associated with a single metadata page used in mapping LA and the other logical addresses to physical storage locations that store their content; the first node adding entries to the hash table for the additional address hints; and obtaining C1 using the address hint for LA.
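
The hint-prefetching idea can be sketched in a few lines of Python. This is a simplification under stated assumptions: the owner's mapping is modeled as a dict, the metadata-page subrange as a fixed-size aligned window, and `handle_read` is a hypothetical helper name.

```python
def handle_read(first_node_hints, owner_map, la, subrange_size=4):
    """Illustrative sketch: on a hash-table miss at the non-owning node,
    the owning node returns address hints for the entire metadata-page
    subrange containing LA, and the requester caches them all.
    owner_map models the owner's logical-to-physical mapping."""
    if la in first_node_hints:
        return first_node_hints[la]   # hit: use the cached address hint
    # miss: the owner replies with hints for the whole aligned subrange
    # covered by the single metadata page that maps LA
    start = (la // subrange_size) * subrange_size
    for addr in range(start, start + subrange_size):
        if addr in owner_map:
            first_node_hints[addr] = owner_map[addr]
    return first_node_hints[la]
```

Because one metadata page already covers the whole subrange, returning hints for its neighbors costs the owner almost nothing, while turning the requester's future reads in that subrange into local hash-table hits.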
