SILENT ACTIVE PAGE MIGRATION FAULTS
    1.
    Invention Application

    Publication Number: US20180307414A1

    Publication Date: 2018-10-25

    Application Number: US15495296

    Filing Date: 2017-04-24

    Abstract: Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the memory management unit (MMU) and the translation request corresponds to a read request, a read operation to the first page is allowed. Otherwise, if the translation request corresponds to a write request, a write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.
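
    As a rough illustration of the mechanism the abstract describes, the sketch below models a migration-pending indication encoded by clearing a PTE's read and write permission bits, with reads still serviced and writes answered with a silent retry. It is a minimal sketch, not the patented implementation; all names (pte_t, translate, the flag layout) are hypothetical.

```c
/* Sketch only: migration-pending state encoded by clearing R/W permission
 * bits; reads are still allowed, writes get a "silent retry" response. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_READ   (1u << 0)
#define PTE_WRITE  (1u << 1)

typedef struct {
    uint64_t pfn;      /* physical frame number of the page          */
    uint32_t flags;    /* permission bits; both cleared => migrating */
} pte_t;

typedef enum { XLATE_OK, XLATE_SILENT_RETRY } xlate_result;

/* Mark a page as "migration pending" by disabling read and write access. */
static void begin_migration(pte_t *pte)
{
    pte->flags &= ~(PTE_READ | PTE_WRITE);
}

static bool migration_pending(const pte_t *pte)
{
    return (pte->flags & (PTE_READ | PTE_WRITE)) == 0;
}

/* MMU-side handling of a translation request against one PTE. */
static xlate_result translate(const pte_t *pte, bool is_write, uint64_t *pfn_out)
{
    if (migration_pending(pte)) {
        if (is_write)
            return XLATE_SILENT_RETRY;  /* client retries later, no fault */
        /* reads remain allowed while the copy is in flight */
    }
    *pfn_out = pte->pfn;
    return XLATE_OK;
}

int main(void)
{
    pte_t pte = { .pfn = 0x1234, .flags = PTE_READ | PTE_WRITE };
    uint64_t pfn;

    begin_migration(&pte);

    printf("read : %s\n",
           translate(&pte, false, &pfn) == XLATE_OK ? "allowed" : "silent retry");
    printf("write: %s\n",
           translate(&pte, true, &pfn) == XLATE_OK ? "allowed" : "silent retry");
    return 0;
}
```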

    TRANSLATE FURTHER MECHANISM
    2.
    Invention Application

    Publication Number: US20180300253A1

    Publication Date: 2018-10-18

    Application Number: US15486745

    Filing Date: 2017-04-13

    Abstract: Systems, apparatuses, and methods for implementing a translate further mechanism are disclosed herein. In one embodiment, a processor detects a hit to a first entry of a page table structure during a first lookup to the page table structure. Responsive to detecting a first indication in the first entry, the processor retrieves a page table entry address from the first entry and uses this address to perform a second lookup to the page table structure. Responsive to not detecting the first indication, the processor retrieves a physical address from the first entry and uses the physical address to access the memory subsystem. In one embodiment, the first indication is a translate further bit being set. In another embodiment, the first indication is a page-directory-entry-as-page-table-entry field that is not activated.
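
    The sketch below illustrates the translate-further idea under assumed structure names: an entry whose translate-further bit is set holds the address of another page table entry to look up, while an entry without the bit holds the final physical address. It is an illustration, not the patent's page table format.

```c
/* Sketch only: follow "translate further" links until a leaf entry
 * yields a physical address. */
#include <stdint.h>
#include <stdio.h>

#define ENTRY_TRANSLATE_FURTHER (1u << 0)

typedef struct pt_entry {
    uint32_t flags;
    union {
        const struct pt_entry *next;  /* valid when TRANSLATE_FURTHER is set */
        uint64_t phys_addr;           /* valid when it is not set            */
    } u;
} pt_entry;

/* Resolve an entry, performing additional lookups while the bit is set. */
static uint64_t resolve(const pt_entry *entry)
{
    while (entry->flags & ENTRY_TRANSLATE_FURTHER)
        entry = entry->u.next;        /* second (third, ...) lookup */
    return entry->u.phys_addr;
}

int main(void)
{
    pt_entry leaf = { .flags = 0, .u.phys_addr = 0xABCD000 };
    pt_entry root = { .flags = ENTRY_TRANSLATE_FURTHER, .u.next = &leaf };

    printf("physical address: 0x%llx\n",
           (unsigned long long)resolve(&root));
    return 0;
}
```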

    Streaming translation lookaside buffer
    3.

    Publication Number: US10417140B2

    Publication Date: 2019-09-17

    Application Number: US15442487

    Filing Date: 2017-02-24

    Abstract: Techniques are provided for using a translation lookaside buffer to provide low latency memory address translations for data streams. Clients of a memory system first prepare the address translation cache hierarchy by requesting that a translation pre-fetch stream be initialized. After the translation pre-fetch stream is initialized, the cache hierarchy returns an acknowledgment of completion to the client, which then begins to access memory. Pre-fetch streams are specified in terms of address ranges and are performed for large contiguous portions of the virtual memory address space.
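
    A minimal sketch of the described handshake, under assumed structure names and an assumed 4 KiB page size: the client asks for a pre-fetch stream over a contiguous virtual address range, receives an acknowledgment once the translations are cached, and only then starts issuing accesses.

```c
/* Sketch only: pre-fetch translations for an address range, acknowledge,
 * then serve low-latency lookups. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define TLB_SLOTS 64u

typedef struct {
    uint64_t vpage[TLB_SLOTS];  /* virtual page numbers with cached entries */
    unsigned count;
} stream_tlb;

/* Pre-fetch translations for [base, base + length) and acknowledge when done. */
static bool init_prefetch_stream(stream_tlb *tlb, uint64_t base, uint64_t length)
{
    uint64_t first = base / PAGE_SIZE;
    uint64_t last  = (base + length - 1) / PAGE_SIZE;

    for (uint64_t vp = first; vp <= last; ++vp) {
        if (tlb->count == TLB_SLOTS)
            return false;               /* range too large for this sketch */
        tlb->vpage[tlb->count++] = vp;  /* stand-in for a page-table walk  */
    }
    return true;                        /* acknowledgment of completion    */
}

static bool lookup(const stream_tlb *tlb, uint64_t vaddr)
{
    uint64_t vp = vaddr / PAGE_SIZE;
    for (unsigned i = 0; i < tlb->count; ++i)
        if (tlb->vpage[i] == vp)
            return true;                /* low-latency hit */
    return false;
}

int main(void)
{
    stream_tlb tlb = { .count = 0 };

    if (init_prefetch_stream(&tlb, 0x100000, 8 * PAGE_SIZE))
        printf("stream ready, 0x101000 %s\n",
               lookup(&tlb, 0x101000) ? "hits" : "misses");
    return 0;
}
```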

    Silent active page migration faults
    4.

    Publication Number: US10365824B2

    Publication Date: 2019-07-30

    Application Number: US15495296

    Filing Date: 2017-04-24

    Abstract: Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the MMU and the translation request corresponds to a read request, a read operation is allowed to the first page. Otherwise, if the translation request corresponds to a write request, a write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.

    Sharing translation lookaside buffer resources for different traffic classes
    5.

    Publication Number: US10114761B2

    Publication Date: 2018-10-30

    Application Number: US15442462

    Filing Date: 2017-02-24

    Abstract: Techniques are provided for managing address translation request traffic where memory access requests can be made with differing quality-of-service levels, which specify latency and/or bandwidth requirements. The techniques involve translation lookaside buffers. Within the translation lookaside buffers, certain resources are reserved for specific quality-of-service levels. More specifically, translation lookaside buffer slots, which store the actual translations, as well as finite state machines in a work queue, are reserved for specific quality-of-service levels. The translation lookaside buffer receives multiple requests for address translation. The translation lookaside buffer selects requests having the highest quality-of-service level for which a finite state machine is available. Because finite state machines are reserved to particular quality-of-service levels, if all finite state machines for a particular quality-of-service level are used by pending translation requests, the translation lookaside buffer does not accept further translation requests for that quality-of-service level.
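
    The sketch below models the admission rule the abstract describes, using assumed pool sizes and names: each quality-of-service level owns a fixed number of work-queue finite state machines, and a request is accepted only while its level still has a free one.

```c
/* Sketch only: per-QoS reservation of work-queue finite state machines. */
#include <stdbool.h>
#include <stdio.h>

enum { QOS_LOW, QOS_MEDIUM, QOS_HIGH, QOS_LEVELS };

typedef struct {
    int reserved[QOS_LEVELS];  /* FSMs reserved per QoS level   */
    int in_use[QOS_LEVELS];    /* FSMs currently busy per level */
} tlb_work_queue;

/* Accept a translation request only if its QoS level still has a free FSM. */
static bool accept_request(tlb_work_queue *wq, int qos)
{
    if (wq->in_use[qos] >= wq->reserved[qos])
        return false;          /* level exhausted: request is not accepted */
    wq->in_use[qos]++;
    return true;
}

static void complete_request(tlb_work_queue *wq, int qos)
{
    wq->in_use[qos]--;
}

int main(void)
{
    tlb_work_queue wq = { .reserved = { 2, 2, 4 }, .in_use = { 0 } };

    printf("high #1: %s\n", accept_request(&wq, QOS_HIGH) ? "accepted" : "refused");
    printf("low  #1: %s\n", accept_request(&wq, QOS_LOW)  ? "accepted" : "refused");
    printf("low  #2: %s\n", accept_request(&wq, QOS_LOW)  ? "accepted" : "refused");
    printf("low  #3: %s\n", accept_request(&wq, QOS_LOW)  ? "accepted" : "refused");

    complete_request(&wq, QOS_LOW);   /* one low-QoS translation finishes */
    printf("low  #4: %s\n", accept_request(&wq, QOS_LOW)  ? "accepted" : "refused");
    return 0;
}
```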

    CACHE VIRTUALIZATION
    6.
    Invention Application

    Publication Number: US20250110893A1

    Publication Date: 2025-04-03

    Application Number: US18478757

    Filing Date: 2023-09-29

    Abstract: An apparatus and method for efficiently performing address translation requests. An integrated circuit includes a system memory that stores address mappings, and the circuitry of one or more clients processes one or more applications and generates address translation requests. A translation lookaside buffer (TLB) stores, in multiple entries, address mappings retrieved from the system memory. The entries of the TLB store address mappings corresponding to different address mapping types and different virtual functions, avoiding searches of multiple other lower-level TLBs that are significantly larger and have larger access latencies. In addition, the TLB is implemented with a relatively small number of entries and uses a fully associative data storage arrangement to further reduce access latencies.
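
    A minimal sketch of the small, fully associative TLB the abstract describes, with assumed field names and sizes: every entry is tagged with a virtual-function identifier and an address-mapping type, so one structure can serve different virtual functions and mapping types without consulting the larger lower-level TLBs.

```c
/* Sketch only: small, fully associative TLB tagged by virtual function
 * and address-mapping type. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 16          /* deliberately small, fully associative */

typedef enum { MAP_GPA_TO_SPA, MAP_GVA_TO_GPA } map_type;

typedef struct {
    bool     valid;
    uint16_t vfid;              /* virtual function the mapping belongs to */
    map_type type;              /* which kind of address mapping it is     */
    uint64_t vpage;
    uint64_t ppage;
} tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];

/* Fully associative lookup: compare every entry's full tag. */
static bool tlb_lookup(uint16_t vfid, map_type type, uint64_t vpage, uint64_t *ppage)
{
    for (int i = 0; i < TLB_ENTRIES; ++i)
        if (tlb[i].valid && tlb[i].vfid == vfid &&
            tlb[i].type == type && tlb[i].vpage == vpage) {
            *ppage = tlb[i].ppage;
            return true;
        }
    return false;               /* would fall back to the larger lower-level TLBs */
}

int main(void)
{
    uint64_t ppage;

    tlb[0] = (tlb_entry){ true, /*vfid*/ 3, MAP_GVA_TO_GPA, 0x42, 0x99 };

    printf("vf3 lookup: %s\n",
           tlb_lookup(3, MAP_GVA_TO_GPA, 0x42, &ppage) ? "hit" : "miss");
    printf("vf7 lookup: %s\n",
           tlb_lookup(7, MAP_GVA_TO_GPA, 0x42, &ppage) ? "hit" : "miss");
    return 0;
}
```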

    CONCURRENT PROCESSING OF MEMORY MAPPING INVALIDATION REQUESTS
    7.

    Publication Number: US20220414016A1

    Publication Date: 2022-12-29

    Application Number: US17355820

    Filing Date: 2021-06-23

    Abstract: A translation lookaside buffer (TLB) receives mapping invalidation requests from one or more sources, such as one or more processing units of a processing system. The TLB includes one or more invalidation processing pipelines, wherein each processing pipeline includes multiple processing stages arranged in a pipeline, so that a given stage executes its processing operations concurrent with other stages of the pipeline executing their processing operations.
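
    The sketch below imitates the pipelined flow in software: invalidation requests advance through a fixed sequence of stages, and in each cycle every occupied stage works on a different request, so several invalidations are in flight at once. The three stage names are illustrative assumptions, not the patent's stages.

```c
/* Sketch only: requests advance one stage per cycle, with all stages
 * active on different requests in the same cycle. */
#include <stdio.h>

enum { STAGE_DECODE, STAGE_MATCH, STAGE_INVALIDATE, NUM_STAGES };

static const char *stage_name[NUM_STAGES] = { "decode", "match", "invalidate" };

typedef struct { int id; } inval_request;

int main(void)
{
    inval_request reqs[] = { {1}, {2}, {3}, {4} };
    const int num_reqs = sizeof reqs / sizeof reqs[0];

    /* In cycle c, request r occupies stage (c - r); every stage runs
     * concurrently on a different request, like a hardware pipeline. */
    for (int cycle = 0; cycle < num_reqs + NUM_STAGES - 1; ++cycle) {
        printf("cycle %d:", cycle);
        for (int r = 0; r < num_reqs; ++r) {
            int stage = cycle - r;
            if (stage >= 0 && stage < NUM_STAGES)
                printf("  req%d->%s", reqs[r].id, stage_name[stage]);
        }
        printf("\n");
    }
    return 0;
}
```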

    FULLY VIRTUALIZED TLBS
    8.
    Invention Application

    Publication Number: US20180307622A1

    Publication Date: 2018-10-25

    Application Number: US15495707

    Filing Date: 2017-04-24

    Abstract: Systems, apparatuses, and methods for implementing a virtualized translation lookaside buffer (TLB) are disclosed herein. In one embodiment, a system includes at least an execution unit and a first TLB. The system supports the execution of a plurality of virtual machines in a virtualization environment. The system detects a translation request generated by a first virtual machine with a first virtual memory identifier (VMID). The translation request is conveyed from the execution unit to the first TLB. The first TLB performs a lookup of its cache using at least a portion of a first virtual address and the first VMID. If the lookup misses in the cache, the first TLB allocates an entry which is addressable by the first virtual address and the first VMID, and the first TLB sends the translation request with the first VMID to a second TLB.
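
    A rough sketch of the VMID-tagged lookup described above, with assumed structure names: the first-level TLB is addressed by a virtual-address portion together with the VMID, and on a miss it allocates an entry for that (address, VMID) pair and forwards the request, still carrying the VMID, toward a second-level TLB.

```c
/* Sketch only: L1 TLB entries addressable by (virtual page, VMID);
 * a miss allocates an entry and forwards the request to the L2 TLB. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define L1_TLB_ENTRIES 8

typedef struct {
    bool     valid;
    bool     filled;          /* translation has come back from the L2 TLB */
    uint16_t vmid;
    uint64_t vpage;
    uint64_t ppage;
} tlb_entry;

static tlb_entry l1_tlb[L1_TLB_ENTRIES];
static unsigned  next_victim;

/* Stand-in for forwarding the request (with its VMID) to the second TLB. */
static void forward_to_l2(uint16_t vmid, uint64_t vpage)
{
    printf("  miss: forwarding vmid=%u vpage=0x%llx to L2 TLB\n",
           (unsigned)vmid, (unsigned long long)vpage);
}

static bool l1_translate(uint16_t vmid, uint64_t vpage, uint64_t *ppage)
{
    for (int i = 0; i < L1_TLB_ENTRIES; ++i)
        if (l1_tlb[i].valid && l1_tlb[i].filled &&
            l1_tlb[i].vmid == vmid && l1_tlb[i].vpage == vpage) {
            *ppage = l1_tlb[i].ppage;
            return true;
        }

    /* Miss: allocate an entry addressable by (vpage, vmid), then go to L2;
     * the entry is filled later when the L2 response arrives. */
    tlb_entry *e = &l1_tlb[next_victim++ % L1_TLB_ENTRIES];
    *e = (tlb_entry){ .valid = true, .filled = false,
                      .vmid = vmid, .vpage = vpage };
    forward_to_l2(vmid, vpage);
    return false;
}

int main(void)
{
    uint64_t ppage;
    printf("VM 5, page 0x10: %s\n",
           l1_translate(5, 0x10, &ppage) ? "hit" : "pending");
    return 0;
}
```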

    STREAMING TRANSLATION LOOKASIDE BUFFER
    9.
    Invention Application

    Publication Number: US20180246816A1

    Publication Date: 2018-08-30

    Application Number: US15442487

    Filing Date: 2017-02-24

    Abstract: Techniques are provided for using a translation lookaside buffer to provide low latency memory address translations for data streams. Clients of a memory system first prepare the address translation cache hierarchy by requesting that a translation pre-fetch stream be initialized. After the translation pre-fetch stream is initialized, the cache hierarchy returns an acknowledgment of completion to the client, which then begins to access memory. Pre-fetch streams are specified in terms of address ranges and are performed for large contiguous portions of the virtual memory address space.

    SHARING TRANSLATION LOOKASIDE BUFFER RESOURCES FOR DIFFERENT TRAFFIC CLASSES
    10.

    Publication Number: US20180246815A1

    Publication Date: 2018-08-30

    Application Number: US15442462

    Filing Date: 2017-02-24

    Abstract: Techniques are provided for managing address translation request traffic where memory access requests can be made with differing quality-of-service levels, which specify latency and/or bandwidth requirements. The techniques involve translation lookaside buffers. Within the translation lookaside buffers, certain resources are reserved for specific quality-of-service levels. More specifically, translation lookaside buffer slots, which store the actual translations, as well as finite state machines in a work queue, are reserved for specific quality-of-service levels. The translation lookaside buffer receives multiple requests for address translation. The translation lookaside buffer selects requests having the highest quality-of-service level for which a finite state machine is available. Because finite state machines are reserved to particular quality-of-service levels, if all finite state machines for a particular quality-of-service level are used by pending translation requests, the translation lookaside buffer does not accept further translation requests for that quality-of-service level.
