SILENT ACTIVE PAGE MIGRATION FAULTS
    Patent Application

    Publication No.: US20180307414A1

    Publication Date: 2018-10-25

    Application No.: US15495296

    Filing Date: 2017-04-24

    Abstract: Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the memory management unit (MMU) and the translation request corresponds to a read request, a read operation to the first page is allowed. Otherwise, if the translation request corresponds to a write request, the write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.
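
    The permission-based encoding and the read/write split lend themselves to a short sketch. Below is a minimal C model of the behavior described in the abstract, assuming illustrative names (pte_t, read_ok, write_ok, XLATE_SILENT_RETRY) that are not taken from the patent; it is a sketch of the idea, not the claimed implementation.

    /* Migration-pending handling: the pending state is encoded by clearing
     * both permissions; reads still translate, writes get a silent retry. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t phys_addr;
        bool     valid;
        bool     read_ok;
        bool     write_ok;
    } pte_t;

    typedef enum { XLATE_OK, XLATE_SILENT_RETRY, XLATE_FAULT } xlate_result_t;

    /* Mark the page as migration-pending by disabling read and write. */
    static void begin_migration(pte_t *pte)
    {
        pte->read_ok  = false;
        pte->write_ok = false;
    }

    /* MMU-side check on a translation request that hit this PTE. */
    static xlate_result_t translate(const pte_t *pte, bool is_write)
    {
        if (!pte->valid)
            return XLATE_FAULT;

        bool migration_pending = !pte->read_ok && !pte->write_ok;
        if (migration_pending)
            return is_write ? XLATE_SILENT_RETRY : XLATE_OK;

        if (is_write && !pte->write_ok)
            return XLATE_FAULT;
        return XLATE_OK;
    }

    int main(void)
    {
        pte_t pte = { .phys_addr = 0x1000, .valid = true,
                      .read_ok = true, .write_ok = true };
        begin_migration(&pte);
        printf("read : %d\n", translate(&pte, false)); /* 0 = XLATE_OK           */
        printf("write: %d\n", translate(&pte, true));  /* 1 = XLATE_SILENT_RETRY */
        return 0;
    }

    The client that receives the silent retry is expected to reissue the write later, once the migration completes and the PTE permissions are restored.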

    Method and apparatus for reformatting page table entries for cache storage
    Granted Patent (in force)

    Publication No.: US09483412B2

    Publication Date: 2016-11-01

    Application No.: US14516192

    Filing Date: 2014-10-16

    Inventor: Wade K. Smith

    IPC Classes: G06F12/10 G06F12/08

    Abstract: A device for and method of storing page table entries in a first cache. A first page table entry is received having a fragment field that contains address information for a requested first page and for at least a second page logically adjacent to the first page. A second page table entry is generated from the first page table entry to be stored with the first page table entry. The second page table entry provides address information for the second page and has a configuration that is compatible with the first cache.
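
    As a rough illustration of how one fragment-coded entry can yield a second, cache-compatible entry for an adjacent page, here is a small C sketch; the fragment encoding (log2 of contiguous pages) and the struct layouts are assumptions made for the example, not the patented formats.

    /* Expand a fragment-coded PTE into an extra per-page entry that matches
     * the translation cache's fixed one-page format. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {            /* PTE as fetched from the page table  */
        uint64_t base_pfn;      /* physical frame of the first page    */
        uint8_t  fragment;      /* log2(pages that are contiguous)     */
    } raw_pte_t;

    typedef struct {            /* entry in the cache's native format  */
        uint64_t vpn;
        uint64_t pfn;
    } cache_pte_t;

    /* Synthesize the entry for a logically adjacent page so it can be
     * stored in the first cache alongside the requested page's entry. */
    static int derive_adjacent(raw_pte_t raw, uint64_t req_vpn,
                               uint64_t offset, cache_pte_t *out)
    {
        if (offset >= (1ull << raw.fragment))
            return -1;                 /* neighbor not covered by fragment */
        out->vpn = req_vpn + offset;
        out->pfn = raw.base_pfn + offset;
        return 0;
    }

    int main(void)
    {
        raw_pte_t raw = { .base_pfn = 0x40000, .fragment = 1 };  /* 2 pages */
        cache_pte_t first = { .vpn = 0x1000, .pfn = raw.base_pfn };
        cache_pte_t second;
        if (derive_adjacent(raw, first.vpn, 1, &second) == 0)
            printf("second entry: vpn=%#llx pfn=%#llx\n",
                   (unsigned long long)second.vpn,
                   (unsigned long long)second.pfn);
        return 0;
    }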

    TRANSLATE FURTHER MECHANISM
    Patent Application

    Publication No.: US20180300253A1

    Publication Date: 2018-10-18

    Application No.: US15486745

    Filing Date: 2017-04-13

    IPC Classes: G06F12/1009 G06F12/1027

    Abstract: Systems, apparatuses, and methods for implementing a translate further mechanism are disclosed herein. In one embodiment, a processor detects a hit to a first entry of a page table structure during a first lookup to the page table structure. If the processor detects a first indication in the first entry, it retrieves a page table entry address from the first entry and uses this address to perform a second lookup to the page table structure. If the first indication is not detected, the processor retrieves a physical address from the first entry and uses that physical address to access the memory subsystem. In one embodiment, the first indication is a translate further bit being set. In another embodiment, the first indication is a "page directory entry as page table entry" field that is not activated.
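
    A compact C sketch of the lookup decision follows; the flat in-memory table, the translate_further flag, and the entry layout are illustrative assumptions rather than the patent's structures.

    /* Walk logic: if the first entry says "translate further", use the PTE
     * address it carries for a second lookup; otherwise use its physical
     * address directly. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        bool     translate_further;  /* set: entry points at another PTE     */
        uint64_t next_pte_addr;      /* used when translate_further is set   */
        uint64_t phys_addr;          /* used when translate_further is clear */
    } pt_entry_t;

    /* Stand-in for reading a page table entry from memory. */
    static pt_entry_t read_entry(const pt_entry_t *table, uint64_t addr)
    {
        return table[addr];
    }

    static uint64_t walk(const pt_entry_t *table, uint64_t first_index)
    {
        pt_entry_t e = read_entry(table, first_index);
        if (e.translate_further)
            e = read_entry(table, e.next_pte_addr);  /* second lookup */
        return e.phys_addr;     /* final address used to access memory */
    }

    int main(void)
    {
        pt_entry_t table[4] = {
            [0] = { .translate_further = true,  .next_pte_addr = 2 },
            [2] = { .translate_further = false, .phys_addr = 0xBEEF000 },
        };
        printf("translated to %#llx\n", (unsigned long long)walk(table, 0));
        return 0;
    }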

    Streaming translation lookaside buffer

    Publication No.: US10417140B2

    Publication Date: 2019-09-17

    Application No.: US15442487

    Filing Date: 2017-02-24

    Abstract: Techniques are provided for using a translation lookaside buffer to provide low-latency memory address translations for data streams. Clients of a memory system first prepare the address translation cache hierarchy by requesting that a translation pre-fetch stream be initialized. After the translation pre-fetch stream is initialized, the cache hierarchy returns an acknowledgment of completion to the client, which then begins to access memory. Pre-fetch streams are specified in terms of address ranges and are performed for large contiguous portions of the virtual memory address space.
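
    The prepare-then-access flow can be sketched in a few lines of C; the small tlb array, the identity-mapped page walk, and the function names are stand-ins for this example only.

    /* Prime a small translation cache for a contiguous virtual range before
     * the client starts streaming accesses; the return acts as the ack. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u
    #define TLB_SLOTS 64u

    typedef struct { uint64_t vpn; uint64_t pfn; int valid; } tlb_entry_t;
    static tlb_entry_t tlb[TLB_SLOTS];

    /* Stand-in page walk: identity map, just for the example. */
    static uint64_t page_walk(uint64_t vpn) { return vpn; }

    static void tlb_fill(uint64_t vpn)
    {
        tlb_entry_t *slot = &tlb[vpn % TLB_SLOTS];
        slot->vpn = vpn;
        slot->pfn = page_walk(vpn);
        slot->valid = 1;
    }

    /* Initialize a pre-fetch stream over [base, base + length): translate
     * every page in the range, then acknowledge completion to the client. */
    static int init_prefetch_stream(uint64_t base, uint64_t length)
    {
        for (uint64_t va = base; va < base + length; va += PAGE_SIZE)
            tlb_fill(va / PAGE_SIZE);
        return 0;
    }

    int main(void)
    {
        if (init_prefetch_stream(0x100000, 16 * PAGE_SIZE) == 0)
            printf("stream ready: client may begin accessing memory\n");
        return 0;
    }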

    Silent active page migration faults

    Publication No.: US10365824B2

    Publication Date: 2019-07-30

    Application No.: US15495296

    Filing Date: 2017-04-24

    Abstract: Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the memory management unit (MMU) and the translation request corresponds to a read request, a read operation to the first page is allowed. Otherwise, if the translation request corresponds to a write request, the write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.

    Sharing translation lookaside buffer resources for different traffic classes

    Publication No.: US10114761B2

    Publication Date: 2018-10-30

    Application No.: US15442462

    Filing Date: 2017-02-24

    Abstract: Techniques are provided for managing address translation request traffic where memory access requests can be made with differing quality-of-service levels, which specify latency and/or bandwidth requirements. The techniques involve translation lookaside buffers. Within the translation lookaside buffers, certain resources are reserved for specific quality-of-service levels. More specifically, translation lookaside buffer slots, which store the actual translations, as well as finite state machines in a work queue, are reserved for specific quality-of-service levels. The translation lookaside buffer receives multiple requests for address translation and selects the requests having the highest quality-of-service level for which a finite state machine is available. Because finite state machines are reserved for particular quality-of-service levels, if all of the finite state machines for a particular quality-of-service level are in use by pending translation requests, the translation lookaside buffer does not accept further translation requests at that quality-of-service level.
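
    A small C sketch of the reservation policy is shown below; the three levels, the per-level finite-state-machine counts, and the priority order are assumptions made for illustration.

    /* Per-QoS reservation of page-walk state machines: a request is only
     * accepted while its level still has a free, reserved state machine. */
    #include <stdbool.h>
    #include <stdio.h>

    enum { QOS_LEVELS = 3 };                         /* 0 = highest priority */
    static int fsm_total[QOS_LEVELS] = { 2, 4, 8 };  /* reserved per level   */
    static int fsm_busy[QOS_LEVELS];

    static bool try_accept(int qos)
    {
        if (fsm_busy[qos] >= fsm_total[qos])
            return false;            /* level saturated: do not accept more */
        fsm_busy[qos]++;
        return true;
    }

    /* Among pending requests, pick the highest QoS level that can still be
     * accepted by a free state machine reserved for that level. */
    static int select_request(const bool pending[QOS_LEVELS])
    {
        for (int qos = 0; qos < QOS_LEVELS; qos++)
            if (pending[qos] && try_accept(qos))
                return qos;
        return -1;                   /* nothing acceptable right now */
    }

    int main(void)
    {
        bool pending[QOS_LEVELS] = { true, true, false };
        printf("accepted request at QoS level %d\n", select_request(pending));
        return 0;
    }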

    METHOD AND APPARATUS FOR REFORMATTING PAGE TABLE ENTRIES FOR CACHE STORAGE
    Patent Application (in force)

    Publication No.: US20150121009A1

    Publication Date: 2015-04-30

    Application No.: US14516192

    Filing Date: 2014-10-16

    Inventor: Wade K. Smith

    IPC Classes: G06F12/10 G06F12/08

    Abstract: A device for and method of storing page table entries in a first cache. A first page table entry is received having a fragment field that contains address information for a requested first page and for at least a second page logically adjacent to the first page. A second page table entry is generated from the first page table entry to be stored with the first page table entry. The second page table entry provides address information for the second page and has a configuration that is compatible with the first cache.

    CONCURRENT PROCESSING OF MEMORY MAPPING INVALIDATION REQUESTS

    Publication No.: US20220414016A1

    Publication Date: 2022-12-29

    Application No.: US17355820

    Filing Date: 2021-06-23

    IPC Classes: G06F12/0891

    Abstract: A translation lookaside buffer (TLB) receives mapping invalidation requests from one or more sources, such as one or more processing units of a processing system. The TLB includes one or more invalidation processing pipelines, wherein each processing pipeline includes multiple processing stages, so that a given stage executes its processing operations concurrently with the other stages of the pipeline executing their processing operations.
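
    As a rough illustration only, the following toy C model shows several invalidation requests in flight at once, one per pipeline stage; the stage count, stage names, and retire behavior are invented for the example and are not taken from the patent.

    /* Toy staged pipeline: each tick advances every occupied stage, so
     * multiple invalidation requests are processed concurrently. */
    #include <stdio.h>

    enum { STAGES = 3 };              /* e.g. decode -> match -> invalidate */
    static int pipeline[STAGES];      /* request id per stage, 0 = empty    */

    /* One cycle: retire the last stage, shift the rest, insert a new one. */
    static void tick(int new_request)
    {
        if (pipeline[STAGES - 1])
            printf("request %d: invalidation complete\n", pipeline[STAGES - 1]);
        for (int s = STAGES - 1; s > 0; s--)
            pipeline[s] = pipeline[s - 1];
        pipeline[0] = new_request;
    }

    int main(void)
    {
        /* Requests from three sources enter back to back; while request 1
         * is in the last stage, requests 2 and 3 occupy earlier stages. */
        for (int id = 1; id <= 6; id++)
            tick(id <= 3 ? id : 0);
        return 0;
    }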

    FULLY VIRTUALIZED TLBS
    Patent Application

    Publication No.: US20180307622A1

    Publication Date: 2018-10-25

    Application No.: US15495707

    Filing Date: 2017-04-24

    Abstract: Systems, apparatuses, and methods for implementing a virtualized translation lookaside buffer (TLB) are disclosed herein. In one embodiment, a system includes at least an execution unit and a first TLB. The system supports the execution of a plurality of virtual machines in a virtualization environment. The system detects a translation request generated by a first virtual machine with a first virtual memory identifier (VMID). The translation request is conveyed from the execution unit to the first TLB. The first TLB performs a lookup of its cache using at least a portion of a first virtual address and the first VMID. If the lookup misses in the cache, the first TLB allocates an entry which is addressable by the first virtual address and the first VMID, and the first TLB sends the translation request with the first VMID to a second TLB.
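
    The VMID-tagged lookup and miss path can be sketched as follows; the direct-mapped first-level array and the forward_to_l2 hook are illustrative assumptions, not the patented design.

    /* First-level TLB addressed by (virtual page, VMID); on a miss, an
     * entry is allocated and the request is forwarded with its VMID. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define L1_SLOTS 16u

    typedef struct {
        bool     valid;
        uint16_t vmid;      /* virtual memory identifier of the VM    */
        uint64_t vpn;       /* virtual page number                    */
        uint64_t pfn;       /* filled in when the second TLB responds */
    } vtlb_entry_t;

    static vtlb_entry_t l1[L1_SLOTS];

    /* Stand-in for sending the request, VMID included, to the second TLB. */
    static void forward_to_l2(uint16_t vmid, uint64_t vpn)
    {
        printf("L1 miss: forwarding vmid=%u vpn=%#llx to the second TLB\n",
               (unsigned)vmid, (unsigned long long)vpn);
    }

    static bool l1_lookup(uint16_t vmid, uint64_t vpn, uint64_t *pfn_out)
    {
        vtlb_entry_t *e = &l1[(vpn ^ vmid) % L1_SLOTS];
        if (e->valid && e->vmid == vmid && e->vpn == vpn) {
            *pfn_out = e->pfn;          /* hit: entry matches VA and VMID */
            return true;
        }
        /* Miss: allocate an entry addressable by (VA, VMID) and forward. */
        e->valid = true;  e->vmid = vmid;  e->vpn = vpn;  e->pfn = 0;
        forward_to_l2(vmid, vpn);
        return false;
    }

    int main(void)
    {
        uint64_t pfn = 0;
        l1_lookup(7, 0x1234, &pfn);   /* first VM's request misses and forwards */
        return 0;
    }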