HIDING PAGE TRANSLATION MISS LATENCY IN PROGRAM MEMORY CONTROLLER BY NEXT PAGE PREFETCH ON CROSSING PAGE BOUNDARY
    Invention application (in force)

    Publication (Announcement) No.: US20160179699A1

    Publication (Announcement) Date: 2016-06-23

    Application No.: US14581487

    Filing Date: 2014-12-23

    Abstract: This invention hides the page miss translation latency for program fetches. Whenever the CPU requests an access that crosses a memory page boundary, the L1I cache controller requests the next page translation along with the current page. This pipelines requests to the μTLB without waiting for the L1I cache controller to begin processing the next page requests, making the second page translation request a deterministic prefetch. The translation information for the second page is stored locally in the L1I cache controller and used when the access crosses the next page boundary.
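The mechanism the abstract describes can be illustrated with a minimal Python sketch. The page size, the μTLB mapping, the lookup counter, and all class and method names here are hypothetical stand-ins, not the patented implementation; the point is only that a boundary-crossing fetch finds its translation already stored locally because the controller pipelined the next-page request earlier:

```python
PAGE_SIZE = 4096  # hypothetical 4 KiB page size


class UTLB:
    """Stand-in for the micro-TLB: counts lookups and maps virtual
    pages to physical pages with an arbitrary fixed offset."""

    def __init__(self):
        self.lookups = 0

    def translate(self, vpage):
        self.lookups += 1
        return vpage + 0x100  # arbitrary physical page number


class L1IController:
    """When a fetch touches a page with no locally stored translation,
    the controller pipelines two uTLB requests back to back: the
    current page and the next page (the deterministic prefetch)."""

    def __init__(self, utlb):
        self.utlb = utlb
        self.local = {}  # translations stored locally in the controller

    def fetch(self, vaddr):
        vpage = vaddr // PAGE_SIZE
        if vpage not in self.local:
            self.local[vpage] = self.utlb.translate(vpage)
            # Deterministic prefetch of the next page's translation.
            self.local[vpage + 1] = self.utlb.translate(vpage + 1)
        return self.local[vpage] * PAGE_SIZE + vaddr % PAGE_SIZE
```

With sequential fetches, the first access costs two pipelined μTLB lookups, and the later access that crosses into the next page costs none, which is where the page miss translation latency is hidden.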

    NON-VOLATILE MEMORY COMPRESSION FOR MEMORY REPAIR

    Publication (Announcement) No.: US20230409435A1

    Publication (Announcement) Date: 2023-12-21

    Application No.: US18239880

    Filing Date: 2023-08-30

    CPC classification number: G06F11/1448 H03M7/30 G06F2201/82

    Abstract: One example includes an integrated circuit (IC). The IC includes non-volatile memory and logic. The logic is configured to receive repair code associated with a memory instance and assign a compression parameter to the repair code based on a configuration of the memory instance. The logic is also configured to compress the repair code based on the compression parameter to produce compressed repair code and to provide compressed repair data that includes the compressed repair code and compression control data that identifies the compression parameter. A non-volatile memory controller is coupled between the non-volatile memory and the logic. The non-volatile memory controller is configured to transfer the compressed repair data to and/or from the non-volatile memory.
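The flow in the abstract (assign a compression parameter from the memory instance's configuration, compress the repair code with it, and bundle control data identifying the parameter) can be sketched in Python. The use of zlib, the level-selection policy, and the field names are all assumptions for illustration, not the IC's actual logic:

```python
import zlib


def compress_repair_data(repair_code: bytes, instance_config: dict) -> dict:
    """Assign a compression parameter based on the memory instance
    configuration, compress the repair code with it, and attach
    control data identifying the parameter."""
    # Hypothetical policy: larger instances get heavier compression.
    level = 9 if instance_config.get("rows", 0) > 1024 else 1
    return {
        "control": {"level": level},                # compression control data
        "code": zlib.compress(repair_code, level),  # compressed repair code
    }


def decompress_repair_data(repair_data: dict) -> bytes:
    """The control data travels with the payload, so the receiving side
    needs no separate knowledge of the instance configuration."""
    return zlib.decompress(repair_data["code"])
```

Bundling the control data with the compressed code is what lets a non-volatile memory controller transfer the repair data in either direction without consulting the memory instance configuration itself.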

    Non-volatile memory compression for memory repair

    Publication (Announcement) No.: US11748202B2

    Publication (Announcement) Date: 2023-09-05

    Application No.: US17901337

    Filing Date: 2022-09-01

    CPC classification number: G06F11/1448 H03M7/30 G06F2201/82

    Abstract: One example includes an integrated circuit (IC). The IC includes non-volatile memory and logic. The logic is configured to receive repair code associated with a memory instance and assign a compression parameter to the repair code based on a configuration of the memory instance. The logic is also configured to compress the repair code based on the compression parameter to produce compressed repair code and to provide compressed repair data that includes the compressed repair code and compression control data that identifies the compression parameter. A non-volatile memory controller is coupled between the non-volatile memory and the logic. The non-volatile memory controller is configured to transfer the compressed repair data to and/or from the non-volatile memory.

    Zero latency prefetching in caches
    Granted invention patent

    Publication (Announcement) No.: US11474944B2

    Publication (Announcement) Date: 2022-10-18

    Application No.: US17151857

    Filing Date: 2021-01-19

    Abstract: This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the second level cache are twice the size of the cache lines in the first level instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of the cache line to the level one instruction cache. On the following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line employing fewer resources than an ordinary prefetch.
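The supply order the abstract describes can be modeled in a few lines of Python. The 64-byte L1 line size is a hypothetical choice (the abstract only fixes the 2:1 ratio), and the function simply follows the abstract's stated case: a miss hitting the upper half of an L2 line yields that half immediately and the lower half on the next cycle:

```python
L1_LINE = 64            # hypothetical L1I line size in bytes
L2_LINE = 2 * L1_LINE   # L2 lines are twice the L1I line size


def service_l1i_miss(addr):
    """Return the L1-line-sized base addresses the L2 cache supplies,
    in cycle order, for an L1I miss at byte address addr."""
    l2_base = addr - addr % L2_LINE       # base of the containing L2 line
    upper_half = l2_base + L1_LINE        # base of its upper half
    if addr >= upper_half:
        # Hit in the upper half: supply it now, then the lower half on
        # the following cycle as the low-cost prefetch.
        return [upper_half, l2_base]
    # Hit in the lower half: only the demanded half in this sketch.
    return [l2_base]
```

Because the second half is already resident in the same L2 line, delivering it on the following cycle costs no extra tag lookup or fill request, which is why the prefetch uses fewer resources than an ordinary one.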

    Non-volatile memory compression for memory repair

    Publication (Announcement) No.: US11436090B2

    Publication (Announcement) Date: 2022-09-06

    Application No.: US17125244

    Filing Date: 2020-12-17

    Abstract: One example includes an integrated circuit (IC). The IC includes non-volatile memory and logic. The logic is configured to receive repair code associated with a memory instance and assign a compression parameter to the repair code based on a configuration of the memory instance. The logic is also configured to compress the repair code based on the compression parameter to produce compressed repair code and to provide compressed repair data that includes the compressed repair code and compression control data that identifies the compression parameter. A non-volatile memory controller is coupled between the non-volatile memory and the logic. The non-volatile memory controller is configured to transfer the compressed repair data to and/or from the non-volatile memory.

    ZERO LATENCY PREFETCHING IN CACHES
    Invention application

    Publication (Announcement) No.: US20210141732A1

    Publication (Announcement) Date: 2021-05-13

    Application No.: US17151857

    Filing Date: 2021-01-19

    Abstract: This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the second level cache are twice the size of the cache lines in the first level instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of the cache line to the level one instruction cache. On the following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line employing fewer resources than an ordinary prefetch.
