ZERO LATENCY PREFETCHING IN CACHES
    12.
    Invention Application

    Publication Number: US20230004498A1

    Publication Date: 2023-01-05

    Application Number: US17940070

    Filing Date: 2022-09-08

    Abstract: This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the level two cache are twice the size of the cache lines in the level one instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of that cache line to the level one instruction cache. On the following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line using fewer resources than an ordinary prefetch.
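The half-line fill behavior in the abstract above can be illustrated with a minimal Python sketch. The 64-byte L1I line size, the `l2_response` helper, and the set-of-line-bases model of the L2 contents are all illustrative assumptions, not details taken from the patent claims:

```python
# Sketch of the two-level cache behavior described in the abstract:
# L2 lines are twice the width of L1I lines, so each L2 line holds a
# lower and an upper L1I-sized half. On an L1I miss that hits the
# upper half of an L2 line, the L2 supplies the upper half first and
# streams the lower half on the following L2 cycle.

L1_LINE = 64            # bytes per L1I line (assumed size)
L2_LINE = 2 * L1_LINE   # L2 lines are twice as wide

def l2_response(request_addr, l2_lines):
    """Return the L1I-line fills the L2 sends, in cycle order.

    l2_lines is a set of L2 line base addresses currently cached.
    """
    l2_base = request_addr - (request_addr % L2_LINE)
    if l2_base not in l2_lines:
        return []                       # L2 miss: fetch from memory instead
    demanded = request_addr - (request_addr % L1_LINE)
    if demanded == l2_base + L1_LINE:   # hit in the upper half of the L2 line:
        return [demanded, l2_base]      # demand fill, then free lower-half prefetch
    return [demanded]                   # lower-half hit: no extra fill

# 0x1040 falls in the upper half of the cached L2 line at 0x1000.
fills = l2_response(0x1040, {0x1000})
```

Only an upper-half hit triggers the second fill, so the lower half arrives one L2 cycle later without the L1I controller ever issuing a separate prefetch request.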

    Integrated circuit with high-speed clock bypass before reset

    Publication Number: US11196424B2

    Publication Date: 2021-12-07

    Application Number: US17078708

    Filing Date: 2020-10-23

    Abstract: An integrated circuit includes: a clock domain having a clock domain input; and clock management logic coupled to the clock domain. The clock management logic includes: a PLL having a reference clock input and a PLL clock output; a divider having a divider input and a divider output, the divider input coupled to the PLL clock output; and bypass logic having a first clock input, a second clock input, a bypass control input, and a bypass logic output, the first clock input coupled to the divider output, the second clock input coupled to the reference clock input, and the bypass logic output coupled to the clock domain input. The bypass logic selectively bypasses the PLL and divider responsive to a bypass control signal triggered by a reset signal. The reset signal also triggers a reset control signal delayed relative to the bypass control signal.
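The bypass path described above can be modeled behaviorally as a two-input mux. The following Python sketch treats clocks as frequencies and the bypass logic as a simple select; the 25 MHz reference, 1 GHz PLL output, and divide-by-4 ratio are assumed example values, not figures from the patent:

```python
# Behavioral sketch of the bypass logic: while reset asserts the bypass
# control signal, the clock domain runs directly on the reference clock
# (so it can clock before the PLL locks); after the delayed reset
# release deasserts bypass, the domain runs on the divided PLL output.

def clock_domain_input(ref_clk_hz, pll_clk_hz, divider_ratio, bypass):
    """Model the bypass logic output feeding the clock domain input."""
    divided = pll_clk_hz / divider_ratio       # divider output
    return ref_clk_hz if bypass else divided   # mux selected by bypass control

# During reset: bypass asserted, domain clocks from the raw reference.
during_reset = clock_domain_input(25e6, 1e9, 4, bypass=True)
# After the delayed reset release: domain clocks from the divided PLL output.
after_reset = clock_domain_input(25e6, 1e9, 4, bypass=False)
```

Delaying the reset-release signal relative to the bypass switch, as the abstract describes, gives the mux output time to settle on a valid clock before the domain leaves reset.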

    Zero latency prefetching in caches
    14.
    Invention Grant

    Publication Number: US10929296B2

    Publication Date: 2021-02-23

    Application Number: US15730874

    Filing Date: 2017-10-12

    Abstract: This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the level two cache are twice the size of the cache lines in the level one instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of that cache line to the level one instruction cache. On the following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line using fewer resources than an ordinary prefetch.

    ZERO LATENCY PREFETCHING IN CACHES
    16.
    Invention Application

    Publication Number: US20190114263A1

    Publication Date: 2019-04-18

    Application Number: US15730874

    Filing Date: 2017-10-12

    Abstract: This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the level two cache are twice the size of the cache lines in the level one instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of that cache line to the level one instruction cache. On the following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line using fewer resources than an ordinary prefetch.

    Hiding page translation miss latency in program memory controller by selective page miss translation prefetch
    19.
    Invention Grant (in force)

    Publication Number: US09514059B2

    Publication Date: 2016-12-06

    Application Number: US14579654

    Filing Date: 2014-12-22

    Abstract: This invention hides the page miss translation latency for program fetches. In this invention, whenever an access is requested by the CPU, the L1I cache controller does an a priori lookup of whether the virtual address plus the fetch packet count of expected program fetches crosses a page boundary. If the access crosses a page boundary, the L1I cache controller requests a second page translation along with the first page. This pipelines requests to the μTLB without waiting for the L1I cache controller to begin processing the second page requests. This becomes a deterministic prefetch of the second page translation request. The translation information for the second page is stored locally in the L1I cache controller and used when the access crosses the page boundary.
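The a priori boundary check described above can be sketched in a few lines of Python. The 4 KiB page size, 16-byte fetch-packet width, and the `pages_to_translate` helper are illustrative assumptions, not parameters stated in the patent:

```python
# Sketch of the selective page-miss translation prefetch: if the
# requested virtual address plus the span of the expected fetch
# packets crosses a page boundary, the controller requests the second
# page's translation up front, alongside the first, so the uTLB lookup
# is already pipelined when execution reaches the boundary.

PAGE_SIZE = 4096        # assumed page size (4 KiB)
PACKET_BYTES = 16       # assumed fetch-packet width

def pages_to_translate(virt_addr, fetch_packets):
    """Return the page base addresses whose translations are requested."""
    first = virt_addr // PAGE_SIZE
    last = (virt_addr + fetch_packets * PACKET_BYTES - 1) // PAGE_SIZE
    if last != first:   # expected fetch span crosses a page boundary:
        return [first * PAGE_SIZE, last * PAGE_SIZE]  # prefetch 2nd translation
    return [first * PAGE_SIZE]
```

Because the check is made at request time from known quantities (address and expected packet count), the second translation request is deterministic rather than speculative, which is what lets the controller hide the miss latency.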

