CACHE DIRECTED SEQUENTIAL PREFETCH
    61.
    Invention Application (Expired)

    Publication No.: US20100030973A1

    Publication Date: 2010-02-04

    Application No.: US12185219

    Filing Date: 2008-08-04

    IPC Class: G06F12/00

    CPC Class: G06F12/0862 G06F2212/6026

    Abstract: A technique for performing stream detection and prefetching within a cache memory simplifies stream detection and prefetching. A bit in a cache directory or cache entry indicates that a cache line has not been accessed since being prefetched and another bit indicates the direction of a stream associated with the cache line. A next cache line is prefetched when a previously prefetched cache line is accessed, so that the cache always attempts to prefetch one cache line ahead of accesses, in the direction of a detected stream. Stream detection is performed in response to load misses tracked in the load miss queue (LMQ). The LMQ stores an offset indicating a first miss at the offset within a cache line. A next miss to the line sets a direction bit based on the difference between the first and second offsets and causes prefetch of the next line for the stream.

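    The C sketch below illustrates the flow described in the abstract: a load-miss queue records the offset of a first miss, a second miss to the same line infers the stream direction, and an access to a line still marked as prefetched pulls in the next line in that direction. The structure names, table sizes, and 128-byte line size are illustrative assumptions, not details taken from the patent.

        /* Sketch: LMQ-based stream detection plus one-line-ahead prefetch.
           Sizes and names are assumptions (see note above). */
        #include <stdio.h>
        #include <stdbool.h>
        #include <stdint.h>

        #define LMQ_SIZE   8
        #define LINE_BYTES 128

        typedef struct {            /* per-line cache directory state */
            uint64_t line_addr;
            bool     valid;
            bool     prefetched;    /* set by prefetch, cleared on first demand access */
            int      direction;    /* +1 ascending stream, -1 descending */
        } dir_entry;

        typedef struct {            /* load-miss-queue entry */
            uint64_t line_addr;
            unsigned first_offset;  /* byte offset of the first miss in the line */
            bool     valid;
        } lmq_entry;

        static lmq_entry lmq[LMQ_SIZE];

        static void prefetch_line(uint64_t line_addr, int direction) {
            printf("prefetch line 0x%llx (dir %+d)\n",
                   (unsigned long long)line_addr, direction);
        }

        /* First miss to a line records its offset; a second miss infers the
           stream direction and prefetches the next line for the stream. */
        static void on_load_miss(uint64_t addr) {
            uint64_t line = addr & ~(uint64_t)(LINE_BYTES - 1);
            unsigned off  = (unsigned)(addr & (LINE_BYTES - 1));
            for (int i = 0; i < LMQ_SIZE; i++) {
                if (lmq[i].valid && lmq[i].line_addr == line) {
                    int dir = (off >= lmq[i].first_offset) ? +1 : -1;
                    prefetch_line(line + (int64_t)dir * LINE_BYTES, dir);
                    lmq[i].valid = false;
                    return;
                }
            }
            for (int i = 0; i < LMQ_SIZE; i++) {
                if (!lmq[i].valid) {
                    lmq[i] = (lmq_entry){ line, off, true };
                    return;
                }
            }
        }

        /* A hit on a still-untouched prefetched line stays one line ahead. */
        static void on_cache_hit(dir_entry *e) {
            if (e->prefetched) {
                e->prefetched = false;
                prefetch_line(e->line_addr + (int64_t)e->direction * LINE_BYTES,
                              e->direction);
            }
        }

        int main(void) {
            on_load_miss(0x10008);                      /* first miss, offset 8   */
            on_load_miss(0x10040);                      /* second miss: ascending */
            dir_entry e = { 0x10080, true, true, +1 };  /* line arrived by prefetch */
            on_cache_hit(&e);                           /* access -> fetch 0x10100 */
            return 0;
        }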

    Varying an amount of data retrieved from memory based upon an instruction hint
    63.
    Invention Grant (Expired)

    Publication No.: US08266381B2

    Publication Date: 2012-09-11

    Application No.: US12024170

    Filing Date: 2008-02-01

    IPC Class: G06F12/08

    Abstract: In at least one embodiment, a processor detects during execution of program code whether a load instruction within the program code is associated with a hint. In response to detecting that the load instruction is not associated with a hint, the processor retrieves a full cache line of data from the memory hierarchy into the processor in response to the load instruction. In response to detecting that the load instruction is associated with a hint, the processor retrieves a partial cache line of data into the processor from the memory hierarchy in response to the load instruction.

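    A minimal C sketch of the behavior in the abstract, assuming the hint is carried as a single flag on the decoded load and that the partial transfer is one 32-byte sector of a 128-byte line; both granularities are assumptions made for illustration only.

        /* Sketch: a load without a hint fills a full line; a hinted load
           fills only one sector (sizes are assumptions, see note above). */
        #include <stdio.h>
        #include <stdbool.h>
        #include <stdint.h>
        #include <string.h>

        #define LINE_BYTES   128
        #define SECTOR_BYTES 32

        typedef struct {
            uint64_t ea;             /* effective address of the load       */
            bool     partial_hint;   /* set when the load carries the hint  */
        } load_op;

        static uint8_t mem[1 << 16]; /* stand-in for the memory hierarchy */

        /* Copies either the whole line or only the hinted sector into 'fill'
           and returns the number of bytes retrieved. */
        static size_t fill_from_memory(const load_op *ld, uint8_t *fill) {
            if (!ld->partial_hint) {
                uint64_t line_base = ld->ea & ~(uint64_t)(LINE_BYTES - 1);
                memcpy(fill, &mem[line_base], LINE_BYTES);
                return LINE_BYTES;
            }
            uint64_t sector_base = ld->ea & ~(uint64_t)(SECTOR_BYTES - 1);
            memcpy(fill, &mem[sector_base], SECTOR_BYTES);
            return SECTOR_BYTES;
        }

        int main(void) {
            uint8_t fill[LINE_BYTES];
            load_op plain  = { 0x1040, false };
            load_op hinted = { 0x1040, true  };
            printf("no hint: %zu bytes\n", fill_from_memory(&plain,  fill));
            printf("hint   : %zu bytes\n", fill_from_memory(&hinted, fill));
            return 0;
        }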

    Jump starting prefetch streams across page boundaries
    64.
    Invention Grant (In Force)

    Publication No.: US08140768B2

    Publication Date: 2012-03-20

    Application No.: US12024632

    Filing Date: 2008-02-01

    IPC Class: G06F12/08

    Abstract: A method, processor, and data processing system for enabling utilization of a single prefetch stream to access data across a memory page boundary. A prefetch engine includes an active streams table in which information for one or more scheduled prefetch streams is stored. The prefetch engine also includes a victim table for storing a previously active stream whose next prefetch crosses a memory page boundary. The scheduling logic issues a prefetch request with a real address to fetch data from the lower level memory. Then, responsive to detecting that the real address of the stream's next sequential prefetch crosses the memory page boundary, the prefetch engine determines when the first prefetch stream can continue across the page boundary of the first memory page (via an effective address comparison). The PE automatically reinserts the first prefetch stream into the active stream table to jump start prefetching across the page boundary.

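    The C sketch below models the parking-and-revival flow from the abstract: a stream whose next prefetch would cross a real-page boundary is moved to a victim table, and a later demand access whose effective address matches the parked stream jump-starts it with the new translation. The table sizes, 4 KiB page, and function names are assumptions made to keep the sketch concrete.

        /* Sketch: park a stream whose next prefetch crosses a page; revive it
           on an effective-address match with the new translation. */
        #include <stdio.h>
        #include <stdbool.h>
        #include <stdint.h>

        #define LINE_BYTES 128
        #define PAGE_BYTES 4096
        #define N_STREAMS  4

        typedef struct {
            uint64_t next_ea;            /* effective address of next prefetch */
            uint64_t next_ra;            /* real address of next prefetch      */
            bool     valid;
        } stream;

        static stream active[N_STREAMS];
        static stream victim[N_STREAMS];

        static bool crosses_page(uint64_t next_ra) {
            /* true when the next line lies in a different real page */
            return (next_ra / PAGE_BYTES) != ((next_ra - LINE_BYTES) / PAGE_BYTES);
        }

        /* Advance an active stream by one line; park it if it would cross a page. */
        static void advance_stream(stream *s) {
            s->next_ea += LINE_BYTES;
            s->next_ra += LINE_BYTES;
            if (crosses_page(s->next_ra)) {
                for (int i = 0; i < N_STREAMS; i++)
                    if (!victim[i].valid) { victim[i] = *s; break; }
                s->valid = false;
                printf("stream parked, waiting for ea 0x%llx\n",
                       (unsigned long long)s->next_ea);
            } else {
                printf("prefetch ra 0x%llx\n", (unsigned long long)s->next_ra);
            }
        }

        /* A demand access whose effective address matches a parked stream
           jump-starts it in the new page using the new translation. */
        static void on_demand_access(uint64_t ea, uint64_t ra) {
            for (int i = 0; i < N_STREAMS; i++) {
                if (victim[i].valid && victim[i].next_ea == ea) {
                    victim[i].valid = false;
                    for (int j = 0; j < N_STREAMS; j++) {
                        if (!active[j].valid) {
                            active[j] = (stream){ ea, ra, true };
                            printf("stream revived at ra 0x%llx\n",
                                   (unsigned long long)ra);
                            advance_stream(&active[j]);
                            return;
                        }
                    }
                }
            }
        }

        int main(void) {
            active[0] = (stream){ 0x20F80, 0x7F80, true };  /* last line of a page */
            advance_stream(&active[0]);                     /* crosses -> parked   */
            on_demand_access(0x21000, 0xA000);              /* new page: revived   */
            return 0;
        }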

    METHOD AND SYSTEM FOR SOURCING DIFFERING AMOUNTS OF PREFETCH DATA IN RESPONSE TO DATA PREFETCH REQUESTS
    65.
    Invention Application (Expired)

    Publication No.: US20090198965A1

    Publication Date: 2009-08-06

    Application No.: US12024165

    Filing Date: 2008-02-01

    IPC Class: G06F9/312

    Abstract: According to a method of data processing, a memory controller receives a prefetch load request from a processor core of a data processing system. The prefetch load request specifies a requested line of data. In response to receipt of the prefetch load request, the memory controller determines by reference to a stream of demand requests how much data is to be supplied to the processor core in response to the prefetch load request. In response to the memory controller determining to provide less than all of the requested line of data, the memory controller provides less than all of the requested line of data to the processor core.

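    A minimal C sketch of the decision point in the abstract: the memory controller consults the recent demand-request stream before deciding how much of a prefetched line to source. The specific heuristic used here (return only as many sectors as were actually touched in the previous demand line) is an assumed policy chosen for illustration; the patent does not prescribe it.

        /* Sketch: decide how many sectors of a prefetched line to source,
           based on which sectors of the last demand line were touched. */
        #include <stdio.h>
        #include <stdint.h>

        #define LINE_BYTES   128
        #define SECTOR_BYTES 32
        #define N_SECTORS    (LINE_BYTES / SECTOR_BYTES)

        static uint8_t  demand_mask;      /* sectors of the last demand line touched */
        static uint64_t last_demand_line;

        static void on_demand_request(uint64_t addr) {
            uint64_t line = addr & ~(uint64_t)(LINE_BYTES - 1);
            if (line != last_demand_line) { last_demand_line = line; demand_mask = 0; }
            demand_mask |= (uint8_t)(1u << ((addr % LINE_BYTES) / SECTOR_BYTES));
        }

        /* Returns how many bytes the controller chooses to source for a prefetch. */
        static unsigned on_prefetch_request(uint64_t line_addr) {
            unsigned sectors = 0;
            for (unsigned i = 0; i < N_SECTORS; i++)
                if (demand_mask & (1u << i)) sectors++;
            if (sectors == 0 || sectors == N_SECTORS)
                return LINE_BYTES;                      /* no evidence: full line */
            printf("prefetch 0x%llx served with %u of %u sectors\n",
                   (unsigned long long)line_addr, sectors, (unsigned)N_SECTORS);
            return sectors * SECTOR_BYTES;
        }

        int main(void) {
            on_demand_request(0x4000);                  /* only the first sector  */
            on_demand_request(0x4008);                  /* of line 0x4000 is used */
            printf("controller returned %u bytes\n", on_prefetch_request(0x4080));
            return 0;
        }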

    Jump Starting Prefetch Streams Across Page Boundaries
    66.
    Invention Application (In Force)

    Publication No.: US20090198909A1

    Publication Date: 2009-08-06

    Application No.: US12024632

    Filing Date: 2008-02-01

    IPC Class: G06F12/08

    Abstract: A method, processor, and data processing system for enabling utilization of a single prefetch stream to access data across a memory page boundary. A prefetch engine includes an active streams table in which information for one or more scheduled prefetch streams is stored. The prefetch engine also includes a victim table for storing a previously active stream whose next prefetch crosses a memory page boundary. The scheduling logic issues a prefetch request with a real address to fetch data from the lower level memory. Then, responsive to detecting that the real address of the stream's next sequential prefetch crosses the memory page boundary, the prefetch engine determines when the first prefetch stream can continue across the page boundary of the first memory page (via an effective address comparison). The PE automatically reinserts the first prefetch stream into the active stream table to jump start prefetching across the page boundary.


    Techniques for Multi-Level Indirect Data Prefetching
    67.
    Invention Application (In Force)

    Publication No.: US20090198906A1

    Publication Date: 2009-08-06

    Application No.: US12024260

    Filing Date: 2008-02-01

    IPC Class: G06F12/00

    Abstract: A technique for performing data prefetching using multi-level indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content that is included in a first data block (e.g., a first cache line of a memory) at the first memory address is then fetched. A second memory address is then determined based on the content at the first memory address. Content that is included in a second data block (e.g., a second cache line) at the second memory address is then fetched (e.g., from the memory or another memory). A third memory address is then determined based on the content at the second memory address. Finally, a third data block (e.g., a third cache line) that includes another pointer or data at the third memory address is fetched (e.g., from the memory or another memory).

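    A minimal C sketch of the two-level pointer walk in the abstract: the prefetch address names a pointer, whose target holds another pointer, whose target is the block that is finally fetched. The touch() stand-in for a hardware prefetch and the fixed two-level depth are assumptions.

        /* Sketch: two-level indirect walk ending in a prefetch of the final block. */
        #include <stdio.h>

        static void touch(const void *p) {              /* stands in for a prefetch */
            printf("prefetch block at %p\n", p);
        }

        /* first address -> pointer -> pointer -> data block */
        static void indirect_prefetch_2level(void **first_addr) {
            void **second_addr = (void **)*first_addr;  /* content of first block  */
            void  *third_addr  = *second_addr;          /* content of second block */
            touch(third_addr);                          /* fetch the final block   */
        }

        int main(void) {
            int payload[16] = { 42 };                   /* the data we really want */
            void *level2 = payload;                     /* second-level pointer    */
            void *level1 = &level2;                     /* first-level pointer     */
            indirect_prefetch_2level(&level1);
            printf("payload[0] = %d\n", payload[0]);
            return 0;
        }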

    Techniques for Prediction-Based Indirect Data Prefetching
    68.
    Invention Application (In Force)

    Publication No.: US20090198905A1

    Publication Date: 2009-08-06

    Application No.: US12024248

    Filing Date: 2008-02-01

    IPC Class: G06F12/02

    Abstract: A technique for data prefetching using indirect addressing includes monitoring data pointer values, associated with an array, in an access stream to a memory. The technique determines whether a pattern exists in the data pointer values. A prefetch table is then populated with respective entries that correspond to respective array address/data pointer pairs based on a predicted pattern in the data pointer values. Respective data blocks (e.g., respective cache lines) are then prefetched (e.g., from the memory or another memory) based on the respective entries in the prefetch table.

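    The C sketch below illustrates the prediction step in the abstract: observed (array address, pointer value) pairs are used to infer a pattern, a small prefetch table is populated with predicted pairs, and the predicted targets are prefetched. The simple constant-stride predictor and the table layout are illustrative assumptions.

        /* Sketch: infer a stride from two observed (array address, pointer)
           samples, fill a small prediction table, prefetch the targets. */
        #include <stdio.h>
        #include <stdint.h>

        #define TABLE_SIZE 4

        typedef struct {
            uintptr_t array_addr;     /* address of the array slot             */
            uintptr_t predicted_ptr;  /* predicted pointer stored in that slot */
        } prefetch_entry;

        static prefetch_entry table_[TABLE_SIZE];

        static void touch(uintptr_t p) {                /* stands in for a prefetch */
            printf("prefetch block at %p\n", (void *)p);
        }

        static void populate_and_prefetch(uintptr_t a0, uintptr_t p0,
                                          uintptr_t a1, uintptr_t p1) {
            intptr_t a_stride = (intptr_t)(a1 - a0);    /* assumed constant strides */
            intptr_t p_stride = (intptr_t)(p1 - p0);
            for (int i = 0; i < TABLE_SIZE; i++) {
                table_[i].array_addr    = a1 + (uintptr_t)((i + 1) * a_stride);
                table_[i].predicted_ptr = p1 + (uintptr_t)((i + 1) * p_stride);
                touch(table_[i].predicted_ptr);
            }
        }

        int main(void) {
            static int blocks[8][32];                   /* regularly spaced targets */
            int *ptrs[8];
            for (int i = 0; i < 8; i++) ptrs[i] = blocks[i];

            /* Two demand accesses reveal the pattern; the rest are predicted. */
            populate_and_prefetch((uintptr_t)&ptrs[0], (uintptr_t)ptrs[0],
                                  (uintptr_t)&ptrs[1], (uintptr_t)ptrs[1]);
            return 0;
        }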

    Techniques for Data Prefetching Using Indirect Addressing with Offset
    69.
    Invention Application (In Force)

    Publication No.: US20090198904A1

    Publication Date: 2009-08-06

    Application No.: US12024246

    Filing Date: 2008-02-01

    IPC Class: G06F12/08

    Abstract: A technique for performing data prefetching using indirect addressing includes determining a first memory address of a pointer associated with a data prefetch instruction. Content that is included in a first data block (e.g., a first cache line) of a memory at the first memory address is then fetched. An offset is then added to the content of the memory at the first memory address to provide a first offset memory address. A second memory address is then determined based on the first offset memory address. A second data block (e.g., a second cache line) that includes data at the second memory address is then fetched (e.g., from the memory or another memory). A data prefetch instruction may be indicated by a unique operational code (opcode), a unique extended opcode, or a field (including one or more bits) in an instruction.

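    A minimal C sketch of indirect prefetching with an offset, as described in the abstract: the pointer named by the prefetch request is loaded, an offset is added to the loaded value, and the block at the resulting address is fetched. The offsetof-based example offset and the touch() stand-in for a hardware prefetch are assumptions.

        /* Sketch: load a pointer, add an offset to it, prefetch the block
           at the resulting address. */
        #include <stdio.h>
        #include <stddef.h>

        static void touch(const void *p) {              /* stands in for a prefetch */
            printf("prefetch block at %p\n", p);
        }

        /* first_addr holds a pointer; the target is *first_addr + offset. */
        static void indirect_prefetch_with_offset(void **first_addr, ptrdiff_t offset) {
            unsigned char *base   = (unsigned char *)*first_addr;  /* loaded pointer     */
            unsigned char *target = base + offset;                 /* offset applied     */
            touch(target);                                         /* fetch second block */
        }

        struct record { char header[64]; int hot_field; };

        int main(void) {
            struct record rec;
            void *ptr_to_rec = &rec;                    /* the pointer being chased */
            /* Prefetch only the field the code will actually use. */
            indirect_prefetch_with_offset(&ptr_to_rec,
                                          (ptrdiff_t)offsetof(struct record, hot_field));
            return 0;
        }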

    Assigning memory to on-chip coherence domains
    70.
    Invention Grant (In Force)

    Publication No.: US08612691B2

    Publication Date: 2013-12-17

    Application No.: US13454814

    Filing Date: 2012-04-24

    IPC Class: G06F12/00

    CPC Class: G06F12/0831

    Abstract: A mechanism for assigning memory to on-chip cache coherence domains assigns caches within a processing unit to coherence domains. The mechanism assigns chunks of memory to the coherence domains. The mechanism monitors applications running on cores within the processing unit to identify needs of the applications. The mechanism may then reassign memory chunks to the cache coherence domains based on the needs of the applications running in the coherence domains. When a memory controller receives a cache miss, the memory controller may look up the address in a lookup table that maps memory chunks to cache coherence domains. Snoop requests are sent to caches within the coherence domain. If the cache line is found in a cache within the coherence domain, the cache containing it returns the line to the originating cache, either directly or through the memory controller.

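    A minimal C sketch of the lookup described in the abstract: memory is divided into chunks, each chunk is mapped to one on-chip coherence domain in a table, and a miss consults the table so snooping stays within that domain. The chunk size, domain count, and the reassignment trigger are illustrative assumptions.

        /* Sketch: a table maps memory chunks to coherence domains; a miss
           looks up the owning domain before snooping. */
        #include <stdio.h>
        #include <stdint.h>

        #define CHUNK_BYTES (1u << 20)   /* 1 MiB chunks (assumed granularity) */
        #define N_CHUNKS    64
        #define N_DOMAINS   4

        static uint8_t chunk_to_domain[N_CHUNKS];       /* the lookup table */

        static void assign_chunk(unsigned chunk, unsigned domain) {
            chunk_to_domain[chunk] = (uint8_t)domain;
        }

        /* Memory-controller path: find the owning domain for the missed address
           and restrict snooping to that domain. */
        static void on_cache_miss(uint64_t addr) {
            unsigned chunk  = (unsigned)(addr / CHUNK_BYTES) % N_CHUNKS;
            unsigned domain = chunk_to_domain[chunk];
            printf("miss 0x%llx -> chunk %u -> snoop coherence domain %u\n",
                   (unsigned long long)addr, chunk, domain);
        }

        int main(void) {
            for (unsigned c = 0; c < N_CHUNKS; c++)     /* initial even spread         */
                assign_chunk(c, c % N_DOMAINS);
            on_cache_miss(0x00340000);                  /* chunk 3 -> domain 3         */
            assign_chunk(3, 1);                         /* reassigned after monitoring */
            on_cache_miss(0x00340000);                  /* same chunk -> domain 1      */
            return 0;
        }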