71. HIDING PAGE TRANSLATION MISS LATENCY IN PROGRAM MEMORY CONTROLLER BY SELECTIVE PAGE MISS TRANSLATION PREFETCH
    Invention Publication (In force)

    Publication No.: EP3238073A1

    Publication Date: 2017-11-01

    Application No.: EP15874355.9

    Filing Date: 2015-12-22

    IPC Class: G06F12/08

    Abstract: Example embodiments hide the page miss translation latency for program fetches. In example embodiments, whenever an access is requested by a CPU, the L1I cache controller (111) does an a priori lookup of whether the virtual address plus the fetch packet count of expected program fetches crosses a page boundary (1614, 1622). If the access crosses a page boundary (1622), the L1I cache controller (111) requests a second page translation along with the first page. This pipelines requests to the μTLB (1501) without waiting for the L1I cache controller (111) to begin processing the second page requests, making the second page translation request a deterministic prefetch. The translation information for the second page is stored (1624) locally in the L1I cache controller (111) and used when the access crosses the page boundary.

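The boundary check the abstract describes can be sketched in a few lines. This is a toy model, not the patented hardware: the page size, fetch-packet size, and return convention are all assumptions made for illustration.

```python
# Hypothetical sketch of the a priori page-boundary check: given the start
# address and the expected number of fetch packets, decide up front which
# page translations to request so the second page's translation can be
# prefetched (pipelined to the uTLB) instead of being taken as a miss later.

PAGE_SIZE = 4096          # assumed page size in bytes
FETCH_PACKET_BYTES = 32   # assumed size of one fetch packet

def pages_to_translate(virtual_address, fetch_packet_count):
    """Return the page base addresses whose translations should be requested
    for a run of program fetches starting at virtual_address."""
    first_page = virtual_address & ~(PAGE_SIZE - 1)
    last_byte = virtual_address + fetch_packet_count * FETCH_PACKET_BYTES - 1
    last_page = last_byte & ~(PAGE_SIZE - 1)
    if last_page != first_page:
        # The fetch run crosses a page boundary: request the second page's
        # translation along with the first (the deterministic prefetch).
        return [first_page, last_page]
    return [first_page]
```

Because the check depends only on the start address and the fetch packet count, the second request can be issued before any fetch in the run actually touches the second page.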

72. FULLY ASSOCIATIVE CACHE MEMORY BUDGETED BY MEMORY ACCESS TYPE
    Invention Publication (Pending, published)

    Publication No.: EP3230874A1

    Publication Date: 2017-10-18

    Application No.: EP14891608.3

    Filing Date: 2014-12-14

    IPC Class: G06F12/08

    Abstract: A fully associative cache memory, comprising: an array of storage elements; and an allocation unit that allocates the storage elements in response to memory accesses that miss in the cache memory. Each memory access has an associated memory access type (MAT) of a plurality of predetermined MATs, and each valid storage element of the array has an associated MAT. For each MAT, the allocation unit maintains: a counter that counts the number of valid storage elements associated with the MAT; and a corresponding threshold. The allocation unit allocates into any of the storage elements in response to a memory access that misses in the cache, unless the counter of the MAT of the memory access has reached the corresponding threshold, in which case the allocation unit replaces one of the valid storage elements associated with the MAT of the memory access.

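A minimal software model of this budgeting scheme, under stated assumptions: the eviction policy for the under-budget case and the choice of which same-MAT entry to replace are simplified placeholders (the abstract leaves both to the replacement policy), and the class and field names are invented.

```python
class MatBudgetedCache:
    """Toy model: each memory access type (MAT) has a counter and a
    threshold; once a MAT reaches its threshold, a new allocation for that
    MAT must replace one of that MAT's own valid entries."""

    def __init__(self, num_elements, thresholds):
        self.entries = [None] * num_elements   # each entry: (tag, mat) or None
        self.thresholds = thresholds           # MAT -> budget of valid entries
        self.counts = {mat: 0 for mat in thresholds}

    def allocate(self, tag, mat):
        if self.counts[mat] < self.thresholds[mat]:
            # Under budget: may allocate into any storage element.
            for i, e in enumerate(self.entries):
                if e is None:
                    self.entries[i] = (tag, mat)
                    self.counts[mat] += 1
                    return i
            victim = 0   # no free element: placeholder victim choice
        else:
            # At budget: replace one of this MAT's own valid entries.
            victim = next(i for i, e in enumerate(self.entries)
                          if e is not None and e[1] == mat)
        old = self.entries[victim]
        if old is not None:
            self.counts[old[1]] -= 1
        self.entries[victim] = (tag, mat)
        self.counts[mat] += 1
        return victim
```

The invariant the counters enforce is that no MAT ever occupies more elements than its threshold, so one access type cannot crowd the others out of the fully associative array.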

73. NON-LINEAR CACHE LOGIC
    Invention Publication (Pending, published)

    Publication No.: EP3220276A1

    Publication Date: 2017-09-20

    Application No.: EP17160433.3

    Filing Date: 2017-03-10

    Inventor: FENNEY, Simon

    IPC Class: G06F12/0864 G06F12/14

    Abstract: Cache logic for generating a cache address from a binary memory address comprising a first binary sequence of a first predefined length and a second binary sequence of a second predefined length, the cache logic comprising: a plurality of substitution units, each configured to receive a respective allocation of bits of the first binary sequence and to replace its allocated bits with a corresponding substitute bit string selected in dependence on the received allocation of bits; a mapping unit configured to combine the substitute bit strings output by the substitution units so as to form one or more binary strings of the second predefined length; and combination logic arranged to combine the one or more binary strings with the second binary sequence by a reversible operation so as to form a binary output string for use as at least part of a cache address in a cache memory.

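The structure above can be modelled as: S-boxes over the upper address bits, a fold to the index width, and a reversible XOR with the lower bits. Everything concrete here is an assumption for the sketch: the 4-bit S-box contents, the 8-bit index width, and the fold-by-shifting mapping unit are invented, not taken from the patent.

```python
# Toy model of the non-linear cache index hash: nibbles of the upper address
# field pass through a substitution box, the mapping unit folds the outputs
# into an index-width word, and XOR (reversible) mixes in the lower field.

SBOX = [0x6, 0xB, 0x0, 0x3, 0x9, 0xE, 0x5, 0x8,
        0xC, 0x1, 0xF, 0x4, 0xA, 0x7, 0x2, 0xD]  # an invented 4-bit permutation

INDEX_BITS = 8  # assumed width of the second binary sequence (set index)

def cache_index(address):
    upper = address >> INDEX_BITS              # first binary sequence
    lower = address & ((1 << INDEX_BITS) - 1)  # second binary sequence
    mixed = 0
    shift = 0
    while upper:
        nibble = upper & 0xF
        # substitution unit, then the mapping unit folds into INDEX_BITS
        mixed ^= SBOX[nibble] << (shift % INDEX_BITS)
        upper >>= 4
        shift += 4
    # combination logic: reversible XOR with the lower sequence
    return mixed ^ lower
```

Because the final combination is an XOR, addresses sharing the same upper bits are still spread one-to-one across sets, while the S-boxes make the spreading non-linear in the upper bits, which is the point of the scheme.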

75. CACHE REPLACEMENT POLICY THAT CONSIDERS MEMORY ACCESS TYPE
    Invention Publication (Pending, published)

    Publication No.: EP3055775A4

    Publication Date: 2017-07-19

    Application No.: EP14891609

    Filing Date: 2014-12-14

    Abstract: An associative cache memory, comprising: an array of storage elements arranged as M sets by N ways; and an allocation unit that allocates the storage elements in response to memory accesses that miss in the cache memory. Each memory access selects a set. Each memory access has an associated memory access type (MAT) of a plurality of predetermined MATs, and each valid storage element has an associated MAT. A mapping includes, for each MAT, a MAT priority. In response to a memory access that misses in the array, the allocation unit: determines a most eligible way and a second most eligible way of the selected set for replacement based on a replacement policy; and replaces the second most eligible way rather than the most eligible way when the MAT priority of the most eligible way is greater than the MAT priority of the second most eligible way.

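The victim-selection rule reduces to a small comparison; a sketch under stated assumptions (the MAT names and priority values are invented, and the underlying replacement policy that nominates the two candidate ways is taken as given):

```python
def choose_victim(most_eligible, second_most_eligible, mat_priority):
    """Each candidate is a (way, mat) pair nominated by the underlying
    replacement policy (e.g. LRU). Replace the second most eligible way
    instead of the most eligible one when the most eligible way holds a
    higher-priority MAT, so valuable lines survive one extra round."""
    if mat_priority[most_eligible[1]] > mat_priority[second_most_eligible[1]]:
        return second_most_eligible[0]
    return most_eligible[0]

# Invented priorities: demand loads are considered more valuable than
# prefetches, so a demand-load line is spared when possible.
PRIORITY = {"prefetch": 0, "demand_load": 3}
```

The effect is that the base replacement policy still orders the ways, but the MAT priorities can veto its first choice in favour of its second.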

76. CACHE LINE COMPACTION OF COMPRESSED DATA SEGMENTS
    Invention Publication (In force)

    Publication No.: EP3178005A1

    Publication Date: 2017-06-14

    Application No.: EP15742447.4

    Filing Date: 2015-07-09

    IPC Class: G06F12/08

    Abstract: Methods, devices, and non-transitory process-readable storage media for compacting data within cache lines of a cache. An aspect method may include identifying, by a processor of a computing device, a base address (e.g., a physical or virtual cache address) for a first data segment, identifying a data size (e.g., based on a compression ratio) for the first data segment, obtaining a base offset based on the identified data size and the base address of the first data segment, and calculating an offset address by offsetting the base address with the obtained base offset, wherein the calculated offset address is associated with a second data segment. In some aspects, the method may include identifying a parity value for the first data segment based on the base address and obtaining the base offset by performing a lookup on a stored table using the identified data size and the identified parity value.

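The size-and-parity table lookup can be sketched as follows. The cache-line size, segment sizes, parity rule, and every table value here are invented for illustration; the abstract only says the offset comes from a stored table keyed by data size and parity.

```python
# Hypothetical compaction table: two segments compressed to half (64 B) or a
# quarter (32 B) of a 128-byte line are paired into one line, so the
# companion segment sits at base_address + base_offset.

CACHE_LINE = 128  # assumed line size in bytes

# (data_size, parity) -> base offset to the companion segment
OFFSET_TABLE = {
    (32, 0): 32,  (32, 1): -32,
    (64, 0): 64,  (64, 1): -64,
}

def companion_address(base_address, data_size):
    """Compute the offset address associated with the second data segment."""
    # Assumed parity rule: which size-aligned slot the segment occupies.
    parity = (base_address // data_size) % 2
    base_offset = OFFSET_TABLE[(data_size, parity)]
    return base_address + base_offset
```

With this convention the mapping is symmetric: applying it twice returns the original base address, so either segment of a compacted pair can locate the other.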

77. CACHE OPERATIONS FOR MEMORY MANAGEMENT
    Invention Publication (Pending, published)

    Publication No.: EP3049937A4

    Publication Date: 2017-05-17

    Application No.: EP13894149

    Filing Date: 2013-09-27

    Applicant: INTEL CORP

    IPC Class: G06F12/08 G06F12/02 G06F12/12

    Abstract: In accordance with the present description, cache operations for a memory-side cache in front of a backing memory, such as a byte-addressable non-volatile memory, include combining at least two of a first operation, a second operation, and a third operation. The first operation includes evicting victim cache entries from the cache memory in accordance with a replacement policy biased toward evicting cache entries having clean cache lines over cache entries having dirty cache lines. The second operation includes evicting victim cache entries from the primary cache memory to a victim cache memory of the cache memory, and the third operation includes translating memory location addresses to shuffle and spread the memory location addresses within an address range of the backing memory. It is believed that various combinations of these operations may provide improved operation of a memory. Other aspects are described herein.

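The first operation, the clean-biased replacement policy, can be sketched in isolation. The entry representation (an LRU age plus a dirty bit) is an assumption; the abstract names the bias but not the underlying policy it modifies.

```python
def pick_victim(entries):
    """entries: candidate cache entries, each a dict with 'lru_age'
    (higher = older) and 'dirty'. Prefer evicting the oldest clean entry,
    which avoids a write-back to the (non-volatile) backing memory; fall
    back to the oldest dirty entry only when every candidate is dirty."""
    clean = [e for e in entries if not e["dirty"]]
    pool = clean if clean else entries
    return max(pool, key=lambda e: e["lru_age"])
```

The rationale for the bias: evicting a clean line is free, while a dirty line costs a write to the backing store, which for non-volatile memory is both slow and a wear event.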

78. CACHE ARCHITECTURE
    Invention Publication (Pending, published)

    Publication No.: EP3149596A1

    Publication Date: 2017-04-05

    Application No.: EP15803327.4

    Filing Date: 2015-06-01

    Inventor: WALKER, Robert M.

    IPC Class: G06F12/08

    Abstract: The present disclosure includes apparatuses and methods for a cache architecture. An example apparatus that includes a cache architecture according to the present disclosure can include: an array of memory cells configured to store multiple cache entries per page of memory cells; and sense circuitry configured to determine whether cache data corresponding to a request from a cache controller is located at a location in the array corresponding to the request, and to return a response to the cache controller indicating whether the cache data is located at that location.

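Functionally, the request/response protocol amounts to an in-array tag check at a controller-named location. A toy software model, with the entries-per-page count, tag convention, and hit/miss response format all invented for the sketch:

```python
class PageCache:
    """Toy model: each memory-cell 'page' row holds several cache entries;
    the (modelled) sense circuitry checks whether the requested tag is
    actually present at the location named in the request and reports back."""

    ENTRIES_PER_PAGE = 4  # assumed number of cache entries per page

    def __init__(self, num_pages):
        self.pages = [[None] * self.ENTRIES_PER_PAGE for _ in range(num_pages)]

    def store(self, page, slot, tag, data):
        self.pages[page][slot] = (tag, data)

    def request(self, page, slot, tag):
        """Return ('hit', data) if the tagged entry is at the requested
        location, else ('miss', None), mimicking the response sent back
        to the cache controller."""
        entry = self.pages[page][slot]
        if entry is not None and entry[0] == tag:
            return ("hit", entry[1])
        return ("miss", None)
```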

79. SET ASSOCIATIVE CACHE MEMORY WITH HETEROGENEOUS REPLACEMENT POLICY
    Invention Publication (Pending, published)

    Publication No.: EP3129890A1

    Publication Date: 2017-02-15

    Application No.: EP14891610.9

    Filing Date: 2014-12-14

    IPC Class: G06F12/16 G06F12/08

    Abstract: A set associative cache memory, comprising: an array of storage elements arranged as M sets by N ways; and an allocation unit that allocates the storage elements in response to memory accesses that miss in the cache memory. Each memory access selects a set. For each parcel of a plurality of parcels, a parcel specifier specifies: a subset of the N ways included in the parcel, where the subsets of ways of the parcels associated with a selected set are mutually exclusive; and a replacement scheme associated with the parcel from among a plurality of predetermined replacement schemes. For each memory access, the allocation unit: selects the parcel specifier in response to the memory access; and uses the replacement scheme associated with the parcel to allocate into the subset of ways of the selected set included in the parcel.

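A sketch of one such set with two parcels. The parcel layout, the access-kind-to-parcel mapping, and the two schemes (LRU and random) are assumptions for illustration; the abstract only requires disjoint way subsets, each with its own scheme from a predetermined list.

```python
import random

# Parcel specifiers for one selected set: disjoint subsets of the N = 6
# ways, each with its own replacement scheme.
PARCELS = [
    {"ways": [0, 1, 2, 3], "scheme": "lru"},
    {"ways": [4, 5],       "scheme": "random"},
]

def select_parcel(access_kind):
    # How the memory access selects a parcel specifier is an assumption
    # here (e.g. data accesses vs instruction fetches).
    return PARCELS[0] if access_kind == "data" else PARCELS[1]

def allocate(parcel, lru_ages, rng=random):
    """Pick the victim way inside the parcel using the parcel's scheme.
    lru_ages: way -> age counter (higher = older)."""
    ways = parcel["ways"]
    if parcel["scheme"] == "lru":
        return max(ways, key=lambda w: lru_ages[w])  # oldest way in parcel
    return rng.choice(ways)                          # random within parcel
```

Because the subsets are disjoint, the two schemes never compete for the same way, so each access class effectively gets a private partition with its own policy.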

80. THREAD AND DATA ASSIGNMENT IN MULTI-CORE PROCESSORS
    Invention Publication (Pending, published)

    Publication No.: EP3111333A1

    Publication Date: 2017-01-04

    Application No.: EP14884199.2

    Filing Date: 2014-02-27

    Inventor: SOLIHIN, Yan

    IPC Class: G06F13/14

    Abstract: Technologies are generally described for methods and systems to assign threads in a multi-core processor. In an example, a method to assign threads in a multi-core processor may include determining data relating to memory controllers that fetch data in response to cache misses experienced by a first core and a second core. Threads may be assigned to cores based on the number of cache misses processed by the respective memory controllers. Methods may further include determining whether a thread is latency-bound or bandwidth-bound, and threads may be assigned to cores based on that determination. In response to the assignment of the threads to the cores, data for the threads may be stored in the assigned cores.

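The assignment heuristic can be sketched as follows. Everything concrete is an assumption: the controller-to-core mapping, the "busiest controller wins" rule, and giving latency-bound threads first pick are one plausible reading of the abstract, which does not fix these details, and the sketch does not model core capacity.

```python
def assign_threads(miss_counts, core_of_controller, thread_kinds):
    """miss_counts: thread -> {memory controller: cache misses it served}.
    Assign each thread to the core associated with the controller that
    serves most of its misses, so miss traffic stays local. Latency-bound
    threads are processed first (an assumed tie-break ordering)."""
    order = sorted(miss_counts,
                   key=lambda t: thread_kinds[t] != "latency-bound")
    assignment = {}
    for thread in order:
        per_ctrl = miss_counts[thread]
        busiest = max(per_ctrl, key=per_ctrl.get)
        assignment[thread] = core_of_controller[busiest]
    return assignment
```

Per the abstract's last step, the thread's data would then be placed in (the cache of) its assigned core, keeping both the thread and its miss stream near the controller that services them.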