MICROPROCESSOR WITH REPEAT PREFETCH INDIRECT INSTRUCTION
    1.
    Invention application (in force)

    Publication number: US20110035551A1

    Publication date: 2011-02-10

    Application number: US12579931

    Filing date: 2009-10-15

    Abstract: A microprocessor includes an instruction decoder for decoding a repeat prefetch indirect instruction that includes address operands used to calculate an address of a first entry in a prefetch table having a plurality of entries, each including a prefetch address. The repeat prefetch indirect instruction also includes a count specifying a number of cache lines to be prefetched. The memory address of each of the cache lines is specified by the prefetch address in one of the entries in the prefetch table. A count register, initially loaded with the count specified in the prefetch instruction, stores a remaining count of the cache lines to be prefetched. Control logic fetches the prefetch addresses of the cache lines from the table into the microprocessor and prefetches the cache lines from the system memory into a cache memory of the microprocessor using the count register and the prefetch addresses fetched from the table.
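
    Illustrative sketch (not from the patent text): a minimal Python model of the repeat prefetch indirect semantics the abstract describes, assuming a 64-byte cache line and treating the prefetch table as a simple list of addresses; all names here are hypothetical.

    LINE_SIZE = 64  # assumed cache line size

    class Cache:
        """Toy cache: records which line-aligned addresses were prefetched."""
        def __init__(self):
            self.lines = set()
        def prefetch(self, addr):
            self.lines.add(addr - addr % LINE_SIZE)

    def repeat_prefetch_indirect(cache, prefetch_table, first_entry, count):
        """Walk `count` entries of the prefetch table starting at entry
        `first_entry`, prefetching the cache line whose memory address is
        stored in each entry. `remaining` plays the role of the count
        register, decremented as each line is prefetched."""
        remaining = count
        index = first_entry
        while remaining > 0 and index < len(prefetch_table):
            prefetch_addr = prefetch_table[index]  # fetch address from table
            cache.prefetch(prefetch_addr)          # prefetch that cache line
            index += 1
            remaining -= 1

    # Prefetch the first three lines named by a four-entry table.
    cache = Cache()
    table = [0x1000, 0x2040, 0x9F80, 0x4400]
    repeat_prefetch_indirect(cache, table, first_entry=0, count=3)
    print(sorted(hex(a) for a in cache.lines))  # ['0x1000', '0x2040', '0x9f80']

    The software loop stands in for the hardware control logic; in the patent the table walk and the line prefetches are performed by the microprocessor itself.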

    DATA PREFETCHER WITH COMPLEX STRIDE PREDICTOR
    2.
    Invention application (in force)

    Publication number: US20140006718A1

    Publication date: 2014-01-02

    Application number: US13535062

    Filing date: 2012-06-27

    Abstract: A hardware data prefetcher includes a queue of indexed storage elements into which are queued strides associated with a stream of temporally adjacent load requests. Each stride is a difference between cache line offsets of memory addresses of respective adjacent load requests. Hardware logic calculates a current stride between a current load request and a newest previous load request. The hardware logic compares the current stride and a stride M in the queue and compares the newest of the queued strides with a queued stride M+1, which is older than and adjacent to stride M. When the comparisons match, the hardware logic prefetches a cache line whose offset is the sum of the offset of the current load request and a stride M−1. Stride M−1 is newer than and adjacent to stride M in the queue.
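
    Illustrative sketch (not from the patent text): a small Python model of the stride-matching rule described above. If the current stride matches queued stride M and the newest queued stride matches queued stride M+1, the predictor assumes the next stride will be queued stride M-1. Queue depth, cache-line size, and all names are assumptions.

    from collections import deque

    LINE = 64          # assumed cache line size
    QUEUE_DEPTH = 8    # assumed depth of the stride queue

    class ComplexStridePredictor:
        def __init__(self):
            self.strides = deque(maxlen=QUEUE_DEPTH)  # index 0 = newest stride
            self.last_offset = None

        def on_load(self, addr):
            """Record a load and return a predicted prefetch address, or None."""
            offset = addr // LINE           # cache-line offset of the address
            prediction = None
            if self.last_offset is not None:
                current = offset - self.last_offset   # current stride
                # Find M such that stride M matches the current stride and
                # stride M+1 (older, adjacent) matches the newest queued stride.
                for m in range(1, len(self.strides) - 1):
                    if (self.strides[m] == current and
                            self.strides[m + 1] == self.strides[0]):
                        next_stride = self.strides[m - 1]  # stride M-1, newer
                        prediction = (offset + next_stride) * LINE
                        break
                self.strides.appendleft(current)
            self.last_offset = offset
            return prediction

    # A repeating +1, +2, +3 line-offset pattern triggers a prediction.
    p = ComplexStridePredictor()
    for a in [0, 1 * LINE, 3 * LINE, 6 * LINE, 7 * LINE, 9 * LINE]:
        hit = p.on_load(a)
        if hit is not None:
            print("prefetch line at", hex(hit))  # 0x300, i.e. line offset 12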

    Bounding box prefetcher
    4.
    Granted patent (in force)

    Publication number: US08762649B2

    Publication date: 2014-06-24

    Application number: US13033765

    Filing date: 2011-02-24

    CPC classification number: G06F9/3814 G06F12/0862 G06F2212/602 G06F2212/6026

    Abstract: A data prefetcher in a microprocessor having a cache memory receives memory accesses each to an address within a memory block. The access addresses are non-monotonically increasing or decreasing as a function of time. As the accesses are received, the prefetcher maintains a largest address and a smallest address of the accesses and counts of changes to the largest and smallest addresses and maintains a history of recently accessed cache lines implicated by the access addresses within the memory block. The prefetcher also determines a predominant access direction based on the counts and determines a predominant access pattern based on the history. The prefetcher also prefetches into the cache memory, in the predominant access direction according to the predominant access pattern, cache lines of the memory block which the history indicates have not been recently accessed.
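
    Illustrative sketch (not from the patent text): a simplified Python model of the per-block bounding-box bookkeeping (largest and smallest access addresses, counts of changes to each, and a history bitmap of accessed cache lines), followed by prefetching not-yet-accessed lines in the predominant direction. Block and line sizes, and the simple majority rule for the direction, are assumptions.

    LINE = 64
    BLOCK_LINES = 64            # assumed number of cache lines per memory block

    class BoundingBoxTracker:
        def __init__(self, block_base):
            self.base = block_base
            self.min_addr = None          # smallest access address seen
            self.max_addr = None          # largest access address seen
            self.min_changes = 0          # times the smallest address moved
            self.max_changes = 0          # times the largest address moved
            self.accessed = [False] * BLOCK_LINES  # history of accessed lines

        def access(self, addr):
            self.accessed[(addr - self.base) // LINE] = True
            if self.min_addr is None:
                self.min_addr = self.max_addr = addr
                return
            if addr < self.min_addr:
                self.min_addr, self.min_changes = addr, self.min_changes + 1
            elif addr > self.max_addr:
                self.max_addr, self.max_changes = addr, self.max_changes + 1

        def direction(self):
            """Predominant access direction: +1 upward, -1 downward."""
            return 1 if self.max_changes >= self.min_changes else -1

        def prefetch_candidates(self, count=4):
            """Next `count` not-yet-accessed lines past the bounding-box edge,
            in the predominant access direction."""
            d = self.direction()
            edge = (self.max_addr if d > 0 else self.min_addr) - self.base
            line, out = edge // LINE + d, []
            while 0 <= line < BLOCK_LINES and len(out) < count:
                if not self.accessed[line]:
                    out.append(self.base + line * LINE)
                line += d
            return out

    # Non-monotonic accesses that nevertheless trend upward through the block.
    t = BoundingBoxTracker(block_base=0x10000)
    for a in (0x10000, 0x100C0, 0x10040, 0x10140, 0x10100):
        t.access(a)
    print([hex(a) for a in t.prefetch_candidates(2)])  # ['0x10180', '0x101c0']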

    Bounding box prefetcher
    5.
    Granted patent

    Publication number: US08656111B2

    Publication date: 2014-02-18

    Application number: US13033765

    Filing date: 2011-02-24

    Abstract: A data prefetcher in a microprocessor having a cache memory receives memory accesses each to an address within a memory block. The access addresses are non-monotonically increasing or decreasing as a function of time. As the accesses are received, the prefetcher maintains a largest address and a smallest address of the accesses and counts of changes to the largest and smallest addresses and maintains a history of recently accessed cache lines implicated by the access addresses within the memory block. The prefetcher also determines a predominant access direction based on the counts and determines a predominant access pattern based on the history. The prefetcher also prefetches into the cache memory, in the predominant access direction according to the predominant access pattern, cache lines of the memory block which the history indicates have not been recently accessed.

    ELECTRICAL MACHINES
    6.
    Invention application (in force)

    Publication number: US20130200734A1

    Publication date: 2013-08-08

    Application number: US13639230

    Filing date: 2011-03-24

    Abstract: The invention relates to a component such as a rotor or stator for an electrical machine. The component includes a plurality of axially adjacent stacks of laminations. At least one pair of adjacent stacks are spaced apart in the axial direction by spacer means such that a passageway or duct for cooling fluid, e.g. air, is formed therebetween. The spacer means comprises a porous structural mat of metal fibres. The cooling fluid may flow through the spaces or voids between the fibres.

    Microprocessor with repeat prefetch indirect instruction
    7.
    Granted patent (in force)

    Publication number: US08364902B2

    Publication date: 2013-01-29

    Application number: US12579931

    Filing date: 2009-10-15

    Abstract: A microprocessor includes an instruction decoder for decoding a repeat prefetch indirect instruction that includes address operands used to calculate an address of a first entry in a prefetch table having a plurality of entries, each including a prefetch address. The repeat prefetch indirect instruction also includes a count specifying a number of cache lines to be prefetched. The memory address of each of the cache lines is specified by the prefetch address in one of the entries in the prefetch table. A count register, initially loaded with the count specified in the prefetch instruction, stores a remaining count of the cache lines to be prefetched. Control logic fetches the prefetch addresses of the cache lines from the table into the microprocessor and prefetches the cache lines from the system memory into a cache memory of the microprocessor using the count register and the prefetch addresses fetched from the table.

    BOUNDING BOX PREFETCHER
    8.
    Invention application (in force)

    Publication number: US20110238922A1

    Publication date: 2011-09-29

    Application number: US13033765

    Filing date: 2011-02-24

    CPC classification number: G06F9/3814 G06F12/0862 G06F2212/602 G06F2212/6026

    Abstract: A data prefetcher in a microprocessor having a cache memory receives memory accesses each to an address within a memory block. The access addresses are non-monotonically increasing or decreasing as a function of time. As the accesses are received, the prefetcher maintains a largest address and a smallest address of the accesses and counts of changes to the largest and smallest addresses and maintains a history of recently accessed cache lines implicated by the access addresses within the memory block. The prefetcher also determines a predominant access direction based on the counts and determines a predominant access pattern based on the history. The prefetcher also prefetches into the cache memory, in the predominant access direction according to the predominant access pattern, cache lines of the memory block which the history indicates have not been recently accessed.

    BOUNDING BOX PREFETCHER WITH REDUCED WARM-UP PENALTY ON MEMORY BLOCK CROSSINGS
    9.
    Invention application (in force)

    Publication number: US20110238920A1

    Publication date: 2011-09-29

    Application number: US13033848

    Filing date: 2011-02-24

    CPC classification number: G06F12/0862 G06F9/383 G06F2212/6026

    Abstract: A microprocessor includes a cache memory and a data prefetcher. The data prefetcher detects a pattern of memory accesses within a first memory block and prefetches into the cache memory cache lines from the first memory block based on the pattern. The data prefetcher also observes a new memory access request to a second memory block. The data prefetcher also determines that the first memory block is virtually adjacent to the second memory block and that the pattern, when continued from the first memory block to the second memory block, predicts an access to a cache line implicated by the new request within the second memory block. The data prefetcher also responsively prefetches into the cache memory cache lines from the second memory block based on the pattern.
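
    Illustrative sketch (not from the patent text): a minimal Python model of the warm-up shortcut described above. When a new access lands in a memory block adjacent to one whose pattern is already established, and continuing that pattern into the new block predicts the line just accessed, the new block is prefetched from the inherited pattern instead of being re-learned. Block size, the pattern encoding (a set of line indices assumed to repeat in the adjacent block), and all names are assumptions; virtual-address handling is omitted.

    LINE = 64
    BLOCK = 4096                  # assumed memory block size (64 cache lines)

    def on_access_to_new_block(tracked_blocks, addr):
        """tracked_blocks maps block base -> (pattern, direction), where
        pattern is the set of line indices accessed in that block and
        direction is +1 (upward) or -1 (downward)."""
        new_base = addr - addr % BLOCK
        line = (addr - new_base) // LINE
        for base, (pattern, direction) in tracked_blocks.items():
            adjacent = (new_base == base + BLOCK and direction > 0) or \
                       (new_base == base - BLOCK and direction < 0)
            if adjacent and line in pattern:
                # The continued pattern predicted this access: prefetch the
                # remaining pattern lines of the new block with no warm-up.
                return [new_base + i * LINE for i in sorted(pattern) if i != line]
        return []   # no adjacent tracked block: fall back to normal warm-up

    # Block 0x0000 was accessed at every even line, moving upward; the first
    # access to the next block hits line 4, which the pattern predicts.
    tracked = {0x0000: ({i for i in range(0, 64, 2)}, +1)}
    lines = on_access_to_new_block(tracked, 0x1000 + 4 * LINE)
    print(len(lines), hex(lines[0]))   # 31 0x1000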

    Data prefetcher with complex stride predictor
    10.
    Granted patent (in force)

    Publication number: US09032159B2

    Publication date: 2015-05-12

    Application number: US13535062

    Filing date: 2012-06-27

    Abstract: A hardware data prefetcher includes a queue of indexed storage elements into which are queued strides associated with a stream of temporally adjacent load requests. Each stride is a difference between cache line offsets of memory addresses of respective adjacent load requests. Hardware logic calculates a current stride between a current load request and a newest previous load request. The hardware logic compares the current stride and a stride M in the queue and compares the newest of the queued strides with a queued stride M+1, which is older than and adjacent to stride M. When the comparisons match, the hardware logic prefetches a cache line whose offset is the sum of the offset of the current load request and a stride M−1. Stride M−1 is newer than and adjacent to stride M in the queue.
