Abstract:
A microprocessor includes an instruction decoder for decoding a repeat prefetch indirect instruction that includes address operands used to calculate an address of a first entry in a prefetch table having a plurality of entries, each including a prefetch address. The repeat prefetch indirect instruction also includes a count specifying a number of cache lines to be prefetched. The memory address of each of the cache lines is specified by the prefetch address in one of the entries in the prefetch table. A count register, initially loaded with the count specified in the prefetch instruction, stores a remaining count of the cache lines to be prefetched. Control logic fetches the prefetch addresses of the cache lines from the table into the microprocessor and prefetches the cache lines from system memory into a cache memory of the microprocessor using the count register and the prefetch addresses fetched from the table.
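As a rough software model of these semantics, the following C sketch treats the prefetch table as an in-memory array of addresses and substitutes a loop for the decoded instruction's control logic. The type and function names are illustrative, and the GCC/Clang hint __builtin_prefetch stands in for the microprocessor's internal prefetch request; actual hardware would perform the table fetches and line prefetches without software involvement.

    #include <stddef.h>

    /* One entry of the prefetch table: a prefetch address naming a
     * cache line in system memory. Illustrative layout. */
    typedef struct {
        const void *prefetch_addr;
    } prefetch_table_entry_t;

    /* Model of the repeat prefetch indirect semantics: 'count' plays the
     * role of the count register, loaded once with the instruction's
     * count operand and decremented as each cache line is prefetched. */
    static void repeat_prefetch_indirect(const prefetch_table_entry_t *table,
                                         size_t count)
    {
        while (count--) {
            /* Fetch the prefetch address from the current table entry,
             * then request the cache line it names. */
            __builtin_prefetch(table->prefetch_addr, 0 /* read */, 3 /* keep */);
            table++;
        }
    }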
Abstract:
A hardware data prefetcher includes a queue of indexed storage elements into which are queued strides associated with a stream of temporally adjacent load requests. Each stride is a difference between cache line offsets of memory addresses of respective adjacent load requests. Hardware logic calculates a current stride between a current load request and a newest previous load request. The hardware logic compares the current stride and a stride M in the queue and compares the newest of the queued strides with a queued stride M+1, which is older than and adjacent to stride M. When the comparisons match, the hardware logic prefetches a cache line whose offset is the sum of the offset of the current load request and a stride M−1. Stride M−1 is newer than and adjacent to stride M in the queue.
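The matching step can be sketched in C as follows, with strides queued newest-first: stride[0] is the newest queued stride, and stride[i+1] is older than and adjacent to stride[i]. The queue depth and all names are assumptions made for illustration, not details from the patent.

    #include <stdint.h>
    #include <stdbool.h>

    #define QUEUE_DEPTH 8   /* illustrative depth */

    typedef struct {
        int32_t stride[QUEUE_DEPTH];   /* stride[0] is the newest */
        int     count;                 /* number of valid queued strides */
    } stride_queue_t;

    /* Given the cache-line offsets of the current load and the newest
     * previous load, search for a position M whose history matches and
     * report the offset to prefetch. Returns true on a match. */
    static bool match_and_predict(const stride_queue_t *q,
                                  int32_t cur_offset, int32_t prev_offset,
                                  int32_t *out_offset)
    {
        int32_t cur_stride = cur_offset - prev_offset;

        /* M needs a newer neighbor M-1 (the prediction) and an older
         * neighbor M+1 (the second comparison), so scan 1..count-2. */
        for (int m = 1; m + 1 < q->count; m++) {
            if (cur_stride == q->stride[m] &&       /* current stride vs M  */
                q->stride[0] == q->stride[m + 1]) { /* newest stride vs M+1 */
                /* History repeats: the stride that followed M last time
                 * predicts the next cache line. */
                *out_offset = cur_offset + q->stride[m - 1];
                return true;
            }
        }
        return false;
    }

A caller that finds a match would issue a prefetch for the cache line at the returned offset and then push cur_stride into the queue, aging the existing entries by one position.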
Abstract:
The invention relates to a component such as a rotor or stator for an electrical machine. The component includes a plurality of axially adjacent stacks of laminations. At least one pair of adjacent stacks is spaced apart in the axial direction by spacer means such that a passageway or duct for cooling fluid, e.g. air, is formed therebetween. The spacer means comprises a porous structural mat of metal fibers. The cooling fluid may flow through the spaces or voids between the fibers.
Abstract:
A data prefetcher in a microprocessor having a cache memory receives memory accesses, each to an address within a memory block. The access addresses are non-monotonically increasing or decreasing as a function of time. As the accesses are received, the prefetcher maintains the largest and smallest addresses of the accesses, counts of changes to the largest and smallest addresses, and a history of recently accessed cache lines implicated by the access addresses within the memory block. The prefetcher determines a predominant access direction based on the counts and a predominant access pattern based on the history, and prefetches into the cache memory, in the predominant access direction according to the predominant access pattern, cache lines of the memory block which the history indicates have not been recently accessed.
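A simplified C sketch of the per-block tracking state follows. It keeps the access history as a bitmap of cache lines within the block and derives the predominant direction from the two change counts; the pattern matching that would select which not-recently-accessed lines to request is reduced here to skipping lines whose history bit is already set. Sizes and names are assumptions.

    #include <stdint.h>

    #define BLOCK_LINES 64   /* cache lines per tracked block; illustrative */

    typedef struct {
        uint64_t line_history;              /* bit i set => line i accessed */
        uint32_t max_line, min_line;
        uint32_t max_changes, min_changes;  /* updates to max/min so far */
    } block_tracker_t;

    static void tracker_init(block_tracker_t *t)
    {
        t->line_history = 0;
        t->max_line = 0;                 /* pushed upward by accesses */
        t->min_line = BLOCK_LINES - 1;   /* pushed downward by accesses */
        t->max_changes = t->min_changes = 0;
    }

    /* Record one access to cache line 'line' (0..BLOCK_LINES-1). */
    static void record_access(block_tracker_t *t, uint32_t line)
    {
        t->line_history |= (uint64_t)1 << line;
        if (line > t->max_line) { t->max_line = line; t->max_changes++; }
        if (line < t->min_line) { t->min_line = line; t->min_changes++; }
    }

    /* Predominant direction: upward if the largest address changed at
     * least as often as the smallest, downward otherwise. */
    static int predominant_direction(const block_tracker_t *t)
    {
        return (t->max_changes >= t->min_changes) ? +1 : -1;
    }

    /* Visit the block's lines in the predominant direction and emit
     * prefetch candidates for lines the history has not seen. A full
     * implementation would use the detected access pattern to pick
     * among these rather than requesting them all. */
    static void prefetch_candidates(const block_tracker_t *t,
                                    void (*emit)(uint32_t line))
    {
        int dir = predominant_direction(t);
        int32_t line = (dir > 0) ? 0 : BLOCK_LINES - 1;
        for (; line >= 0 && line < BLOCK_LINES; line += dir)
            if (!(t->line_history & ((uint64_t)1 << line)))
                emit((uint32_t)line);
    }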
Abstract:
A microprocessor includes a cache memory and a data prefetcher. The data prefetcher detects a pattern of memory accesses within a first memory block and prefetches cache lines from the first memory block into the cache memory based on the pattern. The data prefetcher also observes a new memory access request to a second memory block, determines that the first memory block is virtually adjacent to the second memory block, and determines that the pattern, when continued from the first memory block to the second memory block, predicts an access to the cache line implicated by the new request within the second memory block. In response, the data prefetcher prefetches cache lines from the second memory block into the cache memory based on the pattern.
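The adjacency-and-prediction check can be sketched in C as below. It assumes, purely for illustration, that the detected pattern is kept as a bitmap of the lines it touches within a block and that the pattern's period divides the block size, so continuing the pattern into the neighboring block reuses the same bitmap; the block and line sizes and all names are assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    #define BLOCK_SIZE 4096u   /* tracked block size; illustrative */
    #define LINE_SIZE  64u     /* cache line size; illustrative */

    /* State for a block in which a pattern was already detected. */
    typedef struct {
        uint64_t base;      /* virtual base address of the first block */
        uint64_t pattern;   /* bit i set => pattern touches line i */
        int      dir;       /* +1 ascending, -1 descending */
    } block_pattern_t;

    /* On an access to 'vaddr' in an untracked second block, decide
     * whether to carry the first block's pattern forward: the second
     * block must be virtually adjacent in the pattern's direction, and
     * the continued pattern must predict the line just accessed. */
    static bool continue_pattern(const block_pattern_t *p, uint64_t vaddr)
    {
        uint64_t new_base = vaddr & ~(uint64_t)(BLOCK_SIZE - 1);
        uint64_t expected = (p->dir > 0) ? p->base + BLOCK_SIZE
                                         : p->base - BLOCK_SIZE;
        if (new_base != expected)
            return false;   /* not virtually adjacent */

        uint32_t line = (uint32_t)((vaddr - new_base) / LINE_SIZE);
        return (p->pattern >> line) & 1;   /* pattern predicts this line? */
    }

On a true return, the prefetcher would begin issuing requests for the second block's cache lines according to the same pattern.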