Abstract:
Systems and methods of cache flushing include receiving, from a software application, a first cache flush request to perform a range-based cache flush of a contiguous virtual address range within a virtual memory that maps to a physical memory. A second cache flush request to a cache triggers a single cache walk. In response to the first cache flush request, the single cache walk performs the range-based cache flush over the contiguous physical address range to which the virtual address range maps, from the beginning address of the contiguous physical address range to its ending address.
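The range-based flush can be modeled in software. A minimal sketch, assuming a cache modeled as a set of resident line addresses (all types and names below are hypothetical), shows how one walk over the cache covers the whole range instead of issuing one flush instruction per line:

    // Minimal software model (all names hypothetical) of a range-based flush.
    #include <cstdint>
    #include <set>
    #include <cstdio>

    using Addr = uint64_t;
    constexpr Addr kLine = 64;            // cache-line size in bytes

    std::set<Addr> cache;                 // modeled set of cached line addresses

    // One "cache walk": visit each resident line once and flush it if it
    // lies inside the contiguous physical range [begin, end).
    void range_flush(Addr begin, Addr end) {
        for (auto it = cache.begin(); it != cache.end();) {
            if (*it >= begin && *it < end)
                it = cache.erase(it);     // models write back + invalidate
            else
                ++it;
        }
    }

    int main() {
        for (Addr a = 0; a < 1024; a += kLine) cache.insert(a);
        range_flush(128, 512);            // single walk covers the whole range
        std::printf("lines remaining: %zu\n", cache.size());
    }

A design point the sketch illustrates: the single walk's cost scales with cache occupancy rather than with the size of the flushed range, unlike a per-line flush loop.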
Abstract:
A memory includes two or more portions of memory circuitry and two or more processor-in-memory (PIM) functional blocks, each PIM functional block associated with a respective portion of the memory circuitry. In operation, a PIM functional block other than a particular PIM functional block copies data from a source location accessible to that other PIM functional block. The other PIM functional block then provides the data to the particular PIM functional block, which acquires and stores the data in a destination location accessible to it. The particular PIM functional block next performs one or more PIM operations using the data.
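A minimal sketch of the copy path, modeling each PIM functional block as an object with its own accessible memory portion (all names hypothetical):

    // Hypothetical model: one PIM block reads data it can access and hands
    // it to another block, which stores it locally and then operates on it.
    #include <vector>
    #include <cstdio>

    struct PimBlock {
        std::vector<int> memory;          // the portion of memory circuitry this block can access
        int read(size_t i) const { return memory[i]; }
        void write(size_t i, int v) { memory[i] = v; }
        void pim_op(size_t i) { memory[i] *= 2; }  // stand-in for a PIM operation
    };

    int main() {
        PimBlock helper{{10, 20, 30}};    // holds the source data
        PimBlock target{{0, 0, 0}};       // will perform the PIM operations

        int data = helper.read(1);        // helper copies from its source location
        target.write(0, data);            // target stores it in its destination location
        target.pim_op(0);                 // target performs the PIM operation in place
        std::printf("%d\n", target.read(0));  // prints 40
    }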
Abstract:
A method and apparatus for performing memory prefetching include determining whether to initiate prefetching. Upon a determination to initiate prefetching, a first memory row is identified as a suitable prefetch candidate, and it is determined whether a particular set of one or more cachelines of the first memory row is to be prefetched.
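A minimal sketch of the two-level decision, with purely hypothetical policies for when to prefetch, which row is a candidate, and which cachelines within it to fetch:

    // Hypothetical policy: prefetch after repeated row hits, target the next
    // row, and fetch only the cachelines touched last time the row was open.
    #include <bitset>
    #include <cstdio>

    constexpr int kLinesPerRow = 8;

    bool should_prefetch(int recent_row_hits) { return recent_row_hits >= 2; }

    // Hypothetical candidate choice: the row after the current one.
    int candidate_row(int current_row) { return current_row + 1; }

    // Hypothetical per-line filter: one history bit per cacheline.
    std::bitset<kLinesPerRow> lines_to_fetch(std::bitset<kLinesPerRow> history) {
        return history;
    }

    int main() {
        if (should_prefetch(/*recent_row_hits=*/3)) {
            int row = candidate_row(/*current_row=*/41);
            auto lines = lines_to_fetch(std::bitset<kLinesPerRow>("01100110"));
            std::printf("prefetch row %d, lines %s\n", row, lines.to_string().c_str());
        }
    }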
Abstract:
A cache coherence bridge protocol provides an interface between a cache coherence protocol of a host processor and a cache coherence protocol of a processor-in-memory, thereby decoupling the coherence mechanisms of the host processor and the processor-in-memory. The bridge protocol requires only limited changes to existing host processor cache coherence protocols, and may be used to facilitate interoperability between host processors and processor-in-memory devices designed by different vendors, each of which may implement its own coherence techniques among the computing units within it. The bridge protocol may support a different granularity of cache coherence permissions than the granularities used by the cache coherence protocols of the host processor and/or the processor-in-memory. It uses a shadow directory that maintains status information indicating an aggregate view of copies of data cached in the system external to the processor-in-memory containing that data.
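A minimal sketch of the shadow directory's aggregate view (hypothetical states, with host-side invalidation stubbed out), illustrating that the bridge tracks only whether external copies exist, not which host cache holds them:

    // Hypothetical structures: per cache block, the shadow directory records
    // an aggregate external state rather than a per-cache sharer list.
    #include <cstdint>
    #include <unordered_map>
    #include <cstdio>

    enum class ExternalState { None, Shared, Exclusive };  // aggregate view only

    std::unordered_map<uint64_t, ExternalState> shadow_directory;

    // Before a PIM write, the bridge consults the shadow directory and, if
    // external copies exist, asks the host side to invalidate them (modeled
    // here as simply clearing the aggregate state).
    bool pim_write_permitted(uint64_t block) {
        auto it = shadow_directory.find(block);
        if (it == shadow_directory.end() || it->second == ExternalState::None)
            return true;                  // no external copies: write locally
        it->second = ExternalState::None; // bridge issues host-side invalidation
        return true;
    }

    int main() {
        shadow_directory[0x40] = ExternalState::Shared;
        std::printf("write ok: %d\n", pim_write_permitted(0x40));
    }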
Abstract:
A method, a system, and a non-transitory computer readable medium for generating application code to be executed on an active storage device are presented. The parts of an application that can be executed on the active storage device are determined. The parts of the application that will not be executed on the active storage device are converted into code to be executed on a host device. The parts of the application that will be executed on the active storage device are converted into code of an instruction set architecture of a processor in the active storage device.
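A minimal sketch of the partitioning step, with a hypothetical per-part capability flag standing in for the actual analysis of which parts can execute on the active storage device:

    // Hypothetical pass: classify each application part, then lower it either
    // to host code or to the active storage device's instruction set.
    #include <string>
    #include <vector>
    #include <cstdio>

    struct Part { std::string name; bool device_capable; };

    std::string lower_for_host(const Part& p)   { return "host:"   + p.name; }
    std::string lower_for_device(const Part& p) { return "device:" + p.name; }  // device ISA

    int main() {
        std::vector<Part> app = {{"parse", false}, {"filter", true}, {"report", false}};
        for (const auto& p : app)
            std::printf("%s\n",
                (p.device_capable ? lower_for_device(p) : lower_for_host(p)).c_str());
    }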
Abstract:
An electronic device includes a processor having a micro-operation queue, multiple scheduler entries, and scheduler compression logic. When a pair of micro-operations in the micro-operation queue is compressible in accordance with one or more compressibility rules, the scheduler compression logic acquires the pair from the micro-operation queue and stores information from both micro-operations into different portions of a single scheduler entry. In this way, the scheduler compression logic compresses the pair of micro-operations into the single scheduler entry.
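A minimal sketch of the compression step, packing two micro-operations into the two portions of one scheduler entry; the compressibility rule below is a hypothetical stand-in for the actual rules:

    // Hypothetical encoding: one scheduler entry holds two micro-op slots.
    #include <cstdint>
    #include <optional>
    #include <cstdio>

    struct MicroOp { uint8_t opcode; uint8_t dst; };

    struct SchedulerEntry {               // one entry, two micro-op portions
        MicroOp lo, hi;
        bool compressed;
    };

    // Hypothetical compressibility rule: the pair writes the same destination,
    // standing in for whatever rules the scheduler actually applies.
    bool compressible(const MicroOp& a, const MicroOp& b) { return a.dst == b.dst; }

    std::optional<SchedulerEntry> try_compress(const MicroOp& a, const MicroOp& b) {
        if (!compressible(a, b)) return std::nullopt;
        return SchedulerEntry{a, b, true};  // both micro-ops, one entry
    }

    int main() {
        auto e = try_compress({1, 7}, {2, 7});
        std::printf("compressed: %d\n", e.has_value());
    }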
Abstract:
A processor partitions a coherency directory into different regions for different processor cores and manages the number of entries allocated to each region based at least in part on monitored recall costs indicating expected resource costs for reallocating entries. Examples of monitored recall costs include a number of cache evictions associated with entry reallocation, the hit rate of each region of the coherency directory, and the like, or a combination thereof. By managing the entries allocated to each region based on the monitored recall costs, the processor ensures that processor cores associated with denser memory access patterns (that is, memory access patterns that more frequently access cache lines associated with the same memory pages) are assigned more entries of the coherency directory.
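A minimal sketch of recall-cost-driven rebalancing, using a hypothetical one-entry-at-a-time policy that shifts directory capacity toward the region with the higher monitored recall cost:

    // Hypothetical policy: move one entry from the cheapest-to-recall region
    // to the costliest, so denser access patterns accumulate more entries.
    #include <vector>
    #include <algorithm>
    #include <cstdio>

    struct Region { int entries; int recall_cost; };  // cost: e.g., evictions per recall

    void rebalance(std::vector<Region>& regions) {
        auto by_cost = [](const Region& a, const Region& b) {
            return a.recall_cost < b.recall_cost;
        };
        auto cheap  = std::min_element(regions.begin(), regions.end(), by_cost);
        auto costly = std::max_element(regions.begin(), regions.end(), by_cost);
        if (cheap != costly && cheap->entries > 0) {
            --cheap->entries;
            ++costly->entries;
        }
    }

    int main() {
        std::vector<Region> regions = {{64, 3}, {64, 11}};  // core 1 has denser accesses
        rebalance(regions);
        std::printf("%d %d\n", regions[0].entries, regions[1].entries);  // 63 65
    }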
Abstract:
A processing unit decomposes a matrix for partial processing at a processor-in-memory (PIM) device. The processing unit receives a matrix to be used as an operand in an arithmetic operation (e.g., a matrix multiplication operation). In response, the processing unit decomposes the matrix into two component matrices: a sparse component matrix and a dense component matrix. The processing unit itself performs the arithmetic operation with the dense component matrix, but sends the sparse component matrix to the PIM device for execution of the arithmetic operation. The processing unit thereby offloads at least some of the processing overhead to the PIM device, improving overall efficiency of the processing system.
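A minimal sketch of the decomposition, using a hypothetical magnitude threshold to separate the sparse component (sent to the PIM device) from the dense component (kept on the processing unit), with the two components summing element-wise to the original matrix:

    // Hypothetical threshold rule: large outlier entries form the sparse
    // component; everything else stays in the dense component.
    #include <vector>
    #include <cmath>
    #include <cstdio>

    using Matrix = std::vector<std::vector<double>>;

    void decompose(const Matrix& a, Matrix& dense, Matrix& sparse, double threshold) {
        size_t n = a.size(), m = a[0].size();
        dense.assign(n, std::vector<double>(m, 0.0));
        sparse.assign(n, std::vector<double>(m, 0.0));
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < m; ++j)
                (std::fabs(a[i][j]) > threshold ? sparse : dense)[i][j] = a[i][j];
    }

    int main() {
        Matrix a = {{1.0, 9.5}, {0.2, 0.1}};
        Matrix d, s;
        decompose(a, d, s, /*threshold=*/5.0);  // outliers go to the sparse part
        std::printf("sparse[0][1]=%.1f dense[0][0]=%.1f\n", s[0][1], d[0][0]);
    }

Because dense + sparse reproduces the original matrix and matrix multiplication distributes over addition, the two partial products computed on the processing unit and the PIM device can simply be summed to obtain the full result.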