Abstract:
In one embodiment, the present invention is directed to a processor having a plurality of cores and a cache memory coupled to the cores and including a plurality of partitions. The processor can further include logic to dynamically vary a size of the cache memory based on a memory boundedness of a workload executed on at least one of the cores. Other embodiments are described and claimed.
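The sizing decision described above lends itself to a simple illustration. Below is a minimal Python sketch, with hypothetical names and policy, of how the number of enabled cache partitions might scale with a memory-boundedness metric such as the fraction of cycles stalled on memory; the abstract does not specify the actual metric or scaling rule.

```python
# Hypothetical sketch: scale the number of enabled cache partitions with a
# memory-boundedness metric (memory stall cycles / total cycles).
def active_partitions(stall_cycles: int, total_cycles: int,
                      num_partitions: int, min_partitions: int = 1) -> int:
    boundedness = stall_cycles / total_cycles if total_cycles else 0.0
    # Compute-bound workloads keep few partitions powered; memory-bound
    # workloads enable more of the cache.
    wanted = round(boundedness * num_partitions)
    return max(min_partitions, min(num_partitions, wanted))

print(active_partitions(stall_cycles=800, total_cycles=1000, num_partitions=8))  # 6
```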
Abstract:
A system includes a read/write module and a caching module. The read/write module is configured to access a first portion of a recording surface of a rotating storage device. Data is stored on the first portion of the recording surface of the rotating storage device at a first density. The caching module is configured to cache data on a second portion of the recording surface of the rotating storage device at a second density. The second portion of the recording surface of the rotating storage device is separate from the first portion of the recording surface of the rotating storage device. The second density is less than the first density.
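As a rough illustration of the two-density layout, the Python sketch below models a main region and a physically separate cache region written at half the density; the sector counts and the 2:1 density ratio are assumptions for illustration, not values from the abstract.

```python
# Illustrative sketch: one recording surface split into a full-density main
# region and a separate, lower-density cache region.
from dataclasses import dataclass

@dataclass
class SurfaceRegion:
    start_sector: int
    num_sectors: int
    bits_per_sector: int   # recording density of this region

def capacity_bits(region: SurfaceRegion) -> int:
    return region.num_sectors * region.bits_per_sector

main = SurfaceRegion(start_sector=0, num_sectors=900_000,
                     bits_per_sector=4096 * 8)
# The cache region trades capacity for lower-density writes.
cache = SurfaceRegion(start_sector=900_000, num_sectors=100_000,
                      bits_per_sector=4096 * 4)
assert cache.bits_per_sector < main.bits_per_sector
print(capacity_bits(main), capacity_bits(cache))
```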
Abstract:
A multi-boundary address protection range is provided to prevent key operations from interfering with a data move performed by a dynamic memory relocation (DMR) move operation. Any key operation whose address falls within the move boundary address range is rejected back to the hypervisor. Further, logic exists across a set of parallel slices to synchronize the DMR move operation as it crosses a protected boundary address range.
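The rejection rule can be pictured with a short sketch. The Python below, using hypothetical names, checks each key operation's address against the protected move range and bounces in-range operations back to the hypervisor; the cross-slice synchronization logic is not modeled.

```python
# Minimal sketch: reject key operations whose address falls inside the
# boundary range protecting an in-flight DMR move.
class DMRGuard:
    def __init__(self) -> None:
        self.move_range = None   # (start, end) of the protected range, inclusive

    def begin_move(self, start: int, end: int) -> None:
        self.move_range = (start, end)

    def end_move(self) -> None:
        self.move_range = None

    def key_op(self, address: int) -> str:
        if self.move_range and self.move_range[0] <= address <= self.move_range[1]:
            return "REJECT_TO_HYPERVISOR"   # hypervisor retries after the move
        return "ACCEPT"

guard = DMRGuard()
guard.begin_move(0x1000, 0x1FFF)
print(guard.key_op(0x1800))  # REJECT_TO_HYPERVISOR
print(guard.key_op(0x3000))  # ACCEPT
```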
Abstract:
An aspect includes receiving a fetch request for a data block at a cache memory system whose cache memory is partitioned into a plurality of cache data ways, including a cache data way that contains the data block. The data block is fetched, and it is determined whether in-line error correction code (ECC) checking and correcting should be bypassed. The determination is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, the fetched data block is returned to the requestor and an ECC process is performed for the fetched data block subsequent to the return. Based on determining that in-line ECC checking and correcting should not be bypassed, the ECC process is performed for the fetched data block and the fetched data block is returned to the requestor subsequent to the ECC process.
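The two orderings, return-then-check versus check-then-return, reduce to a small branch on the per-way bypass bit. The Python sketch below uses hypothetical names and a stubbed ECC routine; it shows only the control flow, not an actual ECC implementation.

```python
# Sketch of per-way ECC bypass: the bypass indicator for the hit way decides
# whether ECC runs before or after the block is returned to the requestor.
def ecc_check_and_correct(block: bytes) -> None:
    """Stand-in for the ECC pipeline; a real design verifies and fixes bits."""

def fetch(way: int, data_ways: list, bypass: list, requestor) -> None:
    block = data_ways[way]
    if bypass[way]:
        requestor(block)              # return immediately,
        ecc_check_and_correct(block)  # then check/correct afterwards
    else:
        ecc_check_and_correct(block)  # correct first,
        requestor(block)              # then return

fetch(0, data_ways=[b"\x00" * 64], bypass=[True], requestor=print)
```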
Abstract:
In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
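The masking step admits a compact illustration. Assuming a power-of-two transparent portion occupying the low addresses of the array (an assumption; the abstract only says the size is programmable), the decoder for a transparent request can simply mask away the high-order address bits:

```python
# Sketch: for transparent requests, mask the address down to the programmable
# transparent portion so the decoder cannot select a non-transparent location.
def decode(address: int, transparent_size: int, is_transparent: bool) -> int:
    if is_transparent:
        return address & (transparent_size - 1)   # confine to transparent part
    return address   # non-transparent: software selects the location directly

print(hex(decode(0xFFFF, transparent_size=0x1000, is_transparent=True)))   # 0xfff
print(hex(decode(0xFFFF, transparent_size=0x1000, is_transparent=False)))  # 0xffff
```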
Abstract:
A multi-queue cache is configured with an initial configuration, where the initial configuration includes one or more queues for storing data items. Each of the one or more queues has an initial size. Thereafter, the multi-queue cache is operated according to a multi-queue cache replacement algorithm. During operation, access patterns for the multi-queue cache are analyzed. Based on the access patterns, an updated configuration for the multi-queue cache is determined. Thereafter, the configuration of the multi-queue cache is modified during operation. The modifying includes adjusting the size of at least one of the one or more queues according to the determined updated configuration for the multi-queue cache.
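A minimal runnable sketch of such a cache is given below, assuming just two queues (a "cold" queue for first-touch items and a "hot" queue for re-referenced ones) and LRU eviction within each queue; the actual replacement algorithm and the access-pattern analysis that drives resizing are not specified by the abstract.

```python
# Sketch of a resizable two-queue cache: items enter "cold", are promoted to
# "hot" on a second access, and per-queue capacities can change at runtime.
from collections import OrderedDict

class MultiQueueCache:
    def __init__(self, sizes: dict) -> None:
        self.sizes = dict(sizes)
        self.queues = {name: OrderedDict() for name in sizes}

    def _evict(self, name: str) -> None:
        while len(self.queues[name]) > self.sizes[name]:
            self.queues[name].popitem(last=False)   # drop least recently used

    def access(self, key, value) -> None:
        if key in self.queues["cold"]:               # second hit: promote
            del self.queues["cold"][key]
            self.queues["hot"][key] = value
        elif key in self.queues["hot"]:
            self.queues["hot"].move_to_end(key)
        else:
            self.queues["cold"][key] = value
        for name in self.queues:
            self._evict(name)

    def resize(self, sizes: dict) -> None:
        """Apply an updated configuration derived from observed access patterns."""
        self.sizes.update(sizes)
        for name in sizes:
            self._evict(name)

cache = MultiQueueCache({"cold": 4, "hot": 4})
cache.access("x", 1); cache.access("x", 1)   # second access promotes "x"
cache.resize({"hot": 2})                      # shrink the hot queue in place
```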
Abstract:
A method, system and computer program product are provided for implementing coherent accelerator function isolation for virtualization in an input/output (IO) adapter in a computer system. A coherent accelerator provides accelerator function units (AFUs), each of which is adapted to operate independently of the other AFUs to perform a computing task that can be implemented within application software on a processor. Each AFU has access to the system memory bound to the application software and is adapted to make copies of that memory within an AFU memory-cache. As part of this memory coherency domain, each of the AFU memory-cache and the processor memory-cache is adapted to be aware of changes to data held in common in either cache, as well as of data changed in memory of which the respective cache contains a copy.
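The coherency-awareness requirement can be caricatured with a toy invalidation scheme. In the Python sketch below (hypothetical names; real coherence protocols track line states such as MESI rather than simply dropping lines), a write in either cache invalidates any copy of the same address held by its peer:

```python
# Toy sketch of two caches in one coherency domain: a write in either cache
# invalidates the peer's copy of that line.
class CoherentCache:
    def __init__(self, name: str) -> None:
        self.name = name
        self.lines = {}    # address -> cached value
        self.peers = []

    def write(self, addr: int, value: int) -> None:
        self.lines[addr] = value
        for peer in self.peers:
            peer.lines.pop(addr, None)   # invalidate stale copies

cpu, afu = CoherentCache("cpu"), CoherentCache("afu")
cpu.peers, afu.peers = [afu], [cpu]
afu.lines[0x40] = 1        # AFU caches a copy of system memory
cpu.write(0x40, 2)         # processor update invalidates the AFU's copy
assert 0x40 not in afu.lines
```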
Abstract:
An approach is provided for segmenting a cache into one or more cache segments and synchronizing the cache segments. A cache platform causes, at least in part, a segmentation of at least one cache into one or more cache segments. The cache platform further determines that at least one cache segment of the one or more cache segments is invalid. The cache platform also causes, at least in part, a synchronization of the at least one cache segment. The approach allows for a dynamic optimization of the synchronization of the cache segments based on one or more characteristics associated with the devices and/or the connection associated with the cache synchronization.
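One simple way to picture per-segment synchronization is with version numbers, as in the Python sketch below; the versioning scheme and the fetch callback are assumptions for illustration, since the abstract does not say how invalidity is detected.

```python
# Sketch: re-synchronize only the cache segments whose version lags the source.
def sync_invalid_segments(local: dict, source: dict, fetch) -> list:
    """Return the names of segments that had to be refreshed."""
    refreshed = []
    for name, version in source.items():
        if local.get(name, -1) < version:   # segment is stale/invalid
            fetch(name)                     # pull just this segment
            local[name] = version
            refreshed.append(name)
    return refreshed

print(sync_invalid_segments({"a": 1, "b": 1}, {"a": 1, "b": 2},
                            fetch=lambda name: None))   # ['b']
```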
Abstract:
Technology is provided for partitioning a shared unified cache in a multi-processor computer system. The technology can receive a request to allocate a portion of a shared unified cache memory for storing only executable instructions, partition the cache memory into multiple partitions, and allocate one of the partitions for storing only executable instructions. The technology can further determine the size of the portion of the cache memory to be allocated for storing only executable instructions as a function of the size of the multi-processor's L1 instruction cache and the number of cores in the multi-processor.
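The sizing rule, partition size as a function of L1 instruction-cache size and core count, might look like the following Python sketch; the scale factor and way granularity are illustrative assumptions, not values given in the abstract.

```python
# Sketch of the sizing heuristic: the instruction-only partition grows with
# the per-core L1 I-cache size and the core count, rounded up to whole ways.
def instruction_partition_bytes(l1_icache_bytes: int, num_cores: int,
                                way_bytes: int, scale: int = 2) -> int:
    wanted = scale * l1_icache_bytes * num_cores
    ways = -(-wanted // way_bytes)          # ceiling division to whole ways
    return ways * way_bytes

# e.g. 32 KiB L1 I-cache per core, 8 cores, 512 KiB cache ways
print(instruction_partition_bytes(32 * 1024, 8, 512 * 1024))   # 524288
```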