Abstract:
A system and method for effectively scheduling read and write operations among a plurality of solid-state storage devices. A computer system comprises client computers and data storage arrays coupled to one another via a network. A data storage array utilizes solid-state drives and Flash memory cells for data storage. A storage controller within a data storage array comprises an I/O scheduler. The characteristics of the corresponding storage devices are used to schedule I/O requests to the storage devices in order to maintain relatively consistent response times at predicted times. To reduce the likelihood of unscheduled behaviors of the storage devices, the storage controller is configured to schedule proactive operations on the storage devices that reduce the number of occurrences of such behaviors.
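As a minimal sketch of the proactive-scheduling idea, the Python fragment below triggers an internal maintenance operation on each device at controller-chosen intervals, so that background work such as flash garbage collection happens at predicted times rather than interrupting queued reads. The class, the interval, and the MAINTENANCE token are illustrative assumptions, not the patented mechanism.

    import time
    from collections import deque

    class ProactiveIOScheduler:
        """Sketch: trigger device maintenance proactively so background
        work (e.g., flash garbage collection) happens at predicted times
        instead of interrupting latency-sensitive reads."""

        def __init__(self, devices, maintenance_interval=5.0):
            self.queues = {dev: deque() for dev in devices}
            self.last_maintenance = {dev: time.monotonic() for dev in devices}
            self.maintenance_interval = maintenance_interval  # seconds (assumed tunable)

        def submit(self, dev, request):
            self.queues[dev].append(request)

        def dispatch(self, dev):
            now = time.monotonic()
            # Proactively schedule internal maintenance before the device
            # would otherwise start it on its own at an unpredictable moment.
            if now - self.last_maintenance[dev] >= self.maintenance_interval:
                self.last_maintenance[dev] = now
                return ("MAINTENANCE", dev)          # hypothetical operation token
            if self.queues[dev]:
                return ("IO", self.queues[dev].popleft())
            return None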
Abstract:
The disclosed embodiments relate to a method for dynamically changing a prefetching configuration in a computer system, wherein the prefetching configuration determines an ahead distance that specifies how many references ahead to prefetch for each stream. During operation of the computer system, the method keeps track of one or more stream lengths, wherein a stream is a sequence of memory references with a constant stride. The method then dynamically changes the prefetching configuration for the computer system based on the stream lengths observed in a most-recent window of time.
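A minimal Python sketch of the described tracking-and-tuning loop; the window size, the ahead-distance bounds, and the median-based scaling rule are illustrative assumptions.

    from collections import deque

    class AheadDistanceTuner:
        """Sketch: track lengths of constant-stride streams in a recent
        window and scale the prefetch ahead distance with the typical
        stream length observed."""

        def __init__(self, window=64, min_ahead=1, max_ahead=16):
            self.lengths = deque(maxlen=window)   # most-recent stream lengths
            self.min_ahead, self.max_ahead = min_ahead, max_ahead
            self.last_addr = None
            self.stride = None
            self.run = 0

        def observe(self, addr):
            if self.last_addr is not None:
                stride = addr - self.last_addr
                if stride == self.stride and stride != 0:
                    self.run += 1                 # stream continues
                else:
                    if self.run > 1:
                        self.lengths.append(self.run)  # stream ended; record its length
                    self.stride, self.run = stride, 1
            self.last_addr = addr

        def ahead_distance(self):
            if not self.lengths:
                return self.min_ahead
            typical = sorted(self.lengths)[len(self.lengths) // 2]  # median length
            # Prefetch up to half the typical stream length ahead (assumed heuristic).
            return max(self.min_ahead, min(self.max_ahead, typical // 2))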
Abstract:
Embodiments of the invention describe an apparatus, system, and method for workload-adaptive address mapping. Embodiments of the invention may receive a request to initialize a system memory including a plurality of memory banks. Using a plurality of memory address mapping schemes as memory settings for the system memory, a system characterization workload is executed during the initialization of the system memory, the system characterization workload including a plurality of transactions directed towards the system memory. Embodiments of the invention may monitor the target addresses of the plurality of transactions directed towards the system memory. One of the plurality of memory address mapping schemes is selected based, at least in part, on the target addresses of the plurality of transactions.
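The following Python sketch illustrates one way such a selection could work: replay the characterization workload's target addresses under each candidate bank-mapping scheme and keep the scheme with the most uniform bank usage. The two schemes, the bank count, and the imbalance metric are assumptions for illustration.

    from collections import Counter

    NUM_BANKS = 16

    # Two illustrative bank-mapping schemes; the bit positions are assumptions.
    def low_order_bits(addr):
        return (addr >> 6) % NUM_BANKS       # bank chosen from cache-line bits

    def row_interleaved(addr):
        return (addr >> 12) % NUM_BANKS      # bank chosen from row-address bits

    def select_mapping(trace, schemes):
        """Replay the characterization workload's target addresses under each
        candidate scheme and pick the scheme with the least bank imbalance."""
        best, best_imbalance = None, float("inf")
        ideal = len(trace) / NUM_BANKS       # perfectly uniform per-bank load
        for name, scheme in schemes.items():
            counts = Counter(scheme(addr) for addr in trace)
            imbalance = max(counts.values()) / ideal
            if imbalance < best_imbalance:
                best, best_imbalance = name, imbalance
        return best

    # Page-strided trace: every access lands in bank 0 under the low-order
    # scheme, so the row-interleaved scheme is selected.
    trace = [i * 4096 for i in range(4096)]
    print(select_mapping(trace, {"low_order": low_order_bits,
                                 "row_interleaved": row_interleaved}))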
Abstract:
Storing data in persistent hybrid memory includes promoting a memory block from non-volatile memory to a cache based on a usage of the memory block according to a promotion policy, tracking modifications to the memory block while it is in the cache, and writing the memory block back into the non-volatile memory after it is modified in the cache, based on a writing policy that keeps the number of modified memory blocks at or below a number threshold while maintaining the memory block in the cache.
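A compact Python sketch of the promote/track/write-back flow, assuming a usage-counter promotion policy and a write-allocate cache; the thresholds and dictionary-based stores are illustrative.

    class HybridMemory:
        """Sketch: blocks are promoted from non-volatile memory (NVM) into
        a cache once they are used often enough, and dirty blocks are
        written back to NVM whenever their count exceeds a threshold,
        while staying cached."""

        def __init__(self, promote_after=3, dirty_limit=4):
            self.nvm = {}                   # block id -> data (persistent store)
            self.cache = {}                 # block id -> data
            self.dirty = set()              # modified blocks still in the cache
            self.use_count = {}             # promotion policy: usage counter
            self.promote_after = promote_after
            self.dirty_limit = dirty_limit  # the "number threshold" above

        def read(self, block):
            if block in self.cache:
                return self.cache[block]
            self.use_count[block] = self.use_count.get(block, 0) + 1
            if self.use_count[block] >= self.promote_after:
                self.cache[block] = self.nvm[block]     # promote on heavy use
            return self.nvm[block]

        def write(self, block, data):
            self.cache[block] = data                    # modify in the cache
            self.dirty.add(block)
            while len(self.dirty) > self.dirty_limit:   # writing policy
                victim = self.dirty.pop()
                self.nvm[victim] = self.cache[victim]   # persist; block stays cached

    mem = HybridMemory()
    mem.nvm = {"blk0": b"old"}
    for _ in range(3):
        mem.read("blk0")          # third read promotes blk0 into the cache
    mem.write("blk0", b"new")     # tracked as dirty until written back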
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for labeled caching techniques. In one aspect, a method includes placing a plurality of items into a cache, each item having a label based on metadata associated with the item. A number of accesses are performed to respective items in the cache. A per-label stack distance histogram is determined by computing, for each label, a plurality of stack distances for accesses to items having that label. The cache is adjusted using the per-label stack distance histograms.
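To make the stack-distance bookkeeping concrete, here is a Python sketch that maintains an LRU stack and bins each access's reuse depth into a histogram keyed by the item's label; the example labels and trace are hypothetical.

    from collections import defaultdict

    def per_label_stack_distances(accesses, labels):
        """Sketch: compute an LRU stack distance for every access and bin
        it into a histogram keyed by the accessed item's label.
        `accesses` is a sequence of item ids; `labels` maps item -> label."""
        stack = []                                    # LRU stack, most recent last
        hist = defaultdict(lambda: defaultdict(int))  # label -> distance -> count
        for item in accesses:
            if item in stack:
                depth = len(stack) - 1 - stack.index(item)  # distinct items since last use
                hist[labels[item]][depth] += 1
                stack.remove(item)
            else:
                hist[labels[item]]["inf"] += 1        # cold miss: infinite distance
            stack.append(item)
        return hist

    labels = {"a": "thumbnail", "b": "thumbnail", "c": "index"}  # hypothetical metadata labels
    print(per_label_stack_distances(["a", "b", "a", "c", "b", "b"], labels))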
Abstract:
Embodiments disclosed in the detailed description include hybrid write-through/write-back cache policy managers, and related systems and methods. A cache write policy manager is configured to determine whether at least two caches among a plurality of parallel caches are active. If none of the one or more other caches is active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-back cache policy. In this manner, the cache write policy manager may conserve power and/or increase performance of a singly active processor core. If any of the one or more other caches is active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-through cache policy. In this manner, the cache write policy manager facilitates data coherency among the parallel caches when multiple processor cores are active.
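A minimal Python sketch of that decision rule, assuming the manager can see each cache's active flag; the class and field names are illustrative.

    def choose_write_policy(active_caches):
        """With a single active cache there is no peer to keep coherent, so
        write-back saves power and bandwidth; with two or more active
        caches, write-through keeps the peers coherent."""
        return "write-back" if len(active_caches) < 2 else "write-through"

    class CachePolicyManager:
        def __init__(self, caches):
            self.caches = caches                     # cache id -> {"active": bool}

        def apply(self):
            active = [c for c, s in self.caches.items() if s["active"]]
            policy = choose_write_policy(active)
            for c in active:
                self.caches[c]["policy"] = policy    # instruct each active cache
            return policy

    mgr = CachePolicyManager({"core0": {"active": True}, "core1": {"active": False}})
    print(mgr.apply())   # -> "write-back" (only one core's cache is active)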
Abstract:
An approach is provided for identifying cache extension sizes that correspond to different partitions running on a computer system. The approach extends a first hardware cache, associated with a first processing core included in the processor's silicon substrate, with a first memory allocation from a system memory area, the system memory area being external to the silicon substrate and the first memory allocation corresponding to one of the plurality of cache extension sizes that corresponds to one of the partitions running on the computer system. The approach further extends a second hardware cache, associated with a second processing core also included in the processor's silicon substrate, with a second memory allocation from the system memory area, the second memory allocation corresponding to another of the cache extension sizes that corresponds to a different partition being executed by the second processing core.
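As an illustration of the allocation step, the Python sketch below carves per-partition extension sizes out of a shared system memory area, one allocation per core; the partitions, sizes, and the simple free-space accounting are assumptions.

    class CacheExtender:
        """Sketch: each partition has a configured extension size; the
        on-die cache of the core running that partition is extended with
        an allocation of that size carved out of external system memory."""

        def __init__(self, system_memory_bytes):
            self.free = system_memory_bytes
            self.extensions = {}                     # core id -> (partition, size)

        def extend(self, core, partition, ext_sizes):
            size = ext_sizes[partition]              # per-partition extension size
            if size > self.free:
                raise MemoryError("system memory area exhausted")
            self.free -= size
            self.extensions[core] = (partition, size)

    sizes = {"db": 64 << 20, "web": 16 << 20}        # hypothetical partitions and sizes
    ext = CacheExtender(system_memory_bytes=256 << 20)
    ext.extend(core=0, partition="db", ext_sizes=sizes)
    ext.extend(core=1, partition="web", ext_sizes=sizes)
    print(ext.extensions)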
Abstract:
Example methods, apparatus, and articles of manufacture to access memory are disclosed. A disclosed example method involves receiving at least one runtime characteristic associated with accesses to contents of a memory page and dynamically adjusting a memory fetch width for accessing the memory page based on the at least one runtime characteristic.
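A one-function Python sketch of such an adjustment, assuming the runtime characteristic is the fraction of fetched bytes actually used; the thresholds and width bounds are illustrative.

    def adjust_fetch_width(width, used_fraction, min_width=64, max_width=512):
        """Sketch: widen fetches for a page whose fetched bytes are mostly
        used (high spatial locality), narrow them when most fetched bytes
        go unused."""
        if used_fraction > 0.75 and width < max_width:
            return width * 2      # locality is high: fetch more per access
        if used_fraction < 0.25 and width > min_width:
            return width // 2     # most bytes wasted: fetch less per access
        return width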
Abstract:
In one embodiment, the present invention includes a prefetching engine to detect when data access strides in a memory fall into a range, to compute a predicted next stride, to selectively prefetch a cache line using the predicted next stride, and to dynamically control prefetching. Other embodiments are also described and claimed.
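The following Python sketch captures the described behavior under stated assumptions: strides are accepted only within a range, a repeated stride yields a predicted next address, and a confidence counter gates (dynamically controls) prefetching. The range bounds and threshold are illustrative.

    class StridePrefetcher:
        """Sketch: detect when successive access strides fall into an
        accepted range, predict the next stride, and prefetch only while
        a confidence counter says the pattern holds."""

        def __init__(self, min_stride=1, max_stride=2048, threshold=2):
            self.min_stride, self.max_stride = min_stride, max_stride
            self.last_addr = None
            self.last_stride = 0
            self.confidence = 0
            self.threshold = threshold      # dynamic control: prefetch gate

        def access(self, addr):
            prefetch = None
            if self.last_addr is not None:
                stride = addr - self.last_addr
                if self.min_stride <= abs(stride) <= self.max_stride:
                    self.confidence = self.confidence + 1 if stride == self.last_stride else 1
                    self.last_stride = stride
                    if self.confidence >= self.threshold:
                        prefetch = addr + stride      # address of the predicted next line
                else:
                    self.confidence = 0               # stride out of range: throttle
            self.last_addr = addr
            return prefetch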
Abstract:
Method and apparatus for flushing cached writeback data to a storage array. Sets of writeback data are accumulated in a cache memory in an array, with a view toward maintaining a substantially uniform distribution of the data across different locations of the storage array. The arrayed sets of data are thereafter transferred from the cache memory to the storage array substantially at the rate at which additional sets of writeback data are provided to the cache memory by a host. Each set of writeback data preferably comprises a plurality of contiguous data blocks and is preferably written (flushed) to the storage array in conjunction with the operation of a separate access command within a selected proximity range of the data with respect to the storage array. A stripe data descriptor (SDD) is preferably maintained for each set of writeback data in the array.
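A rough Python sketch of the rate-matched flushing loop, where each incoming writeback set triggers roughly one flush and a per-stripe record stands in for the stripe data descriptor; the data structures and the one-in/one-out pacing are simplifying assumptions.

    from collections import deque

    class WritebackFlusher:
        """Sketch: writeback sets accumulate in cache, and each incoming
        set triggers roughly one flush so the drain rate tracks the fill
        rate. The SDD here is just a per-stripe record of which
        contiguous blocks are dirty."""

        def __init__(self):
            self.pending = deque()          # sets of contiguous dirty blocks
            self.sdd = {}                   # stripe id -> dirty block ranges

        def host_write(self, stripe, first_block, count):
            self.sdd.setdefault(stripe, []).append((first_block, count))
            self.pending.append((stripe, first_block, count))
            self.flush_one()                # drain at roughly the arrival rate

        def flush_one(self):
            if self.pending:
                stripe, first, count = self.pending.popleft()
                # A real controller would schedule this write alongside a
                # nearby access command to stay within the proximity range.
                print(f"flush stripe {stripe}: blocks {first}..{first + count - 1}")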