Abstract:
An apparatus is described that includes a memory controller to interface to a multi-level system memory. The memory controller includes least recently used (LRU) circuitry to keep track of least recently used cache lines kept in a higher level of the multi-level system memory, and idle time predictor circuitry to predict idle times of a lower level of the multi-level system memory. The memory controller is to write one or more lesser used cache lines from the higher level to the lower level in response to the idle time predictor circuitry indicating that an observed idle time of the lower level is expected to be long enough to accommodate the write.
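A minimal Python sketch of the scheme this abstract describes; the IdlePredictor class, the exponentially weighted average predictor, and the WRITE_COST budget model are illustrative assumptions, not details taken from the patent:

    from collections import OrderedDict

    WRITE_COST = 2.0  # assumed time units to write one cache line to the lower level

    class IdlePredictor:
        """Predicts the next idle interval of the lower memory level from an
        exponentially weighted average of observed idle times (assumed policy)."""
        def __init__(self, alpha=0.25):
            self.alpha = alpha
            self.estimate = 0.0

        def observe(self, idle_time):
            self.estimate = self.alpha * idle_time + (1 - self.alpha) * self.estimate

    def flush_lru_lines(upper, predictor):
        """Write back as many least recently used lines as the predicted
        idle window of the lower level can accommodate."""
        budget = int(predictor.estimate // WRITE_COST)
        evicted = []
        for _ in range(min(budget, len(upper))):
            addr, line = upper.popitem(last=False)  # oldest = least recently used
            evicted.append((addr, line))            # stand-in for the actual write
        return evicted

    upper = OrderedDict((a, f"line{a}") for a in range(8))  # LRU order: oldest first
    pred = IdlePredictor()
    for t in (5.0, 7.0, 6.0):   # observed idle times of the lower level
        pred.observe(t)
    print(flush_lru_lines(upper, pred))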
Abstract:
A method and controller for implementing enhanced storage adapter write cache management, and a design structure on which the subject controller circuit resides, are provided. The controller includes a hardware write cache engine implementing hardware acceleration for storage write cache management. The controller manages write cache data and metadata with minimal or no firmware involvement, greatly enhancing performance.
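A toy Python model of the idea, under loose assumptions: the HwWriteCacheEngine class, its destage policy, and the dirty-bit metadata are hypothetical stand-ins for the hardware engine, shown only to make the combined data-plus-metadata fast path concrete:

    class HwWriteCacheEngine:
        """Toy model of a hardware write cache engine: data and per-line
        metadata are managed together in the fast path, with no firmware
        callback needed per write (assumed simplification)."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.lines = {}           # lba -> data
            self.meta = {}            # lba -> {"dirty": bool}

        def host_write(self, lba, data):
            if lba not in self.lines and len(self.lines) >= self.capacity:
                self.destage_one()    # make room by writing a dirty line out
            self.lines[lba] = data
            self.meta[lba] = {"dirty": True}

        def destage_one(self):
            lba = next(iter(self.lines))   # pick any resident line
            if self.meta[lba]["dirty"]:
                pass                       # stand-in for the storage write
            del self.lines[lba], self.meta[lba]

    engine = HwWriteCacheEngine(capacity=2)
    for lba, data in [(0, b"a"), (1, b"b"), (2, b"c")]:
        engine.host_write(lba, data)
    print(sorted(engine.lines))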
Abstract:
Embodiments herein pre-load memory translations used to perform virtual-to-physical memory translations in a computing system that switches between virtual machines (VMs). Before a processor switches from executing the current VM to the new VM, a hypervisor may retrieve previously saved memory translations for the new VM and load them into cache or main memory. Thus, when the new VM begins to execute, the corresponding memory translations are already in cache rather than in storage, and the processor does not have to wait to pull them from slow storage devices (e.g., a hard disk drive) when they are needed to perform virtual-to-physical address translations.
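A small Python sketch of the pre-load step; the TranslationCache class, the saved_translations table, and the switch_vm helper are illustrative names modeling the behavior, not the patent's structures:

    class TranslationCache:
        """Toy model of a cache holding virtual-to-physical translations."""
        def __init__(self):
            self.entries = {}

        def preload(self, translations):
            self.entries.update(translations)

        def lookup(self, vaddr):
            return self.entries.get(vaddr)  # a miss would go to slow storage

    # Translations saved by the hypervisor when each VM was last descheduled.
    saved_translations = {
        "vm1": {0x1000: 0xA000, 0x2000: 0xB000},
        "vm2": {0x1000: 0xC000, 0x3000: 0xD000},
    }

    def switch_vm(cache, next_vm):
        """Before the processor starts running next_vm, pull its saved
        translations into the cache so its first accesses hit."""
        cache.preload(saved_translations[next_vm])

    cache = TranslationCache()
    switch_vm(cache, "vm2")
    print(hex(cache.lookup(0x3000)))  # hits without touching the hard disk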
Abstract:
A method for managing a multi-level cache of a host comprising a primary cache, which is a volatile memory such as a DRAM memory, and a secondary cache, which is a non-volatile memory such as an SSD memory. According to the method, if a segment identification data has been computed in a segment hash table, a corresponding processing core checks whether a corresponding packet is stored in a first portion of the primary cache or in a second portion of the secondary cache:
- if the packet is stored in said first portion, said corresponding packet is sent back to a requester and a request counter is incremented, a DRAM segment map pointer entering a DRAM-LRU linked list and being prioritized by being moved to the top of said DRAM-LRU linked list;
- if the packet is stored in said second portion, said corresponding packet is passed to an SSD core so as to copy the entire given segment from the secondary cache to the primary cache; the request is then passed back to said corresponding processing core in order to create the DRAM segment map pointer pointing to the first portion storing said corresponding packet, to be entered in said DRAM-LRU linked list, the SSD segment map pointer also being entered in an SSD-LRU linked list; the DRAM segment map pointer and the SSD segment map pointer are respectively prioritized by being moved to the top of said DRAM-LRU linked list and said SSD-LRU linked list; then said corresponding packet is sent back to said requester.
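A simplified Python model of the two LRU lists and the SSD-to-DRAM promotion path, using OrderedDict as the linked list; the TwoTierCache name is illustrative, and the request counter, processing cores, and SSD core are omitted:

    from collections import OrderedDict

    class TwoTierCache:
        """DRAM (primary) and SSD (secondary) tiers, each with its own LRU list."""
        def __init__(self, dram_size, ssd_size):
            self.dram = OrderedDict()
            self.ssd = OrderedDict()
            self.dram_size, self.ssd_size = dram_size, ssd_size

        def get(self, segment_id):
            if segment_id in self.dram:               # first portion: DRAM hit
                self.dram.move_to_end(segment_id)     # move to top of DRAM-LRU list
                return self.dram[segment_id]
            if segment_id in self.ssd:                # second portion: SSD hit
                self.ssd.move_to_end(segment_id)      # reprioritize SSD-LRU entry
                data = self.ssd[segment_id]
                self._insert_dram(segment_id, data)   # copy whole segment to DRAM
                return data
            return None                               # miss in both tiers

        def _insert_dram(self, segment_id, data):
            if len(self.dram) >= self.dram_size:
                old_id, old_data = self.dram.popitem(last=False)  # evict LRU segment
                if len(self.ssd) >= self.ssd_size:
                    self.ssd.popitem(last=False)
                self.ssd[old_id] = old_data           # demote to the SSD tier
            self.dram[segment_id] = data

    cache = TwoTierCache(dram_size=2, ssd_size=4)
    cache.ssd["seg7"] = b"packet bytes"
    print(cache.get("seg7"))       # promoted from SSD to DRAM
    print("seg7" in cache.dram)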
Abstract:
Methods and apparatus related to improving storage cache performance by using compressibility of the data as a criterion for cache insertion or allocation and deletion are described. In one embodiment, memory stores one or more cache lines corresponding to a compressed version of data (e.g., in response to a determination that the data is compressible). It is determined whether the one or more cache lines are to be retained or inserted in the memory based at least in part on an indication of compressibility of the data. Other embodiments are also disclosed and claimed.
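One plausible reading of the compressibility criterion, sketched in Python with zlib as the compressor; the 0.75 threshold and the should_cache helper are assumptions for illustration:

    import os
    import zlib

    COMPRESSIBILITY_THRESHOLD = 0.75  # assumed cutoff; not from the patent

    def compressibility(data: bytes) -> float:
        """Compressed-to-original size ratio; lower means more compressible."""
        return len(zlib.compress(data)) / len(data)

    def should_cache(data: bytes) -> bool:
        """Insert (or retain) a cache line only if its data compresses well."""
        return compressibility(data) <= COMPRESSIBILITY_THRESHOLD

    cache = {}
    for key, payload in [("text", b"ab" * 512), ("noise", os.urandom(1024))]:
        if should_cache(payload):
            cache[key] = zlib.compress(payload)  # keep the compressed version
    print(sorted(cache))  # only the compressible payload is inserted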
Abstract:
A method includes receiving, at a first cache device (135A-D), a request to send a first asset to a second device (110, 135A-D); determining whether the first asset is stored at the first cache device; when the determining indicates that the first asset is not stored at the first cache device: obtaining, at the first cache device, the first asset, performing a comparison operation based on an average inter-arrival time of the first asset with respect to the first cache device and a characteristic time of the first cache device, the characteristic time being the average period of time assets cached at the first cache device are cached before being evicted, and determining whether or not to cache the obtained first asset at the first cache device based on the comparison; and sending the obtained first asset to the second device.
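A hedged Python sketch of the comparison operation: an asset is cached only when its average inter-arrival time is below the cache's characteristic time. The EdgeCache class and the equal-weight running average are illustrative choices, not the patent's:

    import time

    class EdgeCache:
        """Caches an asset only if it arrives more often than the cache turns over."""
        def __init__(self, characteristic_time):
            # Average time an asset stays cached before eviction (seconds).
            self.characteristic_time = characteristic_time
            self.cache = {}
            self.last_seen = {}
            self.avg_interarrival = {}

        def _update_interarrival(self, asset_id, now):
            if asset_id in self.last_seen:
                gap = now - self.last_seen[asset_id]
                prev = self.avg_interarrival.get(asset_id, gap)
                self.avg_interarrival[asset_id] = 0.5 * prev + 0.5 * gap
            self.last_seen[asset_id] = now

        def request(self, asset_id, fetch, now=None):
            now = time.monotonic() if now is None else now
            self._update_interarrival(asset_id, now)
            if asset_id in self.cache:
                return self.cache[asset_id]
            asset = fetch(asset_id)  # obtain from origin or an upstream cache
            avg = self.avg_interarrival.get(asset_id)
            # Cache only if requests recur faster than the cache's turnover time.
            if avg is not None and avg < self.characteristic_time:
                self.cache[asset_id] = asset
            return asset  # always sent on to the requesting device

    cache = EdgeCache(characteristic_time=10.0)
    fetch = lambda aid: f"<{aid} bytes>"
    cache.request("video1", fetch, now=0.0)
    cache.request("video1", fetch, now=3.0)   # avg gap 3s < 10s, so it is cached
    print("video1" in cache.cache)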
Abstract:
Techniques and mechanisms to provide a cache of cache tags for use in determining an access to cached data. In an embodiment, a tag storage stores a first set including tags associated with respective data locations of a cache memory, and a cache of cache tags stores a subset of the tags stored by the tag storage. Where a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored together. In another embodiment, any storage of tags of the first set to the cache of cache tags includes storage of the tags of the first set to only a first portion of the cache of cache tags. A replacement table is maintained for use in evicting or replacing cached tags based on an indicated level of activity for a set of the cache of cache tags.
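An illustrative Python model, assuming the set-granular fill and the activity-based replacement table work roughly as follows; the class names and the minimum-activity victim selection are assumptions:

    class TagCacheSet:
        """One set of the cache of cache tags: holds a whole tag group and an
        activity counter used as the replacement hint."""
        def __init__(self):
            self.group = None      # index of the tag-storage set currently held
            self.tags = []
            self.activity = 0

    class CacheOfCacheTags:
        def __init__(self, num_sets):
            self.sets = [TagCacheSet() for _ in range(num_sets)]

        def fill(self, set_index, tags):
            """Filling any tag of a tag-storage set brings in the whole set,
            replacing the least active entry (per the replacement table)."""
            victim = min(self.sets, key=lambda s: s.activity)
            victim.group, victim.tags, victim.activity = set_index, list(tags), 1

        def lookup(self, set_index, tag):
            for s in self.sets:
                if s.group == set_index:
                    s.activity += 1        # record activity for replacement
                    return tag in s.tags   # hit/miss against the cached tags
            return None                    # set not present; consult tag storage

    tag_storage = {0: ["t0", "t1"], 1: ["t2", "t3"]}  # full tag array
    coct = CacheOfCacheTags(num_sets=1)
    coct.fill(1, tag_storage[1])
    print(coct.lookup(1, "t2"), coct.lookup(0, "t0"))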
Abstract:
Technologies are generally described to manage MRAM cache writes in processors. In some examples, when a write request is received with data to be stored in an MRAM cache, the data may be evaluated to determine whether the data is to be further processed. In response to a determination that the data is to be further processed, the data may be stored in a write cache associated with the MRAM cache. In response to a determination that the data is not to be further processed, the data may be stored in the MRAM cache.
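A small Python sketch of the routing decision; the MramCacheController class and its drain step are hypothetical, shown only to illustrate how a write cache can absorb intermediate writes so the MRAM sees a single final write:

    class MramCacheController:
        """Routes writes either to a small write cache (data likely to be
        modified again soon) or directly to the MRAM cache."""
        def __init__(self):
            self.write_cache = {}   # fast buffer for data still being processed
            self.mram = {}          # MRAM cache: slower, costlier writes

        def write(self, addr, data, will_be_processed_further):
            if will_be_processed_further:
                self.write_cache[addr] = data   # absorb follow-on writes cheaply
            else:
                self.mram[addr] = data          # final value: one MRAM write

        def drain(self, addr):
            """When processing finishes, commit the final value to MRAM."""
            if addr in self.write_cache:
                self.mram[addr] = self.write_cache.pop(addr)

    ctrl = MramCacheController()
    ctrl.write(0x10, "partial sum", will_be_processed_further=True)
    ctrl.write(0x10, "updated sum", will_be_processed_further=True)
    ctrl.drain(0x10)                 # two writes cost only one MRAM write
    ctrl.write(0x20, "result", will_be_processed_further=False)
    print(ctrl.mram)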
Abstract:
In some examples of a virtual computing environment, multiple virtual machines may execute on a physical computing device while sharing the hardware components of the physical computing device. A hypervisor corresponding to the physical computing device may be configured to designate a portion of a cache to one of the virtual machines for storing data. The hypervisor may be further configured to identify hostile activities executed in the designated portion of the cache and, further still, to implement security measures on those virtual machines on which the identified hostile activities are executed.
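The abstract does not say how hostile activity is identified; one illustrative heuristic, sketched in Python, is to count the evictions each VM causes in its designated cache portion and quarantine VMs that thrash beyond a threshold. The EVICTION_RATE_LIMIT value and all names here are assumptions, not the patent's mechanism:

    from collections import defaultdict

    EVICTION_RATE_LIMIT = 1000  # assumed per-interval threshold for flagging a VM

    class HypervisorCacheMonitor:
        """Counts evictions each VM causes in its designated cache portion and
        flags VMs whose access pattern looks like deliberate cache thrashing."""
        def __init__(self):
            self.evictions = defaultdict(int)
            self.quarantined = set()

        def record_eviction(self, vm_id):
            self.evictions[vm_id] += 1

        def end_interval(self):
            for vm_id, count in self.evictions.items():
                if count > EVICTION_RATE_LIMIT:
                    self.quarantined.add(vm_id)   # apply a security measure
            self.evictions.clear()

    mon = HypervisorCacheMonitor()
    for _ in range(1500):
        mon.record_eviction("vm-3")
    mon.record_eviction("vm-1")
    mon.end_interval()
    print(mon.quarantined)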