Abstract:
The configuration of a cache is adjusted within a computer system that includes at least one entity that submits a stream of references, each reference including a location identifier that corresponds to a data storage location in a storage system. The reference stream is spatially sampled using reference hashing. Cache utility values are determined for each of a plurality of caching simulations, and an optimal configuration is selected based on the results of the simulations.
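The hash-based spatial sampling described above can be sketched briefly; the hash function, sampling rate, and helper names below are illustrative assumptions, not part of the disclosure. The key property is that every reference to the same location gets the same in-sample/out-of-sample decision, so per-location reuse behavior is preserved in the sampled stream.

```python
import hashlib

def sample_reference(location_id: str, sample_rate: float = 0.1) -> bool:
    """Spatial sampling by reference hashing (illustrative): a location is
    in the sample iff its hash falls below the sampling threshold, so all
    references to one location share the same decision."""
    h = int(hashlib.md5(location_id.encode()).hexdigest(), 16)
    return (h % 10_000) < sample_rate * 10_000

# A synthetic reference stream over 50 locations (hypothetical data).
stream = [f"block{i % 50}" for i in range(1000)]
sampled = [ref for ref in stream if sample_reference(ref)]
```

The sampled stream would then drive each of the candidate caching simulations at a fraction of the full stream's cost.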
Abstract:
Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from a main memory by writing supplemental data to the unfilled portions of the cache line. A cache memory controller may receive a cache memory access request with a supplemental write command for data smaller than a cache line. The cache memory controller may write supplemental data to the portions of the cache line not filled by the data in response to a write cache memory access request or a cache miss during a read cache memory access request. In the event of a cache miss, the cache memory controller may retrieve the data from the main memory, excluding any overfetch data, and write the data and the supplemental data to the cache line. Eliminating overfetching reduces the bandwidth and power required to retrieve data from main memory.
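The line-fill behavior can be sketched as follows; the 64-byte line size and the zero fill value are assumptions for illustration, as the abstract leaves the supplemental data unspecified.

```python
CACHE_LINE_SIZE = 64  # bytes; an assumed line size for illustration

def fill_line(data: bytes, supplemental: int = 0x00) -> bytes:
    """Place sub-line data into a full cache line and pad the unfilled
    portion with supplemental bytes, instead of overfetching that
    portion from main memory (a sketch of the idea)."""
    if len(data) > CACHE_LINE_SIZE:
        raise ValueError("data larger than a cache line")
    return data + bytes([supplemental]) * (CACHE_LINE_SIZE - len(data))

# A 4-byte write lands in a full 64-byte line without a main-memory fetch.
line = fill_line(b"\x01\x02\x03\x04")
```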
Abstract:
A computing device may allocate a plurality of blocks in the memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory. The computing device may further store a plurality of bandwidth-compressed graphics data into the respective plurality of blocks in the memory, wherein one or more of the plurality of bandwidth-compressed graphics data each has a size that is smaller than the fixed size. The computing device may further store data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.
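A minimal sketch of the block layout described above, with an assumed 256-byte fixed block size: the bandwidth-compressed data occupies the front of the block, and the associated data is stored in the block's otherwise-unused tail.

```python
BLOCK_SIZE = 256  # uniform fixed block size; an assumption for illustration

def pack_block(compressed: bytes, associated: bytes) -> bytes:
    """Store compressed graphics data at the front of a fixed-size block
    and associated data in the unused space at the end (illustrative
    layout; the patent does not fix where in the block either goes)."""
    unused = BLOCK_SIZE - len(compressed)
    if len(associated) > unused:
        raise ValueError("associated data does not fit in unused space")
    pad = unused - len(associated)
    return compressed + bytes(pad) + associated

# 100 bytes of compressed data leaves 156 bytes of unused space.
block = pack_block(b"x" * 100, b"meta")
```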
Abstract:
According to one embodiment, a storage device includes a nonvolatile storage medium, a volatile memory, and a controller. The volatile memory includes a cache area and a cache management area. The cache area is used to store, as write cache data, write data to be written to a user data area of the nonvolatile storage medium. The cache management area is used to store management information associated with the write cache data and including a compression size for the write cache data. The compression size is calculated in accordance with reception of a write command. The controller compresses, based on the management information, write cache data which has not been saved to a save area and needs to be compressed, and writes the compressed write cache data to the save area.
Abstract:
Methods, systems, and apparatuses are described for provisioning storage devices. An example method includes specifying a logical zone granularity for logical space associated with a disk drive. The method further includes provisioning a zone of a physical space of the disk drive based at least in part on the specified logical zone granularity. The method also includes storing compressed data in the zone in accordance with the provisioning.
Abstract:
As disclosed herein, a method, executed by a computer, for enabling live partition mobility using ordered memory migration includes receiving a request to initialize a migration of a logical partition (LPAR) to a destination system. The method further includes creating a list which includes memory page identifiers corresponding to memory pages of the LPAR. The memory page identifiers of the list are ordered according to a page transfer priority. The method further includes identifying memory pages of the LPAR that will be unmodified during an estimated duration of time of the migration. The method further includes updating the list, based on the identified memory pages of the LPAR that will be unmodified during the estimated duration of time of the migration. The method further includes migrating the LPAR based on the list. A computer system and a computer program product corresponding to the method are also disclosed herein.
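The ordering step can be sketched as follows; the function name and the stability predictor are hypothetical. Pages predicted to stay unmodified for the migration's estimated duration are transferred first, so the pages most likely to be dirtied again are deferred and need fewer re-transfers.

```python
def build_migration_list(page_ids, predicted_stable):
    """Order LPAR page identifiers by transfer priority (a sketch):
    pages expected to be unmodified during the migration go first,
    volatile pages go last."""
    stable = [p for p in page_ids if p in predicted_stable]
    volatile = [p for p in page_ids if p not in predicted_stable]
    return stable + volatile

# Pages 2 and 4 are predicted stable, so they lead the transfer order.
order = build_migration_list([1, 2, 3, 4], predicted_stable={2, 4})
```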
Abstract:
Methods and apparatus related to efficient Solid State Drive (SSD) data compression scheme and layout are described. In one embodiment, logic, coupled to non-volatile memory, receives data (e.g., from a host) and compresses the data to generate compressed data prior to storage of the compressed data in the non-volatile memory. The compressed data includes a compressed version of the data, size of the compressed data, common meta information, and final meta information. Other embodiments are also disclosed and claimed.
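A sketch of a record in the layout the abstract enumerates, with assumed field widths and placeholder meta contents; zlib stands in for whatever compression scheme the drive's logic uses.

```python
import struct
import zlib

def make_record(data: bytes) -> bytes:
    """Lay out a compressed record as: compressed payload, size of the
    compressed data, common meta information, final meta information.
    Field widths and meta values are assumptions for illustration."""
    payload = zlib.compress(data)
    size = struct.pack("<I", len(payload))  # 4-byte little-endian size
    common_meta = b"CM"  # placeholder common meta information
    final_meta = b"FM"   # placeholder final meta information
    return payload + size + common_meta + final_meta

rec = make_record(b"hello" * 100)
```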
Abstract:
A binary memory image in a system is modified. The system may or may not already have virtual memory management enabled. Virtual memory management is enabled and/or modified by inserting a sub-OS virtual memory management layer in the binary memory image. Part of the binary memory image may be compressed to make room for the sub-OS virtual memory management layer.
Abstract:
Aspects of the present disclosure are directed to apparatuses and methods involving the detection of signal characteristics. As may be implemented in accordance with one or more embodiments, an apparatus includes a radar or sonar transceiver that transmits signals and receives reflections of the transmitted signals. A data compression circuit determines a compression factor based on characteristics of the signals, such as may relate to a channel over which the signal passes and/or related aspects of an object from which the signals are reflected (e.g., velocity, trajectory and distance). Data representing the signals is compressed as a function of the determined compression factor.
Abstract:
A memory manager in a computing device allocates memory to programs running on the computing device, the amount of memory allocated to a program being a memory commit for the program. When a program is in a state where the program can be terminated, the content of the memory pages allocated to the program is compressed, and an amount of the memory commit for the program that can be released is determined. This amount of memory commit is the amount that was committed to the program less any amount still storing (in compressed format) information (e.g., data or instructions) for the program. The determined amount of memory commit is released, allowing that amount of memory to be consumed by other programs as appropriate.
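The commit accounting described above can be sketched briefly; zlib and the page contents are stand-ins for whatever compressor and pages the memory manager actually handles. The releasable amount is the program's original commit minus the space still needed to hold the compressed copies.

```python
import zlib

def releasable_commit(pages):
    """Compress a terminable program's memory pages and return how much
    of its memory commit can be released: committed size minus the
    space retained for the compressed content (illustrative)."""
    committed = sum(len(p) for p in pages)
    retained = sum(len(zlib.compress(p)) for p in pages)
    return committed - retained

# Two highly compressible 4 KiB pages (hypothetical content).
pages = [b"A" * 4096, b"B" * 4096]
freed = releasable_commit(pages)
```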