Abstract:
A method of managing a stack includes detecting, by a stack manager of a processor, that a size of a frame to be allocated exceeds available space of a first stack. The first stack is used by a particular task executing at the processor. The method also includes designating a second stack for use by the particular task. The method further includes copying metadata associated with the first stack to the second stack. The metadata enables the stack manager to transition from the second stack to the first stack upon detection that the second stack is no longer in use by the particular task. The method also includes allocating the frame in the second stack.
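A minimal sketch of the stack transition described above, written as a software model of what the abstract attributes to a hardware stack manager; the type and function names (stack_desc, alloc_frame) are invented for illustration:

```c
#include <stddef.h>

/* Hypothetical descriptor for one stack; field names are illustrative. */
typedef struct stack_desc {
    char  *base;              /* lowest address of the stack region          */
    size_t size;              /* total bytes in the region                   */
    size_t top;               /* bytes currently in use                      */
    struct stack_desc *prev;  /* metadata: stack to transition back to later */
} stack_desc;

/* Allocate frame_size bytes for the current task, designating a second
 * stack when the first stack cannot hold the new frame. */
void *alloc_frame(stack_desc **current, stack_desc *second, size_t frame_size)
{
    stack_desc *s = *current;
    if (s->size - s->top < frame_size) {
        /* Copy the transition metadata so the manager can return to the
         * first stack once the second stack is no longer in use. */
        second->prev = s;
        second->top  = 0;
        *current = s = second;
    }
    void *frame = s->base + s->top;  /* allocate the frame in the chosen stack */
    s->top += frame_size;
    return frame;
}
```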
Abstract:
Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling of write operations is not necessary.
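A rough software model of this eviction path; the helper names (metadata_ready, alloc_free_physical_block, and so on) are assumptions standing in for hardware that the abstract does not specify:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers; in the described system this logic is hardware. */
bool     metadata_ready(uint64_t vaddr);        /* is the VA->PA metadata available? */
uint64_t lookup_physical(uint64_t vaddr);       /* read metadata for the VA          */
uint64_t alloc_free_physical_block(void);       /* take a new, available block       */
void     write_compressed(uint64_t paddr, const void *data);
void     update_metadata(uint64_t vaddr, uint64_t paddr);

/* Handle one evicted cache entry without stalling on a metadata read. */
void handle_eviction(uint64_t vaddr, const void *cache_data)
{
    uint64_t paddr;
    if (metadata_ready(vaddr)) {
        /* Metadata already available: reuse the mapped physical address. */
        paddr = lookup_physical(vaddr);
    } else {
        /* Metadata not available: write to a new free block immediately
         * instead of buffering the evicted data while the read completes. */
        paddr = alloc_free_physical_block();
        update_metadata(vaddr, paddr);
    }
    write_compressed(paddr, cache_data);
}
```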
Abstract:
A system for data decompression may include a processor coupled to a remote memory having a remote dictionary stored thereon and coupled to a decompression logic having a local memory with a local dictionary. The processor may decompress data during execution by accessing the local dictionary, and if necessary, the remote dictionary.
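A minimal sketch of the two-level dictionary lookup, assuming invented functions local_dict_lookup and remote_dict_lookup and an illustrative entry layout:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical dictionary accessors. */
bool     local_dict_lookup(uint32_t index, uint64_t *entry_out);  /* local memory of the decompression logic */
uint64_t remote_dict_lookup(uint32_t index);                      /* slower path to the remote dictionary    */

/* Resolve one dictionary-coded symbol: try the local dictionary first and
 * fall back to the remote dictionary only if necessary. */
uint64_t resolve_symbol(uint32_t index)
{
    uint64_t entry;
    if (local_dict_lookup(index, &entry))
        return entry;                     /* hit in the local dictionary */
    return remote_dict_lookup(index);     /* miss: consult the remote dictionary */
}
```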
Abstract:
Data compression systems, methods, and computer program products are disclosed. For each successive input word of an input stream, it is determined whether the input word matches an entry in a lookback table. The lookback table is updated in response to the input word. Input words may be of a number of data types, including zero runs and full or partial matches with an entry in the lookback table. A codeword is generated by entropy encoding a data type corresponding to the input word. The lookback table may be indexed by the position of the input word in the input stream.
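A simplified sketch of the per-word classification, assuming an illustrative table size, 32-bit words, and a hypothetical entropy encoder (emit_code); the actual codeword format is not given in the abstract:

```c
#include <stdint.h>
#include <stddef.h>

#define LOOKBACK_SIZE 64   /* illustrative lookback table size */

typedef enum { TYPE_ZERO, TYPE_FULL_MATCH, TYPE_PARTIAL_MATCH, TYPE_LITERAL } word_type;

/* Hypothetical entropy encoder: emits a codeword for the data type plus any payload. */
void emit_code(word_type type, uint32_t index, uint32_t payload);

/* Classify each input word, emit a codeword for its type, and update the
 * lookback table, indexed here by the word's position in the input stream. */
void compress_stream(const uint32_t *in, size_t n)
{
    uint32_t table[LOOKBACK_SIZE] = { 0 };
    for (size_t i = 0; i < n; i++) {
        uint32_t w   = in[i];
        uint32_t idx = (uint32_t)(i % LOOKBACK_SIZE);
        if (w == 0)
            emit_code(TYPE_ZERO, 0, 0);                       /* zero run element   */
        else if (w == table[idx])
            emit_code(TYPE_FULL_MATCH, idx, 0);               /* full match         */
        else if ((w >> 16) == (table[idx] >> 16))
            emit_code(TYPE_PARTIAL_MATCH, idx, w & 0xFFFF);   /* partial match: keep low bits */
        else
            emit_code(TYPE_LITERAL, 0, w);                    /* no match           */
        table[idx] = w;   /* update the table in response to the input word */
    }
}
```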
Abstract:
A multi-pass compression method iteratively removes combinations of bits from locations in each word of a cache line of an uncompressed data stream. For each combination of removed bits, the remaining bits in the word values of the cache line are analyzed to generate a compression score. A highest compression score triggers the building of a dictionary from the remaining bits in the word values of the cache line. After a dictionary is built, the method may continue iteratively to create subsequent dictionaries from the words that remain uncompressed in the cache line. To decompress a word, a first bit section of the compressed word is used to identify a dictionary that is then queried for bits indexed in a second bit section of the compressed word. The uncompressed word is reconstructed by interleaving the queried bits with the removed combination of bits held in a third bit section of the compressed word.
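A sketch of the decompression step only, with an assumed compressed-word layout (2-bit dictionary id, 4-bit dictionary index, removed bits in the remainder) that the abstract does not fix:

```c
#include <stdint.h>

#define NUM_DICTS    4
#define DICT_ENTRIES 16

/* Remaining-bit patterns built during compression, and the bit positions
 * that were removed from each word; both are assumed to be available. */
extern uint32_t dictionaries[NUM_DICTS][DICT_ENTRIES];
extern uint32_t removed_bit_mask;

/* Reconstruct a word by interleaving the dictionary's remaining bits with
 * the removed bits carried in the third bit section of the compressed word. */
uint32_t decompress_word(uint32_t cw)
{
    uint32_t dict_id   = cw & 0x3;          /* first bit section: which dictionary */
    uint32_t dict_idx  = (cw >> 2) & 0xF;   /* second bit section: dictionary index */
    uint32_t removed   = cw >> 6;           /* third bit section: removed bits      */
    uint32_t remaining = dictionaries[dict_id][dict_idx];

    uint32_t word = 0;
    for (int pos = 0, r = 0, k = 0; pos < 32; pos++) {
        if (removed_bit_mask & (1u << pos))
            word |= ((removed >> r++) & 1u) << pos;    /* bit came from the compressed word */
        else
            word |= ((remaining >> k++) & 1u) << pos;  /* bit came from the dictionary      */
    }
    return word;
}
```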
Abstract:
A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
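An illustrative packing routine for one compressed cache line; the line size and the split between the first and second sets of data bits are assumptions, not values from the abstract:

```c
#include <stdint.h>
#include <string.h>

#define LINE_BYTES      64                               /* compressed cache line size (illustrative) */
#define HI_REGION_BYTES 40                               /* first set of bits: larger, higher priority */
#define LO_REGION_BYTES (LINE_BYTES - HI_REGION_BYTES)   /* second set of bits: smaller, lower priority */

/* Pack two (possibly compressed) cache lines into one compressed cache line:
 * the higher-priority data grows forward from byte 0, while the lower-priority
 * data grows in the opposite direction from the end of the line. */
void pack_line(uint8_t out[LINE_BYTES],
               const uint8_t *hi, size_t hi_len,   /* must be <= HI_REGION_BYTES */
               const uint8_t *lo, size_t lo_len)   /* must be <= LO_REGION_BYTES */
{
    memset(out, 0, LINE_BYTES);
    memcpy(out, hi, hi_len);                  /* first direction */
    for (size_t i = 0; i < lo_len; i++)
        out[LINE_BYTES - 1 - i] = lo[i];      /* opposite direction */
}
```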
Abstract:
Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
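A software sketch of the threshold-driven maintenance, with assumed buffer counts, threshold values, and hypothetical bulk-transfer helpers; the point is that the cache is refilled and emptied a whole buffer at a time:

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_SLOTS      8    /* pointers per buffer (illustrative)            */
#define NUM_BUFFERS    2    /* plurality of buffers in the cache             */
#define LOW_THRESHOLD  4    /* refill when cached pointers fall below this   */
#define HIGH_THRESHOLD 12   /* empty a buffer when the count exceeds this    */

/* Hypothetical bulk transfers to/from the free memory list in system memory. */
void refill_buffer_from_memory(uint64_t *buf, size_t slots);
void spill_buffer_to_memory(const uint64_t *buf, size_t slots);

typedef struct {
    uint64_t buffers[NUM_BUFFERS][BUF_SLOTS];
    size_t   count;   /* total pointers currently cached */
} free_list_cache;

/* Maintain the cache in whole-buffer units so each maintenance event is a
 * single burst transfer rather than many small memory accesses. */
void maintain(free_list_cache *c)
{
    if (c->count < LOW_THRESHOLD) {
        refill_buffer_from_memory(c->buffers[0], BUF_SLOTS);  /* refill an empty buffer */
        c->count += BUF_SLOTS;
    } else if (c->count > HIGH_THRESHOLD) {
        spill_buffer_to_memory(c->buffers[1], BUF_SLOTS);     /* empty a full buffer    */
        c->count -= BUF_SLOTS;
    }
}
```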
Abstract:
Aspects disclosed relate to a priority-based access of compressed memory lines in a processor-based system. In an aspect, a memory access device in the processor-based system receives a read access request for memory. If the read access request is higher priority, the memory access device uses the logical memory address of the read access request as the physical memory address to access the compressed memory line. However, if the read access request is lower priority, the memory access device translates the logical memory address of the read access request into one or more physical memory addresses in memory space left by the compression of higher priority lines. In this manner, the efficiency of higher priority compressed memory accesses is improved by removing a level of indirection otherwise required to find and access compressed memory lines.
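A minimal sketch of the priority-dependent address path, with a hypothetical translate_low_priority lookup standing in for the metadata walk used in the lower-priority case:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical translation that finds where a lower-priority line was packed
 * into the space left by compressing higher-priority lines. */
uint64_t translate_low_priority(uint64_t logical_addr);
void     read_line(uint64_t physical_addr, void *out);

/* Priority-based read: higher-priority requests skip the translation and use
 * the logical address directly as the physical address (no indirection). */
void access_compressed_line(uint64_t logical_addr, bool high_priority, void *out)
{
    uint64_t physical_addr = high_priority
        ? logical_addr                           /* direct access            */
        : translate_low_priority(logical_addr);  /* extra metadata indirection */
    read_line(physical_addr, out);
}
```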
Abstract:
Aspects for generating compressed data streams with lookback pre-fetch instructions are disclosed. A data compression system is provided and configured to receive and compress an uncompressed data stream as part of a lookback-based compression scheme. The data compression system determines if a current data block was previously compressed. If so, the data compression system is configured to insert a lookback instruction corresponding to the current data block into the compressed data stream. Each lookback instruction includes a lookback buffer index that points to an entry in a lookback buffer where decompressed data corresponding to the data block will be stored during a separate decompression scheme. Once the data blocks have been compressed, the data compression system is configured to move a lookback buffer index of each lookback instruction in the compressed data stream into a lookback pre-fetch instruction located earlier than the corresponding lookback instruction in the compressed data stream.
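A simplified sketch of the hoisting pass: here every pre-fetch instruction is placed at the front of the output stream and the lookback buffer index is copied rather than moved, which simplifies the placement decision the abstract describes; the instruction layout is an assumption:

```c
#include <stdint.h>
#include <stddef.h>

typedef enum { OP_LITERAL, OP_LOOKBACK, OP_LOOKBACK_PREFETCH } op_type;

typedef struct {
    op_type  op;
    uint32_t lookback_index;   /* entry in the decompressor's lookback buffer */
    uint32_t payload;
} cinstr;

/* Emit a pre-fetch instruction, carrying the lookback buffer index, earlier in
 * the stream than its corresponding lookback instruction so the decompressor
 * can start fetching the entry ahead of time. The output buffer must hold up
 * to 2 * n instructions. */
size_t add_prefetches(const cinstr *in, size_t n, cinstr *out)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (in[i].op == OP_LOOKBACK)
            out[m++] = (cinstr){ OP_LOOKBACK_PREFETCH, in[i].lookback_index, 0 };
    for (size_t i = 0; i < n; i++)
        out[m++] = in[i];   /* original compressed stream follows the pre-fetches */
    return m;
}
```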