Abstract:
An apparatus having a fabric interconnect that supports multiple topologies, and a method for using the same, are disclosed. In one embodiment, the apparatus comprises a first fabric operable in a plurality of modes and mode memory to store information indicative of one of the plurality of modes, where the fabric comprises logic coupled to the mode memory to control processing of read and write requests to memory received by the first fabric according to the mode identified by the stored information.
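A rough software analogue of the mode-selected fabric behavior, in Python; the mode names, the dictionary standing in for mode memory, and the routing branches are illustrative assumptions rather than anything recited in the abstract:

```python
# Hypothetical model: a "mode memory" value selects how the fabric logic
# processes read/write requests. Nothing here is taken from the claims.
from enum import Enum


class FabricMode(Enum):
    RING = 0
    MESH = 1


class Fabric:
    def __init__(self, mode_memory):
        self.mode_memory = mode_memory      # stores the active-mode indicator

    def handle_request(self, request):
        # Logic coupled to the mode memory: request processing is controlled
        # by whichever mode is currently stored.
        mode = FabricMode(self.mode_memory["mode"])
        if mode is FabricMode.RING:
            return ("ring", request)
        return ("mesh", request)


fabric = Fabric({"mode": FabricMode.MESH.value})
print(fabric.handle_request({"op": "read", "addr": 0x1000}))
```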
Abstract:
Technologies for high-performance single-stream data compression include a computing device that updates an index data structure based on an input data stream. The input data stream is divided into multiple chunks. Each chunk has a predetermined length, such as 136 bytes, and overlaps the previous chunk by a predetermined amount, such as eight bytes. The computing device processes multiple chunks in parallel using the index data structure to generate multiple token streams. The tokens include literal tokens and reference tokens that refer to matching data from earlier in the input data stream. The computing device thus searches for matching data in parallel. The computing device merges the token streams to generate a single output token stream. The computing device may merge a pair of tokens from two different chunks to generate one or more synchronized tokens that are output to the output token stream. Other embodiments are described and claimed.
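A minimal sketch of the chunk-and-merge flow in Python, assuming a 4-byte match heuristic, a shared dictionary standing in for the index data structure, and trimming of boundary-straddling references as a crude stand-in for the synchronized tokens; only the chunk length and overlap come from the abstract:

```python
# Sketch only: chunked tokenization plus a merge pass. CHUNK_LEN and OVERLAP
# follow the abstract; MIN_MATCH and the shared dict index are assumptions.
CHUNK_LEN = 136
OVERLAP = 8
MIN_MATCH = 4


def tokenize_chunk(data, start, end, index):
    """Emit (position, token) pairs for data[start:end]; tokens are either
    ('lit', byte) literals or ('ref', length, distance) back-references."""
    tokens, pos = [], start
    while pos < end:
        key = bytes(data[pos:pos + MIN_MATCH])
        cand = index.get(key)
        if cand is not None and cand < pos:
            length = 0
            while pos + length < end and data[cand + length] == data[pos + length]:
                length += 1
            if length >= MIN_MATCH:
                tokens.append((pos, ("ref", length, pos - cand)))
                index[key] = pos
                pos += length
                continue
        tokens.append((pos, ("lit", data[pos])))
        if len(key) == MIN_MATCH:
            index[key] = pos
        pos += 1
    return tokens


def merge(streams):
    """Concatenate per-chunk token streams; a reference that straddles the
    already-covered boundary is trimmed (a crude stand-in for the patent's
    synchronized tokens)."""
    out, covered = [], 0
    for stream in streams:
        for pos, tok in stream:
            end = pos + (tok[1] if tok[0] == "ref" else 1)
            if end <= covered:
                continue                      # fully inside the overlap
            if pos >= covered:
                out.append(tok)
            else:                             # only references can straddle
                out.append(("ref", end - covered, tok[2]))
            covered = end
    return out


def compress(data):
    index, streams, start = {}, [], 0
    while start < len(data):
        end = min(start + CHUNK_LEN, len(data))
        streams.append(tokenize_chunk(data, start, end, index))  # parallelizable
        start = end - OVERLAP if end < len(data) else end
    return merge(streams)


print(compress(b"abcdefgh" * 40))
```

Each call to tokenize_chunk is independent given the index, which is what makes the per-chunk work parallelizable in the described design; this sketch simply runs the chunks sequentially.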
Abstract:
A processor includes a decoder to decode an instruction to compress an input data stream and an execution unit to execute the instruction. The execution unit is to generate metadata for a current input of the input data stream, the metadata comprising a first hint based on a portion of the current input that represents the input data stream at a current offset; select a first pointer to identify a location in a history buffer in a hash chain; determine whether the metadata generated for the current input matches metadata previously generated for the first pointer; and filter the first pointer from a search for a best match for the current input in the history buffer based on a determination that at least a portion of the metadata for the current input does not match a portion of the metadata for the first pointer.
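A software sketch of the pointer-filtering idea, assuming the hint metadata is simply the byte that follows a 3-byte hash window; the minimum match length and all names are illustrative, not the processor's actual design:

```python
# Sketch only: a per-pointer "hint" byte filters hash-chain candidates
# before any byte-wise comparison. HASH_LEN, MIN_MATCH, and the choice of
# hint (the byte just past the hashed window) are illustrative assumptions.
HASH_LEN = 3
MIN_MATCH = 4


def hint(data, pos):
    return data[pos + HASH_LEN] if pos + HASH_LEN < len(data) else -1


def best_match(data, pos, chain, hints):
    """Return (length, distance) of the best match among unfiltered pointers."""
    best, cur_hint = (0, 0), hint(data, pos)
    for ptr in chain:
        if hints[ptr] != cur_hint:
            continue                    # filtered: stored metadata mismatch
        length = 0
        while (pos + length < len(data)
               and data[ptr + length] == data[pos + length]):
            length += 1
        if length > best[0]:
            best = (length, pos - ptr)
    return best


def compress(data):
    chains, hints, out, pos = {}, {}, [], 0
    while pos + HASH_LEN <= len(data):
        key = bytes(data[pos:pos + HASH_LEN])
        length, dist = best_match(data, pos, chains.get(key, []), hints)
        chains.setdefault(key, []).insert(0, pos)   # newest pointer first
        hints[pos] = hint(data, pos)
        if length >= MIN_MATCH:
            out.append(("ref", length, dist))
            pos += length
        else:
            out.append(("lit", data[pos]))
            pos += 1
    out.extend(("lit", b) for b in data[pos:])      # short tail as literals
    return out


print(compress(b"the cat sat on the mat while the cat napped"))
```

With these assumptions the filter is lossless for the 4-byte acceptance threshold: any candidate that could match at least four bytes necessarily agrees on the hint byte.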
Abstract:
Technologies for efficiently compressing data with run detection include a compute device. The compute device is to produce a hash as a function of a symbol at a present position and a predefined number of symbols after the present position in an input stream; determine whether the symbol at the present position is part of a run; obtain, from a hash table, a chain of pointers to previous positions in the input stream associated with the hash; determine, as a function of whether the symbol is part of a run, a number of strings referenced by the chain of pointers to compare to a string associated with the present position in the input stream to identify a matched string; and output, in response to an identification of a matched string, a reference to the matched string in a set of compressed output data.
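A hedged sketch of run-aware chain limiting in Python, assuming a 4-byte hash window, a run defined as the current byte repeating through that window, and arbitrary chain-walk limits; the constants are illustrative only:

```python
# Sketch only: run-aware limiting of hash-chain comparisons. WINDOW,
# RUN_LIMIT, and NORMAL_LIMIT are illustrative constants, not values from
# the patent.
WINDOW = 4
RUN_LIMIT = 1        # strings compared when the position is inside a run
NORMAL_LIMIT = 8     # strings compared otherwise


def is_run(data, pos):
    # The symbol is part of a run if the following window bytes repeat it.
    return all(pos + i < len(data) and data[pos + i] == data[pos]
               for i in range(1, WINDOW))


def compress(data):
    table, out, pos = {}, [], 0
    while pos + WINDOW <= len(data):
        h = hash(data[pos:pos + WINDOW])      # hash of symbol + next symbols
        chain = table.setdefault(h, [])
        limit = RUN_LIMIT if is_run(data, pos) else NORMAL_LIMIT
        best = (0, 0)
        for ptr in chain[:limit]:             # bounded chain walk
            length = 0
            while (pos + length < len(data)
                   and data[ptr + length] == data[pos + length]):
                length += 1
            if length > best[0]:
                best = (length, pos - ptr)
        chain.insert(0, pos)                  # newest position first
        if best[0] >= WINDOW:
            out.append(("ref", best[0], best[1]))   # reference to matched string
            pos += best[0]
        else:
            out.append(("lit", data[pos]))
            pos += 1
    out.extend(("lit", b) for b in data[pos:])
    return out


print(compress(b"aaaaaaaaaaaaaaaaaaaaaaab" * 3))
```

The point of the run check is that long runs flood a chain with near-identical candidates, so comparing only the most recent pointer loses little while keeping the chain walk short.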
Abstract:
A processing device includes an accelerator circuit to identify a byte in a byte stream, determine whether a first byte string starting from a first byte position of the byte matches a second byte string starting from a second byte position, responsive to determining that the first byte string matches the second byte string, generate a token comprising a first symbol encoding a length of the first byte string and a second symbol encoding a byte distance between the first byte position and the second byte position, and responsive to determining that the first byte string does not match another byte string, generate the token comprising the first symbol comprising the byte and a second symbol encoding a determined value.
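A small sketch of the two-symbol token format in Python, assuming a 3-byte minimum match and a sentinel distance of zero as the "determined value" for literal tokens; neither constant is taken from the patent:

```python
# Sketch only: every token is a pair of symbols, either (length, distance)
# for a back-reference or (literal byte, NO_MATCH) when no earlier match is
# found. A distance of zero never occurs for real references, so the second
# symbol disambiguates the two token kinds.
MIN_MATCH = 3
NO_MATCH = 0            # "determined value" used for literal tokens


def tokenize(data):
    table, tokens, pos = {}, [], 0
    while pos < len(data):
        key = bytes(data[pos:pos + MIN_MATCH])
        prev = table.get(key)
        length = 0
        if prev is not None:
            while (pos + length < len(data)
                   and data[prev + length] == data[pos + length]):
                length += 1
        if length >= MIN_MATCH:
            tokens.append((length, pos - prev))     # (length, distance)
        else:
            tokens.append((data[pos], NO_MATCH))    # (literal byte, sentinel)
            length = 1
        if len(key) == MIN_MATCH:
            table[key] = pos
        pos += length
    return tokens


print(tokenize(b"banana banana"))
```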
Abstract:
A processing system is provided that includes a memory for storing an input bit stream and a processing logic, operatively coupled to the memory, to generate a first score based on: a first set of matching data related to a match between a first bit subsequence and a candidate bit subsequence within the input bit stream, and a first distance of the candidate bit subsequence from the first set of matching data. A second score is generated based on a second set of matching data related to a match between a second bit subsequence and the candidate bit subsequence, and a second distance of the candidate bit subsequence from the second set of matching data. A code to replace the first or second bit subsequence in an output bit stream is identified. Selection of the one of the bit subsequences to replace is based on a comparison of the scores.
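One way to read the scoring idea in Python, under the assumption that a score rewards match length and penalizes the approximate encoded size of the distance; the specific weighting is an illustrative guess, not the claimed formula:

```python
# Sketch only: score candidate matches so that a slightly shorter, nearby
# match can beat a longer but far-away one. The 8-bits-per-byte reward and
# log2 distance penalty are assumptions for illustration.
import math


def score(match_len, distance):
    # favor long matches, penalize the bits needed to encode the distance
    return 8 * match_len - int(math.log2(distance)) - 1


def choose(candidates):
    """candidates: list of (match_len, distance); return the highest-scoring one."""
    return max(candidates, key=lambda c: score(*c))


# Two candidate matches: a shorter match that is nearby vs. a longer match
# that is far away. The scoring picks the cheaper one to encode.
print(choose([(5, 16), (6, 65536)]))
```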
Abstract:
A processing device includes a storage device to store data and a processor, operably coupled to the storage device. The processor is to receive a token stream comprising a plurality of tokens generated based on a byte stream comprising a plurality of bytes, wherein each token in the token stream comprises at least one symbol associated with a respective byte in the byte stream, and wherein the at least one symbol represents one of the respective byte, a length of a first byte string starting from the respective byte, or a byte distance between the first byte string and a matching second byte string; generate a graph comprising a plurality of nodes and edges based on the token stream, wherein each token in the token stream is associated with a respective node connected by at least one edge to another node, and wherein the at least one edge is associated with a cost function to encode the at least one symbol stored in each token; identify, based on the graph, a path between a first node associated with a beginning token of the token stream and an end node associated with a last token of the token stream, wherein the path comprises a subset of nodes and edges linking the subset of nodes; and perform variable-length encoding of a subset of tokens associated with the subset of nodes to generate output data.
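One common reading of such a token graph is an optimal-parse pass: treat byte positions as nodes, let each candidate token contribute an edge spanning the bytes it covers, weight edges by an estimated encoded size, and take the cheapest path before variable-length encoding. The sketch below follows that reading; the cost model and candidate set are assumptions, not the patent's:

```python
# Sketch only: dynamic-programming shortest path over a token graph.
# cost() approximates encoded bits; the candidates and weights are made up.
import math


def cost(tok):
    if tok[0] == "lit":
        return 9                                   # ~1 literal code
    return 8 + int(math.log2(tok[2])) + 1          # length + distance codes


def cheapest_path(n_bytes, candidates):
    """candidates[pos] -> list of tokens starting at pos, either ('lit', byte)
    or ('ref', length, distance). Returns the minimum-cost token sequence."""
    INF = float("inf")
    best = [INF] * (n_bytes + 1)
    back = [None] * (n_bytes + 1)                  # (prev_pos, token)
    best[0] = 0
    for pos in range(n_bytes):
        if best[pos] == INF:
            continue
        for tok in candidates[pos]:
            span = 1 if tok[0] == "lit" else tok[1]
            if pos + span <= n_bytes and best[pos] + cost(tok) < best[pos + span]:
                best[pos + span] = best[pos] + cost(tok)
                back[pos + span] = (pos, tok)
    # walk back from the end node to recover the chosen tokens
    path, pos = [], n_bytes
    while pos > 0:
        prev, tok = back[pos]
        path.append(tok)
        pos = prev
    return list(reversed(path))


data = b"abcabcabc"
candidates = {pos: [("lit", data[pos])] for pos in range(len(data))}
candidates[3].append(("ref", 6, 3))                # back-reference options
candidates[3].append(("ref", 3, 3))
print(cheapest_path(len(data), candidates))
```

The tokens on the chosen path would then go through variable-length (for example, Huffman) encoding, which the sketch omits.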
Abstract:
A compute device to generate deterministic compressed streams receives a current string to be matched to one or more prior instances of the current string, the current string being located within an input buffer and the one or more prior instances located within a history buffer. The compute device identifies a limited subset of index memory designated for storing pointers to the prior instances, identifies a reserved slop region in the index memory, and compares the current string to at least one prior instance, locating the at least one prior instance using at least one pointer to the at least one prior instance. The at least one pointer is stored within the limited subset of the index memory, and the compute device also prohibits use of any pointers stored in the reserved slop region of the index memory. Other embodiments are described and claimed.
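A minimal sketch of the bounded index in Python, assuming a fixed-size bucket per hash key in which only the leading slots may supply candidate pointers while the trailing slop slots are never consulted; the sizes and class names are illustrative:

```python
# Sketch only: an index memory whose reserved slop region is prohibited
# from supplying match candidates, keeping the search deterministic no
# matter how the slop slots fill. USABLE and SLOP are assumptions.
USABLE = 4
SLOP = 2
BUCKET = USABLE + SLOP


class IndexMemory:
    def __init__(self):
        self.buckets = {}

    def insert(self, key, pos):
        bucket = self.buckets.setdefault(key, [])
        bucket.insert(0, pos)          # newest pointer first
        del bucket[BUCKET:]            # bounded index memory

    def candidates(self, key):
        # limited subset only: pointers that spilled into the reserved
        # slop region are never used for matching
        return self.buckets.get(key, [])[:USABLE]


def find_match(data, pos, index, min_match=4):
    key = bytes(data[pos:pos + min_match])
    best = (0, 0)
    for ptr in index.candidates(key):
        length = 0
        while (pos + length < len(data)
               and data[ptr + length] == data[pos + length]):
            length += 1
        if length > best[0]:
            best = (length, pos - ptr)
    index.insert(key, pos)
    return best


data = b"abcd" * 8
index = IndexMemory()
for p in range(len(data) - 4):
    find_match(data, p, index)
print(find_match(data, len(data) - 4, index))   # (length, distance)
```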
Abstract:
In one embodiment, an apparatus comprises a first compression engine to receive a first compressed data block from a second compression engine that is to generate the first compressed data block by compressing a first plurality of repeated instances of data that each have a length greater than or equal to a first length. The first compression engine is further to compress a second plurality of repeated instances of data of the first compressed data block that each have a length greater than or equal to a second length, the second length being shorter than the first length, wherein each compressed repeated instance of the first and second pluralities of repeated instances comprises a location and length of a data instance that is repeated. The apparatus further comprises a memory buffer to store the compressed first and second pluralities of repeated instances of data.
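A toy illustration of the staged engines in Python, under strong simplifying assumptions: repeats are replaced with a 3-byte escape marker, inputs are assumed not to contain the escape byte, and lengths and distances must fit in one byte. The second engine's pass (long repeats only) runs first, and the first engine then re-compresses its output with a shorter minimum length:

```python
# Sketch only: one generic pass that replaces repeats of at least min_len
# bytes with a (0xFF, length, distance) marker, applied twice with
# different minimum lengths to mimic the two engines.
ESC = 0xFF


def compress_repeats(data, min_len):
    out, pos = bytearray(), 0
    while pos < len(data):
        best_len, best_dist = 0, 0
        start = max(0, pos - 255)
        for cand in range(start, pos):             # brute-force history scan
            length = 0
            while (pos + length < len(data) and length < 254
                   and data[cand + length] == data[pos + length]):
                length += 1
            if length > best_len:
                best_len, best_dist = length, pos - cand
        if best_len >= min_len:
            out += bytes((ESC, best_len, best_dist))   # (length, location)
            pos += best_len
        else:
            out.append(data[pos])
            pos += 1
    return bytes(out)


text = b"the quick brown fox -- the quick brown fox -- the fox"
stage1 = compress_repeats(text, min_len=8)   # second engine: long repeats only
stage2 = compress_repeats(stage1, min_len=4) # first engine: shorter repeats
print(len(text), len(stage1), len(stage2))
```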