Abstract:
Technologies for data decompression include a computing device that reads a symbol tag byte from an input stream. The computing device determines whether the symbol can be decoded using a fast-path routine and, if not, executes a slow-path routine to decompress the symbol. The slow-path routine may include data-dependent branch instructions that branch prediction hardware may fail to predict reliably. For the fast-path routine, the computing device determines a next symbol increment value, a literal increment value, a data length, and an offset based on the tag byte, without executing an unpredictable branch instruction. The computing device sets a source pointer to either literal data or reference data as a function of the tag byte, without executing an unpredictable branch instruction. The computing device may set the source pointer using a conditional move instruction. The computing device then copies the data and processes the remaining symbols. Other embodiments are described and claimed.
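A minimal sketch of the branchless source-pointer selection described above, assuming a Snappy-like format in which the low two tag bits distinguish a literal symbol from a back-reference. All names here are illustrative, not taken from the patent; the point is that the ternary typically compiles to a conditional move (CMOV) rather than a branch.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical fast-path step: choose the copy source without a
 * data-dependent branch. Assumes low tag bits 00 mean "literal". */
static const uint8_t *select_source(uint8_t tag,
                                    const uint8_t *literal_ptr,
                                    const uint8_t *out_ptr,
                                    size_t offset)
{
    /* 1 for a literal symbol, 0 for a back-reference. */
    uint8_t is_literal = ((tag & 0x03u) == 0u);

    /* Candidate source for a back-reference: earlier output data. */
    const uint8_t *ref_ptr = out_ptr - offset;

    /* Compilers typically emit CMOV here, so the selection cost does
     * not depend on how predictable the literal/reference mix is. */
    return is_literal ? literal_ptr : ref_ptr;
}
```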
Abstract:
One embodiment provides an apparatus. The apparatus includes a single instruction multiple data (SIMD) hash module configured to apportion at least a first portion of a message of length L to a number (S) of segments, the message including a plurality of sequences of data elements, each sequence including S data elements, a respective data element in each sequence apportioned to a respective segment, each segment including a number N of blocks of data elements; to hash the S segments in parallel, resulting in S segment digests based, at least in part, on an initial value; and to store the S segment digests. The apparatus further includes a padding module configured to pad a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments, and a block size; and a non-SIMD hash module configured to hash the padded remainder, resulting in an additional hash digest, and to store the additional hash digest.
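An illustrative sketch of the apportioning step only: data elements are dealt round-robin into S segments so that S lanes can later be hashed in parallel. The segment layout, element type, and names are assumptions for illustration, not the patent's definitions.

```c
#include <stddef.h>
#include <stdint.h>

#define S 4   /* number of segments / SIMD lanes (assumed) */

/* Deal message elements round-robin into S contiguous segments. */
void apportion(const uint32_t *msg, size_t n_elems,
               uint32_t *segments /* S segments of n_elems/S each */)
{
    size_t per_seg = n_elems / S;  /* whole sequences of S elements */
    for (size_t i = 0; i < per_seg * S; i++) {
        size_t seg  = i % S;       /* which segment gets element i  */
        size_t slot = i / S;       /* position within that segment  */
        segments[seg * per_seg + slot] = msg[i];
    }
    /* The tail (msg[per_seg * S .. n_elems - 1]) is the "remainder"
     * that the padding module pads and a non-SIMD hash processes. */
}
```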
Abstract:
A method of an aspect includes receiving an instruction. The instruction indicates a first source of a first packed data including state data elements ai, bi, ei, and fi for a current round (i) of a Secure Hash Algorithm 2 (SHA2) hash algorithm. The instruction indicates a second source of a second packed data. The first packed data has a width in bits that is less than a combined width in bits of the eight state data elements ai, bi, ci, di, ei, fi, gi, and hi of the SHA2 hash algorithm. The method also includes storing a result in a destination indicated by the instruction, in response to the instruction. The result includes updated state data elements ai+1, bi+1, ei+1, and fi+1 that have been updated from the corresponding state data elements ai, bi, ei, and fi by at least one round of the SHA2 hash algorithm.
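This abstract matches the behavior of the x86 SHA extensions' SHA256RNDS2 instruction, which takes the (a, b, e, f) half of the state in one 128-bit register (narrower than the full eight-element state) and produces the updated (a, b, e, f). A minimal sketch using the real intrinsic follows; it requires a CPU with SHA-NI and compilation with -msha.

```c
#include <immintrin.h>

/* Advance the SHA-256 state by two rounds. abef and cdgh each hold
 * four 32-bit state words; wk holds two message dwords already summed
 * with their round constants (only the low two dwords are used). */
__m128i two_rounds(__m128i abef, __m128i cdgh, __m128i wk)
{
    /* Returns the updated (a, b, e, f) half of the state. */
    return _mm_sha256rnds2_epu32(cdgh, abef, wk);
}
```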
Abstract:
A flexible AES instruction set for a general purpose processor is provided. The instruction set includes instructions to perform a “one round” pass for AES encryption or decryption and also includes instructions to perform key generation. An immediate may be used to indicate the round number and key size for key generation for 128/192/256 bit keys. The flexible AES instruction set enables full use of pipelining capabilities because it does not require tracking of implicit registers.
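A hedged sketch of how such one-round instructions compose into a full AES-128 encryption, using the AES-NI intrinsics (compile with -maes). The round keys are assumed to be precomputed in rk[0..10]; the key-expansion fold after the assist instruction is omitted for brevity.

```c
#include <wmmintrin.h>

/* Encrypt one 16-byte block: each intrinsic performs one AES round. */
__m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);        /* initial AddRoundKey  */
    for (int r = 1; r < 10; r++)
        block = _mm_aesenc_si128(block, rk[r]); /* one full round each  */
    return _mm_aesenclast_si128(block, rk[10]); /* final, no MixColumns */
}

/* Key-generation helper: the immediate supplies the round constant,
 * here the RCON for round 1 of AES-128 key expansion. */
static inline __m128i keygen_assist_round1(__m128i key)
{
    return _mm_aeskeygenassist_si128(key, 0x01);
}
```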
Abstract:
A flexible AES instruction for a general purpose processor is provided that performs AES encryption or decryption using n rounds, where n includes the standard AES set of rounds {10, 12, 14}. A parameter is provided to allow the type of AES round to be selected, that is, whether it is a “last round”. In addition to standard AES, the flexible AES instruction allows an AES-like cipher with 20 rounds to be specified, or a “one round” pass.
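A sketch of the selectable-round-count idea using AES-NI intrinsics: the same loop handles standard 10/12/14-round AES or an AES-like cipher with a different round count (e.g., 20), with only the final iteration using the “last round” form. The round count n and the n+1 round keys are caller-supplied assumptions.

```c
#include <wmmintrin.h>

/* AES-like encryption with a caller-chosen number of rounds n. */
__m128i aes_like_encrypt(__m128i block, const __m128i *rk, int n)
{
    block = _mm_xor_si128(block, rk[0]);         /* AddRoundKey       */
    for (int r = 1; r < n; r++)
        block = _mm_aesenc_si128(block, rk[r]);  /* "not last round"  */
    return _mm_aesenclast_si128(block, rk[n]);   /* "last round" form */
}
```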
Abstract:
A method of one aspect may include receiving a rotate instruction. The rotate instruction may indicate a source operand and a rotate amount. A result may be stored in a destination operand indicated by the rotate instruction. The result may have the source operand rotated by the rotate amount. Execution of the rotate instruction may complete without reading a carry flag.
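This abstract matches the behavior of BMI2's RORX instruction, a rotate that neither reads nor writes the carry/overflow flags and so creates no flag dependency for out-of-order execution. A minimal example with the real intrinsic (requires BMI2; compile with -mbmi2):

```c
#include <immintrin.h>
#include <stdint.h>

/* Rotate right by an immediate amount without touching the flags. */
uint64_t rotate_right_19(uint64_t x)
{
    return _rorx_u64(x, 19);   /* rotate amount must be an immediate */
}
```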
Abstract:
Techniques for processing packets on a network interface controller (NIC) with memory chiplets are disclosed. In an illustrative embodiment, a NIC includes a disaggregated memory with several high-bandwidth memory chiplets spread out in various locations on the NIC. The disaggregated nature of the memory can reduce latency, increase throughput and scalability, and improve thermal performance by distributing heat generation to different locations on the NIC. In use, ports of the NIC can be configured to identify packets associated with certain flows and direct those packets to queues on the NIC. Direct memory access circuitry can copy the packets from queues on the NIC to queues in system memory. This chain of copying packets from the port to system memory creates a kind of virtual circuit, delivering packets directly to applications with low latency.
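An illustrative sketch only: a flow table mapping packet 5-tuples to NIC queues, each assumed to be backed by a particular memory chiplet. All structures and names here are hypothetical; the patent does not specify this layout, and a linear match stands in for the TCAM/hash-based classifiers real NICs use.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical flow rule: a 5-tuple match steering to a queue. */
struct flow_rule {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
    uint16_t queue_id;   /* queue on a specific memory chiplet */
};

/* Return the matching queue, or -1 for the default queue. */
int classify(const struct flow_rule *rules, size_t n,
             uint32_t sip, uint32_t dip,
             uint16_t sport, uint16_t dport, uint8_t proto)
{
    for (size_t i = 0; i < n; i++) {
        const struct flow_rule *r = &rules[i];
        if (r->src_ip == sip && r->dst_ip == dip &&
            r->src_port == sport && r->dst_port == dport &&
            r->proto == proto)
            return r->queue_id;  /* steer to chiplet-backed queue */
    }
    return -1;
}
```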
Abstract:
Methods and apparatus for managing state in accelerators. An accelerator performs processing operations on a data chunk relating to a job submitted to the accelerator. During or following processing of the data chunk, the accelerator generates state information corresponding to its current state and stores the state information or, optionally, the accelerator state information is obtained and stored by privileged software. In connection with continued processing of the current data chunk or a next job and next data chunk, the accelerator accesses previously stored state information identified by the job and validates that the state information was generated by itself, another accelerator, or privileged software. Valid state information is then reloaded to restore the accelerator/process state, and processing continues. The chunk processing, accelerator state store, validation, and restore operations are repeated to process subsequent jobs. An accelerator and/or privileged software may use a Message Authentication Code (MAC) algorithm to generate a MAC over a message comprising the accelerator state information. The MAC is then used to validate previously stored state information.
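A sketch of the validation step using HMAC-SHA256 via OpenSSL as one possible MAC algorithm; the abstract only requires "a MAC algorithm," so the choice of HMAC, the state-blob layout, and key handling are all assumptions here (link with -lcrypto).

```c
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <string.h>
#include <stdint.h>

/* Recompute the MAC over a saved state blob and compare it against
 * the stored MAC before reloading the state into the accelerator. */
int state_is_valid(const uint8_t *key, size_t key_len,
                   const uint8_t *state, size_t state_len,
                   const uint8_t stored_mac[32])
{
    uint8_t mac[32];
    unsigned int mac_len = 0;

    /* HMAC-SHA256 over the accelerator state information. */
    HMAC(EVP_sha256(), key, (int)key_len, state, state_len,
         mac, &mac_len);

    /* A constant-time compare is preferable in production code. */
    return mac_len == 32 && memcmp(mac, stored_mac, 32) == 0;
}
```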