Abstract:
A processor including instruction support for implementing the Data Encryption Standard (DES) block cipher algorithm may issue, for execution, programmer-selectable instructions from a defined instruction set architecture (ISA). The processor may include a cryptographic unit that may receive instructions for execution. The instructions include one or more DES instructions defined within the ISA. In addition, the DES instructions may be executable by the cryptographic unit to implement portions of a DES cipher that is compliant with Federal Information Processing Standards Publication 46-3 (FIPS 46-3). In response to receiving a DES key expansion instruction defined within the ISA, the cryptographic unit may generate one or more expanded cipher keys of the DES cipher key schedule from an input key.
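As a point of reference, the key schedule such an instruction would produce is defined by FIPS 46-3. The following is a minimal software sketch of that computation (the function name des_key_schedule and the bit-level representation are illustrative, not the patent's hardware interface):

    #include <stdint.h>

    /* Sketch of the FIPS 46-3 key schedule that a DES key expansion
       instruction would compute in hardware. Tables are from FIPS 46-3;
       the bit-at-a-time style favors clarity over speed. */
    static const int PC1[56] = {
        57,49,41,33,25,17, 9, 1,58,50,42,34,26,18,
        10, 2,59,51,43,35,27,19,11, 3,60,52,44,36,
        63,55,47,39,31,23,15, 7,62,54,46,38,30,22,
        14, 6,61,53,45,37,29,21,13, 5,28,20,12, 4 };
    static const int PC2[48] = {
        14,17,11,24, 1, 5, 3,28,15, 6,21,10,
        23,19,12, 4,26, 8,16, 7,27,20,13, 2,
        41,52,31,37,47,55,30,40,51,45,33,48,
        44,49,39,56,34,53,46,42,50,36,29,32 };
    static const int SHIFTS[16] = { 1,1,2,2,2,2,2,2,1,2,2,2,2,2,2,1 };

    /* Bit positions are 1-based from the most significant bit, as in
       FIPS 46-3. */
    static int bit_at(uint64_t v, int pos, int width) {
        return (int)((v >> (width - pos)) & 1);
    }

    /* Expand a 64-bit input key into sixteen 48-bit round keys. */
    void des_key_schedule(uint64_t key, uint64_t subkeys[16]) {
        uint64_t cd = 0;                           /* 56-bit permuted key */
        for (int i = 0; i < 56; i++)
            cd = (cd << 1) | bit_at(key, PC1[i], 64);

        uint32_t c = (uint32_t)(cd >> 28);         /* upper 28-bit half */
        uint32_t d = (uint32_t)(cd & 0x0FFFFFFF);  /* lower 28-bit half */
        for (int r = 0; r < 16; r++) {
            int s = SHIFTS[r];                     /* rotate both halves */
            c = ((c << s) | (c >> (28 - s))) & 0x0FFFFFFF;
            d = ((d << s) | (d >> (28 - s))) & 0x0FFFFFFF;
            uint64_t merged = ((uint64_t)c << 28) | d;
            uint64_t k = 0;
            for (int i = 0; i < 48; i++)           /* PC-2 selects 48 bits */
                k = (k << 1) | bit_at(merged, PC2[i], 56);
            subkeys[r] = k;
        }
    }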
Abstract:
In accordance with one embodiment, an enhanced chip multiprocessor permits an L1 cache to request ownership of a data line from a shared L2 cache. A determination is made whether to deny or grant the request for ownership based on the sharing of the data line. In one embodiment, the sharing of the data line is determined from an enhanced L2 cache directory entry associated with the data line. If ownership of the data line is granted, the current data line is passed from the shared L2 cache to the requesting L1 cache, and the associated enhanced L1 cache directory entry and enhanced L2 cache directory entry are updated to reflect the L1 cache's ownership of the data line. Consequently, updates of the data line by the L1 cache do not go through the shared L2 cache, thus reducing transaction pressure on the shared L2 cache.
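A minimal sketch of the grant/deny decision, assuming the directory entry carries a per-L1 sharer bitmask and an owner field (both names are assumptions for illustration; the patent does not specify this layout):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical enhanced L2 directory entry: one sharer bit per L1
       cache plus an owner field. Field names are illustrative. */
    typedef struct {
        uint32_t sharers;   /* bit i set => L1 cache i holds the line */
        int      owner;     /* owning L1 cache, or -1 if L2-owned */
    } l2_dir_entry_t;

    /* Grant ownership only when no other L1 cache shares the line. */
    bool request_ownership(l2_dir_entry_t *e, int requester) {
        uint32_t others = e->sharers & ~(1u << requester);
        if (others != 0)
            return false;            /* line is shared: deny the request */
        e->owner   = requester;      /* record L1 ownership in the entry */
        e->sharers = 1u << requester;
        return true;                 /* L1 may now update the line locally */
    }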
Abstract:
Provided is an apparatus and method for accelerating cryptographic hash computations. For example, in a cryptographic hash computation such as SHA-1, multiple execution units in a processor can process loosely coupled data. Specifically, after preprocessing a message of a particular bit length and parsing the padded message into multiple blocks, a first execution unit can begin processing the blocks for the message schedule computation. As the first block is processed, the first execution unit produces partial results that feed the computation of the compression function in a second execution unit. By processing the blocks on multiple execution units simultaneously, cryptographic hash computation performance can improve.
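The message schedule computation referred to above is the W[t] recurrence of FIPS 180. A short sketch, assuming the block arrives as sixteen 32-bit words, shows the work the first execution unit could run ahead of the compression function:

    #include <stdint.h>

    static uint32_t rotl32(uint32_t x, int n) {
        return (x << n) | (x >> (32 - n));
    }

    /* SHA-1 message schedule (FIPS 180): expand a 512-bit block, given
       as sixteen big-endian 32-bit words, into the 80-entry schedule W.
       One execution unit can produce W while a second consumes it in
       the compression function. */
    void sha1_schedule(const uint32_t m[16], uint32_t w[80]) {
        for (int t = 0; t < 16; t++)
            w[t] = m[t];
        for (int t = 16; t < 80; t++)
            w[t] = rotl32(w[t-3] ^ w[t-8] ^ w[t-14] ^ w[t-16], 1);
    }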
Abstract:
The storage of a data line in one or more L1 caches and/or a shared L2 cache of a chip multiprocessor is dynamically optimized based on the sharing of the data line. In one embodiment, an enhanced L2 cache directory entry associated with the data line is generated in an L2 cache directory of the shared L2 cache. The enhanced L2 cache directory entry includes a cache mask indicating the storage state of the data line in the one or more L1 caches and the shared L2 cache. In some embodiments, where the data line is stored in the shared L2 cache only, a portion of the cache mask indicates a storage history of the data line in the one or more L1 caches.
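One plausible encoding of such a cache mask, sketched under the assumption that one bit tracks each L1 cache plus one bit for the shared L2, and that the L1 bits are reinterpreted as history once the line is L2-only (field names are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative cache-mask layout (an assumption, not the patent's):
       bits 0..N-1 : current presence of the line in each of N L1 caches
       bit  N      : presence in the shared L2 cache
       When the line lives only in the L2, the L1 bits are free to record
       which L1 caches held the line in the past (a storage history). */
    #define N_L1   8
    #define L2_BIT (1u << N_L1)

    typedef struct { uint32_t mask; bool l2_only; } cache_mask_t;

    /* On eviction of the last L1 copy, keep the old sharer bits as
       history rather than clearing them. */
    void on_last_l1_evict(cache_mask_t *m) {
        m->l2_only = true;    /* L1 bits now mean "formerly cached in" */
        m->mask |= L2_BIT;
    }

    /* A later placement decision can consult the history, e.g. to judge
       whether the line tends to be private to one L1 or widely shared. */
    bool was_widely_shared(const cache_mask_t *m) {
        uint32_t hist = m->mask & (L2_BIT - 1);
        return m->l2_only && (hist & (hist - 1)) != 0; /* >1 bit set */
    }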
Abstract:
One embodiment of the present invention provides a system that improves the effectiveness of prefetching during execution of instructions in scout mode. Upon encountering a non-data dependent stall condition, the system performs a checkpoint and commences execution of instructions in scout mode, wherein instructions are speculatively executed to prefetch future memory operations, but wherein results are not committed to the architectural state of a processor. When the system executes a load instruction during scout mode, if the load instruction causes a lower-level cache miss, the system allows the load instruction to access a higher-level cache. Next, the system places the load instruction and subsequent dependent instructions into a deferred queue, and resumes execution of the program in scout mode. If the load instruction ultimately causes a hit in the higher-level cache, the system replays the load instruction and subsequent dependent instructions in the deferred queue, whereby the value retrieved from the higher-level cache can help in generating prefetches during scout mode.
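A control-flow sketch of the deferred-queue policy, with insn_t and the cache-access routines as hypothetical stand-ins (the stubs exist only to keep the sketch self-contained):

    #include <stdbool.h>

    typedef struct { int id; } insn_t;
    typedef struct { insn_t *slots[64]; int n; } deferred_queue_t;

    static bool l1_lookup(insn_t *ld)       { (void)ld; return false; } /* stub */
    static void l2_lookup_async(insn_t *ld) { (void)ld; }               /* stub */
    static bool l2_hit(insn_t *ld)          { (void)ld; return true; }  /* stub */
    static void speculative_execute(insn_t *i) { (void)i; }             /* stub */

    /* On a load in scout mode: execute on an L1 hit; on a miss, let the
       load proceed to the higher-level (L2) cache and defer it. */
    void scout_load(deferred_queue_t *dq, insn_t *load) {
        if (l1_lookup(load)) {
            speculative_execute(load);  /* normal scout-mode execution */
            return;
        }
        l2_lookup_async(load);          /* allow access to the L2 cache */
        if (dq->n < 64)
            dq->slots[dq->n++] = load;  /* defer load and its dependents */
        /* scout execution of independent instructions resumes here */
    }

    /* When a deferred load hits in the L2, replay it so the returned
       value can feed address computations that generate further
       prefetches. */
    void replay_ready(deferred_queue_t *dq) {
        for (int i = 0; i < dq->n; i++)
            if (l2_hit(dq->slots[i]))
                speculative_execute(dq->slots[i]);
    }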
Abstract:
Predicting prefetch data sources for the read operations that trigger runahead execution eliminates the latency penalties of missed read operations that runahead execution mechanisms typically do not address. Read operations that most likely trigger runahead execution are identified. The code unit that includes those triggering read operations is modified so that the code unit branches to a prefetch predictor. The prefetch predictor observes the sequence patterns of data sources of triggering read operations and develops prefetch predictions based on the observed data source sequence patterns. After a prefetch prediction gains reliability, the prefetch predictor supplies the predicted data source to a prefetcher coincident with the triggering of runahead execution.
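One simple realization of such a predictor is a table that maps the last observed data source to its successor with a saturating confidence counter; the table size, hash, and threshold below are illustrative choices, not the patent's design:

    #include <stdint.h>

    #define TABLE 256
    #define CONF_THRESHOLD 3

    typedef struct { uintptr_t next; uint8_t conf; } pattern_t;
    static pattern_t table[TABLE];
    static uintptr_t last_src;

    static unsigned idx(uintptr_t a) { return (unsigned)((a >> 6) % TABLE); }

    /* Called each time a triggering read's data source is observed. */
    void observe_source(uintptr_t src) {
        pattern_t *p = &table[idx(last_src)];
        if (p->next == src) {
            if (p->conf < 255) p->conf++;  /* pattern repeated: strengthen */
        } else {
            p->next = src;                 /* new pattern: restart training */
            p->conf = 0;
        }
        last_src = src;
    }

    /* Called when runahead execution triggers; returns a predicted data
       source for the prefetcher, or 0 if no prediction is yet reliable. */
    uintptr_t predict_source(void) {
        pattern_t *p = &table[idx(last_src)];
        return (p->conf >= CONF_THRESHOLD) ? p->next : 0;
    }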
Abstract:
A method and processor supporting architected instructions for tracking and determining set membership, such as by implementing Bloom filters, are disclosed. The apparatus includes storage arrays (e.g., registers) and an execution core configured to store an indication that a given value is a member of a set, including by executing an architected instruction having an operand specifying the given value, wherein executing comprises applying a hash function to the value to determine an index into one of the storage arrays and setting a bit of the storage array corresponding to the index. An architected query instruction is later executed to determine whether a query value is not a member of the set, including by applying the hash function to the query value to determine an index into the storage array and determining whether the bit at the index of the storage array is set.
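A software analogue of the two architected instructions, assuming a single hash function and a fixed-size bit array (the mixing hash shown is an illustrative choice, not the one the patent specifies):

    #include <stdbool.h>
    #include <stdint.h>

    #define BITS 1024
    static uint64_t filter[BITS / 64];

    static unsigned hash(uint64_t v) {      /* a simple mixing hash */
        v ^= v >> 33; v *= 0xff51afd7ed558ccdULL; v ^= v >> 33;
        return (unsigned)(v % BITS);
    }

    /* Analogue of the "store membership" instruction: hash the value
       and set the corresponding bit. */
    void bloom_insert(uint64_t value) {
        unsigned i = hash(value);
        filter[i / 64] |= 1ULL << (i % 64);
    }

    /* Analogue of the query instruction: a clear bit proves the value
       was never inserted; a set bit only suggests membership (the usual
       one-sided Bloom-filter guarantee). */
    bool bloom_maybe_member(uint64_t value) {
        unsigned i = hash(value);
        return (filter[i / 64] >> (i % 64)) & 1;
    }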
Abstract:
Systems and methods for efficient instruction support of multiple features for opcodes of an instruction set. A processor detects that a fetched instruction of a computer program comprises an opcode corresponding to a plurality of functions, each function corresponding to a different type of operation. The processor determines that the fetched instruction corresponds to a feature requested by the computer program, such as a cryptographic algorithm. A determination is made as to whether hardware support exists for the feature. If hardware support exists for the feature, the instruction is executed on-chip by the hardware. Otherwise, software performs the operation corresponding to the instruction.
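A sketch of that dispatch decision, with all names hypothetical; the stubs stand in for a real capability register and for the hardware and software code paths:

    #include <stdbool.h>

    #define FEATURE_AES 1u

    static bool cpu_has_feature(unsigned id) { (void)id; return false; } /* stub */
    static void hw_crypto_op(void *state)    { (void)state; }  /* on-chip path */
    static void sw_crypto_op(void *state)    { (void)state; }  /* software path */

    /* Route one operation: execute on dedicated hardware when the
       feature is implemented, otherwise compute the same result in
       software. */
    void crypto_op(void *state) {
        if (cpu_has_feature(FEATURE_AES))
            hw_crypto_op(state);   /* instruction executed on-chip */
        else
            sw_crypto_op(state);   /* software performs the operation */
    }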
Abstract:
A processor including instruction support for implementing the Advanced Encryption Standard (AES) block cipher algorithm may issue, for execution, programmer-selectable instructions from a defined instruction set architecture (ISA). The processor may include a cryptographic unit that may receive instructions for execution. The instructions include one or more AES instructions defined within the ISA. In addition, the AES instructions may be executable by the cryptographic unit to implement portions of an AES cipher that is compliant with Federal Information Processing Standards Publication 197 (FIPS 197). In response to receiving a first AES encryption round instruction defined within the ISA, the cryptographic unit may perform an encryption round of the AES cipher on a first group of columns of cipher state having a plurality of rows and columns. A maximum number of columns included in the first group may be fewer than all of the columns of the cipher state.
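One way to picture a round operation restricted to a group of columns is the MixColumns step of FIPS 197 applied to a chosen column range of the 4x4 byte state; a full round instruction would also perform SubBytes, ShiftRows, and AddRoundKey, which are omitted here for brevity:

    #include <stdint.h>

    /* Multiply by 2 in GF(2^8) with the AES reduction polynomial. */
    static uint8_t xtime(uint8_t x) {
        return (uint8_t)((x << 1) ^ ((x & 0x80) ? 0x1b : 0x00));
    }

    /* Apply MixColumns to columns first_col .. first_col+ncols-1 of
       the state, indexed as s[row][col]. */
    void mix_columns_group(uint8_t s[4][4], int first_col, int ncols) {
        for (int c = first_col; c < first_col + ncols; c++) {
            uint8_t a0 = s[0][c], a1 = s[1][c], a2 = s[2][c], a3 = s[3][c];
            uint8_t t  = a0 ^ a1 ^ a2 ^ a3;
            s[0][c] = a0 ^ t ^ xtime(a0 ^ a1);
            s[1][c] = a1 ^ t ^ xtime(a1 ^ a2);
            s[2][c] = a2 ^ t ^ xtime(a2 ^ a3);
            s[3][c] = a3 ^ t ^ xtime(a3 ^ a0);
        }
    }

    /* E.g., an instruction limited to two columns at a time would cover
       the state as mix_columns_group(s, 0, 2); mix_columns_group(s, 2, 2); */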
Abstract:
The generation of Fletcher/Adler partial checksums is transformed from a space that requires integer multiplications and additions to a space that requires only integer additions and shifts on a processor with a single SIMD pipeline. This transformation permits the use of Fletcher/Adler checksums on processors where the performance of SIMD instructions is sub-optimal, on CMT processors that support a single SIMD pipeline, as well as on other processors that can be configured by executing software to implement SIMD operations for a single SIMD pipeline. The implementation of the process with this transformation on a general-purpose computer system transforms that general-purpose computer system into a special-purpose computer system that uses a single SIMD pipeline to generate a Fletcher/Adler checksum. The elimination of integer multiplications in the generation of the partial checksums results in a significant improvement in performance.
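For reference, the addition-only formulation underlying such partial checksums is the classic two-accumulator form: the second sum absorbs the first after every byte, so the weighted sum that would otherwise require multiplications is built from additions alone. This scalar sketch of Adler-32 is not the patent's SIMD transformation; the deferred-modulus bound 5552 follows zlib's implementation:

    #include <stddef.h>
    #include <stdint.h>

    #define ADLER_MOD  65521u
    #define ADLER_NMAX 5552   /* max bytes before 32-bit accumulators
                                 could overflow (as in zlib) */

    uint32_t adler32(const uint8_t *data, size_t len) {
        uint32_t sum1 = 1, sum2 = 0;
        while (len > 0) {
            size_t chunk = len < ADLER_NMAX ? len : ADLER_NMAX;
            len -= chunk;
            while (chunk--) {
                sum1 += *data++;   /* partial checksums: additions only */
                sum2 += sum1;
            }
            sum1 %= ADLER_MOD;     /* reduce once per chunk, not per byte */
            sum2 %= ADLER_MOD;
        }
        return (sum2 << 16) | sum1;   /* combine halves with a shift */
    }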