Abstract:
An apparatus including first and second reservation stations. The first reservation station dispatches a load micro instruction and indicates on a hold bus whether the load micro instruction is a specified load micro instruction directed to retrieve an operand from a prescribed resource other than on-core cache memory. The second reservation station is coupled to the hold bus and dispatches one or more younger micro instructions that depend on the load micro instruction a number of clock cycles after dispatch of the load micro instruction. If the hold bus indicates that the load micro instruction is the specified load micro instruction, the second reservation station stalls dispatch of the one or more younger micro instructions until the load micro instruction has retrieved the operand. The prescribed resources include an input/output (I/O) unit configured to perform I/O operations via an I/O bus that couples an out-of-order processor to I/O resources.
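A small software model may help make the dispatch/stall behavior concrete. The sketch below assumes a fixed load-to-use dispatch delay and a two-flag hold bus; all names (DISPATCH_DELAY_CYCLES, hold_bus, may_dispatch_dependent) are illustrative and are not taken from the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model only, not the patented hardware. A dependent micro
 * instruction normally dispatches a fixed number of cycles after the load;
 * if the hold bus flags the load as targeting a resource other than on-core
 * cache (e.g. an I/O unit), dispatch stalls until the operand is back. */

enum { DISPATCH_DELAY_CYCLES = 4 };   /* assumed fixed load-to-use latency */

struct hold_bus {
    bool specified_load;   /* load targets an off-core resource (e.g. I/O)  */
    bool operand_ready;    /* asserted once the load has retrieved its data */
};

/* Returns true when the younger, dependent micro instruction may dispatch. */
static bool may_dispatch_dependent(const struct hold_bus *bus,
                                   int cycles_since_load_dispatch)
{
    if (bus->specified_load)
        return bus->operand_ready;                 /* stall until data back */
    return cycles_since_load_dispatch >= DISPATCH_DELAY_CYCLES;
}

int main(void)
{
    struct hold_bus bus = { .specified_load = true, .operand_ready = false };
    printf("cycle 4, I/O load pending: %d\n", may_dispatch_dependent(&bus, 4));
    bus.operand_ready = true;
    printf("operand returned:          %d\n", may_dispatch_dependent(&bus, 4));
    return 0;
}
```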
Abstract:
A flexible AES instruction set for a general purpose processor is provided. The instruction set includes instructions to perform a 'one round' pass for AES encryption or decryption, and also includes instructions to perform key generation. An immediate may be used to indicate the round number and key size for key generation with 128-, 192-, or 256-bit keys. The flexible AES instruction set enables full use of pipelining capabilities because it does not require tracking of implicit registers.
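Round-at-a-time encryption and key-generation assist of this kind are exposed to C programmers through the AES intrinsics in `<wmmintrin.h>` (e.g. `_mm_aesenc_si128`, `_mm_aeskeygenassist_si128`, whose immediate carries a round constant). The sketch below assumes an AES-128 key schedule already expanded into `rk[0..10]`; it is an illustration of this style of instruction use, not material from the abstract itself.

```c
#include <wmmintrin.h>   /* AES intrinsics; compile with -maes */
#include <stdint.h>

/* Single-block AES-128 encryption using round-at-a-time instructions.
 * Key expansion is abbreviated: only the key-generation assist call with an
 * immediate round constant is shown; folding its result into the previous
 * round key is omitted for brevity. */
static void aes128_encrypt_block(const __m128i rk[11],
                                 const uint8_t in[16], uint8_t out[16])
{
    __m128i state = _mm_loadu_si128((const __m128i *)in);
    state = _mm_xor_si128(state, rk[0]);            /* initial whitening    */
    for (int round = 1; round < 10; ++round)
        state = _mm_aesenc_si128(state, rk[round]); /* one full round each  */
    state = _mm_aesenclast_si128(state, rk[10]);    /* final round          */
    _mm_storeu_si128((__m128i *)out, state);
}

/* Key-generation assist for the first expansion step of a 128-bit key.
 * The immediate encodes the round constant (0x01 for round 1). */
static __m128i keygen_step1(__m128i key)
{
    return _mm_aeskeygenassist_si128(key, 0x01);
}
```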
Abstract:
A hybrid system-on-chip provides a plurality of memory and processor die mounted on a semiconductor carrier chip that contains a fully integrated power management system that switches DC power at speeds that match or approach processor core clock speeds, thereby allowing the efficient transfer of data between off-chip physical memory and processor die.
Abstract:
Methods and apparatus are disclosed to cache code in non-volatile memory. A disclosed example method includes identifying an instance of a code request for first code, identifying whether the first code is stored in a non-volatile (NV) random access memory (RAM) cache, and, when the first code is absent from the NV RAM cache, adding the first code to the NV RAM cache when a first condition associated with the first code is met and preventing storage of the first code in the NV RAM cache when the first condition is not met.
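The decision flow can be sketched in a few lines. The condition used below (a minimum number of prior requests for the code) is a hypothetical stand-in for the "first condition" in the abstract, and all structure and function names are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define NV_CACHE_SLOTS        64
#define MIN_REQUESTS_TO_CACHE 3    /* assumed stand-in for the condition */

struct nv_entry {
    bool        valid;
    const char *code_id;
    int         request_count;
};

static struct nv_entry nv_cache[NV_CACHE_SLOTS];

/* Linear lookup of the requested code in the NV RAM cache model. */
static struct nv_entry *nv_lookup(const char *code_id)
{
    for (size_t i = 0; i < NV_CACHE_SLOTS; ++i)
        if (nv_cache[i].valid && strcmp(nv_cache[i].code_id, code_id) == 0)
            return &nv_cache[i];
    return NULL;
}

/* Handle one code request; returns true if the code is (now) in the cache. */
static bool handle_code_request(const char *code_id, int prior_requests)
{
    if (nv_lookup(code_id))
        return true;                               /* already cached        */
    if (prior_requests < MIN_REQUESTS_TO_CACHE)
        return false;                              /* condition not met     */
    for (size_t i = 0; i < NV_CACHE_SLOTS; ++i) {  /* condition met: insert */
        if (!nv_cache[i].valid) {
            nv_cache[i] = (struct nv_entry){ true, code_id, prior_requests };
            return true;
        }
    }
    return false;                                  /* cache full            */
}
```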
Abstract:
A method of identifying instructions including accessing a plurality of instructions that comprise multiple branch instructions. For each branch instruction of the multiple branch instructions, a respective first mask is generated representing instructions that are executed if a branch is taken. A respective second mask is generated representing instructions that are executed if the branch is not taken. A prediction output is received that comprises a respective branch prediction for each branch instruction. For each branch instruction, the prediction output is used to select a respective resultant mask from among the respective first and second masks. For each branch instruction, a resultant mask of a subsequent branch is invalidated if a previous branch is predicted to branch over said subsequent branch. A logical operation is performed on all resultant masks to produce a final mask. The final mask is used to select a subset of instructions for execution.
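The mask selection and combination can be sketched as bit-vector operations over a fetch group. The sketch assumes a 16-instruction fetch group, AND as the combining logical operation, and treats an invalidated mask as contributing all-ones (here simply skipped); these are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_BRANCHES 4

/* Per-branch masks over a 16-instruction fetch group. */
struct branch_info {
    uint16_t taken_mask;      /* instructions executed if the branch is taken */
    uint16_t not_taken_mask;  /* instructions executed if it is not taken     */
    int      position;        /* index of the branch within the fetch group   */
    int      taken_target;    /* index the branch jumps to when taken         */
};

/* Select a resultant mask per prediction, invalidate masks of branches that
 * a previous branch is predicted to jump over, and AND the survivors into
 * the final mask used to select instructions for execution. */
static uint16_t final_mask(const struct branch_info *br,
                           const bool *predict_taken, int nbranches)
{
    uint16_t result = 0xFFFF;
    int skip_until = -1;           /* instructions before this index are skipped */
    for (int i = 0; i < nbranches; ++i) {
        if (br[i].position < skip_until)
            continue;              /* branch jumped over: its mask is invalidated */
        uint16_t m = predict_taken[i] ? br[i].taken_mask : br[i].not_taken_mask;
        result &= m;               /* combine resultant masks */
        if (predict_taken[i])
            skip_until = br[i].taken_target;
    }
    return result;
}
```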
Abstract:
A digitally signed file system in which data, metadata and files are objects, each object having a globally unique and content-derived fingerprint and wherein object references are mapped by the fingerprints; the file system has a root object comprising a mapping of all object fingerprints in the file system, such that a change to the file system results in a change in the root object, and tracking changes in the root object provides a history of file system activity.
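A sketch of the core structures follows, with a toy 64-bit FNV-1a hash standing in for the content-derived cryptographic fingerprint and a flat in-memory table standing in for the root object's mapping; both are simplifications for illustration only.

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t fingerprint_t;

/* Toy content-derived fingerprint (FNV-1a); a real implementation would use
 * a cryptographic digest. */
static fingerprint_t fingerprint(const void *data, size_t len)
{
    const uint8_t *p = data;
    fingerprint_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; ++i) {
        h ^= p[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

#define MAX_OBJECTS 128

/* Root object: a mapping of all object fingerprints in the file system.
 * Any change to a data, metadata, or file object changes its fingerprint,
 * which changes this table and hence the root object itself, so tracking
 * the root object yields a history of file system activity. */
struct root_object {
    fingerprint_t entries[MAX_OBJECTS];
    size_t        count;
};

static fingerprint_t add_object(struct root_object *root,
                                const void *data, size_t len)
{
    fingerprint_t fp = fingerprint(data, len);
    if (root->count < MAX_OBJECTS)
        root->entries[root->count++] = fp;
    return fp;
}

/* Fingerprint of the root object, i.e. of the whole file system state. */
static fingerprint_t root_fingerprint(const struct root_object *root)
{
    return fingerprint(root->entries, root->count * sizeof(fingerprint_t));
}
```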
Abstract:
A mechanism that manages variable length instructions in cache comprises three cooperating elements designed to optimize self-modifying code and to anticipate next instructions for branch operand management. A content addressable memory (CAM) stores the addresses of lines that have been accessed for instruction fetching. In a system having a modifiable instruction stream (i.e., store to instruction stream), when the CAM matches, the system must retire certain instructions, flush instructions, and then fetch the modified instruction stream. Boundary identification logic examines a field in each cache byte to determine the nature of the byte; this field is initially cleared when the cache line is loaded and is filled in as the line is fetched. An anticipation buffer, designed to minimize the circuitry necessary for fetches across cache lines, is loaded with sequentially anticipated prefetched instructions from the cache. These anticipated instructions can then be concatenated by a fetch aligner.
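The CAM's store-to-instruction-stream check can be sketched as below; the entry count, line size, and replacement policy are assumptions, and the boundary-identification field and anticipation buffer are not modeled.

```c
#include <stdint.h>
#include <stdbool.h>

#define CAM_ENTRIES 32
#define LINE_SHIFT  6            /* assumed 64-byte cache lines */

/* Model of the CAM that records addresses of lines fetched for instructions. */
struct fetch_cam {
    uint64_t line_addr[CAM_ENTRIES];
    bool     valid[CAM_ENTRIES];
    unsigned next;               /* simple round-robin replacement */
};

/* Record a cache line that has been accessed for instruction fetching. */
static void cam_record_fetch(struct fetch_cam *cam, uint64_t addr)
{
    cam->line_addr[cam->next] = addr >> LINE_SHIFT;
    cam->valid[cam->next] = true;
    cam->next = (cam->next + 1) % CAM_ENTRIES;
}

/* Returns true if a store hits a line previously fetched for instructions,
 * i.e. a store to the instruction stream that requires the pipeline to
 * retire what it can, flush younger instructions, and refetch the line. */
static bool cam_store_hits_fetched_line(const struct fetch_cam *cam,
                                        uint64_t addr)
{
    uint64_t line = addr >> LINE_SHIFT;
    for (unsigned i = 0; i < CAM_ENTRIES; ++i)
        if (cam->valid[i] && cam->line_addr[i] == line)
            return true;
    return false;
}
```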