Abstract:
A processor core for supporting the concurrent execution of mixed integer and floating point operations includes integer functional units (110) utilizing 32-bit operand data and a floating point functional unit (22) utilizing up to 82-bit operand data. Eight operand busses (30, 31) connect to the functional units to furnish operand data, and five result busses (32) are connected to the functional units to return results. The width of the operand busses is 41 bits, which is sufficient to communicate either integer or floating point data. The floating point data are communicated using an instruction decoder (18) that apportions a floating point operation operating on 82-bit floating point operand data into multiple suboperations, each associated with a 41-bit suboperand. The operand busses and result busses thus have a data-handling dimension expanded from the standard integer data width of 32 bits to 41 bits for handling the floating point operands. The floating point functional unit recombines the suboperand data into 82 bits for execution of the floating point operation and partitions the 82-bit result for output to the result busses. In addition, the excess capacity of the result busses during integer transfers is used to communicate integer flags.
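A minimal sketch in C of the split/recombine step described above, assuming the 82-bit operand is modeled as two 64-bit words with only the low 82 bits used; the names fp82_t, split_fp82 and join_fp82 are illustrative and do not appear in the source:

#include <stdint.h>
#include <stdio.h>
#include <assert.h>

/* Illustrative model: an 82-bit extended-precision operand held in two
 * 64-bit words (only the low 82 bits are used). */
typedef struct {
    uint64_t lo;   /* bits  0..63 */
    uint64_t hi;   /* bits 64..81 */
} fp82_t;

#define SUBOP_BITS 41u
#define SUBOP_MASK ((1ULL << SUBOP_BITS) - 1)

/* Apportion the 82-bit operand into two 41-bit suboperands, one per
 * 41-bit operand bus transfer. */
static void split_fp82(fp82_t op, uint64_t *sub_lo, uint64_t *sub_hi)
{
    *sub_lo = op.lo & SUBOP_MASK;                           /* bits  0..40 */
    *sub_hi = ((op.hi << 23) | (op.lo >> 41)) & SUBOP_MASK; /* bits 41..81 */
}

/* Recombine the two 41-bit suboperands back into the 82-bit operand,
 * as the floating point functional unit would before execution. */
static fp82_t join_fp82(uint64_t sub_lo, uint64_t sub_hi)
{
    fp82_t op;
    op.lo = (sub_lo & SUBOP_MASK) | (sub_hi << 41);
    op.hi = (sub_hi >> 23) & ((1ULL << 18) - 1);
    return op;
}

int main(void)
{
    fp82_t op = { 0x0123456789ABCDEFULL, 0x2AAAAULL }; /* only 18 hi bits used */
    uint64_t a, b;
    split_fp82(op, &a, &b);
    fp82_t back = join_fp82(a, b);
    assert(back.lo == op.lo && back.hi == op.hi);
    printf("sub_lo=%011llx sub_hi=%011llx\n",
           (unsigned long long)a, (unsigned long long)b);
    return 0;
}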
Abstract:
In a processor (110) that performs multiple instructions in a single cycle, predicts outcomes of branch conditions and speculatively executes instructions based on the branch predictions, a method and apparatus for operating a data stack utilize a remap array (674) to support a stack exchange capability. The remap array is used to correlate a stack pointer (672) to data elements (700) within the stack. A lookahead stack pointer (502) and remap array (504) are updated to preserve the processor's state of operation while speculative instructions are executed.
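A minimal sketch in C of how a remap array can support a stack exchange without moving data, with a copy of the lookahead state standing in for speculative execution; the names stack_state_t, st and fxch, the stack depth and the use of doubles are illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

#define STACK_DEPTH 8

/* Physical data elements; an exchange never moves them. */
static double elements[STACK_DEPTH];

/* A top-of-stack pointer plus a remap array correlating logical stack
 * slots to physical data elements. */
typedef struct {
    unsigned top;                  /* stack pointer                 */
    uint8_t  remap[STACK_DEPTH];   /* logical slot -> element index */
} stack_state_t;

/* Look up the physical element for logical slot ST(i). */
static double *st(stack_state_t *s, unsigned i)
{
    return &elements[s->remap[(s->top + i) % STACK_DEPTH]];
}

/* Exchange ST(0) and ST(i) by swapping remap entries only. */
static void fxch(stack_state_t *s, unsigned i)
{
    unsigned a = s->top % STACK_DEPTH;
    unsigned b = (s->top + i) % STACK_DEPTH;
    uint8_t tmp = s->remap[a];
    s->remap[a] = s->remap[b];
    s->remap[b] = tmp;
}

int main(void)
{
    stack_state_t arch = { .top = 0, .remap = {0,1,2,3,4,5,6,7} };
    for (int i = 0; i < STACK_DEPTH; i++)
        elements[i] = (double)i;

    /* Speculative execution updates a lookahead copy of the state; if
     * the branch prediction was wrong, the copy is discarded and the
     * architectural state is preserved. */
    stack_state_t lookahead = arch;
    fxch(&lookahead, 3);

    printf("speculative ST(0)=%g, architectural ST(0)=%g\n",
           *st(&lookahead, 0), *st(&arch, 0));
    return 0;
}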
Abstract:
Operations of a pipeline processor (110) are resynchronized under designated conditions. The processor updates a fetch program counter (210) and, as directed by the counter, fetches instructions from a memory (114). The processor concurrently dispatches, in the fetched order, multiple instructions to designated functional units (170, 171, 172, 173, 174 and 175). Dispatched instructions are queued in functional unit reservation stations. Result entries corresponding to the queued instructions are allocated in a reorder buffer (126) queue in their order of dispatch. Instructions are executed out of their fetched order and results are entered in the allocated result entries when execution is complete. Allocated result entries at the head of the reorder buffer queue are retired and an instruction pointer (620) is updated. The processor is resynchronized when it detects a resynchronization condition and acknowledges the resynchronization condition in the allocated result entry corresponding to the instruction that detected the condition. When the reorder buffer entry holding the resynchronization acknowledgement is retired, the processor flushes the reorder buffer and the reservation stations of the functional units and redirects the fetch program counter to the instruction addressed by the instruction pointer.
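A minimal behavioral sketch in C of the retire-time check described above; the entry fields, the queue size and the fixed instruction length are illustrative assumptions, not details from the source:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define ROB_ENTRIES 16

typedef struct {
    bool     valid;       /* entry allocated                     */
    bool     complete;    /* result has been written back        */
    bool     resync;      /* resynchronization acknowledged here */
    uint32_t result;
} rob_entry_t;

typedef struct {
    rob_entry_t entry[ROB_ENTRIES];
    unsigned    head, tail;      /* retire / allocate pointers            */
    uint32_t    instruction_ptr; /* address of next instruction to retire */
    uint32_t    fetch_pc;        /* fetch program counter                 */
} rob_t;

/* Illustrative hook: empty the reservation stations of every functional unit. */
static void flush_reservation_stations(void) { /* ... */ }

/* Retire the entry at the head of the queue.  If it acknowledges a
 * resynchronization condition, flush the reorder buffer and the
 * reservation stations and redirect the fetch program counter to the
 * instruction addressed by the instruction pointer. */
static void retire_one(rob_t *rob)
{
    rob_entry_t *e = &rob->entry[rob->head];
    if (!e->valid || !e->complete)
        return;                          /* nothing ready to retire */

    if (e->resync) {
        memset(rob->entry, 0, sizeof rob->entry);
        rob->head = rob->tail = 0;
        flush_reservation_stations();
        rob->fetch_pc = rob->instruction_ptr;
        return;
    }

    /* Normal retirement: release the entry and advance the pointers. */
    e->valid = false;
    rob->head = (rob->head + 1) % ROB_ENTRIES;
    rob->instruction_ptr += 4;           /* assumes fixed-length instructions */
}

int main(void)
{
    rob_t rob = {0};
    rob.entry[0] = (rob_entry_t){ .valid = true, .complete = true, .resync = true };
    rob.instruction_ptr = 0x8000;
    retire_one(&rob);
    printf("fetch redirected to %#x\n", (unsigned)rob.fetch_pc);
    return 0;
}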
Abstract:
A hierarchical encoding format for coding repairs to devices within a computing system. A device, such as a cache memory, is logically partitioned into a plurality of sub-portions. Various sub-portions are identifiable as different levels of the hierarchy of the device. A first sub-portion may correspond to a particular cache, a second sub-portion may correspond to a particular way of the cache, and so on. The encoding format comprises a series of bits, with a first portion of the bits corresponding to a first level of the hierarchy and a second portion of the bits corresponding to a second level of the hierarchy. Each of the first and second portions of bits is preceded by a different valued bit which serves to identify the level of the hierarchy to which the following bits correspond. A sequence of repairs is encoded as a string of bits. The bit which follows a complete repair encoding indicates whether a repair to the currently identified cache is indicated or whether a new cache is targeted by the following repair. Therefore, certain repairs may be encoded without respecifying the entire hierarchy.
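A minimal sketch in C of a bit-serial encoder following the general scheme described above; the field widths, marker-bit values and function names are illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

/* Illustrative bit-stream writer: bits are appended MSB-first. */
typedef struct {
    uint8_t  buf[64];
    unsigned nbits;
} bitstream_t;

static void put_bits(bitstream_t *bs, uint32_t value, unsigned width)
{
    for (int i = (int)width - 1; i >= 0; i--) {
        unsigned bit = (value >> i) & 1u;
        bs->buf[bs->nbits / 8] |= (uint8_t)(bit << (7 - bs->nbits % 8));
        bs->nbits++;
    }
}

/* Field widths are illustrative, not taken from the source. */
#define CACHE_ID_BITS 3   /* which cache             */
#define WAY_BITS      4   /* which way of that cache */
#define ROW_BITS      8   /* which row/entry         */

/* Encode one repair.  A leading '1' introduces a new cache (a higher
 * level of the hierarchy); a leading '0' says the repair applies to the
 * cache identified by the previous repair, so the cache-id field is
 * omitted and the hierarchy is not respecified. */
static void encode_repair(bitstream_t *bs, int new_cache,
                          unsigned cache_id, unsigned way, unsigned row)
{
    if (new_cache) {
        put_bits(bs, 1, 1);
        put_bits(bs, cache_id, CACHE_ID_BITS);
    } else {
        put_bits(bs, 0, 1);
    }
    put_bits(bs, way, WAY_BITS);
    put_bits(bs, row, ROW_BITS);
}

int main(void)
{
    bitstream_t bs = {0};
    encode_repair(&bs, 1, 2, 5, 0x31);  /* first repair: full hierarchy  */
    encode_repair(&bs, 0, 0, 7, 0x4C);  /* same cache: hierarchy omitted */
    printf("encoded %u bits\n", bs.nbits);
    return 0;
}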
Abstract:
A superscalar processor may issue multiple instructions per clock cycle. Included in a superscalar processor may be a reorder buffer which stores information corresponding to concurrently dispatched instructions. Dependencies may exist among the instructions which are concurrently dispatched. When such a dependency is detected within a group of concurrently dispatched instructions, an indication of the dependency, along with an indication of the position of the dependency, is conveyed to the corresponding reservation station. When the reservation station receives the indication of the dependency, the operand tag associated with the dependency may be replaced with the correct tag. Advantageously, the circuitry needed to resolve the dependency may be moved out of the critical path of the processor, improving performance by allowing the processor to operate at an increased frequency.
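A minimal sketch in C of intra-group dependency detection at dispatch; the group size, register encoding and structure names are illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

#define GROUP_SIZE 4   /* instructions dispatched per cycle (illustrative) */

typedef struct {
    uint8_t dest;      /* destination register       */
    uint8_t src;       /* one source register        */
    uint8_t rob_tag;   /* tag of allocated ROB entry */
} dispatch_t;

typedef struct {
    int     dependent;   /* dependency within the dispatch group? */
    uint8_t position;    /* which earlier slot produces the value */
} dep_info_t;

/* For each instruction, scan the earlier slots of the same dispatch
 * group; the youngest earlier writer of the source register wins.  The
 * result is conveyed to the reservation station, which replaces the
 * stale operand tag with the tag of that slot's reorder buffer entry. */
static dep_info_t detect_dep(const dispatch_t *group, int slot)
{
    dep_info_t d = { 0, 0 };
    for (int i = slot - 1; i >= 0; i--) {
        if (group[i].dest == group[slot].src) {
            d.dependent = 1;
            d.position  = (uint8_t)i;
            break;
        }
    }
    return d;
}

int main(void)
{
    dispatch_t group[GROUP_SIZE] = {
        { .dest = 1, .src = 4, .rob_tag = 10 },
        { .dest = 2, .src = 1, .rob_tag = 11 },  /* reads r1 written by slot 0 */
        { .dest = 1, .src = 5, .rob_tag = 12 },
        { .dest = 3, .src = 1, .rob_tag = 13 },  /* reads r1 written by slot 2 */
    };
    for (int s = 0; s < GROUP_SIZE; s++) {
        dep_info_t d = detect_dep(group, s);
        if (d.dependent)
            printf("slot %d: operand tag replaced with ROB tag %u (from slot %u)\n",
                   s, (unsigned)group[d.position].rob_tag, (unsigned)d.position);
    }
    return 0;
}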
Abstract:
In a processor, a reorder buffer maintains a load/store (LS) fault address register (LSFAR). When the processor's load/store unit reports most LS exceptions, the reorder buffer redirects the microcode unit of the processor to execute a fault handler indicated by an address stored in the LSFAR. The LSFAR may be mapped into the register space of the processor and may be written by a microcode routine with the address of a specific fault handler, either at the beginning of the routine or at any time during it. As the reorder buffer retires instructions, it checks for writes to the LSFAR; if one exists, the reorder buffer loads the result data of that write into the LSFAR. In a preferred embodiment the reorder buffer retires instructions in program order and the LSFAR is not updated speculatively. Also in a preferred embodiment, when a microcode routine exits, the LSFAR is automatically returned to a default value which indicates a generic fault handling routine.
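A minimal sketch in C of the retire-time LSFAR handling described above; the handler addresses, register identifier and structure names are illustrative assumptions:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative values, not taken from the source. */
#define GENERIC_LS_FAULT_HANDLER 0x1000u
#define LSFAR_REGISTER_ID        0x3Fu    /* mapped into register space */

typedef struct {
    uint32_t lsfar;           /* load/store fault address register */
} rob_state_t;

typedef struct {
    bool     writes_lsfar;    /* retiring op targets the LSFAR        */
    uint32_t result;          /* result data of the write             */
    bool     microcode_exit;  /* retiring op ends a microcode routine */
} retiring_op_t;

/* Update the LSFAR only at retirement, so it is never written
 * speculatively; on microcode-routine exit it reverts to the default
 * generic handler. */
static void retire(rob_state_t *rob, const retiring_op_t *op)
{
    if (op->writes_lsfar)
        rob->lsfar = op->result;
    if (op->microcode_exit)
        rob->lsfar = GENERIC_LS_FAULT_HANDLER;
}

/* When the load/store unit reports an exception, the reorder buffer
 * redirects the microcode unit to the handler named by the LSFAR. */
static uint32_t handle_ls_exception(const rob_state_t *rob)
{
    return rob->lsfar;
}

int main(void)
{
    rob_state_t rob = { .lsfar = GENERIC_LS_FAULT_HANDLER };
    retiring_op_t set = { .writes_lsfar = true, .result = 0x2000u };
    retire(&rob, &set);
    printf("handler on LS exception: %#x\n", (unsigned)handle_ls_exception(&rob));

    retiring_op_t exit_mrom = { .microcode_exit = true };
    retire(&rob, &exit_mrom);
    printf("handler after microcode exit: %#x\n", (unsigned)handle_ls_exception(&rob));
    return 0;
}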
Abstract:
An enable circuit (700), employing a "circular carry lookahead" technique to increase its speed, is provided for applying two pointers to a circular buffer--an enabling pointer (tail (218)) and a disabling pointer (head (216))--and for generating a multiple-bit enable, ENA (722), in accordance with the pointer values. The pointers designate enable bit boundaries for isolating enable bits of one logic level from enable bits of an opposite logic level. The enable circuit includes several lookahead cells (702, 704, 706 and 708) arranged in a hierarchical array, each of the cells including bits that continue the hierarchical significance. Each cell receives a hierarchical portion of the enabling pointer (218) and the disabling pointer (216), and a carry. From these pointers, the cell derives a generate, a propagate and the enable bits with a corresponding hierarchical significance. The propagates, generates and carries for all of the lookahead cells are interconnected using a circular propagate carry circuit (710) that provides for asserting a carry to a lookahead cell unless an intervening cell having a nonasserted propagate is interposed in the order of hierarchical significance between that cell and a cell in which enablement is generated.
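A minimal behavioral sketch in C of the enable function the circuit computes, asserting the bits from the enabling (tail) pointer up to the disabling (head) pointer with wraparound; it models only the result, not the circular carry-lookahead structure, and the buffer size is an illustrative assumption:

#include <stdio.h>
#include <stdint.h>

#define ENTRIES 16   /* circular buffer size (illustrative) */

/* Behavioral model of the enable function: assert the bits starting at
 * the enabling pointer (tail) up to, but not including, the disabling
 * pointer (head), wrapping around the circular buffer.  The hardware
 * described above computes the same mask with a circular
 * carry-lookahead network instead of a loop. */
static uint32_t enable_mask(unsigned tail, unsigned head)
{
    uint32_t ena = 0;
    for (unsigned i = tail; i != head; i = (i + 1) % ENTRIES)
        ena |= 1u << i;
    return ena;
}

int main(void)
{
    /* No wrap: entries 3..9 enabled. */
    printf("ENA = %04x\n", (unsigned)enable_mask(3, 10));
    /* Wrap around the end of the buffer: entries 13..15 and 0..4 enabled. */
    printf("ENA = %04x\n", (unsigned)enable_mask(13, 5));
    return 0;
}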
Abstract:
The present invention relates to expansion anchors for solid wall installation and is specifically concerned with providing a self-cutting expansion anchor which can be installed in one continuous motion by utilizing combined cutting blades and wall gripping members which cut their own undercut portion within a wall bore, into which the gripping members are then permanently further expanded in positive locking engagement. Such dual-stage installation is achieved by utilizing an anchor mounting assembly having a pair of opposite-hand screw-threaded portions thereon which separately mount a blade expanding thrust member and a camming ramp, on which the blades are initially expanded by axial movement of the thrust member toward the ramp, the second stage causing the ramp to move axially toward the thrust member in further expanding relation to the blades.
Abstract:
The foam tube for pipe insulation has an external surface and an internal surface. The internal surface is provided with an adhesively bonded layer of fibers. The fibers are made of a material having a melt temperature higher than that of the polymeric foam. The fibers are adhesively bonded to the internal surface so as to stand up from it, and are substantially uniformly distributed over the internal surface, providing a surface coverage of 2 to 20 percent. Further, the fibers have a linear density of 0.5 to 25 dtex and a length of 0.2 to 5 mm. With this fiber layer, the polymeric foam tube has improved thermal resistance and thermal conductivity.