Abstract:
A technique is provided for use in testing replicated components (e.g., identical circuit components) of an electronic device for defects. In one aspect of this testing technique, the same test inputs may be broadcast, in parallel, from a single test interface to each of the replicated components of the electronic device under test. Respective test outputs generated by the replicated components in response to the test inputs may be supplied to a comparator, included in the electronic device, that compares the respective test outputs to each other and generates a fault signal if corresponding test outputs are not identical. This fault signal may be supplied to an external test interface pin of the single test interface, and its assertion may indicate that one or more of the replicated components may be defective. The respective test outputs may also be multiplexed to permit output, via an external interface, of the test outputs from a selected component. These test outputs may be compared to expected values to determine the presence and/or nature of defects in the replicated components.
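As a rough software illustration of the broadcast-and-compare scheme (not the patented circuitry), the following C sketch applies one stimulus to several identical replicas, compares the outputs to derive a fault flag, and then selects one replica's output for comparison against an expected value. The replica count, the component model, and names such as run_component are assumptions made for the sketch.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_REPLICAS 3   /* assumed replication count for illustration */

    /* Stand-in for one replicated circuit component; a defective replica
     * would return an output that differs from its healthy peers. */
    static uint32_t run_component(int id, uint32_t test_input)
    {
        (void)id;
        return test_input ^ 0xA5A5A5A5u;   /* identical behaviour when healthy */
    }

    int main(void)
    {
        uint32_t test_input = 0x12345678u;   /* broadcast test stimulus */
        uint32_t outputs[NUM_REPLICAS];
        bool fault = false;                  /* models the fault signal */

        /* Broadcast the same test input (conceptually in parallel) to each replica. */
        for (int i = 0; i < NUM_REPLICAS; i++)
            outputs[i] = run_component(i, test_input);

        /* Comparator: assert the fault signal if corresponding outputs differ. */
        for (int i = 1; i < NUM_REPLICAS; i++)
            if (outputs[i] != outputs[0])
                fault = true;

        printf("fault signal: %s\n", fault ? "asserted" : "deasserted");

        /* "Multiplexer": select one replica's output for external observation
         * and compare it against an expected value. */
        int selected = 1;                    /* assumed selection input */
        uint32_t expected = test_input ^ 0xA5A5A5A5u;
        printf("replica %d output %s expected value\n",
               selected, outputs[selected] == expected ? "matches" : "differs from");
        return 0;
    }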
Abstract:
A parallel processor is provided that includes integrated debugging capabilities. The processor includes a pipelined processing engine, having an array of processing element complex stages, and input and output header buffers. A debug system is provided that, when triggered, may put some or all of the processing element complexes into a debug mode of operation. When a complex is in debug mode, the internal states of its component circuits may be examined to facilitate debugging of software and hardware errors that may occur during operation of the processor.
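A minimal software analogy of the debug mechanism, with invented structure and field names, might look like the C sketch below: a trigger sets a per-complex debug flag, and a complex's internal state is only exposed for examination while that flag is set.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_COMPLEXES 4   /* assumed number of processing element complexes */

    /* Minimal model of a processing element complex. */
    struct pe_complex {
        bool     debug_mode;      /* set when the debug system triggers        */
        uint32_t internal_state;  /* stand-in for the circuits' internal state */
    };

    /* Debug trigger: put the selected complexes into debug mode. */
    static void debug_trigger(struct pe_complex *pe, int n, uint32_t select_mask)
    {
        for (int i = 0; i < n; i++)
            if (select_mask & (1u << i))
                pe[i].debug_mode = true;
    }

    /* Internal state may only be examined while a complex is in debug mode. */
    static void examine(const struct pe_complex *pe, int i)
    {
        if (pe[i].debug_mode)
            printf("complex %d state = 0x%08x\n", i, pe[i].internal_state);
        else
            printf("complex %d is running normally; state not exposed\n", i);
    }

    int main(void)
    {
        struct pe_complex pes[NUM_COMPLEXES] = {
            {false, 0x11}, {false, 0x22}, {false, 0x33}, {false, 0x44}
        };

        debug_trigger(pes, NUM_COMPLEXES, 0x5);  /* put complexes 0 and 2 into debug mode */
        for (int i = 0; i < NUM_COMPLEXES; i++)
            examine(pes, i);
        return 0;
    }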
Abstract:
A technique is provided for use in computerized modeling of an electronic system. The technique bases simulation of the system's operation (e.g., timing operation) upon both actual physical characteristics of a part of the system and hierarchical analysis-based models of the rest of the system.
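Purely as an illustration of the hybrid approach (the block structure, delay values, and field names below are invented), a timing estimate might draw on extracted physical delays where they exist and fall back to hierarchical model estimates elsewhere:

    #include <stdio.h>
    #include <stdbool.h>

    /* One block along a signal path.  "Extracted" blocks use a delay taken from
     * actual physical characteristics; the rest fall back to a model-derived value. */
    struct block {
        const char *name;
        bool        extracted;
        double      physical_delay_ns;   /* valid when extracted == true */
        double      model_delay_ns;      /* hierarchical model estimate  */
    };

    int main(void)
    {
        struct block path[] = {
            { "driver",       true,  0.42, 0.50 },
            { "interconnect", false, 0.00, 1.10 },
            { "receiver",     true,  0.35, 0.40 },
        };
        int n = sizeof path / sizeof path[0];

        /* A path crossing both regions accumulates delay from each source. */
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += path[i].extracted ? path[i].physical_delay_ns
                                       : path[i].model_delay_ns;

        printf("estimated path delay: %.2f ns\n", total);
        return 0;
    }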
Abstract:
A multi-port switching device architecture decouples decode logic circuitry of each port of a network switch from its respective state machine logic circuitry and organizes the state machine logic as pools of transmit/receive engine resources that are shared by each of the decode logic circuits. Intermediate priority logic of the switching device cooperates with the decode logic and pooled resources to allocate frames among available resources in accordance with predetermined ordering and fairness policies. These policies prevent misordering of frames from a single source while ensuring that all ports in the device are serviced fairly.
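The ordering and fairness policies can be sketched in software. In the illustrative C program below (port count, pool size, and queue layout are all assumptions), decoded frames sit in per-port FIFOs, ports are visited round-robin so each is serviced fairly, and frames from a single source leave strictly in order while any pooled engine may service them.

    #include <stdio.h>

    #define NUM_PORTS  4   /* assumed port count  */
    #define POOL_SIZE  2   /* assumed engine pool */
    #define MAX_FRAMES 8

    /* Per-port FIFO of decoded frame ids: FIFO order preserves per-source ordering. */
    struct port_queue {
        int frames[MAX_FRAMES];
        int head, tail;
    };

    static int dequeue(struct port_queue *q)
    {
        return (q->head == q->tail) ? -1 : q->frames[q->head++];
    }

    int main(void)
    {
        struct port_queue ports[NUM_PORTS] = {
            { {10, 11, 12}, 0, 3 },   /* frames decoded on port 0 */
            { {20, 21},     0, 2 },
            { {30},         0, 1 },
            { {40, 41},     0, 2 },
        };
        int dispatched = 0;

        /* Round-robin over ports (fairness); each port's frames leave in FIFO
         * order, so frames from a single source are never misordered; the next
         * pooled engine services each frame. */
        for (int progress = 1; progress; ) {
            progress = 0;
            for (int p = 0; p < NUM_PORTS; p++) {
                int frame = dequeue(&ports[p]);
                if (frame < 0)
                    continue;
                progress = 1;
                int engine = dispatched++ % POOL_SIZE;  /* simplistic pool allocation */
                printf("port %d frame %d -> engine %d\n", p, frame, engine);
            }
        }
        return 0;
    }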
Abstract:
A system and technique facilitate fast context switching among processor complex stages of a pipelined processing engine. Each processor complex comprises a central processing unit (CPU) core having a plurality of internal context switchable registers that are connected to respective registers within CPU cores of the pipelined stages by a processor bus. The technique enables fast context switching by sharing the context switchable registers between upstream and downstream CPUs to, inter alia, force program counters into the downstream registers. In one aspect of the inventive technique, the system automatically reflects (shadows) the contents of an upstream CPU's context switchable registers at respective registers of a downstream CPU over the processor bus. In another aspect of the invention, the system redirects instruction execution by the downstream CPU to an appropriate routine based on processing performed by the upstream CPU.
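A simplified software picture of the shadowing aspect, with an assumed register count and invented names, is given below: the upstream CPU's context switchable registers are copied to the downstream CPU and a program counter is forced so that execution resumes at a routine chosen by the upstream processing.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define NUM_CTX_REGS 8   /* assumed number of context switchable registers */

    /* Minimal CPU model: a bank of context switchable registers plus a PC. */
    struct cpu {
        uint32_t ctx_regs[NUM_CTX_REGS];
        uint32_t pc;
    };

    /* Shadow the upstream CPU's context registers at the downstream CPU and
     * force its program counter, so the downstream stage resumes the work
     * without an explicit save/restore sequence. */
    static void shadow_context(const struct cpu *up, struct cpu *down, uint32_t entry_pc)
    {
        memcpy(down->ctx_regs, up->ctx_regs, sizeof down->ctx_regs);
        down->pc = entry_pc;   /* redirect execution to the appropriate routine */
    }

    int main(void)
    {
        struct cpu upstream = { { 1, 2, 3, 4, 5, 6, 7, 8 }, 0x100 };
        struct cpu downstream = { {0}, 0 };

        /* Upstream processing decides which routine the downstream CPU should run. */
        uint32_t routine_entry = 0x2000;
        shadow_context(&upstream, &downstream, routine_entry);

        printf("downstream pc = 0x%x, r0 = %u\n", downstream.pc, downstream.ctx_regs[0]);
        return 0;
    }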
Abstract:
A processor complex architecture facilitates accurate passing of transient data among processor complex stages of a pipelined processing engine. The processor complex comprises a central processing unit (CPU) coupled to an instruction memory and a pair of context data memory structures via a memory manager circuit. The context memories store transient “context” data for processing by the CPU in accordance with instructions stored in the instruction memory. The architecture further comprises data mover circuitry that cooperates with the context memories and memory manager to provide a technique for efficiently passing data among the stages in a manner that maintains data coherency in the processing engine. An aspect of the architecture is the ability of the CPU to operate on the transient data substantially simultaneously with the passing of that data by the data mover.
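One way to picture the pair of context memories and the data mover is as a ping-pong (double-buffered) arrangement, as in the C sketch below; the buffer size, names, and trivial processing step are assumptions, and the parallelism between CPU and data mover is only modeled sequentially here.

    #include <stdio.h>
    #include <string.h>

    #define CTX_WORDS 4   /* assumed context memory size */

    typedef int ctx_mem[CTX_WORDS];

    /* CPU work on the current context (here: trivially increment each word). */
    static void cpu_process(ctx_mem ctx)
    {
        for (int i = 0; i < CTX_WORDS; i++)
            ctx[i]++;
    }

    /* Data mover: pass a finished context to the next stage's memory. */
    static void data_mover_pass(const ctx_mem src, ctx_mem downstream)
    {
        memcpy(downstream, src, sizeof(ctx_mem));
    }

    int main(void)
    {
        ctx_mem bank[2] = { {1, 2, 3, 4}, {5, 6, 7, 8} };  /* pair of context memories */
        ctx_mem next_stage = {0};
        int active = 0;   /* which bank the CPU owns in this phase */

        for (int phase = 0; phase < 2; phase++) {
            /* The CPU works on one bank while, conceptually in parallel, the
             * data mover passes the other bank downstream. */
            cpu_process(bank[active]);
            data_mover_pass(bank[active ^ 1], next_stage);
            active ^= 1;   /* swap roles for the next phase */
        }

        printf("next stage received: %d %d %d %d\n",
               next_stage[0], next_stage[1], next_stage[2], next_stage[3]);
        return 0;
    }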
Abstract:
A processor isolation technique enhances debug capability in a multiprocessor circuit. A bypass register has a bit location which may indicate that a processor is to be bypassed. A code entry point is selected to permit a downstream processor to do the work of the bypassed processor. The processors may be arrayed in a pipeline.
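A hedged software sketch of the bypass mechanism (bit assignments and routine names are invented) might look as follows: a bit in a bypass register disables one processor, and the downstream processor selects an alternate code entry point that also performs the bypassed processor's work.

    #include <stdio.h>
    #include <stdint.h>

    static void stage0_work(void) { printf("stage 0 work\n"); }
    static void stage1_work(void) { printf("stage 1 work\n"); }
    static void stage2_work(void) { printf("stage 2 work\n"); }

    /* Normal entry point for processor 1. */
    static void proc1_normal(void) { stage1_work(); }

    /* Alternate entry point selected when processor 0 is bypassed:
     * processor 1 also does the bypassed processor's work. */
    static void proc1_covering(void) { stage0_work(); stage1_work(); }

    int main(void)
    {
        uint32_t bypass_reg = 1u << 0;   /* bit 0 set: bypass processor 0 (e.g., for debug) */

        /* Processor 0 runs only if its bypass bit is clear. */
        if (!(bypass_reg & (1u << 0)))
            stage0_work();

        /* Processor 1 selects its code entry point based on the bypass register. */
        void (*entry)(void) = (bypass_reg & (1u << 0)) ? proc1_covering : proc1_normal;
        entry();

        stage2_work();   /* processor 2 unaffected */
        return 0;
    }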
Abstract:
A programmable processing engine processes transient data within an intermediate network station of a computer network. The engine comprises an array of processing elements symmetrically arranged as rows and columns and embedded between input and output buffer units, with a plurality of interfaces from the array to an external memory. The external memory stores non-transient data organized within data structures, such as forwarding and routing tables, for use in processing the transient data. Each processing element contains an instruction memory that allows programming of the array to process the transient data as processing element stages of baseline or extended pipelines operating in parallel.
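As an illustrative C sketch only (row and column counts, the toy forwarding table, and the stage functions are all assumptions), the rows-and-columns arrangement can be pictured as identical pipelines programmed per processing element and run in parallel on different transient data:

    #include <stdio.h>

    #define ROWS 2   /* parallel pipelines  */
    #define COLS 3   /* stages per pipeline */

    /* Non-transient data held in "external memory": a toy forwarding table. */
    static const int forwarding_table[4] = { 7, 8, 9, 10 };

    /* Each processing element holds its own "instruction": here a function
     * applied to the transient data flowing through its stage. */
    typedef int (*pe_instr)(int transient);

    static int decode(int t)  { return t & 0x3; }              /* pick a table index   */
    static int lookup(int t)  { return forwarding_table[t]; }  /* consult ext. memory  */
    static int rewrite(int t) { return t + 100; }              /* modify the header    */

    int main(void)
    {
        /* The same baseline pipeline is programmed into every row; rows run in
         * parallel on different transient data (packets). */
        pe_instr array[ROWS][COLS] = {
            { decode, lookup, rewrite },
            { decode, lookup, rewrite },
        };
        int input[ROWS] = { 0x11, 0x22 };   /* transient data from the input buffer */

        for (int r = 0; r < ROWS; r++) {
            int data = input[r];
            for (int c = 0; c < COLS; c++)           /* pass data stage to stage */
                data = array[r][c](data);
            printf("row %d output: %d\n", r, data);  /* to the output buffer */
        }
        return 0;
    }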