Abstract:
An instruction memory system is shared by a plurality of processors, and the system utilizes increased bandwidth to support the combined number of processors. The total instruction address space is divided into code segments according to the disjoint tasks to be performed. The instruction codes of each processor are consolidated into one copy for control instructions and duplicate copies for other disjoint tasks, such as inbound requests and outbound requests, that have greater processor contention. Interleaving of the memory arrays for certain disjoint tasks serves to provide a larger number of instructions for these tasks. The system utilizes arbiters to receive all disjoint tasks and to control multiplexors that send addresses to the memory arrays.
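A minimal C sketch of the arbitration idea described above; the structure, the round-robin policy, and the low-bit interleaving are illustrative assumptions, not details taken from the abstract. Each processor with a pending fetch presents an instruction address, the arbiter grants one requester per cycle, and the granted address drives a multiplexor select toward one of the interleaved memory arrays.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PROCESSORS 4
#define NUM_BANKS      2          /* interleaved arrays for high-contention tasks */

/* One pending instruction fetch per processor (illustrative). */
typedef struct {
    int      valid;               /* processor has a fetch outstanding        */
    uint32_t address;             /* instruction address within its segment   */
} fetch_req_t;

/* Round-robin arbiter: pick one requester and steer its address to a bank.
 * Interleaving by the low address bit spreads fetches across the arrays.   */
static int arbitrate(fetch_req_t req[NUM_PROCESSORS], int *last_grant,
                     uint32_t *granted_addr, int *bank)
{
    for (int i = 1; i <= NUM_PROCESSORS; i++) {
        int p = (*last_grant + i) % NUM_PROCESSORS;
        if (req[p].valid) {
            *granted_addr = req[p].address;
            *bank = (int)(req[p].address % NUM_BANKS);  /* multiplexor select */
            *last_grant = p;
            req[p].valid = 0;
            return p;                                   /* processor granted  */
        }
    }
    return -1;                                          /* no request pending */
}

int main(void)
{
    fetch_req_t req[NUM_PROCESSORS] = {
        {1, 0x0100}, {0, 0}, {1, 0x0203}, {1, 0x0307}
    };
    int last = NUM_PROCESSORS - 1, bank, p;
    uint32_t addr;
    while ((p = arbitrate(req, &last, &addr, &bank)) >= 0)
        printf("grant processor %d: addr 0x%04x -> bank %d\n",
               p, (unsigned)addr, bank);
    return 0;
}
```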
Abstract:
A network switch apparatus, components for such an apparatus, and methods of operating such an apparatus in which data flow handling and flexibility are enhanced by the cooperation of a control point and a plurality of interface processors formed on a semiconductor substrate. The control point and interface processors together form a network processor capable of cooperating with other elements, including an optional switching fabric device, in executing instructions directing the flow of data in a network.
Abstract:
An apparatus is disclosed for transporting control information in a communications system. The apparatus comprises a network processor, a control point processor operatively coupled to the network processor, and a guided frame generated by the control point processor. The guided frame comprises a first section in which frame control information is placed and is used by the network processor to update at least one control register within the network processor; a second section carrying correlators assigned by the control point processor to correlate guided frame responses with their requests; a third section carrying a single guided command or a sequence of guided commands; and an End delimiter guided command.
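The section layout lends itself to a simple data-structure sketch. The field names, field widths, and end-delimiter value below are assumptions made for illustration only; the abstract does not fix a C representation of the guided frame.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t frame_control;   /* first section: used by the network processor
                                 to update at least one control register      */
    uint32_t correlator;      /* second section: assigned by the control point
                                 to match guided-frame responses to requests  */
} guided_frame_header_t;

typedef struct {
    uint32_t opcode;          /* guided command type                          */
    uint32_t operand;         /* command-specific data                        */
} guided_command_t;

#define CMD_END_DELIMITER 0xFFFFFFFFu   /* terminates the command sequence    */

typedef struct {
    guided_frame_header_t header;
    guided_command_t      commands[8]; /* third section: one command or a
                                          sequence, ending with the delimiter */
} guided_frame_t;

int main(void)
{
    guided_frame_t f = { { 0x01, 42 },
                         { { 0x10, 7 }, { CMD_END_DELIMITER, 0 } } };
    printf("correlator %u, first opcode 0x%x\n",
           (unsigned)f.header.correlator, (unsigned)f.commands[0].opcode);
    return 0;
}
```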
Abstract:
A control subsystem, a plurality of interface processors, a plurality of media interfaces, and a plurality of queues are operatively coupled and responsive to a control signal to move data from a memory to a selected one of the plurality of queues.
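A minimal sketch of the data movement just described, assuming an array-backed queue and a numeric queue selector carried by the control signal; both are assumptions of this illustration, not elements recited in the abstract.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define NUM_QUEUES   4
#define QUEUE_DEPTH 16
#define FRAME_BYTES 64

/* Hypothetical queue structure; the abstract does not specify one. */
typedef struct {
    unsigned char frames[QUEUE_DEPTH][FRAME_BYTES];
    size_t        count;
} queue_t;

/* On a control signal, copy one frame from memory into the selected queue.
 * Returns 0 on success, -1 if the selector is invalid or the queue is full. */
static int enqueue_on_control_signal(queue_t queues[NUM_QUEUES], unsigned selector,
                                     const unsigned char *memory_frame)
{
    if (selector >= NUM_QUEUES || queues[selector].count >= QUEUE_DEPTH)
        return -1;
    memcpy(queues[selector].frames[queues[selector].count++],
           memory_frame, FRAME_BYTES);
    return 0;
}

int main(void)
{
    static queue_t queues[NUM_QUEUES];            /* zero-initialized         */
    unsigned char frame[FRAME_BYTES] = { 0xAB };
    int rc = enqueue_on_control_signal(queues, 2, frame);
    printf("enqueue to queue 2: %s (count=%zu)\n",
           rc == 0 ? "ok" : "failed", queues[2].count);
    return 0;
}
```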
Abstract:
Novel data structures, methods, and apparatus for a Software Managed Tree (SMT), which provides a mechanism to create tree structures that follow a search mechanism defined by a control point processor. The search mechanism does not require storage of a previous pointer and uses only a forward pointer along with a next bit, or group of bits, to test, thereby reducing storage space for nodes. The search mechanism processes multiple filter rules for an application without requiring multiple searches and also allows various filter rules to be chained. Two patterns of the same length are stored in each leaf to define a range compare. A compare-at-the-end operation is either a compare under range or a compare under mask. In a compare under range, the input key is checked to determine whether it falls in the range defined by the two patterns. In a compare under mask, the bits in the input key are compared with the bits in a first leaf pattern under a mask specified in a second leaf pattern.
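The two compare-at-the-end operations can be sketched directly. The 32-bit key width, the field names, and the example values below are assumptions made for illustration; the SMT itself simply stores the two same-length patterns in each leaf.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Each leaf stores two patterns of the same length; how they are interpreted
 * depends on which compare-at-the-end operation is selected. */
typedef struct {
    uint32_t pattern1;
    uint32_t pattern2;
} smt_leaf_t;

/* Compare under range: is the key within [pattern1, pattern2]? */
static bool compare_under_range(uint32_t key, const smt_leaf_t *leaf)
{
    return key >= leaf->pattern1 && key <= leaf->pattern2;
}

/* Compare under mask: compare the key with pattern1 only on the bit
 * positions selected by the mask held in pattern2. */
static bool compare_under_mask(uint32_t key, const smt_leaf_t *leaf)
{
    return (key & leaf->pattern2) == (leaf->pattern1 & leaf->pattern2);
}

int main(void)
{
    smt_leaf_t range_leaf = { 0x0A000000u, 0x0AFFFFFFu };  /* e.g. 10.0.0.0/8  */
    smt_leaf_t mask_leaf  = { 0x0A010000u, 0xFFFF0000u };  /* e.g. 10.1.0.0/16 */
    uint32_t key = 0x0A010203u;                            /* 10.1.2.3         */
    printf("range match: %d, mask match: %d\n",
           compare_under_range(key, &range_leaf),
           compare_under_mask(key, &mask_leaf));
    return 0;
}
```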
Abstract:
A transport protocol is provided for communicating between general purpose processors acting as control points and network processors in a packet processing environment such as Ethernet. In such an environment, there is at least one control point processor (CP) and a plurality of network processors (NPs), sometimes referred to as blades. A typical system could contain two to sixteen network processors, and each network processor connects to a plurality of devices which communicate with each other over a network transport, such as Ethernet. The CP typically controls the functioning of the network processors so that one end user is connected with another, whether the end users are on the same network processor or on different network processors. Three types of communication are provided. First, there is communication generally referred to as control services; normally there will be only one pico processor that operates as a GCH (guided cell handler) and only one that operates as a GTH (guided tree handler). A path is provided for the control commands to the GCH and the GTH, and a separate path is provided for the data frames between the GDHs (general data handlers) and the CP.
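A rough sketch of the traffic separation described above, assuming a simple per-frame classification; the enum values and handler names are hypothetical. Guided control traffic is steered to the single GCH or GTH, while ordinary data frames take the GDH path toward the CP.

```c
#include <stdio.h>

/* Hypothetical frame classification; values are illustrative. */
typedef enum { FRAME_GUIDED_CELL, FRAME_GUIDED_TREE, FRAME_DATA } frame_kind_t;

typedef struct { frame_kind_t kind; int payload; } frame_t;

/* One handler of each control type per network processor, many data handlers. */
static void gch_handle(const frame_t *f) { printf("GCH: control cell %d\n", f->payload); }
static void gth_handle(const frame_t *f) { printf("GTH: tree command %d\n", f->payload); }
static void gdh_handle(const frame_t *f) { printf("GDH: data frame %d\n", f->payload); }

/* Control traffic takes the GCH/GTH path; data frames take the GDH<->CP path. */
static void dispatch(const frame_t *f)
{
    switch (f->kind) {
    case FRAME_GUIDED_CELL: gch_handle(f); break;
    case FRAME_GUIDED_TREE: gth_handle(f); break;
    default:                gdh_handle(f); break;
    }
}

int main(void)
{
    frame_t frames[] = { {FRAME_GUIDED_CELL, 1}, {FRAME_DATA, 2}, {FRAME_GUIDED_TREE, 3} };
    for (unsigned i = 0; i < sizeof frames / sizeof frames[0]; i++)
        dispatch(&frames[i]);
    return 0;
}
```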
Abstract:
A data processing system and method in a computer network are disclosed for improving performance of a link aggregation system included in the network. Parameters are established which are utilized to determine performance criteria of the link aggregation system. The performance of the link aggregation system is determined by evaluating the performance criteria. The performance of the link aggregation system changes in response to a changing flow traffic burden on the link aggregation system. The link aggregation system dynamically modifies the parameters in response to its changing performance. The link aggregation system is self-tuning and capable of automatically adjusting to a changing flow traffic burden on the link aggregation system.
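A hedged sketch of such a self-tuning loop, assuming two illustrative parameters (hash granularity and rebalance period) and arbitrary utilization thresholds; the abstract does not name the actual parameters, criteria, or thresholds.

```c
#include <stdio.h>

/* Illustrative tunable parameters; these are assumptions for the sketch. */
typedef struct {
    unsigned hash_buckets;      /* granularity of flow-to-link hashing       */
    unsigned rebalance_period;  /* how often flows are redistributed (ms)    */
} lag_params_t;

/* Adjust parameters when the measured traffic burden crosses thresholds. */
static void self_tune(lag_params_t *p, double offered_load, double link_utilization)
{
    if (link_utilization > 0.85) {            /* heavily loaded: spread finer */
        if (p->hash_buckets < 1024) p->hash_buckets *= 2;
        if (p->rebalance_period > 50) p->rebalance_period /= 2;
    } else if (link_utilization < 0.30) {     /* lightly loaded: reduce churn */
        if (p->hash_buckets > 64) p->hash_buckets /= 2;
        if (p->rebalance_period < 2000) p->rebalance_period *= 2;
    }
    printf("load=%.2f util=%.2f -> buckets=%u period=%ums\n",
           offered_load, link_utilization, p->hash_buckets, p->rebalance_period);
}

int main(void)
{
    lag_params_t p = { 256, 500 };
    self_tune(&p, 8.0, 0.92);   /* burden rises: parameters tighten */
    self_tune(&p, 1.0, 0.20);   /* burden falls: parameters relax   */
    return 0;
}
```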
Abstract:
An embedded processor complex contains multiple protocol processor units (PPUs). Each unit includes at least one, and preferably two, independently functioning core language processors (CLPs). Each CLP supports dual threads, which interact through logical coprocessor execution or data interfaces with a plurality of special purpose coprocessors that serve each PPU. Operating instructions enable the PPU to identify long and short latency events and to control and shift priority for thread execution based on this identification. The instructions also enable the conditional execution of specific coprocessor operations upon the occurrence or non-occurrence of certain specified events.
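A simplified sketch of the latency-based priority shift, assuming two threads per CLP and a per-thread ready flag; the names and the exact switch policy are assumptions of this illustration. A long-latency coprocessor event parks the current thread and shifts execution to the other thread, while a short-latency event does not.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { EVT_NONE, EVT_SHORT_LATENCY, EVT_LONG_LATENCY } event_t;

typedef struct {
    int  active_thread;     /* 0 or 1                                          */
    bool thread_ready[2];   /* a thread stalled on a long-latency event is not ready */
} clp_state_t;

/* React to a coprocessor event reported for the active thread. */
static void on_coprocessor_event(clp_state_t *clp, event_t evt)
{
    if (evt == EVT_LONG_LATENCY) {
        int other = 1 - clp->active_thread;
        clp->thread_ready[clp->active_thread] = false;  /* park the stalled thread */
        if (clp->thread_ready[other])
            clp->active_thread = other;                 /* shift execution priority */
    }
    /* Short-latency events keep the current thread running. */
    printf("event=%d -> active thread %d\n", (int)evt, clp->active_thread);
}

int main(void)
{
    clp_state_t clp = { 0, { true, true } };
    on_coprocessor_event(&clp, EVT_SHORT_LATENCY);  /* stays on thread 0   */
    on_coprocessor_event(&clp, EVT_LONG_LATENCY);   /* switches to thread 1 */
    return 0;
}
```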
Abstract:
A method is provided for utilizing a memory system which allows for the fast and efficient writing and reading of objects to and from diverse memory chips. A computer system and memory system complex according to the method is also provided. The invention defines objects in terms of “shapes.” The shape of an object is defined by two parameters: “Width” and “Height.” The memory system's memory chips may comprise sets of different kinds of memory modules which vary in terms of access speed, latency and memory width, such as, for example, DRAM or SRAM memory modules. The Height of an object denotes the number of consecutive address locations at which the object is stored on a memory module. The Width of an object denotes the number of memory modules at which the object is stored. An advantage of the invention is that objects are defined in terms of “sub-objects” optimized for the memory system's memory modules. The sub-objects match the line-width of the memory, thereby allowing the objects to be efficiently written to different memories or memory banks. A further advantage of the invention is that the sub-object shape is transparent to the requester (i.e. transparent to application assembly language). A further advantage of the invention is that sub-objects are handled independently by the memory arbiter, i.e. they can be written to the memory or read from the memory in any order. The complex may further comprise a Tree Search Memory (TSM) system that utilizes a “tree” object hierarchy to perform high-speed memory lookups. Shaping is used to specify how an object is stored in the TSM. The trees consist of different kinds of objects with different shapes. An important advantage of the invention is that the concept of shapes can be used for memory bandwidth distribution and performance increase, allowing objects that are frequently read from memory to be distributed in a specific sub-object ordering.
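The Width/Height shaping can be illustrated with a small placement sketch. The module count, the base address, and the simple round-robin module assignment are assumptions of this illustration, not the patented mechanism; the point is only that an object decomposes into Width x Height sub-objects that the arbiter may issue independently and in any order.

```c
#include <stdio.h>

#define NUM_MODULES 4      /* parallel memory modules/banks (assumed)        */

/* Shape of an object: Width = number of modules it spans,
 * Height = number of consecutive addresses per module.    */
typedef struct {
    unsigned width;
    unsigned height;
} shape_t;

/* Split an object into Width x Height sub-objects and report where each
 * sub-object would be written.                                            */
static void place_object(shape_t s, unsigned base_module, unsigned base_addr)
{
    for (unsigned h = 0; h < s.height; h++)
        for (unsigned w = 0; w < s.width; w++)
            printf("sub-object (%u,%u) -> module %u, address %u\n",
                   w, h, (base_module + w) % NUM_MODULES, base_addr + h);
}

int main(void)
{
    shape_t leaf = { .width = 2, .height = 3 };  /* e.g. a tree leaf object  */
    place_object(leaf, 1, 0x40);
    return 0;
}
```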