Abstract:
A network device routes data packets by storing each packet in a switching memory as a function of the packet's destination address. The switching memory comprises switching memory queues that are mapped to ports of the device. A header of a received packet is examined to determine the network destination address to which it is to be routed, and a destination queue is assigned to the packet based on the destination address. Thereafter, the packet is divided into cells, and the cells are written to contiguous locations in the destination queue.
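For illustration, the sketch below models the queue-per-port idea under stated assumptions: the class and function names (SwitchingMemory, lookup_port), the flat address-to-port table, and the 64-byte cell size are hypothetical, not taken from the abstract. The destination address selects a per-port queue, and the packet is split into fixed-size cells appended to consecutive slots in that queue.

```python
# Hypothetical sketch of per-port switching-memory queues: the destination
# address selects a queue, and the packet is split into fixed-size cells
# that are written to consecutive slots in that queue. Names are illustrative.

CELL_SIZE = 64  # bytes per cell (assumed value)

class SwitchingMemory:
    def __init__(self, num_ports):
        # One queue (list of cells) per output port.
        self.queues = {port: [] for port in range(num_ports)}

    def enqueue_packet(self, packet: bytes, dest_port: int):
        # Divide the packet into cells and append them contiguously
        # to the destination queue.
        cells = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
        self.queues[dest_port].extend(cells)

def lookup_port(dest_addr: str, forwarding_table: dict) -> int:
    # Map the packet's destination address to an output port.
    return forwarding_table[dest_addr]

if __name__ == "__main__":
    table = {"10.0.0.1": 2}
    mem = SwitchingMemory(num_ports=4)
    pkt = bytes(200)  # a 200-byte payload splits into four cells
    mem.enqueue_packet(pkt, lookup_port("10.0.0.1", table))
    print(len(mem.queues[2]))  # -> 4
```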
Abstract:
A method for routing and switching data packets from one or more incoming links to one or more outgoing links of a router. The method comprises receiving a data packet from the incoming link, assigning at least one outgoing link to the data packet based on the destination address of the data packet, and, after the assigning operation, storing the data packet in a switching memory based on the assigned outgoing link. The data packet is then extracted from the switching memory and transmitted along the assigned outgoing link. The router may include a network processing unit having one or more systolic array pipelines for performing the assigning operation.
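The following sketch illustrates the assign-then-store ordering. The lookup is shown as a toy longest-prefix match, which is an assumption for the example (the abstract does not specify the lookup method), and the route table and link names are invented.

```python
# Illustrative sketch of assign-then-store ordering: the outgoing link is
# resolved from the destination address first (here via a toy longest-prefix
# match), and only then is the packet buffered per assigned link.
# The prefix table and link identifiers are assumptions for the example.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "link0"),
    (ipaddress.ip_network("10.1.0.0/16"), "link1"),
]

def assign_link(dest: str) -> str:
    # Longest-prefix match over the route table.
    addr = ipaddress.ip_address(dest)
    matches = [(net.prefixlen, link) for net, link in ROUTES if addr in net]
    return max(matches)[1]

switching_memory = {"link0": [], "link1": []}

def route(packet: bytes, dest: str):
    link = assign_link(dest)               # assignment happens first
    switching_memory[link].append(packet)  # then storage keyed by the link
    return link

print(route(b"payload", "10.1.2.3"))  # -> link1 (the more specific /16 wins)
```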
Abstract:
A processor for use in a router, the processor having a systolic array pipeline for processing data packets to determine the output port of the router to which each data packet should be routed. In one embodiment, the systolic array pipeline includes a plurality of programmable functional units and register files arranged sequentially as stages, for processing packet contexts (which contain the packet's destination address) to perform operations, under programmatic control, to determine the destination port of the router for the packet. A single stage of the systolic array may contain a register file and one or more functional units such as adders, shifters, and logical units for performing, in one example, very long instruction word (VLIW) operations. The processor may also include an on-chip forwarding table memory for storing routing information, and a crossbar selectively connecting the stages of the systolic array with the forwarding table memory.
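A toy software model of the stage structure is sketched below, under assumptions: the stage functions, register-file size, and forwarding-table contents are invented for illustration and do not reflect the patented hardware. Each stage owns a small register file and a functional-unit operation, and the packet context flows through the stages in order until a destination port is produced.

```python
# Toy model (not the patented design) of a systolic pipeline: each stage has
# a local register file and a programmable operation; the packet context is
# passed stage to stage and ends up carrying the resolved destination port.

class PacketContext:
    def __init__(self, dest_addr: int):
        self.dest_addr = dest_addr
        self.lookup_key = None
        self.dest_port = None

class Stage:
    def __init__(self, func):
        self.registers = [0] * 8   # per-stage register file (assumed size)
        self.func = func           # the stage's functional-unit operation

    def process(self, ctx):
        return self.func(ctx, self.registers)

FORWARDING_TABLE = {0x0A000001: 3}   # assumed on-chip forwarding-table contents

def extract_key(ctx, regs):
    regs[0] = ctx.dest_addr        # shift/mask work on the address would go here
    ctx.lookup_key = regs[0]       # results travel with the context to later stages
    return ctx

def table_lookup(ctx, regs):
    ctx.dest_port = FORWARDING_TABLE.get(ctx.lookup_key, 0)  # default port 0
    return ctx

pipeline = [Stage(extract_key), Stage(table_lookup)]

ctx = PacketContext(dest_addr=0x0A000001)
for stage in pipeline:             # the context flows from stage to stage
    ctx = stage.process(ctx)
print(ctx.dest_port)               # -> 3
```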
Abstract:
Multi-chassis fabric-backplane enterprise servers include a plurality of chassis managed collectively to form one or more provisioned servers. A central client coordinates gathering of provisioning and management information from the chassis, and arranges for distribution of control information to the chassis. One of the chassis may perform as a host or proxy with respect to information and control communication between the client and the chassis. Server provisioning and management information and commands move throughout the chassis via an Open Shortest Path First (OSPF) protocol. Alternatively, the client may establish individual communication with a subset of the chassis, and directly communicate with chassis in the subset. Server provisioning and management information includes events generated when module status changes, such as when a module is inserted and becomes available, and when a module fails and is no longer available. Each chassis includes a switch fabric enabling communication between chassis modules.
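As a rough sketch of the status-change events mentioned above, the code below models a chassis that emits an event when a module is inserted and becomes available, or fails and is no longer available. The class and field names are assumptions for illustration, not part of the described system.

```python
# Hedged sketch (illustrative names only): inserting a module or marking it
# failed records a status-change event that a central client could gather
# from each chassis.
from dataclasses import dataclass, field

@dataclass
class ModuleEvent:
    chassis: str
    module: str
    status: str  # "available" or "failed"

@dataclass
class Chassis:
    name: str
    modules: dict = field(default_factory=dict)
    events: list = field(default_factory=list)

    def insert_module(self, module: str):
        self.modules[module] = "available"
        self.events.append(ModuleEvent(self.name, module, "available"))

    def fail_module(self, module: str):
        self.modules[module] = "failed"
        self.events.append(ModuleEvent(self.name, module, "failed"))

chassis = Chassis("chassis-1")
chassis.insert_module("module-0")
chassis.fail_module("module-0")
print([e.status for e in chassis.events])  # -> ['available', 'failed']
```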
Abstract:
Pluggable modules communicate via a switch fabric dataplane accessible via a backplane. Various embodiments comprise varying numbers and arrangements of the pluggable modules in accordance with a system architecture that provides for provisioning virtual servers and clusters of servers from underlying hardware and software resources. The system architecture is a unifying solution for applications requiring a combination of computation and networking performance. Resources may be pooled, scaled, and reclaimed dynamically for new purposes as requirements change, using dynamic reconfiguration of virtual computing and communication hardware and software.
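A minimal sketch of the pool-provision-reclaim cycle follows, under assumptions: the ResourcePool class, module names, and allocation policy are invented for illustration and are not the described architecture.

```python
# Illustrative sketch of dynamic pooling: hardware modules sit in a shared
# pool, a provisioned server draws modules from the pool, and reclaiming the
# server returns its modules for new purposes.

class ResourcePool:
    def __init__(self, modules):
        self.free = set(modules)
        self.servers = {}  # server name -> set of assigned modules

    def provision(self, name, count):
        if len(self.free) < count:
            raise RuntimeError("not enough free modules")
        assigned = {self.free.pop() for _ in range(count)}
        self.servers[name] = assigned
        return assigned

    def reclaim(self, name):
        # Return the server's modules to the pool for reuse.
        self.free |= self.servers.pop(name)

pool = ResourcePool({"module-0", "module-1", "module-2", "module-3"})
pool.provision("web-server", 2)
pool.reclaim("web-server")
print(len(pool.free))  # -> 4
```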
Abstract:
The present invention provides a computer-readable medium and system for selecting a set of n-grams for indexing string data in a DBMS system. Aspects of the invention include providing a set of candidate n-grams, each n-gram comprising a sequence of characters; identifying sample queries having character strings containing the candidate n-grams; and, based on the set of candidate n-grams, the sample queries, database records, and an n-gram space constraint, automatically selecting, within the space constraint, a minimal set of n-grams from the set of candidate n-grams that minimizes the number of false hits that would have occurred had the sample queries been executed against the database records.
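To make the false-hit objective concrete, the sketch below uses a simple greedy selection rule under a toy space budget. The greedy rule, the cost model, and the tiny record/query sets are assumptions for illustration, not the patented selection method: for each sample query, the candidate set is the intersection of the posting lists of the selected n-grams it contains, and a false hit is a candidate record that does not actually contain the query string.

```python
# Greedy sketch of n-gram selection under a space budget (assumed strategy,
# not the patented algorithm). False hits are candidate records returned by
# the n-gram index that do not really contain the query string.

RECORDS = ["alpha", "alphabet", "beta", "better", "abet"]
QUERIES = ["bet", "alpha"]
CANDIDATE_NGRAMS = ["al", "ph", "be", "et", "ab"]
SPACE_BUDGET = 3  # maximum number of n-grams to keep (toy space constraint)

def postings(ngram):
    return {i for i, r in enumerate(RECORDS) if ngram in r}

def false_hits(selected):
    total = 0
    for q in QUERIES:
        # Candidates: records matching every selected n-gram contained in q.
        grams = [g for g in selected if g in q]
        cand = set(range(len(RECORDS)))
        for g in grams:
            cand &= postings(g)
        total += sum(1 for i in cand if q not in RECORDS[i])
    return total

selected = []
while len(selected) < SPACE_BUDGET:
    # Pick the n-gram whose addition removes the most false hits.
    best = min((g for g in CANDIDATE_NGRAMS if g not in selected),
               key=lambda g: false_hits(selected + [g]))
    if false_hits(selected + [best]) >= false_hits(selected):
        break  # no further improvement
    selected.append(best)

print(selected, false_hits(selected))  # -> ['al', 'be'] 0
```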
Abstract:
There is provided a proactive routing system and method. In some embodiments, the method includes determining slack for a net in a netlist, applying a routing condition to the net, calculating an extra delay related to the routing condition, determining a criticality of the net considering the extra delay and the determined slack, and setting a soft constraint based at least partially on the criticality.
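A minimal sketch of that flow, under assumed formulas, appears below: a routing condition adds extra delay to a net, criticality is judged from the slack remaining after that extra delay, and a soft constraint is recorded for nets that become critical. The per-condition delay values, the criticality threshold, and the field names are illustrative, not the patented method.

```python
# Sketch under assumed formulas: slack = required - arrival; a net is treated
# as critical if slack minus the extra delay of a routing condition falls
# below a threshold, in which case a soft (preferred, violable) constraint
# is attached to the net.
from dataclasses import dataclass

@dataclass
class Net:
    name: str
    required_time: float  # ns
    arrival_time: float   # ns
    soft_constraint: str = ""

def slack(net: Net) -> float:
    return net.required_time - net.arrival_time

def extra_delay(routing_condition: str) -> float:
    # Assumed per-condition delay penalties (ns).
    return {"long_detour": 0.30, "narrow_wire": 0.15}.get(routing_condition, 0.0)

def apply_condition(net: Net, condition: str, threshold: float = 0.1):
    remaining = slack(net) - extra_delay(condition)
    critical = remaining < threshold
    if critical:
        # Soft constraint: preferred by the router, but may still be violated.
        net.soft_constraint = f"avoid:{condition}"
    return remaining, critical

net = Net("n42", required_time=5.0, arrival_time=4.7)  # about 0.3 ns of slack
print(apply_condition(net, "long_detour"))  # -> (approximately 0.0, True)
```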
Abstract:
Routing systems and methods are provided having various strategies for optimizing and evaluating possible routes for netlist connections. In one embodiment, a data structure or matrix provides cost-related data weighted to evaluate the impact a proposed connection or segment will have upon an attribute of interest such as, for example, speed, manufacturability, or noise tolerance. This cost information can be related to terrain costs as well as shape costs to provide multidimensional cost information for connections. Processing such multidimensional cost data is made more efficient with an additive process that is less demanding than a computationally intensive iterative multiplication process. Various methods are also disclosed for shifting and adjusting routing grids to improve use of available space or reduce run time in routing. In another embodiment, a parallel processing scheme is used to process multiple regions on multiple processors simultaneously without creating conflicts that could arise, for example, when two processors try to route a trace on the same gridpoint.
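The additive cost idea can be sketched as follows: terrain costs and shape costs are kept as separate per-gridpoint layers and combined by weighted addition along a candidate route, rather than by repeated multiplication across attributes. The grids, weights, and candidate routes below are assumptions for illustration only.

```python
# Hedged sketch of additive cost combination over two cost layers.
terrain_cost = [
    [1, 1, 4],
    [1, 9, 4],
    [1, 1, 1],
]
shape_cost = [    # e.g. bend/via penalties at each gridpoint (assumed values)
    [0, 2, 0],
    [0, 0, 0],
    [2, 0, 0],
]

def path_cost(path, w_terrain=1.0, w_shape=1.0):
    # Additive, weighted combination of the two cost layers along the path.
    return sum(w_terrain * terrain_cost[r][c] + w_shape * shape_cost[r][c]
               for r, c in path)

# Two candidate routes from (0,0) to (2,2); the cheaper one is preferred.
route_a = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # skirts the high-cost cell
route_b = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]  # crosses the 9-cost cell
print(path_cost(route_a), path_cost(route_b))       # -> 7.0 15.0
```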