Abstract:
Techniques for receiving a packet at a first packet forwarding device in a stack of packet forwarding devices, providing a port aggregation table having a plurality of entries, wherein at least one entry identifies a plurality of ports associated with at least two packet forwarding devices in the stack, and using the packet and the port aggregation table to select a port of a packet forwarding device in the stack for sending the packet to a device external to the stack.
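A minimal sketch of the kind of lookup this abstract describes, under assumed names: the table entry, the (device, port) member layout, and the header hash below are illustrative inventions, not details taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical aggregation-table entry: one logical link whose member ports
 * may live on different packet forwarding devices in the stack. */
struct agg_member { uint8_t device_id; uint8_t port_id; };
struct agg_entry  { int num_members; struct agg_member members[8]; };

/* Toy flow hash; real hardware would hash selected header fields. */
static uint32_t flow_hash(uint32_t src_ip, uint32_t dst_ip,
                          uint16_t sport, uint16_t dport)
{
    uint32_t h = src_ip ^ dst_ip ^ ((uint32_t)sport << 16 | dport);
    h ^= h >> 16;
    return h * 0x9e3779b1u;                 /* mix so low bits vary */
}

/* Use the packet (its hash) and the table entry to pick one member port. */
static struct agg_member select_port(const struct agg_entry *e,
                                     uint32_t src_ip, uint32_t dst_ip,
                                     uint16_t sport, uint16_t dport)
{
    uint32_t h = flow_hash(src_ip, dst_ip, sport, dport);
    return e->members[h % (uint32_t)e->num_members];
}

int main(void)
{
    /* One entry whose member ports span two devices in the stack. */
    struct agg_entry e = { 4, { {0, 1}, {0, 2}, {1, 1}, {1, 2} } };
    struct agg_member m = select_port(&e, 0x0a000001, 0x0a000002, 4096, 80);
    printf("forward via device %u port %u\n",
           (unsigned)m.device_id, (unsigned)m.port_id);
    return 0;
}
```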
Abstract:
According to embodiments of the present invention, an adaptable traffic control system, method, article of manufacture, and apparatus receive a user-programmed value representing an amount of target traffic allowed through a connectivity device port and a user-programmed value representing a time interval during which to receive the allowed amount of target traffic. The two values define a percentage of target traffic allowed through the port for a particular port speed. One embodiment determines that port speed changed by a factor of N, scales the time interval by a factor of 1/N, and based on the allowed amount of target traffic and the scaled time interval, drops incoming target traffic when the received percentage of incoming target traffic is equal to (or greater than) the defined percentage of target traffic allowed through the port.
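A sketch of the scaling rule just described, with assumed field names: `allowed_bytes` and `interval_us` stand in for the two user-programmed values, and when the port speed changes by a factor of N the interval is divided by N so the allowed percentage of line rate stays the same.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* User-programmed policy (hypothetical field names). */
struct traffic_policy {
    uint64_t allowed_bytes;   /* amount of target traffic allowed ...      */
    uint64_t interval_us;     /* ... per this time interval (microseconds) */
};

/* Port speed changed by a factor of N: scale the interval by 1/N so that
 * allowed_bytes / interval_us still expresses the same share of line rate. */
static void rescale_for_speed_change(struct traffic_policy *p, unsigned n)
{
    p->interval_us /= n;
}

/* Drop once traffic received in the current interval reaches the budget. */
static bool should_drop(const struct traffic_policy *p,
                        uint64_t bytes_seen_this_interval, uint64_t pkt_len)
{
    return bytes_seen_this_interval + pkt_len >= p->allowed_bytes;
}

int main(void)
{
    /* 125000 bytes per 1000 us is 100% of a 1 Gb/s port. */
    struct traffic_policy p = { .allowed_bytes = 125000, .interval_us = 1000 };
    rescale_for_speed_change(&p, 10);       /* port went from 1G to 10G */
    printf("new interval: %llu us, drop? %d\n",
           (unsigned long long)p.interval_us,
           (int)should_drop(&p, 120000, 9000));
    return 0;
}
```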
Abstract:
A method according to one embodiment may include transmitting a plurality of packets through control pipeline circuitry of an integrated circuit of a switch. The control pipeline circuitry may be capable of making a plurality of memory requests to memory of the switch in response to the plurality of packets. The method may further comprise staggering the plurality of memory requests so that each of the plurality of memory requests occurs during a different one of a plurality of time slots. Of course, many alternatives, variations, and modifications are possible without departing from this embodiment.
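A minimal sketch of the staggering idea, assuming (purely for illustration) one memory request per pipeline stage and a repeating window of time slots in which stage s may only issue during slot s, so no two requests land in the same slot.

```c
#include <stdio.h>

#define NUM_STAGES 4   /* control-pipeline stages that each issue a memory request */

/* Stagger the requests: stage s is only allowed to issue in slot s of a
 * repeating window of NUM_STAGES slots, so requests never collide on memory. */
static int slot_for_stage(int stage)        { return stage; }
static int may_issue(int stage, long cycle) { return (cycle % NUM_STAGES) == slot_for_stage(stage); }

int main(void)
{
    for (long cycle = 0; cycle < 8; cycle++)
        for (int s = 0; s < NUM_STAGES; s++)
            if (may_issue(s, cycle))
                printf("cycle %ld: stage %d issues its memory request\n", cycle, s);
    return 0;
}
```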
Abstract:
The inventive subject matter provides various apparatus and methods to perform high-speed memory read accesses on dynamic random access memories (“DRAMs”) for read-intensive memory applications. In an embodiment, at least one input/output (“I/O”) channel of a memory controller is coupled to a pair of DRAM chips via a common address/control bus and via two independent data buses. Each DRAM chip may include multiple internal memory banks. In an embodiment, identical data is stored in each of the DRAM banks controlled by a given channel. In another embodiment, data is substantially uniformly distributed in the DRAM banks controlled by a given channel, and read accesses are uniformly distributed to all of such banks. Embodiments may achieve 100% read utilization of the I/O channel by overlapping read accesses from alternate banks from the DRAM pair.
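A toy scheduling model of the overlap idea, under stated assumptions: two mirrored DRAM chips, each busy for T_BUSY cycles per read, served by a controller that alternates reads between them so a read can start every cycle and the shared channel stays fully utilized. The cycle counts are illustrative only.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model: two mirrored DRAM chips share one address/control bus but have
 * independent data buses.  Each read keeps a chip busy for T_BUSY cycles, yet
 * by alternating between the chips the controller can launch a read on every
 * cycle, keeping the channel at 100% read utilization. */
#define T_BUSY 2

int main(void)
{
    int64_t busy_until[2] = { 0, 0 };      /* next cycle each chip is free */
    for (int64_t cycle = 0, req = 0; req < 8; cycle++) {
        int chip = (int)(req & 1);         /* alternate between the mirrored chips */
        if (cycle >= busy_until[chip]) {
            printf("cycle %lld: read %lld issued to chip %d\n",
                   (long long)cycle, (long long)req, chip);
            busy_until[chip] = cycle + T_BUSY;
            req++;
        }
    }
    return 0;
}
```

With T_BUSY = 2 and two chips, the loop issues one read per cycle, which is the 100% channel utilization the abstract refers to.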
Abstract:
A technique for promoting determinism among bus agents within a point-to-point (PtP) network. More particularly, embodiments of the invention relate to techniques to compensate for link latency, data skew, and clock shift within a PtP network of common system interface (CSI) bus agents.
Abstract:
A method according to one embodiment may include communicating with at least one external device using at least one port. The method may also include storing a multicast data packet and a master device vector in memory. The method may also include de-queueing the master device vector from memory, generating at least one additional device vector based at least in part on the master device vector, and transmitting the multicast data packet and at least one additional device vector to at least one external device via at least one port. Of course, many alternatives, variations, and modifications are possible without departing from this embodiment.
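A sketch of one plausible reading of the device-vector fan-out, with assumed representation: the master device vector is a bitmask with one bit per external destination, and each additional device vector selects a single destination. Neither the bitmask layout nor the function names come from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: one bit per external device that should receive
 * the multicast packet. */
typedef uint32_t device_vector_t;

/* Generate one additional device vector per set bit of the de-queued master
 * vector and "transmit" the packet toward that device. */
static void fan_out(device_vector_t master, const char *packet)
{
    while (master) {
        int device = __builtin_ctz(master);            /* next destination device */
        device_vector_t vec = (device_vector_t)1u << device;
        printf("send \"%s\" with device vector 0x%08x (device %d)\n",
               packet, vec, device);
        master &= ~vec;                                 /* bit handled, clear it */
    }
}

int main(void)
{
    device_vector_t master = 0x00000016;   /* devices 1, 2 and 4 */
    fan_out(master, "hello");              /* packet + master vector de-queued from memory */
    return 0;
}
```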
Abstract:
A method of packet tracing includes triggering tracer devices. Each tracer device corresponds to an associated processing stage within a packet processor. The method also includes storing an indication after a packet completes an associated processing stage. The method may further include sending contents of a register to an application.
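A minimal sketch of one way such tracing could look, assuming (for illustration only) that each tracer device sets its own bit in a shared register when the packet completes its stage, and that the register contents are then handed to an application.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_STAGES 5

static uint32_t trace_reg;              /* one tracer bit per processing stage */

/* Called by a tracer device after the packet completes its stage. */
static void stage_done(int stage)       { trace_reg |= 1u << stage; }

/* Hand the register contents to an application (here: just print them). */
static void report_to_app(void)         { printf("trace register = 0x%08x\n", trace_reg); }

int main(void)
{
    for (int stage = 0; stage < NUM_STAGES; stage++)
        stage_done(stage);              /* packet made it through every stage */
    report_to_app();                    /* expected: 0x0000001f               */
    return 0;
}
```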
Abstract:
In one embodiment, the present invention includes a method for associating a first plurality of current sources with a first tap coefficient and associating a second plurality of current sources with a second tap coefficient. A first plurality of output switches coupled to the first plurality of current sources is gated using the first tap coefficient and a second plurality of output switches coupled to the second plurality of current sources is gated using the second tap coefficient. In such manner, the first and second plurality of equalized current sources may be driven onto an interconnect. Other embodiments are described and claimed.
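A small numeric model of the current-summing idea, under assumptions the abstract does not state: the two tap coefficients are treated as main and post-cursor weights of a two-tap equalizer, each coefficient enables that many unit current sources, and the interconnect sees the signed sum of whatever is switched on. All values are illustrative.

```c
#include <stdio.h>

/* Toy model of a two-tap current-mode equalizer: each tap coefficient gates
 * its own group of unit current sources onto the interconnect, and the line
 * sees the signed sum of whatever is switched on. */
#define I_UNIT_UA 250.0          /* current per unit source, microamps */

static double tap_current(int data_bit, int units_enabled)
{
    /* data_bit selects polarity; units_enabled is set by the tap coefficient */
    return (data_bit ? +1.0 : -1.0) * units_enabled * I_UNIT_UA;
}

int main(void)
{
    int coeff_main = 6, coeff_post = 2;   /* tap coefficients (in unit sources) */
    int d_curr = 1, d_prev = 0;           /* current and previous data bits      */

    /* Main tap drives the current bit; the post-cursor tap subtracts the
     * previous bit, giving de-emphasis on repeated bits and full swing on
     * transitions. */
    double i_out = tap_current(d_curr, coeff_main) - tap_current(d_prev, coeff_post);
    printf("equalized output current: %.1f uA\n", i_out);
    return 0;
}
```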