Abstract:
In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory to store a cache block being cached or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled.
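As a rough illustrative sketch only, the dual-use data entry described above might be modeled as follows; the structure layout, field names, entry count, and first-free allocation policy are assumptions chosen for illustration and are not taken from the embodiment.

#include <stddef.h>
#include <stdint.h>

#define CACHE_BLOCK_BYTES 64   /* assumed cache block size */
#define NUM_DATA_ENTRIES  256  /* assumed data memory depth */

/* How a given data entry is currently being used. */
enum entry_use {
    ENTRY_FREE,    /* not allocated */
    ENTRY_CACHED,  /* holds a cache block being cached */
    ENTRY_STAGED   /* holds data staged for retransmission, not cached */
};

struct data_entry {
    enum entry_use use;
    uint64_t       tag;   /* block address or retransmission identifier */
    uint8_t        data[CACHE_BLOCK_BYTES];
};

struct data_memory {
    struct data_entry entries[NUM_DATA_ENTRIES];
};

/* Dynamically allocate any free entry for either use; returns NULL when
 * no entry is free (a real cache control unit might evict instead). */
static struct data_entry *alloc_entry(struct data_memory *m,
                                      enum entry_use use, uint64_t tag)
{
    for (size_t i = 0; i < NUM_DATA_ENTRIES; i++) {
        if (m->entries[i].use == ENTRY_FREE) {
            m->entries[i].use = use;
            m->entries[i].tag = tag;
            return &m->entries[i];
        }
    }
    return NULL;
}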
Abstract:
In an embodiment, a timer unit may be provided that may be programmed to a selected time interval, or wakeup interval. A processor may execute a wait for event instruction, and enter a low power state for the thread that includes the instruction. The timer unit may signal a timer event at the expiration of the wakeup interval, and the processor may exit the low power state in response to the timer event. The thread may continue executing with the instruction following the wait for event instruction. In an embodiment, the processor/timer unit may be used to implement a power-managed lock acquisition mechanism, in which the processor is awakened a number of times to check the lock, executing the wait for event instruction each time the lock is not free; after those attempts, the thread may block until the lock is free.
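A minimal sketch of such a power-managed lock acquisition loop is shown below; the primitives program_wakeup_timer(), wait_for_event(), and block_until_lock_free(), the 50-microsecond interval, and the retry count are hypothetical placeholders, not details of the embodiment.

#include <stdatomic.h>

/* Hypothetical primitives standing in for the timer unit and the
 * wait for event instruction described above. */
extern void program_wakeup_timer(unsigned microseconds);  /* select the wakeup interval */
extern void wait_for_event(void);                         /* low power state until a timer event */
extern void block_until_lock_free(atomic_flag *lock);     /* OS-level block on the lock */

#define MAX_WAKEUPS 16  /* assumed number of polling wakeups before blocking */

void acquire_lock(atomic_flag *lock)
{
    program_wakeup_timer(50);  /* assumed 50 us wakeup interval */

    for (unsigned attempts = 0; attempts < MAX_WAKEUPS; attempts++) {
        if (!atomic_flag_test_and_set(lock))
            return;            /* lock acquired */
        wait_for_event();      /* sleep until the timer event wakes the thread */
    }

    /* Still not free after the allotted wakeups: block until it is. */
    do {
        block_until_lock_free(lock);
    } while (atomic_flag_test_and_set(lock));
}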
Abstract:
In one embodiment, a switch is configured to be coupled to an interconnect. The switch comprises a plurality of storage locations and an arbiter control circuit coupled to the plurality of storage locations. The plurality of storage locations are configured to store a plurality of requests transmitted by a plurality of agents. The arbiter control circuit is configured to arbitrate among the plurality of requests stored in the plurality of storage locations. A selected request is the winner of the arbitration, and the switch is configured to transmit the selected request from one of the plurality of storage locations onto the interconnect. In another embodiment, a system comprises a plurality of agents, an interconnect, and the switch coupled to the plurality of agents and the interconnect. In another embodiment, a method is contemplated.
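The abstract does not specify an arbitration algorithm; the sketch below assumes a simple round-robin policy over the buffered requests, with the slot count, structure names, and drain behavior chosen purely for illustration.

#include <stdbool.h>
#include <stddef.h>

#define NUM_SLOTS 8  /* assumed number of storage locations */

struct request {
    bool valid;      /* slot holds a request awaiting arbitration */
    int  agent_id;   /* agent that transmitted the request */
    /* ... command, address, and other request fields ... */
};

struct switch_arbiter {
    struct request slots[NUM_SLOTS];
    size_t         last_winner;  /* round-robin pointer */
};

/* Select the next valid request after the previous winner; the selected
 * request is drained from its storage location onto the interconnect.
 * Returns the winning slot index, or -1 if no request is pending. */
static int arbitrate(struct switch_arbiter *a)
{
    for (size_t i = 1; i <= NUM_SLOTS; i++) {
        size_t idx = (a->last_winner + i) % NUM_SLOTS;
        if (a->slots[idx].valid) {
            a->last_winner = idx;
            a->slots[idx].valid = false;
            return (int)idx;
        }
    }
    return -1;
}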
Abstract:
A method and apparatus for out-of-order processing of packets is described. In one embodiment, the method includes receiving a packet, which is in an original position relative to other packets; and attempting, by a primary processing unit, to classify and process the packet. The method also includes determining whether the packet was completely classified and processed by the primary processing unit; and upon determining that the primary processing unit completely classified and processed the packet, bypassing a secondary processing unit. The method also includes transmitting the packet onto a network, the packet being transmitted in its original position relative to the other packets.
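The control flow described above can be outlined as follows; the function names and the packet type are hypothetical, and the actual classification logic is not part of the abstract.

#include <stdbool.h>

struct packet;  /* opaque packet handle */

/* Hypothetical hooks for the processing units and the transmit path. */
extern bool primary_classify_and_process(struct packet *p);  /* true if fully handled */
extern void secondary_process(struct packet *p);
extern void transmit_in_original_position(struct packet *p);

void handle_packet(struct packet *p)
{
    /* Attempt full classification and processing in the primary unit. */
    bool complete = primary_classify_and_process(p);

    /* Bypass the secondary unit only when the primary unit finished. */
    if (!complete)
        secondary_process(p);

    /* The packet is transmitted in its original position relative to
     * the other packets. */
    transmit_in_original_position(p);
}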
Abstract:
In one embodiment, an apparatus comprises a first interface circuit, a direct memory access (DMA) controller coupled to the first interface circuit, and a host coupled to the DMA controller. The first interface circuit is configured to communicate on an interface according to a protocol. The host comprises at least one address space mapped, at least in part, to a plurality of memory locations in a memory system of the host. The DMA controller is configured to perform DMA transfers between the first interface circuit and the address space, and the DMA controller is further configured to perform DMA transfers between a first plurality of the plurality of memory locations and a second plurality of the plurality of memory locations.
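One way to picture the two kinds of transfers is a descriptor that names either the interface circuit or a pair of memory locations as endpoints; the descriptor layout and the dma_submit() entry point below are assumptions for illustration only.

#include <stdint.h>

/* Assumed descriptor layout; the embodiment does not define one. */
enum dma_kind {
    DMA_IF_TO_MEM,   /* interface circuit -> host address space */
    DMA_MEM_TO_IF,   /* host address space -> interface circuit */
    DMA_MEM_TO_MEM   /* one set of memory locations -> another */
};

struct dma_descriptor {
    enum dma_kind kind;
    uint64_t      src;     /* source address (unused for DMA_IF_TO_MEM) */
    uint64_t      dst;     /* destination address (unused for DMA_MEM_TO_IF) */
    uint32_t      length;  /* transfer length in bytes */
};

/* Hypothetical controller entry point: queue one descriptor. */
extern int dma_submit(const struct dma_descriptor *desc);

/* A memory-to-memory copy expressed as a DMA transfer. */
int dma_copy(uint64_t dst, uint64_t src, uint32_t len)
{
    struct dma_descriptor d = {
        .kind = DMA_MEM_TO_MEM, .src = src, .dst = dst, .length = len,
    };
    return dma_submit(&d);
}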
Abstract:
A method and apparatus for out-of-order processing of packets using linked lists is described. In one embodiment, the method includes receiving packets in a global order, the packets being designated for different ones of a plurality of reorder contexts. The method also includes storing information regarding each of the packets in a shared reorder buffer. The method also includes, for each of the plurality of reorder contexts, maintaining a reorder context linked list that records the order in which those of the packets that were designated for that reorder context and that are currently stored in the shared reorder buffer were received relative to the global order. The method also includes completing processing of at least certain of the packets out of the global order and retiring at least certain of the packets from the shared reorder buffer out of the global order.
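A compact way to sketch the shared reorder buffer with per-context linked lists is shown below; the buffer depth, context count, and field names are assumptions, and the stored packet payload is elided.

#include <stdbool.h>

#define REORDER_BUFFER_SIZE 64  /* assumed shared buffer depth */
#define NUM_CONTEXTS        8   /* assumed number of reorder contexts */

/* One shared-buffer entry; 'next' links entries of the same reorder
 * context in the order they were received relative to the global order. */
struct rob_entry {
    bool valid;
    bool done;   /* processing complete, eligible to retire */
    int  next;   /* index of the next entry in this context, or -1 */
    /* ... stored packet information ... */
};

struct reorder_state {
    struct rob_entry buf[REORDER_BUFFER_SIZE];
    int head[NUM_CONTEXTS];  /* oldest entry per context, -1 if empty */
    int tail[NUM_CONTEXTS];  /* newest entry per context, -1 if empty */
};

/* Append a newly received packet (already stored in slot 'idx') to the
 * linked list of its designated reorder context. */
static void rob_enqueue(struct reorder_state *s, int ctx, int idx)
{
    s->buf[idx].next = -1;
    if (s->head[ctx] < 0)
        s->head[ctx] = idx;
    else
        s->buf[s->tail[ctx]].next = idx;
    s->tail[ctx] = idx;
}

/* Retire the oldest completed packet of one context; because contexts
 * retire independently, packets leave the buffer out of the global order. */
static int rob_retire(struct reorder_state *s, int ctx)
{
    int idx = s->head[ctx];
    if (idx < 0 || !s->buf[idx].done)
        return -1;
    s->head[ctx] = s->buf[idx].next;
    if (s->head[ctx] < 0)
        s->tail[ctx] = -1;
    s->buf[idx].valid = false;
    return idx;
}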
Abstract:
A distributed arbitration scheme for a network. Ports in a network device determine which port in a set of ports may broadcast a packet onto a bus in the network device. A method of transmitting data between a set of ports sharing a bus in a hub is described. The set of ports includes a first port, and the method comprises the first port receiving a packet, the first port requesting the bus, and, if another port is requesting the bus, the first port transmitting the packet onto the bus if the first port has not transmitted a packet more recently than the other port requesting the bus. A system using two clocks of different speeds in a network device is also described. The hub has at least one port. The port has an internal data path having a first width. A bus is coupled to the port. The bus has a data path that has a second width. The second width is greater than the first width. The hub includes a first clock that has a first frequency and is coupled to circuitry in the port for clocking internal data transfers. The hub includes a second clock that has a second frequency less than the first frequency, and the second clock is coupled to circuitry in the port for qualifying data transfers with the bus.
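Reading "has not transmitted a packet more recently than the other requesting port" as a least-recently-transmitted rule, each port's local decision might look like the sketch below; the timestamps, port count, and the tie-break by port index are assumptions, not details of the scheme.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS 8  /* assumed number of ports sharing the bus */

struct port_state {
    bool     requesting;    /* port has a packet and is requesting the bus */
    uint64_t last_tx_time;  /* time of this port's most recent transmission */
};

/* Distributed decision as seen by port 'me': transmit only if no other
 * requesting port has transmitted less recently than this port. */
static bool may_transmit(const struct port_state ports[NUM_PORTS], int me)
{
    if (!ports[me].requesting)
        return false;
    for (int p = 0; p < NUM_PORTS; p++) {
        if (p == me || !ports[p].requesting)
            continue;
        if (ports[p].last_tx_time < ports[me].last_tx_time)
            return false;  /* another requester has waited longer */
        if (ports[p].last_tx_time == ports[me].last_tx_time && p < me)
            return false;  /* assumed tie-break by port index */
    }
    return true;
}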
Abstract:
A method for tuning an adaptive equalizer in order to receive digital signals from a transmission medium uses both coarse and fine tuning methods to adaptively equalize a signal received from the transmission medium. The coarse tuning method adjusts an equalizer such that the post-equalized signal starts to resemble a known data pattern, such as an MLT-3 data pattern. The coarse tuning method monitors and corrects for several conditions: illegal transitions, over-equalization, statistical data pattern anomalies, and saturation. Fine tuning methods operate concurrently with the coarse tuning methods and take over from the point at which the coarse tuning methods stop being effective. Additionally, the fine tuning methods hold the waveform locked in. In addition to coarse tuning and fine tuning of the equalizer, one embodiment also adjusts the gain of the overall signal such that the post-equalized signal is always at a certain amplitude. One embodiment corrects for offsets that may become superimposed on the signal as it passes through the receive channel and could otherwise lead to erroneous bit decisions. The method is applicable to a variety of data communication standards, including 100 Base-X, FDDI, and ATM-155.
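A coarse tuning step of the kind described (count illegal MLT-3 transitions over a window of sliced samples and nudge the equalizer) might be sketched as below; the threshold, window handling, and the increase_equalization()/decrease_equalization() knobs are hypothetical placeholders rather than the patented method.

#include <stdbool.h>

/* MLT-3 signal levels after slicing. */
enum mlt3_level { MLT3_NEG = -1, MLT3_ZERO = 0, MLT3_POS = 1 };

#define ILLEGAL_LIMIT 16  /* assumed threshold before adjusting the equalizer */

/* Hypothetical knobs on the analog front end. */
extern void increase_equalization(void);
extern void decrease_equalization(void);

/* One coarse tuning step: a jump directly between -1 and +1 skips the 0
 * level and is never legal in MLT-3, so an excess of such transitions
 * suggests under-equalization; an over-equalization indication from
 * elsewhere in the monitor backs the setting off. */
void coarse_tune_step(const enum mlt3_level *samples, int n, bool over_equalized)
{
    int illegal = 0;
    for (int i = 1; i < n; i++) {
        if ((samples[i - 1] == MLT3_NEG && samples[i] == MLT3_POS) ||
            (samples[i - 1] == MLT3_POS && samples[i] == MLT3_NEG))
            illegal++;
    }
    if (illegal > ILLEGAL_LIMIT)
        increase_equalization();   /* under-equalized: add high-frequency boost */
    else if (over_equalized)
        decrease_equalization();   /* detected over-equalization: back off */
}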