Abstract:
In one embodiment, an interrupt controller may implement an interrupt distribution scheme for distributing interrupts among multiple processors. The scheme may take various aspects of processor state into account in determining which processor should receive a given interrupt. For example, the processor state may include whether or not the processor is in a sleep state, whether or not interrupts are enabled, whether or not the processor has responded to previous interrupts, etc. The interrupt controller may implement timeout mechanisms to detect that an interrupt is being delayed (e.g., after being offered to a processor). The interrupt may be re-evaluated at the expiration of a timeout, and potentially offered to another processor. The interrupt controller may be configured to automatically, and atomically, mask an interrupt in response to delivering an interrupt vector for the interrupt to a responding processor.
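A minimal software sketch of one possible distribution policy is shown below; the structures, field names, and scoring weights are assumptions made for illustration, not an implementation specified by the abstract. It scores candidate processors on their state, preferring processors that are awake, have interrupts enabled, and have the fewest unanswered offers:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-processor state tracked by the interrupt controller. */
struct cpu_state {
    bool     asleep;          /* processor is in a sleep state                 */
    bool     ints_enabled;    /* interrupts currently enabled on the processor */
    uint32_t pending_offers;  /* interrupts offered but not yet responded to   */
};

/* Pick a target processor: prefer awake processors with interrupts enabled
 * and few outstanding offers.  The weights are arbitrary. */
static int pick_target(const struct cpu_state *cpu, int ncpus)
{
    int      best = -1;
    uint32_t best_score = UINT32_MAX;

    for (int i = 0; i < ncpus; i++) {
        uint32_t score = cpu[i].pending_offers;
        if (cpu[i].asleep)        score += 100;  /* waking a processor is costly */
        if (!cpu[i].ints_enabled) score += 10;   /* delivery would be delayed    */
        if (score < best_score) {
            best_score = score;
            best = i;
        }
    }
    return best;
}
```

On expiration of a timeout, the same selection could simply be re-run while excluding the processor that failed to respond, and the interrupt source could be masked in the same step in which its vector is delivered to the responding processor.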
Abstract:
In one embodiment, an apparatus comprises a first interface circuit, a direct memory access (DMA) controller coupled to the first interface circuit, and a host coupled to the DMA controller. The first interface circuit is configured to communicate on an interface according to a protocol. The host comprises at least one address space mapped, at least in part, to a plurality of memory locations in a memory system of the host. The DMA controller is configured to perform DMA transfers between the first interface circuit and the address space, and the DMA controller is further configured to perform DMA transfers between a first plurality of the plurality of memory locations and a second plurality of the plurality of memory locations.
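As an illustration of the two kinds of transfers, a hypothetical descriptor format (all names and fields below are assumptions, not taken from the abstract) might distinguish interface-to-memory, memory-to-interface, and memory-to-memory operations:

```c
#include <stdint.h>

/* Hypothetical DMA descriptor.  The same controller can move data between the
 * interface circuit and the host address space, or copy between two sets of
 * memory locations within the host address space. */
enum dma_dir {
    DMA_IF_TO_MEM,   /* received data from the interface -> host memory */
    DMA_MEM_TO_IF,   /* host memory -> interface for transmission       */
    DMA_MEM_TO_MEM   /* copy from one memory region to another          */
};

struct dma_desc {
    enum dma_dir dir;
    uint64_t     src;    /* source address (unused for DMA_IF_TO_MEM)      */
    uint64_t     dst;    /* destination address (unused for DMA_MEM_TO_IF) */
    uint32_t     len;    /* transfer length in bytes                       */
    uint32_t     flags;  /* e.g. interrupt on completion                   */
};
```

Supporting memory-to-memory descriptors lets the same DMA engine double as a copy accelerator for the host.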
Abstract:
In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory either to store a cache block being cached or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled.
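One way to picture the dynamic allocation is a per-entry use tag, sketched below with hypothetical names and an assumed 64-byte block size:

```c
#include <stdint.h>

#define CACHE_BLOCK_SIZE 64   /* assumed block size, for illustration only */

/* Hypothetical state of one data entry: it may hold a normal cache block, or
 * stage data that is not being cached on its way back out to the interface. */
enum entry_use { ENTRY_FREE, ENTRY_CACHE_BLOCK, ENTRY_STAGING };

struct data_entry {
    enum entry_use use;
    uint64_t       tag;                     /* block address when caching */
    uint8_t        data[CACHE_BLOCK_SIZE];
};

/* Dynamically repurpose an entry as a staging buffer: any free entry will do;
 * a real design might instead evict a cache block when none is free. */
static struct data_entry *alloc_for_staging(struct data_entry *entries, int n)
{
    for (int i = 0; i < n; i++) {
        if (entries[i].use == ENTRY_FREE) {
            entries[i].use = ENTRY_STAGING;
            return &entries[i];
        }
    }
    return 0;   /* no free entry available */
}
```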
Abstract:
In one embodiment, a switch is configured to be coupled to an interconnect. The switch comprises a plurality of storage locations and an arbiter control circuit coupled to the plurality of storage locations. The plurality of storage locations are configured to store a plurality of requests transmitted by a plurality of agents. The arbiter control circuit is configured to arbitrate among the plurality of requests stored in the plurality of storage locations. A selected request is the winner of the arbitration, and the switch is configured to transmit the selected request from one of the plurality of storage locations onto the interconnect. In another embodiment, a system comprises a plurality of agents, an interconnect, and the switch coupled to the plurality of agents and the interconnect. In another embodiment, a method is contemplated.
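The abstract does not name an arbitration policy; the sketch below assumes a simple round-robin scan over the storage locations (names and slot count are illustrative):

```c
#include <stdbool.h>

#define NUM_SLOTS 8   /* assumed number of request storage locations */

struct request {
    bool valid;        /* a request from one of the agents is stored here */
    int  agent_id;     /* which agent transmitted the request             */
    /* ... request payload (address, command, etc.) ...                   */
};

/* Round-robin arbitration: scan the storage locations starting just past the
 * previous winner and select the first valid request; the winner is the
 * request the switch drives onto the interconnect. */
static int arbitrate(const struct request *slot, int *last_winner)
{
    for (int i = 1; i <= NUM_SLOTS; i++) {
        int idx = (*last_winner + i) % NUM_SLOTS;
        if (slot[idx].valid) {
            *last_winner = idx;
            return idx;
        }
    }
    return -1;   /* no request stored; nothing to transmit */
}
```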
Abstract:
In one embodiment, a system comprises at least one processor and a peripheral interface controller coupled to the processor. The peripheral interface controller is further coupled to receive transactions from a peripheral interface and is configured to accumulate freed credits for a given transaction type of a plurality of transaction types, the freed credits not yet having been returned to a transmitter on the peripheral interface. The peripheral interface controller is also configured to cause transmission of a flow control update transaction on the peripheral interface responsive to a number of the freed credits exceeding a threshold amount that is less than a total number of credits allocated to the given transaction type.
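A minimal sketch of the batched credit return follows; the structure and names are assumptions, and the threshold value would be design-specific:

```c
#include <stdbool.h>

/* Hypothetical per-transaction-type credit tracking.  Freed credits are
 * accumulated rather than returned one at a time; a flow control update is
 * triggered only when the batch exceeds a threshold that is smaller than the
 * total credit allocation for the transaction type. */
struct credit_pool {
    unsigned total;       /* total credits allocated to this transaction type  */
    unsigned freed;       /* freed credits not yet returned to the transmitter */
    unsigned threshold;   /* send an update once freed exceeds this (< total)  */
};

/* Called when received transactions are drained, freeing 'n' credits.
 * Returns true when a flow control update transaction should be sent. */
static bool credits_freed(struct credit_pool *pool, unsigned n)
{
    pool->freed += n;
    if (pool->freed > pool->threshold) {
        pool->freed = 0;   /* the update carries the accumulated credits */
        return true;
    }
    return false;
}
```

Returning credits in batches rather than one at a time reduces the number of flow control update transactions consuming bandwidth on the peripheral interface.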
Abstract:
A method for tuning an adaptive equalizer in order to receive digital signals from a transmission medium uses both coarse and fine tuning methods to adaptively equalize a signal received from the transmission medium. The coarse tuning method adjusts an equalizer such that the post-equalized signal starts to resemble a known data pattern, such as an MLT3 data pattern. The coarse tuning method monitors and corrects for several conditions: illegal transitions, over-equalization, statistical data pattern anomalies, and saturation. Fine tuning methods operate concurrently with the coarse tuning methods and take over from the point at which the coarse tuning methods stop being effective. Additionally, the fine tuning methods hold the waveform locked in. In addition to coarse tuning and fine tuning of the equalizer, the present invention also adjusts the gain of the overall signal such that the post-equalized signal always has a certain amplitude. It also corrects for offsets that may become superimposed on the signal as it passes through the receive channel and that may lead to erroneous bit decisions. The method is applicable to a variety of data communication standards including 100 Base-X, FDDI and ATM-155.
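As one illustration of the coarse-tuning idea, the sketch below counts illegal MLT3 transitions in a block of decoded symbols and nudges the equalizer setting when any are seen; the symbol representation, the direction of the adjustment, and the omission of the other checks are all assumptions made for brevity:

```c
/* Hypothetical coarse-tuning step.  MLT3 symbols take the levels -1, 0, +1,
 * and a legal transition always passes through 0; a decoded jump straight
 * from +1 to -1 (or -1 to +1) is an illegal transition.  Here such errors
 * nudge the equalizer boost upward; checks for over-equalization, saturation
 * and statistical anomalies would adjust it in other ways. */
static int coarse_tune_step(const int *symbols, int n, int boost)
{
    int illegal = 0;

    for (int i = 1; i < n; i++) {
        if (symbols[i] * symbols[i - 1] == -1)   /* +1 <-> -1 without a 0 */
            illegal++;
    }

    if (illegal > 0)
        boost++;   /* assumed response: increase high-frequency boost */

    return boost;
}
```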
Abstract:
In an embodiment, a timer unit may be provided that may be programmed to a selected time interval, or wakeup interval. A processor may execute a wait for event instruction, and enter a low power state for the thread that includes the instruction. The timer unit may signal a timer event at the expiration of the wakeup interval, and the processor may exit the low power state in response to the timer event. The thread may continue executing with the instruction following the wait for event instruction. In an embodiment, the processor and timer unit may be used to implement a power-managed lock acquisition mechanism, in which the processor is awakened a number of times to check the lock, executing the wait for event instruction again if the lock is not free; after those attempts, the thread may block until the lock is free.
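A rough sketch of the power-managed lock acquisition loop is given below; program_wakeup_timer() and wait_for_event() stand in for the timer-unit programming and the wait for event instruction, and are hypothetical primitives rather than an API defined by the abstract:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical primitives standing in for hardware facilities: programming
 * the timer unit's wakeup interval and executing the wait for event
 * instruction.  Neither is an API defined by the abstract. */
void program_wakeup_timer(unsigned cycles);
void wait_for_event(void);

#define MAX_WAKEUPS     8      /* illustrative number of polling attempts */
#define WAKEUP_INTERVAL 1000   /* illustrative wakeup interval, in cycles */

/* Power-managed lock acquisition: poll the lock a bounded number of times,
 * entering a low power state between polls; if the lock is still not free,
 * the caller falls back to blocking the thread. */
static bool try_lock_power_managed(atomic_flag *lock)
{
    for (int i = 0; i < MAX_WAKEUPS; i++) {
        if (!atomic_flag_test_and_set(lock))
            return true;                  /* lock acquired */
        program_wakeup_timer(WAKEUP_INTERVAL);
        wait_for_event();                 /* low power until the timer event */
    }
    return false;   /* still contended; block until the lock is free */
}
```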