Abstract:
Various implementations of the described subject matter associate a second plurality of threads, sorted based on thread priority, with a run queue in a deterministic amount of time. The run queue includes a first plurality of threads, which are likewise sorted based on thread priority. The second plurality of threads is associated with the run queue in a bounded, or deterministic, amount of time that is independent of the number of threads in the second plurality. Thus, the various implementations of the described subject matter allow an operating system to schedule other threads for execution within deterministic/predetermined time parameters.
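The constant-time association can be illustrated with per-priority linked lists that are spliced head to tail. The C sketch below is only an illustration of that idea, not the patented implementation; the fixed NUM_PRIORITIES, the thread_list_t layout, and the splice_group() routine are assumptions introduced here. Because splicing touches only list heads and tails, the cost is bounded by the number of priority levels and is independent of how many threads the second plurality contains.

```c
#include <stddef.h>
#include <stdio.h>

#define NUM_PRIORITIES 8

typedef struct thread {
    int priority;
    struct thread *next;
} thread_t;

typedef struct {
    thread_t *head;
    thread_t *tail;
} thread_list_t;

typedef struct {
    thread_list_t levels[NUM_PRIORITIES];  /* run queue: one list per priority level */
} run_queue_t;

/* Splice an already-sorted group of threads (organized per priority) onto
 * the run queue.  Only a bounded number of pointer updates is performed,
 * regardless of how many threads the group contains. */
static void splice_group(run_queue_t *rq, thread_list_t group[NUM_PRIORITIES])
{
    for (int p = 0; p < NUM_PRIORITIES; p++) {
        if (group[p].head == NULL)
            continue;
        if (rq->levels[p].tail != NULL)
            rq->levels[p].tail->next = group[p].head;
        else
            rq->levels[p].head = group[p].head;
        rq->levels[p].tail = group[p].tail;
        group[p].head = group[p].tail = NULL;
    }
}

int main(void)
{
    run_queue_t rq = {0};
    thread_t a = { .priority = 3 }, b = { .priority = 3, .next = NULL };
    a.next = &b;
    thread_list_t group[NUM_PRIORITIES] = {0};
    group[3].head = &a;
    group[3].tail = &b;
    splice_group(&rq, group);
    printf("priority 3 head thread has priority %d\n", rq.levels[3].head->priority);
    return 0;
}
```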
Abstract:
An out-of-order processor. The processor includes a virtual load/store queue for allocating a plurality of loads and a plurality of stores, wherein more loads and more stores can be accommodated than the actual physical size of the processor's load/store queue; wherein the processor allocates instructions other than loads and stores beyond the actual physical size limitation of the load/store queue; and wherein those other instructions can be dispatched and executed even though intervening loads or stores do not have space in the load/store queue.
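One way to picture the virtual load/store queue is as an allocation counter that is larger than the number of physical slots, with non-memory instructions never gated by it. The C code below is a toy simulation under those assumptions; the sizes, the lsq_model_t structure, and the dispatch() routine are illustrative and do not reflect the actual microarchitecture.

```c
#include <stdbool.h>
#include <stdio.h>

#define PHYSICAL_LSQ_SIZE 4
#define VIRTUAL_LSQ_SIZE  16   /* allocation limit seen by the allocator */

typedef enum { OP_LOAD, OP_STORE, OP_ALU } op_kind_t;

typedef struct {
    int virtual_entries;   /* loads/stores allocated (may exceed physical) */
    int physical_entries;  /* loads/stores that actually hold an LSQ slot  */
} lsq_model_t;

/* Returns true if the instruction may dispatch now. */
static bool dispatch(lsq_model_t *lsq, op_kind_t op)
{
    if (op == OP_ALU)
        return true;  /* non-memory ops are not gated by the LSQ at all */

    if (lsq->virtual_entries >= VIRTUAL_LSQ_SIZE)
        return false; /* even the virtual queue is full: stall allocation */

    lsq->virtual_entries++;
    if (lsq->physical_entries < PHYSICAL_LSQ_SIZE)
        lsq->physical_entries++;   /* grab a real slot if one is free */
    /* otherwise the load/store waits for a physical slot, but younger
     * non-memory instructions behind it can still dispatch and execute */
    return true;
}

int main(void)
{
    lsq_model_t lsq = {0, 0};
    op_kind_t trace[] = { OP_LOAD, OP_STORE, OP_LOAD, OP_LOAD,
                          OP_STORE, OP_ALU, OP_ALU };
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
        printf("op %zu dispatched: %s\n", i,
               dispatch(&lsq, trace[i]) ? "yes" : "no");
    return 0;
}
```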
Abstract:
A processing system comprises processing circuitry (102) and memory circuitry (104) coupled to the processing circuitry (102). The memory circuitry (104) is configurable to maintain at least one queue structure representing a list of data units (e.g., pointers to packets stored in a packet memory) (106). The queue structure is partitioned into two or more blocks (e.g., chunks), wherein at least some of the blocks of the queue structure include two or more data units. Further, at least some of the blocks of the queue structure may include a pointer to a next block of the queue structure (e.g., a next chunk pointer). Given such a queue structure, the processing circuitry (102) is configurable to address a first block of the queue structure, and then to address a next block of the queue structure by setting the next block pointer of the first block to point to the next block.
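A chunked (block-based) queue of this kind can be sketched as an unrolled linked list in which each block carries several data-unit pointers plus a next-block pointer. The C code below is a minimal sketch under assumed names and sizes (chunk_t, UNITS_PER_CHUNK, enqueue()); it is not the claimed circuitry, only an illustration of how a new block is linked in via the next-block pointer when the tail block fills.

```c
#include <stdlib.h>

#define UNITS_PER_CHUNK 4

typedef struct chunk {
    void         *units[UNITS_PER_CHUNK]; /* e.g. pointers into packet memory */
    int           count;
    struct chunk *next;                   /* "next chunk pointer"             */
} chunk_t;

typedef struct {
    chunk_t *head;  /* first block addressed by the processing circuitry */
    chunk_t *tail;  /* block currently being filled                      */
} chunk_queue_t;

/* Append one data unit; a new block is linked in only when the tail block
 * is full, so the queue is managed block by block rather than unit by unit. */
void enqueue(chunk_queue_t *q, void *unit)
{
    if (q->tail == NULL || q->tail->count == UNITS_PER_CHUNK) {
        chunk_t *c = calloc(1, sizeof *c);
        if (c == NULL)
            return;                /* allocation failure: drop in this sketch */
        if (q->tail != NULL)
            q->tail->next = c;     /* set the next-block pointer of the tail  */
        else
            q->head = c;
        q->tail = c;
    }
    q->tail->units[q->tail->count++] = unit;
}
```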
Abstract:
A partitioned memory (45) is divided into a number of large buffers (60), and one or more of the large buffers is divided to create a number of small buffers (65) equal to the number of remaining large buffers. Each remaining large buffer is associated with one small buffer, and the paired buffers may be addressed by a single pointer. The pointers are stored in a first-in-first-out unit to create a pool of available buffer pairs.
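The pairing scheme can be illustrated by deriving both buffer addresses of a pair from a single handle with fixed arithmetic, and keeping the handles in a small FIFO free pool. The C sketch below assumes example sizes (NUM_LARGE, LARGE_SIZE) and helper names (large_of(), small_of(), pool_push()); it illustrates the single-pointer addressing and the FIFO pool only, not the patented buffer manager.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_LARGE   9
#define LARGE_SIZE  4096u
#define NUM_PAIRS   (NUM_LARGE - 1)            /* large buffer 0 is carved up */
#define SMALL_SIZE  (LARGE_SIZE / NUM_PAIRS)   /* 512 bytes per small buffer  */

static uint8_t memory[NUM_LARGE * LARGE_SIZE]; /* the partitioned memory      */

/* A single index identifies a pair; both addresses are derived from it by
 * fixed arithmetic, so one pointer-sized handle is enough per pair. */
static uint8_t *large_of(int idx) { return &memory[(idx + 1) * LARGE_SIZE]; }
static uint8_t *small_of(int idx) { return &memory[idx * SMALL_SIZE]; }

/* A trivial FIFO holding the pool of available buffer-pair handles. */
static int pool[NUM_PAIRS];
static int pool_head, pool_tail, pool_count;

static void pool_push(int idx)
{
    pool[pool_tail] = idx;
    pool_tail = (pool_tail + 1) % NUM_PAIRS;
    pool_count++;
}

static int pool_pop(void)
{
    int idx = pool[pool_head];
    pool_head = (pool_head + 1) % NUM_PAIRS;
    pool_count--;
    return idx;
}

int main(void)
{
    for (int i = 0; i < NUM_PAIRS; i++)   /* seed the free pool with all pairs */
        pool_push(i);
    int idx = pool_pop();                 /* acquire one large/small pair      */
    printf("pair %d: large buffer at %p, small buffer at %p (%d pairs left)\n",
           idx, (void *)large_of(idx), (void *)small_of(idx), pool_count);
    return 0;
}
```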
Abstract:
Data is transferred from a host system to a subsystem connected to the host by a system bus in an efficient manner using one or more virtual first-in-first-out (FIFO) registers in host memory and a corresponding set of virtual FIFOs located in the subsystem memory. A transmission controller controls the transfer of data from the host FIFOs to the subsystem FIFOs while the subsystem processor reads and processes data from the subsystem FIFO. By accumulating data in the host FIFOs before transfer to the subsystem, the overhead associated with starting and stopping data transfers over the system bus is substantially reduced.
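The effect of accumulating data before crossing the bus can be sketched with two linear FIFOs and a flush routine that moves a whole burst at once. In the C sketch below the sizes, the BURST_SIZE threshold, and names such as controller_flush() are assumptions; the bulk memcpy() merely stands in for a single burst transfer over the system bus.

```c
#include <string.h>
#include <stddef.h>

#define FIFO_SIZE   1024
#define BURST_SIZE  256     /* accumulate this much before crossing the bus */

/* Host-side and subsystem-side virtual FIFOs (kept linear for simplicity). */
unsigned char host_fifo[FIFO_SIZE];
size_t host_count;
unsigned char subsystem_fifo[FIFO_SIZE];
size_t subsystem_count;

/* The host writes into its local FIFO; no bus traffic is generated here. */
void host_write(const unsigned char *data, size_t len)
{
    if (host_count + len > FIFO_SIZE)
        return;                          /* full: caller would retry later */
    memcpy(host_fifo + host_count, data, len);
    host_count += len;
}

/* Stand-in for the transmission controller: once a burst's worth of data
 * has accumulated, move it to the subsystem FIFO in one bulk transfer, so
 * the start/stop overhead on the bus is paid per burst, not per write. */
void controller_flush(void)
{
    if (host_count < BURST_SIZE)
        return;
    if (subsystem_count + host_count > FIFO_SIZE)
        return;                          /* subsystem FIFO has no room yet */
    memcpy(subsystem_fifo + subsystem_count, host_fifo, host_count);
    subsystem_count += host_count;
    host_count = 0;
}
```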
Abstract:
An apparatus and method for transferring data in a data processing system to and from a host system (11). A communication adapter (10) or input/output controller device is provided in which queues (36) are used to transfer information between the adapter or controller and the host system (11). In order to minimize the amount of time that a system bus, I/O bus, or network is used during transfer of data between the adapter or controller and the host system, and to reduce the amount of work that must be performed by the host system (11), the number of interrupts issued by the adapter (10) or controller to the host system is limited to the minimum necessary by using an interrupt arm mechanism (44) and by keeping track of completion indices stored in the host system.
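The interrupt-arm and completion-index scheme can be pictured as follows: the adapter interrupts the host only when the host has re-armed interrupts and there are completions the host has not yet consumed; otherwise it simply advances its completion index and lets the host poll. The C sketch below illustrates that behavior under assumed field and function names (completion_queue_t, adapter_complete(), host_service()); it is not the adapter's actual interface.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned adapter_completion_index;  /* written by adapter                */
    unsigned host_consumed_index;       /* written by host after processing  */
    bool     irq_armed;                 /* set by host, cleared on interrupt */
} completion_queue_t;

/* Adapter side: record a completed unit of work. */
void adapter_complete(completion_queue_t *cq)
{
    cq->adapter_completion_index++;
    if (cq->irq_armed) {
        cq->irq_armed = false;          /* at most one interrupt per arming  */
        printf("interrupt host (completions pending: %u)\n",
               cq->adapter_completion_index - cq->host_consumed_index);
    }
}

/* Host side: drain everything that has completed, then re-arm. */
void host_service(completion_queue_t *cq)
{
    while (cq->host_consumed_index != cq->adapter_completion_index)
        cq->host_consumed_index++;      /* process one completion entry      */
    cq->irq_armed = true;               /* interrupt wanted only for new work */
}
```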
Abstract:
A buffer device capable of handling multiple priority levels, in which memory capacity utilization is improved so that the priority levels can be handled with higher efficiency and smaller memory capacities, and which is adaptable to a high-speed buffer implementation. The device includes a data register array (10) containing empty data registers and imaginary FIFO queues, and an administrative register array (11) comprising a two-port RAM (11a, 11b) for storing pointer chains that specify the imaginary FIFO queues. The input of data is accompanied by a modification of the pointer chain to extend it, whereas the output of data is accompanied by a modification of the pointer chain to shorten it, so that the imaginary FIFO queues are administered in a flexible manner to achieve efficient memory capacity utilization. The procedures for controlling the imaginary FIFO queues can be executed in parallel because of the independence of the read and write operations in the two-port RAM.
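The imaginary FIFO queues can be sketched in software as pointer chains over a shared register pool: a next-pointer array stands in for the administrative two-port RAM, enqueuing extends a chain, and dequeuing shortens it while returning the register to the empty chain. The C code below is a minimal sketch with assumed names and sizes (NUM_REGS, NUM_PRIOS, enqueue(), dequeue()); it models the pointer-chain bookkeeping only, not the parallel two-port RAM access.

```c
#define NUM_REGS   16
#define NUM_PRIOS  4
#define NIL        (-1)

static int data_reg[NUM_REGS];          /* data register array              */
static int next_ptr[NUM_REGS];          /* pointer chains (admin RAM stand-in) */
static int free_head;                   /* chain of empty data registers    */
static int q_head[NUM_PRIOS], q_tail[NUM_PRIOS];

void init(void)
{
    for (int i = 0; i < NUM_REGS; i++)
        next_ptr[i] = i + 1;            /* link all registers into the empty chain */
    next_ptr[NUM_REGS - 1] = NIL;
    free_head = 0;
    for (int p = 0; p < NUM_PRIOS; p++)
        q_head[p] = q_tail[p] = NIL;
}

int enqueue(int prio, int value)        /* extend the chain for this priority */
{
    if (free_head == NIL)
        return -1;                      /* no empty register left             */
    int r = free_head;
    free_head = next_ptr[r];
    data_reg[r] = value;
    next_ptr[r] = NIL;
    if (q_tail[prio] == NIL) q_head[prio] = r;
    else                     next_ptr[q_tail[prio]] = r;
    q_tail[prio] = r;
    return 0;
}

int dequeue(int prio, int *value)       /* shorten the chain for this priority */
{
    int r = q_head[prio];
    if (r == NIL)
        return -1;                      /* queue empty                        */
    *value = data_reg[r];
    q_head[prio] = next_ptr[r];
    if (q_head[prio] == NIL) q_tail[prio] = NIL;
    next_ptr[r] = free_head;            /* return register to the empty chain */
    free_head = r;
    return 0;
}
```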
Abstract:
Disclosed is an input data control system having a plurality of buffers serving as input data storage regions for storing input data transmitted from a terminal, and management information storage regions for storing management information on the input data storage regions and the input data stored therein. A data I/O management program causes the corresponding input data storage region to store the data received from the terminal, on the basis of the management information stored in the management information storage regions, and updates the corresponding management information.
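The management-information lookup can be sketched as a table that records, for each input data storage region, whether it is in use and how much data it holds; the store routine consults the table to choose a region and then updates the entry. The C sketch below uses assumed structures and names (region_info_t, store_input()); it is an illustration of the bookkeeping, not the disclosed program.

```c
#include <string.h>
#include <stddef.h>

#define NUM_REGIONS  4
#define REGION_SIZE  256

static char regions[NUM_REGIONS][REGION_SIZE];   /* input data storage regions */

typedef struct {
    int    in_use;     /* region currently holds data for a terminal */
    int    terminal;   /* which terminal the data came from          */
    size_t length;     /* bytes stored in the region                 */
} region_info_t;

static region_info_t mgmt[NUM_REGIONS];          /* management information */

/* Store data from a terminal into a free region chosen via the management
 * table, then update the corresponding management entry. */
int store_input(int terminal, const char *data, size_t len)
{
    if (len > REGION_SIZE)
        return -1;
    for (int i = 0; i < NUM_REGIONS; i++) {
        if (!mgmt[i].in_use) {
            memcpy(regions[i], data, len);
            mgmt[i] = (region_info_t){ 1, terminal, len };  /* update info */
            return i;
        }
    }
    return -1;  /* no free region available */
}
```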