Abstract:
The present disclosure relates to a lockless buffer resource management scheme. In the proposed scheme, a buffer pool is configured to have an allocation list and a de-allocation list. The allocation list includes one or more buffer objects, each linked by a next pointer to the following buffer object, and a head pointer pointing to the buffer object at the head of the allocation list. The de-allocation list includes one or more buffer objects, each linked by a next pointer to the following buffer object, a head pointer pointing to the buffer object at the head of the de-allocation list, and a tail pointer pointing to the next pointer of the buffer object at the end of the de-allocation list, wherein the tail pointer is a pointer to a pointer.
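A minimal C sketch of the two-list structure described above; the type and function names (buf_obj, pool_alloc, pool_free) and the payload size are illustrative and not taken from the disclosure, and the atomic operations that would make the scheme lockless in practice are omitted:

```c
#include <stddef.h>

/* One buffer object; `next` links it to the following object in a list. */
struct buf_obj {
    struct buf_obj *next;
    unsigned char   data[256];       /* illustrative payload size */
};

/* Allocation list: objects are taken from the head. */
struct alloc_list {
    struct buf_obj *head;
};

/* De-allocation list: returned objects are appended at the tail.
 * The tail pointer is a pointer to a pointer: it addresses the `next`
 * field of the last object, or `head` when the list is empty.          */
struct dealloc_list {
    struct buf_obj  *head;           /* NULL when empty   */
    struct buf_obj **tail;           /* initially &head   */
};

/* Take one buffer object from the allocation list (NULL if exhausted). */
static struct buf_obj *pool_alloc(struct alloc_list *a)
{
    struct buf_obj *obj = a->head;
    if (obj)
        a->head = obj->next;
    return obj;
}

/* Return a buffer object by appending it to the de-allocation list. */
static void pool_free(struct dealloc_list *d, struct buf_obj *obj)
{
    obj->next = NULL;
    *d->tail  = obj;        /* link from the previous tail (or head)   */
    d->tail   = &obj->next; /* tail now addresses the new `next` field */
}
```

Representing the tail as a pointer to a pointer lets the same two stores handle both the empty and non-empty cases of the append.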
Abstract:
A system for reducing clock speed and power consumption in a network chip. The system has a core that transmits and receives signals at a first clock speed. A receive buffer is in communication with the core and configured to transmit the signals to the core at the first clock speed. A transmit buffer is in communication with the core and configured to receive signals from the core at the first clock speed. A sync is configured to receive signals into the receive buffer at a second clock speed and to transmit the signals from the transmit buffer at the second clock speed. The sync is in communication with the transmit buffer and the receive buffer.
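A behavioral C model, not the chip hardware, of how the receive and transmit buffers decouple the two clock domains; the ring-buffer shape, depth, and all names are assumptions for illustration:

```c
#include <stdint.h>
#include <stdbool.h>

#define DEPTH 64u   /* illustrative buffer depth */

/* Ring buffer standing in for the receive or transmit buffer. */
struct ring {
    uint8_t  data[DEPTH];
    uint32_t rd;     /* advanced by the side that drains the buffer */
    uint32_t wr;     /* advanced by the side that fills the buffer  */
};

static bool ring_put(struct ring *r, uint8_t b)
{
    if (r->wr - r->rd == DEPTH)
        return false;                 /* full */
    r->data[r->wr % DEPTH] = b;
    r->wr++;
    return true;
}

static bool ring_get(struct ring *r, uint8_t *b)
{
    if (r->wr == r->rd)
        return false;                 /* empty */
    *b = r->data[r->rd % DEPTH];
    r->rd++;
    return true;
}

/* rx_buf: the sync fills it at the second clock speed, the core drains
 * it at the first clock speed. tx_buf: the core fills it at the first
 * clock speed, the sync drains it at the second clock speed.           */
static struct ring rx_buf, tx_buf;
```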
Abstract:
A partitioned memory (45) is divided into a number of large buffers (60), and one or more of the large buffers are divided to create small buffers (65) equal in number to the remaining large buffers. Each remaining large buffer is associated with one small buffer, and the paired buffers may be addressed by a single pointer. The pointers are stored in a first-in-first-out unit to create a pool of available buffer pairs.
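A C sketch of the buffer-pair pool; the sizes and names (LARGE_SZ, SMALL_SZ, pair_alloc) are illustrative, and for clarity the small buffers are modeled as a separate array rather than being carved out of one of the large buffers as the abstract describes:

```c
#include <stdint.h>
#include <stddef.h>

#define LARGE_SZ 2048u   /* illustrative large-buffer size     */
#define SMALL_SZ  128u   /* illustrative small-buffer size     */
#define NPAIRS     64u   /* number of large/small buffer pairs */

static uint8_t large_area[NPAIRS][LARGE_SZ];   /* large buffers (60) */
static uint8_t small_area[NPAIRS][SMALL_SZ];   /* small buffers (65) */

/* A single value identifies one large/small buffer pair. */
typedef uint16_t pair_ptr;

/* FIFO holding the pool of available buffer pairs. */
static pair_ptr fifo[NPAIRS];
static size_t   head, tail, count;

static void pool_init(void)
{
    for (pair_ptr i = 0; i < NPAIRS; i++)
        fifo[i] = i;                       /* every pair starts out free */
    head = 0; tail = 0; count = NPAIRS;
}

/* Take one pair from the pool and return pointers to both halves. */
static int pair_alloc(uint8_t **large, uint8_t **small)
{
    if (count == 0)
        return -1;
    pair_ptr p = fifo[head];
    head = (head + 1) % NPAIRS;
    count--;
    *large = large_area[p];
    *small = small_area[p];
    return 0;
}

/* Return a pair to the pool. */
static void pair_free(pair_ptr p)
{
    fifo[tail] = p;
    tail = (tail + 1) % NPAIRS;
    count++;
}
```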
Abstract:
A microprocessor system which includes a processor unit with system memory and a separate buffer memory, one or more subsystem adapter units with memory, optional I/O devices which may attach to the adapters, and a bus interface. The memory in the processor and the memory in the adapters are used by the system as a shared memory (106,112) which is configured as a distributed FIFO circular queue (a pipe). Unit-to-unit asynchronous communication is accomplished by placing control elements (104,116) on the pipe which represent requests, replies, and status information. The units (622,624) send and receive control elements (104,116) independently of the other units, which allows free-flowing asynchronous delivery of control information and data between units (622,624). The shared memory (106,112) can be organised as pipe pairs between each pair of units to allow full-duplex operation by using one pipe for outbound control elements (104,116) and the other pipe for inbound control elements (104,116). The control elements (104,116) have standard fixed header fields with variable fields following the fixed header. The fixed header allows a common interface protocol to be used by different hardware adapters. The combination of the pipe and the common interface protocol allows many different types of hardware adapters to communicate asynchronously, resulting in higher overall throughput due to lower interrupt overhead.
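A C sketch of the control-element layout and a pipe pair; the header fields shown (length, type, correlator) are assumptions, since the abstract only states that a fixed header precedes the variable fields:

```c
#include <stdint.h>

/* Fixed header carried by every control element (104, 116); the field
 * names and widths are illustrative, not taken from the disclosure.    */
struct ce_header {
    uint16_t length;     /* total length, including variable fields */
    uint8_t  type;       /* request, reply, or status information   */
    uint8_t  flags;
    uint32_t correlator; /* matches a reply to its request          */
};

/* A control element: fixed header followed by variable fields. */
struct control_element {
    struct ce_header hdr;
    uint8_t          var[];   /* variable-length fields */
};

/* One pipe: a circular FIFO queue of control elements laid out in the
 * shared memory (106, 112) of the two units it connects.               */
struct pipe {
    uint8_t *base;     /* start of the circular queue area    */
    uint32_t size;     /* size of the queue area in bytes     */
    uint32_t head;     /* offset at which the reader consumes */
    uint32_t tail;     /* offset at which the writer produces */
};

/* Full-duplex connection between two units (622, 624): one pipe for
 * outbound control elements and one for inbound control elements.      */
struct pipe_pair {
    struct pipe outbound;
    struct pipe inbound;
};
```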
Abstract:
Disclosed is an input data control system having a plurality of buffers serving as input data storage regions for storing input data transmitted from a terminal, and management information storage regions for storing management information about the input data storage regions and the input data stored therein. A data I/O management program permits the corresponding input data storage region to store the data supplied from the terminal, on the basis of the management information stored in the management information storage regions, and updates the corresponding management information.
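A minimal C sketch of the relationship between the input data storage regions, the management information, and the data I/O management routine; the specific management fields (in_use, length, terminal) and sizes are assumptions for illustration:

```c
#include <stdint.h>
#include <string.h>

#define NBUF   8u     /* illustrative number of input data storage regions */
#define BUFSZ  512u   /* illustrative region size                          */

/* The input data storage regions (buffers). */
static uint8_t input_buf[NBUF][BUFSZ];

/* Management information kept for each input data storage region. */
struct mgmt_info {
    uint8_t  in_use;     /* region currently holds unprocessed data */
    uint16_t length;     /* number of valid bytes stored            */
    uint16_t terminal;   /* terminal that supplied the data         */
};
static struct mgmt_info mgmt[NBUF];

/* Data I/O management routine: choose a region on the basis of the
 * management information, store the terminal's data there, and update
 * that region's management information.                                */
static int store_input(uint16_t terminal, const uint8_t *data, uint16_t len)
{
    if (len > BUFSZ)
        return -1;
    for (unsigned i = 0; i < NBUF; i++) {
        if (!mgmt[i].in_use) {
            memcpy(input_buf[i], data, len);
            mgmt[i].in_use   = 1;
            mgmt[i].length   = len;
            mgmt[i].terminal = terminal;
            return (int)i;
        }
    }
    return -1;   /* no free region available */
}
```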
Abstract:
The subject device manages the access to message queues in a memory (6) by an enqueuer (2) and a dequeuer (7) when the enqueuer has priority over the dequeuer. It solves the contention problem raised when the dequeuer dequeues the last message from a queue while the enqueuer is enqueuing a new one. A queue control block QCB and queue status bits E, A, D are assigned to each queue and stored in memories 20 and 22. Each time the dequeuer (7) performs a dequeuing operation, it sets its D bit (dequeuer active) before updating the queue head field in the QCB. When the enqueuer performs an enqueuing operation, it sets an abort bit A if it finds the D bit active and the E bit active (indicating that the queue contains at least one message), to warn the dequeuer that it has to abort its process if it is dequeuing the last message from the queue.
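A simplified, sequential C sketch of the E/A/D bit protocol; in the actual device the enqueuer and dequeuer run concurrently, which is what makes the abort bit meaningful, and the structure and field names here are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>

struct message {
    struct message *next;
};

/* Queue control block (QCB) with the three per-queue status bits. */
struct qcb {
    struct message *head;
    struct message *tail;
    bool E;   /* queue contains at least one message  */
    bool A;   /* abort: warns the dequeuer to retry   */
    bool D;   /* a dequeuing operation is in progress */
};

/* Enqueuing: the enqueuer has priority over the dequeuer. */
static void enqueue(struct qcb *q, struct message *m)
{
    if (q->D && q->E)
        q->A = true;      /* dequeuer may be removing the last message */
    m->next = NULL;
    if (q->E)
        q->tail->next = m;
    else
        q->head = m;
    q->tail = m;
    q->E = true;
}

/* Dequeuing: D is set before the queue head field is updated, and the
 * operation is aborted and retried if the enqueuer raised A while the
 * last message was being removed.                                       */
static struct message *dequeue(struct qcb *q)
{
    if (!q->E)
        return NULL;
    q->D = true;
    struct message *m = q->head;
    if (m->next == NULL) {            /* removing the last message */
        if (q->A) {
            q->A = false;
            q->D = false;
            return dequeue(q);        /* abort and restart */
        }
        q->head = NULL;
        q->E = false;
    } else {
        q->head = m->next;            /* update the queue head field */
    }
    q->D = false;
    return m;
}
```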
Abstract:
There is disclosed herein a RAM buffer controller for managing the address input lines of a RAM buffer to simulate the operation of two FIFOs therein. There is also disclosed apparatus for allowing a node processor in a local area network node, which uses the RAM buffer controller to manage transmit and receive FIFOs, to have random access to any address in the address space of the buffer without restriction to FIFO boundaries. There is also disclosed apparatus for transmitting packets from said buffer organized into one or two linked lists. Further, there is disclosed apparatus for allowing independent initialization of any of the pointers in the RAM buffer controller which are not currently selected, and for allowing software requests for read or write access by the node processor. Further, there is disclosed apparatus and a method for recording status and length information at the end of a packet instead of in front thereof, and for allowing any incoming packet to be flushed without saving status information or to be flushed while saving its status information.
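A C sketch of how two FIFOs can be simulated inside one RAM buffer by keeping a pointer set per FIFO, and of recording length and status after the packet data; the RAM size, region split, and register names are assumptions:

```c
#include <stdint.h>

#define RAM_SIZE 8192u   /* illustrative RAM buffer size */

/* Single RAM buffer shared by the simulated transmit and receive FIFOs. */
static uint8_t ram[RAM_SIZE];

/* Pointer set kept by the controller for one simulated FIFO. */
struct fifo_ptrs {
    uint16_t start;   /* first address of this FIFO's region */
    uint16_t end;     /* last address of this FIFO's region  */
    uint16_t read;    /* next address to be read             */
    uint16_t write;   /* next address to be written          */
};

static struct fifo_ptrs tx_fifo = { 0,    4095, 0,    0    };
static struct fifo_ptrs rx_fifo = { 4096, 8191, 4096, 4096 };

/* Advance an address within its FIFO region, wrapping at the end. */
static uint16_t fifo_next(const struct fifo_ptrs *f, uint16_t addr)
{
    return (addr == f->end) ? f->start : (uint16_t)(addr + 1);
}

/* Write one byte through a simulated FIFO's write pointer. */
static void fifo_write(struct fifo_ptrs *f, uint8_t b)
{
    ram[f->write] = b;
    f->write = fifo_next(f, f->write);
}

/* Record length and status *after* the packet data, rather than
 * reserving a header slot in front of it.                        */
static void end_of_packet(struct fifo_ptrs *f, uint16_t length, uint8_t status)
{
    fifo_write(f, (uint8_t)(length & 0xff));
    fifo_write(f, (uint8_t)(length >> 8));
    fifo_write(f, status);
}
```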
Abstract:
The memory elements forming the queue (e.g. EL1-EL3) are linked into a ring chain via address entry fields (AD-NEL and AD-VEL). In addition, each memory element has a control entry field (CONTF) whose entry either matches the entry (AD-NEL) of the address entry field pointing to the next memory element, when that memory element is ready to accept information to be buffered in its data field (DF), or, as a lock entry (F), indicates the end of the queue, which leads to the rejection of further buffering requests. The memory elements are addressed by two central address pointers: the entry pointer (EP...) always points to the control entry field (CONTF) of a memory element (e.g. EL1) through which the next memory element to be occupied (e.g. EL2) can be addressed for buffering the information (E1). The exit pointer (AP...) always points to the beginning of the data field (DF) of the memory element holding the information to be read out (e.g. E2). At the end of a store or a read-out operation, the respective pointer is advanced to the next memory element identified by the associated address entry (AD-NEL). The lock entry (F) can also be used for dynamic extension of the queue by chaining in a further memory element. Unchaining of memory elements is likewise possible.
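A C sketch of the ring-chained queue elements, using indices in place of addresses; the field names follow the labels in the abstract (ad_nel, ad_vel, contf, df), the way the lock entry is set and cleared on wrap-around is an inferred assumption, and empty-queue detection as well as the dynamic extension and removal of elements are omitted:

```c
#include <stdint.h>
#include <string.h>

#define NEL     8u        /* number of memory elements in the ring   */
#define F_LOCK  0xFFFFu   /* lock entry marking the end of the queue */

/* One memory element of the ring chain. */
struct element {
    uint16_t ad_nel;      /* address (index) of the next element     */
    uint16_t ad_vel;      /* address (index) of the previous element */
    uint16_t contf;       /* control entry: equals ad_nel, or F_LOCK */
    uint8_t  df[64];      /* data field                              */
};

static struct element ring[NEL];
static uint16_t ep;       /* entry pointer: element whose CONTF selects
                             the next element to be filled             */
static uint16_t ap;       /* exit pointer: element to be read out next */

static void ring_init(void)
{
    for (uint16_t i = 0; i < NEL; i++) {
        ring[i].ad_nel = (uint16_t)((i + 1) % NEL);
        ring[i].ad_vel = (uint16_t)((i + NEL - 1) % NEL);
        ring[i].contf  = ring[i].ad_nel;   /* all elements free */
    }
    ep = 0;
    ap = ring[0].ad_nel;   /* first element that will receive data */
}

/* Store one item; rejected when the control entry is the lock entry. */
static int ring_store(const uint8_t *data, size_t len)
{
    struct element *e = &ring[ep];
    if (e->contf == F_LOCK || len > sizeof(e->df))
        return -1;                         /* queue full: request rejected */
    memcpy(ring[e->contf].df, data, len);
    ep = e->ad_nel;                        /* advance the entry pointer    */
    if (ring[ep].ad_nel == ap)             /* next element still occupied? */
        ring[ep].contf = F_LOCK;           /* mark the end of the queue    */
    return 0;
}

/* Read out one item and advance the exit pointer. */
static const uint8_t *ring_fetch(void)
{
    struct element *e = &ring[ap];
    ap = e->ad_nel;
    if (ring[ep].contf == F_LOCK)          /* a freed element unblocks EP  */
        ring[ep].contf = ring[ep].ad_nel;
    return e->df;
}
```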