Abstract:
An out-of-order processor. The processor includes a virtual load/store queue for allocating a plurality of loads and a plurality of stores, wherein more loads and more stores can be accommodated beyond the actual physical size of the processor's load/store queue; wherein the processor allocates other instructions besides loads and stores beyond the actual physical size limitation of the load/store queue; and wherein the other instructions can be dispatched and executed even though intervening loads or stores do not have spaces in the load/store queue.
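To make the mechanism concrete, here is a minimal Python sketch of a virtual load/store queue; the queue size, class names, and dispatch rule are illustrative assumptions, not details from the abstract.

```python
from collections import deque

PHYSICAL_LSQ_SIZE = 4  # assumed physical capacity of the load/store queue

class VirtualLSQ:
    """Accepts more loads/stores than the physical queue can hold."""
    def __init__(self):
        self.physical = deque()  # entries occupying real queue slots
        self.virtual = deque()   # loads/stores allocated beyond the physical size

    def allocate(self, mem_op):
        # Loads and stores are always allocated; they overflow into the
        # virtual extension when no physical slot is free.
        if len(self.physical) < PHYSICAL_LSQ_SIZE:
            self.physical.append(mem_op)
        else:
            self.virtual.append(mem_op)

    def retire(self):
        # Retiring a physical entry promotes the oldest virtual entry.
        done = self.physical.popleft()
        if self.virtual:
            self.physical.append(self.virtual.popleft())
        return done

def dispatch(op, lsq):
    # Non-memory instructions proceed even while intervening loads or
    # stores are still waiting for a physical slot.
    if op in ("load", "store"):
        lsq.allocate(op)
    return True  # nothing stalls at dispatch in this model
```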
Abstract:
A method and apparatus for handling incoming data frames within a network interface controller. The network interface controller comprises at least one controller component operably coupled to at least one memory element. The at least one controller component is arranged to identify a next available buffer pointer from a pool of buffer pointers stored within a first area of memory within the at least one memory element, receive an indication that a start of a data frame has been received via a network interface, and allocate the identified next available buffer pointer to the data frame.
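A minimal sketch of the buffer-pointer pool described above, assuming a simple free list kept in a first memory area; all names and sizes are hypothetical.

```python
class BufferPointerPool:
    """Pool of buffer pointers held in a first area of memory."""
    def __init__(self, base_addr, buf_size, count):
        self.free = [base_addr + i * buf_size for i in range(count)]

    def next_available(self):
        # Identify the next available buffer pointer without consuming it.
        return self.free[0] if self.free else None

    def allocate(self):
        # Bind the identified pointer to an incoming frame.
        return self.free.pop(0) if self.free else None

    def release(self, ptr):
        self.free.append(ptr)

pool = BufferPointerPool(base_addr=0x1000, buf_size=2048, count=8)
ptr = pool.next_available()     # identified ahead of time
# ... on the "start of frame" indication from the network interface:
frame_buffer = pool.allocate()  # pointer is allocated to the data frame
```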
Abstract:
A memory system for a network device is described. The memory system includes a main memory configured to store one or more data elements. Further, the memory system includes a link memory that is configured to maintain one or more pointers to interconnect the one or more data elements stored in the main memory. The memory system also includes a free-entry manager that is configured to generate an available bank set including one or more locations in the link memory. In addition, the memory system includes a context manager that is configured to maintain metadata for a list of the one or more data elements.
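The following Python sketch models the four components under stated assumptions (a per-list head/tail/count record standing in for the context manager's metadata); it is illustrative, not the patented design.

```python
class LinkedMemorySystem:
    def __init__(self, size):
        self.main = [None] * size           # main memory: the data elements
        self.link = [None] * size           # link memory: next-entry pointers
        self.free_banks = set(range(size))  # free-entry manager: available set
        self.context = {}                   # context manager: per-list metadata

    def enqueue(self, list_id, element):
        loc = self.free_banks.pop()         # take a location from the available set
        self.main[loc] = element
        self.link[loc] = None
        meta = self.context.setdefault(
            list_id, {"head": None, "tail": None, "count": 0})
        if meta["tail"] is None:
            meta["head"] = loc
        else:
            self.link[meta["tail"]] = loc   # interconnect via the link memory
        meta["tail"] = loc
        meta["count"] += 1

    def dequeue(self, list_id):
        meta = self.context[list_id]
        loc = meta["head"]
        meta["head"] = self.link[loc]
        if meta["head"] is None:
            meta["tail"] = None
        meta["count"] -= 1
        self.free_banks.add(loc)            # location becomes available again
        return self.main[loc]
```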
Abstract:
A system and method can support input/output (I/O) virtualization in a computing environment. The system can comprise a chip, which is associated with a server on a network fabric. Additionally, the chip is associated with an external memory that contains a plurality of packet buffers. Moreover, an on-chip memory maintains a state of one or more packets that contain disk-read data received from a physical host bus adaptor (HBA). Furthermore, the chip operates to enqueue said one or more packets in the plurality of packet buffers on the external memory, read out said one or more packets from the external memory based on the state of said one or more packets, and send said one or more packets to the server.
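As a rough, non-authoritative model of this data path, the sketch below stages packets in (simulated) external buffers while on-chip state gates when they are read out and sent; the state values and class names are invented for illustration.

```python
from collections import deque

class Server:
    def receive(self, data):
        print("server received", len(data), "bytes")

class IOVChip:
    def __init__(self):
        self.external_buffers = deque()  # external memory: packet buffers
        self.onchip_state = {}           # on-chip memory: per-packet state

    def enqueue_from_hba(self, packet_id, disk_read_data):
        # Disk-read data arriving from the physical HBA is buffered off-chip.
        self.external_buffers.append((packet_id, disk_read_data))
        self.onchip_state[packet_id] = "buffered"

    def send_ready(self, server):
        # Packets are read out of external memory based on their tracked state.
        while self.external_buffers:
            packet_id, data = self.external_buffers.popleft()
            if self.onchip_state.pop(packet_id, None) == "buffered":
                server.receive(data)
```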
Abstract:
A work conserving scheduler can be implemented based on a ranking system to provide the scalability of time stamps while avoiding the fast search associated with a traditional time stamp implementation. Each queue can be assigned a time stamp that is initially set to zero. The time stamp for a queue can be incremented each time a data packet from the queue is processed. To provide varying weights to the different queues, the time stamps for the queues can be incremented at varying rates. The data packets can be processed from the queues based on the tier rank order of the queues as determined from the time stamp associated with each queue. To increase the speed at which the ranking is determined, the ranking can be calculated from a subset of the bits defining the time stamp rather than the entire bit set.
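A small Python sketch of this idea, with an assumed tier granularity (the number of low-order time-stamp bits ignored during ranking) and a weight-to-increment mapping chosen for illustration:

```python
class WeightedScheduler:
    TIER_SHIFT = 4  # assumed: rank on the upper time-stamp bits only

    def __init__(self, weights):
        # Higher weight -> smaller increment -> the queue is served more often.
        top = max(weights.values())
        self.increment = {q: top // w for q, w in weights.items()}
        self.timestamp = {q: 0 for q in weights}
        self.queues = {q: [] for q in weights}

    def enqueue(self, q, packet):
        self.queues[q].append(packet)

    def dequeue(self):
        backlogged = [q for q, pkts in self.queues.items() if pkts]
        if not backlogged:
            return None  # work conserving: serves whenever any queue is busy
        # Tier rank comes from a subset of the time-stamp bits, not the full value.
        q = min(backlogged, key=lambda n: self.timestamp[n] >> self.TIER_SHIFT)
        self.timestamp[q] += self.increment[q]  # incremented on each service
        return self.queues[q].pop(0)

sched = WeightedScheduler({"gold": 4, "bronze": 1})
sched.enqueue("gold", "p1"); sched.enqueue("bronze", "p2")
print(sched.dequeue())  # serves a packet from the lowest-ranked backlogged queue
```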
Abstract:
Embodiments of the present invention provide a buffer manager and a buffer management method based on an address pointer linked list. In the embodiments, the address pointers of all buffer blocks in a buffer are divided into several groups; the lower bits of the address pointers in each group are used to record a linked list between the address pointers in the same group, and the address pointer in a different group that is pointed to by one predetermined address pointer of each group is further recorded to build a linked list between the groups. Thereby, an address linked list can still be stored without a RAM whose width equals the pointer width and whose depth equals the total number of buffer blocks in the buffer, as required by the conventional art, which greatly reduces the hardware resources required.
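The storage-saving layout can be sketched as follows; the group size and the choice of each group's last slot as the "predetermined" cross-group pointer are assumptions made for illustration.

```python
GROUP_SIZE = 8  # assumed number of buffer-block pointers per group

class GroupedPointerList:
    def __init__(self, num_blocks):
        groups = num_blocks // GROUP_SIZE
        # Lower bits only: next pointer *within* the same group (0..GROUP_SIZE-1).
        self.intra = [[0] * GROUP_SIZE for _ in range(groups)]
        # One full-width pointer per group links into a different group.
        self.inter = [0] * groups

    def set_next(self, ptr, next_ptr):
        group, offset = divmod(ptr, GROUP_SIZE)
        if offset == GROUP_SIZE - 1:          # the predetermined pointer
            self.inter[group] = next_ptr      # full cross-group address
        else:
            # Assumes ordinary entries chain only within their own group.
            self.intra[group][offset] = next_ptr % GROUP_SIZE

    def get_next(self, ptr):
        group, offset = divmod(ptr, GROUP_SIZE)
        if offset == GROUP_SIZE - 1:
            return self.inter[group]
        return group * GROUP_SIZE + self.intra[group][offset]
```

With G pointers per group, each ordinary entry stores only log2(G) bits instead of a full-width pointer, which is the source of the hardware savings the abstract claims.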
Abstract:
The present invention extends to methods, systems, and computer program products for maintaining a count for lock-free stack access. A numeric value representing the total count of nodes in a linked list is maintained at the head node of the linked list. Commands for pushing and popping nodes appropriately update the total count at the new head node when nodes are added to or removed from the linked list. Thus, determining the count of nodes in a linked list is an order 1 (or O(1)) operation, and its cost remains constant even when the size of the linked list changes.
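A compact sketch of the counted-stack bookkeeping; a real lock-free version would install the new head with an atomic compare-and-swap, which this single-threaded Python model only notes in comments.

```python
class Node:
    def __init__(self, value, next_node):
        self.value = value
        self.next = next_node
        # The new head carries the total node count at push time.
        self.count = (next_node.count if next_node else 0) + 1

class CountedStack:
    def __init__(self):
        self.head = None

    def push(self, value):
        self.head = Node(value, self.head)  # CAS(head, old, new) when lock-free

    def pop(self):
        node = self.head
        if node is None:
            return None
        self.head = node.next               # CAS(head, node, node.next)
        return node.value

    def count(self):
        # O(1): read the count stored at the head node, no traversal needed.
        return self.head.count if self.head else 0

s = CountedStack()
s.push("a"); s.push("b")
assert s.count() == 2
s.pop()
assert s.count() == 1
```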
Abstract:
A buffer is disclosed for storing data being transferred using a plurality of control channels, a data item of said data being transferred between a data source and a data destination using one of said plurality of control channels, said buffer comprising: a data input port operable to receive said data being transferred using said plurality of control channels; a data output port operable to output data to be transferred using said plurality of control channels; and a data store operable to store data received from said data input port prior to its being output by said data output port, said data store comprising a plurality of storage locations each operable to store a data item, said storage locations being arranged in groups, a storage location being allocated to a group in dependence on the control channel from which the data item it stores is received, such that each group comprises storage locations storing data items received from a same one of said plurality of control channels. Free storage locations are not allocated to any of the groups, so that newly received data items can be stored in any free storage location, which is then allocated to the group corresponding to the channel being used.
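A minimal model of this grouping policy, with invented names; the key property is that free locations belong to no group until a data item arrives on some channel.

```python
class ChannelGroupedBuffer:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.free = set(range(num_slots))  # free locations: in no group
        self.groups = {}                   # control channel -> ordered slot list

    def write(self, channel, item):
        slot = self.free.pop()             # any free location can be used
        self.slots[slot] = item
        # The location joins the group of the channel the item arrived on.
        self.groups.setdefault(channel, []).append(slot)

    def read(self, channel):
        slot = self.groups[channel].pop(0)  # oldest item for this channel
        item, self.slots[slot] = self.slots[slot], None
        self.free.add(slot)                 # returns to the ungrouped free set
        return item
```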
Abstract:
A buffer architecture enables linked lists to be used to administer virtual output queue buffering. The buffer has three random access memories (RAMs). A data RAM holds the data. A free RAM holds a linked list of entries defining free space in the data RAM. A destination RAM holds a linked list of entries defining data in the data RAM to be forwarded to a destination.
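The three RAMs map naturally onto two linked lists sharing one entry space; the head/tail registers in this Python sketch are assumptions about the surrounding bookkeeping.

```python
class VOQBuffer:
    def __init__(self, size, num_destinations):
        self.data_ram = [None] * size                  # data RAM: payloads
        self.free_ram = list(range(1, size)) + [None]  # free RAM: free-list links
        self.dest_ram = [None] * size                  # destination RAM: queue links
        self.free_head = 0
        self.head = [None] * num_destinations          # per-destination queue heads
        self.tail = [None] * num_destinations

    def enqueue(self, dest, data):
        entry = self.free_head                  # take the head of the free list
        self.free_head = self.free_ram[entry]
        self.data_ram[entry] = data
        self.dest_ram[entry] = None
        if self.tail[dest] is None:
            self.head[dest] = entry
        else:
            self.dest_ram[self.tail[dest]] = entry  # append to the dest chain
        self.tail[dest] = entry

    def dequeue(self, dest):
        entry = self.head[dest]
        self.head[dest] = self.dest_ram[entry]
        if self.head[dest] is None:
            self.tail[dest] = None
        self.free_ram[entry] = self.free_head   # entry rejoins the free list
        self.free_head = entry
        return self.data_ram[entry]
```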
Abstract:
A data structure is disclosed. The data structure includes a data descriptor record. In turn, the data descriptor record includes a type field, a base address field, an offset field, and a length field. The type field may be configured, for example, to indicate a data structure type. The data structure type may be configured to assume a value indicating one of a contiguous buffer, a scatter-gather list, and a linked list structure, among other such data structures. The base address field may be configured, for example, to store a base address, with the base address being the starting address of a secondary data structure associated with the data descriptor record. The offset field may be configured, for example, to indicate a starting address of data within the secondary data structure pointed to by the base address stored in the base address field. The length field is configured to indicate the length of data stored in the secondary data structure pointed to by the base address stored in the base address field.
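Rendered as code, the record might look like the dataclass below; the enum values and the helper method are illustrative, not specified by the abstract.

```python
from dataclasses import dataclass
from enum import Enum

class StructureType(Enum):
    CONTIGUOUS_BUFFER = 0
    SCATTER_GATHER_LIST = 1
    LINKED_LIST = 2

@dataclass
class DataDescriptorRecord:
    type: StructureType  # kind of secondary data structure described
    base_address: int    # starting address of the secondary data structure
    offset: int          # where the data starts within that structure
    length: int          # length of the data stored there

    def data_start(self) -> int:
        return self.base_address + self.offset

desc = DataDescriptorRecord(StructureType.CONTIGUOUS_BUFFER, 0x8000, 0x40, 512)
assert desc.data_start() == 0x8040
```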