Abstract:
An improved computer system may include a controller including a computer processor. The system may also include a selector apparatus in communication with the controller to choose a table having a higher collision quality index than other tables under consideration by the selector apparatus. The system may further include an exchanger apparatus to configure a standby table that replaces the table chosen by the selector apparatus. The system may additionally include a switch that changes a hash function based upon the exchanger apparatus' replacement of the chosen table to enable the controller to reduce insertion times and/or collisions when interfacing with new components introduced to the controller.
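A minimal sketch of the idea, assuming a small fixed set of tables whose running collision counts stand in for the collision quality index: the selector picks the table with the highest count, the exchanger swaps in an empty standby table configured with a different hash seed, and the switch takes effect because later insertions into that slot use the new seed. The table layout, the select_worst() helper, and the seed-based hash are illustrative assumptions, not details from the abstract.

```c
/* Sketch only: select the table with the highest collision count,
 * swap in an empty standby table with a new hash seed, and let future
 * insertions use the new hash function.  All names and sizes are
 * illustrative assumptions. */
#include <stdio.h>

#define NUM_TABLES 4
#define BUCKETS    8

struct table {
    unsigned seed;             /* selects the hash function */
    int      buckets[BUCKETS]; /* occupancy per bucket      */
    int      collisions;       /* running collision count   */
};

static unsigned hash(unsigned key, unsigned seed)
{
    return (key * 2654435761u ^ seed) % BUCKETS;
}

static void insert(struct table *t, unsigned key)
{
    unsigned b = hash(key, t->seed);
    if (t->buckets[b]++)           /* bucket already occupied? */
        t->collisions++;
}

/* "selector": table with the highest collision index */
static int select_worst(struct table *t)
{
    int worst = 0;
    for (int i = 1; i < NUM_TABLES; i++)
        if (t[i].collisions > t[worst].collisions)
            worst = i;
    return worst;
}

int main(void)
{
    struct table tables[NUM_TABLES] = {{1}, {2}, {3}, {4}};

    for (unsigned k = 0; k < 64; k++)
        insert(&tables[k % NUM_TABLES], k * 7);

    int victim = select_worst(tables);

    /* "exchanger": configure an empty standby table with a new seed,
     * then "switch" to it in place of the chosen table.             */
    struct table standby = { tables[victim].seed + 17 };
    tables[victim] = standby;

    printf("replaced table %d with a standby using seed %u\n",
           victim, tables[victim].seed);
    return 0;
}
```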
Abstract:
The invention provides a method for adding specific hardware on both the receive and transmit sides that hides from the software most of the effort related to buffer and pointer management. At initialization, software provides a set of pointers and buffers in a quantity large enough to support the expected traffic. A Send Queue Replenisher (SQR) and a Receive Queue Replenisher (RQR) hide receive queue (RQ) and send queue (SQ) management from software. The RQR and SQR fully monitor the pointer queues and recirculate pointers from the transmit side to the receive side.
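A minimal sketch, under the assumption that the two replenishers can be modeled as a single ring of free buffer pointers: software fills the ring once at initialization, the receive side pulls pointers from it, and the transmit side pushes them back when a send completes, so no further software involvement is needed. The rqr_get()/sqr_put() names and the pool size are illustrative, not from the abstract.

```c
/* Sketch only: pointer recirculation between a hypothetical RQR and
 * SQR.  Software seeds the pool once; after that the pointers simply
 * circulate from the transmit side back to the receive side.  The
 * pool never grows beyond its initial size, so sqr_put() needs no
 * full check in this model. */
#include <stdio.h>
#include <stdlib.h>

#define POOL_SIZE 8

/* simple ring of free buffer pointers shared by RQR and SQR */
static void *ring[POOL_SIZE + 1];
static int head, tail;

static void sqr_put(void *buf)         /* transmit done: recirculate  */
{
    ring[tail] = buf;
    tail = (tail + 1) % (POOL_SIZE + 1);
}

static void *rqr_get(void)             /* receive side needs a buffer */
{
    if (head == tail)
        return NULL;                   /* pool exhausted */
    void *buf = ring[head];
    head = (head + 1) % (POOL_SIZE + 1);
    return buf;
}

int main(void)
{
    int i;
    /* init: software hands over enough buffers for the expected
     * traffic, then steps aside.                                  */
    for (i = 0; i < POOL_SIZE; i++)
        sqr_put(malloc(256));

    /* steady state: receive consumes a pointer, transmit returns it */
    for (i = 0; i < 32; i++) {
        void *buf = rqr_get();         /* packet arrives into buf     */
        sqr_put(buf);                  /* packet sent, pointer reused */
    }
    printf("recirculated %d buffers through a pool of %d\n", i, POOL_SIZE);
    return 0;
}
```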
Abstract:
An assignment constraint matrix is used in assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. The assignment constraint matrix is implemented as a plurality of qualifier matrixes adapted to operate in parallel. Each of the plurality of qualifier matrixes is adapted to determine which sources in a subset of supported sources are qualified to provide work to a set of sinks based on assignment constraints. The determination of qualified sources may be based on sink availability information, which may be provided for a set of sinks on a single chip or distributed across multiple chips.
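A minimal sketch, assuming the constraint matrix can be represented as one bitmask of permitted sinks per source and that each qualifier matrix covers a fixed slice of the sources; a source qualifies when any of its permitted sinks is reported available. In hardware the slices would evaluate simultaneously; here they are just independent calls. The sizes, the sample masks, and the qualify() name are illustrative assumptions.

```c
/* Sketch only: per-source bitmasks of permitted sinks stand in for the
 * assignment constraint matrix; each qualify() call models one
 * qualifier matrix covering a slice of the sources. */
#include <stdio.h>
#include <stdint.h>

#define NUM_SOURCES 8
#define SLICE       4      /* sources handled by one qualifier matrix */

/* constraint matrix: bit s set => source may send work to sink s */
static const uint32_t allowed_sinks[NUM_SOURCES] = {
    0x3, 0x3, 0xC, 0xC, 0x5, 0xA, 0xF, 0x1
};

/* one qualifier matrix: which sources in [first, first+SLICE) qualify */
static uint32_t qualify(int first, uint32_t available_sinks)
{
    uint32_t qualified = 0;
    for (int i = 0; i < SLICE; i++)
        if (allowed_sinks[first + i] & available_sinks)
            qualified |= 1u << (first + i);
    return qualified;
}

int main(void)
{
    uint32_t available_sinks = 0x6;   /* sinks 1 and 2 are free */

    /* the two slices would run simultaneously in hardware; here they
     * are independent calls whose results are OR-ed together.       */
    uint32_t q = qualify(0, available_sinks) | qualify(4, available_sinks);

    printf("qualified sources: 0x%02x\n", q);
    return 0;
}
```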
Abstract:
A data path for streaming data includes a plurality of sequential data registers, each of the plurality of sequential data registers comprising a plurality of data fields, wherein the streaming data moves sequentially through the sequential data registers; and a multiplexing unit, the multiplexing unit configured such that the multiplexing unit has access to each of the plurality of data fields of the plurality of sequential data registers, and wherein the multiplexing unit is configured to extract data from the streaming data as the streaming data moves through the sequential data registers in response to a data request.
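A minimal software model, assuming a four-stage pipeline of 32-bit registers each divided into four 8-bit fields: clock_tick() shifts the stream one register forward, and extract() stands in for the multiplexing unit by selecting any field of any stage in response to a data request. The stage count, field widths, and names are illustrative assumptions.

```c
/* Sketch only: words stream through a short pipeline of registers and
 * extract() pulls one field from one stage on request, modeling the
 * multiplexing unit's access to every field of every register. */
#include <stdio.h>
#include <stdint.h>

#define STAGES 4
#define FIELDS 4           /* four 8-bit fields per 32-bit register */

static uint32_t pipe[STAGES];        /* sequential data registers */

/* advance the stream by one register per clock */
static void clock_tick(uint32_t in)
{
    for (int i = STAGES - 1; i > 0; i--)
        pipe[i] = pipe[i - 1];
    pipe[0] = in;
}

/* multiplexing unit: access any field of any stage */
static uint8_t extract(int stage, int field)
{
    return (uint8_t)(pipe[stage] >> (8 * field));
}

int main(void)
{
    /* stream a few words through, then service a "data request"
     * asking for field 2 of the word currently in stage 1          */
    for (uint32_t w = 0; w < 6; w++)
        clock_tick(0xA0B0C0D0u + w);

    printf("extracted field: 0x%02x\n", extract(1, 2));
    return 0;
}
```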
Abstract:
Translating between a first communication protocol used by a first network component and a second communication protocol used by a second network component, where translating includes: receiving, by a network engine adapter operating independently from the first and second network components, data packets from the first and second network components; and performing, by the network engine adapter, a combined communication protocol based on the first communication protocol and the second communication protocol, including manipulating data packets of at least one of the first communication protocol or the second communication protocol, thereby offloading performance requirements for the combined communication protocol from the first and second network components.
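A minimal sketch of the adapter's role, with two made-up header formats standing in for the first and second communication protocols; the adapter rewrites one header into the other in each direction so neither endpoint has to implement the combined protocol itself. The formats, field names, and conversion rules are illustrative assumptions, not the patented method.

```c
/* Sketch only: a stand-in for the network engine adapter translating
 * between two invented header formats, proto_a_hdr and proto_b_hdr. */
#include <stdio.h>
#include <stdint.h>

struct proto_a_hdr { uint16_t src; uint16_t dst; uint16_t len; };
struct proto_b_hdr { uint32_t route; uint16_t payload_len; };

/* adapter: manipulate an A packet so a B endpoint can consume it */
static struct proto_b_hdr a_to_b(struct proto_a_hdr a)
{
    struct proto_b_hdr b;
    b.route       = ((uint32_t)a.src << 16) | a.dst;
    b.payload_len = a.len;
    return b;
}

/* and the reverse direction */
static struct proto_a_hdr b_to_a(struct proto_b_hdr b)
{
    struct proto_a_hdr a;
    a.src = (uint16_t)(b.route >> 16);
    a.dst = (uint16_t)(b.route & 0xFFFF);
    a.len = b.payload_len;
    return a;
}

int main(void)
{
    struct proto_a_hdr a = { 0x1234, 0x5678, 256 };
    struct proto_b_hdr b = a_to_b(a);       /* adapter, A -> B side */
    struct proto_a_hdr back = b_to_a(b);    /* adapter, B -> A side */

    printf("round trip: src=0x%04x dst=0x%04x len=%u\n",
           back.src, back.dst, back.len);
    return 0;
}
```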
Abstract:
A mechanism for offloading the management of receive queues in a split (e.g., split socket, split iSCSI, split DAFS) stack environment, including efficient queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates receive work queues and completion queues that are utilized by an Internet Protocol Suite Offload Engine (IPSOE) and the ULP to transfer information and carry out send operations. As consumers initiate receive operations, receive work queue entries (RWQEs) are created by the ULP and written to the receive work queue (RWQ). The IPSOE is notified of a new entry to the RWQ and subsequently reads this entry, which contains pointers to the data that is to be received. After the data is received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP subsequently processes the entry and removes it from the CQ, freeing up space in both the RWQ and the CQ. The number of entries available in the RWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite the CQ.
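A minimal sketch of the queue flow, assuming both the RWQ and the CQ can be modeled as fixed-depth rings with producer/consumer counters: the ULP posts RWQEs only while the RWQ has free entries, the offload engine consumes an RWQE and writes a CQE only while the CQ has room, and the ULP retires CQEs to free space again. The function names and queue depth are illustrative assumptions.

```c
/* Sketch only: ulp_post() writes RWQEs, ipsoe_receive() consumes them
 * and writes CQEs, ulp_poll() retires CQEs.  Both sides check free
 * space before writing, mirroring the flow control in the abstract. */
#include <stdio.h>

#define QDEPTH 4

static int rwq[QDEPTH], rwq_head, rwq_tail;   /* RWQE = buffer id */
static int cq [QDEPTH], cq_head,  cq_tail;    /* CQE  = buffer id */

static int full(int head, int tail)  { return tail - head == QDEPTH; }
static int empty(int head, int tail) { return tail == head; }

static int ulp_post(int buf_id)               /* ULP writes an RWQE  */
{
    if (full(rwq_head, rwq_tail))
        return -1;                            /* would overwrite RWQ */
    rwq[rwq_tail++ % QDEPTH] = buf_id;
    return 0;
}

static void ipsoe_receive(void)               /* data arrives        */
{
    if (empty(rwq_head, rwq_tail) || full(cq_head, cq_tail))
        return;                               /* no RWQE or CQ full  */
    int buf_id = rwq[rwq_head++ % QDEPTH];    /* consume the RWQE    */
    cq[cq_tail++ % QDEPTH] = buf_id;          /* write the CQE       */
}

static void ulp_poll(void)                    /* ULP retires a CQE   */
{
    if (!empty(cq_head, cq_tail))
        printf("completed receive into buffer %d\n",
               cq[cq_head++ % QDEPTH]);
}

int main(void)
{
    for (int i = 0; i < 6; i++) {
        ulp_post(i);
        ipsoe_receive();
        ulp_poll();
    }
    return 0;
}
```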
Abstract:
An addressing model is provided where devices, including I/O devices, are addressed with internet protocol (IP) addresses, which are considered part of the virtual address space. A task, such as an application, may be assigned an effective address range, which corresponds to addresses in the virtual address space. The virtual address space is expanded to include Internet protocol addresses, and the page frame tables are modified accordingly to include entries for IP addresses and additional properties for devices and I/O. As a result, a processing element, such as an I/O adapter or even a printer, for example, may also be addressed using IP addresses without the need for library calls, device drivers, pinning memory, and so forth. This addressing model also provides full virtualization of resources across an IP interconnect, allowing a process to access an I/O device across a network.
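A minimal sketch, assuming a page-table entry can be extended with a flag plus either a physical frame number or an IP address and a device property: resolve() shows the same virtual-address path ending either at local memory or at an IP-addressed device. The entry layout, field names, and 4 KiB page size are illustrative assumptions, not the patented format.

```c
/* Sketch only: a page-table entry extended so a virtual page can map
 * either to a local physical frame or to an IP-addressed device. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct pte {
    bool     remote;          /* true: target is an IP-addressed device */
    uint64_t frame;           /* local physical frame (remote == false) */
    uint32_t ip;              /* IPv4 address of device (remote == true)*/
    uint16_t device_port;     /* extra device property carried in PTE   */
};

static void resolve(const struct pte *e, uint64_t vaddr)
{
    uint64_t offset = vaddr & 0xFFF;          /* 4 KiB pages assumed */
    if (e->remote)
        printf("vaddr 0x%llx -> device %u.%u.%u.%u:%u offset 0x%llx\n",
               (unsigned long long)vaddr,
               e->ip >> 24, (e->ip >> 16) & 255, (e->ip >> 8) & 255,
               e->ip & 255, e->device_port, (unsigned long long)offset);
    else
        printf("vaddr 0x%llx -> frame 0x%llx offset 0x%llx\n",
               (unsigned long long)vaddr,
               (unsigned long long)e->frame, (unsigned long long)offset);
}

int main(void)
{
    struct pte ram     = { false, 0x42, 0, 0 };
    struct pte printer = { true, 0,
                           (10u << 24) | (0 << 16) | (1 << 8) | 7, 9100 };

    resolve(&ram,     0x00001234);   /* ordinary memory mapping        */
    resolve(&printer, 0x00002080);   /* same load/store path, but the  */
                                     /* page resolves to 10.0.1.7      */
    return 0;
}
```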
Abstract:
A technique for operating a high performance computing (HPC) cluster includes monitoring communication between threads assigned to multiple processors included in the HPC cluster. The HPC cluster includes multiple nodes that each include two or more of the multiple processors. One or more of the threads are moved to a different one of the multiple processors based on the communication between the threads.
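A minimal sketch of the placement step, assuming per-pair message counts are already available from the monitoring: the heaviest-communicating pair of threads that sit on different nodes is found, and one of the two is moved onto the other's node. The thread count, sample traffic matrix, and node assignment are illustrative assumptions.

```c
/* Sketch only: pick the most talkative thread pair on different nodes
 * and migrate one thread next to the other. */
#include <stdio.h>

#define THREADS 4

static int comm[THREADS][THREADS] = {   /* messages exchanged        */
    { 0,  2, 90, 1},
    { 2,  0,  3, 4},
    {90,  3,  0, 5},
    { 1,  4,  5, 0},
};
static int node_of[THREADS] = {0, 0, 1, 1};  /* current placement     */

int main(void)
{
    int best_i = 0, best_j = 1, best = -1;

    /* find the heaviest-communicating pair placed on different nodes */
    for (int i = 0; i < THREADS; i++)
        for (int j = i + 1; j < THREADS; j++)
            if (node_of[i] != node_of[j] && comm[i][j] > best) {
                best = comm[i][j];
                best_i = i;
                best_j = j;
            }

    if (best > 0) {
        printf("moving thread %d from node %d to node %d (traffic %d)\n",
               best_j, node_of[best_j], node_of[best_i], best);
        node_of[best_j] = node_of[best_i];   /* the migration itself  */
    }
    return 0;
}
```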
Abstract:
An integrated processor design includes physical interface macros supporting heterogeneous electrical properties. The processor design comprises a plurality of processing cores and a plurality of physical interfaces to connect to a memory interface, a peripheral component interconnect express (PCI Express or PCIe) interface for input/output, an Ethernet interface for network communication, and/or a serial attached SCSI (SAS) interface for storage. Each physical interface may be programmatically connected to a selected interface controller, such as a memory controller, a PCI Express controller, or an Ethernet controller, for example. A plurality of such controllers may be connected to a switch within the processor design, with the switch also being connected to each physical interface macro. Thus, the physical interface macros may be programmatically connected to a subset of the plurality of controllers.
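A minimal sketch, assuming the on-chip switch can be modeled as a routing table indexed by physical interface macro: programming an entry connects that PHY to one of the memory, PCIe, Ethernet, or SAS controllers, and the same silicon can be reprogrammed into a different configuration. The enum values and the bind_phy() helper are illustrative assumptions.

```c
/* Sketch only: the programmable association between PHY macros and
 * interface controllers modeled as a routing table in the switch. */
#include <stdio.h>

enum controller { CTRL_MEMORY, CTRL_PCIE, CTRL_ETHERNET, CTRL_SAS, CTRL_NONE };

#define NUM_PHYS 6

static const char *ctrl_name[] = { "memory", "PCIe", "Ethernet", "SAS", "none" };

/* the switch: which controller each physical interface macro feeds */
static enum controller route[NUM_PHYS] = {
    CTRL_NONE, CTRL_NONE, CTRL_NONE, CTRL_NONE, CTRL_NONE, CTRL_NONE
};

static void bind_phy(int phy, enum controller c)
{
    route[phy] = c;      /* programmatic connection through the switch */
}

int main(void)
{
    /* one possible configuration of the same silicon */
    bind_phy(0, CTRL_MEMORY);
    bind_phy(1, CTRL_MEMORY);
    bind_phy(2, CTRL_PCIE);
    bind_phy(3, CTRL_ETHERNET);
    bind_phy(4, CTRL_SAS);

    for (int phy = 0; phy < NUM_PHYS; phy++)
        printf("PHY macro %d -> %s controller\n", phy, ctrl_name[route[phy]]);
    return 0;
}
```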
Abstract:
Techniques and articles of manufacture are provided comprising computer readable programs that, when executed on the computer, cause the computer to delete a leaf from a Patricia tree, which has a direct table and a plurality of PSCBs that decode portions of a leaf's pattern, without shutting down the functioning of the tree. A leaf having a pattern is identified as a leaf to be deleted. Using the pattern, the tree is walked to identify the location of the leaf to be deleted. The leaf to be deleted is identified and deleted, and any relevant PSCB is modified, if necessary. The technique is also applicable to deleting a prefix of a prefix.
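A minimal sketch, assuming the tree can be reduced to a small bit-test trie in which the internal nodes stand in for PSCBs and the direct table is omitted: delete_leaf() walks by the pattern, unlinks the matching leaf, and collapses the now-redundant PSCB by promoting its sibling, which mirrors the "any relevant PSCB modified, if necessary" step. The node layout, names, and the absence of concurrency handling are illustrative simplifications, not the patented structure.

```c
/* Sketch only: delete a leaf from a bit-test trie standing in for the
 * Patricia tree.  Each internal node (a PSCB analogue) tests one bit;
 * removing a leaf also collapses its parent PSCB. */
#include <stdio.h>
#include <stdlib.h>

struct node {
    int          bit;       /* bit tested by this PSCB; -1 for a leaf */
    unsigned     pattern;   /* valid only in leaves                   */
    struct node *child[2];
};

static struct node *leaf(unsigned p)
{
    struct node *n = calloc(1, sizeof *n);
    n->bit = -1;
    n->pattern = p;
    return n;
}

static struct node *pscb(int bit, struct node *l, struct node *r)
{
    struct node *n = calloc(1, sizeof *n);
    n->bit = bit;
    n->child[0] = l;
    n->child[1] = r;
    return n;
}

/* walk by the pattern; unlink the leaf and promote its sibling */
static struct node *delete_leaf(struct node *root, unsigned pattern)
{
    struct node **link = &root;
    while ((*link)->bit >= 0) {                 /* still at a PSCB    */
        struct node *p = *link;
        int dir = (pattern >> p->bit) & 1;
        if (p->child[dir]->bit < 0) {           /* next hop is a leaf */
            if (p->child[dir]->pattern == pattern) {
                struct node *sibling = p->child[1 - dir];
                free(p->child[dir]);
                free(p);                        /* PSCB collapsed     */
                *link = sibling;
            }
            return root;
        }
        link = &p->child[dir];
    }
    return root;
}

int main(void)
{
    /* three leaves: 0b00, 0b01, 0b10, decoded by two PSCBs */
    struct node *root = pscb(1, pscb(0, leaf(0x0), leaf(0x1)), leaf(0x2));

    root = delete_leaf(root, 0x1);
    printf("root now tests bit %d\n", root->bit);
    return 0;
}
```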