Abstract:
A system, method, and computer program product are provided for sending a message from a first queue to a second queue associated with a receiver agent in response to a request. In operation, a message is sent from a sender agent to a first queue. Additionally, a request is received at the first queue from a receiver agent. Furthermore, the message is sent from the first queue to a second queue associated with the receiver agent, in response to the request.
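As a minimal, single-threaded C sketch of the forwarding step described above (the names struct queue, enqueue, dequeue, and forward_on_request are illustrative, not taken from the abstract):

/* Minimal sketch of queue-to-queue forwarding: a sender agent posts to a
 * first queue, and a receiver agent's request causes the message to move
 * to a second queue associated with that receiver. Illustrative only. */
#include <stdio.h>

#define QUEUE_DEPTH 16

struct message { char payload[64]; };

struct queue {
    struct message slots[QUEUE_DEPTH];
    int head, tail;                      /* simple ring-buffer indices */
};

static int enqueue(struct queue *q, const struct message *m)
{
    int next = (q->tail + 1) % QUEUE_DEPTH;
    if (next == q->head) return -1;      /* queue full */
    q->slots[q->tail] = *m;
    q->tail = next;
    return 0;
}

static int dequeue(struct queue *q, struct message *m)
{
    if (q->head == q->tail) return -1;   /* queue empty */
    *m = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    return 0;
}

/* The receiver agent's request triggers forwarding from the first queue
 * to the second queue associated with that receiver. */
static int forward_on_request(struct queue *first, struct queue *second)
{
    struct message m;
    if (dequeue(first, &m) != 0) return -1;  /* nothing pending */
    return enqueue(second, &m);
}

int main(void)
{
    struct queue first = {0}, second = {0};
    struct message m = { "hello from sender agent" };

    enqueue(&first, &m);                  /* sender agent -> first queue   */
    forward_on_request(&first, &second);  /* receiver agent issues request */

    struct message out;
    if (dequeue(&second, &out) == 0)
        printf("receiver got: %s\n", out.payload);
    return 0;
}

Here the receiver agent's request is modeled simply as a call to forward_on_request(); in a real system the request and the two queues would typically be decoupled across threads or agents.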
Abstract:
Disclosed is an approach for configuring devices for a multiprocessor system, where the devices pertaining to the different processors are viewed as connecting to a standardized common bus. Regardless of the specific processor to which a device is directly connected, that device can be generally identified and accessed along the standardized common bus. PCIe is an example of a suitable standardized bus type that can be employed, where the devices for each processor node are represented as PCIe devices. Therefore, each of the devices would appear to the system software as a PCIe device. A PCIe controller can then be used to access the device by referring to the appropriate device identifier. This permits any device to be accessed on any of the processor nodes, without separate and individualized configurations or drivers for each separate processor node.
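The following C sketch illustrates the general idea under assumed names (struct pcie_id, device_table, lookup): devices attached to different processor nodes are all published under one PCIe-style bus/device/function namespace, so system software can address any of them through the same identifier scheme rather than through per-node configurations.

/* Illustrative sketch (not the disclosed implementation) of presenting
 * per-node devices behind a single PCIe-style identifier space. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct pcie_id { uint8_t bus, dev, fn; };  /* standardized identifier */

struct device {
    struct pcie_id id;
    int node;                 /* processor node the device is attached to */
    const char *name;
};

/* Devices on different nodes all appear in one table under PCIe-style IDs. */
static const struct device device_table[] = {
    { {0, 1, 0}, 0, "node0-ethernet" },
    { {0, 2, 0}, 0, "node0-sata"     },
    { {1, 1, 0}, 1, "node1-ethernet" },
};

static const struct device *lookup(struct pcie_id id)
{
    for (size_t i = 0; i < sizeof device_table / sizeof device_table[0]; i++) {
        const struct device *d = &device_table[i];
        if (d->id.bus == id.bus && d->id.dev == id.dev && d->id.fn == id.fn)
            return d;
    }
    return NULL;
}

int main(void)
{
    struct pcie_id want = {1, 1, 0};
    const struct device *d = lookup(want);
    if (d)
        printf("%02x:%02x.%x -> %s (node %d)\n",
               (unsigned)want.bus, (unsigned)want.dev, (unsigned)want.fn,
               d->name, d->node);
    return 0;
}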
Abstract:
A system and method are provided for reducing a latency associated with timestamps in a multi-core, multi-threaded processor. A processor capable of simultaneously processing a plurality of threads is provided. The processor includes a plurality of cores, a plurality of network interfaces for network communication, and a timer circuit for reducing a latency associated with timestamps used for synchronization of the network communication utilizing a precision time protocol.
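For context, a brief C sketch of the standard IEEE 1588 (PTP) offset and delay computation that such timestamps feed into; capturing t1 through t4 in a timer circuit at the network interface, rather than in software, keeps software scheduling latency out of the timestamps. The timestamp values below are illustrative only.

/* Standard PTP offset/delay computation from four event timestamps. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Nanosecond timestamps captured in hardware at the interface:      */
    int64_t t1 = 1000000;  /* master sends Sync                          */
    int64_t t2 = 1000400;  /* slave receives Sync                        */
    int64_t t3 = 1000900;  /* slave sends Delay_Req                      */
    int64_t t4 = 1001300;  /* master receives Delay_Req                  */

    /* Standard PTP relations: clock offset and mean path delay.         */
    int64_t offset = ((t2 - t1) - (t4 - t3)) / 2;
    int64_t delay  = ((t2 - t1) + (t4 - t3)) / 2;

    printf("offset = %lld ns, path delay = %lld ns\n",
           (long long)offset, (long long)delay);
    return 0;
}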
Abstract:
An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
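A software model, offered only as an illustration of the messaging-network idea (the names station, msgnet_send, and msgnet_recv are hypothetical): each core owns a small message station, and another core posts directly to it without going through shared memory.

/* Illustrative software model of per-core message stations on a
 * messaging network; not the disclosed hardware design. */
#include <stdio.h>
#include <stdint.h>

#define NUM_CORES     8
#define STATION_DEPTH 4

struct msg { uint8_t src, dst; uint32_t data; };

struct station {                  /* one message station per core */
    struct msg ring[STATION_DEPTH];
    int head, tail;
};

static struct station stations[NUM_CORES];

static int msgnet_send(uint8_t src, uint8_t dst, uint32_t data)
{
    struct station *s = &stations[dst];
    int next = (s->tail + 1) % STATION_DEPTH;
    if (next == s->head) return -1;              /* destination busy */
    s->ring[s->tail] = (struct msg){ src, dst, data };
    s->tail = next;
    return 0;
}

static int msgnet_recv(uint8_t core, struct msg *out)
{
    struct station *s = &stations[core];
    if (s->head == s->tail) return -1;           /* nothing waiting  */
    *out = s->ring[s->head];
    s->head = (s->head + 1) % STATION_DEPTH;
    return 0;
}

int main(void)
{
    msgnet_send(0, 3, 0xCAFE);                   /* core 0 -> core 3 */
    struct msg m;
    if (msgnet_recv(3, &m) == 0)
        printf("core 3 got 0x%X from core %u\n",
               (unsigned)m.data, (unsigned)m.src);
    return 0;
}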
Abstract:
A system and method are provided for sharing data among the nodes of a network including one or more network nodes. The network includes a number of individual network nodes and a home network node communicating with one another. The individual network nodes and the home network node include a plurality of processors and memory caches. The memory caches include private caches corresponding to individual processors, as well as shared caches that are shared among the plurality of processors of an individual node and accessible by the processors of the other network nodes. Each network node is capable of executing a hierarchy of data requests that originate in the private caches of an individual local network node. If no cache hit occurs within the local network node, a conditional request is sent to the home network node to request the data through the shared caches of the other network nodes.
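The lookup hierarchy can be sketched in C as follows; the functions private_cache_hit, local_shared_cache_hit, and remote_shared_cache_hit are stand-ins for the real cache and directory probes, not part of the disclosed system.

/* Illustrative sketch of the request hierarchy: private cache, then the
 * local node's shared cache, then a conditional request to the home node. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t addr_t;

/* Stubs standing in for real cache/directory lookups. */
static bool private_cache_hit(int cpu, addr_t a)       { (void)cpu;  (void)a; return false; }
static bool local_shared_cache_hit(int node, addr_t a) { (void)node; (void)a; return false; }
static bool remote_shared_cache_hit(addr_t a)          { (void)a; return true; }

static const char *lookup(int node, int cpu, addr_t a)
{
    if (private_cache_hit(cpu, a))
        return "private cache";
    if (local_shared_cache_hit(node, a))
        return "local shared cache";
    /* No local hit: send a conditional request to the home node, which
     * probes the shared caches of the other nodes before going to memory. */
    if (remote_shared_cache_hit(a))
        return "remote shared cache via home node";
    return "memory via home node";
}

int main(void)
{
    printf("data served from: %s\n", lookup(/*node=*/0, /*cpu=*/2, 0x1000));
    return 0;
}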
Abstract:
A method is provided for offloading packet protocol encapsulation from software. In operation, pointer information is received. Furthermore, packet protocol encapsulation is offloaded from software by assembling packets in hardware, using the pointer information.
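A hedged illustration in C of the offload model: software supplies only pointer information (a gather list of fragment pointers and lengths), and assembly of the encapsulated packet is performed outside the software path. The descriptor layout and the assemble_packet stand-in are assumptions for illustration, not the disclosed hardware.

/* Software supplies pointer information; a hardware stand-in assembles
 * the encapsulated packet from the referenced fragments. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

struct gather_entry {            /* pointer + length, supplied by software */
    const void *ptr;
    uint32_t    len;
};

/* Stand-in for the hardware assembly engine: concatenates the fragments
 * referenced by the gather list into one encapsulated packet. */
static size_t assemble_packet(const struct gather_entry *list, int n,
                              uint8_t *out, size_t cap)
{
    size_t off = 0;
    for (int i = 0; i < n; i++) {
        if (off + list[i].len > cap) return 0;
        memcpy(out + off, list[i].ptr, list[i].len);
        off += list[i].len;
    }
    return off;
}

int main(void)
{
    uint8_t eth_hdr[14] = {0}, ip_hdr[20] = {0};
    const char payload[] = "application data";

    struct gather_entry list[] = {   /* pointer information from software */
        { eth_hdr, sizeof eth_hdr },
        { ip_hdr,  sizeof ip_hdr  },
        { payload, sizeof payload - 1 },
    };

    uint8_t pkt[128];
    size_t len = assemble_packet(list, 3, pkt, sizeof pkt);
    printf("assembled packet of %zu bytes\n", len);
    return 0;
}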
Abstract:
An advanced processor comprises a plurality of multithreaded processor cores configured to support a plurality of software generated read or write instructions for interfacing with a star-topology serial bus interface. The multiple-core processor has at least one of an internal fast messaging network or an interface switch interconnect configured to link the processor cores together such that each processor core has a data pathway to each of the other processor cores without going through memory. The fast messaging network or interface switch is also configured to be operably coupled to the star-topology serial bus interface. In one aspect of an embodiment of the invention, the fast messaging network connects to a high-bandwidth star-topology serial bus such as a PCI Express (PCIe) interface capable of supporting multiple high-bandwidth PCIe lanes.
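As a small illustrative C sketch (the endpoint IDs and routing table are assumptions): on the fast messaging network, the PCIe interface appears as just another destination alongside the processor cores, so a software-generated read or write bound for a PCIe lane is addressed the same way as a core-to-core message.

/* Illustrative only: the PCIe interface is one more endpoint on the
 * same messaging network that links the cores. */
#include <stdio.h>

enum dest_id { CORE0, CORE1, CORE2, CORE3, PCIE_IF };   /* network endpoints */

static const char *route(enum dest_id d)
{
    switch (d) {
    case PCIE_IF: return "star-topology serial bus (PCIe lanes)";
    default:      return "core message station";
    }
}

int main(void)
{
    /* A software-generated write aimed at a PCIe device travels over the
     * same messaging network as core-to-core traffic. */
    printf("message to CORE2   -> %s\n", route(CORE2));
    printf("message to PCIE_IF -> %s\n", route(PCIE_IF));
    return 0;
}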