Abstract:
Disclosed is an approach for implementing a flexible parser for a networking system. A micro-core parser is implemented to process packets in the networking system. The parser's micro-cores read the packet headers and perform any suitably programmed tasks on those packets and packet headers. One or more caches may be associated with the micro-cores to hold the packet headers. A dependency list is used to maintain proper ordering of packets being processed by the micro-cores.
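As an illustration of the dependency-list idea, the C sketch below keeps packets in arrival order and lets a packet leave only once it has reached the head of the list and been marked done by a micro-core. The structure and function names are assumptions made for this sketch, not terms from the disclosure.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical dependency list: packets are appended in arrival order
     * and may only be released once they reach the head of the list. */
    struct pkt_node {
        uint32_t         pkt_id;
        bool             done;      /* set when a micro-core finishes parsing */
        struct pkt_node *next;
    };

    struct dep_list {
        struct pkt_node *head;
        struct pkt_node *tail;
    };

    /* Enqueue a packet in arrival order. */
    void dep_list_add(struct dep_list *l, struct pkt_node *n)
    {
        n->next = NULL;
        n->done = false;
        if (l->tail)
            l->tail->next = n;
        else
            l->head = n;
        l->tail = n;
    }

    /* Called after a micro-core sets n->done on its packet; returns the
     * next packet allowed to leave, preserving arrival order, or NULL if
     * the oldest packet is still being processed. */
    struct pkt_node *dep_list_release(struct dep_list *l)
    {
        struct pkt_node *n = l->head;
        if (n && n->done) {
            l->head = n->next;
            if (!l->head)
                l->tail = NULL;
            return n;
        }
        return NULL;
    }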
Abstract:
An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
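The messaging network can be pictured, in software terms, as each core owning a message station into which other cores and ports deposit short messages. The C sketch below is only an analogy under that assumption; the message format, ring size, and function names are invented for illustration.

    #include <stdint.h>

    #define MSG_RING_SLOTS 16

    /* Hypothetical message format: a small header plus a few payload words. */
    struct msg {
        uint8_t  src;        /* sending core or port */
        uint8_t  dst;        /* receiving core or port */
        uint16_t size;       /* payload bytes, small by design */
        uint32_t payload[4];
    };

    /* One message station per core: a simple ring buffer of inbound messages. */
    struct msg_station {
        struct msg ring[MSG_RING_SLOTS];
        uint32_t   head;
        uint32_t   tail;
    };

    /* Deliver a message to the destination core's station (a software analogy
     * of the hardware messaging network). Returns 0 on success, -1 if full. */
    int msg_send(struct msg_station *stations, const struct msg *m)
    {
        struct msg_station *st = &stations[m->dst];
        uint32_t next = (st->tail + 1) % MSG_RING_SLOTS;
        if (next == st->head)
            return -1;               /* station full; sender must retry */
        st->ring[st->tail] = *m;
        st->tail = next;
        return 0;
    }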
Abstract:
A processor includes a plurality of processor cores, a networking output, and a packet ordering device. The packet ordering device determines an ordering for packets that are processed by the processor cores. The packets are released to the networking output in the determined order.
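A common way to realize such an ordering device is to tag each packet with a sequence number on arrival and release completed packets only when they become the next expected number. The C sketch below illustrates that approach; it is one possible software analogy, not the claimed hardware, and its names and window size are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    #define REORDER_WINDOW 64

    /* Hypothetical reorder buffer keyed by sequence number modulo the window. */
    struct reorder_buf {
        void    *slot[REORDER_WINDOW];
        bool     ready[REORDER_WINDOW];
        uint32_t next_seq;             /* next sequence number to release */
    };

    /* A core calls this when it finishes processing a packet. */
    void reorder_complete(struct reorder_buf *rb, uint32_t seq, void *pkt)
    {
        uint32_t idx = seq % REORDER_WINDOW;
        rb->slot[idx]  = pkt;
        rb->ready[idx] = true;
    }

    /* The ordering device drains packets to the networking output in order;
     * returns NULL while the next-in-order packet is still in flight. */
    void *reorder_release(struct reorder_buf *rb)
    {
        uint32_t idx = rb->next_seq % REORDER_WINDOW;
        if (!rb->ready[idx])
            return NULL;
        rb->ready[idx] = false;
        rb->next_seq++;
        return rb->slot[idx];
    }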
Abstract:
An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. The data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. In one aspect of an embodiment of the invention, the messaging network connects to a high-bandwidth star-topology serial bus such as a PCI express (PCIe) interface capable of supporting multiple high-bandwidth PCIe lanes. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
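To give a sense of the bandwidth such a star-topology serial bus offers, the short C program below computes the approximate per-direction payload bandwidth of a first-generation PCIe link for a few example lane counts (2.5 GT/s per lane with 8b/10b encoding, roughly 250 MB/s per lane per direction); the lane counts are examples, not figures from the abstract.

    #include <stdio.h>

    int main(void)
    {
        /* 2.5 GT/s per lane, 8b/10b encoding, 8 bits per byte. */
        const double per_lane_mb_s = 2.5e9 * 8.0 / 10.0 / 8.0 / 1e6; /* ~250 MB/s */
        const int lanes[] = { 1, 4, 8 };
        const int n = (int)(sizeof lanes / sizeof lanes[0]);

        for (int i = 0; i < n; i++)
            printf("x%-2d link: ~%.0f MB/s each direction\n",
                   lanes[i], per_lane_mb_s * lanes[i]);
        return 0;
    }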
Abstract:
A system and method are provided for reducing a latency associated with timestamps in a multi-core, multithreaded processor. A processor capable of simultaneously processing a plurality of threads is provided. The processor includes a plurality of cores, a plurality of network interfaces for network communication, and a timer circuit for reducing a latency associated with timestamps used for synchronization of the network communication utilizing a precision time protocol.
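For context, the sketch below shows the standard two-way PTP exchange from which clock offset and path delay are computed; taking the timestamps in a timer circuit close to the network interface keeps software latency out of these terms. The function names are illustrative, and the formulas are the conventional PTP equations rather than text from the abstract.

    #include <stdint.h>

    /* Standard PTP offset/delay calculation from four timestamps:
     *   t1: master sends Sync          (hardware timestamp at master egress)
     *   t2: slave receives Sync        (hardware timestamp at slave ingress)
     *   t3: slave sends Delay_Req      (hardware timestamp at slave egress)
     *   t4: master receives Delay_Req  (hardware timestamp at master ingress)
     * Capturing t2 and t3 near the network interface keeps software-induced
     * latency out of the measurement. */
    int64_t ptp_offset_ns(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
    {
        return ((t2 - t1) - (t4 - t3)) / 2;   /* slave clock minus master clock */
    }

    int64_t ptp_path_delay_ns(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
    {
        return ((t2 - t1) + (t4 - t3)) / 2;   /* mean one-way path delay */
    }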
Abstract:
An apparatus to implement Huffman decoding in an INFLATE process in a compression engine. An embodiment of the apparatus includes a bit buffer, a set of comparators, and a lookup table. The bit buffer stores a portion of a compressed data stream. The set of comparators compares the portion of the compressed data stream with a plurality of predetermined values. The lookup table stores a plurality of LZ77 code segments and outputs one of the LZ77 code segments corresponding to an index at least partially derived from a comparison result from the set of comparators.
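For canonical Huffman codes such as those used by DEFLATE, a comparator-per-code-length scheme can be sketched in software: the left-justified contents of the bit buffer are compared against one precomputed boundary value per code length to find the code length, and the decoded symbol is then read from a lookup table. The C sketch below shows that general technique under assumed table contents and names; it is not a reconstruction of the patented circuit.

    #include <stdint.h>

    #define MAXBITS 15   /* longest Huffman code length used by DEFLATE */

    /* Hypothetical decode table built from the canonical code-length counts:
     *   limit[len]      = (first_code[len] + count[len]) << (MAXBITS - len)
     *   first_code[len] = first canonical code of that length
     *   offset[len]     = index into symbol[] of that first code's symbol  */
    struct huff_table {
        uint32_t limit[MAXBITS + 1];
        uint32_t first_code[MAXBITS + 1];
        uint16_t offset[MAXBITS + 1];
        uint16_t symbol[288];
    };

    /* 'bits' holds the next MAXBITS bits from the bit buffer, left-justified
     * so the next stream bit sits in bit position MAXBITS-1. The loop mirrors
     * one comparator per code length; hardware would evaluate all of these
     * comparisons in parallel and select the shortest matching length. */
    int huff_decode(const struct huff_table *t, uint32_t bits, int *len_out)
    {
        for (int len = 1; len <= MAXBITS; len++) {
            if (bits < t->limit[len]) {
                uint32_t code = bits >> (MAXBITS - len);   /* top 'len' bits */
                *len_out = len;                            /* bits consumed  */
                return t->symbol[t->offset[len] + (code - t->first_code[len])];
            }
        }
        return -1;   /* no code length matched: corrupt stream */
    }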
Abstract:
An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
Abstract:
An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. The data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. In one aspect of an embodiment of the invention, the messaging network connects to a high-bandwidth star-topology serial bus such as a PCI express (PCIe) interface capable of supporting multiple high-bandwidth PCIe lanes. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.