Abstract:
A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressed as multiple different caches and can be reconfigured to allot different amounts of space to each logical cache. Further, a collapsed TLB provides an additional cache storing collapsed translations derived from the MTLB.
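Below is a minimal software model of the two-stage lookup this abstract describes, assuming a simple fully associative layout; the structure names, sizes, and the direct-indexed fill of the collapsed TLB are illustrative assumptions, not the patented design.

    /* Two logical halves of the merged TLB (GVA->GPA and GPA->RPA) plus a
     * collapsed TLB holding derived GVA->RPA translations. All sizes and
     * field names are assumptions for this sketch. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { uint64_t vpn, pfn; bool valid; } tlb_entry_t;

    #define GUEST_WAYS 64            /* assumed partition of the MTLB; the  */
    #define ROOT_WAYS  64            /* abstract says it is reconfigurable  */
    #define CTLB_WAYS  16

    static tlb_entry_t mtlb_guest[GUEST_WAYS];  /* GVA -> GPA */
    static tlb_entry_t mtlb_root[ROOT_WAYS];    /* GPA -> RPA */
    static tlb_entry_t ctlb[CTLB_WAYS];         /* GVA -> RPA (collapsed) */

    static bool lookup(const tlb_entry_t *t, int n, uint64_t vpn, uint64_t *pfn) {
        for (int i = 0; i < n; i++)
            if (t[i].valid && t[i].vpn == vpn) { *pfn = t[i].pfn; return true; }
        return false;
    }

    /* Translate a guest virtual page number to a root physical page number. */
    bool translate_gva(uint64_t gva_vpn, uint64_t *rpa_pfn) {
        if (lookup(ctlb, CTLB_WAYS, gva_vpn, rpa_pfn))
            return true;                        /* collapsed hit: single lookup */
        uint64_t gpa_pfn;
        if (!lookup(mtlb_guest, GUEST_WAYS, gva_vpn, &gpa_pfn)) return false;
        if (!lookup(mtlb_root, ROOT_WAYS, gpa_pfn, rpa_pfn))    return false;
        /* cache the collapsed GVA->RPA translation (replacement policy omitted) */
        ctlb[gva_vpn % CTLB_WAYS] = (tlb_entry_t){ gva_vpn, *rpa_pfn, true };
        return true;
    }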
Abstract:
A circuit operates to manage transmittal of packets in a network packet processor. The circuit includes a packet descriptor manager (PDM), a packet scheduling engine (PSE), and a packet engines and buffering module (PEB). The PDM generates a metapacket and a descriptor from a command signal, where the command signal identifies a packet to be transmitted by the circuit. The PSE determines an order in which to transmit the packet among a number of packets, where the PSE determines the order based on information indicated in the metapacket. Once the packet is scheduled for transmission, the PEB performs processing operations on the packet to produce a processed packet based on instructions indicated in the descriptor. The PEB then causes the processed packet to be transmitted toward the destination.
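The following sketch models the three-stage transmit path the abstract outlines: the PDM derives a metapacket and a descriptor from a send command, the PSE orders packets using only metapacket fields, and the PEB processes the selected packet per its descriptor before handing it off. The type and function names, and the shortest-packet-first policy, are assumptions for illustration.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint32_t length; uint8_t priority; uint16_t queue; } metapacket_t;
    typedef struct { const void *payload; uint32_t length; uint32_t ops; } descriptor_t;
    typedef struct { const void *payload; uint32_t length; } send_cmd_t;

    /* PDM: derive scheduling metadata and per-packet processing instructions. */
    void pdm_build(const send_cmd_t *cmd, metapacket_t *mp, descriptor_t *d) {
        mp->length = cmd->length; mp->priority = 0; mp->queue = 0;
        d->payload = cmd->payload; d->length = cmd->length; d->ops = 0;
    }

    /* PSE: choose the next packet to send using metapacket fields only
     * (shortest-first here purely as a placeholder policy). */
    size_t pse_select(const metapacket_t *mp, size_t n) {
        size_t best = 0;
        for (size_t i = 1; i < n; i++)
            if (mp[i].length < mp[best].length) best = i;
        return best;
    }

    /* PEB: apply the descriptor's operations (checksums, marking, ...) and
     * hand the processed packet to the output port; stubbed for the sketch. */
    void peb_transmit(const descriptor_t *d) { (void)d; }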
Abstract:
According to at least one example embodiment, a multi-chip system includes multiple chip devices configured to communicate with each other and share resources. According to at least one example embodiment, a method of memory allocation in the multi-chip system comprises managing, by each of one or more free-pool allocator (FPA) coprocessors in the multi-chip system, a corresponding list of pools of free-buffer pointers. Based on the one or more lists of free-buffer pointers managed by the one or more FPA coprocessors, a memory allocator (MA) hardware component allocates a free buffer, associated with a chip device of the multiple chip devices, to data associated with a work item. According to at least one aspect, the data associated with the work item represents a data packet.
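A minimal sketch of the allocation step, assuming each FPA keeps its pools as simple stacks of free-buffer pointers; the names and the first-non-empty-pool policy are illustrative, not the hardware interface.

    #include <stddef.h>

    #define MAX_POOLS  8
    #define POOL_DEPTH 1024

    typedef struct {
        void  *free_ptr[POOL_DEPTH];   /* stack of free-buffer pointers */
        size_t count;
    } pool_t;

    typedef struct {
        int    chip_id;                /* chip device this FPA belongs to */
        pool_t pools[MAX_POOLS];
    } fpa_t;

    /* Pop a free-buffer pointer from the first non-empty pool of one FPA;
     * the MA component would attach the returned buffer to the work item's
     * data (e.g. a packet), trying other chip devices' FPAs on failure. */
    void *fpa_alloc(fpa_t *fpa) {
        for (int p = 0; p < MAX_POOLS; p++)
            if (fpa->pools[p].count)
                return fpa->pools[p].free_ptr[--fpa->pools[p].count];
        return NULL;                   /* no free buffer on this chip device */
    }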
Abstract:
Implementations of wide atomic sequences are achieved by augmenting a load operation designed to initiate an atomic sequence and augmenting a conditional storing operation that typically terminates the atomic sequence. The augmented load operation allocates a memory buffer in addition to initiating the atomic sequence. The conditional storing operation is augmented to check the allocated memory buffer for any data stored therein. If one or more data words are detected in the memory buffer, the conditional storing operation stores the detected data word(s) and another word provided as an operand in a concatenation of memory locations. The resulting wide atomic sequences enable the hardware system to support wide memory operations and wide operations in general.
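Below is a software model of the augmented load/store-conditional pair described above; the buffer size, context structure, and helper names are assumptions, and real hardware would track the reservation and the buffered stores itself.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define WIDE_WORDS 4

    typedef struct {
        uint64_t buffered[WIDE_WORDS]; /* buffer allocated by the augmented load */
        int      count;
        bool     link_valid;           /* cleared if the reservation is lost */
    } wide_atomic_ctx_t;

    /* Augmented load: initiate the atomic sequence and allocate the buffer. */
    uint64_t load_linked_wide(wide_atomic_ctx_t *ctx, const uint64_t *addr) {
        ctx->count = 0;
        ctx->link_valid = true;
        return *addr;
    }

    /* Stage a data word into the allocated buffer. */
    void buffer_word(wide_atomic_ctx_t *ctx, uint64_t w) {
        if (ctx->count < WIDE_WORDS) ctx->buffered[ctx->count++] = w;
    }

    /* Augmented conditional store: if the buffer holds data words, commit them
     * together with the operand word to consecutive memory locations. */
    bool store_conditional_wide(wide_atomic_ctx_t *ctx, uint64_t *addr, uint64_t operand) {
        if (!ctx->link_valid) return false;
        memcpy(addr, ctx->buffered, (size_t)ctx->count * sizeof(uint64_t));
        addr[ctx->count] = operand;
        return true;
    }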
Abstract:
A circuit manages and controls access requests to a register, such as a control and status register (CSR), among a number of devices. In particular, the circuit selectively forwards or suspends off-chip access requests and forwards on-chip access requests independently of the status of off-chip requests. The circuit receives access requests at a plurality of buses, one or more of which can be dedicated exclusively to on-chip requests and/or exclusively to off-chip requests. Based on the completion status of previous off-chip access requests, further off-chip access requests are selectively forwarded or suspended, while on-chip access requests are sent independently of off-chip request status.
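A short sketch of the forwarding rule in this abstract, assuming a single outstanding off-chip request; the function names and the one-deep completion tracking are illustrative simplifications.

    #include <stdbool.h>

    typedef struct { bool off_chip; unsigned csr_addr; } csr_req_t;

    static bool off_chip_pending;      /* completion status of the last off-chip request */

    /* Returns true if the request may be forwarded now, false if it is suspended. */
    bool csr_arbitrate(const csr_req_t *req) {
        if (!req->off_chip)
            return true;               /* on-chip: forwarded regardless of off-chip status */
        if (off_chip_pending)
            return false;              /* suspend until the earlier off-chip request completes */
        off_chip_pending = true;       /* forward and start tracking completion */
        return true;
    }

    void csr_off_chip_complete(void) { off_chip_pending = false; }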
Abstract:
A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressed as multiple different caches and can be reconfigured to allot different amounts of space to each logical cache. Further, a collapsed TLB is an additional cache storing collapsed translations derived from the MTLB. Entries in the MTLB, the collapsed TLB, and other caches can be maintained for consistency.
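A small sketch of the consistency rule this abstract adds, reusing the shape of the earlier MTLB model: when a root-side MTLB mapping changes, any collapsed TLB entry derived from it is invalidated so the caches cannot disagree. The structures are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { uint64_t vpn, pfn; bool valid; } tlb_entry_t;

    #define CTLB_WAYS 16
    static tlb_entry_t ctlb[CTLB_WAYS];    /* collapsed GVA -> RPA entries */

    /* Drop every collapsed entry whose RPA was derived from the updated root
     * mapping; a guest-side update would match on the GVA side instead. */
    void ctlb_invalidate_by_rpa(uint64_t rpa_pfn) {
        for (int i = 0; i < CTLB_WAYS; i++)
            if (ctlb[i].valid && ctlb[i].pfn == rpa_pfn)
                ctlb[i].valid = false;
    }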
Abstract:
In one embodiment, a system includes a memory, and a memory controller coupled to the memory via an address bus, a data bus, and an error code bus. The memory stores data at an address and stores an error code at the address. The error code is generated based on a function of the corresponding data and address.
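The sketch below illustrates the idea of an error code that is a function of both the data and its address, so a read that returns data from the wrong address is detected as well as corrupted data; the XOR-fold used here is only a placeholder for whatever code the embodiment actually generates.

    #include <stdbool.h>
    #include <stdint.h>

    /* Error code computed over the data mixed with its address. */
    static uint8_t code_of(uint64_t data, uint64_t addr) {
        uint64_t x = data ^ (addr * 0x9E3779B97F4A7C15ull);  /* fold the address in */
        x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
        return (uint8_t)x;
    }

    /* Write path: the controller stores data and code_of(data, addr) at addr.
     * Read path: recompute and compare against the stored code. */
    bool read_ok(uint64_t data, uint64_t addr, uint8_t stored_code) {
        return code_of(data, addr) == stored_code;
    }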
Abstract:
A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressed as multiple different caches and can be reconfigured to allot different amounts of space to each logical cache. Lookups to the caches of the MTLB can be selectively bypassed based on a control configuration and the attributes of a received address.
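A brief sketch of the selective bypass this abstract adds, assuming a control register and a couple of address attributes decide whether each logical cache of the MTLB is consulted; the field names are assumptions.

    #include <stdbool.h>

    typedef struct { bool bypass_guest; bool bypass_root; } tlb_ctl_t;       /* control configuration */
    typedef struct { bool guest_mode; bool unmapped_region; } addr_attr_t;   /* attributes of the address */

    /* Skip the guest (GVA->GPA) lookup for root-mode or unmapped addresses,
     * or when the configuration forces a bypass. */
    bool skip_guest_lookup(const tlb_ctl_t *ctl, const addr_attr_t *a) {
        return ctl->bypass_guest || !a->guest_mode || a->unmapped_region;
    }

    /* Skip the root (GPA->RPA) lookup for unmapped addresses or when bypassed. */
    bool skip_root_lookup(const tlb_ctl_t *ctl, const addr_attr_t *a) {
        return ctl->bypass_root || a->unmapped_region;
    }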
Abstract:
According to at least one example embodiment, a method of data coherence is employed within a multi-chip system to enforce cache coherence between chip devices of the multi-chip system. According to at least one example embodiment, a message is received by a first chip device of the multiple chip devices from a second chip device of the multiple chip devices. The message triggers invalidation of one or more copies, if any, of a data block. The data block is stored in a memory attached to, or residing in, the first chip device. Upon determining that one or more remote copies of the data block are stored in one or more other chip devices, other than the first chip device, the first chip device sends one or more invalidation requests to the one or more other chip devices for invalidating the one or more remote copies of the data block.
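The sketch below models the invalidation flow described above with a per-block bitmask of remote holders; the directory format and the function names are assumptions, not the patented tracking structure.

    #include <stdint.h>

    #define MAX_CHIPS 4

    typedef struct {
        uint64_t block_addr;
        uint8_t  remote_copies;        /* bit i set => chip device i holds a copy */
    } dir_entry_t;

    /* Inter-chip invalidation request; stubbed here for the sketch. */
    static void send_invalidation(int chip_id, uint64_t block_addr) {
        (void)chip_id; (void)block_addr;
    }

    /* First (home) chip device handles a message that triggers invalidation:
     * send an invalidation request to every other chip device holding a copy. */
    void handle_invalidating_message(dir_entry_t *e, int home_chip) {
        for (int chip = 0; chip < MAX_CHIPS; chip++)
            if (chip != home_chip && (e->remote_copies & (1u << chip)))
                send_invalidation(chip, e->block_addr);
        e->remote_copies = 0;          /* no remote copies remain after invalidation */
    }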
Abstract:
According to at least one example embodiment, a multi-chip system includes multiple chip devices configured to communicate with each other and share hardware resources. According to at least one example embodiment, a method of processing a work item in the multi-chip system comprises designating, by a work source component associated with a chip device of the multiple chip devices, referred to as the source chip device, a work item to a scheduler for scheduling. The scheduler then assigns the work item to another chip device of the multiple chip devices, referred to as the destination chip device, for processing; the scheduler is one of one or more schedulers, each associated with a corresponding chip device of the multiple chip devices.
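A final sketch of the work-assignment flow in this abstract: the work source on the source chip device designates an item to its local scheduler, which assigns it to a destination chip device for processing; the round-robin choice and the structure names are illustrative assumptions only.

    #include <stdint.h>

    #define NUM_CHIPS 4

    typedef struct { uint64_t tag; void *data; int dest_chip; } work_item_t;

    /* One scheduler per chip device; 'next' drives a placeholder policy. */
    typedef struct { int chip_id; unsigned next; } scheduler_t;

    /* Source side: the work source designates the item to its local scheduler,
     * which picks the destination chip device that will process it. */
    void schedule_work(scheduler_t *sched, work_item_t *w) {
        w->dest_chip = (int)(sched->next++ % NUM_CHIPS);
        /* the item would then be forwarded to the destination chip's work queues */
    }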