Abstract:
An electronic assembly includes horizontally-stacked die disposed at an interposer, and may also include vertically-stacked die. The stacked die are interconnected via a multi-hop communication network that is partitioned into a link partition and a router partition. For horizontally-stacked die, the link partition is at least partially implemented in the metal layers of the interposer; the link partition may also be implemented in part by the intra-die interconnects within a single die and by the inter-die interconnects connecting vertically-stacked sets of die. The router partition is implemented at some or all of the die disposed at the interposer and comprises the logic that routes packets among the components of the assembly via the interconnects of the link partition. The router partition may implement fixed routing, or alternatively may be configurable using programmable routing tables or configurable logic blocks.
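The configurable-routing variant lends itself to a table-driven sketch. Below is a minimal, hypothetical model in C of a router-partition node whose next-hop decisions come from a programmable routing table; the type names, table size, and reprogramming hook are illustrative assumptions, not the patented design.

    #include <stdint.h>

    /* Hypothetical table-driven router node: each entry maps a destination
     * die ID to the output link (a group of link-partition wires) that a
     * packet should take on its next hop. Sizes are illustrative. */
    #define NUM_DIES 16

    typedef struct {
        uint8_t out_link[NUM_DIES]; /* next-hop link index per destination die */
    } route_table_t;

    /* Forward one packet: return the link-partition port for its destination. */
    static uint8_t route_next_hop(const route_table_t *t, uint8_t dest_die)
    {
        return t->out_link[dest_die];
    }

    /* Reprogramming an entry models the programmable-routing-table variant;
     * a fixed-routing implementation would hard-wire the table instead. */
    static void set_route(route_table_t *t, uint8_t dest_die, uint8_t link)
    {
        t->out_link[dest_die] = link;
    }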
Abstract:
The described embodiments include a main memory and a cache memory (or "cache") with a cache controller that includes a mode-setting mechanism. In some embodiments, the mode-setting mechanism is configured to dynamically determine an access pattern for the main memory. Based on the determined access pattern, the mode-setting mechanism configures at least one region of the main memory in a write-back mode and configures the other regions of the main memory in a write-through mode. In these embodiments, when performing a write operation in the cache memory, the cache controller determines whether the region of main memory from which the cache block originates is configured in the write-back mode or the write-through mode and then performs the corresponding write operation in the cache memory.
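One way to picture the mode-setting mechanism is as a per-region mode table driven by observed write traffic. The C sketch below is a loose illustration under stated assumptions: the region granularity, the write-count heuristic, and the threshold are invented for the example and are not taken from the embodiments.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_REGIONS     64
    #define WRITE_HEAVY_MIN 1000   /* writes per sampling interval (assumed) */

    typedef enum { WRITE_THROUGH, WRITE_BACK } region_mode_t;

    static region_mode_t mode[NUM_REGIONS];
    static uint32_t      write_count[NUM_REGIONS];

    /* Periodically re-derive each region's mode from its access pattern:
     * write-heavy regions become write-back, the rest write-through. */
    void update_modes(void)
    {
        for (int r = 0; r < NUM_REGIONS; r++) {
            mode[r] = (write_count[r] >= WRITE_HEAVY_MIN) ? WRITE_BACK
                                                          : WRITE_THROUGH;
            write_count[r] = 0;    /* begin a new sampling interval */
        }
    }

    /* On a cache write, look up the mode of the block's home region to
     * decide which write policy the cache controller should apply. */
    bool region_is_write_back(uint64_t phys_addr, uint64_t region_size)
    {
        uint64_t r = (phys_addr / region_size) % NUM_REGIONS;
        write_count[r]++;
        return mode[r] == WRITE_BACK;
    }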
Abstract:
An accelerator device includes a first processing unit to access a structure of a graph dataset, and a second processing unit coupled with the first processing unit to perform computations based on data values in the graph dataset.
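The division of labor between the two units can be sketched in software, assuming (purely for illustration) that the graph is stored in compressed sparse row (CSR) form: one routine stands in for the structure-access unit and another for the value-compute unit.

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed CSR layout: structure (row_ofs, cols) kept separate from the
     * per-edge data values. */
    typedef struct {
        size_t    n;        /* number of vertices */
        size_t   *row_ofs;  /* n+1 offsets into cols[] and vals[] */
        uint32_t *cols;     /* neighbor indices: the graph's structure */
        float    *vals;     /* per-edge data values */
    } csr_graph_t;

    /* Stand-in for the first unit: locate a vertex's neighbors and values. */
    static size_t neighbors(const csr_graph_t *g, size_t v,
                            const uint32_t **cols, const float **vals)
    {
        *cols = &g->cols[g->row_ofs[v]];
        *vals = &g->vals[g->row_ofs[v]];
        return g->row_ofs[v + 1] - g->row_ofs[v];
    }

    /* Stand-in for the second unit: compute over the located data values. */
    static float weighted_sum(const csr_graph_t *g, size_t v, const float *x)
    {
        const uint32_t *c;
        const float *w;
        size_t deg = neighbors(g, v, &c, &w);
        float acc = 0.0f;
        for (size_t i = 0; i < deg; i++)
            acc += w[i] * x[c[i]];
        return acc;
    }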
Abstract:
An electronic device includes processing circuitry that executes a lookup table (LUT) vector instruction. Executing the LUT vector instruction causes the processing circuitry to acquire a set of reference values by using each input value from an input vector as an index into a reference vector. The processing circuitry then provides the set of reference values for one or more subsequent operations. The processing circuitry can also use the set of reference values to control which vector elements, from among a set of vector elements, a vector operation is performed on.
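In scalar C, the instruction's behavior can be modeled as a gather from the reference vector followed by a control-masked operation. The vector length, the index wrap-around, and the nonzero-means-enabled convention below are assumptions made for the sketch.

    #include <stdint.h>

    #define VLEN 8   /* illustrative vector length */

    /* Model of the LUT vector instruction: each input element indexes the
     * reference vector to acquire one element of the result. */
    void lut_gather(const uint32_t input[VLEN], const uint32_t reference[VLEN],
                    uint32_t out[VLEN])
    {
        for (int i = 0; i < VLEN; i++)
            out[i] = reference[input[i] % VLEN];  /* wrap is an assumption */
    }

    /* Using the gathered values as per-lane control: a lane participates in
     * the subsequent vector operation only when its control value is set. */
    void masked_add(const uint32_t ctl[VLEN], const uint32_t a[VLEN],
                    const uint32_t b[VLEN], uint32_t out[VLEN])
    {
        for (int i = 0; i < VLEN; i++)
            if (ctl[i])
                out[i] = a[i] + b[i];
    }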
Abstract:
Systems and methods are disclosed that map quantum circuits to physical qubits of a quantum computer. Techniques are disclosed to generate a graph that characterizes the physical qubits of the quantum computer and to compute the resource requirements of each circuit of the quantum circuits. For each circuit, the graph is searched for a subgraph that matches the resource requirements of the circuit, based on a density matrix. Physical qubits, defined by the matching subgraph, are then allocated to the logical qubits of the circuit.
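The allocation step can be approximated by a greedy search over the hardware connectivity graph. The C sketch below grows a connected set of free physical qubits from each seed and accepts the first set that satisfies a matching predicate; qubits_ok() is a hypothetical stand-in for the density-matrix-based criterion, and the greedy expansion is only one of many possible search strategies.

    #include <stdbool.h>

    #define MAX_Q 32

    /* Hardware graph: adj[i][j] is true when physical qubits i and j are
     * coupled; allocated[] marks qubits already assigned to a circuit. */
    typedef struct {
        int  n;
        bool adj[MAX_Q][MAX_Q];
        bool allocated[MAX_Q];
    } hw_graph_t;

    /* Hypothetical matching predicate standing in for the density-matrix
     * check of the subgraph against the circuit's resource requirements. */
    extern bool qubits_ok(const hw_graph_t *g, const int *sel, int k);

    /* Allocate `need` free, mutually connected physical qubits for one
     * circuit; fills out[] and returns need on success, 0 on failure. */
    int allocate_subgraph(hw_graph_t *g, int need, int out[])
    {
        if (need < 1 || need > g->n || need > MAX_Q)
            return 0;
        for (int seed = 0; seed < g->n; seed++) {
            if (g->allocated[seed])
                continue;
            int sel[MAX_Q], k = 0;
            sel[k++] = seed;
            /* Grow a connected region outward from the seed. */
            for (int f = 0; f < k && k < need; f++)
                for (int j = 0; j < g->n && k < need; j++) {
                    if (g->allocated[j] || !g->adj[sel[f]][j])
                        continue;
                    bool dup = false;
                    for (int m = 0; m < k; m++)
                        dup |= (sel[m] == j);
                    if (!dup)
                        sel[k++] = j;
                }
            if (k == need && qubits_ok(g, sel, k)) {
                for (int m = 0; m < k; m++) {
                    g->allocated[sel[m]] = true;
                    out[m] = sel[m];
                }
                return k;
            }
        }
        return 0;
    }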
Abstract:
A processing system includes a processor, a memory, and an operating system that are used to allocate a page table caching memory object (PTCM) for a user of the processing system. An allocation of the PTCM is requested from a PTCM allocation system. To allocate the PTCM, a plurality of physical memory pages from the memory are allocated to store a PTCM page table that is associated with the PTCM. A lockable region of a cache is designated to hold a copy of the PTCM page table, after which the lockable region of the cache is locked. The PTCM page table is populated with page table entries associated with the PTCM and copied to the locked region of the cache.
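The allocation sequence reads naturally as four steps. The C sketch below walks through them in order; alloc_phys_pages(), cache_lock_region(), and make_pte() are hypothetical hooks standing in for platform-specific services, and the single-page, 512-entry table is an assumed size.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint64_t pte[512]; } ptcm_page_table_t;

    /* Hypothetical platform hooks. */
    extern void    *alloc_phys_pages(size_t npages);
    extern void    *cache_lock_region(size_t bytes); /* designate + lock */
    extern uint64_t make_pte(void *phys_page);

    ptcm_page_table_t *ptcm_allocate(size_t npages)
    {
        /* 1. Allocate physical memory pages to back the PTCM page table. */
        ptcm_page_table_t *pt = alloc_phys_pages(1);

        /* 2. Designate a lockable cache region sized for the table, then
         *    lock it. */
        ptcm_page_table_t *locked = cache_lock_region(sizeof *pt);

        /* 3. Populate the table with PTEs associated with the PTCM. */
        for (size_t i = 0; i < npages && i < 512; i++)
            pt->pte[i] = make_pte(alloc_phys_pages(1));

        /* 4. Copy the populated table into the locked cache region. */
        *locked = *pt;
        return locked;
    }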
Abstract:
Methods and apparatus provide virtual-to-physical address translation using a hardware page table walker with a region-based page table prefetch operation. The page table walker produces virtual memory region tracking information that includes at least data representing a virtual base address of a virtual memory region and a physical address of a first page table entry (PTE) corresponding to a virtual page within the virtual memory region. In response to a translation lookaside buffer (TLB) miss indication, the hardware page table walker uses the virtual memory region tracking information to prefetch the physical address of a second page table entry that provides the final physical address for the missed TLB entry. In some implementations, the prefetch of the physical PTE address is performed in parallel with earlier levels of the page walk operation.
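Under the assumption that the region's last-level page table entries are physically contiguous, the prefetch address falls out of simple arithmetic on the tracked record. The C sketch below illustrates that calculation; the structure layout and the prefetch_phys() hint are assumptions made for the example.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12   /* 4 KiB pages (assumed) */
    #define PTE_SIZE   8    /* bytes per page table entry (assumed) */

    /* Region tracking record: the region's virtual base plus the physical
     * address of the PTE for the page at that base. */
    typedef struct {
        uint64_t vbase;    /* virtual base address of the region */
        uint64_t pte_phys; /* physical address of the base page's PTE */
        uint64_t size;     /* region span in bytes */
        bool     valid;
    } region_track_t;

    extern void prefetch_phys(uint64_t paddr); /* hypothetical prefetch hint */

    /* On a TLB miss inside a tracked region, compute the missed page's PTE
     * address from the tracked one and prefetch it; this can proceed while
     * the earlier levels of the page walk are still being resolved. */
    bool prefetch_final_pte(const region_track_t *r, uint64_t miss_vaddr)
    {
        if (!r->valid || miss_vaddr < r->vbase ||
            miss_vaddr >= r->vbase + r->size)
            return false;
        uint64_t page_delta = (miss_vaddr - r->vbase) >> PAGE_SHIFT;
        prefetch_phys(r->pte_phys + page_delta * PTE_SIZE);
        return true;
    }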
Abstract:
Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric, and a last-level cache implemented with low-latency, high-bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory whose data has a copy stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when it determines that the request address is not within the range of addresses, and services the request by accessing data from the last-level cache when the request address is within the range of addresses.
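The dispatch decision reduces to a range check. The C sketch below shows the comparison and the resulting routing; the structure and the two access hooks are illustrative stand-ins, not the disclosed controller.

    #include <stdbool.h>
    #include <stdint.h>

    /* Tracked range: system-memory addresses whose data has a copy in the
     * last-level cache's second region. */
    typedef struct {
        uint64_t base; /* start of the mirrored system-memory region */
        uint64_t size; /* extent of the mirrored region in bytes */
    } llc_range_t;

    static bool within_range(const llc_range_t *r, uint64_t req_addr)
    {
        return req_addr >= r->base && req_addr - r->base < r->size;
    }

    /* Hypothetical access paths. */
    extern void access_llc(uint64_t addr);
    extern void access_system_memory(uint64_t addr);

    void dispatch(const llc_range_t *r, uint64_t addr)
    {
        if (within_range(r, addr))
            access_llc(addr);           /* serviced from the last-level cache */
        else
            access_system_memory(addr); /* outside the tracked range */
    }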