Abstract:
Embodiments of an invention for dynamic error correction using parity and redundant rows are disclosed. In one embodiment, an apparatus includes a storage structure, parity logic, an error storage space, and an error event generator. The storage structure is to store a plurality of data values. The parity logic is to detect a parity error in a data value stored in the storage structure. The error storage space is to store an indication of a detection of the parity error. The error event generator is to generate an event in response to the indication of the parity error being stored in the error storage space.
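A minimal C sketch of the flow this abstract describes, modeled in software under stated assumptions: a storage array with a per-row parity bit, an error-log field standing in for the error storage space, and a callback standing in for the error event generator. All names (storage_t, load, raise_error_event) and the remapping comment are illustrative, not taken from the patent.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_ROWS 16

/* Software model of the storage structure: data plus a stored parity bit. */
typedef struct {
    uint32_t data[NUM_ROWS];
    uint8_t  parity[NUM_ROWS];   /* even parity recorded at write time */
    uint32_t error_log;          /* error storage space: row of last error */
    bool     error_valid;        /* indication that a parity error was seen */
} storage_t;

static uint8_t even_parity(uint32_t v) {
    return (uint8_t)(__builtin_parity(v) & 1);   /* GCC/Clang builtin */
}

/* Error event generator: fires once the error indication has been stored. */
static void raise_error_event(const storage_t *s) {
    printf("parity error event: row %u\n", (unsigned)s->error_log);
    /* a real device might remap the failing row to a redundant row here */
}

static void store(storage_t *s, unsigned row, uint32_t value) {
    s->data[row]   = value;
    s->parity[row] = even_parity(value);
}

/* Parity logic: check a read value against the stored parity bit. */
static uint32_t load(storage_t *s, unsigned row) {
    uint32_t v = s->data[row];
    if (even_parity(v) != s->parity[row]) {
        s->error_log   = row;      /* store the indication ...            */
        s->error_valid = true;
        raise_error_event(s);      /* ... and generate the event          */
    }
    return v;
}

int main(void) {
    storage_t s = {0};
    store(&s, 3, 0xDEADBEEF);
    s.data[3] ^= 1u;               /* inject a single-bit error            */
    (void)load(&s, 3);             /* detection triggers the event         */
    return 0;
}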
Abstract:
An electronic device is described herein. The electronic device may include a page walker module to receive a page request of a graphics processing unit (GPU). The page walker module may detect a page fault associated with the page request. The electronic device may include a controller, at least partially comprising hardware logic. The controller is to monitor execution of the page request having the page fault. The controller determines whether to suspend execution of a work item at the GPU associated with the page request having the page fault, or to continue execution of the work item based on factors associated with the page request.
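A hedged C sketch of the suspend-or-continue decision the controller makes for a faulting page request. The factors, threshold, and names (page_request_t, decide) are assumptions chosen for illustration; the patent does not specify this particular policy.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative factors a controller might weigh for a faulting page request. */
typedef struct {
    bool     fault_is_resolvable;   /* e.g., the page can be brought in       */
    uint32_t expected_service_us;   /* estimated time to service the fault    */
    uint32_t other_ready_work;      /* work items the GPU could run meanwhile */
} page_request_t;

typedef enum { CONTINUE_WORK_ITEM, SUSPEND_WORK_ITEM } fault_policy_t;

/* Hypothetical policy: suspend the work item only if the fault will take long
 * to service and the GPU has other work to switch to; otherwise let the work
 * item continue (e.g., stall only the faulting thread). */
static fault_policy_t decide(const page_request_t *req, uint32_t threshold_us) {
    if (!req->fault_is_resolvable)
        return SUSPEND_WORK_ITEM;
    if (req->expected_service_us > threshold_us && req->other_ready_work > 0)
        return SUSPEND_WORK_ITEM;
    return CONTINUE_WORK_ITEM;
}

int main(void) {
    page_request_t req = { .fault_is_resolvable = true,
                           .expected_service_us = 5000,
                           .other_ready_work    = 2 };
    printf("%s\n", decide(&req, 1000) == SUSPEND_WORK_ITEM
                       ? "suspend work item" : "continue work item");
    return 0;
}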
Abstract:
Transitions to ring 0 each time an application wants to use an adjunct processor are avoided, saving central processor operating cycles and improving efficiency. Instead, each application is initially registered and set up to use adjunct processor resources in ring 3.
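A small C sketch of the idea, under assumptions: a one-time registration maps a submission portal into the application, after which work is submitted entirely from ring 3 with plain stores instead of system calls. The names (ap_handle_t, ap_register, ap_submit) and the doorbell-style portal are illustrative, not the patent's interface.

#include <stdint.h>
#include <stdio.h>

/* Illustrative handle: after one-time registration (which may enter the
 * kernel), the application holds a user-space-mapped portal and submits
 * work to the adjunct processor entirely from ring 3. */
typedef struct {
    volatile uint64_t *portal;   /* memory-mapped doorbell / work queue (assumed) */
} ap_handle_t;

/* Hypothetical one-time setup: record the portal mapped into the application. */
static int ap_register(ap_handle_t *h, volatile uint64_t *mapped_portal) {
    h->portal = mapped_portal;
    return 0;
}

/* Ring-3 submission: no system call, just a store to the mapped portal. */
static void ap_submit(ap_handle_t *h, uint64_t descriptor) {
    *h->portal = descriptor;
}

int main(void) {
    static uint64_t fake_portal;        /* stands in for device MMIO           */
    ap_handle_t h;
    ap_register(&h, &fake_portal);      /* once per application                */
    for (int i = 0; i < 4; i++)
        ap_submit(&h, 0x1000u + i);     /* repeated use, no ring-0 transition  */
    printf("last descriptor: 0x%llx\n", (unsigned long long)fake_portal);
    return 0;
}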
Abstract:
Described herein are technologies related to ensuring that graphics commands and graphics context are offloaded and scheduled for consumption as the commands and graphics context are sent from coherent to non-coherent memory/fabric in a “processor to processor” handoff or transaction.
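A rough C sketch of one way such a handoff could look, under assumptions: the producing processor copies the commands and their graphics context from coherent memory into a non-coherent staging buffer, then publishes an offset through a doorbell so the consuming processor can schedule them. The descriptor layout, doorbell, and names (gfx_batch_t, offload_batch) are illustrative only.

#include <stdint.h>
#include <string.h>

/* Illustrative descriptor for a handed-off batch: commands plus the graphics
 * context needed to consume them. */
typedef struct {
    const void *commands;
    size_t      commands_len;
    const void *context;
    size_t      context_len;
} gfx_batch_t;

/* Copy the batch out of coherent memory into a non-coherent staging buffer,
 * then publish its offset so the consuming processor can schedule it. */
static size_t offload_batch(uint8_t *noncoherent, size_t cap, size_t off,
                            volatile uint64_t *doorbell, const gfx_batch_t *b) {
    if (off + b->commands_len + b->context_len > cap)
        return off;                         /* no room: caller must drain first */
    memcpy(noncoherent + off, b->commands, b->commands_len);
    memcpy(noncoherent + off + b->commands_len, b->context, b->context_len);
    /* A real handoff would fence/flush here so the copies are visible before
     * the doorbell write is observed by the other processor. */
    *doorbell = off;
    return off + b->commands_len + b->context_len;
}

int main(void) {
    static uint8_t  staging[256];           /* stands in for non-coherent memory */
    static uint64_t doorbell;
    uint8_t cmds[16] = {0}, ctx[8] = {0};
    gfx_batch_t b = { cmds, sizeof cmds, ctx, sizeof ctx };
    size_t next = offload_batch(staging, sizeof staging, 0, &doorbell, &b);
    return next == 24 ? 0 : 1;
}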
Abstract:
In one embodiment, a method includes: receiving, in a root tile of an accelerator device having a plurality of tiles, a message from a processor, the message comprising a register write request to a register of a first remote tile of the plurality of tiles; decoding, in an endpoint controller of the root tile, a system address of the message to identify a destination tile for the message, based at least in part on a base address register decode of the system address; and in response to identifying the first remote tile as the destination tile, updating a first portion of an address offset field of the system address to a predetermined value and directing the message to the first remote tile coupled to the root tile via a sideband interconnect. Other embodiments are described and claimed.
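A compact C sketch of the decode step, under assumptions: after a base-address-register check, a couple of bits of the address offset are read as a tile select, that field is then set to a predetermined value (zero here), and the rewritten offset is forwarded on the sideband. The field position, width, and the choice of zero are assumptions for the sketch, not values from the patent.

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: after BAR decode, bits [21:20] of the address offset
 * select the destination tile (0 = root tile). */
#define TILE_SELECT_SHIFT 20
#define TILE_SELECT_MASK  (0x3ull << TILE_SELECT_SHIFT)

typedef struct {
    uint64_t bar_base;   /* base address register of the accelerator device */
    uint64_t bar_size;
} root_tile_t;

/* Decode a register-write system address in the root tile's endpoint
 * controller; return the destination tile and the rewritten sideband offset. */
static int route_register_write(const root_tile_t *rt, uint64_t sys_addr,
                                unsigned *dest_tile, uint64_t *sb_offset) {
    if (sys_addr < rt->bar_base || sys_addr >= rt->bar_base + rt->bar_size)
        return -1;                                   /* not this device's BAR */
    uint64_t offset = sys_addr - rt->bar_base;       /* BAR decode            */
    *dest_tile = (unsigned)((offset & TILE_SELECT_MASK) >> TILE_SELECT_SHIFT);
    *sb_offset = offset & ~TILE_SELECT_MASK;         /* field set to 0        */
    return 0;                                        /* forward on sideband   */
}

int main(void) {
    root_tile_t rt = { .bar_base = 0x80000000ull, .bar_size = 1ull << 24 };
    unsigned tile; uint64_t off;
    if (route_register_write(&rt, 0x80000000ull + (1ull << 20) + 0x40,
                             &tile, &off) == 0)
        printf("tile %u, sideband offset 0x%llx\n", tile,
               (unsigned long long)off);
    return 0;
}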
Abstract:
Methods and systems may provide for storing a set of post-synchronization operations to a graphics memory and sending a flush marker to a graphics pipeline. Additionally, the set of post-synchronization operations may be processed in response to the flush marker exiting the graphics pipeline. In one example, the set of post-synchronization operations includes one or more atomic operations. Moreover, the set of post-synchronization operations may be obtained from an inline portion of an atomics command.
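A hedged C sketch of the sequence: post-synchronization operations (atomics here) are parked in a structure standing in for graphics memory, a flush marker is pushed down the pipeline, and the parked operations run when the marker retires. The structure, marker value, and names (post_sync_set_t, on_marker_retired) are illustrative assumptions.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_POST_OPS 8

typedef struct {            /* one parked post-synchronization operation       */
    atomic_uint *target;    /* e.g., an atomic add against graphics memory     */
    unsigned     operand;
} post_op_t;

typedef struct {            /* stand-in for the graphics-memory staging area   */
    post_op_t ops[MAX_POST_OPS];
    unsigned  count;
    uint64_t  flush_marker; /* token sent down the graphics pipeline           */
} post_sync_set_t;

/* Store a post-synchronization operation to the "graphics memory" set. */
static void queue_post_op(post_sync_set_t *s, atomic_uint *t, unsigned v) {
    if (s->count < MAX_POST_OPS)
        s->ops[s->count++] = (post_op_t){ t, v };
}

/* Called when the flush marker exits the pipeline: run the parked atomics. */
static void on_marker_retired(post_sync_set_t *s) {
    for (unsigned i = 0; i < s->count; i++)
        atomic_fetch_add(s->ops[i].target, s->ops[i].operand);
    s->count = 0;
}

int main(void) {
    static atomic_uint fence_value;
    post_sync_set_t set = { .flush_marker = 0x1234 };
    queue_post_op(&set, &fence_value, 1);  /* from the atomics command's inline data */
    /* ... pipeline drains, marker 0x1234 pops out the bottom ... */
    on_marker_retired(&set);
    printf("fence = %u\n", atomic_load(&fence_value));
    return 0;
}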
Abstract:
Methods and apparatus are disclosed for efficient TLB (translation look-aside buffer) shoot-downs for heterogeneous devices sharing virtual memory in a multi-core system. Embodiments of an apparatus for efficient TLB shoot-downs may include a TLB to store virtual address translation entries, and a memory management unit, coupled with the TLB, to maintain PASID (process address space identifier) state entries corresponding to the virtual address translation entries. The PASID state entries may include an active reference state and a lazy-invalidation state. The memory management unit may perform atomic modification of PASID state entries responsive to receiving PASID state update requests from devices in the multi-core system and read the lazy-invalidation state of the PASID state entries. The memory management unit may send PASID state update responses to the devices to synchronize TLB entries prior to activation responsive to the respective lazy-invalidation state.
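A hedged C sketch of a PASID state entry with an active-reference count and a lazy-invalidation flag, modified atomically. When no device holds an active reference, an unmap merely marks the entry for lazy invalidation; a device activating the PASID later is told to synchronize (flush) its TLB entries first. The packed layout, field widths, and function names are assumptions, and the sketch ignores races a real MMU would have to handle.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PASID        64
#define ACTIVE_REF_MASK  0x00FFu   /* devices actively using the PASID            */
#define LAZY_INVAL_FLAG  0x0100u   /* invalidation deferred until next activation */

/* One PASID state entry per slot, packed so it can be modified atomically. */
static _Atomic uint32_t pasid_state[MAX_PASID];

/* Device requests activation of a PASID.  If a lazy invalidation is pending,
 * the MMU's response tells the device to flush its TLB entries before use. */
static void activate_pasid(uint32_t pasid, bool *must_flush_tlb) {
    uint32_t old = atomic_fetch_add(&pasid_state[pasid], 1);    /* take reference */
    *must_flush_tlb = (old & LAZY_INVAL_FLAG) != 0;
    if (*must_flush_tlb)                                        /* clear the flag */
        atomic_fetch_and(&pasid_state[pasid], ~LAZY_INVAL_FLAG);
}

static void deactivate_pasid(uint32_t pasid) {
    atomic_fetch_sub(&pasid_state[pasid], 1);                   /* drop reference */
}

/* Host-side unmap: if no device holds an active reference, defer the shoot-down
 * by marking the entry for lazy invalidation instead of interrupting devices. */
static void request_invalidation(uint32_t pasid) {
    uint32_t cur = atomic_load(&pasid_state[pasid]);
    if ((cur & ACTIVE_REF_MASK) == 0)
        atomic_fetch_or(&pasid_state[pasid], LAZY_INVAL_FLAG);
    /* else: a real MMU would send an explicit invalidation to active devices */
}

int main(void) {
    bool flush;
    request_invalidation(7);        /* no active users: mark for lazy invalidation */
    activate_pasid(7, &flush);      /* device must flush stale entries first       */
    printf("flush before use: %s\n", flush ? "yes" : "no");
    deactivate_pasid(7);
    return 0;
}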
Abstract:
Systems, apparatus and methods are described including distributing batches of geometric objects to a multi-core system, performing, at each processor core, vertex processing and geometry setup processing on the corresponding batch of geometric objects, storing the vertex processing results in shared memory accessible to all of the cores, and storing the geometry setup processing results in local storage. Each particular core may then perform rasterization using geometry setup results obtained from local storage within the particular core and from local storage of at least one of the other processor cores.
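A simplified C sketch of the data placement this describes, under assumptions: a serial loop stands in for the cores, vertex results land in a shared array, setup results land in per-core local arrays, and rasterization on one core reads its own local setup results plus those of one peer core. The arithmetic and the "next core" peer choice are placeholders.

#include <stdio.h>

#define NUM_CORES   4
#define BATCH_SIZE  8    /* geometric objects per core's batch */

/* Shared memory visible to all cores: vertex processing results. */
static float shared_vertices[NUM_CORES][BATCH_SIZE];

/* Per-core local storage: geometry setup processing results. */
static float local_setup[NUM_CORES][BATCH_SIZE];

/* Phase 1: each core processes its own batch (serial loop stands in for cores). */
static void vertex_and_setup_phase(void) {
    for (int core = 0; core < NUM_CORES; core++)
        for (int i = 0; i < BATCH_SIZE; i++) {
            shared_vertices[core][i] = core * 10.0f + i;   /* "transformed" vertex */
            local_setup[core][i]     = shared_vertices[core][i] * 0.5f;
        }
}

/* Phase 2: rasterization on one core uses its own setup results and may pull
 * setup results from another core's local storage for objects that overlap
 * its screen region (here simply the next core, as an illustration). */
static float rasterize(int core) {
    float coverage = 0.0f;
    int peer = (core + 1) % NUM_CORES;
    for (int i = 0; i < BATCH_SIZE; i++)
        coverage += local_setup[core][i] + local_setup[peer][i];
    return coverage;
}

int main(void) {
    vertex_and_setup_phase();
    for (int core = 0; core < NUM_CORES; core++)
        printf("core %d coverage %.1f\n", core, rasterize(core));
    return 0;
}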