Abstract:
Multi-level memory architecture technologies are described. One processor includes a requesting unit, a first memory interface to couple to a far memory (FM), a second memory interface to couple to a near memory (NM), and a multi-level memory controller (MLMC) coupled to the requesting unit, the first memory interface and the second memory interface. The MLMC is to write data into a memory page of the NM in response to a request from the requesting unit to retrieve the memory page from the FM. The MLMC receives a hint from the requesting unit and clears a writeback bit for the memory page indicated in the hint. The hint indicates that the data contained in the memory page of the NM is not to be subsequently requested by the requesting unit. The MLMC starts a writeback operation of a memory sector including the memory page and one or more additional memory pages. The writeback operation is to transfer the data contained in the memory page from the NM to the FM when the writeback bit is set and is not to transfer the data contained in the memory page from the NM to the FM when the writeback bit is cleared.
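The writeback-bit flow above can be pictured with a small sketch. All structures and function names below (nm_page, mlmc_hint_no_reuse, and so on) are assumptions made for exposition, not the patented controller design.

```c
/* Minimal sketch of the writeback-bit handling described above.
 * Names, sizes and structures are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define PAGES_PER_SECTOR 4

struct nm_page {
    uint64_t fm_addr;    /* backing far-memory address                 */
    bool     valid;      /* page currently resident in near memory     */
    bool     writeback;  /* set on fill; cleared by a "no reuse" hint  */
};

struct nm_sector {
    struct nm_page page[PAGES_PER_SECTOR];
};

/* Fill a near-memory page from far memory and mark it for writeback. */
static void mlmc_fill(struct nm_page *p, uint64_t fm_addr)
{
    p->fm_addr   = fm_addr;
    p->valid     = true;
    p->writeback = true;
}

/* The requesting unit hints that the page will not be requested again,
 * so the eventual sector writeback can skip it. */
static void mlmc_hint_no_reuse(struct nm_page *p)
{
    p->writeback = false;
}

/* Write back a sector: only pages whose writeback bit is still set are
 * transferred from near memory to far memory. */
static void mlmc_writeback_sector(struct nm_sector *s,
                                  void (*copy_to_fm)(uint64_t fm_addr))
{
    for (int i = 0; i < PAGES_PER_SECTOR; i++) {
        struct nm_page *p = &s->page[i];
        if (p->valid && p->writeback)
            copy_to_fm(p->fm_addr);
        p->valid = false;
    }
}
```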
Abstract:
Systems and methods for write allocation by a two-level memory controller are described. An example processing system comprises: a processing core; a memory controller communicatively coupled to the processing core; and a system memory communicatively coupled to the memory controller, the system memory comprising a first level memory and a second level memory; wherein the memory controller is configured, responsive to determining that a memory block referenced by a memory write request is not present in the first level memory, to allocate a new first level memory block without retrieving the memory block referenced by the request from the second level memory, wherein the memory write request is an overwrite-type memory write request.
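A brief sketch of this allocation policy, under the assumption that an overwrite-type request supplies an entire block; the block size, types and function names are hypothetical.

```c
/* Illustrative sketch of write allocation without fill-on-write.
 * Block size, structures and names are assumptions for exposition. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 64

struct fl_block {
    uint64_t tag;                 /* which memory block is cached here  */
    bool     present;
    uint8_t  data[BLOCK_SIZE];
};

/* Handle an overwrite-type write: the request supplies the whole block,
 * so a first-level miss allocates a new block without retrieving the
 * stale copy from the second-level memory. */
static void handle_overwrite_write(struct fl_block *b, uint64_t tag,
                                   const uint8_t data[BLOCK_SIZE])
{
    if (!b->present || b->tag != tag) {
        b->tag     = tag;         /* allocate; no second-level read     */
        b->present = true;
    }
    memcpy(b->data, data, BLOCK_SIZE);
}
```

A partial (non-overwrite) write would instead fetch the referenced block from the second-level memory before merging the new bytes.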
Abstract:
Dynamic heterogeneous hashing function technology for balancing memory requests between multiple memory channels is described. A processor includes functional units and multiple memory channels, and a memory controller unit (MCU) coupled between them. The MCU includes a dynamic heterogeneous hashing module (DHHM) that includes multiple specific-purpose hashing function blocks that define different interleaving sequences for memory requests to alternately access the multiple memory channels. The DHHM also includes a hashing-function selection block. The hashing-function selection block is operable to identify a requesting functional unit originating a current memory request and to select one of the specific-purpose hashing function blocks for the current memory request in view of the requesting functional unit.
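One way to picture the hashing-function selection is a per-requester choice between interleaving granularities; the requester identifiers and the two hash functions below are invented purely for illustration.

```c
/* Hedged sketch of per-requester channel hashing; the two hash blocks
 * and requester identifiers are illustrative assumptions. */
#include <stdint.h>

#define NUM_CHANNELS 2

enum requester { REQ_CPU, REQ_GPU, REQ_DISPLAY };

/* Two specific-purpose hashing function blocks producing different
 * interleaving sequences across the memory channels. */
static unsigned hash_fine_interleave(uint64_t addr)
{
    return (unsigned)((addr >> 6) % NUM_CHANNELS);   /* 64 B granule  */
}

static unsigned hash_coarse_interleave(uint64_t addr)
{
    return (unsigned)((addr >> 12) % NUM_CHANNELS);  /* 4 KiB granule */
}

/* Hashing-function selection block: identify the functional unit that
 * originated the request and pick a hash function accordingly. */
static unsigned select_channel(enum requester who, uint64_t addr)
{
    switch (who) {
    case REQ_GPU:
    case REQ_DISPLAY:
        return hash_coarse_interleave(addr);
    default:
        return hash_fine_interleave(addr);
    }
}
```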
Abstract:
A processing device comprises an instruction execution unit, a memory agent and pinning logic to pin memory pages in a multi-level memory system upon request by the memory agent. The pinning logic includes an agent interface module to receive, from the memory agent, a pin request indicating a first memory page in the multi-level memory system, the multi-level memory system comprising a near memory and a far memory. The pinning logic further includes a memory interface module to retrieve the first memory page from the far memory and write the first memory page to the near memory. In addition, the pinning logic also includes a descriptor table management module to mark the first memory page as pinned in the near memory by setting a pinning bit corresponding to the first memory page in a cache descriptor table, and to prevent the first memory page from being evicted from the near memory while the first memory page is marked as pinned.
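A compact sketch of the pin/evict interaction follows; the descriptor layout and function names are assumptions, not the actual interface.

```c
/* Illustrative sketch of page pinning in near memory. */
#include <stdbool.h>
#include <stdint.h>

struct cache_desc {
    uint64_t fm_page;  /* far-memory page backing this near-memory slot  */
    bool     valid;
    bool     pinned;   /* pinning bit: slot may not be evicted while set */
};

/* Agent interface: service a pin request for one far-memory page. */
static void pin_page(struct cache_desc *d, uint64_t fm_page,
                     void (*copy_fm_to_nm)(uint64_t fm_page))
{
    copy_fm_to_nm(fm_page);   /* memory interface: FM -> NM fill        */
    d->fm_page = fm_page;
    d->valid   = true;
    d->pinned  = true;        /* descriptor table: mark the page pinned */
}

/* Eviction consults the pinning bit before reclaiming a slot. */
static bool may_evict(const struct cache_desc *d)
{
    return d->valid && !d->pinned;
}
```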
Abstract:
An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
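The master/slave relationship can be summarized as a forwarding chain, sketched below under the assumption of a simple local-translate-or-forward model; the callback and types are illustrative.

```c
/* Minimal sketch of master/slave MMU forwarding; all types are
 * illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

struct mmu;
typedef bool (*translate_fn)(struct mmu *m, uint64_t va, uint64_t *pa);

struct mmu {
    translate_fn translate;  /* local attempt to service the transaction */
    struct mmu  *upstream;   /* slave -> master -> system-level MMU      */
};

/* Service a memory transaction locally if possible; otherwise forward it
 * upstream, so the master MMU reaches the system-level MMU on behalf of
 * the slave MMU. */
static bool mmu_handle(struct mmu *m, uint64_t va, uint64_t *pa)
{
    if (m->translate(m, va, pa))
        return true;                       /* serviced at this level */
    if (m->upstream)
        return mmu_handle(m->upstream, va, pa);
    return false;                          /* top level: fault       */
}
```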
Abstract:
Systems, apparatuses and methods may provide for technology that identifies, at an image post-processor, unresolved surface data and identifies, at the image post-processor, control data associated with the unresolved surface data. Additionally, the technology may resolve, at the image post-processor, the unresolved surface data and the control data into a final image.
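One common reading of the "control data" is per-tile metadata such as a fast-clear flag; the sketch below resolves tiles under that assumption, and every name in it is hypothetical.

```c
/* Speculative sketch: treat the control data as per-tile metadata (for
 * example, a fast-clear flag and clear color) and resolve each tile of
 * unresolved surface data into final pixels. All names are assumptions. */
#include <stdint.h>

#define TILE_PIXELS 16

enum tile_state { TILE_RESOLVED, TILE_FAST_CLEARED };

struct tile_control {
    enum tile_state state;
    uint32_t        clear_color;
};

/* Resolve one tile into the final image at the post-processor. */
static void resolve_tile(const struct tile_control *ctl,
                         const uint32_t unresolved[TILE_PIXELS],
                         uint32_t final_image[TILE_PIXELS])
{
    for (int i = 0; i < TILE_PIXELS; i++)
        final_image[i] = (ctl->state == TILE_FAST_CLEARED)
                             ? ctl->clear_color
                             : unresolved[i];
}
```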
Abstract:
Systems, apparatuses and methods may provide a way to render augmented reality (AR) and/or virtual reality (VR) sensory enhancements using ray tracing. More particularly, systems, apparatuses and methods may provide a way to normalize environment information captured by multiple capture devices and to calculate, for an observer, vector paths of sound sources or sensed events. The systems, apparatuses and methods may detect and/or manage one or more capture devices and assign one or more of the capture devices, based on one or more conditions, to provide the observer with an immersive VR/AR experience.
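As a rough illustration of the vector-path calculation, the sketch below computes a direct (unoccluded) path from a sensed event to the observer; capture-device normalization and ray-traced occlusion are omitted, and all names are assumptions.

```c
/* Rough sketch of a direct vector path from a sound source or sensed
 * event to the observer. Names are illustrative assumptions. */
#include <math.h>

struct vec3 { float x, y, z; };

struct vector_path {
    struct vec3 direction;  /* unit vector from observer toward source */
    float       distance;   /* path length in normalized world space   */
};

static struct vector_path path_to_observer(struct vec3 source,
                                            struct vec3 observer)
{
    struct vec3 d = { source.x - observer.x,
                      source.y - observer.y,
                      source.z - observer.z };
    float len = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);

    struct vector_path p;
    p.distance    = len;
    p.direction.x = len > 0.0f ? d.x / len : 0.0f;
    p.direction.y = len > 0.0f ? d.y / len : 0.0f;
    p.direction.z = len > 0.0f ? d.z / len : 0.0f;
    return p;
}
```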
Abstract:
An apparatus to facilitate compute optimization is disclosed. The apparatus includes a plurality of processing units each comprising a plurality of execution units (EUs), wherein the plurality of EUs comprise a first EU type and a second EU type.
Abstract:
Technologies for one-level memory (1LM) and two-level memory (2LM) configurations in a common platform are described. A processor includes a first memory interface coupled to a first memory device that is located off-package of the processor and a second memory interface coupled to a second memory device that is located off-package of the processor. The processor also includes a multi-level memory controller (MLMC) coupled to the first memory interface and the second memory interface. The MLMC includes a first configuration and a second configuration. The first memory device is a random access memory (RAM) of a one-level memory (1LM) architecture in the first configuration. The first memory device is a first-level RAM of a two-level memory (2LM) architecture in the second configuration and the second memory device is a second-level non-volatile memory (NVM) of the 2LM architecture in the second configuration.
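A short sketch of how one controller might support both configurations; the configuration enum and the routing predicate below are assumptions for exposition.

```c
/* Hedged sketch of a controller supporting both configurations. */
#include <stdbool.h>

enum mlmc_config {
    CONFIG_1LM,  /* first device is the only (RAM) system memory        */
    CONFIG_2LM   /* first device is first-level RAM in front of the NVM */
};

struct mlmc {
    enum mlmc_config config;
};

/* In 1LM every access is serviced by the first memory interface; in 2LM
 * only first-level hits are, and misses go to the second-level NVM. */
static bool serviced_by_first_interface(const struct mlmc *c,
                                        bool hit_in_first_level)
{
    if (c->config == CONFIG_1LM)
        return true;
    return hit_in_first_level;
}
```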