Abstract:
A system and method for improved transfer of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
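For illustration only, the C sketch below models how such a data command might carry a direct-memory-access flag that lets the appliance route the command straight to the addressed memory module instead of through its host controller. All type names, fields, and values (ma_command, ma_dispatch, the module ID and sizes) are hypothetical and are not taken from the abstract.

```c
/* Minimal sketch of a data command as received by the memory appliance (MA). */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int      is_dma;     /* nonzero: direct memory access, bypass the host controller */
    uint32_t module_id;  /* target memory module within the MA                        */
    uint64_t offset;     /* byte offset within that module                            */
    uint32_t length;     /* transfer length in bytes                                  */
} ma_command;

/* Route the command: DMA commands go straight to the addressed module. */
static void ma_dispatch(const ma_command *c)
{
    if (c->is_dma)
        printf("module %u services %u bytes at 0x%llx (host controller bypassed)\n",
               (unsigned)c->module_id, (unsigned)c->length,
               (unsigned long long)c->offset);
    else
        printf("host controller services the command\n");
}

int main(void)
{
    ma_command cmd = { .is_dma = 1, .module_id = 3, .offset = 0x1000, .length = 4096 };
    ma_dispatch(&cmd);
    return 0;
}
```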
Abstract:
A memory system enabling memory mirroring in single write operations. The memory system includes a memory channel that can store duplicate copies of a data element in multiple locations in the memory channel. The multiple locations are disposed in different memory modules and have different propagation times with respect to a data signal transmitted from the memory controller. In a write operation, the relative timings of the chip select, command, and address signals among the multiple locations are adjusted according to the data propagation delay. As a result, a data element can be written into the multiple locations responsive to a data signal transmitted from the memory controller in a single transmission event.
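As a rough illustration of the timing adjustment described above, the sketch below derives per-module chip-select/command/address launch times from each module's data propagation delay, so that both modules can capture the one transmitted data burst. The delay values and the fixed command-to-data interval are made-up numbers, and the calculation is only one simple reading of the abstract, not the patented method.

```c
/* Illustrative timing sketch: offset the CS/CA launch for each mirrored
 * location by that location's data propagation delay, relative to a single
 * data transmission launched at t = 0. */
#include <stdio.h>

int main(void)
{
    int data_launch_ps = 0;      /* data leaves the controller at t = 0          */
    int delay_near_ps  = 250;    /* data propagation delay to the near module    */
    int delay_far_ps   = 730;    /* data propagation delay to the far module     */
    int cmd_to_data_ps = 1250;   /* assumed fixed command-to-data capture window */

    /* CS/CA launch = data arrival time at the module minus the fixed window.
       Negative values simply mean "before the data leaves the controller". */
    int cmd_launch_near = data_launch_ps + delay_near_ps - cmd_to_data_ps;
    int cmd_launch_far  = data_launch_ps + delay_far_ps  - cmd_to_data_ps;

    printf("CS/CA to near module at %d ps, to far module at %d ps\n",
           cmd_launch_near, cmd_launch_far);
    printf("relative CS/CA skew between the two locations: %d ps\n",
           cmd_launch_far - cmd_launch_near);
    return 0;
}
```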
摘要:
A neural-network accelerator die is stacked on and integrated with a high-bandwidth memory so that the stack behaves as a single, three-dimensional (3-D) integrated circuit. The accelerator die includes a high-bandwidth memory (HBM) interface that allows a host processor to store training data and retrieve inference-model and output data from memory. The accelerator die additionally includes accelerator tiles with a direct, inter-die memory interfaces to a stack of underlying memory banks. The 3-D IC thus supports both HBM memory channels optimized for external access and accelerator-specific memory channels optimized for training and inference.
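The sketch below merely enumerates the two access paths the abstract distinguishes: host traffic over the HBM interface versus accelerator-tile traffic over a direct inter-die path to the banks beneath the tile. The enum and routing function are illustrative stand-ins, not part of the described design.

```c
/* Two access paths into the stacked memory, modeled as a simple router. */
#include <stdio.h>

typedef enum { REQ_HOST, REQ_TILE } requester;

static const char *route(requester r)
{
    return (r == REQ_HOST)
        ? "HBM channel (external access through the HBM interface)"
        : "accelerator channel (direct inter-die path to the banks below the tile)";
}

int main(void)
{
    printf("host request -> %s\n", route(REQ_HOST));
    printf("tile request -> %s\n", route(REQ_TILE));
    return 0;
}
```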
Abstract:
A method and system for direct memory transfers between memory modules are described that include sending a request to a first memory module and storing the data sent on a memory bus by the first memory module into a second memory module. The direct transfer of data between the first and second memory modules reduces power consumption and increases performance.
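A toy model of the transfer flow may help: the source module drives a burst onto the shared bus in response to a read request, and the destination module captures the same burst directly off the bus, so the data never makes a round trip through the CPU. All structures and names below are invented for illustration.

```c
/* Toy module-to-module transfer over a shared data bus. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BURST 8

typedef struct { uint8_t cells[64]; } memory_module;

static uint8_t memory_bus[BURST];   /* models the shared memory data bus */

/* Source module drives the requested burst onto the bus. */
static void module_read(const memory_module *src, int addr)
{
    memcpy(memory_bus, &src->cells[addr], BURST);
}

/* Destination module latches the burst currently on the bus. */
static void module_capture(memory_module *dst, int addr)
{
    memcpy(&dst->cells[addr], memory_bus, BURST);
}

int main(void)
{
    memory_module a = { .cells = "direct-transfer-demo" };
    memory_module b = { 0 };

    module_read(&a, 0);      /* request to the first module: data goes on the bus   */
    module_capture(&b, 0);   /* second module stores the data straight off the bus  */

    printf("module B now holds: %s\n", (const char *)b.cells);
    return 0;
}
```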
Abstract:
A memory system includes a CPU that communicates commands and addresses to a main-memory module. The module includes a buffer circuit that relays commands and data between the CPU and the main memory. The memory module additionally includes an embedded processor that shares access to main memory in support of peripheral functionality, such as graphics processing, for improved overall system performance. The buffer circuit facilitates the communication of instructions and data between the CPU and the peripheral processor in a manner that minimizes or eliminates the need to modify the CPU, and consequently reduces practical barriers to the adoption of main-memory modules with integrated processing power.
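One way to picture the buffer's role is as an address-based relay: ordinary accesses pass through to the DRAM, while accesses falling in a reserved aperture reach the embedded processor, so the CPU issues nothing but standard memory commands. The aperture address and all names below are hypothetical, chosen only to illustrate the idea.

```c
/* Sketch of the buffer circuit relaying CPU writes either to main memory
 * or to the module's embedded (peripheral) processor. */
#include <stdint.h>
#include <stdio.h>

#define PERIPHERAL_BASE 0xF0000000u  /* hypothetical aperture for the embedded processor */
#define PERIPHERAL_SIZE 0x01000000u

static void buffer_relay(uint32_t addr, uint32_t data)
{
    if (addr >= PERIPHERAL_BASE && addr < PERIPHERAL_BASE + PERIPHERAL_SIZE)
        printf("to embedded processor: mailbox 0x%08lx <- 0x%08lx\n",
               (unsigned long)addr, (unsigned long)data);
    else
        printf("to main-memory DRAM:   0x%08lx <- 0x%08lx\n",
               (unsigned long)addr, (unsigned long)data);
}

int main(void)
{
    buffer_relay(0x00001000u, 0xCAFEF00Du);  /* ordinary CPU write, relayed to DRAM       */
    buffer_relay(0xF0000010u, 0x00000001u);  /* same kind of CPU write, redirected by the */
                                             /* buffer to the peripheral processor        */
    return 0;
}
```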
Abstract:
An apparatus and method for flexible metadata allocation and caching. In one embodiment of the method, first and second requests are received from first and second applications, respectively, wherein the requests specify a reading of first and second data, respectively, from one or more memory devices. A circuit reads the first and second data in response to receiving the first and second requests. First and second metadata are received from the one or more memory devices in response to the first and second requests. The first and second metadata correspond to the first and second data, respectively. The first and second data are equal in size, while the first and second metadata are unequal in size.
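The size relationship in the last two sentences can be shown with a small sketch: both reads return equally sized data blocks, but the metadata attached to each is allocated per application and differs in size. The sizes and the per-application policy in the code are illustrative assumptions, not details from the abstract.

```c
/* Two reads with equal data sizes but flexibly allocated, unequal metadata. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    size_t data_bytes;      /* payload size: equal for both requests  */
    size_t metadata_bytes;  /* metadata size: may differ per request  */
} read_reply;

static read_reply serve_read(int app_id)
{
    /* Hypothetical policy: app 1 gets small metadata, app 2 gets a larger
       metadata footprint (e.g. extra tags or versioning information). */
    read_reply r = { .data_bytes = 4096,
                     .metadata_bytes = (app_id == 1) ? 8 : 64 };
    return r;
}

int main(void)
{
    read_reply first  = serve_read(1);
    read_reply second = serve_read(2);
    printf("first:  %zu-byte data, %zu-byte metadata\n", first.data_bytes,  first.metadata_bytes);
    printf("second: %zu-byte data, %zu-byte metadata\n", second.data_bytes, second.metadata_bytes);
    return 0;
}
```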