-
1.
Publication No.: US20200272907A1
Publication Date: 2020-08-27
Application No.: US16748284
Application Date: 2020-01-21
Inventor: Hai JIN , Xiaofei LIAO , Long ZHENG , Haikun LIU , Xi GE
Abstract: A deep learning heterogeneous computing method based on layer-wide memory allocation, comprising at least the steps of: traversing a neural network model to acquire a training operational sequence and its number of layers L; calculating the memory space R1 required by the data involved in the operation of the ith layer of the neural network model under a double-buffer configuration, where 1≤i≤L; altering the layer structure of the ith layer and updating the training operational sequence; distributing all the data across the memory space of the CPU and the memory space of the GPU according to a data placement method; and performing iterative computation at each layer successively based on the training operational sequence so as to complete neural network training.
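The abstract's placement step can be illustrated with a minimal sketch: walk the training operational sequence, estimate each layer's double-buffered requirement R1, and assign data to GPU memory until a budget is exhausted, spilling the remainder to CPU memory. The layer names, byte sizes, the 2× double-buffer estimate, and the greedy placement rule are all illustrative assumptions, not the patented method itself.

```python
def plan_placement(layers, gpu_budget):
    """Assign each layer's data to GPU or CPU memory.

    layers: list of (name, data_bytes) in training operational order.
    Returns a dict mapping layer name -> "GPU" or "CPU".
    """
    placement = {}
    gpu_used = 0
    for name, data_bytes in layers:      # traverse the operational sequence
        r1 = 2 * data_bytes              # double-buffer: two copies in flight
        if gpu_used + r1 <= gpu_budget:  # fits in remaining GPU memory
            placement[name] = "GPU"
            gpu_used += r1
        else:                            # spill this layer's data to host
            placement[name] = "CPU"
    return placement

model = [("conv1", 4), ("conv2", 8), ("fc1", 16), ("fc2", 2)]
print(plan_placement(model, gpu_budget=30))
```

Running the example places conv1, conv2, and fc2 on the GPU and spills fc1, whose double-buffered requirement no longer fits, to CPU memory.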
-
2.
Publication No.: US20200333981A1
Publication Date: 2020-10-22
Application No.: US16774039
Application Date: 2020-01-28
Inventor: Haikun LIU , Xiaofei LIAO , Hai JIN , Zhiwei LI
IPC: G06F3/06
Abstract: The present invention relates to a storage system providing scalable storage for in-memory objects using DRAM-NVM hybrid memory devices.
-
3.
Publication No.: US20170277640A1
Publication Date: 2017-09-28
Application No.: US15287022
Application Date: 2016-10-06
Inventor: Hai JIN , Xiaofei LIAO , Haikun LIU , Yujie CHEN , Rentong GUO
IPC: G06F12/1045 , G06F12/0862
CPC classification number: G06F12/1054 , G06F12/0862 , G06F12/1027 , G06F2212/1024 , G06F2212/202 , G06F2212/22 , G06F2212/602 , G06F2212/68
Abstract: The present invention provides a DRAM/NVM hierarchical heterogeneous memory system with software-hardware cooperative management schemes. In the system, NVM is used as large-capacity main memory, and DRAM is used as a cache to the NVM. Reserved bits in the data structures of the TLB and the last-level page table are employed to eliminate the hardware costs of the conventional hardware-managed hierarchical memory architecture, pushing cache management in such a heterogeneous memory system to the software level. Moreover, the invention reduces memory access latency in the case of last-level cache misses. Because many applications exhibit relatively poor data locality in big-data environments, the conventional demand-based data fetching policy for the DRAM cache can aggravate cache pollution. The present invention therefore adopts a utility-based data fetching mechanism in the DRAM/NVM hierarchical memory system, which determines whether data in the NVM should be cached in the DRAM according to current DRAM utilization and the application's memory access patterns, improving the efficiency of the DRAM cache and the bandwidth usage between the NVM main memory and the DRAM cache.
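A utility-based fetch decision of the kind this abstract describes can be sketched as a simple predicate: promote an NVM page into the DRAM cache only when its observed reuse justifies the cost, demanding more reuse as DRAM fills up. The threshold formula, the parameters, and the access-count metric below are illustrative assumptions, not the patented mechanism.

```python
def should_cache(access_count, dram_used, dram_capacity, base_threshold=2):
    """Decide whether a hot NVM page should be promoted to the DRAM cache.

    access_count: recorded accesses to the page in the current epoch.
    dram_used / dram_capacity: current DRAM cache utilization.
    """
    utilization = dram_used / dram_capacity
    # The fuller the DRAM cache, the more reuse a page must show before
    # it is worth evicting resident data to make room for it.
    threshold = base_threshold * (1 + utilization)
    return access_count >= threshold

# With the DRAM cache half full, a page needs at least 3 recorded
# accesses to be promoted; a colder page stays in NVM.
print(should_cache(3, dram_used=512, dram_capacity=1024))  # True
print(should_cache(2, dram_used=512, dram_capacity=1024))  # False
```

Tying the promotion threshold to utilization is one way to capture the abstract's idea that fetch decisions depend on both current DRAM utilization and the application's access pattern.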
-