-
Publication No.: US20180314436A1
Publication Date: 2018-11-01
Application No.: US15499313
Filing Date: 2017-04-27
Applicant: Advanced Micro Devices, Inc.
Inventor: Arkaprava Basu, Jee Ho Ryoo
IPC: G06F3/06, G06F12/1027, G06F12/1009
Abstract: The present disclosure is directed to techniques for migrating data between heterogeneous memories in a computing system. More specifically, the techniques involve migrating data between a memory having better access characteristics (e.g., lower latency but lower capacity) and a memory having worse access characteristics (e.g., higher latency but greater capacity). Migrations occur with a variable migration granularity. A migration granularity specifies the number of memory pages, contiguous in virtual address space, that are migrated in a single migration operation. A history-based technique that adjusts migration granularity based on the history of memory utilization by an application is provided. A profiling-based technique that adjusts migration granularity based on a profiling operation is also provided.
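The history-based scheme above can be sketched in simplified form. All names here (`MigrationPolicy`, `record_access`, `choose_granularity`) and the specific thresholds are illustrative assumptions, not details from the patent: the idea is only that granularity grows when past accesses cluster densely around the faulting page and shrinks when they do not.

```python
# Hypothetical sketch of history-based migration-granularity adjustment.
# Names and thresholds are assumptions for illustration, not the patent's.

class MigrationPolicy:
    GRANULARITIES = (1, 4, 16, 64)   # pages migrated per operation

    def __init__(self):
        self.accessed = set()        # history: virtual page numbers touched
        self.level = 0               # index into GRANULARITIES

    def record_access(self, vpn):
        """Record that the application touched virtual page vpn."""
        self.accessed.add(vpn)

    def choose_granularity(self, faulting_vpn):
        """Return (span, base): migrate pages [base, base + span) together."""
        span = self.GRANULARITIES[self.level]
        base = faulting_vpn - (faulting_vpn % span)
        touched = sum(1 for p in range(base, base + span) if p in self.accessed)
        density = touched / span
        # Dense history around the fault suggests spatial locality, so use a
        # coarser granularity next time; sparse history falls back to finer.
        if density > 0.75 and self.level < len(self.GRANULARITIES) - 1:
            self.level += 1
        elif density < 0.25 and self.level > 0:
            self.level -= 1
        return span, base
```

Under this sketch, an application that densely touches a run of contiguous pages causes successive migrations to move progressively larger virtually-contiguous groups in one operation.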
-
Publication No.: US20180024935A1
Publication Date: 2018-01-25
Application No.: US15216524
Filing Date: 2016-07-21
Applicant: Advanced Micro Devices, Inc.
Inventor: Mitesh R. Meswani, Jee Ho Ryoo
IPC: G06F12/0893, G06F12/0815
CPC classification number: G06F12/0893, G06F2212/1024, G06F2212/60
Abstract: The described embodiments include a computing device that caches data acquired from a main memory in a high-bandwidth memory (HBM), the computing device including channels for accessing data stored in corresponding portions of the HBM. During operation, the computing device sets each of the channels so that data blocks stored in the corresponding portions of the HBM include corresponding numbers of cache lines. Based on records of accesses of cache lines in the HBM that were acquired from pages in the main memory, the computing device sets a data block size for each of the pages, the data block size being a number of cache lines. The computing device stores, in the HBM, data blocks acquired from each of the pages in the main memory using a channel having a data block size corresponding to the data block size for each of the pages.
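The per-page block-size selection above can be sketched as follows. The helper names (`block_size_for_page`, `assign_channels`) and the policy of sizing blocks to a page's observed cache-line footprint are illustrative assumptions; the patent only specifies that block sizes are set per page from records of cache-line accesses and that pages are served through a channel with a matching block size.

```python
# Illustrative sketch, not the patent's implementation: pick a data-block
# size (in cache lines) for each main-memory page from a record of which of
# its cache lines were accessed in the HBM cache, then route each page to a
# channel configured with a matching block size.

LINES_PER_PAGE = 64                       # e.g. 4 KiB page / 64 B cache line
BLOCK_SIZES = (1, 2, 4, 8, 16, 32, 64)    # candidate block sizes, in lines

def block_size_for_page(accessed_lines):
    """accessed_lines: set of cache-line indices (0..63) touched in a page.
    Returns the smallest configured block size covering that footprint, so
    densely used pages get big blocks and sparsely used pages small ones."""
    target = max(1, len(accessed_lines))
    for size in BLOCK_SIZES:
        if size >= target:
            return size
    return BLOCK_SIZES[-1]

def assign_channels(page_access_records, channel_block_sizes):
    """Map each page to the channel whose configured block size is closest
    to the size chosen for that page.
    page_access_records: {page_id: set of accessed cache-line indices}
    channel_block_sizes: {channel_id: block size in cache lines}"""
    assignment = {}
    for page, lines in page_access_records.items():
        want = block_size_for_page(lines)
        assignment[page] = min(channel_block_sizes,
                               key=lambda ch: abs(channel_block_sizes[ch] - want))
    return assignment
```

In this sketch a page whose accesses cover most of its cache lines is cached through a channel that fills large blocks (fewer, bigger transfers), while a sparsely touched page uses a small-block channel to avoid moving unused lines.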
-