-
1.
Publication No.: US20190179764A1
Publication Date: 2019-06-13
Application No.: US16278509
Filing Date: 2019-02-18
Applicant: Intel Corporation
Inventor: Zhe WANG , Alaa R. ALAMELDEEN , Lidia WARNES , Andy M. RUDOFF , Muthukumar P. SWAMINATHAN
IPC: G06F12/0891 , G06F12/0893 , G06F12/02
Abstract: A two-level main memory that includes a persistent memory and a cache is provided. Locations of dirty cache lines in the cache are tracked through the use of a dirty cache line tracker. The dirty cache line tracker is stored in the cache and can be cached in a memory controller for the persistent memory. The dirty cache line tracker can be used to bypass cache lookup, to perform efficient dirty cache line scrubbing, and to decouple battery power and capacity of the cache in the two-level main memory.
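The dirty-line tracking described above can be sketched as a per-line bitmap consulted before any cache probe. This is an illustrative sketch only, with assumed names; it is not the patented hardware design.

```python
# Hypothetical sketch of a dirty-cache-line tracker: a bitmap with one bit
# per cache line, so the scrub path can enumerate dirty lines without
# probing every cache tag.

class DirtyLineTracker:
    def __init__(self, num_lines: int):
        self.num_lines = num_lines
        self.bits = bytearray((num_lines + 7) // 8)  # 1 bit per cache line

    def mark_dirty(self, line: int) -> None:
        self.bits[line // 8] |= 1 << (line % 8)

    def mark_clean(self, line: int) -> None:
        self.bits[line // 8] &= ~(1 << (line % 8))

    def is_dirty(self, line: int) -> bool:
        return bool(self.bits[line // 8] & (1 << (line % 8)))

    def dirty_lines(self):
        """Yield only dirty line indices -- a clean line is skipped
        entirely, which is how cache lookup can be bypassed."""
        for line in range(self.num_lines):
            if self.is_dirty(line):
                yield line

tracker = DirtyLineTracker(1024)
tracker.mark_dirty(3)
tracker.mark_dirty(700)
print(sorted(tracker.dirty_lines()))  # → [3, 700]
```

Because the bitmap is small relative to the cache it shadows, caching it in the memory controller (as the abstract describes) is plausible without adding dedicated storage.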
-
2.
Publication No.: US20230092541A1
Publication Date: 2023-03-23
Application No.: US17483195
Filing Date: 2021-09-23
Applicant: Intel Corporation
Inventor: Francois DUGAST , Durgesh SRIVASTAVA , Sujoy SEN , Lidia WARNES , Thomas E. WILLIS , Bassam N. COURY
IPC: G06F12/0882 , G06F12/0811 , G06F12/123 , G06F13/16 , G06F13/42 , G06F15/78
Abstract: Methods and apparatus to minimize hot/cold page detection overhead on running workloads. A page meta data structure is populated with meta data associated with memory pages in one or more far memory tiers. In conjunction with one or more processes accessing memory pages to perform workloads, the page meta data structure is updated to reflect accesses to the memory pages. The page meta data, which reflects the current state of memory, is used to determine which pages are “hot” pages and which pages are “cold” pages, wherein hot pages are memory pages with relatively higher access frequencies and cold pages are memory pages with relatively lower access frequencies. Variations on the approach include filtering meta data updates on pages in memory regions of interest and applying one or more filters to trigger meta data updates based on one or more conditions. A callback function may also be triggered to be executed synchronously with memory page accesses.
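The metadata-driven hot/cold classification can be sketched as an access-count map with an optional region filter, as below. The threshold, class names, and filter shape are assumptions for illustration, not details from the application.

```python
# Illustrative sketch: a page meta data structure updated on each access,
# with a region filter so only pages of interest pay the bookkeeping cost.

HOT_THRESHOLD = 4  # assumed cutoff: accesses per sampling window

class PageMetaTracker:
    def __init__(self, region_filter=None):
        self.meta = {}                      # page number -> access count
        self.region_filter = region_filter  # (start_page, end_page) or None

    def on_access(self, page: int) -> None:
        if self.region_filter:
            lo, hi = self.region_filter
            if not (lo <= page < hi):
                return                      # filtered: no meta data update
        self.meta[page] = self.meta.get(page, 0) + 1

    def classify(self):
        """Split tracked pages into hot (frequently accessed) and cold."""
        hot = {p for p, n in self.meta.items() if n >= HOT_THRESHOLD}
        cold = set(self.meta) - hot
        return hot, cold

tracker = PageMetaTracker(region_filter=(0, 1000))
for _ in range(5):
    tracker.on_access(42)   # frequently accessed -> hot
tracker.on_access(7)        # touched once -> cold
tracker.on_access(5000)     # outside the region of interest: ignored
hot, cold = tracker.classify()
```

The filter check before the map update is where the overhead reduction comes from: uninteresting pages never touch the metadata structure at all.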
-
3.
Publication No.: US20220334963A1
Publication Date: 2022-10-20
Application No.: US17849387
Filing Date: 2022-06-24
Applicant: Intel Corporation
Inventor: Ankit PATEL , Lidia WARNES , Donald L. FAW , Bassam N. COURY , Douglas CARRIGAN , Hugh WILKINSON , Ananthan AYYASAMY , Michael F. FALLON
IPC: G06F12/06 , G06F12/0877 , G06F12/0868
Abstract: Examples described herein relate to circuitry, when operational, configured to: store records of memory accesses to a memory device by at least one requester based on a configuration, wherein the configuration is to specify a duration of memory access capture. In some examples, the at least one requester comprises one or more workloads running on one or more processors. In some examples, the configuration is to specify collection of one or more of: physical address ranges or read or write access type.
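The configurable capture described above can be modeled in software as below. The configuration fields (duration, address ranges, access types) follow the abstract; all class and field names are hypothetical.

```python
# Hedged software sketch of the capture circuitry's behavior: records of
# memory accesses are stored per a configuration that bounds the capture
# window and optionally restricts address ranges and access types.

import time

class AccessRecorder:
    def __init__(self, duration_s, addr_ranges=None,
                 access_types=("read", "write")):
        self.deadline = time.monotonic() + duration_s  # capture window
        self.addr_ranges = addr_ranges                 # [(lo, hi)] or None
        self.access_types = set(access_types)
        self.records = []

    def record(self, requester, addr, access_type):
        """Store one access record if it passes every configured filter."""
        if time.monotonic() > self.deadline:
            return False                               # window elapsed
        if access_type not in self.access_types:
            return False                               # wrong access type
        if self.addr_ranges and not any(lo <= addr < hi
                                        for lo, hi in self.addr_ranges):
            return False                               # outside any range
        self.records.append((requester, addr, access_type))
        return True

rec = AccessRecorder(duration_s=60,
                     addr_ranges=[(0x1000, 0x2000)],
                     access_types=("write",))
rec.record("workload-A", 0x1800, "write")   # captured
rec.record("workload-A", 0x1800, "read")    # dropped: read not collected
rec.record("workload-B", 0x9000, "write")   # dropped: outside range
```

Here the requester is a workload name, matching the abstract's note that requesters may be workloads running on processors.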
-
4.
Publication No.: US20210200667A1
Publication Date: 2021-07-01
Application No.: US16727595
Filing Date: 2019-12-26
Applicant: Intel Corporation
Inventor: Debra BERNSTEIN , Hugh WILKINSON , Douglas CARRIGAN , Bassam N. COURY , Matthew J. ADILETTA , Durgesh SRIVASTAVA , Lidia WARNES , William WHEELER , Michael F. FALLON
Abstract: Examples described herein relate to memory thin provisioning in a memory pool of one or more dual in-line memory modules or memory devices. At any instance, any central processing unit (CPU) can request and receive a full virtual allocation of memory in an amount that exceeds the physical memory attached to the CPU (near memory). A remote pool of additional memory can be dynamically utilized to fill the gap between allocated memory and near memory. This remote pool is shared between multiple CPUs, with dynamic assignment and address re-mapping provided for the remote pool. To improve performance, the near memory can be operated as a cache of the pool memory. Inclusive or exclusive content storage configurations can be applied. An inclusive cache configuration can include an entry in a near memory cache also being stored in a memory pool whereas an exclusive cache configuration can provide an entry in either a near memory cache or in a memory pool but not both. Near memory cache management includes current data location tracking, access counting and other caching heuristics, eviction of data from near memory cache to pool memory and movement of data from pool memory to memory cache.
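The inclusive versus exclusive storage configurations can be sketched as two insertion/eviction policies over a near-memory cache and a shared pool. This is a simplified stand-in with invented names, not the patented design.

```python
# Simplified sketch: in an inclusive configuration a line cached in near
# memory also exists in the remote pool; in an exclusive configuration it
# lives in exactly one of the two tiers.

class TieredMemory:
    def __init__(self, inclusive: bool):
        self.inclusive = inclusive
        self.near = {}   # near-memory cache: addr -> data
        self.pool = {}   # shared remote pool:  addr -> data

    def write(self, addr, data):
        self.near[addr] = data
        if self.inclusive:
            self.pool[addr] = data       # inclusive: pool keeps a copy
        else:
            self.pool.pop(addr, None)    # exclusive: at most one location

    def evict(self, addr):
        """Evict from near-memory cache, demoting the data to the pool."""
        data = self.near.pop(addr)
        self.pool[addr] = data

    def read(self, addr):
        # Near memory acts as a cache of the pool: check it first.
        return self.near.get(addr, self.pool.get(addr))

inc = TieredMemory(inclusive=True)
inc.write(0x10, "A")   # present in both near memory and the pool
exc = TieredMemory(inclusive=False)
exc.write(0x10, "A")   # present only in near memory until evicted
```

The trade-off the two policies embody: inclusive mode simplifies eviction (the pool copy already exists) at the cost of duplicated capacity, while exclusive mode maximizes usable capacity but must write back on every eviction.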
-
5.
Publication No.: US20230195528A1
Publication Date: 2023-06-22
Application No.: US17556096
Filing Date: 2021-12-20
Applicant: Intel Corporation
Inventor: Farah E. FARGO , Lucienne OLSON , Rita H. WOUHAYBI , Patricia M. MWOVE , Lidia WARNES , Aline C. KENFACK SADATE
Abstract: A workload orchestrator in a disaggregated computing system manages Infrastructure Processing Units (IPUs) in a bidirectional way to provide redundancy and optimal resource configurations. Light-weight machine learning capabilities are used by the IPUs and the workload orchestrator to profile workloads, specify a redundancy level for each workload phase and predict a configuration that can provide optimal performance and security for the disaggregated computing system.
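The per-phase decision the orchestrator makes can be sketched as a mapping from a workload phase's profile to a redundancy level and a resource configuration. Here a simple rule table stands in for the light-weight machine learning the abstract describes; every key and value name is an assumption for illustration.

```python
# Hypothetical sketch of the orchestrator's per-phase decision: a rule
# table (standing in for a learned model) maps a phase profile to a
# redundancy level and an IPU resource configuration.

def choose_config(phase_profile):
    """phase_profile: dict with assumed keys 'criticality' (0..1) and
    'io_intensity' (0..1). Returns (redundancy_level, configuration)."""
    c = phase_profile["criticality"]
    if c > 0.8:
        redundancy = 3          # triple redundancy for critical phases
    elif c > 0.5:
        redundancy = 2
    else:
        redundancy = 1          # no standby IPU for low-stakes phases
    config = ("io-optimized" if phase_profile["io_intensity"] > 0.5
              else "compute-optimized")
    return redundancy, config

level, config = choose_config({"criticality": 0.9, "io_intensity": 0.2})
```

In the described system this decision would be bidirectional: the IPUs profile workloads locally and report upward, while the orchestrator predicts and pushes down configurations.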
-
6.
Publication No.: US20220283951A1
Publication Date: 2022-09-08
Application No.: US17751557
Filing Date: 2022-05-23
Applicant: Intel Corporation
Inventor: Neha PATHAPATI , Lidia WARNES , Durgesh SRIVASTAVA , Francois DUGAST , Navneet SINGH , Rasika SUBRAMANIAN , Sidharth N. KASHYAP
IPC: G06F12/0882 , G06F9/50 , G06N3/04 , G06K9/62
Abstract: A method is described. The method includes determining that a memory page is in one of an active state and an idle state from meta data that is maintained for the memory page. The method includes recording a past history of active/idle state determinations that were previously made for the memory page. The method includes training a neural network on the past history of the memory page. The method includes using the neural network to predict one of a future active state and future idle state for the memory page. The method includes determining a location for the memory page based on the past history of the memory page and the predicted future state of the memory page, the location being one of a faster memory and a slower memory. The method includes moving the memory page to the location from the other one of the faster memory and the slower memory.
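The train-on-history, predict-next-state loop can be sketched with a tiny logistic model standing in for the neural network; the history encoding, training scheme, and tier names are assumptions, not details from the application.

```python
# Minimal stand-in for the described approach: train on a page's past
# active/idle history, predict its next state, and place the page in the
# faster or slower memory accordingly.

import math

def train(histories, labels, epochs=200, lr=0.5):
    """histories: fixed-length 0/1 vectors (1 = active in that interval);
    labels: observed next state (1 = active). Returns (weights, bias)."""
    n = len(histories[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(histories, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(active)
            g = p - y                        # gradient of the log loss
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict_active(w, b, history):
    z = b + sum(wi * xi for wi, xi in zip(w, history))
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

def place(w, b, history):
    # Predicted-active pages belong in the faster memory, idle in slower.
    return "faster" if predict_active(w, b, history) else "slower"

# Toy history data: pages active in most recent intervals stay active.
hist = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]]
lab  = [1,         1,         0,         0,         1,         0]
w, b = train(hist, lab)
```

The abstract's final step, moving the page between tiers, would then consume `place(...)` as the target location for a migration.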