-
Publication No.: US20220283951A1
Publication Date: 2022-09-08
Application No.: US17751557
Application Date: 2022-05-23
Applicant: Intel Corporation
Inventor: Neha PATHAPATI, Lidia WARNES, Durgesh SRIVASTAVA, Francois DUGAST, Navneet SINGH, Rasika SUBRAMANIAN, Sidharth N. KASHYAP
IPC: G06F12/0882, G06F9/50, G06N3/04, G06K9/62
Abstract: A method is described. The method includes determining that a memory page is in one of an active state and an idle state from metadata that is maintained for the memory page. The method includes recording a past history of active/idle state determinations that were previously made for the memory page. The method includes training a neural network on the past history of the memory page. The method includes using the neural network to predict one of a future active state and future idle state for the memory page. The method includes determining a location for the memory page based on the past history of the memory page and the predicted future state of the memory page, the location being one of a faster memory and a slower memory. The method includes moving the memory page to the location from the other one of the faster memory and the slower memory.
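The abstract describes the control flow but not a concrete model, so the following is a minimal sketch of the idea: keep a per-page window of active(1)/idle(0) observations, fit a small predictor to that history, predict the next state, and choose a faster or slower memory tier. Every name, the window size, and the single logistic unit standing in for the neural network are illustrative assumptions, not the patented implementation.

```python
import numpy as np

HISTORY_LEN = 8  # hypothetical number of past observations fed to the predictor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(history, epochs=500, lr=0.1):
    """Fit a single logistic unit to predict the next active/idle state from a
    sliding window of the previous HISTORY_LEN states."""
    xs, ys = [], []
    for i in range(len(history) - HISTORY_LEN):
        xs.append(history[i:i + HISTORY_LEN])
        ys.append(history[i + HISTORY_LEN])
    X = np.asarray(xs, dtype=np.float32)
    y = np.asarray(ys, dtype=np.float32)
    w = np.zeros(HISTORY_LEN, dtype=np.float32)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                      # gradient of the cross-entropy loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def choose_tier(history, w, b, threshold=0.5):
    """Predict the page's next state and pick a memory tier accordingly."""
    window = np.asarray(history[-HISTORY_LEN:], dtype=np.float32)
    p_active = sigmoid(window @ w + b)
    return "faster memory" if p_active >= threshold else "slower memory"

# Example: a page whose accesses alternate bursts of activity with idle gaps.
page_history = [1, 1, 1, 0, 0] * 4
w, b = train(page_history)
print(choose_tier(page_history, w, b))  # chosen tier for the page's next interval
```

In a real tiering system the prediction would trigger a migration only when the chosen tier differs from the page's current placement, to avoid needless copies between the faster and slower memories.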
-
Publication No.: US20190042544A1
Publication Date: 2019-02-07
Application No.: US16122030
Application Date: 2018-09-05
Applicant: Intel Corporation
Inventor: Sidharth N. KASHYAP, Angus LEPPER, Peter BOYLE
Abstract: Disclosed embodiments relate to mixed-precision vector multiply-accumulate (MPVMAC). In one example, a processor includes fetch circuitry to fetch a compress instruction having fields to specify locations of a source vector having N single-precision formatted elements and a compressed vector having N neural half-precision (NHP) formatted elements, decode circuitry to decode the fetched compress instruction, and execution circuitry to respond to the decoded compress instruction by converting each element of the source vector into the NHP format and writing each converted element to a corresponding compressed vector element, wherein the processor is further to fetch, decode, and execute an MPVMAC instruction to multiply corresponding NHP-formatted elements using a 16-bit multiplier, and accumulate each of the products with previous contents of a corresponding destination using a 32-bit accumulator.
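The NHP encoding itself is not specified in the abstract, so the sketch below assumes a bfloat16-style format (the upper 16 bits of the float32 bit pattern) purely to illustrate the compress-then-multiply-accumulate dataflow, with products accumulated into 32-bit destinations. The helper names are hypothetical and the sketch models the numeric flow, not the instruction encoding or the hardware multiplier.

```python
import numpy as np

def compress_to_nhp(src):
    """Emulate the compress step: keep only the upper 16 bits of each float32
    bit pattern (a bfloat16-style truncation, standing in for the NHP format)."""
    bits = np.asarray(src, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

def mpvmac(a_nhp, b_nhp, acc):
    """Multiply corresponding compressed elements and accumulate each product
    with the previous contents of the 32-bit (float32) destination."""
    return acc + a_nhp * b_nhp

a = np.array([1.2345, -3.75, 0.5], dtype=np.float32)
b = np.array([2.0, 0.1, -4.0], dtype=np.float32)
acc = np.zeros(3, dtype=np.float32)  # destination holding previous contents

acc = mpvmac(compress_to_nhp(a), compress_to_nhp(b), acc)
print(acc)  # products of the compressed inputs, accumulated in float32
```

Keeping the accumulator wider than the 16-bit multiplier inputs limits the rounding error that would otherwise build up across long dot products, which is the usual motivation for this kind of mixed-precision scheme.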
-