REDUCING MEMORY POWER USAGE IN FAR MEMORY

    Publication No.: US20230086149A1

    Publication Date: 2023-03-23

    Application No.: US17483491

    Application Date: 2021-09-23

    Abstract: Some embodiments include apparatuses and electrical models associated with the apparatus. One of the apparatuses includes a power control unit to monitor a power state of the apparatus for entry into a standby mode. The apparatus can include a two-level memory (2LM) hardware accelerator to, responsive to a notification from the power control unit of entry into the standby mode, flush dynamic random access memory (DRAM) content from a first memory part to a second memory part. The apparatus can include processing circuitry to determine memory utilization and move memory from a first memory portion to a second memory portion responsive to memory utilization exceeding a threshold. Other methods, systems, and apparatuses are described.
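
    The flush-on-standby flow this abstract describes can be pictured with a small host-side model: a power-control stand-in notifies a 2LM-accelerator stand-in to copy near-memory (DRAM) content out to far memory, while high utilization triggers migration instead. The C sketch below is illustrative only; the function names, memory sizes, and the 0.75 utilization threshold are assumptions, not details taken from the patent.

        /* Illustrative model only: names and thresholds are invented for this
         * sketch and are not taken from the patent text. */
        #include <stdio.h>
        #include <string.h>

        #define NEAR_MEM_BYTES  1024     /* stand-in for DRAM (near memory) */
        #define FAR_MEM_BYTES   4096     /* stand-in for far memory         */
        #define UTIL_THRESHOLD  0.75     /* assumed utilization threshold   */

        static unsigned char near_mem[NEAR_MEM_BYTES];
        static unsigned char far_mem[FAR_MEM_BYTES];

        /* 2LM accelerator stand-in: copy near-memory content to far memory
         * so the DRAM can be powered down while the platform is in standby. */
        static void flush_near_to_far(void) {
            memcpy(far_mem, near_mem, NEAR_MEM_BYTES);
        }

        /* Power-control-unit stand-in: on entry to standby, notify the 2LM
         * accelerator to flush; on high utilization, migrate data instead. */
        static void on_power_event(int entering_standby, double utilization) {
            if (entering_standby) {
                flush_near_to_far();      /* DRAM content preserved */
                printf("standby: near memory flushed to far memory\n");
            } else if (utilization > UTIL_THRESHOLD) {
                printf("utilization %.2f > %.2f: migrate pages between tiers\n",
                       utilization, UTIL_THRESHOLD);
            }
        }

        int main(void) {
            on_power_event(0, 0.9);   /* active, high utilization -> migrate  */
            on_power_event(1, 0.1);   /* standby entry -> flush to far memory */
            return 0;
        }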

    Programmable Power Management Agent

    Publication No.: US20170285703A1

    Publication Date: 2017-10-05

    Application No.: US15623536

    Application Date: 2017-06-15

    Abstract: In an embodiment, a processor includes a first core and a power management agent (PMA), coupled to the first core, to include a static table that stores a list of operations, and a plurality of columns each to specify a corresponding flow that includes a corresponding subset of the operations. Execution of each flow is associated with a corresponding state of the first core. The PMA includes a control register (CR) that includes a plurality of storage elements to receive one of a first value and a second value. The processor includes execution logic, responsive to a command to place the first core into a first state, to execute an operation of a first flow when a corresponding storage element stores the first value and to refrain from execution of an operation of the first flow when the corresponding element stores the second value. Other embodiments are described and claimed.
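
    A rough software analogue of the static table, the per-state flows, and the control-register gating is sketched below. The operation names, the two example flows, and the bit encoding of the control register are assumptions chosen for illustration, not details from the patent.

        /* Illustrative sketch only: operation names, flow layout, and register
         * encoding are assumptions, not details taken from the patent text. */
        #include <stdint.h>
        #include <stdio.h>

        enum { OP_SAVE_CONTEXT, OP_FLUSH_CACHES, OP_GATE_CLOCKS,
               OP_DROP_VOLTAGE, NUM_OPS };
        enum { FLOW_C1, FLOW_C6, NUM_FLOWS };  /* one flow per target core state */

        /* Static table: one column per flow, one bit per operation. */
        static const uint8_t flow_table[NUM_FLOWS] = {
            [FLOW_C1] = (1u << OP_GATE_CLOCKS),
            [FLOW_C6] = (1u << OP_SAVE_CONTEXT) | (1u << OP_FLUSH_CACHES) |
                        (1u << OP_GATE_CLOCKS)  | (1u << OP_DROP_VOLTAGE),
        };

        static const char *op_name[NUM_OPS] = {
            "save context", "flush caches", "gate clocks", "drop voltage",
        };

        /* Control register: bit set (first value) = execute the operation,
         * bit clear (second value) = refrain from executing it. */
        static void enter_state(int flow, uint8_t control_reg) {
            for (int op = 0; op < NUM_OPS; op++) {
                if ((flow_table[flow] & (1u << op)) && (control_reg & (1u << op)))
                    printf("executing: %s\n", op_name[op]);
            }
        }

        int main(void) {
            /* Program the CR to skip the cache flush during C6 entry. */
            uint8_t cr = 0xFFu & ~(1u << OP_FLUSH_CACHES);
            enter_state(FLOW_C6, cr);
            return 0;
        }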

    Programmable Power Management Agent (In Force)

    Publication No.: US20160252952A1

    Publication Date: 2016-09-01

    Application No.: US14634777

    Application Date: 2015-02-28

    Abstract: In an embodiment, a processor includes a first core and a power management agent (PMA), coupled to the first core, to include a static table that stores a list of operations, and a plurality of columns each to specify a corresponding flow that includes a corresponding subset of the operations. Execution of each flow is associated with a corresponding state of the first core. The PMA includes a control register (CR) that includes a plurality of storage elements to receive one of a first value and a second value. The processor includes execution logic, responsive to a command to place the first core into a first state, to execute an operation of a first flow when a corresponding storage element stores the first value and to refrain from execution of an operation of the first flow when the corresponding element stores the second value. Other embodiments are described and claimed.

    Packed rotate processors, methods, systems, and instructions

    Publication No.: US10324718B2

    Publication Date: 2019-06-18

    Application No.: US15864158

    Application Date: 2018-01-08

    Abstract: A method of an aspect includes receiving a masked packed rotate instruction. The instruction indicates a first source packed data including a plurality of packed data elements, a packed data operation mask having a plurality of mask elements, at least one rotation amount, and a destination storage location. A result packed data is stored in the destination storage location in response to the instruction. The result packed data includes result data elements that each correspond to a different one of the mask elements in a corresponding relative position. Result data elements that are not masked out by the corresponding mask element include one of the data elements of the first source packed data in a corresponding position that has been rotated. Result data elements that are masked out by the corresponding mask element include a masked out value. Other methods, apparatus, systems, and instructions are disclosed.
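
    The masked-lane behavior can be emulated in scalar code. In the sketch below the rotate amount, the lane count, and the choice of zero as the masked-out value are assumptions; the abstract leaves the exact masked-out value open (zeroing versus merging).

        /* Scalar emulation of a masked packed rotate, for illustration only. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t rotl32(uint32_t x, unsigned n) {
            n &= 31u;
            return n ? (x << n) | (x >> (32u - n)) : x;
        }

        /* dst[i] = rotate(src[i]) where mask bit i is set, else a masked-out
         * value (zero in this sketch). */
        static void masked_packed_rotate(uint32_t *dst, const uint32_t *src,
                                         uint8_t mask, unsigned rot, int lanes) {
            for (int i = 0; i < lanes; i++)
                dst[i] = (mask & (1u << i)) ? rotl32(src[i], rot) : 0u;
        }

        int main(void) {
            uint32_t src[4] = { 0x80000001u, 0x12345678u, 0xDEADBEEFu, 0x0000FFFFu };
            uint32_t dst[4];
            masked_packed_rotate(dst, src, 0x5 /* lanes 0 and 2 */, 4, 4);
            for (int i = 0; i < 4; i++)
                printf("dst[%d] = 0x%08X\n", i, (unsigned)dst[i]);
            return 0;
        }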

    SIMD VARIABLE SHIFT AND ROTATE USING CONTROL MANIPULATION

    Publication No.: US20170109163A1

    Publication Date: 2017-04-20

    Application No.: US15391695

    Application Date: 2016-12-27

    Abstract: Vector single instruction multiple data (SIMD) shift and rotate instructions are provided specifying: a destination vector register comprising fields to store vector elements, a first vector register, a vector element size, and a second vector register. Vector data fields of a first element size are duplicated. Duplicate vector data fields are stored as corresponding data fields of twice the first element size. Control logic receives an element size for performing a SIMD shift or rotation operation. Through selectors corresponding to a vector element, portions are selected from the duplicated data fields. The selectors corresponding to any particular vector element select all portions similarly from the duplicated data fields for that particular vector element responsive to the first element size, but select at least two portions from the duplicated data fields differently for that particular vector element responsive to a second element size.
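
    The duplication trick at the heart of this abstract can be shown on a single lane: copying a narrow element into both halves of a doubled field lets a plain shift over that field produce a rotate of the original element. The sketch below uses 16-bit lanes and 32-bit duplicated fields purely as an example; the names are ours, not the patent's.

        /* Illustration of the duplication trick: a rotate of a narrow element
         * produced by a plain shift over a doubled copy of that element. */
        #include <stdint.h>
        #include <stdio.h>

        /* Rotate a 16-bit lane right by n using only a 32-bit logical shift:
         * duplicate the lane into both halves of a 32-bit field, shift, and
         * keep the low half. */
        static uint16_t rotr16_via_duplication(uint16_t x, unsigned n) {
            uint32_t doubled = ((uint32_t)x << 16) | x;  /* duplicated data field */
            return (uint16_t)(doubled >> (n & 15u));     /* select the low portion */
        }

        int main(void) {
            uint16_t lanes[4] = { 0x1234, 0xABCD, 0x00FF, 0x8001 };
            for (int i = 0; i < 4; i++)  /* per-lane variable rotate amount */
                printf("rotr16(0x%04X, %d) = 0x%04X\n",
                       (unsigned)lanes[i], i + 1,
                       (unsigned)rotr16_via_duplication(lanes[i], i + 1));
            return 0;
        }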

    Programmable power management agent

    Publication No.: US10761594B2

    Publication Date: 2020-09-01

    Application No.: US15623536

    Application Date: 2017-06-15

    Abstract: In an embodiment, a processor includes a first core and a power management agent (PMA), coupled to the first core, to include a static table that stores a list of operations, and a plurality of columns each to specify a corresponding flow that includes a corresponding subset of the operations. Execution of each flow is associated with a corresponding state of the first core. The PMA includes a control register (CR) that includes a plurality of storage elements to receive one of a first value and a second value. The processor includes execution logic, responsive to a command to place the first core into a first state, to execute an operation of a first flow when a corresponding storage element stores the first value and to refrain from execution of an operation of the first flow when the corresponding element stores the second value. Other embodiments are described and claimed.

    Apparatus and method for shared least recently used (LRU) policy between multiple cache levels

    Publication No.: US10657070B2

    Publication Date: 2020-05-19

    Application No.: US16105434

    Application Date: 2018-08-20

    Abstract: A method and apparatus are described for a shared LRU policy between cache levels. For example, one embodiment comprises: a level N cache to store a first plurality of entries; a level N+1 cache to store a second plurality of entries; the level N+1 cache to initially be provided with responsibility for implementing a least recently used (LRU) eviction policy for a first entry until receipt of a request for the first entry from the level N cache at which time the entry is copied from the level N+1 cache to the level N cache, the level N cache to then be provided with responsibility for implementing the LRU policy until the first entry is evicted from the level N cache, wherein upon being notified that the first entry has been evicted from the level N cache, the level N+1 cache to resume responsibility for implementing the LRU eviction policy.
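
    The ownership hand-off between the two cache levels can be sketched as a small per-entry state machine. The struct fields and the timestamp-based LRU bookkeeping below are assumptions made for illustration; they are not the patent's data structures.

        /* Toy model of the LRU-responsibility hand-off described above. */
        #include <stdbool.h>
        #include <stdio.h>

        struct shared_entry {
            int  tag;
            bool owned_by_level_n;   /* true: level N tracks LRU; false: level N+1 */
            long last_touch;         /* LRU timestamp kept by the owning level     */
        };

        static long clock_ticks = 0;

        /* Level N misses and requests the line: copy it up and hand LRU
         * responsibility to level N. */
        static void level_n_fill(struct shared_entry *e) {
            e->owned_by_level_n = true;
            e->last_touch = ++clock_ticks;
        }

        /* Level N evicts the line and notifies level N+1, which resumes
         * responsibility for the LRU policy. */
        static void level_n_evict(struct shared_entry *e) {
            e->owned_by_level_n = false;
            e->last_touch = ++clock_ticks;
        }

        int main(void) {
            struct shared_entry e = { .tag = 0x42, .owned_by_level_n = false };
            level_n_fill(&e);
            printf("after fill:  owner = level %s\n", e.owned_by_level_n ? "N" : "N+1");
            level_n_evict(&e);
            printf("after evict: owner = level %s\n", e.owned_by_level_n ? "N" : "N+1");
            return 0;
        }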

    Use of error correcting code to carry additional data bits

    Publication No.: US10153784B2

    Publication Date: 2018-12-11

    Application No.: US15412763

    Application Date: 2017-01-23

    Abstract: Integrated circuits, systems, and methods are disclosed in which data bits protected by error correction code (ECC) detection and correction may be increased such that a combination of primary and additional bits may also be ECC protected using existing ECC allocation, without affecting ECC capabilities. For example, the additional bits may be encoded into phantom bits that are in turn used in combination with the primary bits, to generate an ECC. This ECC may then be combined with the primary bits to form a code word. The code word may be transmitted (or stored) so that when the data bits are received (or retrieved), assumed values of the phantom bits may be decoded, using the ECC, back into the additional bits without the phantom bits or the additional bits ever having been transmitted (or stored).
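
    The phantom-bit mechanism can be illustrated with a toy linear checksum standing in for a real ECC: the sender folds the additional byte into the check value as a never-transmitted phantom byte, and the receiver recovers it by recomputing with the phantom byte assumed zero. A real code would need enough distance to preserve detection and correction strength, which this toy deliberately ignores; all names below are ours.

        /* Minimal sketch of the phantom-bit idea using a toy XOR checksum in
         * place of a real ECC. */
        #include <stdint.h>
        #include <stdio.h>

        /* Toy "ECC": XOR of the primary bytes and one phantom byte. */
        static uint8_t toy_ecc(const uint8_t *primary, int n, uint8_t phantom) {
            uint8_t ecc = phantom;
            for (int i = 0; i < n; i++)
                ecc ^= primary[i];
            return ecc;
        }

        int main(void) {
            uint8_t primary[4] = { 0x10, 0x20, 0x30, 0x40 };
            uint8_t additional = 0x5A;   /* extra data to carry */

            /* Sender: fold the additional byte in as a phantom byte; transmit
             * only the primary bytes and the ECC (the phantom byte is never
             * sent). */
            uint8_t sent_ecc = toy_ecc(primary, 4, additional);

            /* Receiver: recompute assuming the phantom byte is zero; the
             * mismatch against the received ECC is exactly the additional
             * byte. */
            uint8_t recovered = toy_ecc(primary, 4, 0x00) ^ sent_ecc;
            printf("recovered additional byte: 0x%02X\n", recovered);
            return 0;
        }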
