-
Publication No.: US10817178B2
Publication Date: 2020-10-27
Application No.: US15032327
Filing Date: 2013-10-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Jichuan Chang , Sheng Li , Parthasarathy Ranganathan
IPC: G06F3/06
Abstract: A method for compressing and compacting memory on a memory device is described. The method includes organizing a number of compressed memory pages referenced in a number of compaction table entries based on a size of the number of compressed memory pages, and compressing the number of compaction table entries, in which each compaction table entry comprises a number of fields.
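The grouping step the abstract describes can be sketched as follows. This is a minimal illustration only; the size classes, field names, and function names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch: group compressed pages into size classes so that
# compaction table entries can reference similarly sized pages together.

def size_class(compressed_len, classes=(64, 128, 256, 512, 1024)):
    """Return the smallest size class (bytes) that fits the compressed page."""
    for c in classes:
        if compressed_len <= c:
            return c
    return None  # too large to benefit; stored uncompressed


def build_compaction_table(pages):
    """pages: dict of page_id -> compressed length in bytes.

    Returns a table keyed by size class; each entry carries a number of
    fields (here just the class and its member pages, for illustration).
    """
    table = {}
    for page_id, length in pages.items():
        c = size_class(length)
        entry = table.setdefault(c, {"size_class": c, "pages": []})
        entry["pages"].append(page_id)
    return table
```

A real implementation would also compress the entries themselves, as the abstract notes; the sketch stops at organizing pages by size.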
-
Publication No.: US10331560B2
Publication Date: 2019-06-25
Application No.: US15113960
Filing Date: 2014-01-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Jichuan Chang , Sheng Li
IPC: G06F12/08 , G06F12/0815 , G06F12/0817 , G06F12/084 , G06F12/1027
Abstract: Methods and systems for providing cache coherence in multi-compute-engine systems are described herein. In one example, a concise cache coherency directory (CDir) for providing cache coherence in the multi-compute-engine systems is described. The CDir comprises a common pattern aggregated entry for one or more cache lines from amongst a plurality of cache lines of a shared memory. The one or more cache lines that correspond to the common pattern aggregated entry are associated with a common sharing pattern from amongst a predetermined number of sharing patterns that repeat most frequently within a region of the shared memory.
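The aggregation idea can be illustrated with a small sketch: lines that share one of the most frequent sharing patterns are covered by a single aggregated entry, while the rest keep conventional per-line entries. All names and the data layout here are hypothetical, not the patent's.

```python
from collections import Counter

def build_cdir(line_sharers, max_common_patterns=4):
    """line_sharers: dict of cache-line address -> frozenset of sharer IDs.

    The most frequent sharing patterns each get one aggregated entry
    covering every line with that pattern; remaining lines keep
    conventional per-line directory entries.
    """
    freq = Counter(line_sharers.values())
    common = {p for p, _ in freq.most_common(max_common_patterns)}
    aggregated = {}   # pattern -> set of lines covered by one entry
    per_line = {}     # line -> pattern (conventional entries)
    for line, pattern in line_sharers.items():
        if pattern in common:
            aggregated.setdefault(pattern, set()).add(line)
        else:
            per_line[line] = pattern
    return aggregated, per_line
```

The space saving comes from storing one aggregated entry per frequent pattern instead of one entry per line.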
-
Publication No.: US09846653B2
Publication Date: 2017-12-19
Application No.: US15120357
Filing Date: 2014-02-21
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Jichuan Chang , Doe Hyun Yoon , Robert Schreiber
IPC: G06F12/08 , G06F12/0891 , G06F12/0862
CPC classification number: G06F12/0891 , G06F12/0862 , G06F2212/2024 , G06F2212/60 , G06F2212/6026
Abstract: Write operations on main memory comprise predicting a last write in a dirty cache line. The predicted last write indicates a predicted pattern of the dirty cache line before the dirty cache line is evicted from a cache memory. Further, the predicted pattern is compared with a pattern of original data bits stored in the main memory for identifying changes to be made in the original data bits. Based on the comparison, an optimization operation to be performed on the original data bits is determined. The optimization operation modifies the original data bits based on the predicted pattern of a last write cache line before the last write cache line is evicted from the cache memory.
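The comparison step can be sketched concretely: XOR the predicted last-write pattern against the bits already in main memory, then choose an operation based on how many bits differ. The operation names and the threshold below are illustrative assumptions, not the patent's scheme.

```python
def plan_write(original, predicted, partial_threshold=8):
    """Compare a predicted last-write pattern with the original bits in
    main memory and pick an optimization operation (names illustrative):
    skip an identical write, update only changed bits, or rewrite fully.
    """
    diff = original ^ predicted          # set bits mark changed positions
    changed = bin(diff).count("1")
    if changed == 0:
        return ("skip", diff)            # nothing to write back
    if changed <= partial_threshold:
        return ("partial-update", diff)  # touch only the changed bits
    return ("full-write", diff)
```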
-
Publication No.: US10585602B2
Publication Date: 2020-03-10
Application No.: US16011187
Filing Date: 2018-06-18
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Doe Hyun Yoon , Naveen Muralimanohar , Jichuan Chang , Parthasarathy Ranganathan
Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
-
Publication No.: US10152247B2
Publication Date: 2018-12-11
Application No.: US15113824
Filing Date: 2014-01-23
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Sheng Li , Jishen Zhao , Jichuan Chang , Parthasarathy Ranganathan , Alistair Veitch , Kevin T. Lim , Mark Lillibridge
Abstract: A technique includes acquiring a plurality of write requests from at least one memory controller and logging information associated with the plurality of write requests in persistent storage. The technique includes applying the plurality of write requests atomically as a group to persistent storage.
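The log-then-apply discipline the abstract describes can be sketched as a write-ahead log with a commit marker; on recovery, a group is replayed only if its marker is present, so the group is applied all-or-nothing. Structure and record formats here are illustrative assumptions.

```python
def commit_group(log, memory, writes):
    """Durably log a group of writes, then apply them to 'memory'.

    The commit marker is appended only after every write record, so a
    recovery pass that replays up to the last marker sees all of the
    group or none of it.
    """
    for addr, value in writes:
        log.append(("write", addr, value))
    log.append(("commit",))          # group is durable from this point
    for addr, value in writes:       # apply atomically as a group
        memory[addr] = value
```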
-
Publication No.: US10108239B2
Publication Date: 2018-10-23
Application No.: US15113995
Filing Date: 2014-01-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Chandrakant Patel , Jichuan Chang , Cullen E. Bash
Abstract: Systems and methods for operating based on recovered waste heat are described. In one example, the method includes receiving recovered waste heat power and operating at least one system component of a recovered waste heat based computing device based on the recovered waste heat power, where the at least one system component is coupled to a non-volatile memory of the recovered waste heat based computing device. The method further includes preserving operational states of the at least one system component in the non-volatile memory based on a current power level associated with the recovered waste heat power.
-
Publication No.: US09710335B2
Publication Date: 2017-07-18
Application No.: US14785421
Filing Date: 2013-07-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Doe Hyun Yoon , Terence P. Kelly , Jichuan Chang , Naveen Muralimanohar , Robert Schreiber , Parthasarathy Ranganathan
CPC classification number: G06F11/1451 , G06F3/0614 , G06F3/0628 , G06F11/1072 , G06F11/1435 , G06F11/1471 , G06F2201/84 , G11C29/52
Abstract: According to an example, versioned memory implementation may include comparing a global memory version to a block memory version. The global memory version may correspond to a plurality of memory blocks, and the block memory version may correspond to one of the plurality of memory blocks. A subblock-bit-vector (SBV) corresponding to a plurality of subblocks of the one of the plurality of memory blocks may be evaluated. Based on the comparison and the evaluation, it may be determined at which level, in a cell of one of the plurality of subblocks of the one of the plurality of memory blocks, checkpoint data is stored.
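The decision procedure can be sketched as follows, under stated assumptions: a block whose version lags the global version has not been written in the current epoch, so its current data is the checkpoint; otherwise the SBV tells which subblocks were overwritten and therefore had their checkpoint copy preserved in the other cell level. Level names and the exact policy are my own illustration, not the patent's.

```python
def checkpoint_level(global_ver, block_ver, sbv, subblock):
    """Pick which multi-level-cell level holds a subblock's checkpoint copy.

    sbv: list of 0/1 flags, one per subblock, set when the subblock has
    been overwritten since the last checkpoint.
    """
    if block_ver < global_ver:
        # Block untouched since the checkpoint: current data IS the
        # checkpoint, still sitting in the working level.
        return "working-level"
    if sbv[subblock]:
        # Subblock overwritten this epoch: its checkpoint copy was
        # preserved in the alternate level before the write.
        return "alternate-level"
    # Block touched this epoch, but not this particular subblock.
    return "working-level"
```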
-
Publication No.: US20160253105A1
Publication Date: 2016-09-01
Application No.: US15032327
Filing Date: 2013-10-31
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jichuan Chang , Sheng Li , Parthasarathy Ranganathan
IPC: G06F3/06
Abstract: A method for compressing and compacting memory on a memory device is described. The method includes organizing a number of compressed memory pages referenced in a number of compaction table entries based on a size of the number of compressed memory pages, and compressing the number of compaction table entries, in which each compaction table entry comprises a number of fields.
-
Publication No.: US10691344B2
Publication Date: 2020-06-23
Application No.: US14785120
Filing Date: 2013-05-30
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Doe Hyun Yoon , Sheng Li , Jichuan Chang , Ke Chen , Parthasarathy Ranganathan , Norman Paul Jouppi
Abstract: A first memory controller receives an access command from a second memory controller, where the access command is timing non-deterministic with respect to a timing specification of a memory. The first memory controller sends at least one access command signal corresponding to the access command to the memory, wherein the at least one access command signal complies with the timing specification. The first memory controller determines a latency of access of the memory. The first memory controller sends feedback information relating to the latency to the second memory controller.
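The first controller's role can be sketched as a thin translation layer: accept a timing-non-deterministic command, perform the access against the memory under its own timing-compliant control, measure how long it took, and feed the latency back. Everything below (command format, callback, field names) is a hypothetical illustration.

```python
import time

def forward_access(cmd, memory, send_feedback):
    """First-controller sketch: service one access command against the
    attached memory, then report the observed latency to the second
    controller via the send_feedback callback.
    """
    start = time.perf_counter()
    result = None
    if cmd["op"] == "read":
        result = memory[cmd["addr"]]
    elif cmd["op"] == "write":
        memory[cmd["addr"]] = cmd["value"]
    latency = time.perf_counter() - start
    send_feedback({"latency": latency})  # second controller adapts timing
    return result
```

The feedback channel is what lets the second controller schedule around a memory whose access timing it cannot predict.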
-
Publication No.: US20180307420A1
Publication Date: 2018-10-25
Application No.: US16011187
Filing Date: 2018-06-18
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Doe Hyun Yoon , Naveen Muralimanohar , Jichuan Chang , Parthasarathy Ranganathan
CPC classification number: G06F3/0619 , G06F3/065 , G06F3/0655 , G06F3/0656 , G06F3/0665 , G06F3/0688 , G06F3/0689 , G06F11/108 , G06F2211/1054 , G06F2211/1066
Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.