-
Publication Number: US11556471B2
Publication Date: 2023-01-17
Application Number: US16399230
Filing Date: 2019-04-30
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps, Michael S. Woodacre, Thomas McGee, Michael Malewicki
IPC: G06F12/0817
Abstract: In exemplary aspects of cache coherency management, a first request is received that includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks is associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined, and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in the same memory device, and the memory category of the first memory block is different from the memory category of the second memory block.
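A minimal sketch of the per-block category idea this abstract describes, assuming three illustrative categories, a 4 KiB block size, and a simple lookup table; the names (mem_category, lookup_category, handle_request) are hypothetical and not taken from the patent.

```c
/* Sketch of per-block memory categories driving the coherency response.
 * Category names, block size, and the table contents are assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef enum {
    CAT_HW_COHERENT,     /* coherency tracked by hardware (e.g. a directory) */
    CAT_SW_MANAGED,      /* software is responsible for coherency            */
    CAT_NON_COHERENT     /* no coherency tracking                            */
} mem_category;

#define BLOCK_SHIFT 12u      /* assume 4 KiB memory blocks */
#define NUM_BLOCKS  16u

/* Two blocks of the same memory device may carry different categories. */
static mem_category block_category[NUM_BLOCKS] = {
    CAT_HW_COHERENT, CAT_SW_MANAGED, CAT_NON_COHERENT, CAT_HW_COHERENT
};

static mem_category lookup_category(uint64_t addr)
{
    return block_category[(addr >> BLOCK_SHIFT) % NUM_BLOCKS];
}

/* Respond to a request according to the requested block's category. */
static void handle_request(uint64_t addr)
{
    switch (lookup_category(addr)) {
    case CAT_HW_COHERENT:
        printf("0x%llx: run the hardware coherency protocol\n",
               (unsigned long long)addr);
        break;
    case CAT_SW_MANAGED:
        printf("0x%llx: reply directly; software keeps caches coherent\n",
               (unsigned long long)addr);
        break;
    case CAT_NON_COHERENT:
        printf("0x%llx: serve the data with no coherency tracking\n",
               (unsigned long long)addr);
        break;
    }
}

int main(void)
{
    handle_request(0x0000);   /* block 0: hardware coherent */
    handle_request(0x1000);   /* block 1: software managed  */
    return 0;
}
```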
-
Publication Number: US11314637B2
Publication Date: 2022-04-26
Application Number: US16888123
Filing Date: 2020-05-29
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps, Thomas McGee, Michael Malewicki
IPC: G06F12/02, G06F12/084, G06F9/38, G06F9/54, G06F12/123
Abstract: To reduce latency and bandwidth consumption, systems and methods are provided for grouping multiple cache line request messages in a related and speculative manner. That is, multiple cache lines are likely to have the same state and ownership characteristics, so requests for multiple cache lines can be grouped. Information received in response can be directed to the requesting processor socket, and information received speculatively (not actually requested, but likely to be requested) can be maintained in a queue or other memory until a request for that information arrives, or until it is discarded to free up tracking space for new requests.
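A sketch of the grouped, speculative request idea under stated assumptions: a 64-byte line, four lines per grouped request, and a small holding queue for speculative lines. The names (grouped_request, spec_queue, read_line) and the fetch stub are illustrative, not the patent's interfaces.

```c
/* Sketch of grouped/speculative cache line requests with a holding queue. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE    64u          /* assumed cache line size            */
#define GROUP_LINES  4u           /* lines fetched per grouped request  */
#define QUEUE_SLOTS  8u           /* speculative holding capacity       */

typedef struct {
    bool     valid;
    uint64_t addr;                /* line-aligned address                */
    uint8_t  data[LINE_SIZE];
} spec_entry;

static spec_entry spec_queue[QUEUE_SLOTS];
static unsigned   next_slot;      /* overwrite oldest entry when full    */

/* Stand-in for the remote fetch of one cache line. */
static void fetch_line(uint64_t addr, uint8_t *out)
{
    memset(out, (int)(addr & 0xff), LINE_SIZE);
}

/* Fetch a whole group; return the requested line, park the others. */
static void grouped_request(uint64_t req_addr, uint8_t *out)
{
    uint64_t base = req_addr & ~((uint64_t)GROUP_LINES * LINE_SIZE - 1);
    for (unsigned i = 0; i < GROUP_LINES; i++) {
        uint64_t addr = base + (uint64_t)i * LINE_SIZE;
        uint8_t  buf[LINE_SIZE];
        fetch_line(addr, buf);
        if (addr == req_addr) {
            memcpy(out, buf, LINE_SIZE);      /* the line actually requested */
        } else {
            spec_entry *e = &spec_queue[next_slot++ % QUEUE_SLOTS];
            e->valid = true;                  /* speculative line parked     */
            e->addr  = addr;
            memcpy(e->data, buf, LINE_SIZE);
        }
    }
}

/* Later request: serve from the speculative queue if present. */
static bool read_line(uint64_t req_addr, uint8_t *out)
{
    for (unsigned i = 0; i < QUEUE_SLOTS; i++) {
        if (spec_queue[i].valid && spec_queue[i].addr == req_addr) {
            memcpy(out, spec_queue[i].data, LINE_SIZE);
            spec_queue[i].valid = false;      /* free the tracking slot   */
            return true;                      /* hit: no new message sent */
        }
    }
    grouped_request(req_addr, out);           /* miss: issue a new group  */
    return false;
}

int main(void)
{
    uint8_t line[LINE_SIZE];
    read_line(0x1000, line);                  /* miss: fetches 0x1000..0x10c0 */
    printf("second read hit speculative queue: %d\n",
           read_line(0x1040, line));          /* served without a new request */
    return 0;
}
```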
-
Publication Number: US20190155779A1
Publication Date: 2019-05-23
Application Number: US15816385
Filing Date: 2017-11-17
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Frank R. Dropps, Michael Anderson, Michael Malewicki
Abstract: Multi-node, multi-socket computer systems and methods provide packet tunneling between processor nodes without going through a node controller link. On receiving a packet, the destination node identifier (NID) is examined, and if it is not the same as the source socket, the packet's request address is examined. If it is determined that the packet is not for a remotely connected socket, the packet's destination NID and source socket NID are replaced and the data protection information is recalculated. The modified packet is then sent to the destination socket over another processor interconnect path.
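A minimal sketch of the NID-rewrite tunneling step. The packet layout, the address-map check, and the checksum used as "data protection information" are all illustrative assumptions rather than the patent's actual formats.

```c
/* Sketch of tunneling a packet by rewriting its NIDs and recomputing
 * protection data, instead of routing it over the node controller link. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  dest_nid;        /* destination node identifier           */
    uint8_t  src_nid;         /* source socket node identifier         */
    uint64_t req_addr;        /* request address carried by the packet */
    uint16_t protection;      /* stand-in for data protection info     */
} packet;

/* Toy checksum standing in for the recalculated protection data. */
static uint16_t protect(const packet *p)
{
    return (uint16_t)(p->dest_nid ^ p->src_nid ^ (p->req_addr & 0xffff));
}

/* Hypothetical address map: does this address belong to a remote socket? */
static bool addr_is_remote(uint64_t addr)
{
    return addr >= 0x100000000ull;
}

/* Returns true when the packet was rewritten and should be forwarded
 * over the other processor interconnect path. */
static bool tunnel(packet *p, uint8_t this_socket_nid, uint8_t local_dest_nid)
{
    if (p->dest_nid == this_socket_nid)
        return false;                    /* addressed to us: handle locally */
    if (addr_is_remote(p->req_addr))
        return false;                    /* remote socket: use normal path  */

    p->dest_nid   = local_dest_nid;      /* replace destination NID         */
    p->src_nid    = this_socket_nid;     /* replace source socket NID       */
    p->protection = protect(p);          /* recalculate protection info     */
    return true;
}

int main(void)
{
    packet p = { .dest_nid = 3, .src_nid = 7, .req_addr = 0x2000, .protection = 0 };
    if (tunnel(&p, /*this_socket_nid=*/1, /*local_dest_nid=*/2))
        printf("tunneled: dest=%u src=%u prot=0x%04x\n",
               p.dest_nid, p.src_nid, p.protection);
    return 0;
}
```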
-
Publication Number: US20190108147A1
Publication Date: 2019-04-11
Application Number: US15729891
Filing Date: 2017-10-11
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps
Abstract: A system includes a volatile memory to store data and a memory controller to manage the data in the volatile memory. The memory controller includes an inner code generator to generate a respective inner correction code for each of a plurality of blocks of the data in the volatile memory. An outer code generator generates an outer correction code based on the plurality of blocks of the data. The memory controller updates the outer correction code as part of a refresh to the plurality of blocks of the data in the volatile memory.
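A sketch of the inner/outer code structure the abstract describes, using simple parity and byte-wise XOR as stand-ins for the real correction codes; block count, block size, and function names are assumptions.

```c
/* Sketch of inner codes per block and an outer code across blocks,
 * with the outer code updated as part of a refresh pass. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 32u
#define NUM_BLOCKS 4u

static uint8_t blocks[NUM_BLOCKS][BLOCK_SIZE];   /* data in volatile memory    */
static uint8_t inner_code[NUM_BLOCKS];           /* one inner code per block   */
static uint8_t outer_code[BLOCK_SIZE];           /* outer code over all blocks */

/* Inner code generator: a simple parity byte over one block. */
static uint8_t gen_inner(const uint8_t *block)
{
    uint8_t p = 0;
    for (unsigned i = 0; i < BLOCK_SIZE; i++)
        p ^= block[i];
    return p;
}

/* Outer code generator: byte-wise XOR across all blocks. */
static void gen_outer(void)
{
    memset(outer_code, 0, BLOCK_SIZE);
    for (unsigned b = 0; b < NUM_BLOCKS; b++)
        for (unsigned i = 0; i < BLOCK_SIZE; i++)
            outer_code[i] ^= blocks[b][i];
}

/* Refresh pass: regenerate each block's inner code and update the
 * outer code as part of the same refresh, as the abstract describes. */
static void refresh(void)
{
    for (unsigned b = 0; b < NUM_BLOCKS; b++)
        inner_code[b] = gen_inner(blocks[b]);
    gen_outer();
}

int main(void)
{
    memset(blocks[1], 0xA5, BLOCK_SIZE);   /* some data to protect */
    refresh();
    printf("inner[1]=0x%02x outer[0]=0x%02x\n", inner_code[1], outer_code[0]);
    return 0;
}
```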