-
Publication No.: US20190114275A1
Publication Date: 2019-04-18
Application No.: US15786098
Filing Date: 2017-10-17
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps
IPC: G06F13/376 , G06F12/0817
Abstract: A node controller manages access to, and provides responses from, a remote memory for a plurality of processor nodes. A learning block monitors requests to a given data block in the remote memory and monitors parameters associated with the requests. The learning block updates a respective weighting value for each of the parameters associated with the requests to the given data block. Event detection circuitry stores the parameters and the weighting values for each of the parameters associated with an address for the given data block to determine a subsequent memory action for a prospective data block in the remote memory.
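The per-parameter weighting idea in this abstract can be illustrated with a minimal Python sketch. The parameter names, the exponential-moving-average update rule, and the threshold are illustrative assumptions, not the patent's actual mechanism.

```python
class LearningBlock:
    """Tracks learned weights for request parameters per data block."""

    def __init__(self, alpha=0.25, threshold=0.6):
        self.alpha = alpha          # learning rate for weight updates
        self.threshold = threshold  # weight needed to trigger an action
        self.weights = {}           # block_addr -> {param_name: weight}

    def observe(self, block_addr, params):
        """Update weights for the parameters seen in one request."""
        w = self.weights.setdefault(block_addr, {})
        for name, active in params.items():
            old = w.get(name, 0.0)
            # move the weight toward 1.0 when the parameter is present,
            # toward 0.0 when it is absent
            w[name] = old + self.alpha * ((1.0 if active else 0.0) - old)

    def next_action(self, block_addr):
        """Suggest a memory action once any weight exceeds the threshold."""
        w = self.weights.get(block_addr, {})
        strong = [n for n, v in w.items() if v >= self.threshold]
        return ("prefetch", strong) if strong else ("none", [])
```

After a handful of requests that consistently exhibit a parameter, its weight crosses the threshold and a speculative action (here, a hypothetical "prefetch") is suggested.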
-
Publication No.: US20180293184A1
Publication Date: 2018-10-11
Application No.: US15483880
Filing Date: 2017-04-10
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps , Michael E. Malewicki
CPC classification number: G06F13/161 , G06F13/1663 , G06F13/1673 , G06F13/4022 , G06F13/4068
Abstract: A high-performance computing system, method, and storage medium manage accesses to multiple memory modules of a computing node, the modules having different access latencies. The node allocates its resources into pools according to pre-determined memory access criteria. When another computing node requests a memory access, the node determines whether the request satisfies any of the criteria. If so, the associated pool of resources is selected for servicing the request; if not, a default pool is selected. The node then services the request if the pool of resources is sufficient. Otherwise, various error handling processes are performed. Each memory access criterion may relate to a memory address range assigned to a memory module, a type of request, a relationship between the nodes, a configuration of the requesting node, or a combination of these.
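The pool-selection logic described above can be sketched as follows. The predicate-based criteria and credit counts are assumptions for illustration; the patent does not specify this representation.

```python
class ResourcePools:
    """Selects a resource pool for a memory request by matching criteria,
    falling back to a default pool, and erroring when credits run out."""

    def __init__(self, default_credits):
        self.pools = []              # each pool: [predicate, remaining credits]
        self.default = default_credits

    def add_pool(self, predicate, credits):
        self.pools.append([predicate, credits])

    def service(self, request):
        # first pool whose criterion matches services the request
        for pool in self.pools:
            if pool[0](request):
                if pool[1] > 0:
                    pool[1] -= 1
                    return "serviced"
                return "error"       # matching pool exhausted: error handling
        # no criterion matched: use the default pool
        if self.default > 0:
            self.default -= 1
            return "serviced"
        return "error"
```

A criterion here could test an address range, a request type, or a node relationship, matching the categories the abstract lists.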
-
Publication No.: US20240345857A1
Publication Date: 2024-10-17
Application No.: US18133895
Filing Date: 2023-04-12
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Brian J. Johnson , Frank R. Dropps , Derek S. Schumacher , Thomas Edward McGee
IPC: G06F9/455
CPC classification number: G06F9/45558 , G06F2009/45579 , G06F2009/45583
Abstract: A first hypervisor running on a first processor cluster is provided. During operation, the first hypervisor can determine a first set of processing nodes and a first memory unit of the first processor cluster in response to the booting up of a first Basic Input/Output System (BIOS) of the first processor cluster. The first hypervisor can discover a second hypervisor running on a second processor cluster comprising a second set of processing nodes and a second memory unit. The first hypervisor can operate, with the second hypervisor, a distributed system comprising the first and second sets of processing nodes and the first and second memory units. The first hypervisor can then operate, with the second hypervisor, a global virtual machine on the distributed system. The virtual memory space of the global virtual machine can be mapped to respective memory spaces of the first and second processor clusters.
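The final sentence, mapping the global virtual machine's memory onto the clusters' memory spaces, suggests a simple range-based translation. The function below is a sketch under that assumption; the actual mapping scheme is not specified in the abstract.

```python
def map_guest_address(guest_addr, cluster_ranges):
    """Translate a guest-physical address of the global VM into a
    (cluster_id, local_offset) pair.

    cluster_ranges: list of (cluster_id, size) tuples, concatenated in
    order to form the VM's flat memory space.
    """
    base = 0
    for cluster_id, size in cluster_ranges:
        if base <= guest_addr < base + size:
            return cluster_id, guest_addr - base
        base += size
    raise ValueError("address outside the global VM's memory space")
```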
-
Publication No.: US20230262001A1
Publication Date: 2023-08-17
Application No.: US17672481
Filing Date: 2022-02-15
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Frank R. Dropps , Joseph G. Tietz , Derek Alan Sherlock
IPC: H04L47/2441 , H04L47/2483 , H04L47/10 , H04L47/52 , H04L47/78
CPC classification number: H04L47/2441 , H04L47/39 , H04L47/521 , H04L47/781 , H04L47/2483
Abstract: A system for facilitating enhanced virtual channel switching in a node of a distributed computing environment is provided. During operation, the system can allocate flow control credits for a first virtual channel to an upstream node in the distributed computing environment. The system can receive, via a message path comprising the upstream node, a message on the first virtual channel based on the allocated flow control credits. The system can then store the message in a queue associated with an input port and determine whether the message is a candidate for changing the first virtual channel at the node based on a mapping rule associated with the input port. If the message is a candidate, the system can associate the message with a second virtual channel indicated in the mapping rule in the queue. Subsequently, the system can send the message from the queue on the second virtual channel.
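The re-tagging step, changing a queued message's virtual channel according to a per-port mapping rule, can be sketched as below. The rule format is an assumption for illustration.

```python
from collections import deque

def vc_switch(messages, mapping_rule):
    """Queue incoming messages and re-tag candidates to the rule's
    target virtual channel while they sit in the input-port queue.

    messages: iterable of (vc, payload) pairs received on the port.
    mapping_rule: {"from_vc": ..., "to_vc": ...} for this input port.
    """
    queue = deque()
    for vc, payload in messages:
        # a message is a candidate if it arrived on the rule's source VC
        if vc == mapping_rule["from_vc"]:
            vc = mapping_rule["to_vc"]   # switch VC in place, in the queue
        queue.append((vc, payload))
    return list(queue)                   # sent onward on the (new) VC
```

Flow-control credits for the original channel would be returned upstream as usual; this sketch covers only the re-tagging decision.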
-
Publication No.: US20210367699A1
Publication Date: 2021-11-25
Application No.: US16882411
Filing Date: 2020-05-22
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps , Mir Ashkan Seyedi
IPC: H04J14/02 , H04L12/427
Abstract: A fiber loop includes a plurality of processors coupled to each other and a controller coupled to each of the plurality of processors. The controller is configured to: assign to each of the plurality of processors a number of wavelengths for interconnect communications between the plurality of processors; receive, from a first processor of the plurality of processors, a request for one or more additional wavelengths; determine whether an interconnect bandwidth utilization on the fiber loop is less than a threshold; and in response to determining that the interconnect bandwidth utilization on the fiber loop is less than the threshold, reassign, to the first processor, one or more wavelengths that are assigned to a second processor of the plurality of processors.
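The controller's decision rule can be captured in a few lines. The 80% utilization threshold and the donor-selection policy here are illustrative assumptions.

```python
def handle_wavelength_request(assignments, requester, donor, extra,
                              utilization, threshold=0.8):
    """Move `extra` wavelengths from `donor` to `requester` only when the
    loop's interconnect bandwidth utilization is below the threshold.

    assignments: dict mapping processor id -> number of assigned wavelengths.
    Returns True if the reassignment was performed.
    """
    if utilization >= threshold or assignments[donor] < extra:
        return False                 # loop too busy, or donor cannot spare them
    assignments[donor] -= extra
    assignments[requester] += extra
    return True
```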
-
Publication No.: US20210240625A1
Publication Date: 2021-08-05
Application No.: US17301949
Filing Date: 2021-04-20
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Frank R. Dropps
IPC: G06F12/0817 , G06F12/123
Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.
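One plausible ejection policy rule implied by the abstract is that an entry whose block has just been invalidated can be freed immediately. The sketch below assumes that rule; the patent's actual policy rules may differ.

```python
class DirectoryCache:
    """Caches coherence-directory entries holding state/ownership info."""

    def __init__(self):
        self.entries = {}   # block addr -> {"state": ..., "owner": ...}

    def handle_request(self, addr, new_state, eject_invalid=True):
        if addr not in self.entries:
            return "miss"
        entry = self.entries[addr]
        entry["state"] = new_state
        # ejection policy rule (assumed): a block that became invalid no
        # longer needs directory tracking, so its cache slot is freed now
        if new_state == "invalid" and eject_invalid:
            del self.entries[addr]
            return "hit-ejected"
        return "hit"
```

Freeing invalidated entries eagerly leaves more slots for blocks that are still active in memory, which is the benefit the abstract's active-state tracking appears aimed at.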
-
Publication No.: US20180227078A1
Publication Date: 2018-08-09
Application No.: US15427941
Filing Date: 2017-02-08
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps , Mark R. Sikkink
IPC: H04L1/00 , H04L12/721
CPC classification number: H04L1/0042 , H04L1/0009 , H04L1/0058 , H04L1/18 , H04L45/72
Abstract: A system, method, and storage medium provide dynamic, packet-based adaptive forward error correction over a lossy bidirectional data communication medium that couples a transmitting device to a receiving device. The transmitting device repeatedly transmits encoded data packets formed by applying, to unencoded data, a forward error correction (FEC) algorithm having a level N that indicates a number of correctable errors. The receiving device attempts to decode the encoded data packets using the FEC algorithm, requesting retransmission of a packet if there are too many errors to correct. The transmitting device decreases the level N when it does not receive such a request within a given duration. By contrast, the transmitting device increases the level N when it receives a sequence of such requests having a threshold length, each request being received less than the given duration after the previous request.
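The transmitter's adaptation rule, lowering N after a quiet period and raising it after a run of closely spaced retransmission requests, can be sketched as follows. The specific durations, thresholds, and level bounds are illustrative assumptions.

```python
class AdaptiveFecSender:
    """Adjusts the FEC level N from the timing of retransmission requests."""

    def __init__(self, level=4, min_level=0, max_level=8,
                 quiet_duration=1.0, streak_threshold=3):
        self.level = level
        self.min_level, self.max_level = min_level, max_level
        self.quiet = quiet_duration          # "given duration" in the abstract
        self.streak_threshold = streak_threshold
        self.last_request = None
        self.streak = 0

    def on_retransmit_request(self, now):
        # extend the streak if this request came sooner than the quiet
        # duration after the previous one; otherwise start a new streak
        if self.last_request is not None and now - self.last_request < self.quiet:
            self.streak += 1
        else:
            self.streak = 1
        self.last_request = now
        if self.streak >= self.streak_threshold:
            self.level = min(self.max_level, self.level + 1)
            self.streak = 0

    def on_quiet_period(self, now):
        # no retransmission request within the duration: channel looks
        # clean, so spend fewer bits on error correction
        if self.last_request is None or now - self.last_request >= self.quiet:
            self.level = max(self.min_level, self.level - 1)
            self.streak = 0
```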
-
Publication No.: US20240069742A1
Publication Date: 2024-02-29
Application No.: US17898189
Filing Date: 2022-08-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Thomas Edward McGee , Brian J. Johnson , Frank R. Dropps , Derek S. Schumacher , Stuart C. Haden , Michael S. Woodacre
IPC: G06F3/06 , G06F12/0817
CPC classification number: G06F3/0617 , G06F3/0647 , G06F3/0679 , G06F12/0828 , G06F2212/271 , G06F2212/621
Abstract: One aspect of the application can provide a system and method for replacing a failing node with a spare node in a non-uniform memory access (NUMA) system. During operation, in response to determining that a node-migration condition is met, the system can initialize a node controller of the spare node such that accesses to a memory local to the spare node are to be processed by the node controller, quiesce the failing node and the spare node to allow state information of processors on the failing node to be migrated to processors on the spare node, and subsequent to unquiescing the failing node and the spare node, migrate data from the failing node to the spare node while maintaining cache coherence in the NUMA system and while the NUMA system remains in operation, thereby facilitating continuous execution of processes previously executed on the failing node.
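The migration procedure is an ordered sequence of steps. The sketch below records that ordering; every helper name on the `system` object is a hypothetical placeholder, not an API from the patent.

```python
def migrate_node(failing, spare, system):
    """Replace `failing` with `spare`, in the order the abstract gives."""
    system.init_node_controller(spare)      # spare's local memory now served
    system.quiesce(failing, spare)          # pause both nodes
    system.copy_processor_state(failing, spare)
    system.unquiesce(failing, spare)        # resume; system stays operational
    system.migrate_memory(failing, spare)   # data moves, coherence maintained


class MigrationLog:
    """Test double that records which steps ran, and in what order."""

    def __init__(self):
        self.steps = []

    def __getattr__(self, name):
        def step(*args):
            self.steps.append(name)
        return step
```

The key property the abstract claims is that only the state-copy phase needs quiescence; the bulk memory migration happens afterward, while processes keep running on the spare node.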
-
Publication No.: US11556478B2
Publication Date: 2023-01-17
Application No.: US17085321
Filing Date: 2020-10-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Frank R. Dropps
IPC: G06F12/00 , G06F12/0891 , G06F12/0873 , G06F11/30 , G06F12/02 , G06F9/30 , G06F12/0817
Abstract: A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
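The counter, ordering, and mapping pieces fit together naturally as an ordered map. This Python sketch models the behavior in software; the patent describes hardware circuitry, and the data-structure choice here is an assumption.

```python
from collections import OrderedDict

class DirtyLineTracker:
    """Counts dirty lines and keeps them in write-back order, with fast
    removal of a line that is evicted before its scheduled write-back."""

    def __init__(self):
        self.count = 0
        self.order = OrderedDict()   # addr -> None; oldest dirty line first

    def mark_dirty(self, addr):
        if addr not in self.order:   # only new dirty lines bump the count
            self.order[addr] = None
            self.count += 1

    def write_back_oldest(self):
        addr, _ = self.order.popitem(last=False)   # head of the ordering
        self.count -= 1
        return addr

    def evict(self, addr):
        # the mapping lets the controller locate the evicted line in the
        # ordering and remove it without scanning
        if addr in self.order:
            del self.order[addr]
            self.count -= 1
```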
-
Publication No.: US11556471B2
Publication Date: 2023-01-17
Application No.: US16399230
Filing Date: 2019-04-30
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Frank R. Dropps , Michael S. Woodacre , Thomas McGee , Michael Malewicki
IPC: G06F12/0817
Abstract: In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks are associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in one of the same memory devices, and the memory category of the first memory block is different than the memory category of the second memory block.
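Selecting a coherency protocol by the memory category of the addressed block can be sketched as a lookup. The category names, protocol names, and 64-byte block size are all hypothetical.

```python
def coherency_protocol(directory, addr, block_size=64):
    """Return the coherency-management protocol for the memory block
    containing `addr`, based on the block's memory category.

    directory: dict mapping block number -> category string.
    """
    block = addr // block_size
    category = directory.get(block, "non-coherent")
    protocols = {
        "hardware-coherent": "snoop-and-forward",   # full directory tracking
        "software-managed": "defer-to-software",    # software keeps coherence
        "non-coherent": "direct-access",            # no tracking needed
    }
    return protocols[category]
```

Note that two adjacent blocks in the same memory device can carry different categories, which is exactly the situation the abstract's last sentence describes.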
-