-
Publication No.: US20190095340A1
Publication Date: 2019-03-28
Application No.: US15719092
Filing Date: 2017-09-28
Applicant: Hewlett Packard Enterprise Development LP
Inventor: James Hyungsun Park , Harumi Kuno , Milind M. Chabbi , Wey Yuan Guy , Charles Stuart Johnson , Daniel Feldman , Tuan Tran , William N. Scherer, III , John L. Byrne
IPC: G06F12/109
Abstract: A memory region has logical partitions. Each logical partition has data packages. The memory region discontiguously stores the data packages of the logical partitions. A writing process can discontiguously generate the data packages of the logical partitions. A reading process can contiguously retrieve the data packages of a selected logical partition.
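The layout described here can be pictured as a flat region whose slots interleave packages from several partitions, plus a per-partition index that lets a reader gather one partition contiguously. A minimal Python sketch; MemoryRegion, write_package, and read_partition are all invented for illustration, not the patent's implementation:

    class MemoryRegion:
        """Illustrative sketch: packages of several logical partitions share one
        region; an index records where each partition's packages landed."""
        def __init__(self):
            self.slots = []   # physical storage, packages of all partitions interleaved
            self.index = {}   # partition id -> list of slot positions

        def write_package(self, partition, package):
            # The writer appends to the next free slot, so a partition's packages
            # end up discontiguous in the region.
            self.index.setdefault(partition, []).append(len(self.slots))
            self.slots.append(package)

        def read_partition(self, partition):
            # The reader follows the index and returns the packages contiguously.
            return [self.slots[pos] for pos in self.index.get(partition, [])]

    region = MemoryRegion()
    region.write_package("A", b"a0")
    region.write_package("B", b"b0")
    region.write_package("A", b"a1")    # "A" packages now sit in non-adjacent slots
    print(region.read_partition("A"))   # [b'a0', b'a1'], gathered contiguously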
-
Publication No.: US11644882B2
Publication Date: 2023-05-09
Application No.: US17337107
Filing Date: 2021-06-02
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Harumi Kuno , Alan Davis , Torsten Wilde , Daniel William Dauwe , Duncan Roweth , Ryan Dean Menhusen , Sergey Serebryakov , John L. Byrne , Vipin Kumar Kukkala , Sai Rahul Chalamalasetti
IPC: G06F1/3206 , G06F1/30 , H02J3/00 , G06F1/18
CPC classification number: G06F1/305 , G06F1/188 , G06F1/3206 , H02J3/003
Abstract: One embodiment provides a system and method for predicting network power usage associated with workloads. During operation, the system configures a simulator to simulate operations of a plurality of network components, which comprises embedding one or more event counters in each simulated network component. A respective event counter is configured to count a number of network-power-related events. The system collects, based on values of the event counters, network-power-related performance data associated with one or more sample workloads applied to the simulator; and trains a machine-learning model with the collected network-power-related performance data and characteristics of the sample workloads as training data, thereby facilitating prediction of network-power-related performance associated with a to-be-evaluated workload.
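A hedged sketch of that workflow in Python: event counters embedded in simulated components supply the features, and a plain least-squares fit stands in for the machine-learning model. SimulatedSwitch, run_workload, and the sample numbers are assumptions for illustration, not the patent's implementation:

    import numpy as np

    class SimulatedSwitch:
        def __init__(self):
            self.power_events = 0          # embedded network-power-related counter

        def forward(self, packets):
            self.power_events += packets   # count events as traffic is simulated

    def run_workload(components, traffic):
        for component, packets in zip(components, traffic):
            component.forward(packets)
        return [c.power_events for c in components]

    # Collect counter values for a few sample workloads (features) alongside
    # measured or modeled network power (targets), then fit the stand-in model.
    workloads = [[10, 5, 2], [40, 30, 25], [80, 60, 75]]
    power = [12.0, 55.0, 120.0]
    features = [run_workload([SimulatedSwitch() for _ in w], w) for w in workloads]
    coeffs, *_ = np.linalg.lstsq(np.array(features, float), np.array(power), rcond=None)

    # Predict network power for a to-be-evaluated workload from its counter values.
    new_counts = run_workload([SimulatedSwitch() for _ in range(3)], [60, 45, 50])
    print(float(np.array(new_counts) @ coeffs))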
-
Publication No.: US11556438B2
Publication Date: 2023-01-17
Application No.: US16994784
Filing Date: 2020-08-17
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Cong Xu , Naveen Muralimanohar , Harumi Kuno
IPC: G06F11/20 , G06F11/14 , G06F11/07 , G06F11/00 , G06F11/36 , G06F9/48 , G06F9/52 , G06F9/54 , G06F9/455 , G06N20/00
Abstract: While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes.
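One way to picture the control loop is a per-checkpoint pass that checks each active node's predicted failure likelihood and swaps in a spare when the prediction crosses the threshold. A minimal sketch, assuming the predictor, checkpoint, and migration routines are supplied by the cluster; all names here are illustrative:

    FAILURE_THRESHOLD = 0.8

    def checkpoint_round(active_nodes, spare_nodes, predict_failure,
                         take_checkpoint, migrate):
        take_checkpoint(active_nodes)            # scheduled checkpoint of the cluster
        for node in list(active_nodes):
            # Prediction runs per node, e.g. from hardware and software events.
            if predict_failure(node) > FAILURE_THRESHOLD and spare_nodes:
                spare = spare_nodes.pop()
                migrate(node, spare)             # proactive migration at the checkpoint
                active_nodes.remove(node)
                active_nodes.append(spare)

    # Example wiring with trivial stand-ins:
    active, spares = ["n0", "n1"], ["s0"]
    checkpoint_round(active, spares,
                     predict_failure=lambda n: 0.9 if n == "n1" else 0.1,
                     take_checkpoint=lambda nodes: None,
                     migrate=lambda src, dst: None)
    print(active)   # ['n0', 's0'] -- n1 was proactively replaced by the spare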
-
Publication No.: US11210089B2
Publication Date: 2021-12-28
Application No.: US16508769
Filing Date: 2019-07-11
Applicant: Hewlett Packard Enterprise Development LP
Inventor: John L. Byrne , Harumi Kuno , Jeffrey Drummond
Abstract: Methods and systems for conducting vector send operations are provided. The processor of a sender node receives a request from a user application to perform a collective send operation (e.g., MPI_Broadcast), requesting that a copy of the data in one or more send buffers be sent to each of a plurality of destinations in a destination vector. The processor invokes a vector send operation from a software communications library, placing a remote enqueue atomic send command for each destination node of the destination vector in an entry of a transmit data mover (XDM) command queue in a single call. The processor executes all of the commands in the XDM command queue and writes the data in the one or more send buffers into each receive queue of each destination identified in the destination vector.
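A rough sketch of that flow in Python, modeling the XDM command queue and the destinations' receive queues as plain lists; vector_send, SendCommand, and the queue layout are illustrative stand-ins, not the library's API:

    from collections import namedtuple

    SendCommand = namedtuple("SendCommand", ["destination", "buffers"])

    def vector_send(xdm_queue, destination_vector, send_buffers, receive_queues):
        # Single call: enqueue one remote-enqueue send command per destination.
        for dest in destination_vector:
            xdm_queue.append(SendCommand(dest, send_buffers))
        # Execute every queued command, writing the send buffers into each
        # destination's receive queue.
        while xdm_queue:
            cmd = xdm_queue.pop(0)
            receive_queues[cmd.destination].extend(cmd.buffers)

    rx = {0: [], 1: [], 2: []}
    vector_send([], destination_vector=[0, 2], send_buffers=[b"payload"],
                receive_queues=rx)
    print(rx)   # {0: [b'payload'], 1: [], 2: [b'payload']}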
-
Publication No.: US10540227B2
Publication Date: 2020-01-21
Application No.: US15861381
Filing Date: 2018-01-03
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Charles Johnson , Onkar Patil , Mesut Kuscu , Tuan Tran , Joseph Tucek , Harumi Kuno , Milind Chabbi , William Scherer
Abstract: A high performance computing system including processing circuitry and a shared fabric memory is disclosed. The processing circuitry includes processors coupled to local storages. The shared fabric memory includes memory devices and is coupled to the processing circuitry. The shared fabric memory executes a first sweep of a stencil code by sequentially retrieving data stripes. Further, for each retrieved data stripe, a set of values of the retrieved data stripe are updated substantially simultaneously. For each retrieved data stripe, the updated set of values are stored in a free memory gap adjacent to the retrieved data stripe. For each retrieved data stripe, the free memory gap is advanced to an adjacent memory location. A sweep status indicator is incremented from the first sweep to a second sweep.
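The sweep can be pictured as an array of stripe slots containing exactly one free gap: each updated stripe is written into the gap adjacent to the stripe just retrieved, and the gap then advances into the slot that stripe vacated. A minimal sketch, with sweep, update, and the slot layout all assumed for illustration:

    def sweep(slots, gap, update):
        # slots holds the data stripes plus one free slot at index `gap`.
        for _ in range(len(slots) - 1):
            src = (gap + 1) % len(slots)      # stripe adjacent to the free gap
            slots[gap] = update(slots[src])   # store the updated stripe in the gap
            slots[src] = None                 # the gap advances to the vacated slot
            gap = src
        return gap                            # position of the gap after the sweep

    stripes = [None, [1, 2], [3, 4], [5, 6]]  # index 0 is the initial free gap
    sweep_status = 1                          # first sweep
    gap = sweep(stripes, 0, update=lambda s: [v + 1 for v in s])
    sweep_status += 1                         # increment toward the second sweep
    print(stripes, gap)                       # [[2, 3], [4, 5], [6, 7], None] 3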
-
Publication No.: US20160253384A1
Publication Date: 2016-09-01
Application No.: US15032977
Filing Date: 2013-11-14
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Harumi Kuno , Goetz Graefe
IPC: G06F17/30
CPC classification number: G06F17/30469 , G06F17/30327 , G06F17/30589
Abstract: Disclosed herein are a system, non-transitory computer-readable medium, and method for estimating database performance. A request for an estimate of data is read. The estimate is calculated based at least partially on a node located in a data structure.
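The abstract is terse, but one reading is that a range-cardinality estimate is derived from the keys of a single index node rather than from a scan. A heavily hedged sketch under that assumption; the node layout, rows_per_child, and the counting rule are all invented for illustration:

    import bisect

    def estimate_range(node_keys, rows_per_child, low, high):
        # Estimate how many rows fall in [low, high] from one sorted index node,
        # assuming each child subtree holds roughly rows_per_child rows.
        lo_child = bisect.bisect_left(node_keys, low)     # first child that may match
        hi_child = bisect.bisect_right(node_keys, high)   # last child that may match
        children_spanned = max(hi_child - lo_child + 1, 1)
        return children_spanned * rows_per_child

    # The request for an estimate is served from the node alone, without a scan.
    print(estimate_range([100, 200, 300, 400], rows_per_child=50, low=150, high=320))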
-
Publication No.: US20220390999A1
Publication Date: 2022-12-08
Application No.: US17337107
Filing Date: 2021-06-02
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Harumi Kuno , Alan Davis , Torsten Wilde , Daniel William Dauwe , Duncan Roweth , Ryan Dean Menhusen , Sergey Serebryakov , John L. Byrne , Vipin Kumar Kukkala , Sai Rahul Chalamalasetti
IPC: G06F1/30 , G06F1/3206 , G06F1/18 , H02J3/00
Abstract: One embodiment provides a system and method for predicting network power usage associated with workloads. During operation, the system configures a simulator to simulate operations of a plurality of network components, which comprises embedding one or more event counters in each simulated network component. A respective event counter is configured to count a number of network-power-related events. The system collects, based on values of the event counters, network-power-related performance data associated with one or more sample workloads applied to the simulator; and trains a machine-learning model with the collected network-power-related performance data and characteristics of the sample workloads as training data, thereby facilitating prediction of network-power-related performance associated with a to-be-evaluated workload.
-
Publication No.: US20200379858A1
Publication Date: 2020-12-03
Application No.: US16994784
Filing Date: 2020-08-17
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Cong Xu , Naveen Muralimanohar , Harumi Kuno
IPC: G06F11/20 , G06F11/14 , G06F11/07 , G06F11/00 , G06F11/36 , G06F9/48 , G06F9/52 , G06F9/54 , G06F9/455 , G06N20/00
Abstract: While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes.
-
Publication No.: US10776225B2
Publication Date: 2020-09-15
Application No.: US16022990
Filing Date: 2018-06-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Cong Xu , Naveen Muralimanohar , Harumi Kuno
IPC: G06F11/20 , G06F11/14 , G06F11/07 , G06F11/00 , G06F11/36 , G06F9/48 , G06F9/52 , G06F9/54 , G06F9/455 , G06N20/00
Abstract: While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes.
-
Publication No.: US10482013B2
Publication Date: 2019-11-19
Application No.: US15513407
Filing Date: 2014-09-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Charles S. Johnson , Harumi Kuno , Goetz Graefe , Haris Volos , Mark Lillibridge , James Hyungsun Park , Wey Guy
IPC: G06F12/0804 , G06F12/0868 , G06F12/12 , G06F11/14
Abstract: Systems and methods associated with page modification are disclosed. One example method may be embodied on a non-transitory computer-readable medium storing computer-executable instructions. The instructions, when executed by a computer, may cause the computer to fetch a page to a buffer pool in a memory. The page may be fetched from at least one of a log and a backup using single page recovery. The instructions may also cause the computer to store a modification of the page to the log. The modification may be stored to the log as a log entry. The instructions may also cause the computer to evict the page from memory when the page is replaced in the buffer pool. Page writes associated with the eviction may be elided.
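A compact sketch of that fetch/modify/evict cycle with elided page writes, in Python; the backup source, log format, eviction policy, and recovery routine are illustrative assumptions rather than the patented mechanism:

    class BufferPool:
        def __init__(self, backup, log, capacity=2):
            self.backup, self.log, self.capacity = backup, log, capacity
            self.pool = {}                       # page id -> page contents in memory

        def fetch(self, page_id):
            if page_id not in self.pool:
                if len(self.pool) >= self.capacity:
                    # Evict a page without writing it back: its modifications
                    # already live in the log, so the page write is elided.
                    self.pool.pop(next(iter(self.pool)))
                self.pool[page_id] = self._recover(page_id)
            return self.pool[page_id]

        def modify(self, page_id, change):
            self.fetch(page_id).append(change)
            self.log.append((page_id, change))   # modification stored as a log entry

        def _recover(self, page_id):
            # Single-page recovery: rebuild the page from backup plus its log entries.
            page = list(self.backup.get(page_id, []))
            page.extend(c for pid, c in self.log if pid == page_id)
            return page

    bp = BufferPool(backup={1: ["v0"]}, log=[])
    bp.modify(1, "v1")
    bp.fetch(2); bp.fetch(3)     # page 1 is evicted, its write back is elided
    print(bp.fetch(1))           # ['v0', 'v1'], rebuilt via single-page recovery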
-