-
Publication number: US10417215B2
Publication date: 2019-09-17
Application number: US15721317
Application date: 2017-09-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Huanchen Zhang , Kimberly Keeton
Abstract: A system includes processing nodes and shared memory. Each processing node includes a processor and local memory. The local memory of each processing node stores at least a partial copy of the immutable data stage of a dataset. The shared memory is accessible by each processing node and stores a sole copy of the mutable data stage of the dataset and a master copy of the immutable data stage of the dataset.
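The split the abstract describes, a node-local immutable stage backed by a shared master copy, plus a mutable stage that lives only in shared memory, can be illustrated with a minimal sketch. This is a toy key-value model; the class and method names are hypothetical and not taken from the patent.

```python
# Minimal sketch (hypothetical names): a dataset split into an immutable stage,
# cached locally on each processing node, and a mutable stage whose sole copy
# lives in shared memory.

class SharedMemory:
    """Stands in for the shared memory accessible by every processing node."""
    def __init__(self, immutable_master, mutable_stage):
        self.immutable_master = dict(immutable_master)  # master copy of the immutable stage
        self.mutable_stage = dict(mutable_stage)        # sole copy of the mutable stage

class ProcessingNode:
    def __init__(self, name, shared, keys_to_cache):
        self.name = name
        self.shared = shared
        # Local memory holds at least a partial copy of the immutable stage.
        self.local_immutable = {k: shared.immutable_master[k] for k in keys_to_cache}

    def read(self, key):
        # Immutable data is served from local memory when cached,
        # falling back to the master copy in shared memory.
        if key in self.local_immutable:
            return self.local_immutable[key]
        if key in self.shared.immutable_master:
            return self.shared.immutable_master[key]
        # Mutable data always comes from the single shared copy.
        return self.shared.mutable_stage[key]

    def write(self, key, value):
        # Writes target the mutable stage only; the immutable stage never changes.
        self.shared.mutable_stage[key] = value

shared = SharedMemory({"a": 1, "b": 2}, {"counter": 0})
node = ProcessingNode("node0", shared, keys_to_cache=["a"])
node.write("counter", 7)
print(node.read("a"), node.read("b"), node.read("counter"))  # 1 2 7
```

Because the immutable stage never changes, node-local copies need no write-back synchronization; only the single shared mutable copy ever takes updates.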
-
Publication number: US20190121750A1
Publication date: 2019-04-25
Application number: US15789431
Application date: 2017-10-20
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Kimberly Keeton , Yupu Zhang , Haris Volos , Ram Swaminathan , Evan R. Kirshenbaum
IPC: G06F12/14 , G06F12/128
Abstract: Determining cache value currency using persistent markers is disclosed herein. In one example, a cache entry is retrieved from the local cache memory device of a computing device. The cache entry includes a key, a value to be used by the computing device, and a marker flag used to determine whether the cache entry is current. The local cache memory device also includes a marker location that indicates the location of a marker in a shared persistent fabric-attached memory (FAM). Using the marker location, the marker is retrieved from the shared persistent FAM. From the marker and the marker flag, it is determined whether the cache entry is current. The shared persistent FAM is connected to the local cache memory devices of multiple computing devices.
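A minimal sketch of the currency check follows, assuming the markers are integer words in a flat FAM region; the field names and layout are illustrative, not the patent's.

```python
# Minimal sketch (illustrative names): checking whether a local cache entry is
# still current by comparing its marker flag against a marker stored in shared
# persistent fabric-attached memory (FAM).

from dataclasses import dataclass

@dataclass
class CacheEntry:
    key: str
    value: object
    marker_flag: int      # snapshot of the marker when the entry was cached
    marker_location: int  # where the authoritative marker lives in shared FAM

# The shared persistent FAM is modeled as a flat list of marker words visible to all nodes.
shared_fam_markers = [0, 0, 0, 0]

def is_current(entry: CacheEntry) -> bool:
    # Fetch the marker from shared FAM using the stored location and compare it
    # with the flag captured when the entry was written into the local cache.
    marker = shared_fam_markers[entry.marker_location]
    return marker == entry.marker_flag

entry = CacheEntry(key="user:42", value={"name": "Ada"}, marker_flag=0, marker_location=2)
print(is_current(entry))          # True: no writer has advanced the marker
shared_fam_markers[2] += 1        # another node invalidates by bumping the marker
print(is_current(entry))          # False: the cached value is stale
```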
-
Publication number: US20180025043A1
Publication date: 2018-01-25
Application number: US15556238
Application date: 2015-03-06
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Stanko Novakovic , Kimberly Keeton , Paolo Faraboschi , Robert Schreiber
IPC: G06F17/30
CPC classification number: G06F16/2358 , G06F16/273 , G06F16/9024
Abstract: In some examples, a graph processing server is communicatively linked to a shared memory. The shared memory may also be accessible to a different graph processing server. The graph processing server may compute an updated vertex value for a graph portion handled by the graph processing server and flush the updated vertex value to the shared memory, for retrieval by the different graph processing server. The graph processing server may also notify the different graph processing server indicating that the updated vertex value has been flushed to the shared memory.
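The flush-then-notify protocol between graph processing servers can be sketched as below. The interfaces are hypothetical stand-ins for the shared memory region and the notification channel, and the vertex update rule is a toy placeholder.

```python
# Minimal sketch (hypothetical interfaces): one graph processing server updates
# vertex values for its graph portion, flushes them to shared memory, and then
# notifies a different server that the flushed values are available.

shared_memory = {}   # stands in for the shared vertex-value region
notifications = []   # stands in for the notification channel between servers

class GraphProcessingServer:
    def __init__(self, name, vertices):
        self.name = name
        self.vertices = dict(vertices)   # the graph portion this server handles

    def compute_and_flush(self):
        # Compute updated vertex values for the local partition (toy update rule).
        for v in self.vertices:
            self.vertices[v] += 1
        # Flush the updates to shared memory so other servers can retrieve them...
        shared_memory.update(self.vertices)
        # ...and notify peers that the flush has completed.
        notifications.append((self.name, sorted(self.vertices)))

    def on_notify(self):
        # A different server retrieves the flushed values after the notification.
        while notifications:
            sender, keys = notifications.pop(0)
            pulled = {k: shared_memory[k] for k in keys}
            print(f"{self.name} pulled {pulled} flushed by {sender}")

server_a = GraphProcessingServer("A", {"v1": 0, "v2": 0})
server_b = GraphProcessingServer("B", {"v3": 0})
server_a.compute_and_flush()
server_b.on_notify()
```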
-
Publication number: US20240020155A1
Publication date: 2024-01-18
Application number: US18476690
Application date: 2023-09-28
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Dejan S. Milojicic , Kimberly Keeton , Paolo Faraboschi , Cullen E. Bash
CPC classification number: G06F9/4881 , G06F9/505 , G06F9/5044 , G06F9/5005 , G06F9/5055
Abstract: Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to “warm” resources and initiate a fault process if the resource is overloaded or a cache miss is identified (e.g., by restarting or rebooting the resource). The identified warm instances or accelerators associated with the allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
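A minimal sketch of such a dispatcher is shown below, assuming a warm pool keyed by allocation size and a simple overload threshold per resource; the class names and the fault behavior (just reporting a restart) are illustrative, not the patent's mechanism.

```python
# Minimal sketch (illustrative names): a dispatcher that routes requests to
# "warm" resources sized to the request, and takes a fault/restart path when
# a resource is overloaded or the request misses the warm pool.

class Resource:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.in_flight = 0

class Dispatcher:
    def __init__(self, warm_pool):
        # warm_pool maps an allocation size to already-warm resources of that size.
        self.warm_pool = warm_pool

    def dispatch(self, request_size):
        candidates = self.warm_pool.get(request_size, [])
        if not candidates:
            return self.fault(None, reason="warm-pool miss")
        resource = min(candidates, key=lambda r: r.in_flight)
        if resource.in_flight >= resource.capacity:
            return self.fault(resource, reason="overloaded")
        resource.in_flight += 1
        return f"assigned to warm resource {resource.name}"

    def fault(self, resource, reason):
        # The fault path would restart/reboot the resource or warm a new instance.
        target = resource.name if resource else "new instance"
        return f"fault ({reason}): restarting {target}"

pool = {1: [Resource("small-0", capacity=2)], 4: [Resource("large-0", capacity=1)]}
d = Dispatcher(pool)
print(d.dispatch(1))   # assigned to warm resource small-0
print(d.dispatch(8))   # fault (warm-pool miss): restarting new instance
```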
-
Publication number: US11561607B2
Publication date: 2023-01-24
Application number: US17085805
Application date: 2020-10-30
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Catherine Graves , Can Li , John Paul Strachan , Dejan S. Milojicic , Kimberly Keeton
IPC: G11C16/04 , G06F1/3296 , G11C13/00 , G06F1/3206 , G11C27/00
Abstract: Encoding of domain logic rules in an analog content addressable memory (aCAM) is disclosed. Encoding domain logic in an aCAM enables rapid and flexible search capabilities, including searches over ranges of analog values, fuzzy matching, and optimized parameter search. This is achieved with low latency and at low power, using only a small number of clock cycles. A domain logic ruleset may be represented using various data structures such as decision trees, directed graphs, or the like. These representations can be converted to a table of values, where each table column can be directly mapped to a corresponding row of the aCAM.
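The conversion from rules to a programmable table of analog ranges can be sketched with a toy ruleset, as below. The rules, feature names, and thresholds are invented for illustration, and where an aCAM would evaluate every programmed rule in parallel within a few clock cycles, the sketch simply scans the table.

```python
# Minimal sketch (toy example): representing domain-logic rules as a table of
# analog ranges, the form that can be programmed into an analog CAM. Each rule
# corresponds to one root-to-leaf path of a small decision tree, expressed as a
# (low, high) interval per input feature; a search returns every rule whose
# ranges all match the query.

RULES = {
    # rule_id: {feature: (low, high)}   -- None means "don't care"
    "approve": {"temp": (0.0, 0.5), "pressure": (0.2, 1.0)},
    "review":  {"temp": (0.5, 0.8), "pressure": None},
    "reject":  {"temp": (0.8, 1.0), "pressure": (0.0, 0.2)},
}

def acam_search(query):
    """Return the rules whose programmed ranges all contain the query values.
    An aCAM evaluates the rules in parallel; here they are scanned sequentially."""
    matches = []
    for rule_id, ranges in RULES.items():
        ok = True
        for feature, interval in ranges.items():
            if interval is None:          # wildcard cell matches anything
                continue
            low, high = interval
            if not (low <= query[feature] <= high):
                ok = False
                break
        if ok:
            matches.append(rule_id)
    return matches

print(acam_search({"temp": 0.3, "pressure": 0.6}))  # ['approve']
print(acam_search({"temp": 0.6, "pressure": 0.1}))  # ['review']
```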
-
Publication number: US10372602B2
Publication date: 2019-08-06
Application number: US15545901
Application date: 2015-01-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Sanketh Nalli , Haris Volos , Kimberly Keeton
IPC: G06F12/02 , G06F12/0804 , G06F12/0868
Abstract: Examples relate to ordering updates for nonvolatile memory accesses. In some examples, a first update that is propagated from a write-through processor cache of a processor is received by a write ordering buffer, where the first update is associated with a first epoch. The first update is stored in a first buffer entry of the write ordering buffer. At this stage, a second update that is propagated from the write-through processor cache is received, where the second update is associated with a second epoch. A second buffer entry of the write ordering buffer is allocated to store the second update. The first buffer entry and the second buffer entry can then be evicted to non-volatile memory in epoch order.
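A minimal sketch of the epoch-ordered eviction follows, assuming non-volatile memory can be modeled as an address-to-value map; the buffer structure and method names are hypothetical, not the patent's.

```python
# Minimal sketch (hypothetical structure): a write ordering buffer that captures
# updates propagated from a write-through cache, tags each with its epoch, and
# evicts buffer entries to non-volatile memory strictly in epoch order.

from collections import defaultdict

class WriteOrderingBuffer:
    def __init__(self, nvm):
        self.nvm = nvm                       # stands in for non-volatile memory
        self.entries = defaultdict(list)     # epoch -> buffered (addr, value) updates

    def receive(self, epoch, addr, value):
        # Allocate a buffer entry for the update propagated from the cache.
        self.entries[epoch].append((addr, value))

    def evict(self):
        # Drain epochs in increasing order so non-volatile memory never sees a
        # later epoch's update before an earlier epoch has been made durable.
        for epoch in sorted(self.entries):
            for addr, value in self.entries[epoch]:
                self.nvm[addr] = value
        self.entries.clear()

nvm = {}
buf = WriteOrderingBuffer(nvm)
buf.receive(epoch=2, addr=0x20, value="commit record")
buf.receive(epoch=1, addr=0x10, value="log entry")
buf.evict()
print(nvm)  # epoch 1's update reaches NVM before epoch 2's
```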
-
Publication number: US20180018258A1
Publication date: 2018-01-18
Application number: US15545901
Application date: 2015-01-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Sanketh Nalli , Haris Volos , Kimberly Keeton
IPC: G06F12/02
CPC classification number: G06F12/0238 , G06F12/0804 , G06F12/0868 , G06F2212/1028 , G06F2212/1032 , G06F2212/7203 , Y02D10/13
Abstract: Examples relate to ordering updates for nonvolatile memory accesses. In some examples, a first update that is propagated from a write-through processor cache of a processor is received by a write ordering buffer, where the first update is associated with a first epoch. The first update is stored in a first buffer entry of the write ordering buffer. At this stage, a second update that is propagated from the write-through processor cache is received, where the second update is associated with a second epoch. A second buffer entry of the write ordering buffer is allocated to store the second update. The first buffer entry and the second buffer entry can then be evicted to non-volatile memory in epoch order.
-
Publication number: US20160253398A1
Publication date: 2016-09-01
Application number: US15032825
Application date: 2013-12-06
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Sivashanmugam Jothivelavan , Kannan Rajkumar , Kannan K. Ramesh , Roy Annmary , Pendyala Jaipal , Hamilton De Freitas Coutinho , Guillherme De Campos Magalhaes , Marcelo Bandeira Condotta , Kimberly Keeton , Charles B. Morrey, III , Michael J. Spitzer
IPC: G06F17/30
CPC classification number: G06F16/27 , G06F16/178 , G06F16/184 , G06F16/86
Abstract: The present disclosure is generally related to replicating metadata. A method includes accessing a first file with a first unique identifier at a source location in a storage device, wherein metadata corresponding to the first file is stored in a first database with the first unique identifier. The method includes replicating the first file to produce a second file at a target location, wherein the second file has a second unique identifier. The method includes replicating the metadata and the first unique identifier to a second database. The method includes mapping the second unique identifier to the first unique identifier in the second database.
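The identifier bookkeeping in this method can be sketched as follows, using UUIDs for the unique identifiers and plain dictionaries for the two databases; the schema and function names are illustrative assumptions, not the patent's.

```python
# Minimal sketch (illustrative schema): replicating a file's metadata from a
# source (first) database to a target (second) database, then mapping the
# replica's unique identifier back to the original file's identifier.

import uuid

source_db = {}   # first database: first_unique_id -> metadata
target_db = {}   # second database: second_unique_id -> metadata (incl. first ID)
id_mapping = {}  # second_unique_id -> first_unique_id, kept in the second database

def create_source_file(metadata):
    first_id = str(uuid.uuid4())
    source_db[first_id] = metadata          # metadata stored with the first unique ID
    return first_id

def replicate(first_id):
    # Replicating the file at the target location yields a second unique identifier.
    second_id = str(uuid.uuid4())
    # Replicate the metadata and the first identifier into the second database,
    # and map the second identifier to the first one.
    target_db[second_id] = dict(source_db[first_id], source_id=first_id)
    id_mapping[second_id] = first_id
    return second_id

first = create_source_file({"name": "report.txt", "owner": "kim"})
second = replicate(first)
print(id_mapping[second] == first)   # True: the replica resolves back to the source file
```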
-
Publication number: US20240419490A1
Publication date: 2024-12-19
Application number: US18816471
Application date: 2024-08-27
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Dejan S. Milojicic , Kimberly Keeton , Paolo Faraboschi , Cullen E. Bash
Abstract: Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to “warm” resources and initiate a fault process if the resource is overloaded or a cache miss is identified (e.g., by restarting or rebooting the resource). The identified warm instances or accelerators associated with the allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
-
Publication number: US10942824B2
Publication date: 2021-03-09
Application number: US16153833
Application date: 2018-10-08
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Haris Volos , Kimberly Keeton , Sharad Singhal , Yupu Zhang
Abstract: Exemplary embodiments herein describe programming models and frameworks for providing parallel and resilient tasks. Tasks are created in accordance with predetermined structures. Defined tasks are stored as data objects in a shared pool of memory that is made up of disaggregated memory communicatively coupled via a high performance interconnect that supports atomic operations as described herein. Heterogeneous compute nodes are configured to execute tasks stored in the shared memory. When compute nodes fail, they do not impact the shared memory, the tasks or other data stored in the shared memory, or the other non-failing compute nodes. The non-failing compute nodes can take on the responsibility of executing tasks owned by other compute nodes, including tasks of a compute node that fails, without needing a centralized manager or scheduler to re-assign those tasks. Task processing can therefore be performed in parallel and without impact from node failures.
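The takeover behavior can be sketched with a toy task layout, as below. The task record fields and the takeover routine are illustrative assumptions; in a real system a survivor would claim tasks with the interconnect's atomic operations rather than a plain assignment.

```python
# Minimal sketch (hypothetical layout): tasks stored as objects in a shared
# memory pool, each owned by a compute node. When a node fails, a surviving
# node claims and runs the failed node's unfinished tasks directly from shared
# memory, with no centralized manager reassigning them.

shared_pool = {
    # task_id: {"owner": node, "state": "pending" | "done", "payload": ...}
    "t1": {"owner": "node-a", "state": "pending", "payload": 10},
    "t2": {"owner": "node-b", "state": "pending", "payload": 20},
}
live_nodes = {"node-a", "node-b"}

def run_tasks(node):
    # A node executes the pending tasks it owns; results go back to shared memory.
    for task in shared_pool.values():
        if task["owner"] == node and task["state"] == "pending":
            task["result"] = task["payload"] * 2      # stand-in for real work
            task["state"] = "done"

def take_over_failed(survivor):
    # Survivors scan shared memory for tasks owned by dead nodes and claim them.
    for task in shared_pool.values():
        if task["owner"] not in live_nodes and task["state"] == "pending":
            task["owner"] = survivor

run_tasks("node-a")
live_nodes.discard("node-b")      # node-b fails before running its task
take_over_failed("node-a")
run_tasks("node-a")
print({tid: t["state"] for tid, t in shared_pool.items()})  # both tasks done
```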
-