-
Publication No.: US20220291952A1
Publication Date: 2022-09-15
Application No.: US17198871
Filing Date: 2021-03-11
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Dejan S. Milojicic, Kimberly Keeton, Paolo Faraboschi, Cullen E. Bash
Abstract: Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to “warm” resources and initiate a fault process if the resource is overloaded or a cache miss is identified (e.g., by restarting or rebooting the resource). The identified warm instances or accelerators associated with the allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
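A minimal sketch of the dispatch behavior the abstract describes: route a request to a resource already “warm” for it, and trigger a fault process (restart) when a resource is overloaded. All class and field names here are illustrative assumptions, not taken from the patent.

```python
class Resource:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.load = 0
        self.warm_keys = set()   # functions this resource is "warm" for

    def is_overloaded(self):
        return self.load >= self.capacity

    def restart(self):
        # the "fault process": rebooting clears load and warm state
        self.load = 0
        self.warm_keys.clear()

class Dispatcher:
    def __init__(self, resources):
        self.resources = resources

    def dispatch(self, key):
        # prefer a resource that is already warm for this key
        for r in self.resources:
            if key in r.warm_keys and not r.is_overloaded():
                r.load += 1
                return r.name
        # cache miss: fall back to the least-loaded resource,
        # restarting it first if it is overloaded
        r = min(self.resources, key=lambda x: x.load)
        if r.is_overloaded():
            r.restart()
        r.warm_keys.add(key)
        r.load += 1
        return r.name
```

A warm request sticks to the same resource until that resource saturates, at which point the dispatcher falls over to another one.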
-
Publication No.: US11200345B2
Publication Date: 2021-12-14
Application No.: US15746494
Filing Date: 2015-07-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Mark Lillibridge, Paolo Faraboschi, Chris I. Dalton
Abstract: Techniques for a firewall to determine access to a portion of memory are provided. In one aspect, an access request to access a portion of memory within a pool of shared memory may be received at a firewall. The firewall may determine whether the access request to access the portion of memory is allowed. The access request may be allowed to proceed based on the determination. The operation of the firewall may not utilize address translation.
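An illustrative sketch of the firewall check (names and rule structure are assumptions): per-node access rules over physical address ranges of the shared pool, checked directly against the requested address with no address translation step.

```python
class MemoryFirewall:
    def __init__(self):
        # rules: node_id -> list of (start, end, allowed_ops)
        self.rules = {}

    def grant(self, node_id, start, end, ops):
        self.rules.setdefault(node_id, []).append((start, end, frozenset(ops)))

    def check(self, node_id, addr, op):
        # the request carries the pool address as-is; allow it only if a
        # rule for this node covers both the address and the operation
        for start, end, ops in self.rules.get(node_id, []):
            if start <= addr < end and op in ops:
                return True
        return False
```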
-
Publication No.: US20210049125A1
Publication Date: 2021-02-18
Application No.: US17072918
Filing Date: 2020-10-16
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Dejan S. Milojicic, Kirk M. Bresniker, Paolo Faraboschi, John Paul Strachan
Abstract: A method of computing in memory, the method including inputting a packet including data into a computing memory unit having a control unit, loading the data into at least one computing in memory micro-unit, processing the data in the computing in memory micro-unit, and outputting the processed data. Also, a computing in memory system including a computing in memory unit having a control unit, wherein the computing in memory unit is configured to receive a packet having data and a computing in memory micro-unit disposed in the computing in memory unit, the computing in memory micro-unit having at least one of a memory matrix and a logic elements matrix.
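A generic sketch of the described flow (all names and the operation are illustrative): a control unit receives a packet, loads its data into a compute-in-memory micro-unit, processes it in place, and outputs the result.

```python
class CimMicroUnit:
    """Stands in for a memory matrix / logic-elements matrix."""
    def __init__(self, op):
        self.op = op        # the in-memory operation this micro-unit applies
        self.buffer = []

    def load(self, data):
        self.buffer = list(data)

    def process(self):
        return [self.op(x) for x in self.buffer]

class CimUnit:
    def __init__(self, micro_unit):
        self.micro_unit = micro_unit   # the control unit owns the micro-unit

    def handle_packet(self, packet):
        self.micro_unit.load(packet["data"])   # load step
        return self.micro_unit.process()       # process and output steps
```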
-
Publication No.: US10884953B2
Publication Date: 2021-01-05
Application No.: US15693149
Filing Date: 2017-08-31
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Dejan S Milojicic, Chris I Dalton, Paolo Faraboschi, Kirk M Bresniker
IPC: G06F12/14
Abstract: Example implementations relate to a capability enforcement processor. In an example, a capability enforcement processor may be interposed between a memory that stores data accessible via capabilities and a system processor that executes processes. The capability enforcement processor intercepts a memory request from the system processor and enforces the memory request based on capability enforcement processor capabilities maintained in per-process capability spaces of the capability enforcement processor.
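A hedged sketch of the interposition idea: a capability enforcement layer sits between the processor and memory, intercepts each memory request, and validates it against that process's capability space before forwarding an address. The structures and field layout are assumptions for illustration.

```python
class CapabilityEnforcer:
    def __init__(self):
        # per-process capability spaces: pid -> {cap_id: (base, length, perms)}
        self.spaces = {}

    def install(self, pid, cap_id, base, length, perms):
        self.spaces.setdefault(pid, {})[cap_id] = (base, length, frozenset(perms))

    def access(self, pid, cap_id, offset, op):
        # intercept the request: resolve the capability in this process's
        # space, bounds-check the offset, and check the permission before
        # forwarding the resulting address to memory
        cap = self.spaces.get(pid, {}).get(cap_id)
        if cap is None:
            raise PermissionError("no such capability")
        base, length, perms = cap
        if not (0 <= offset < length) or op not in perms:
            raise PermissionError("capability violation")
        return base + offset
```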
-
Publication No.: US20200350991A1
Publication Date: 2020-11-05
Application No.: US16399176
Filing Date: 2019-04-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Thomas Van Vaerenbergh, Raymond G. Beausoleil, Kevin B. Leigh, Di Liang, Terrel Morris, Paolo Faraboschi
Abstract: Examples herein relate to optical systems. In particular, implementations herein relate to an optical system including a bidirectional optical link such as an optical fiber. The optical system includes first and second optical modules coupled to opposing ends of the optical fiber. The first optical module is configured to transmit optical signals across the optical fiber in a first direction and the second optical module is configured to transmit optical signals across the optical fiber in a second direction opposite the first direction. Each of the first and second optical modules includes a multi-wavelength optical source configured to emit light. Respective channel spacing of the multi-wavelength optical sources of the first and second optical modules are offset from each other such that the respective wavelengths of the emitted light transmitted across the optical fiber from the first and second optical sources do not overlap.
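A small numeric sketch of the non-overlap condition: two sources with the same channel spacing but an offset grid never emit the same wavelength, so both directions can share one fiber. The wavelengths and spacing below are illustrative values, not from the patent.

```python
def channel_grid(start_nm, spacing_nm, count):
    # wavelengths emitted by one multi-wavelength source
    return [round(start_nm + i * spacing_nm, 3) for i in range(count)]

def grids_overlap(grid_a, grid_b):
    return bool(set(grid_a) & set(grid_b))

# the two modules use the same spacing, offset by half a channel
grid1 = channel_grid(1550.0, 0.8, 8)   # first optical module
grid2 = channel_grid(1550.4, 0.8, 8)   # second, offset by 0.4 nm
```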
-
Publication No.: US10740235B2
Publication Date: 2020-08-11
Application No.: US15746465
Filing Date: 2015-07-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Alexandros Daglis, Paolo Faraboschi, Qiong Cai, Gary Gostin
IPC: G06F12/08, G06F12/0817, G06F12/14, G06F12/0831, G06F12/0811
Abstract: A technique includes, in response to a cache miss occurring with a given processing node of a plurality of processing nodes, using a directory-based coherence system for the plurality of processing nodes to regulate snooping of an address that is associated with the cache miss. Using the directory-based coherence system to regulate whether the address is included in a snooping domain is based at least in part on a number of cache misses associated with the address.
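A sketch of the regulation idea (the threshold and data structures are assumptions): the directory counts cache misses per address and only promotes frequently missed addresses into the snooping domain, leaving cold addresses under directory management.

```python
class CoherenceDirectory:
    def __init__(self, snoop_threshold=3):
        self.snoop_threshold = snoop_threshold
        self.miss_counts = {}
        self.snoop_domain = set()

    def record_miss(self, addr):
        # called when any processing node misses on this address
        n = self.miss_counts.get(addr, 0) + 1
        self.miss_counts[addr] = n
        # regulate snooping: hot addresses graduate into the snoop domain
        if n >= self.snoop_threshold:
            self.snoop_domain.add(addr)

    def should_snoop(self, addr):
        return addr in self.snoop_domain
```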
-
Publication No.: US10691375B2
Publication Date: 2020-06-23
Application No.: US15545915
Filing Date: 2015-01-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Vanish Talwar, Paolo Faraboschi, Daniel Gmach, Yuan Chen, Al Davis, Adit Madan
IPC: G06F3/06, G06F15/173
Abstract: In one example, a memory network may control access to a shared memory that is shared by multiple compute nodes. The memory network may control the access to the shared memory by receiving a memory access request originating from an application executing on the multiple compute nodes and determining a priority for processing the memory access request. The priority determined by the memory network may correspond to a memory address range in the memory that is specifically used by the application.
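An illustrative sketch of this prioritization: the memory network maps an application's address range to a priority and services higher-priority requests first. The range boundaries and priority values below are assumptions.

```python
import heapq

class MemoryNetwork:
    def __init__(self):
        self.range_priority = []   # (start, end, priority)
        self.queue = []
        self.seq = 0

    def set_priority(self, start, end, priority):
        self.range_priority.append((start, end, priority))

    def priority_for(self, addr):
        for start, end, prio in self.range_priority:
            if start <= addr < end:
                return prio
        return 0   # default priority for unmapped addresses

    def submit(self, addr):
        # higher priority values are served first (negated for the
        # min-heap); seq keeps FIFO order within a priority level
        heapq.heappush(self.queue, (-self.priority_for(addr), self.seq, addr))
        self.seq += 1

    def next_request(self):
        return heapq.heappop(self.queue)[2]
```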
-
Publication No.: US20200097440A1
Publication Date: 2020-03-26
Application No.: US16139913
Filing Date: 2018-09-24
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Dejan S. Milojicic, Kirk M. Bresniker, Paolo Faraboschi, John Paul Strachan
Abstract: A method of computing in memory, the method including inputting a packet including data into a computing memory unit having a control unit, loading the data into at least one computing in memory micro-unit, processing the data in the computing in memory micro-unit, and outputting the processed data. Also, a computing in memory system including a computing in memory unit having a control unit, wherein the computing in memory unit is configured to receive a packet having data and a computing in memory micro-unit disposed in the computing in memory unit, the computing in memory micro-unit having at least one of a memory matrix and a logic elements matrix.
-
Publication No.: US10592437B2
Publication Date: 2020-03-17
Application No.: US15664101
Filing Date: 2017-07-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Geoffrey Ndu, Dejan S. Milojicic, Paolo Faraboschi, Chris I. Dalton
Abstract: Memory blocks are associated with each memory level of a hierarchy of memory levels. Each memory block has a matching key capability (MaKC). The MaKC of a memory block governs access to the memory block, in accordance with permissions specified by the MaKC. The MaKC of a memory block can uniquely identify the memory block across the hierarchy of memory levels, and can be globally unique across the memory blocks. An MaKC of a memory block includes a block protection key (BPK) stored with the memory block, and an execution protection key (EPK). If a provided EPK for a memory block matches the memory block's BPK upon comparison, access to the memory block is allowed according to the permissions specified by the MaKC.
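A minimal sketch of the matching-key check: each block stores a block protection key (BPK), an access supplies an execution protection key (EPK), and the access proceeds only when the keys match and the operation is permitted. Key widths and the permission encoding are illustrative assumptions.

```python
class MemoryBlock:
    def __init__(self, bpk, perms, data=0):
        self.bpk = bpk                 # block protection key, stored with the block
        self.perms = frozenset(perms)  # permissions specified by the MaKC
        self.data = data

def access_block(block, epk, op):
    # MaKC check: the provided EPK must equal the block's BPK, and the
    # operation must fall within the MaKC's permissions
    if epk != block.bpk or op not in block.perms:
        raise PermissionError("MaKC check failed")
    return block.data if op == "read" else None
```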
-
Publication No.: US20180285011A1
Publication Date: 2018-10-04
Application No.: US15476185
Filing Date: 2017-03-31
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Kaisheng Ma, Qiong Cai, Cong Xu, Paolo Faraboschi
IPC: G06F3/06
CPC classification number: G06F3/0631, G06F3/061, G06F3/0683, G06F9/4881, G06F9/5044, G06F15/7821
Abstract: Examples described herein include receiving an operation pipeline for a computing system and building a graph that comprises a model for a number of potential memory side accelerator thread assignments to carry out the operation pipeline. The computing system may comprise at least two memories and a number of memory side accelerators. Each model may comprise a number of steps and at least one step out of the number of steps in each model may comprise a function performed at one memory side accelerator out of the number of memory side accelerators. Examples described herein also include determining a cost of at least one model.
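A hedged sketch of the cost-model idea: enumerate assignments of pipeline steps to memory-side accelerators, cost each assignment, and keep the cheapest. The cost function (a per-step compute term plus a data-movement term between consecutive steps) is an assumption for illustration.

```python
from itertools import product

def assignment_cost(steps, assignment, compute_cost, move_cost):
    # compute_cost[(step, accel)]: cost of running a step on an accelerator
    # move_cost[(a, b)]: cost of moving data between accelerators a and b
    cost = sum(compute_cost[(s, a)] for s, a in zip(steps, assignment))
    for a, b in zip(assignment, assignment[1:]):
        if a != b:
            cost += move_cost[(a, b)]
    return cost

def best_assignment(steps, accels, compute_cost, move_cost):
    # exhaustively walk the graph of potential thread assignments
    candidates = product(accels, repeat=len(steps))
    return min(candidates,
               key=lambda asg: assignment_cost(steps, asg, compute_cost, move_cost))
```

Exhaustive enumeration is exponential in the pipeline length; it stands in here for whatever pruning the real model would use.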