-
Publication No.: US20240111993A1
Publication Date: 2024-04-04
Application No.: US17958189
Filing Date: 2022-09-30
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: DARIO KOROLIJA , Kun Wu , Sai Rahul Chalamalasetti , Lance Mackimmie Evans , Dejan S. Milojicic
IPC: G06N3/04 , G06F16/2455 , G06F16/28 , G06N3/08
CPC classification number: G06N3/0445 , G06F16/24552 , G06F16/288 , G06N3/08
Abstract: Systems and methods are provided for performing object store offloading. A user query can be received from a client device to access a data object. The semantic structure associated with the data object can be identified, along with one or more relationships associated with that structure. A view of the data object can be determined based on the one or more relationships, and the view can be provided to a user interface.
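The query flow this abstract describes can be sketched in a few lines: look up the relationships implied by an object's semantic structure and assemble them into a view. All names here (`RELATIONSHIPS`, `resolve_view`, the object IDs) are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch: identify an object's semantic relationships and build
# a view from them. The relationship map stands in for whatever the
# object store derives from the object's semantic structure.

RELATIONSHIPS = {
    # object -> objects related to it via its semantic structure
    "order:42": ["customer:7", "invoice:9"],
    "customer:7": [],
    "invoice:9": [],
}

def resolve_view(object_id, relationships):
    """Return a view: the requested object plus its related objects."""
    related = relationships.get(object_id, [])
    return {"object": object_id, "related": related}

view = resolve_view("order:42", RELATIONSHIPS)
```

The view dict is what would be handed back to the user interface in the abstract's final step.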
-
Publication No.: US11443036B2
Publication Date: 2022-09-13
Application No.: US16526388
Filing Date: 2019-07-30
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Naysen Robertson , Sai Rahul Chalamalasetti , William James Walker
Abstract: In some examples, an apparatus includes a management controller for use in a computer system having a processing resource that executes the computer system's operating system (OS). The management controller is separate from the processing resource and, operating within a cryptographic boundary, performs management of components of the computer system, including power control of the computer system. The management controller is to receive sensor data, perform facial recognition based on the sensor data, and determine whether to initiate a security action responsive to the facial recognition.
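The decision step at the end of the abstract can be sketched as follows. The recognizer is a stub and the authorized-identity set, function names, and sensor-frame format are all hypothetical; the patent does not specify them.

```python
# Hedged sketch: the management controller receives sensor data, runs
# facial recognition, and decides whether a security action (e.g., a
# power-control response) should be initiated.

AUTHORIZED = {"alice", "bob"}   # assumed allow-list of recognized faces

def recognize(sensor_frame):
    """Stand-in for the controller's facial-recognition step."""
    return sensor_frame.get("face")

def security_action_needed(sensor_frame):
    face = recognize(sensor_frame)
    # An unrecognized or unauthorized face triggers a security action.
    return face not in AUTHORIZED

alert_unknown = security_action_needed({"face": "mallory"})
alert_known = security_action_needed({"face": "alice"})
```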
-
Publication No.: US20220121885A1
Publication Date: 2022-04-21
Application No.: US17074201
Filing Date: 2020-10-19
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Sai Rahul Chalamalasetti , Dejan S. Milojicic , Sergey Serebryakov
Abstract: Testing for bias in a machine learning (ML) model in a manner that is independent of the code/weights deployment path is described. If bias is detected, an alert for bias is generated, and optionally, the ML model can be incrementally re-trained to mitigate the detected bias. Re-training the ML model to mitigate the bias may include enforcing a bias cost function to maintain a level of bias in the ML model below a threshold bias level. One or more statistical metrics representing the level of bias present in the ML model may be determined and compared against one or more threshold values. If one or more metrics exceed corresponding threshold value(s), the level of bias in the ML model may be deemed to exceed a threshold level of bias, and re-training of the ML model to mitigate the bias may be initiated.
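The statistical check the abstract outlines can be sketched by computing one bias metric over model outputs grouped by a protected attribute and comparing it to a threshold. Demographic parity difference is one common choice of metric; the metric, threshold value, and all names below are our assumptions, not the patent's.

```python
# Sketch of a deployment-path-independent bias check: compute a bias
# metric over predictions grouped by a protected attribute; if it
# exceeds the threshold, flag the model for alerting / re-training.

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    counts = {}
    for p, g in zip(preds, groups):
        n_pos, n = counts.get(g, (0, 0))
        counts[g] = (n_pos + (p == 1), n + 1)
    rates = [n_pos / n for n_pos, n in counts.values()]
    return max(rates) - min(rates)

THRESHOLD = 0.2                       # assumed threshold bias level
preds  = [1, 1, 1, 0, 0, 1, 0, 0]     # model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

bias = demographic_parity_difference(preds, groups)
retrain = bias > THRESHOLD            # would trigger alert / re-training
```

Here group "a" receives positive predictions at rate 0.75 and group "b" at 0.25, so the gap of 0.5 exceeds the threshold and re-training would be initiated.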
-
Publication No.: US11947928B2
Publication Date: 2024-04-02
Application No.: US17017557
Filing Date: 2020-09-10
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Craig Warner , Eun Sub Lee , Sai Rahul Chalamalasetti , Martin Foltin
CPC classification number: G06F7/5443 , G06F9/3867 , G06F9/522 , G06F40/20 , G06N3/063
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs the inference computations for deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation such that the deep learning operations span the plurality of DPE chips. The multi-die architecture allows multiple silicon devices to be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system for specific applications, such as object recognition, with high accuracy.
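Spanning a deep-learning operation across dies, as the abstract describes, can be sketched with a matrix-vector product partitioned by rows: each "chip" owns a slice of the weight matrix, computes its partial result, and the host concatenates the slices. The chips are modeled as plain functions; the partitioning scheme and all names are illustrative assumptions.

```python
# Sketch: a dot-product workload spanned across multiple DPE chips,
# each holding a row-slice of the weight matrix.

def make_chip(weight_rows):
    """One DPE chip programmed with a slice of the weight matrix."""
    def dot(x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in weight_rows]
    return dot

W = [[1, 0], [0, 1], [2, 2], [3, 1]]          # full 4x2 weight matrix
chips = [make_chip(W[:2]), make_chip(W[2:])]  # rows split across 2 dies

def infer(x):
    out = []
    for chip in chips:        # host streams the input to every die
        out.extend(chip(x))   # concatenate the per-die partial outputs
    return out

y = infer([1, 2])
```

The concatenated result equals the single-die product `W @ x`, which is what lets the operation scale out across silicon devices.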
-
Publication No.: US11861429B2
Publication Date: 2024-01-02
Application No.: US17049031
Filing Date: 2018-04-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: John Paul Strachan , Dejan S. Milojicic , Martin Foltin , Sai Rahul Chalamalasetti , Amit S. Sharma
Abstract: In some examples, a device includes a first processing core comprising a resistive memory array to perform an analog computation, and a digital processing core comprising a digital memory programmable with different values to perform different computations responsive to respective different conditions. The device further includes a controller to selectively apply input data to the first processing core and the digital processing core.
-
Publication No.: US20220075597A1
Publication Date: 2022-03-10
Application No.: US17017557
Filing Date: 2020-09-10
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Craig Warner , Eun Sub Lee , Sai Rahul Chalamalasetti , Martin Foltin
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs the inference computations for deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation such that the deep learning operations span the plurality of DPE chips. The multi-die architecture allows multiple silicon devices to be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system for specific applications, such as object recognition, with high accuracy.
-
Publication No.: US20200065150A1
Publication Date: 2020-02-27
Application No.: US16110516
Filing Date: 2018-08-23
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Zhikui Wang , Antonio Lain , Sai Rahul Chalamalasetti , Anshuman Goswami
Abstract: A method for allocating resources includes determining that an initial allocation of memory bandwidth for one or more computing jobs fails a performance metric. The memory bandwidth provides access to a global memory pool for multiple legacy processors across a memory fabric. The method also includes determining a new allocation of memory bandwidth for the computing jobs that meets the performance metric. Additionally, the method includes assigning the new allocation of memory bandwidth to the computing jobs. The method further includes executing the computing jobs using the new allocation of memory bandwidth.
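The allocation loop the abstract describes (fail the metric, compute a new allocation that meets it, assign it) can be sketched as below. The performance model, step size, and cap are toy assumptions; a real fabric would measure throughput rather than model it.

```python
# Sketch: grow a memory-bandwidth allocation until the computing jobs'
# performance metric is met, then assign that allocation.

REQUIRED_THROUGHPUT = 100.0   # assumed performance metric (units/s)

def meets_metric(bandwidth_gbps):
    # Toy model: job throughput scales linearly with fabric bandwidth.
    return bandwidth_gbps * 10.0 >= REQUIRED_THROUGHPUT

def allocate(initial_gbps, step_gbps=1.0, cap_gbps=64.0):
    """Return a new allocation that meets the metric (or the cap)."""
    bw = initial_gbps
    while not meets_metric(bw) and bw < cap_gbps:
        bw += step_gbps       # the initial allocation failed: grow it
    return bw

new_bw = allocate(4.0)        # initial 4 GB/s fails; 10 GB/s meets it
```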
-
Publication No.: US20170371561A1
Publication Date: 2017-12-28
Application No.: US15190276
Filing Date: 2016-06-23
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Qiong Cai , Paolo Faraboschi , Cong Xu , Ping Chi , Sai Rahul Chalamalasetti , Andrew C. Walton
IPC: G06F3/06
Abstract: Techniques for reallocating a memory pending queue based on stalls are provided. In one aspect, it may be determined at a memory stop of a memory fabric that at least one class of memory access is stalled. It may also be determined at the memory stop of the memory fabric that there is at least one class of memory access that is not stalled. At least a portion of a memory pending queue may be reallocated from the class of memory access that is not stalled to the class of memory access that is stalled.
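The reallocation step can be sketched as a transfer of pending-queue slots from the classes that are not stalled to the classes that are. The class names and the transfer fraction are assumptions; the patent leaves the portion unspecified.

```python
# Sketch: at a memory stop, move a portion of each non-stalled memory
# access class's pending-queue slots to the stalled class(es).

def reallocate(queue_slots, stalled, fraction=0.25):
    """Shift `fraction` of each non-stalled class's slots to stalled ones."""
    donors = [c for c in queue_slots if c not in stalled]
    for donor in donors:
        moved = int(queue_slots[donor] * fraction)
        queue_slots[donor] -= moved
        per_class = moved // len(stalled)
        for c in stalled:
            queue_slots[c] += per_class
    return queue_slots

slots = {"read": 16, "write": 16}          # pending-queue entries
slots = reallocate(slots, stalled={"read"})
```

After the transfer the stalled read class holds more queue capacity at the expense of the write class that was making progress.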
-
Publication No.: US12204961B2
Publication Date: 2025-01-21
Application No.: US18528086
Filing Date: 2023-12-04
Applicant: Hewlett Packard Enterprise Development LP
Inventor: John Paul Strachan , Dejan S. Milojicic , Martin Foltin , Sai Rahul Chalamalasetti , Amit S. Sharma
Abstract: In some examples, a device includes a first processing core comprising a resistive memory array to perform an analog computation, and a digital processing core comprising a digital memory programmable with different values to perform different computations responsive to respective different conditions. The device further includes a controller to selectively apply input data to the first processing core and the digital processing core.
-
Publication No.: US20230170991A1
Publication Date: 2023-06-01
Application No.: US17539275
Filing Date: 2021-12-01
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Hyunmin Jeong , Sai Rahul Chalamalasetti , Marco Fiorentino , Peter Jin Rhim
IPC: H04B10/079
CPC classification number: H04B10/07955
Abstract: Examples described herein relate to a method for synchronizing a wavelength of light in an optical device. In some examples, a heater voltage may be predicted for a heater disposed adjacent to the optical device in a photonic chip. The predicted heater voltage may be applied to the heater to cause a change in the wavelength of the light inside the optical device. In response to applying the heater voltage, an optical power inside the optical device may be measured. Further, a check may be performed to determine whether the measured optical power is a peak optical power. If the measured optical power is determined to be the peak optical power, the application of the predicted heater voltage to the heater may be continued.
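The predict-apply-measure-check loop can be sketched as a simple hill-climb on heater voltage: keep stepping while the measured optical power improves, and hold the voltage once the peak is reached. The power model, step size, and peak location are toy assumptions standing in for real photodetector measurements.

```python
# Sketch: step the heater voltage uphill in measured optical power
# until the peak is found, then hold that voltage.

def optical_power(voltage, v_peak=1.3):
    # Toy resonance model: power is highest when voltage hits v_peak.
    return 1.0 / (1.0 + (voltage - v_peak) ** 2)

def lock_to_peak(v_predicted, step=0.05, max_iters=200):
    v, p = v_predicted, optical_power(v_predicted)
    direction = 1.0
    for _ in range(max_iters):
        v_next = v + direction * step
        p_next = optical_power(v_next)
        if p_next > p:
            v, p = v_next, p_next     # power improved: keep climbing
        else:
            direction = -direction    # overshoot: reverse direction
            if optical_power(v + direction * step) <= p:
                break                 # neither side improves: at peak
    return v                          # continue applying this voltage

v_locked = lock_to_peak(1.0)          # predicted voltage below the peak
```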
-