-
Publication Number: US20220066831A1
Publication Date: 2022-03-03
Application Number: US17008549
Filing Date: 2020-08-31
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: MATTHEW S. GATES , Joel E. Lilienkamp , Alex Veprinsky , Susan Agten
IPC: G06F9/50 , G06F12/0877
Abstract: Systems and methods are provided for lock-free thread scheduling. Threads may be placed in a ring buffer shared by all computer processing units (CPUs), e.g., in a node. A thread assigned to a CPU may be placed in the CPU's local run queue. However, when a CPU's local run queue is cleared, that CPU checks the shared ring buffer to determine if any threads are waiting to run on that CPU, and if so, the CPU pulls a batch of threads related to that ready-to-run thread to execute. If not, an idle CPU randomly selects another CPU to steal threads from, and the idle CPU attempts to dequeue a thread batch associated with that CPU from the shared ring buffer. Polling may be handled through the use of a shared poller array to dynamically distribute polling across multiple CPUs.
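The batching and stealing flow described in the abstract can be sketched as follows. This is an illustrative Python model of the queue structure only (Python cannot express the lock-free atomics the patent relies on); all class and method names here are hypothetical, not taken from the patent.

```python
import random
from collections import deque

class Scheduler:
    """Sketch of batched work stealing: local run queues per CPU plus a
    shared ring buffer of (target_cpu, thread) entries."""

    def __init__(self, num_cpus):
        self.num_cpus = num_cpus
        self.local_runqueues = [deque() for _ in range(num_cpus)]
        # Shared ring buffer holding threads destined for any CPU.
        self.shared_ring = deque()

    def submit(self, cpu_id, thread):
        """Enqueue a thread targeted at a particular CPU."""
        self.shared_ring.append((cpu_id, thread))

    def next_batch(self, cpu_id):
        """Local queue first; then the shared ring for this CPU; then
        steal the batch targeted at a randomly chosen other CPU."""
        if self.local_runqueues[cpu_id]:
            return [self.local_runqueues[cpu_id].popleft()]
        batch = self._dequeue_batch(cpu_id)
        if batch:
            return batch
        victim = random.choice(
            [c for c in range(self.num_cpus) if c != cpu_id])
        return self._dequeue_batch(victim)

    def _dequeue_batch(self, target_cpu):
        """Pull every ring entry targeted at target_cpu as one batch."""
        batch = [t for c, t in self.shared_ring if c == target_cpu]
        if batch:
            self.shared_ring = deque(
                (c, t) for c, t in self.shared_ring if c != target_cpu)
        return batch
```

With two CPUs, an idle CPU 0 whose local queue and ring entries are empty will steal the whole batch that was queued for CPU 1, mirroring the abstract's fallback order.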
-
Publication Number: US20240037072A1
Publication Date: 2024-02-01
Application Number: US17816056
Filing Date: 2022-07-29
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Robert Michael Lester , Susan Agten , Matthew S. Gates , Alex Veprinsky
IPC: G06F16/174 , G06F3/06
CPC classification number: G06F16/1744 , G06F3/0608 , G06F3/0641 , G06F3/0658 , G06F3/0673
Abstract: Example implementations relate to storing data in a storage system. An example includes receiving, by a storage controller of a storage system, a data unit to be stored in persistent storage of the storage system. The storage controller determines maximum and minimum entropy values for the received data unit. The storage controller determines, based on at least the minimum entropy value and the maximum entropy value, whether the received data unit is viable for data reduction. In response to a determination that the received data unit is viable for data reduction, the storage controller performs at least one reduction operation on the received data unit.
-
Publication Number: US11853221B2
Publication Date: 2023-12-26
Application Number: US17651648
Filing Date: 2022-02-18
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Xiali He , Alex Veprinsky , Matthew S. Gates , William Michael McCormack , Susan Agten
IPC: G06F12/0862
CPC classification number: G06F12/0862 , G06F2212/602
Abstract: In some examples, a system dynamically adjusts a prefetching load with respect to a prefetch cache based on a measure of past utilizations of the prefetch cache, wherein the prefetching load is to prefetch data from storage into the prefetch cache.
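One way to realize "adjust the prefetching load based on past utilizations" is a feedback loop over a moving average of how much prefetched data was actually read. The sketch below assumes an exponential moving average and power-of-two depth steps; every constant and name is an illustrative assumption, not taken from the patent.

```python
class PrefetchTuner:
    """Sketch: grow/shrink prefetch depth from an exponential moving
    average (EMA) of prefetch-cache utilization. Illustrative only."""

    def __init__(self, depth=8, min_depth=1, max_depth=64, alpha=0.2):
        self.depth = depth            # blocks to prefetch per trigger
        self.min_depth = min_depth
        self.max_depth = max_depth
        self.alpha = alpha            # EMA smoothing factor
        self.utilization = 0.0        # EMA of fraction of prefetched data used

    def record(self, used, prefetched):
        """Fold one interval's (used, prefetched) counts into the EMA,
        then adjust the prefetch depth if utilization is extreme."""
        sample = used / prefetched if prefetched else 0.0
        self.utilization = ((1 - self.alpha) * self.utilization
                            + self.alpha * sample)
        if self.utilization > 0.75:
            self.depth = min(self.depth * 2, self.max_depth)   # ramp up
        elif self.utilization < 0.25:
            self.depth = max(self.depth // 2, self.min_depth)  # back off
        return self.depth
```

Sustained high utilization doubles the depth toward the cap; sustained misses halve it toward the floor, which matches the abstract's goal of sizing the prefetching load to observed cache usefulness.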
-
Publication Number: US20230315526A1
Publication Date: 2023-10-05
Application Number: US18326870
Filing Date: 2023-05-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Matthew Gates , Joel E. Lilienkamp , Alex Veprinsky , Susan Agten
IPC: G06F9/50
CPC classification number: G06F9/5027 , G06F2212/2542 , G06F9/4881
Abstract: Systems and methods are provided for lock-free thread scheduling. Threads may be placed in a ring buffer shared by all computer processing units (CPUs), e.g., in a node. A thread assigned to a CPU may be placed in the CPU's local run queue. However, when a CPU's local run queue is cleared, that CPU checks the shared ring buffer to determine if any threads are waiting to run on that CPU, and if so, the CPU pulls a batch of threads related to that ready-to-run thread to execute. If not, an idle CPU randomly selects another CPU to steal threads from, and the idle CPU attempts to dequeue a thread batch associated with that CPU from the shared ring buffer. Polling may be handled through the use of a shared poller array to dynamically distribute polling across multiple CPUs.
-
Publication Number: US12072844B2
Publication Date: 2024-08-27
Application Number: US17816056
Filing Date: 2022-07-29
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Robert Michael Lester , Susan Agten , Matthew S. Gates , Alex Veprinsky
IPC: G06F16/00 , G06F3/06 , G06F16/174
CPC classification number: G06F16/1744 , G06F3/0608 , G06F3/0641 , G06F3/0658 , G06F3/0673
Abstract: Example implementations relate to storing data in a storage system. An example includes receiving, by a storage controller of a storage system, a data unit to be stored in persistent storage of the storage system. The storage controller determines maximum and minimum entropy values for the received data unit. The storage controller determines, based on at least the minimum entropy value and the maximum entropy value, whether the received data unit is viable for data reduction. In response to a determination that the received data unit is viable for data reduction, the storage controller performs at least one reduction operation on the received data unit.
-
Publication Number: US20230267077A1
Publication Date: 2023-08-24
Application Number: US17651648
Filing Date: 2022-02-18
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Xiali He , Alex Veprinsky , Matthew S. Gates , William Michael McCormack , Susan Agten
IPC: G06F12/0862
CPC classification number: G06F12/0862 , G06F2212/602
Abstract: In some examples, a system dynamically adjusts a prefetching load with respect to a prefetch cache based on a measure of past utilizations of the prefetch cache, wherein the prefetching load is to prefetch data from storage into the prefetch cache.
-
Publication Number: US11698816B2
Publication Date: 2023-07-11
Application Number: US17008549
Filing Date: 2020-08-31
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Matthew S. Gates , Joel E. Lilienkamp , Alex Veprinsky , Susan Agten
IPC: G06F9/50 , G06F9/48 , G06F9/54 , G06F12/0811 , G06F12/0817
CPC classification number: G06F9/5027 , G06F9/4881 , G06F9/544 , G06F12/0811 , G06F12/0817 , G06F2212/1024 , G06F2212/2542
Abstract: Systems and methods are provided for lock-free thread scheduling. Threads may be placed in a ring buffer shared by all computer processing units (CPUs), e.g., in a node. A thread assigned to a CPU may be placed in the CPU's local run queue. However, when a CPU's local run queue is cleared, that CPU checks the shared ring buffer to determine if any threads are waiting to run on that CPU, and if so, the CPU pulls a batch of threads related to that ready-to-run thread to execute. If not, an idle CPU randomly selects another CPU to steal threads from, and the idle CPU attempts to dequeue a thread batch associated with the CPU from the shared ring buffer. Polling may be handled through the use of a shared poller array to dynamically distribute polling across multiple CPUs.