-
1.
Publication No.: US20240362076A1
Publication Date: 2024-10-31
Application No.: US18633519
Filing Date: 2024-04-12
Applicant: DOOSAN ENERBILITY CO., LTD.
Inventor: Jwa Young MAENG , Hyun Sik KIM , Jun Woo YOO , Sang Gun NA
CPC classification number: G06F9/505 , G05B13/021
Abstract: The present disclosure relates to a system and method for adjusting control sensitivity based on optimal search. During continuous control of a target device, the system moves the device only when the control gain is larger than or equal to a preset reference value, so that stable control convergence can be achieved without excessively shortening the lifespan of the target device.
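The gain-threshold idea in the abstract can be illustrated with a minimal sketch (not the patented implementation; the reference value and update rule are invented for illustration):

```python
REFERENCE_GAIN = 0.5  # hypothetical preset reference value

def next_position(current: float, target: float, gain: float) -> float:
    """Move toward the target only if the control gain meets the
    preset reference value; otherwise hold position to avoid small,
    wear-inducing actuator moves."""
    if abs(gain) < REFERENCE_GAIN:
        return current  # below reference: do not move the device
    return current + gain * (target - current)
```

A small gain leaves the device untouched, while a sufficiently large gain drives it toward the target in one proportional step.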
-
2.
Publication No.: US12132649B2
Publication Date: 2024-10-29
Application No.: US18454202
Filing Date: 2023-08-23
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Igor Gorodetsky , Hess M. Hodge , Timothy J. Johnson
IPC: G06F13/24 , G06F9/50 , G06F9/54 , G06F12/0862 , G06F12/1036 , G06F12/1045 , G06F13/14 , G06F13/16 , G06F13/38 , G06F13/40 , G06F13/42 , G06F15/173 , H04L1/00 , H04L43/0876 , H04L43/10 , H04L45/00 , H04L45/021 , H04L45/028 , H04L45/12 , H04L45/122 , H04L45/125 , H04L45/16 , H04L45/24 , H04L45/28 , H04L45/42 , H04L45/745 , H04L45/7453 , H04L47/10 , H04L47/11 , H04L47/12 , H04L47/122 , H04L47/20 , H04L47/22 , H04L47/24 , H04L47/2441 , H04L47/2466 , H04L47/2483 , H04L47/30 , H04L47/32 , H04L47/34 , H04L47/52 , H04L47/62 , H04L47/625 , H04L47/6275 , H04L47/629 , H04L47/76 , H04L47/762 , H04L47/78 , H04L47/80 , H04L49/00 , H04L49/101 , H04L49/15 , H04L49/90 , H04L49/9005 , H04L49/9047 , H04L67/1097 , H04L69/22 , H04L69/40 , H04L69/28
CPC classification number: H04L45/28 , G06F9/505 , G06F9/546 , G06F12/0862 , G06F12/1036 , G06F12/1063 , G06F13/14 , G06F13/16 , G06F13/1642 , G06F13/1673 , G06F13/1689 , G06F13/385 , G06F13/4022 , G06F13/4068 , G06F13/4221 , G06F15/17331 , H04L1/0083 , H04L43/0876 , H04L43/10 , H04L45/021 , H04L45/028 , H04L45/122 , H04L45/123 , H04L45/125 , H04L45/16 , H04L45/20 , H04L45/22 , H04L45/24 , H04L45/38 , H04L45/42 , H04L45/46 , H04L45/566 , H04L45/70 , H04L45/745 , H04L45/7453 , H04L47/11 , H04L47/12 , H04L47/122 , H04L47/18 , H04L47/20 , H04L47/22 , H04L47/24 , H04L47/2441 , H04L47/2466 , H04L47/2483 , H04L47/30 , H04L47/32 , H04L47/323 , H04L47/34 , H04L47/39 , H04L47/52 , H04L47/621 , H04L47/6235 , H04L47/626 , H04L47/6275 , H04L47/629 , H04L47/76 , H04L47/762 , H04L47/781 , H04L47/80 , H04L49/101 , H04L49/15 , H04L49/30 , H04L49/3009 , H04L49/3018 , H04L49/3027 , H04L49/90 , H04L49/9005 , H04L49/9021 , H04L49/9036 , H04L49/9047 , H04L67/1097 , H04L69/22 , H04L69/40 , G06F2212/50 , G06F2213/0026 , G06F2213/3808 , H04L69/28
Abstract: A network interface controller (NIC) capable of efficient memory access is provided. The NIC can be equipped with an operation logic block, a signaling logic block, and a tracking logic block. The operation logic block can maintain an operation group associated with packets requesting an operation on a memory segment of a host device of the NIC. The signaling logic block can determine whether a packet associated with the operation group has arrived at or departed from the NIC. Furthermore, the tracking logic block can determine that a request for releasing the memory segment has been issued. The tracking logic block can then determine whether at least one packet associated with the operation group is under processing in the NIC. If no packet associated with the operation group is under processing in the NIC, the tracking logic block can notify the host device that the memory segment can be released.
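The tracking logic described above amounts to reference-counting in-flight packets and deferring the release notification until the count drains. A toy model of that idea (class and method names are invented, not from the patent):

```python
class OperationGroupTracker:
    """Release a memory segment only after every packet in its
    operation group has left the NIC."""

    def __init__(self):
        self.in_flight = 0
        self.release_requested = False
        self.released = False

    def packet_arrived(self):
        self.in_flight += 1

    def packet_departed(self):
        self.in_flight -= 1
        self._maybe_release()

    def request_release(self):
        self.release_requested = True
        self._maybe_release()

    def _maybe_release(self):
        # Notify the host only when a release was requested AND no
        # packet of the group is still under processing.
        if self.release_requested and self.in_flight == 0:
            self.released = True
```

A release requested while packets are still in flight is held back until the last packet departs.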
-
3.
Publication No.: US20240356836A1
Publication Date: 2024-10-24
Application No.: US18675642
Filing Date: 2024-05-28
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Edwin L. Froese , Robert L. Alverson , Konstantinos Fragkiadakis
IPC: H04L45/28 , G06F9/50 , G06F9/54 , G06F12/0862 , G06F12/1036 , G06F12/1045 , G06F13/14 , G06F13/16 , G06F13/28 , G06F13/38 , G06F13/40 , G06F13/42 , G06F15/173 , H04L1/00 , H04L43/0876 , H04L43/10 , H04L45/00 , H04L45/02 , H04L45/021 , H04L45/028 , H04L45/12 , H04L45/122 , H04L45/125 , H04L45/16 , H04L45/24 , H04L45/42 , H04L45/745 , H04L45/7453 , H04L47/10 , H04L47/11 , H04L47/12 , H04L47/122 , H04L47/20 , H04L47/22 , H04L47/24 , H04L47/2441 , H04L47/2466 , H04L47/2483 , H04L47/30 , H04L47/32 , H04L47/34 , H04L47/52 , H04L47/62 , H04L47/625 , H04L47/6275 , H04L47/629 , H04L47/76 , H04L47/762 , H04L47/78 , H04L47/80 , H04L49/00 , H04L49/101 , H04L49/15 , H04L49/90 , H04L49/9005 , H04L49/9047 , H04L67/1097 , H04L69/22 , H04L69/28 , H04L69/40
CPC classification number: H04L45/28 , G06F9/505 , G06F9/546 , G06F12/0862 , G06F12/1036 , G06F12/1063 , G06F13/14 , G06F13/16 , G06F13/1642 , G06F13/1673 , G06F13/1689 , G06F13/28 , G06F13/385 , G06F13/4022 , G06F13/4068 , G06F13/4221 , G06F15/17331 , H04L1/0083 , H04L43/0876 , H04L43/10 , H04L45/02 , H04L45/021 , H04L45/028 , H04L45/122 , H04L45/123 , H04L45/125 , H04L45/16 , H04L45/20 , H04L45/22 , H04L45/24 , H04L45/38 , H04L45/42 , H04L45/46 , H04L45/566 , H04L45/70 , H04L45/745 , H04L45/7453 , H04L47/11 , H04L47/12 , H04L47/122 , H04L47/18 , H04L47/20 , H04L47/22 , H04L47/24 , H04L47/2441 , H04L47/2466 , H04L47/2483 , H04L47/30 , H04L47/32 , H04L47/323 , H04L47/34 , H04L47/39 , H04L47/52 , H04L47/621 , H04L47/6235 , H04L47/626 , H04L47/6275 , H04L47/629 , H04L47/76 , H04L47/762 , H04L47/781 , H04L47/80 , H04L49/101 , H04L49/15 , H04L49/30 , H04L49/3009 , H04L49/3018 , H04L49/3027 , H04L49/90 , H04L49/9005 , H04L49/9021 , H04L49/9036 , H04L49/9047 , H04L67/1097 , H04L69/22 , H04L69/40 , G06F2212/50 , G06F2213/0026 , G06F2213/3808 , H04L69/28
Abstract: Systems and methods are provided for managing multicast data transmission in a network having a plurality of switches arranged in a Dragonfly network topology, including: receiving a multicast transmission at an edge port of a switch and identifying the transmission as a network multicast transmission; creating an entry in a multicast table within the switch; routing the multicast transmission across the network to a plurality of destinations via a plurality of links, wherein at each of the links the multicast table is referenced to determine to which ports the multicast transmission should be forwarded; and changing, when necessary, the virtual channel used by each copy of the multicast transmission as the copy progresses through the network.
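The per-switch lookup step can be sketched as follows (the table layout, field names, and the rule for changing virtual channels are invented for illustration, not taken from the patent):

```python
# Hypothetical per-switch multicast table: multicast id -> fanout entry.
multicast_table = {7: {"ports": [1, 4, 9], "bump_vc": True}}

def forward(mcast_id: int, vc: int):
    """Look up the multicast entry and return (port, virtual channel)
    pairs for each copy; the VC is changed when the entry requires it,
    e.g. to keep routes deadlock-free."""
    entry = multicast_table[mcast_id]
    out_vc = vc + 1 if entry["bump_vc"] else vc
    return [(port, out_vc) for port in entry["ports"]]
```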
-
4.
Publication No.: US12118411B2
Publication Date: 2024-10-15
Application No.: US16568038
Filing Date: 2019-09-11
Applicant: ADVANCED MICRO DEVICES, INC. , ATI TECHNOLOGIES ULC
Inventor: Sneha V. Desai , Michael Estlick , Erik Swanson , Anilkumar Ranganagoudra
CPC classification number: G06F9/544 , G06F9/505 , G06F9/5083 , G06F9/528 , G06F9/546
Abstract: A processor includes a plurality of execution pipes and a distributed scheduler coupled to the plurality of execution pipes. The distributed scheduler includes a first queue to buffer instruction operations from a front end of an instruction pipeline of the processor and a plurality of second queues, wherein each second queue is to buffer instruction operations allocated from the first queue for a corresponding separate subset of execution pipes of the plurality of execution pipes. The distributed scheduler further includes a queue controller to select an allocation mode from a plurality of allocation modes based on whether at least one indicator of an imbalance at the distributed scheduler is detected, and further to control the distributed scheduler to allocate instruction operations from the first queue among the plurality of second queues in accordance with the selected allocation mode.
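The two-level queue with mode switching can be modeled in a few lines. This is a simplified software analogy of the hardware scheduler, with an invented imbalance metric (max/min queue-depth gap) and two invented modes (round-robin vs. shortest-queue):

```python
from collections import deque

class DistributedScheduler:
    """First queue feeds per-pipe second queues; when queue depths
    become imbalanced, switch allocation mode from round-robin to
    shortest-queue."""

    def __init__(self, n_pipes: int, imbalance_threshold: int = 2):
        self.front = deque()                       # first queue
        self.pipes = [deque() for _ in range(n_pipes)]  # second queues
        self.rr = 0
        self.threshold = imbalance_threshold

    def allocate_one(self):
        op = self.front.popleft()
        depths = [len(q) for q in self.pipes]
        if max(depths) - min(depths) >= self.threshold:
            target = depths.index(min(depths))  # imbalanced: fill shortest
        else:
            target = self.rr                    # balanced: round robin
            self.rr = (self.rr + 1) % len(self.pipes)
        self.pipes[target].append(op)
```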
-
5.
Publication No.: US20240338254A1
Publication Date: 2024-10-10
Application No.: US18131726
Filing Date: 2023-04-06
Applicant: Dell Products L.P.
CPC classification number: G06F9/505 , G06F9/45558 , G06F2009/4557
Abstract: An apparatus comprises a processing device configured to obtain monitoring data characterizing resource utilization by information technology (IT) assets having resources assigned from a shared resource pool, to select features for use in modeling predicted resource utilization by the IT assets in future time periods, to generate predictions of resource utilization by the IT assets in each of the future time periods, and to determine whether the predicted resource utilization by a given IT asset exhibits at least a threshold difference from its current resource allocation. The processing device is further configured, in response to the determination, to proactively adjust resource allocation to the given IT asset from the shared resource pool for the given future time period based at least in part on the predicted resource utilization, for the given future time period, by other ones of the IT assets having resources assigned from the shared resource pool.
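The threshold-difference check is the core decision. A minimal sketch, assuming a relative threshold and a positive current allocation (the 20% figure is invented):

```python
def proposed_allocation(current: float, predicted: float,
                        threshold: float = 0.2) -> float:
    """Proactively resize only when predicted utilization differs
    from the current allocation by at least the threshold fraction;
    assumes current > 0."""
    if abs(predicted - current) / current < threshold:
        return current   # change too small: leave allocation alone
    return predicted     # adjust toward the predicted need
```

Small forecast deviations are ignored to avoid churning the shared pool, while large ones trigger a proactive adjustment.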
-
6.
Publication No.: US12112214B2
Publication Date: 2024-10-08
Application No.: US18355033
Filing Date: 2023-07-19
Applicant: Microsoft Technology Licensing, LLC
Inventor: Shandan Zhou , Saurabh Agarwal , Karthikeyan Subramanian , Thomas Moscibroda , Paul Naveen Selvaraj , Sandeep Ramji , Sorin Iftimie , Nisarg Sheth , Wanghai Gu , Ajay Mani , Si Qin , Yong Xu , Qingwei Lin
CPC classification number: G06F9/5083 , G06F9/45558 , G06F9/505 , G06F2009/4557 , G06F11/0709 , G06F11/076 , G06F11/3006 , G06F11/301 , G06F11/3433
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting expansion failures and implementing defragmentation instructions based on the predicted expansion failures and other signals. For example, systems disclosed herein may apply a failure prediction model to determine an expansion failure prediction associated with an estimated likelihood that deployment failures will occur on a node cluster. The systems disclosed herein may further generate defragmentation instructions indicating a severity level that a defragmentation engine may execute on a cluster level to prevent expansion failures while minimizing negative customer impacts. By uniquely generating defragmentation instructions for each node cluster, a cloud computing system can minimize expansion failures, increase resource capacity, reduce costs, and provide access to reliable services to customers.
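The mapping from a failure prediction to a defragmentation severity level could look like the following sketch; the thresholds and level names are invented, not taken from the patent:

```python
def defrag_severity(expansion_failure_probability: float) -> str:
    """Map a predicted expansion-failure likelihood for a node cluster
    to a defragmentation severity level (hypothetical thresholds)."""
    if expansion_failure_probability >= 0.7:
        return "aggressive"   # high risk: defragment despite impact
    if expansion_failure_probability >= 0.3:
        return "moderate"     # balance capacity gain vs. customer impact
    return "none"             # low risk: avoid unnecessary migrations
```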
-
7.
Publication No.: US12112212B2
Publication Date: 2024-10-08
Application No.: US17186708
Filing Date: 2021-02-26
Applicant: Google LLC
Inventor: Dmytro Tymofieiev , Jaideep Singh , Kusum Kumar Madarasu
IPC: G06F9/50 , G06F11/34 , G06F12/02 , G06N20/00 , H04L41/0896
CPC classification number: G06F9/5083 , G06F9/5016 , G06F11/3442 , G06F12/0238 , H04L41/0896 , G06F9/505 , G06F11/3409 , G06F2209/508 , G06F2212/1024 , G06N20/00
Abstract: Methods, systems, and apparatus, including computer-readable storage media, for load balancing. A load balancer can send input data to a plurality of computing devices configured to process the input data according to a load-balancing distribution. The load balancer can receive, from a first computing device of the plurality of computing devices, data characterizing memory bandwidth for a memory device on the first computing device over a period of time. The load balancer can determine, at least from the data characterizing the memory bandwidth and a memory bandwidth saturation point for the first computing device, that the first computing device can process additional data within a predetermined latency threshold. In response to the determining, the load balancer can send the additional data to the first computing device.
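The saturation-point test can be reduced to a one-line predicate. A sketch under assumed numbers (the headroom fraction and units are invented):

```python
def can_take_more(measured_gbps: float, saturation_gbps: float,
                  headroom: float = 0.1) -> bool:
    """A device can accept additional data while its measured memory
    bandwidth stays below its saturation point by at least the given
    headroom fraction, keeping latency within bounds."""
    return measured_gbps <= saturation_gbps * (1.0 - headroom)
```

The load balancer would route extra work only to devices for which this predicate holds.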
-
8.
Publication No.: US20240333820A1
Publication Date: 2024-10-03
Application No.: US18624028
Filing Date: 2024-04-01
Applicant: Utech, Inc.
Inventor: Igor Fedyak
IPC: H04L67/63 , G06F9/50 , H04L67/01 , H04L67/1014 , H04L67/125 , H04L69/22
CPC classification number: H04L67/63 , G06F9/5033 , G06F9/505 , G06F9/5055 , H04L67/01 , H04L67/1014 , H04L67/125 , H04L69/22
Abstract: In accordance with an embodiment, described herein is a system and method for receiving content to be parsed, and configuring a network of parsing devices for use in parsing the content in accordance with templates. The system comprises a management server in communication with the parsing network, and the management server is configured to determine a parsing assignment for one or more parsing devices within the parsing network. The parsing network comprises a plurality of parsing devices, each comprising or associated with an endpoint for enabling communication with the management server. The parsing assignment indicates content items to be parsed by the parsing devices and associated templates for use by the parsing devices.
-
9.
Publication No.: US20240330066A1
Publication Date: 2024-10-03
Application No.: US18198472
Filing Date: 2023-05-17
Applicant: JPMorgan Chase Bank, N.A.
Inventor: Jessie RINCON-PAZ , Francine SHEPHARD , Navin NAGARAJAIAH , Tijelino J BRAVO , Louis FLORES , Andres Lucas GARCIA FIORINI , Anmol P MEHTA , Nisha KAW , Rajesh GUNTHA , Shiv GURUSWAMY , Joseph E LEIDEMER
IPC: G06F9/50
CPC classification number: G06F9/505
Abstract: A method and a system for automated performance of capacity allocation, brokerage, placement, and provisioning of compute, network, and storage resources are provided. The method includes: receiving a first data set that relates to resource requirements of a user; retrieving, from a memory, a second data set that relates to resource availability; analyzing the first data set and the second data set in order to determine a proposed allocation of resources and a proposed timing that corresponds to the proposed allocation; and provisioning the resources to the user based on the proposed allocation and the proposed timing. A machine learning model that is trained by using historical resource allocation data may be applied to the first data set and the second data set in order to perform the analysis.
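The analysis step, matching a requirements data set against an availability data set, can be sketched as a simple brokerage function (the data shapes are invented; the patent's machine learning model is omitted):

```python
def propose_allocation(requirements: dict, availability: dict) -> dict:
    """Grant each requested resource up to what the shared pool
    currently has available; unknown resource types get zero."""
    return {res: min(need, availability.get(res, 0))
            for res, need in requirements.items()}
```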
-
10.
Publication No.: US20240330054A1
Publication Date: 2024-10-03
Application No.: US18621635
Filing Date: 2024-03-29
Applicant: Beijing Volcano Engine Technology Co., Ltd.
Inventor: Leilei SUN , Siying ZHAO , Xuan LUO
IPC: G06F9/50
CPC classification number: G06F9/5016 , G06F9/505
Abstract: A data processing method, an electronic device, and a computer-readable medium are provided. The method includes: after an elastic scaling rule for a target cluster is created, configuring a cache space corresponding to the target cluster according to the elastic scaling rule; obtaining indicator detection state data of the load type rule item within a completion time range according to a cluster identifier of the target cluster and an indicator item identifier carried by the load type rule item; storing the indicator detection state data to the cache space corresponding to the target cluster; and continuing to obtain the indicator detection update result corresponding to the target cluster during the polling interval, updating the storage content in the cache space according to the indicator detection update result.
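The cache-space bookkeeping can be illustrated minimally; the data layout (cluster id keyed to indicator states) and function name are assumptions, not from the patent:

```python
# Hypothetical per-cluster cache of indicator-detection state,
# refreshed on each polling interval.
cache = {}  # cluster identifier -> {indicator item identifier -> state}

def refresh(cluster_id: str, updates: dict):
    """Merge the latest indicator detection results into the cache
    space configured for the target cluster."""
    cache.setdefault(cluster_id, {}).update(updates)
```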