Apparatus and method for a closed-loop dynamic resource allocation control framework

    Publication No.: US12210434B2

    Publication Date: 2025-01-28

    Application No.: US16914305

    Application Date: 2020-06-27

    Abstract: An apparatus and method for closed-loop dynamic resource allocation. For example, one embodiment of a method comprises: collecting data related to usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priority workloads associated with one or more guaranteed performance levels and best effort workloads not associated with guaranteed performance levels; analyzing the data to identify resource reallocations from one or more of the priority workloads to one or more of the best effort workloads in one or more subsequent time periods while still maintaining the guaranteed performance levels; reallocating the resources from the priority workloads to the best effort workloads for the subsequent time periods; monitoring execution of the priority workloads with respect to the guaranteed performance levels during the subsequent time periods; and, responsive to detecting that a guaranteed performance level is in danger of being breached, preemptively reallocating resources from the best effort workloads to the priority workloads during the subsequent time periods to ensure compliance with that guaranteed performance level.
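
    The following is a minimal Python sketch of the closed control loop this abstract describes: lending spare capacity from priority workloads to best effort workloads and reclaiming it when a guarantee is at risk. All names, data structures, and thresholds (Workload, rebalance, HEADROOM_MARGIN) are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of the closed-loop reallocation described above.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: bool            # True = has a guaranteed performance level
    guaranteed_perf: float    # performance the guarantee promises (0 for best effort)
    observed_perf: float      # measured over the last time period
    allocated_units: int      # e.g. cache ways, bandwidth units, cores

HEADROOM_MARGIN = 1.10  # only lend resources when 10% above the guarantee (assumed)

def rebalance(workloads):
    """One control-loop iteration: lend spare resources from priority
    workloads to best-effort workloads, then preemptively reclaim them
    when a guarantee is in danger of being breached."""
    priority = [w for w in workloads if w.priority]
    best_effort = [w for w in workloads if not w.priority]
    for p in priority:
        if p.observed_perf > p.guaranteed_perf * HEADROOM_MARGIN and p.allocated_units > 1:
            # Spare headroom: lend one unit to the most starved best-effort workload.
            starved = min(best_effort, key=lambda w: w.allocated_units, default=None)
            if starved is not None:
                p.allocated_units -= 1
                starved.allocated_units += 1
        elif p.observed_perf < p.guaranteed_perf:
            # Guarantee at risk: take a unit back from the best-endowed donor.
            donor = max(best_effort, key=lambda w: w.allocated_units, default=None)
            if donor is not None and donor.allocated_units > 0:
                donor.allocated_units -= 1
                p.allocated_units += 1
```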

    TECHNOLOGIES FOR ENFORCING COHERENCE ORDERING IN CONSUMER POLLING INTERACTIONS

    Publication No.: US20190102301A1

    Publication Date: 2019-04-04

    Application No.: US15720379

    Application Date: 2017-09-29

    Abstract: Technologies for enforcing coherence ordering in consumer polling interactions include a network interface controller (NIC) of a target computing device which is configured to receive a network packet, write the payload of the network packet to a data storage device of the target computing device, and obtain, subsequent to having transmitted a last write request to write the payload to the data storage device, ownership of a flag cache line of a cache of the target computing device. The NIC is additionally configured to receive a snoop request from a processor of the target computing device, identify whether the received snoop request corresponds to a read flag snoop request associated with an active request being processed by the NIC, and hold the received snoop request for delayed return in response to having identified the received snoop request as the read flag snoop request. Other embodiments are described herein.
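
    The sketch below models, in Python, the ordering behavior the abstract describes: the NIC holds read-flag snoops for an active request until that request's payload writes have completed, so a polling consumer never observes the updated flag before the data. The class and method names (NicCoherenceModel, start_request, snoop) are assumptions made for illustration; the actual mechanism is implemented in NIC hardware, not software.

```python
# Hypothetical software model of the NIC-side snoop-holding behavior above.
from collections import deque

class NicRequest:
    def __init__(self, request_id, payload_lines):
        self.request_id = request_id
        self.pending_payload_writes = payload_lines  # payload writes not yet complete
        self.held_snoops = deque()                   # read-flag snoops held for delayed return

class NicCoherenceModel:
    def __init__(self):
        self.active = {}  # request_id -> NicRequest

    def start_request(self, request_id, payload_lines):
        # Payload writes are issued; the NIC then takes ownership of the flag cache line.
        self.active[request_id] = NicRequest(request_id, payload_lines)

    def snoop(self, request_id, is_read_flag_snoop):
        req = self.active.get(request_id)
        if req is not None and is_read_flag_snoop:
            # Snoop matches an active request's flag line: hold it for delayed return.
            req.held_snoops.append(request_id)
            return "held"
        return "respond_immediately"

    def payload_write_complete(self, request_id):
        req = self.active[request_id]
        req.pending_payload_writes -= 1
        if req.pending_payload_writes == 0:
            # Ordering satisfied: release held snoops so the consumer sees the flag.
            released = list(req.held_snoops)
            del self.active[request_id]
            return released
        return []
```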

    CACHE MONITORING

    Publication No.: US20190042388A1

    Publication Date: 2019-02-07

    Application No.: US16022543

    Application Date: 2018-06-28

    Abstract: There is disclosed in one example a computing apparatus, including: a processor; a multilevel cache including a plurality of cache levels; a peripheral device configured to write data directly to a directly writable cache; and a cache monitoring circuit, including cache counters La to be incremented when a cache line is allocated into the directly writable cache, Lp to be incremented when a cache line is processed by the processor and deallocated from the directly writable cache, and Le to be incremented when a cache line is evicted from the directly writable cache to the memory, wherein the cache monitoring circuit is to determine a direct write policy according to the cache counters.
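
    A small Python sketch of how the counters La, Lp, and Le could feed a direct write policy decision is shown below. The eviction-rate heuristic, the threshold, and the policy names are assumptions for illustration; the abstract only states that the policy is determined according to the counters.

```python
# Hypothetical counter-based policy decision for direct (device-to-cache) writes.
from enum import Enum

class DirectWritePolicy(Enum):
    DIRECT_WRITE = "write device data directly into the cache"
    WRITE_TO_MEMORY = "write device data to memory instead"

class CacheMonitor:
    def __init__(self):
        self.la = 0  # lines allocated into the directly writable cache
        self.lp = 0  # lines processed by the processor and deallocated
        self.le = 0  # lines evicted to memory before being processed

    def on_allocate(self):  self.la += 1
    def on_processed(self): self.lp += 1
    def on_evicted(self):   self.le += 1

    def decide_policy(self, eviction_threshold=0.5):
        # Assumed heuristic: if most directly written lines are evicted unread,
        # direct writes are polluting the cache and should be redirected to memory.
        if self.la == 0:
            return DirectWritePolicy.DIRECT_WRITE
        eviction_rate = self.le / self.la
        return (DirectWritePolicy.WRITE_TO_MEMORY
                if eviction_rate > eviction_threshold
                else DirectWritePolicy.DIRECT_WRITE)
```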

    MULTI-TIMESCALE POWER CONTROL TECHNOLOGIES

    Publication No.: US20220326757A1

    Publication Date: 2022-10-13

    Application No.: US17853442

    Application Date: 2022-06-29

    Abstract: The present disclosure is related to power control mechanisms for workload processing systems, and in particular, multi-timescale power control technologies that can be used to reduce the overhead of workload processing systems. The disclosed power control mechanisms operate on multiple timescales, including a slow timescale and a fast timescale. Separate control loops (or governors) are used for the slow and fast timescales, where each control loop includes its own trigger mechanisms and configurable operational policies. The operational policies for the slow timescale control loop can be trained separately using various machine learning techniques, while the operational policies for the fast timescale control loop can be simple, reactive heuristics.
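
    Below is a minimal Python sketch of the two-timescale structure the abstract outlines: a slow governor that applies a separately trained policy at a coarse interval, and a fast governor that applies a simple reactive heuristic every control tick. The interval length, the telemetry fields, and the frequency-stepping heuristic are all illustrative assumptions.

```python
# Hypothetical two-timescale power control structure (slow + fast governors).
class SlowGovernor:
    def __init__(self, policy, interval_ticks=1000):
        self.policy = policy          # e.g. a table or model trained offline
        self.interval = interval_ticks
        self.ticks = 0

    def maybe_update(self, telemetry, platform):
        self.ticks += 1
        if self.ticks % self.interval == 0:
            # Slow loop: choose a coarse setting (here, a power budget) from telemetry.
            platform["power_budget_w"] = self.policy(telemetry)

class FastGovernor:
    def maybe_update(self, telemetry, platform):
        # Fast loop: simple reactive heuristic, evaluated every tick.
        if telemetry["utilization"] > 0.9:
            platform["freq_mhz"] = min(platform["freq_mhz"] + 100,
                                       platform["max_freq_mhz"])
        elif telemetry["utilization"] < 0.3:
            platform["freq_mhz"] = max(platform["freq_mhz"] - 100,
                                       platform["min_freq_mhz"])

def control_tick(telemetry, platform, slow, fast):
    # Both governors observe the same telemetry but act on different timescales.
    slow.maybe_update(telemetry, platform)
    fast.maybe_update(telemetry, platform)
```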
