APPARATUS AND METHOD FOR A CLOSED-LOOP DYNAMIC RESOURCE ALLOCATION CONTROL FRAMEWORK

    Publication No.: US20210406147A1

    Publication Date: 2021-12-30

    Application No.: US16914305

    Filing Date: 2020-06-27

    Abstract: An apparatus and method for closed-loop dynamic resource allocation. For example, one embodiment of a method comprises: collecting data related to usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priority workloads associated with one or more guaranteed performance levels and best effort workloads not associated with guaranteed performance levels; analyzing the data to identify resource reallocations from one or more of the priority workloads to one or more of the best effort workloads in one or more subsequent time periods while still maintaining the guaranteed performance levels; reallocating the resources from the priority workloads to the best effort workloads for the subsequent time periods; monitoring execution of the priority workloads with respect to the guaranteed performance levels during the subsequent time periods; and, responsive to detecting that a guaranteed performance level is in danger of being breached, preemptively reallocating resources from the best effort workloads back to the priority workloads during the subsequent time periods to ensure compliance with that performance level.
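
    The control loop described in this abstract can be illustrated with a small sketch. The Python snippet below is a minimal, hypothetical rendering of one iteration of such a loop, not the patented implementation: the Workload record, the abstract "resource units", and the 20% headroom ratio are assumptions made for the example.

```python
# Minimal sketch of the closed-loop reallocation idea; all names and
# thresholds here are illustrative, not taken from the patent.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    priority: bool
    guaranteed_ips: float = 0.0   # guaranteed throughput (priority workloads only)
    observed_ips: float = 0.0     # measured over the last time period
    allocated_units: int = 0      # abstract resource units (e.g., cache ways)


def rebalance(workloads, headroom_ratio=1.2, step=1):
    """One control-loop iteration over a list of Workload records.

    Priority workloads comfortably above their guarantee donate one resource
    unit to best-effort workloads; priority workloads at risk of breaching
    their guarantee preemptively reclaim a unit from best-effort workloads.
    """
    priority = [w for w in workloads if w.priority]
    best_effort = [w for w in workloads if not w.priority]
    for w in priority:
        if not best_effort:
            break
        if w.observed_ips > headroom_ratio * w.guaranteed_ips and w.allocated_units > step:
            # Surplus headroom: donate resources to best-effort work.
            w.allocated_units -= step
            best_effort[0].allocated_units += step
        elif w.observed_ips < w.guaranteed_ips:
            # Guarantee in danger: claw resources back preemptively.
            donor = max(best_effort, key=lambda b: b.allocated_units)
            taken = min(step, donor.allocated_units)
            donor.allocated_units -= taken
            w.allocated_units += taken


if __name__ == "__main__":
    pool = [
        Workload("db", priority=True, guaranteed_ips=100.0, observed_ips=140.0, allocated_units=8),
        Workload("batch", priority=False, allocated_units=2),
    ]
    rebalance(pool)
    print([(w.name, w.allocated_units) for w in pool])
```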

    METHODS AND APPARATUS TO REDUCE STATIC AND DYNAMIC FRAGMENTATION IMPACT ON SOFTWARE-DEFINED INFRASTRUCTURE ARCHITECTURES

    Publication No.: US20180026868A1

    Publication Date: 2018-01-25

    Application No.: US15655846

    Filing Date: 2017-07-20

    Abstract: Techniques for reducing fragmentation in software-defined infrastructures are described. A compute node, including one or more processor circuits, may be configured to access one or more remote resources via a fabric and to receive a dynamic tolerated fragmentation for the one or more remote resources. The compute node may be configured to monitor the performance of the one or more remote resources, for example, to monitor whether one or more of the monitored resources exceeds a bandwidth or latency range defined by the dynamic tolerated fragmentation. The compute node may be configured to determine that the monitored performance of the one or more remote resources is outside a threshold defined by the dynamic tolerated fragmentation. If one or more of the remote resources remains outside the threshold, for a predetermined period of time or otherwise, the compute node may be configured to detect this condition and take appropriate measures, such as generating a message indicating that performance of the one or more remote resources is outside the threshold defined by the dynamic tolerated fragmentation. Other embodiments are described and claimed.
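
    As a rough illustration of the monitoring behavior, the sketch below polls a remote resource against a latency/bandwidth range and reports once measurements stay out of range for a configurable window. The ToleratedFragmentation record, the sampling callback, and the timing parameters are invented for the example; the abstract does not specify these interfaces.

```python
# Illustrative sketch of threshold monitoring against a "dynamic tolerated
# fragmentation" range; class and field names are assumptions for the example.
import time
from dataclasses import dataclass


@dataclass
class ToleratedFragmentation:
    max_latency_us: float
    min_bandwidth_gbps: float


def monitor_remote_resource(sample_fn, limits, violation_window_s=5.0, poll_s=1.0):
    """Poll a remote resource and report when it stays out of range.

    sample_fn() is assumed to return (latency_us, bandwidth_gbps). If samples
    stay outside the tolerated range for violation_window_s, a message is
    returned describing the violation; otherwise monitoring continues.
    """
    out_of_range_since = None
    while True:
        latency_us, bandwidth_gbps = sample_fn()
        violated = (latency_us > limits.max_latency_us
                    or bandwidth_gbps < limits.min_bandwidth_gbps)
        if not violated:
            out_of_range_since = None
        elif out_of_range_since is None:
            out_of_range_since = time.monotonic()
        elif time.monotonic() - out_of_range_since >= violation_window_s:
            return (f"remote resource outside tolerated fragmentation: "
                    f"latency={latency_us}us bandwidth={bandwidth_gbps}Gbps")
        time.sleep(poll_s)


if __name__ == "__main__":
    limits = ToleratedFragmentation(max_latency_us=10.0, min_bandwidth_gbps=40.0)
    # Fake sampler that always reports a violating measurement.
    print(monitor_remote_resource(lambda: (25.0, 12.0), limits,
                                  violation_window_s=0.2, poll_s=0.1))
```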

    APPARATUS AND METHOD FOR A RESOURCE ALLOCATION CONTROL FRAMEWORK USING PERFORMANCE MARKERS

    Publication No.: US20210406075A1

    Publication Date: 2021-12-30

    Application No.: US16914301

    Filing Date: 2020-06-27

    Abstract: An apparatus and method for dynamic resource allocation with mile/performance markers. For example, one embodiment of a processor comprises: resource allocation circuitry to allocate a plurality of hardware resources to a plurality of workloads including priority workloads associated with one or more guaranteed performance levels; and monitoring circuitry to evaluate execution progress of a workload across a plurality of nodes, each node to execute one or more processing stages of the workload, wherein the monitoring circuitry is to evaluate the execution progress of the workload, at least in part, by reading progress markers advertised by the workload at specified processing stages, wherein the monitoring circuitry is to detect, based on the progress markers, that the workload may not meet one of the guaranteed performance levels, and wherein the resource allocation circuitry, responsive to the monitoring circuitry, is to reallocate one or more of the plurality of hardware resources to improve the performance level of the workload.
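
    A hedged sketch of how progress markers might be compared against an expected timeline follows. The marker and schedule dictionaries, the stage names, and the 10% slack are illustrative assumptions; the abstract only states that markers advertised by the workload are read and used to predict a missed performance level.

```python
# Sketch of progress-marker based monitoring; the marker format and the
# expected-schedule table are assumptions made for illustration only.
def behind_schedule(markers, expected, slack=0.10):
    """Return the stages at which the workload trails its expected timeline.

    markers:  {stage_name: seconds since workload start at which the stage was
               advertised as complete by the workload}
    expected: {stage_name: seconds by which the stage should complete to stay
               on track for the guaranteed performance level}
    A stage counts as late when it finishes more than `slack` (10%) past its
    expected completion time, signalling that resources may need reallocation.
    """
    return [stage for stage, t in markers.items()
            if stage in expected and t > expected[stage] * (1.0 + slack)]


if __name__ == "__main__":
    advertised = {"decode": 2.0, "inference": 9.5, "postprocess": 12.0}
    schedule = {"decode": 2.5, "inference": 8.0, "postprocess": 11.0}
    late = behind_schedule(advertised, schedule)
    if late:
        # In the abstract's framing, the resource allocation circuitry would now
        # shift hardware resources toward this workload; here we only report.
        print("workload trailing at stages:", late)
```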

    SHARED MEMORY CONTROLLER IN A DATA CENTER
    Invention Application

    Publication No.: US20190042488A1

    Publication Date: 2019-02-07

    Application No.: US15857337

    Filing Date: 2017-12-28

    Abstract: Technology for a memory controller is described. The memory controller can receive a request from a data consumer node in a data center for training data. The training data indicated in the request can correspond to a model identifier (ID) of a model that runs on the data consumer node. The memory controller can identify a data provider node in the data center that stores the training data that is requested by the data consumer node. The data provider node can be identified using a tracking table that is maintained at the memory controller. The memory controller can send an instruction to the data provider node that instructs the data provider node to send the training data to the data consumer node to enable training of the model that runs on the data consumer node.
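
    The tracking-table lookup can be sketched as below. The class, method, and node names are invented for illustration; the abstract only specifies that the controller maps a model ID to a data provider node and instructs that node to send the training data to the consumer.

```python
# Minimal sketch of the tracking-table lookup described in the abstract;
# node identifiers and the message format are invented for the example.
class MemoryControllerSketch:
    def __init__(self):
        # Tracking table: model ID -> data provider node holding its training data.
        self.tracking_table = {}

    def register_training_data(self, model_id, provider_node):
        self.tracking_table[model_id] = provider_node

    def handle_request(self, consumer_node, model_id):
        """Resolve the provider for model_id and build a transfer instruction."""
        provider_node = self.tracking_table.get(model_id)
        if provider_node is None:
            return {"error": f"no provider tracked for model {model_id}"}
        # Instruction telling the provider to push training data to the consumer.
        return {"to": provider_node,
                "action": "send_training_data",
                "model_id": model_id,
                "destination": consumer_node}


if __name__ == "__main__":
    ctrl = MemoryControllerSketch()
    ctrl.register_training_data("resnet50-v2", "provider-node-17")
    print(ctrl.handle_request("consumer-node-3", "resnet50-v2"))
```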

    SYSTEM, APPARATUS AND METHOD FOR ADAPTIVE PEER-TO-PEER COMMUNICATION WITH EDGE PLATFORM

    Publication No.: US20210377356A1

    Publication Date: 2021-12-02

    Application No.: US16887087

    Filing Date: 2020-05-29

    Abstract: In one embodiment, a method includes: receiving, in an edge platform, a plurality of messages from a plurality of edge devices coupled to the edge platform, the plurality of messages comprising metadata including priority information and granularity information; extracting at least the priority information from the plurality of messages; storing the plurality of messages in entries of a pending request queue according to the priority information; selecting a first message stored in the pending request queue for delivery to a destination circuit; and sending a message header for the first message to the destination circuit via at least one interface circuit, the message header including the priority information, and thereafter sending a plurality of packets including payload information of the first message to the destination circuit via the at least one interface circuit. Other embodiments are described and claimed.
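
    The priority-ordered queuing and header-then-payload delivery can be sketched as follows. The in-memory heap, the message dictionaries, and the chunk size are assumptions for the example; the abstract describes priority-ordered storage in a pending request queue and delivery of a message header followed by payload packets.

```python
# Sketch of priority-ordered queuing and header-then-payload delivery; the
# metadata fields mirror the abstract (priority, granularity) but the
# concrete types and chunking are illustrative assumptions.
import heapq
import itertools


class PendingRequestQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # FIFO tie-break within a priority

    def enqueue(self, message):
        # Lower numeric priority is served first.
        heapq.heappush(self._heap,
                       (message["priority"], next(self._order), message))

    def select(self):
        return heapq.heappop(self._heap)[2] if self._heap else None


def deliver(queue, send_fn, chunk_size=4):
    """Send the highest-priority message: header first, then payload packets."""
    msg = queue.select()
    if msg is None:
        return
    send_fn({"type": "header", "priority": msg["priority"],
             "granularity": msg["granularity"], "length": len(msg["payload"])})
    payload = msg["payload"]
    for i in range(0, len(payload), chunk_size):
        send_fn({"type": "packet", "data": payload[i:i + chunk_size]})


if __name__ == "__main__":
    q = PendingRequestQueue()
    q.enqueue({"priority": 2, "granularity": "coarse", "payload": b"low-priority"})
    q.enqueue({"priority": 0, "granularity": "fine", "payload": b"urgent-data"})
    deliver(q, print)  # the priority-0 message is delivered first
```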
