APPLICATION AND TRAFFIC AWARE MACHINE LEARNING-BASED POWER MANAGER

    Publication No.: US20250088427A1

    Publication Date: 2025-03-13

    Application No.: US18759333

    Application Date: 2024-06-28

    Abstract: Example systems and techniques are disclosed for power management. An example system includes one or more memories and one or more processors. The one or more processors are configured to obtain workload metrics from a plurality of nodes of a cluster. The one or more processors are configured to obtain network function metrics from the plurality of nodes of the cluster. The one or more processors are configured to execute at least one machine learning model to predict a corresponding measure of criticality of traffic of each node. The one or more processors are configured to determine, based on the corresponding measure of criticality of traffic of each node, a corresponding power mode for at least one processing core of each node. The one or more processors are configured to recommend or apply the corresponding power mode to the at least one processing core of each node.
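
    To make the disclosed control loop concrete, the following Python sketch shows one possible shape for it. The metric names, the threshold-based stand-in for the machine learning model, and the power-mode labels are all illustrative assumptions, not details taken from the patent.

        from dataclasses import dataclass

        @dataclass
        class NodeMetrics:
            node_id: str
            cpu_util: float      # workload metric, 0..1
            pkts_per_sec: float  # network function metric

        def predict_criticality(m: NodeMetrics) -> float:
            # Stand-in for the machine learning model: returns a 0..1 score.
            return min(1.0, 0.5 * m.cpu_util + 0.5 * min(m.pkts_per_sec / 1e6, 1.0))

        def power_mode_for(score: float) -> str:
            # Map the predicted criticality of traffic to a per-core power mode.
            if score > 0.7:
                return "performance"  # keep cores at full frequency
            if score > 0.3:
                return "balanced"
            return "power-save"       # allow deeper sleep states

        cluster = [NodeMetrics("node-a", 0.9, 2e6), NodeMetrics("node-b", 0.1, 1e4)]
        for node in cluster:
            mode = power_mode_for(predict_criticality(node))
            print(f"{node.node_id}: recommend '{mode}' for its processing cores")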

    DYNAMIC SERVICE REBALANCING IN NETWORK INTERFACE CARDS HAVING PROCESSING UNITS

    Publication No.: US20240380701A1

    Publication Date: 2024-11-14

    Application No.: US18316668

    Application Date: 2023-05-12

    Abstract: An edge services controller may use a service scheduling algorithm to incrementally deploy services on Network Interface Cards (NICs) of a NIC fabric. The edge services controller may assign services to specific nodes depending on the resources available on those nodes, which may include CPU compute, DPU compute, node bandwidth, etc. The edge services controller may also consider the distance between services that communicate with each other (i.e., the hop count between nodes if two communicating services are placed on separate nodes) and the weight of communication between the services. Two services that communicate heavily with each other may consume more bandwidth, so placing them farther apart is more detrimental than keeping them close together; the hop count between them may therefore be reduced depending on the bandwidth consumed by their inter-service communications.
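
    The placement trade-off described above can be illustrated with a short Python sketch. The hop-count table, the CPU-only resource model, and the greedy minimum-cost strategy are assumptions chosen for illustration, not the patent's actual algorithm.

        # hops[(a, b)] is the hop count between NIC fabric nodes a and b.
        hops = {("n1", "n1"): 0, ("n1", "n2"): 1, ("n2", "n1"): 1, ("n2", "n2"): 0}
        free_cpu = {"n1": 4.0, "n2": 4.0}            # available compute per node
        comm_weight = {("svc-a", "svc-b"): 100.0}    # inter-service traffic weight
        placement = {"svc-a": "n1"}                  # services already scheduled

        def cost_of(svc: str, node: str) -> float:
            # Communication cost of placing svc on node: weight x hop count
            # to each already-placed service it talks to.
            total = 0.0
            for (s, t), w in comm_weight.items():
                peer = t if s == svc else (s if t == svc else None)
                if peer in placement:
                    total += w * hops[(node, placement[peer])]
            return total

        def schedule(svc: str, cpu_needed: float) -> str:
            # Greedily pick the feasible node that minimizes communication cost.
            candidates = [n for n, c in free_cpu.items() if c >= cpu_needed]
            best = min(candidates, key=lambda n: cost_of(svc, n))
            placement[svc] = best
            free_cpu[best] -= cpu_needed
            return best

        print(schedule("svc-b", 2.0))  # lands on "n1", next to its heavy peer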

    INTELLIGENT FIREWALL FLOW CREATOR

    Publication No.: US20240179126A1

    Publication Date: 2024-05-30

    Application No.: US18472042

    Application Date: 2023-09-21

    CPC classification number: H04L63/0263 H04L41/16 H04L63/0236

    Abstract: Example systems, methods, and storage media are described. An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which, when executed by the processing circuitry, cause the network system to obtain telemetry data, the telemetry data comprising indications of creations of instances of a flow. The instructions cause the network system to, based on the indications of the creations of the instances of the flow, determine a pattern of creation of the instances of the flow. The instructions cause the network system to, based on the pattern of creation of the instances of the flow, generate an action entry in a policy table for a particular instance of the flow prior to receiving a first packet of the particular instance of the flow.
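
    A minimal sketch of that idea, assuming a roughly periodic creation pattern: infer the period from the creation timestamps in the telemetry, then install the action entry ahead of the next predicted instance. The telemetry layout, flow key, and policy-table structure are illustrative assumptions.

        flow_creations = [0.0, 60.0, 120.1, 179.9]  # seconds: observed instances

        def predicted_next(times: list[float]) -> float:
            # Average the gaps between creations and project one period forward.
            gaps = [b - a for a, b in zip(times, times[1:])]
            return times[-1] + sum(gaps) / len(gaps)

        policy_table: dict[tuple, str] = {}

        def preinstall(flow_key: tuple, action: str) -> None:
            # Create the action entry before the first packet of the instance.
            policy_table[flow_key] = action

        next_t = predicted_next(flow_creations)
        preinstall(("10.0.0.1", "10.0.0.2", 443), "allow")
        print(f"entry installed ahead of predicted creation at t={next_t:.1f}s")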

    USING NETWORK INTERFACE CARDS HAVING PROCESSING UNITS TO DETERMINE LATENCY

    Publication No.: US20230006904A1

    Publication Date: 2023-01-05

    Application No.: US17806865

    Application Date: 2022-06-14

    Abstract: A system is configured to compute a latency between a first computing device and a second computing device. The system includes a network interface card (NIC) of the first computing device. The NIC includes a processing unit and a set of interfaces configured to receive and send one or more packets. The processing unit is configured to identify information indicative of a forward packet; compute, based on a first time corresponding to the forward packet and a second time corresponding to a reverse packet associated with the forward packet, a latency between the first computing device and the second computing device, wherein the second computing device includes a destination of the forward packet and a source of the reverse packet; and output information indicative of the latency between the first computing device and the second computing device.
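
    The timestamp matching can be sketched as follows; the flow keying and the in-memory bookkeeping are assumptions, not the NIC's actual data path.

        forward_seen: dict[tuple, float] = {}

        def on_forward_packet(flow_key: tuple, t: float) -> None:
            # Record when the NIC saw the forward packet.
            forward_seen[flow_key] = t

        def on_reverse_packet(flow_key: tuple, t: float) -> float | None:
            # Match the reverse packet to its forward packet; the gap between
            # the two times is the latency between the two computing devices.
            t_fwd = forward_seen.pop(flow_key, None)
            return None if t_fwd is None else t - t_fwd

        on_forward_packet(("10.0.0.1", "10.0.0.2", 42), t=1.0)
        latency = on_reverse_packet(("10.0.0.1", "10.0.0.2", 42), t=1.0035)
        print(f"measured latency: {latency * 1e3:.1f} ms")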

    DISTRIBUTED APPLICATION CALL PATH PERFORMANCE ANALYSIS

    Publication No.: US20250112851A1

    Publication Date: 2025-04-03

    Application No.: US18478260

    Application Date: 2023-09-29

    Abstract: In general, techniques are described for managing a distributed application based on call paths among the multiple services of the distributed application that traverse underlying network infrastructure. In an example, a method comprises determining, by a computing system, and for a distributed application implemented with a plurality of services, a call path from an entry endpoint service of the plurality of services to a terminating endpoint service of the plurality of services; determining, by the computing system, a corresponding network path for each pair of adjacent services from a plurality of pairs of services that communicate for the call path; and, based on a performance indicator for a network device of the corresponding network path meeting a threshold, performing, by the computing system, one or more of: reconfiguring the network infrastructure; or redeploying one of the plurality of services to a different compute node.
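
    The call-path walk can be sketched in a few lines; the topology tables, the latency indicator, and the threshold value below are placeholder assumptions.

        call_path = ["gateway", "auth", "orders"]      # entry -> terminating
        service_node = {"gateway": "c1", "auth": "c2", "orders": "c3"}
        network_path = {("c1", "c2"): ["sw1"], ("c2", "c3"): ["sw1", "sw2"]}
        device_latency_ms = {"sw1": 0.2, "sw2": 9.5}   # performance indicator
        THRESHOLD_MS = 5.0

        for src, dst in zip(call_path, call_path[1:]): # adjacent service pairs
            path = network_path[(service_node[src], service_node[dst])]
            for device in path:
                if device_latency_ms[device] >= THRESHOLD_MS:
                    # Remediate: reconfigure the network or redeploy a service.
                    print(f"{device} on {src}->{dst} exceeds threshold; "
                          f"consider redeploying {dst} to another compute node")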

    INTELLIGENT FIREWALL POLICY PROCESSOR

    Publication No.: US12267300B2

    Publication Date: 2025-04-01

    Application No.: US18472050

    Application Date: 2023-09-21

    Abstract: An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which cause the system to obtain telemetry data, the telemetry data being associated with a plurality of applications running on a plurality of hosts. The instructions cause the system to, based on the telemetry data, determine a subset of applications of the plurality of applications that run on a first host of the plurality of hosts. The instructions cause the system to determine a subset of firewall policies of a plurality of firewall policies, each of the subset of firewall policies applying to at least one respective application of the subset of applications. The instructions cause the system to generate an indication of the subset of firewall policies and send the indication to a management plane of a distributed firewall.
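
    One way to picture the policy-subset derivation; the telemetry shape and the application-matching rule are assumptions for illustration.

        telemetry = [("host-1", "nginx"), ("host-1", "redis"), ("host-2", "nginx")]
        policies = {
            "allow-http": {"nginx"},    # policy -> applications it applies to
            "allow-cache": {"redis"},
            "allow-db": {"postgres"},
        }

        def policies_for(host: str) -> list[str]:
            # Keep only the policies that apply to at least one application
            # observed running on this host.
            apps = {app for h, app in telemetry if h == host}
            return [name for name, targets in policies.items() if targets & apps]

        subset = policies_for("host-1")
        print(f"send to management plane for host-1: {subset}")
        # -> ['allow-http', 'allow-cache']; 'allow-db' is omitted for this host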

    DEPENDENCY-AWARE SMART GREEN WORKLOAD SCALER

    Publication No.: US20250088434A1

    Publication Date: 2025-03-13

    Application No.: US18759383

    Application Date: 2024-06-28

    Abstract: An example system may include one or more memories and one or more processors. The one or more processors are configured to determine that a first workload depends on one or more other workloads. The one or more processors are configured to determine a measure of first carbon emission associated with the first workload and determine a predicted measure of second carbon emission associated with the one or more other workloads. The one or more processors are configured to determine a combined emission, the combined emission including the measure of the first carbon emission and the predicted measure of the second carbon emission. The one or more processors are configured to determine a replica count of the first workload based on the combined emission and an emission threshold and schedule spawning of replicas of the first workload or destruction of replicas of the first workload to implement the replica count.
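
    A worked sketch of the replica-count decision under an emission budget; the per-replica emission figures and the floor of one replica are assumptions.

        import math

        own_emission = 2.0         # emission per replica of the first workload
        dependency_emission = 3.0  # predicted emission induced in dependencies
        emission_threshold = 18.0  # budget for the workload plus dependencies

        combined = own_emission + dependency_emission
        replica_count = max(1, math.floor(emission_threshold / combined))
        print(f"target replicas: {replica_count}")  # 18 / 5 -> 3 replicas

        current = 5
        if current > replica_count:
            print(f"schedule destruction of {current - replica_count} replicas")
        elif current < replica_count:
            print(f"schedule spawning of {replica_count - current} replicas")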
