CONTROLLING FAIR BANDWIDTH ALLOCATION EFFICIENTLY
    Invention application, pending (published)

    Publication No.: US20160212065A1

    Publication Date: 2016-07-21

    Application No.: US14601214

    Application Date: 2015-01-20

    CPC classification number: H04L47/783 H04L47/525 H04L47/528 H04L47/6265

    Abstract: Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers, by allocating the respective subscribed portion of bandwidth associated with each respective client that is active, by a predefined first deadline, with residual bandwidth that is unused by the respective clients being shared proportionately among respective active clients by a predefined second deadline, while minimizing coordination among micro-schedulers by the macro-scheduler periodically adjusting respective bandwidth allocations to each micro-scheduler.
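
    The two-phase allocation described in the abstract can be pictured with a small sketch. The Python below is not taken from the patent; the function name, the water-filling loop, and the choice to share residual bandwidth in proportion to subscriptions are assumptions made for illustration only.

        def allocate_bandwidth(link_capacity, subscriptions, demands):
            """Sketch of the two-deadline allocation: active clients first get
            their subscribed share (first deadline), then residual bandwidth is
            shared among active clients in proportion to their subscriptions
            (second deadline)."""
            active = {c for c, d in demands.items() if d > 0}

            # First phase: each active client receives min(subscription, demand).
            alloc = {c: min(subscriptions[c], demands[c]) if c in active else 0.0
                     for c in subscriptions}

            # Residual bandwidth left unused by inactive or under-demanding clients.
            residual = link_capacity - sum(alloc.values())

            # Second phase: water-fill the residual over clients with unmet demand,
            # proportionally to their subscriptions (an illustrative choice).
            while residual > 1e-9:
                hungry = [c for c in active if demands[c] - alloc[c] > 1e-9]
                if not hungry:
                    break
                total_sub = sum(subscriptions[c] for c in hungry)
                granted = 0.0
                for c in hungry:
                    grant = min(residual * subscriptions[c] / total_sub,
                                demands[c] - alloc[c])
                    alloc[c] += grant
                    granted += grant
                residual -= granted
            return alloc

        # Example: client "b" is idle, so its unused share flows to "a" and "c".
        print(allocate_bandwidth(10.0,
                                 subscriptions={"a": 5.0, "b": 3.0, "c": 2.0},
                                 demands={"a": 9.0, "b": 0.0, "c": 4.0}))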

    NETWORK TRAFFIC RATE LIMITING IN COMPUTING SYSTEMS

    Publication No.: US20190081899A1

    Publication Date: 2019-03-14

    Application No.: US15907546

    Application Date: 2018-02-28

    Abstract: Distributed computing systems, devices, and associated methods of packet routing are disclosed herein. In one embodiment, a computing device includes a field programmable gate array (“FPGA”) that includes an inbound processing path and outbound processing path in opposite processing directions. The inbound processing path can forward a packet received from the computer network to a buffer on the FPGA instead of the NIC. The outbound processing path includes an outbound multiplexer having a rate limiter circuit that only forwards the received packet from the buffer back to the computer network when a virtual port corresponding to the packet has sufficient transmission allowance. The outbound multiplexer can also periodically increment the transmission allowance based on a target bandwidth for the virtual port.
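
    The per-virtual-port transmission allowance behaves like a token bucket. The sketch below is a software model of that behaviour, not the FPGA circuit; the class name, the refill interval, and the burst cap are illustrative assumptions.

        class PortRateLimiter:
            """Software model of the per-virtual-port transmission allowance:
            a packet is forwarded only if the allowance covers its length, and
            the allowance is topped up periodically from the target bandwidth."""

            def __init__(self, target_bps, refill_interval_s=0.001, burst_bytes=None):
                self.refill_bytes = target_bps / 8.0 * refill_interval_s
                self.burst_bytes = burst_bytes if burst_bytes is not None else 4 * self.refill_bytes
                self.allowance = self.burst_bytes

            def refill(self):
                # Called once per refill interval: the periodic increment of the
                # allowance based on the virtual port's target bandwidth.
                self.allowance = min(self.burst_bytes, self.allowance + self.refill_bytes)

            def try_forward(self, packet_len):
                # Forward from the buffer only when there is sufficient allowance;
                # otherwise the packet stays buffered until a later refill.
                if self.allowance >= packet_len:
                    self.allowance -= packet_len
                    return True
                return False

        # Example: 100 Mbit/s target, 1 ms refill interval -> 12,500 bytes per tick.
        limiter = PortRateLimiter(target_bps=100_000_000)
        print(limiter.try_forward(1500), limiter.try_forward(200_000))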

    REMOTE DIRECT MEMORY ACCESS IN COMPUTING SYSTEMS

    Publication No.: US20190079897A1

    Publication Date: 2019-03-14

    Application No.: US15824925

    Application Date: 2017-11-28

    Abstract: Distributed computing systems, devices, and associated methods of remote direct memory access (“RDMA”) packet routing are disclosed herein. In one embodiment, a server includes a main processor, a network interface card (“NIC”), and a field programmable gate array (“FPGA”) operatively coupled to the main processor via the NIC. The FPGA includes an inbound processing path having an inbound packet buffer configured to receive an inbound packet from the computer network, a NIC buffer, and a multiplexer between the inbound packet buffer and the NIC, and between the NIC buffer and the NIC. The FPGA also includes an outbound processing path having an outbound action circuit having an input to receive the outbound packet from the NIC, a first output to the computer network, and a second output to the NIC buffer in the inbound processing path.
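
    The inbound and outbound processing paths can be pictured with a small software model. The following is only an illustration of the data flow named in the abstract; the buffer priority and the loop-back decision are assumptions, and none of the FPGA logic is reproduced.

        from collections import deque

        class RdmaPathModel:
            """Model of the two paths: a multiplexer feeds the NIC from either
            the inbound packet buffer or the NIC buffer, and the outbound action
            circuit sends a packet to the network or diverts it into the NIC
            buffer on the inbound path."""

            def __init__(self):
                self.inbound_packet_buffer = deque()  # packets from the computer network
                self.nic_buffer = deque()             # outbound packets looped back toward the NIC
                self.to_nic = deque()                 # multiplexer output toward the NIC
                self.to_network = deque()             # outbound action circuit's first output

            def receive_from_network(self, packet):
                self.inbound_packet_buffer.append(packet)

            def inbound_mux_step(self):
                # Multiplexer between the two buffers and the NIC; draining the
                # NIC buffer first is an assumption, not stated in the abstract.
                source = self.nic_buffer or self.inbound_packet_buffer
                if source:
                    self.to_nic.append(source.popleft())

            def outbound_action(self, packet, loop_back_to_nic):
                # First output: the computer network.  Second output: the NIC
                # buffer on the inbound processing path.
                if loop_back_to_nic:
                    self.nic_buffer.append(packet)
                else:
                    self.to_network.append(packet)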

    CONTROLLING FAIR BANDWIDTH ALLOCATION EFFICIENTLY

    Publication No.: US20190007338A1

    Publication Date: 2019-01-03

    Application No.: US16123193

    Application Date: 2018-09-06

    Abstract: Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers, by allocating the respective subscribed portion of bandwidth associated with each respective client that is active, by a predefined first deadline, with residual bandwidth that is unused by the respective clients being shared proportionately among respective active clients by a predefined second deadline, while minimizing coordination among micro-schedulers by the macro-scheduler periodically adjusting respective bandwidth allocations to each micro-scheduler.
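
    This application shares its abstract with the earlier publication above. A complementary sketch of the macro-scheduler's periodic adjustment, which keeps coordination limited to one allocation update per period, is given below; the class names, the period, and the reuse of the allocate_bandwidth() sketch from above are illustrative assumptions, not the patent's implementation.

        import time

        class MicroScheduler:
            """Per-client scheduler that locally enforces the allocation it was
            last handed, without coordinating with other micro-schedulers."""

            def __init__(self, subscription):
                self.subscription = subscription
                self.allocation = subscription   # starts at the subscribed share
                self.current_demand = 0.0

            def set_allocation(self, bandwidth):
                self.allocation = bandwidth      # the only coordination point


        class MacroScheduler:
            """Periodically recomputes all allocations (e.g. with the
            allocate_bandwidth() sketch above) and pushes one update to each
            micro-scheduler per period."""

            def __init__(self, link_capacity, micros, allocate, period_s=0.01):
                self.link_capacity = link_capacity
                self.micros = micros             # dict: client name -> MicroScheduler
                self.allocate = allocate
                self.period_s = period_s

            def adjust_once(self):
                subs = {c: m.subscription for c, m in self.micros.items()}
                demands = {c: m.current_demand for c, m in self.micros.items()}
                for client, bw in self.allocate(self.link_capacity, subs, demands).items():
                    self.micros[client].set_allocation(bw)

            def run(self):
                while True:
                    self.adjust_once()
                    time.sleep(self.period_s)    # periodic adjustment interval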

    VIRTUAL FILTERING PLATFORM IN DISTRIBUTED COMPUTING SYSTEMS

    Publication No.: US20180262556A1

    Publication Date: 2018-09-13

    Application No.: US15639319

    Application Date: 2017-06-30

    Inventor: Daniel Firestone

    Abstract: Computing systems, devices, and associated methods of operation of filtering packets at virtual switches implemented at hosts in a distributed computing system are disclosed herein. In one embodiment, a method includes receiving, at the virtual switch, a packet having a header and a payload and processing, at the virtual switch, the received packet based on multiple match action tables arranged in a hierarchy in which first and second layers individually contain one or more match action tables that individually contain one or more entries each containing a condition and a corresponding processing action.
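
    A hierarchy of match action tables can be sketched in a few lines. The types below are illustrative: the abstract does not specify how conditions or actions are encoded, so plain Python callables stand in for them here.

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List, Tuple

        Packet = Dict[str, object]                 # e.g. {"header": {...}, "payload": b"..."}
        Condition = Callable[[Packet], bool]
        Action = Callable[[Packet], Packet]

        @dataclass
        class MatchActionTable:
            """Entries each pair a condition with a processing action; the first
            entry whose condition matches the packet is applied."""
            entries: List[Tuple[Condition, Action]] = field(default_factory=list)

            def process(self, packet: Packet) -> Packet:
                for condition, action in self.entries:
                    if condition(packet):
                        return action(packet)
                return packet                      # no entry matched: pass through

        @dataclass
        class Layer:
            """One layer of the hierarchy, containing one or more tables."""
            tables: List[MatchActionTable] = field(default_factory=list)

        def process_at_virtual_switch(packet: Packet, layers: List[Layer]) -> Packet:
            """Run the packet through the first, second, and any further layers
            in order, applying every table within each layer."""
            for layer in layers:
                for table in layer.tables:
                    packet = table.process(packet)
            return packet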

    Load balancing in distributed computing systems

    Publication No.: US10652320B2

    Publication Date: 2020-05-12

    Application No.: US15438585

    Application Date: 2017-02-21

    Abstract: Techniques for facilitating load balancing in distributed computing systems are disclosed herein. In one embodiment, a method includes receiving, at a destination server, a request packet from a load balancer via the computer network requesting a remote direct memory access (“RDMA”) connection between an originating server and one or more other servers selectable by the load balancer. The method can also include configuring, at the destination server, a rule for processing additional packets transmittable to the originating server via the RDMA connection based on the received request packet. The rule is configured to encapsulate an outgoing packet transmittable to the originating server with an outer header having a destination field containing a network address of the originating server and a source field containing another network address of the destination server.
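
    The encapsulation rule can be illustrated with a short sketch. Field and class names are assumptions; only the address placement stated in the abstract (destination field = originating server, source field = destination server) is taken from the text.

        from dataclasses import dataclass

        @dataclass
        class OuterHeader:
            destination: str   # network address of the originating server
            source: str        # network address of the destination server

        @dataclass
        class EncapRule:
            """Rule configured at the destination server for outgoing packets
            sent to the originating server over the RDMA connection."""
            originating_addr: str
            destination_addr: str

            def encapsulate(self, inner_packet: bytes):
                # Wrap the outgoing packet with the outer header described in
                # the abstract.
                outer = OuterHeader(destination=self.originating_addr,
                                    source=self.destination_addr)
                return outer, inner_packet

        # Example: destination server 10.0.0.2 encapsulating a packet for the
        # originating server 10.0.0.1 (addresses are made up).
        rule = EncapRule(originating_addr="10.0.0.1", destination_addr="10.0.0.2")
        outer_header, payload = rule.encapsulate(b"rdma-payload")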
