Computerized methods and systems for managing cloud computer services

    Publication No.: US10911367B2

    Publication Date: 2021-02-02

    Application No.: US16271966

    Filing Date: 2019-02-11

    Abstract: Systems, methods, and other embodiments associated with managing instances of services are described. In one embodiment, a method includes constructing pre-provisioned instances of a service within a first pool and constructing pre-orchestrated instances of the service within a second pool. In response to receiving a request for the service, the method executes executable code of a first pre-orchestrated instance as an executing instance and removes that pre-orchestrated instance from the second pool. A pre-provisioned instance is selected from the first pool to create a second pre-orchestrated instance within the second pool, and the pre-provisioned instance is removed from the first pool.
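
    A minimal Python sketch of the two-pool handoff may make the flow easier to follow; the class and method names (ServicePools, handle_request, orchestrate) and the deque-backed pools are assumptions for illustration, not the patented implementation.

```python
from collections import deque

class PreOrchestratedInstance:
    def execute(self):
        # Executable code of the instance runs here.
        return "service instance running"

class PreProvisionedInstance:
    def orchestrate(self):
        # Finish orchestration so the instance only needs executing later.
        return PreOrchestratedInstance()

class ServicePools:
    def __init__(self, pre_provisioned, pre_orchestrated):
        self.first_pool = deque(pre_provisioned)    # pre-provisioned instances
        self.second_pool = deque(pre_orchestrated)  # pre-orchestrated instances

    def handle_request(self):
        # Execute a pre-orchestrated instance and remove it from the second pool.
        instance = self.second_pool.popleft()
        result = instance.execute()
        # Select a pre-provisioned instance, remove it from the first pool, and
        # use it to create a new pre-orchestrated instance in the second pool.
        if self.first_pool:
            promoted = self.first_pool.popleft()
            self.second_pool.append(promoted.orchestrate())
        return result

pools = ServicePools([PreProvisionedInstance()], [PreOrchestratedInstance()])
print(pools.handle_request())   # "service instance running"
```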

    Targeted selection of cascade ports

    Publication No.: US10911296B2

    Publication Date: 2021-02-02

    Application No.: US15933902

    Filing Date: 2018-03-23

    Abstract: Techniques are described for providing targeted selection of cascade ports of an aggregation device. In one example, the disclosed techniques enable dynamic assignment of active and backup cascade ports of an aggregation device for each extended port of satellite devices. In this example, rather than allocating resources for each of the extended ports of the satellite devices on all of the cascade ports of the aggregation device, the aggregation device instead allocates resources for each extended port only on the active and backup cascade ports assigned to that extended port. Techniques are also described for steering traffic to the backup cascade port in the event the assigned active cascade port is unreachable; if the active cascade port remains unreachable for a specified duration, the aggregation device may assign new active and backup cascade ports for the extended port.
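
    The assignment and failover behaviour can be sketched roughly as follows; the class name CascadePortTable, the random selection policy, and the five-second reassignment window are invented for illustration and are not taken from the patent.

```python
import random

class CascadePortTable:
    def __init__(self, cascade_ports, reassign_after=5.0):
        self.cascade_ports = list(cascade_ports)
        self.reassign_after = reassign_after
        self.assignments = {}        # extended_port -> (active, backup)
        self.unreachable_since = {}  # cascade_port -> time it became unreachable

    def assign(self, extended_port):
        # Resources are allocated only on the two chosen cascade ports,
        # not on every cascade port of the aggregation device.
        active, backup = random.sample(self.cascade_ports, 2)
        self.assignments[extended_port] = (active, backup)

    def select_port(self, extended_port, reachable, now):
        active, backup = self.assignments[extended_port]
        if reachable(active):
            self.unreachable_since.pop(active, None)
            return active
        # Steer traffic to the backup while the active cascade port is down.
        down_since = self.unreachable_since.setdefault(active, now)
        if now - down_since >= self.reassign_after:
            # Active stayed unreachable too long: choose a new active/backup pair.
            self.assign(extended_port)
        return backup

table = CascadePortTable(["cp0", "cp1", "cp2"])
table.assign("ep7")
# Output depends on the random assignment above.
print(table.select_port("ep7", reachable=lambda p: p != "cp1", now=0.0))
```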

    NETWORK DEVICE HAVING REDUCED LATENCY

    Publication No.: US20210029054A1

    Publication Date: 2021-01-28

    Application No.: US16949117

    Filing Date: 2020-10-14

    Abstract: A network device includes a transmit buffer from which data is transmitted to a network, and a packet buffer from which data chunks are transmitted to the transmit buffer in response to read requests. The packet buffer has a maximum read latency from receipt of a read request to transmission of a responsive data chunk, and receives read requests including a first read request for a first data chunk of a network packet and a plurality of additional read requests for additional data chunks of the network packet. A latency timer monitors elapsed time from receipt of the first read request and outputs a latency signal when the elapsed time reaches the maximum read latency. Transmission logic waits until the elapsed time equals the maximum read latency and then transmits the first data chunk from the transmit buffer, without regard to a fill level of the transmit buffer.
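
    A small timing sketch may clarify the role of the latency timer; here a software deque stands in for the hardware transmit buffer, and MAX_READ_LATENCY_S is an invented worst-case figure rather than a value from the patent.

```python
import time
from collections import deque

MAX_READ_LATENCY_S = 2e-6   # assumed worst-case packet-buffer read latency

def transmit_packet(transmit_buffer, first_read_time, num_chunks, send):
    # Latency timer: wait until the maximum read latency has elapsed since the
    # first read request. By then the first chunk is guaranteed to have reached
    # the transmit buffer, so transmission can start without ever checking the
    # buffer's fill level.
    while time.monotonic() - first_read_time < MAX_READ_LATENCY_S:
        pass
    sent = 0
    while sent < num_chunks:
        if transmit_buffer:   # later chunks keep arriving while earlier ones go out
            send(transmit_buffer.popleft())
            sent += 1

buf = deque([b"chunk0", b"chunk1"])
transmit_packet(buf, time.monotonic(), 2, send=print)
```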

    Telemetry for cloud switches queuing excursion

    Publication No.: US10904157B2

    Publication Date: 2021-01-26

    Application No.: US16376617

    Filing Date: 2019-04-05

    Abstract: Telemetry for cloud switches queuing excursion may be provided. A first hysteresis threshold and a second hysteresis threshold for a queue of the network switch may be specified. Next, a queue position relative to the first hysteresis threshold and the second hysteresis threshold may be determined for each incoming packet for the queue. A number of crossings, in which the queue position passes the first hysteresis threshold and subsequently passes the second hysteresis threshold within a first predetermined time period, may be determined. The number of data packets sent to the queue of the network switch may then be altered based on one or more of the number of crossings, the first hysteresis threshold, and the second hysteresis threshold.
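
    The hysteresis-based crossing count can be sketched as follows; the threshold values, the 0.8 back-off factor, and the per-window reset are illustrative choices, not the claimed algorithm.

```python
class QueueExcursionMonitor:
    def __init__(self, first_threshold, second_threshold):
        self.first = first_threshold    # upper hysteresis threshold (queue depth)
        self.second = second_threshold  # lower hysteresis threshold
        self.above_first = False
        self.crossings = 0

    def on_packet(self, queue_position):
        # One crossing = the queue position passes the first threshold and
        # subsequently passes back below the second threshold.
        if not self.above_first and queue_position > self.first:
            self.above_first = True
        elif self.above_first and queue_position < self.second:
            self.above_first = False
            self.crossings += 1

    def end_window(self, send_rate, max_crossings):
        # Alter the number of packets sent to the queue based on the number of
        # crossings observed in the predetermined time period.
        if self.crossings > max_crossings:
            send_rate *= 0.8
        self.crossings = 0
        return send_rate

mon = QueueExcursionMonitor(first_threshold=800, second_threshold=200)
for depth in (100, 900, 850, 150, 950, 100):
    mon.on_packet(depth)
print(mon.end_window(send_rate=10_000, max_crossings=1))   # two crossings: rate reduced
```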

    Data transmitting program, data transmitting device, and data transmitting method

    Publication No.: US10863005B2

    Publication Date: 2020-12-08

    Application No.: US16214529

    Filing Date: 2018-12-10

    Applicant: FUJITSU LIMITED

    Inventor: Takuma Maeda

    Abstract: A data transmission method for transmitting compressed data is disclosed. The method includes: classifying transmission target files into transmission groups; calculating, for each of the transmission target files, a first compression time taken to compress the file with a first compression system and a first transmission time taken to transmit the file after it has been compressed by the first compression system; and determining, for each of the transmission groups, the transmission order of the files belonging to the group based on the first compression time and the first transmission time.
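
    One plausible reading of the ordering step is sketched below; the dictionary layout and the "largest transmission-time surplus first" rule are assumptions, since the abstract does not state the exact criterion used.

```python
def order_group(group):
    # Send the files whose transmission takes long relative to their compression
    # first, so the link stays busy while later files are still being compressed.
    return sorted(group, key=lambda f: f["compression_time"] - f["transmission_time"])

group = [
    {"name": "a.log", "compression_time": 0.8, "transmission_time": 0.3},
    {"name": "b.csv", "compression_time": 0.2, "transmission_time": 0.9},
    {"name": "c.bin", "compression_time": 0.5, "transmission_time": 0.5},
]
print([f["name"] for f in order_group(group)])   # ['b.csv', 'c.bin', 'a.log']
```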

    Mitigating priority flow control deadlock in stretch topologies

    Publication No.: US10834010B2

    Publication Date: 2020-11-10

    Application No.: US16172659

    Filing Date: 2018-10-26

    Abstract: Embodiments provide for mitigating priority flow control deadlock in stretch topologies by: initializing a plurality of queues in a buffer of a leaf switch at a local cluster of a site having a plurality of clusters, wherein each queue of the plurality of queues corresponds to a respective one cluster of the plurality of clusters; receiving a pause command for no-drop traffic on the leaf switch, the pause command including an internal Class-of-Service (iCoS) identifier associated with a particular cluster of the plurality of clusters and a corresponding queue in the plurality of queues; and, in response to determining, based on the iCoS identifier, that the pause command was received from a remote spine switch associated with a different cluster than the local cluster, forwarding the pause command to a local spine switch in the local cluster and implementing the pause command on the corresponding queue in the buffer.
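
    The per-cluster queue handling can be sketched as follows; only the branch described in the abstract is modelled, and the class name, the iCoS-to-cluster mapping, and the callback are invented for illustration.

```python
class LeafSwitch:
    def __init__(self, local_cluster, clusters):
        self.local_cluster = local_cluster
        # One no-drop queue per cluster of the site.
        self.queues = {cluster: {"paused": False} for cluster in clusters}

    def on_pause(self, icos_cluster, from_remote_spine, forward_to_local_spine):
        # Pause received from a remote spine for a cluster other than the local
        # one: forward it to the local spine and pause only that cluster's queue,
        # leaving the other clusters' no-drop traffic unaffected.
        if from_remote_spine and icos_cluster != self.local_cluster:
            forward_to_local_spine(icos_cluster)
            self.queues[icos_cluster]["paused"] = True

leaf = LeafSwitch(local_cluster="west", clusters=["west", "east"])
leaf.on_pause("east", from_remote_spine=True,
              forward_to_local_spine=lambda c: print(f"pause forwarded for {c}"))
print(leaf.queues)   # only the "east" queue is paused
```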

    Systems and methods for predictive scheduling and rate limiting

    Publication No.: US10834009B2

    Publication Date: 2020-11-10

    Application No.: US16357019

    Filing Date: 2019-03-18

    Applicant: Intel Corporation

    Abstract: Systems and methods are disclosed for enhancing network performance by using modified traffic control (e.g., rate limiting and/or scheduling) techniques to control a rate of packet (e.g., data packet) traffic to a queue scheduled by a Quality of Service (QoS) engine for reading and transmission. In particular, the QoS engine schedules packets using estimated packet sizes before an actual packet size is known by a direct memory access (DMA) engine coupled to the QoS engine. The QoS engine subsequently compensates for discrepancies between the estimated packet sizes and actual packet sizes (e.g., when the DMA engine has received an actual packet size of the scheduled packet). Using these modified traffic control techniques that leverage estimated packet sizes may reduce and/or eliminate latency introduced by determining actual packet sizes.
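
    A toy sketch of scheduling on an estimate and reconciling later: a single byte counter stands in for the QoS engine's rate-limiting state, and the class and method names are invented rather than Intel's API.

```python
class PredictiveScheduler:
    def __init__(self, estimated_size):
        self.estimated_size = estimated_size
        self.bytes_charged = 0        # bytes counted against the queue's budget

    def schedule_packet(self):
        # The packet is scheduled using the estimated size, so the QoS decision
        # does not wait for the DMA engine to report the real length.
        self.bytes_charged += self.estimated_size

    def on_actual_size(self, actual_size):
        # Once the DMA engine knows the actual size, compensate for the
        # discrepancy so rate limiting converges on the true byte count.
        self.bytes_charged += actual_size - self.estimated_size

sched = PredictiveScheduler(estimated_size=1500)
sched.schedule_packet()       # charged 1500 bytes up front
sched.on_actual_size(900)     # corrected down by 600 bytes
print(sched.bytes_charged)    # 900
```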