MANAGING MULTICAST SERVICE CHAINS IN A CLOUD ENVIRONMENT

    Publication No.: US20190342354A1

    Publication Date: 2019-11-07

    Application No.: US15968690

    Filing Date: 2018-05-01

    Abstract: Techniques for provisioning multicast chains in a cloud-based environment are described herein. In an embodiment, an orchestration system sends, to a software-defined networking (SDN) controller, a particular model of a distributed computer program application comprising one or more sources, destinations, and virtualized appliances to be initiated by one or more host computers. The SDN controller determines one or more locations for the virtualized appliances and generates a particular updated model of the distributed computer program application, the updated model comprising the one or more locations for the virtualized appliances. The SDN controller sends the updated model of the distributed computer program application to the orchestration system. The orchestration system generates a mapping of virtualized appliances to available host computers of the one or more host computers based, at least in part, on the particular updated model of the distributed computer program application. Using the mapping of virtualized appliances to available host computers, the orchestration system sends instructions for initiating the virtualized appliances on the available host computers to one or more cloud management systems.
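The orchestrator/controller exchange in the abstract can be illustrated with a minimal sketch. All class names, the round-robin placement, and the host-selection rule here are hypothetical simplifications, not the patented method; the sketch only shows the flow of the model between the two components and the resulting appliance-to-host mapping.

```python
from dataclasses import dataclass, field

@dataclass
class AppModel:
    """Model of a distributed application: sources, destinations, appliances."""
    sources: list
    destinations: list
    appliances: list
    locations: dict = field(default_factory=dict)  # appliance -> chosen location

class SdnController:
    def place(self, model: AppModel) -> AppModel:
        # Toy placement policy: assign appliances to sites round-robin,
        # standing in for the controller's real location decision.
        sites = ["site-a", "site-b"]
        for i, appliance in enumerate(model.appliances):
            model.locations[appliance] = sites[i % len(sites)]
        return model  # the "updated model" returned to the orchestrator

class Orchestrator:
    def __init__(self, controller: SdnController, hosts_by_site: dict):
        self.controller = controller
        self.hosts_by_site = hosts_by_site  # location -> available host names

    def provision(self, model: AppModel) -> dict:
        updated = self.controller.place(model)       # ask controller for locations
        mapping = {}
        for appliance, site in updated.locations.items():
            # Map each appliance to an available host at its chosen location;
            # real systems would then instruct cloud management systems.
            mapping[appliance] = self.hosts_by_site[site][0]
        return mapping
```

For example, provisioning a model with two hypothetical appliances `"fw"` and `"nat"` against one host per site yields a mapping of each appliance to a host at the location the controller selected.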

    Scalable deep learning video analytics

    Publication No.: US10121103B2

    Publication Date: 2018-11-06

    Application No.: US15374571

    Filing Date: 2016-12-09

    Abstract: In one embodiment, a method includes receiving training data, the training data including training video data representing video of a location in a quiescent state, training a neural network using the training data to obtain a plurality of metrics, receiving current data, the current data including current video data representing video of the location at a current time period, generating a reconstruction error in the embedded space based on the plurality of metrics and the current video data, and generating, in response to determining that the reconstruction error is greater than a threshold, a notification indicative of the location being in a non-quiescent state.
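The reconstruction-error test in the abstract is the classic anomaly-detection pattern: learn an embedding from quiescent data, then flag inputs the embedding cannot reconstruct. As a sketch, the learned neural network is replaced here with a linear (PCA-style) embedding via SVD; the function names and the threshold value are illustrative assumptions, not the patented model.

```python
import numpy as np

def train_metrics(frames: np.ndarray, k: int = 2):
    """Learn a k-dim linear embedding from quiescent-state frames.
    Stands in for training a neural network to obtain the metrics."""
    mean = frames.mean(axis=0)
    # Principal directions of the centered training data.
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(frame, mean, basis):
    """Project a frame into the embedded space, reconstruct it,
    and measure how much is lost."""
    code = basis @ (frame - mean)
    recon = mean + basis.T @ code
    return float(np.linalg.norm(frame - recon))

def notify_if_non_quiescent(frame, mean, basis, threshold):
    # Error above threshold -> location is in a non-quiescent state.
    return reconstruction_error(frame, mean, basis) > threshold
```

A frame that lies in the subspace spanned by the quiescent training data reconstructs with near-zero error, while a frame with novel content produces a large error and triggers the notification.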

    Adaptive telemetry based on in-network cross domain intelligence

    Publication No.: US09749718B1

    Publication Date: 2017-08-29

    Application No.: US15215098

    Filing Date: 2016-07-20

    CPC classification number: H04Q9/00 G08C25/00

    Abstract: Disclosed are systems, methods, and computer-readable storage media for adaptive telemetry based on in-network cross domain intelligence. A telemetry server can receive at least a first telemetry data stream and a second telemetry data stream. The first telemetry data stream can provide data collected from a first data source and the second telemetry data stream can provide data collected from a second data source. The telemetry server can determine correlations between the first telemetry data stream and the second telemetry data stream that indicate redundancies between data included in the first telemetry data stream and the second telemetry data stream, and then adjust, based on the correlations between the first telemetry data stream and the second telemetry data stream, data collection of the second telemetry data stream to reduce redundant data included in the first telemetry data stream and the second telemetry data stream.

    CONTENT DISTRIBUTION SYSTEM CACHE MANAGEMENT

    Publication No.: US20170026286A1

    Publication Date: 2017-01-26

    Application No.: US14803162

    Filing Date: 2015-07-20

    Abstract: Content distribution system cache management may be provided. First, a sync packet may be received by a cache server from a first server. The sync packet may include a list indicating a cache server where a chunk is to be stored and the address for the chunk. Next, an address for the chunk may be obtained by the cache server by parsing the sync packet. The cache server may then determine that the chunk is not stored on the cache server by using the address for the chunk. Next, in response to determining that the chunk is not stored on the cache server, a connection may be opened between the first server and the cache server. The cache server may then receive the chunk over the connection and cache the chunk on the cache server.
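The sync-packet flow in the abstract amounts to: parse the chunk address, check the local store, and fetch over a new connection only on a miss. The sketch below models the sync packet as a dict and the connection as a direct method call; these representations, and all the class and field names, are hypothetical stand-ins for the wire protocol.

```python
class CacheServer:
    def __init__(self):
        self.store = {}  # chunk address -> chunk bytes

    def handle_sync(self, sync_packet: dict, origin) -> bytes:
        """Process a sync packet from the first server: parse out the chunk
        address, and fetch the chunk only if it is not already cached."""
        addr = sync_packet["chunk_address"]   # parse the sync packet
        if addr not in self.store:            # chunk not stored on this server
            chunk = origin.fetch(addr)        # open connection, receive chunk
            self.store[addr] = chunk          # cache the chunk locally
        return self.store[addr]

class OriginServer:
    """Plays the role of the first server; counts fetches to show that
    a cached chunk is never re-transferred."""
    def __init__(self, chunks: dict):
        self.chunks = chunks
        self.fetches = 0

    def fetch(self, addr: str) -> bytes:
        self.fetches += 1
        return self.chunks[addr]
```

Handling the same sync packet twice transfers the chunk only once: the second lookup hits the local store and no new connection is needed.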

    PARTITIONING AND PLACEMENT OF MODELS

    Publication No.: US20230053575A1

    Publication Date: 2023-02-23

    Application No.: US17578872

    Filing Date: 2022-01-19

    Abstract: This disclosure describes techniques and mechanisms for enabling a user to run heavy deep learning workloads on standard edge networks without off-loading computation to a cloud, leveraging the available edge computing resources, and efficiently partitioning and distributing a Deep Neural Network (DNN) over a network. The techniques enable the user to split a workload into multiple parts and process the workload on a set of smaller, less capable compute nodes in a distributed manner, without compromising on performance, and while meeting a Service Level Objective (SLO).
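Splitting a workload across a set of smaller compute nodes can be illustrated with a toy partitioner. A real SLO-aware scheme would account for communication cost and latency targets; this greedy split by per-node compute budget is purely an assumed simplification, and the function name and cost units are hypothetical.

```python
def partition_layers(layer_costs, node_capacities):
    """Greedy split of a sequential DNN: assign consecutive layers to each
    node until its compute budget is spent, preserving layer order so the
    partitions can be executed as a pipeline across the edge nodes."""
    parts = [[] for _ in node_capacities]
    node, used = 0, 0.0
    for i, cost in enumerate(layer_costs):
        # Move to the next node when this one's budget would be exceeded
        # (unless it is the last node, which must absorb the remainder).
        if used + cost > node_capacities[node] and node + 1 < len(node_capacities):
            node, used = node + 1, 0.0
        parts[node].append(i)
        used += cost
    return parts
```

Four equal-cost layers split over two equal nodes land two layers per node; three heavier layers over three tighter nodes land one per node.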

    Computer network packet transmission timing

    Publication No.: US10944852B2

    Publication Date: 2021-03-09

    Application No.: US16392533

    Filing Date: 2019-04-23

    Abstract: Establishing an expected transmit time at which a network interface controller (NIC) is expected to transmit a next packet. Enqueuing, with the NIC and before the expected transmit time, a packet P1 to be transmitted at the expected transmit time. Upon enqueuing P1, incrementing the expected transmit time by the expected transmit duration of P1. Transmitting at the NIC's line rate and timestamping enqueued P1 with its actual transmit time. Adjusting the expected transmit time by the difference between P1's actual transmit time and its expected transmit time. Requesting, before completion of transmitting P1, to transmit a packet P2 at time t(P2). Enqueuing, in sequence, zero or more filler packets P0, such that the current expected transmit time plus the duration of transmitting the P0s at the line rate equals t(P2). Transmitting at the line rate each enqueued P0. Upon enqueuing each P0, incrementing the expected transmit time by the expected transmit duration of that P0. Enqueuing P2 for transmission directly following enqueuing the final P0. Transmitting, by the NIC, enqueued P2 at t(P2).
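The filler-packet step is the core trick: since the NIC always transmits back-to-back at line rate, padding the queue with P0s whose total duration exactly consumes the gap makes P2 leave the wire at t(P2). The sketch below assumes, for simplicity, that the gap is an exact multiple of a fixed filler duration; in the described technique the P0 durations are chosen so their sum equals the gap. Times are abstract units and all names are illustrative.

```python
def schedule_exact_time(expected_tx, t_p2, filler_duration, p2_duration):
    """Enqueue zero or more filler packets P0 so that, transmitting
    back-to-back at line rate, P2 begins exactly at t(P2).
    Returns the queue of (packet, start_time) and the updated
    expected transmit time after P2."""
    queue = []
    while expected_tx < t_p2:
        queue.append(("P0", expected_tx))   # filler consumes part of the gap
        expected_tx += filler_duration      # advance the expected transmit time
    queue.append(("P2", expected_tx))       # P2 directly follows the final P0
    return queue, expected_tx + p2_duration
```

With the expected transmit time at 100, a request for t(P2) = 130, and filler packets of duration 10, three P0s consume the gap and P2 starts exactly at 130; if the expected time already equals t(P2), no filler is needed.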
