NETWORK DATA MINING TO DETERMINE USER INTEREST

    Publication (Announcement) Number: US20130318015A1

    Publication (Announcement) Date: 2013-11-28

    Application Number: US13958879

    Application Date: 2013-08-05

    CPC classification number: G06N20/00 G06F16/24578 G06N5/022 G06Q30/02

    Abstract: Mining information from network data traffic to determine interests of online network users is provided herein. A data packet received at a network interface device can be accessed and inspected at line rate speeds. Source or addressing information in the data packet can be extracted to identify an initiating and/or receiving device. The packet can be inspected to identify occurrences of keywords or data features related to one or more subject matters. A vector can be defined for a network device that indicates a relative rank of interest in various subject matters. Furthermore, statistical analysis can be implemented on data stored in one or more interest vectors to determine information pertinent to network user interests. The information can facilitate providing value-added products or services to network users.
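
    The abstract above describes building a per-device interest vector from keyword hits in inspected packets. The sketch below is a minimal illustration of that idea in Python; the subject-matter keyword lists, device identifiers, and the normalization used for the relative rank are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch of the interest-vector idea described in the abstract.
# Keyword lists, device IDs, and the ranking scheme are illustrative assumptions.
from collections import Counter, defaultdict

# Assumed mapping of subject matters to keywords/data features.
SUBJECT_KEYWORDS = {
    "sports":  {"score", "league", "playoff"},
    "finance": {"stock", "dividend", "portfolio"},
    "travel":  {"flight", "hotel", "itinerary"},
}

# One interest vector (subject -> hit count) per network device.
interest_vectors = defaultdict(Counter)

def inspect_packet(src_address: str, payload: str) -> None:
    """Extract addressing info and count keyword occurrences per subject."""
    tokens = set(payload.lower().split())
    for subject, keywords in SUBJECT_KEYWORDS.items():
        hits = len(tokens & keywords)
        if hits:
            interest_vectors[src_address][subject] += hits

def relative_ranks(src_address: str) -> dict:
    """Normalize a device's vector so entries indicate relative interest."""
    vector = interest_vectors[src_address]
    total = sum(vector.values()) or 1
    return {subject: count / total for subject, count in vector.items()}

inspect_packet("10.0.0.7", "Live score update from the league playoff game")
print(relative_ranks("10.0.0.7"))   # {'sports': 1.0}
```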

    Budgeted neural network architecture search system and method

    Publication (Announcement) Number: US12050979B2

    Publication (Announcement) Date: 2024-07-30

    Application Number: US16357603

    Application Date: 2019-03-19

    CPC classification number: G06N3/044 G06N3/08

    Abstract: A neural network architecture search may be conducted by a controller to generate a neural network. The controller may perform the search by generating a directed acyclic graph across nodes in a search space, the nodes representing compute operations for a neural network. As the search is performed, the controller may retrieve resource availability information to modify the likelihood that a generated neural network architecture includes previously unused nodes.
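
    A minimal sketch of the budgeted-search idea in the abstract: when free resource capacity is available, the controller's sampling is biased toward compute operations that earlier candidates have not used. The operation list, the resource signal, and the weighting rule are illustrative assumptions, not the claimed method.

```python
# Illustrative sketch of biasing an architecture search toward previously
# unused operations when resources are available; operation names, the budget
# signal, and the boost factor are assumptions.
import random

OPS = ["conv3x3", "conv5x5", "maxpool", "identity", "sep_conv"]

def sample_architecture(num_nodes: int, used_ops: set, resources_free: float):
    """Sample one op per DAG node, up-weighting unused ops when capacity is free."""
    arch = []
    for _ in range(num_nodes):
        # Base uniform weights, boosted for ops not used by earlier candidates
        # in proportion to the fraction of free resources (0.0 - 1.0).
        weights = [1.0 + resources_free * (op not in used_ops) for op in OPS]
        op = random.choices(OPS, weights=weights, k=1)[0]
        arch.append(op)
        used_ops.add(op)
    return arch

used = {"conv3x3", "identity"}
print(sample_architecture(num_nodes=4, used_ops=used, resources_free=0.8))
```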

    Optimizing serverless computing using a distributed computing framework

    Publication (Announcement) Number: US11016673B2

    Publication (Announcement) Date: 2021-05-25

    Application Number: US15931302

    Application Date: 2020-05-13

    Abstract: Aspects of the technology provide improvements to a Serverless Computing (SLC) workflow by determining when and how to optimize SLC jobs for computing in a Distributed Computing Framework (DCF). DCF optimization can be performed by abstracting SLC tasks into different workflow configurations to determine optimal arrangements for execution in a DCF environment. A process of the technology can include steps for receiving an SLC job including one or more SLC tasks, executing one or more of the tasks to determine a latency metric and a throughput metric for the SLC tasks, and determining if the SLC tasks should be converted to a Distributed Computing Framework (DCF) format based on the latency metric and the throughput metric. Systems and machine-readable media are also provided.
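
    The sketch below illustrates the decision step the abstract describes: profile the SLC tasks for latency and throughput, then decide whether the job should be converted to a DCF format. The profiling fields and thresholds are assumptions, not values from the patent.

```python
# Rough sketch of the conversion decision: profile SLC tasks, then convert the
# job to a DCF format if any task misses the latency or throughput target.
# Thresholds and profile fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    name: str
    latency_ms: float      # observed per-invocation latency
    throughput_rps: float  # observed requests handled per second

def should_convert_to_dcf(profiles, latency_budget_ms=200.0, min_throughput_rps=50.0):
    """Convert when serverless execution misses the latency or throughput target."""
    too_slow = any(p.latency_ms > latency_budget_ms for p in profiles)
    too_little = any(p.throughput_rps < min_throughput_rps for p in profiles)
    return too_slow or too_little

profiles = [TaskProfile("extract", 120.0, 80.0), TaskProfile("train", 450.0, 12.0)]
print(should_convert_to_dcf(profiles))  # True: "train" exceeds the latency budget
```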

    Provisioning using pre-fetched data in serverless computing environments

    Publication (Announcement) Number: US10771584B2

    Publication (Announcement) Date: 2020-09-08

    Application Number: US15827969

    Application Date: 2017-11-30

    Abstract: A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each such eligible worker node to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
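
    A simplified sketch of the master-node flow in the abstract: match a UDF's data locations and size against the resource tickets reported by workers, then send a pre-fetch command to the eligible ones. The ticket fields and the eligibility rule (enough free disk for the UDF data) are assumptions made for illustration.

```python
# Simplified sketch of the pre-fetch provisioning flow; ticket fields, the
# eligibility rule, and the command format are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WorkerTicket:
    worker_id: str
    free_memory_mb: int
    free_disk_mb: int

@dataclass
class UDF:
    name: str
    data_locations: list          # e.g. object-store URIs holding its input data
    data_size_mb: int

def eligible_workers(udf: UDF, tickets: list) -> list:
    """Workers with enough spare disk to hold the UDF's pre-fetched data."""
    return [t.worker_id for t in tickets if t.free_disk_mb >= udf.data_size_mb]

def send_prefetch(udf: UDF, worker_id: str) -> dict:
    """Stand-in for the master node's pre-fetch command to one worker."""
    return {"cmd": "prefetch", "worker": worker_id, "sources": udf.data_locations}

udf = UDF("resize_images", ["s3://bucket/raw/"], data_size_mb=512)
tickets = [WorkerTicket("w1", 2048, 1024), WorkerTicket("w2", 4096, 256)]
for wid in eligible_workers(udf, tickets):
    print(send_prefetch(udf, wid))   # only w1 has enough free disk
```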

    OPTIMIZING SERVERLESS COMPUTING USING A DISTRIBUTED COMPUTING FRAMEWORK

    Publication (Announcement) Number: US20200272338A1

    Publication (Announcement) Date: 2020-08-27

    Application Number: US15931302

    Application Date: 2020-05-13

    Abstract: Aspects of the technology provide improvements to a Serverless Computing (SLC) workflow by determining when and how to optimize SLC jobs for computing in a Distributed Computing Framework (DCF). DCF optimization can be performed by abstracting SLC tasks into different workflow configurations to determine optimal arrangements for execution in a DCF environment. A process of the technology can include steps for receiving an SLC job including one or more SLC tasks, executing one or more of the tasks to determine a latency metric and a throughput metric for the SLC tasks, and determining if the SLC tasks should be converted to a Distributed Computing Framework (DCF) format based on the latency metric and the throughput metric. Systems and machine-readable media are also provided.
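
    This entry is the published application of the granted patent above, so the abstract is identical. As a complementary illustration of abstracting SLC tasks into different workflow configurations, the sketch below compares keeping each task as its own serverless invocation against fusing the tasks into a single DCF stage. The overhead figures and cost model are assumptions, not the patented logic.

```python
# Sketch of comparing two workflow configurations for the same SLC tasks:
# per-task serverless invocations versus one fused DCF stage.
# Per-invocation overhead and the cost model are illustrative assumptions.
def serverless_cost(task_latency_ms, invoke_overhead_ms=60.0):
    """Each task pays its own cold-start/invocation overhead."""
    return sum(lat + invoke_overhead_ms for lat in task_latency_ms.values())

def fused_dcf_cost(task_latency_ms, framework_startup_ms=150.0):
    """All tasks run inside one distributed job; startup is paid once."""
    return framework_startup_ms + sum(task_latency_ms.values())

tasks = {"fetch": 40.0, "transform": 120.0, "store": 30.0}
print(serverless_cost(tasks), fused_dcf_cost(tasks))  # 370.0 vs 340.0 -> fuse into DCF
```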

    System and method for resource placement across clouds for data intensive workloads

    Publication (Announcement) Number: US10705882B2

    Publication (Announcement) Date: 2020-07-07

    Application Number: US15850230

    Application Date: 2017-12-21

    Abstract: Systems, methods, and computer-readable media are disclosed for determining a point of delivery (POD) device or network component on a cloud for workload and resource placement in a multi-cloud environment. A method includes determining a first amount of data for transitioning from performing a first function on input data to performing a second function on a first outcome of the first function; determining a second amount of data for transitioning from performing the second function on the first outcome to performing a third function on a second outcome of the second function; determining a processing capacity for each of one or more network nodes on which the first function and the third function are implemented; and selecting the network node for implementing the second function based on the first amount of data, the second amount of data, and the processing capacity for each of the network nodes.
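
    A rough sketch of the placement step in the abstract: score each candidate node by the time to move data into and out of the second function plus a term for its processing capacity, and pick the cheapest. The simple additive cost model and the node attributes are assumptions, not the patented selection logic.

```python
# Illustrative sketch of placing an intermediate function across candidate
# PODs/nodes; the cost model (transfer time plus an inverse-capacity term)
# and the node attributes are assumptions.
def select_node(nodes, data_in_gb, data_out_gb):
    """nodes: dict of node name -> {'capacity_gflops', 'bw_in_gbps', 'bw_out_gbps'}."""
    def cost(n):
        transfer = data_in_gb / n["bw_in_gbps"] + data_out_gb / n["bw_out_gbps"]
        compute = 1.0 / n["capacity_gflops"]
        return transfer + compute
    return min(nodes, key=lambda name: cost(nodes[name]))

pods = {
    "cloud-a-pod1": {"capacity_gflops": 500, "bw_in_gbps": 10, "bw_out_gbps": 10},
    "cloud-b-pod3": {"capacity_gflops": 900, "bw_in_gbps": 1,  "bw_out_gbps": 1},
}
# 40 GB flows into the second function, 5 GB flows out of it.
print(select_node(pods, data_in_gb=40, data_out_gb=5))   # favors the high-bandwidth POD
```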

    Multi-datacenter message queue
    Invention Grant

    Publication (Announcement) Number: US10476982B2

    Publication (Announcement) Date: 2019-11-12

    Application Number: US15154141

    Application Date: 2016-05-13

    Abstract: Approaches are disclosed for distributing messages across multiple data centers where the data centers do not store messages using a same message queue protocol. In some embodiments, a network element translates messages from a message queue protocol (e.g., Kestrel, RABBITMQ, APACHE Kafka, and ACTIVEMQ) to an application layer messaging protocol (e.g., XMPP, MQTT, WebSocket protocol, or other application layer messaging protocols). In other embodiments, a network element translates messages from an application layer messaging protocol to a message queue protocol. Using the new approaches disclosed herein, data centers communicate using, at least in part, application layer messaging protocols to decouple the message queue protocols used by the data centers and enable sharing messages between message queues in the data centers. Consequently, the data centers can share messages regardless of whether the underlying message queue protocols used by the data centers (and the network devices therein) are compatible with one another.
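
    A conceptual sketch of the bridge the abstract describes: re-frame a message taken from one data center's message queue protocol as an application layer message (an MQTT-style publish here), and unwrap it on the other side. The envelope fields and topic scheme are illustrative assumptions; no real broker client API is used.

```python
# Conceptual sketch of translating between a message queue protocol and an
# application layer messaging protocol; the envelope format is an assumption.
import json

def queue_to_app_layer(queue_name: str, body: bytes, headers: dict) -> dict:
    """Wrap a message-queue message (e.g. from RABBITMQ) as an MQTT-style publish."""
    return {
        "topic": f"dc-bridge/{queue_name}",           # topic derived from the source queue
        "payload": json.dumps({
            "headers": headers,
            "body": body.decode("utf-8", errors="replace"),
        }),
        "qos": 1,                                     # at-least-once, mirroring queue acks
    }

def app_layer_to_queue(message: dict) -> tuple:
    """Reverse direction: unwrap an application layer message back into queue form."""
    payload = json.loads(message["payload"])
    queue_name = message["topic"].split("/", 1)[1]
    return queue_name, payload["body"].encode("utf-8"), payload["headers"]

bridged = queue_to_app_layer("orders", b'{"id": 42}', {"content-type": "application/json"})
print(app_layer_to_queue(bridged)[0])   # 'orders'
```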
