AUTOMATIC AND REAL-TIME CELL PERFORMANCE EXAMINATION AND PREDICTION IN COMMUNICATION NETWORKS

    Publication Number: US20240022938A1

    Publication Date: 2024-01-18

    Application Number: US17864663

    Filing Date: 2022-07-14

    CPC classification number: H04W24/10 H04L41/16 H04B17/318 G06K9/6262 G06K9/6298

    Abstract: Aspects of the subject disclosure may include, for example, a method performed by a processing system; the method includes receiving a plurality of values of key performance indicators (KPIs) relating to performance of a cell on a communication network. The plurality of values of the KPIs includes labeled training data for training a machine learning (ML) model for the performance of the cell. The method further includes iteratively executing, using the labeled training data, a training procedure for the ML model; and testing the trained ML model. The labeled training data corresponds to ground truth data that may include a training data set, a validation data set and a test data set. The trained ML model, when deployed on a communication network, receives as input near-real time data regarding the performance of the cell and provides as output predictions of the performance of the cell. Other embodiments are disclosed.
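    The abstract describes an iterative train/validate/test workflow over labeled KPI data. Below is a minimal sketch of such a workflow, assuming scikit-learn, synthetic KPI values, and a generic gradient-boosting classifier as a stand-in for the unspecified ML model; the KPI names, thresholds, and labels are illustrative assumptions, not the patented implementation.

```python
# Illustrative only: synthetic KPI data and a generic classifier stand in for
# the unspecified ML model in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-cell KPI samples: throughput (Mbps), RSRP (dBm), drop rate (%).
kpis = np.column_stack([
    rng.normal(50, 15, 2000),    # throughput
    rng.normal(-95, 10, 2000),   # RSRP
    rng.exponential(1.0, 2000),  # session drop rate
])
# Hypothetical ground-truth label: 1 = degraded cell performance, 0 = normal.
labels = ((kpis[:, 0] < 40) & (kpis[:, 2] > 1.0)).astype(int)

# Split the labeled data into training, validation and test sets.
x_train, x_tmp, y_train, y_tmp = train_test_split(kpis, labels, test_size=0.4, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(x_tmp, y_tmp, test_size=0.5, random_state=0)

# Iteratively train, checking validation accuracy at increasing model sizes.
best_model, best_acc = None, 0.0
for n_estimators in (50, 100, 200):
    model = GradientBoostingClassifier(n_estimators=n_estimators, random_state=0)
    model.fit(x_train, y_train)
    acc = accuracy_score(y_val, model.predict(x_val))
    if acc > best_acc:
        best_model, best_acc = model, acc

# Final evaluation on the held-out test set before deployment.
print("test accuracy:", accuracy_score(y_test, best_model.predict(x_test)))

# Once deployed, near-real-time KPI samples for the cell would be fed to predict().
live_sample = np.array([[35.0, -101.0, 1.8]])
print("predicted performance label:", best_model.predict(live_sample))
```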

    SYSTEM AND METHOD FOR LOW LATENCY EDGE COMPUTING

    Publication Number: US20230085361A1

    Publication Date: 2023-03-16

    Application Number: US17991340

    Filing Date: 2022-11-21

    Abstract: Aspects of the subject disclosure may include, for example, a method in which a processing system receives data at an edge node of a network that also includes regional nodes and central nodes. The processing system also determines a latency criterion associated with an application for processing the data; the application corresponds to an application programming interface. The method also includes processing the data in accordance with the application, monitoring a latency associated with the processing, and determining whether the latency meets the latency criterion. The processing system dynamically assigns data processing resources so that the latency meets the latency criterion; the resources include computation, network and storage resources of the edge node, a central node, and a regional node in communication with the edge node and the central node. Other embodiments are disclosed.
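    The abstract describes checking measured latency against an application's latency criterion and reassigning processing across edge, regional and central resources. Here is a minimal sketch of that control loop; the node tiers, latency figures, and greedy dispatch heuristic are illustrative assumptions, not the claimed method.

```python
# Illustrative only: a toy dispatcher that checks measured latency against an
# application's latency criterion and reassigns processing among node tiers.
import time
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    tier: str               # "edge", "regional" or "central"
    base_latency_ms: float   # hypothetical processing + transport latency

def process(data: bytes, node: Node) -> float:
    """Pretend to process data on a node and return the observed latency in ms."""
    start = time.perf_counter()
    time.sleep(node.base_latency_ms / 1000.0)   # stand-in for real work
    return (time.perf_counter() - start) * 1000.0

def dispatch(data: bytes, latency_criterion_ms: float, nodes: list[Node]) -> Node:
    """Try nodes starting from the lowest-latency tier; keep the first that meets the criterion."""
    for node in sorted(nodes, key=lambda n: n.base_latency_ms):
        observed = process(data, node)
        if observed <= latency_criterion_ms:
            return node
    # No node met the criterion; fall back to the lowest-latency node.
    return min(nodes, key=lambda n: n.base_latency_ms)

nodes = [
    Node("edge-1", "edge", 5.0),
    Node("regional-1", "regional", 20.0),
    Node("central-1", "central", 60.0),
]
chosen = dispatch(b"sensor payload", latency_criterion_ms=25.0, nodes=nodes)
print("assigned to:", chosen.name, chosen.tier)
```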

    DYNAMIC WIRELESS NETWORK THROUGHPUT ADJUSTMENT

    Publication Number: US20230038198A1

    Publication Date: 2023-02-09

    Application Number: US17392932

    Filing Date: 2021-08-03

    Abstract: Dynamic wireless network throughput adjustment is provided herein. A method can include determining, by a system comprising a processor, a sector of a communication network for which an amount of congestion present in the sector is greater than a congestion threshold; selecting, by the system from among respective network equipment operating in the sector, target network equipment for throughput adjustment based on equipment performance metrics respectively associated with the respective network equipment; and facilitating, by the system, adjusting a throughput of the target network equipment by an adjustment amount determined based on target equipment performance metrics, of the equipment performance metrics, associated with the target network equipment.
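    The abstract outlines three steps: find a sector whose congestion exceeds a threshold, pick target equipment from its performance metrics, and adjust that equipment's throughput by a metric-derived amount. A minimal sketch of those steps follows; the threshold, metrics, and adjustment formula are illustrative assumptions.

```python
# Illustrative only: pick the sector equipment contributing most to congestion
# and compute a throughput adjustment from its own performance metrics.
from dataclasses import dataclass

CONGESTION_THRESHOLD = 0.8  # hypothetical fraction of sector capacity in use

@dataclass
class Equipment:
    equipment_id: str
    throughput_mbps: float
    utilization: float  # hypothetical per-equipment performance metric

def select_target(sector: list[Equipment]) -> Equipment:
    """Select the equipment contributing most to congestion (highest utilization)."""
    return max(sector, key=lambda e: e.utilization)

def adjust_throughput(target: Equipment, step: float = 0.1) -> float:
    """Reduce the target's throughput by a fraction derived from its own metrics."""
    adjustment = target.throughput_mbps * step * target.utilization
    target.throughput_mbps -= adjustment
    return adjustment

sector = [
    Equipment("ue-101", 120.0, 0.95),
    Equipment("ue-102", 40.0, 0.75),
    Equipment("ue-103", 80.0, 0.85),
]
sector_congestion = sum(e.utilization for e in sector) / len(sector)
if sector_congestion > CONGESTION_THRESHOLD:
    target = select_target(sector)
    delta = adjust_throughput(target)
    print(f"reduced {target.equipment_id} by {delta:.1f} Mbps")
else:
    print("sector below congestion threshold; no adjustment")
```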

    Creating and Using Cell Clusters
    Invention Application

    Publication Number: US20210314789A1

    Publication Date: 2021-10-07

    Application Number: US17106335

    Filing Date: 2020-11-30

    Abstract: Concepts and technologies are disclosed for creating and using cell clusters. Cellular network data associated with a cellular network can be obtained. The cellular network data can include configuration data associated with a cell of the cellular network and a performance indicator associated with the cellular network. A number of cell clusters to be generated can be determined and the cell clusters can be generated. The cell clusters can include a cell cluster that can represent multiple cells including the cell. A model that represents the cell cluster can be trained. An input cluster that represents multiple inputs can be generated. The inputs can be associated with the multiple cells and the input cluster can include a value. The value can be provided as input to the model to obtain a predicted output associated with the cell cluster.
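    The abstract describes grouping cells into a chosen number of clusters from configuration data and performance indicators, training a model per cluster, and feeding a cluster-level input value to that model for a prediction. A minimal sketch follows, assuming k-means clustering and a linear regression model; the feature names, cluster count, and KPI are illustrative assumptions.

```python
# Illustrative only: cluster cells on configuration/KPI features, then train one
# model per cluster and query it with a cluster-level input value.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical per-cell features: antenna tilt (deg), bandwidth (MHz), avg PRB load (%).
cells = np.column_stack([
    rng.uniform(0, 10, 300),
    rng.choice([10, 20, 40], 300),
    rng.uniform(10, 90, 300),
])

n_clusters = 4  # the "number of cell clusters to be generated"
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=1).fit(cells)

# Train a simple model per cluster: predict a KPI (throughput) from PRB load.
models = {}
for k in range(n_clusters):
    members = cells[kmeans.labels_ == k]
    load = members[:, 2:3]
    throughput = 100 - 0.8 * load[:, 0] + rng.normal(0, 3, len(members))  # synthetic KPI
    models[k] = LinearRegression().fit(load, throughput)

# An input value for a cluster (here, a representative PRB load) yields a predicted output.
cluster_id = 0
input_value = np.array([[55.0]])
print("predicted throughput for cluster", cluster_id, ":",
      models[cluster_id].predict(input_value)[0])
```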

    METHOD AND APPARATUS FOR OBTAINING RECORDED MEDIA CONTENT

    Publication Number: US20200351556A1

    Publication Date: 2020-11-05

    Application Number: US16934662

    Filing Date: 2020-07-21

    Abstract: Aspects of the subject disclosure may include, for example, a method in which an end user device sends a first request to a digital video recorder (DVR) to receive a media content item, and receives a metadata file including authentication information and a network address of a media content server where the media content item is located. The end user device sends a second request to the media content server to receive the media content item; the second request is sent to the network address and includes the authentication information and an identification of the requested media content item. The end user device receives the media content item from the media content server, responsive to the authentication information and the identification of the media content item sent in the second request. Other embodiments are disclosed.
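    The abstract describes a two-request flow: ask the DVR for an item, receive a metadata file carrying authentication information and the media server's address, then request the item from that server. A minimal sketch of that flow follows, assuming the `requests` library; the endpoint paths and metadata field names are hypothetical, not taken from the patent.

```python
# Illustrative only: the two-request flow from the abstract with hypothetical
# endpoint paths and field names; no real DVR or media server is assumed.
import requests

def fetch_recorded_content(dvr_url: str, content_id: str) -> bytes:
    # First request: ask the DVR for the content item; the DVR answers with a
    # metadata file carrying authentication info and the media server's address.
    meta = requests.get(f"{dvr_url}/recordings/{content_id}/metadata", timeout=5).json()
    auth_token = meta["auth_token"]          # hypothetical field names
    server_address = meta["server_address"]

    # Second request: sent to the media server's network address, carrying the
    # authentication information and the identification of the content item.
    resp = requests.get(
        f"{server_address}/media/{content_id}",
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content  # the media content item itself

if __name__ == "__main__":
    data = fetch_recorded_content("http://dvr.example.local", "episode-42")
    print(f"received {len(data)} bytes")
```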

    FLOW MANAGEMENT AND FLOW MODELING IN NETWORK CLOUDS

    Publication Number: US20200249978A1

    Publication Date: 2020-08-06

    Application Number: US16851231

    Filing Date: 2020-04-17

    Abstract: Assignment of network addresses and estimations of flow sizes associated with network nodes can be enhanced. Assignment management component (AMC) partitions a set of network addresses into subsets of network addresses associated with respective classes. For respective virtual machines (VMs), an estimator component estimates a flow size associated with a VM based on parameters associated with the VM. AMC classifies VMs based on threshold flow-size values and respective estimated flow sizes of VMs, and assigns VMs to respective sub-groups of VMs associated with respective subsets of network addresses based on respective classifications of VMs. AMC assigns an available network address of a subset of network addresses associated with a class to a VM of a sub-group associated with that class. Estimated flow sizes and performance metrics also are utilized to make determinations regarding VM placement, traffic management, load balancing, resource allocation, and orchestration in cloud networks.
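    The abstract describes partitioning an address pool into per-class subsets, estimating each VM's flow size from its parameters, classifying the VM against thresholds, and assigning it an address from the matching subset. A minimal sketch of that pipeline follows; the estimator, thresholds, and subnet layout are illustrative assumptions, not the claimed design.

```python
# Illustrative only: partition an address pool into per-class subsets, estimate a
# VM's flow size from simple parameters, classify it against thresholds, and hand
# it the next available address from the matching subset.
import ipaddress

# Partition a /24 into subsets associated with "small", "medium" and "large" classes.
pool = list(ipaddress.ip_network("10.0.0.0/24").hosts())
subsets = {
    "small": iter(pool[:100]),
    "medium": iter(pool[100:180]),
    "large": iter(pool[180:]),
}

def estimate_flow_size(vm: dict) -> float:
    """Toy estimator: flow size grows with vCPUs and expected connections (Mbps)."""
    return vm["vcpus"] * 50.0 + vm["expected_connections"] * 2.0

def classify(flow_size: float) -> str:
    if flow_size < 200:
        return "small"
    if flow_size < 600:
        return "medium"
    return "large"

def assign_address(vm: dict) -> ipaddress.IPv4Address:
    cls = classify(estimate_flow_size(vm))
    return next(subsets[cls])  # next available address in that class's subset

vms = [
    {"name": "vm-a", "vcpus": 2, "expected_connections": 20},
    {"name": "vm-b", "vcpus": 8, "expected_connections": 150},
]
for vm in vms:
    print(vm["name"], "->", assign_address(vm))
```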

    FLOW MANAGEMENT AND FLOW MODELING IN NETWORK CLOUDS

    Publication Number: US20190171474A1

    Publication Date: 2019-06-06

    Application Number: US15829806

    Filing Date: 2017-12-01

    Abstract: Assignment of network addresses and estimations of flow sizes associated with network nodes can be enhanced. Assignment management component (AMC) partitions a set of network addresses into subsets of network addresses associated with respective classes. For respective virtual machines (VMs), an estimator component estimates a flow size associated with a VM based on parameters associated with the VM. AMC classifies VMs based on threshold flow-size values and respective estimated flow sizes of VMs, and assigns VMs to respective sub-groups of VMs associated with respective subsets of network addresses based on respective classifications of VMs. AMC assigns an available network address of a subset of network addresses associated with a class to a VM of a sub-group associated with that class. Estimated flow sizes and performance metrics also are utilized to make determinations regarding VM placement, traffic management, load balancing, resource allocation, and orchestration in cloud networks.
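    This earlier filing shares its abstract with the application above, so rather than repeat the address-assignment sketch, here is a complementary sketch for the abstract's last sentence: using estimated flow sizes for VM placement and load balancing. The host capacities and the greedy heuristic are assumptions, not the patented method.

```python
# Illustrative only: use estimated per-VM flow sizes to place VMs on hosts so
# that no host's aggregate flow exceeds its link capacity.
def place_vms(vm_flows: dict[str, float], host_capacity_mbps: dict[str, float]) -> dict[str, str]:
    """Greedy placement: largest flows first, onto the host with the most headroom."""
    remaining = dict(host_capacity_mbps)
    placement = {}
    for vm, flow in sorted(vm_flows.items(), key=lambda kv: kv[1], reverse=True):
        host = max(remaining, key=remaining.get)
        if remaining[host] < flow:
            raise RuntimeError(f"no host can absorb {vm} ({flow} Mbps)")
        remaining[host] -= flow
        placement[vm] = host
    return placement

vm_flows = {"vm-a": 140.0, "vm-b": 700.0, "vm-c": 320.0}  # e.g. from the estimator above
hosts = {"host-1": 1000.0, "host-2": 600.0}
print(place_vms(vm_flows, hosts))
```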

    Storing data at edges or cloud storage with high security

    Publication Number: US12204675B2

    Publication Date: 2025-01-21

    Application Number: US17498945

    Filing Date: 2021-10-12

    Abstract: The disclosed technology is directed towards partitioning data and distributing the data to different storage locations, which facilitates better data security. For example, a large database of source data can be partitioned into a small enabler partition and one or more large partitions, in which a full set of the partitions is needed to reconstruct the source data to its original state. The large partition can be maintained at an edge computing facility to reduce latency, or at a cloud computing facility to reduce storage expenses, with the smaller enabler partition only accessed when needed to reconstruct the data. A database is partitioned into a group of partitions, and the group of partitions is distributed to separate storage facilities. The separate storage and computing facilities/nodes are accessed to obtain datasets of the group of partitions, and merged to reconstruct the source data.
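    The key property in the abstract is that a small enabler partition and a large partition must both be present to reconstruct the source data. One plausible way to get that property, shown purely as an illustrative assumption (the patent does not specify this scheme), is to keep a short random key as the enabler and the keystream-XORed data as the large partition.

```python
# Illustrative only: a small random "enabler" partition plus a keystream-XORed
# large partition; merging the full set of partitions recovers the source data.
# The keystream construction (SHA-256 in counter mode) is a stand-in, not the
# patented scheme.
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def partition(source: bytes) -> tuple[bytes, bytes]:
    """Split source data into a small enabler partition and a large partition."""
    enabler = os.urandom(32)  # small partition, only accessed when reconstructing
    large = bytes(a ^ b for a, b in zip(source, keystream(enabler, len(source))))
    return enabler, large     # stored at separate facilities

def reconstruct(enabler: bytes, large: bytes) -> bytes:
    """Merging the full set of partitions recovers the original source data."""
    return bytes(a ^ b for a, b in zip(large, keystream(enabler, len(large))))

source = b"customer records ..." * 100
enabler, large = partition(source)  # e.g. enabler kept centrally, large at the edge/cloud
assert reconstruct(enabler, large) == source
print(f"enabler: {len(enabler)} bytes, large partition: {len(large)} bytes")
```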
