EDGE COMPUTING IN 4G & 5G CORES BASED ON APPLICATION USAGE

    Publication No.: US20230300913A1

    Publication Date: 2023-09-21

    Application No.: US17696480

    Filing Date: 2022-03-16

    IPC Classes: H04W76/12 H04L67/5681

    CPC Classes: H04W76/12 H04L67/2847

    Abstract: Systems and methods are provided herein to perform edge computing in 4G and 5G cores based on application usage. Networks identify highly requested content at an access point and may use network data analytics functions to determine which content is popular in the area the access point serves. The highly requested content may be applications, webpages, or similar content. Once the highly requested content has been identified, the network tunnels to at least one application server to download it. The content may then be stored in a dynamically caching server near the access point. The network data analytics operations that identify highly requested content may run periodically, and content that no longer meets a predetermined threshold may be flushed from the dynamically caching server.
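The periodic analytics-and-flush cycle described in the abstract can be sketched in Python. Everything here (the `EdgeCache` class, `evaluate_period`, the threshold semantics) is an illustrative assumption, not structure taken from the patent itself:

```python
from collections import Counter

class EdgeCache:
    """Sketch of a dynamically caching server near an access point."""

    def __init__(self, threshold):
        self.threshold = threshold   # minimum requests per period to stay cached
        self.requests = Counter()    # per-period request counts (the "analytics")
        self.store = {}              # cached content keyed by content identifier

    def record_request(self, content_id):
        self.requests[content_id] += 1

    def fetch_from_origin(self, content_id):
        # Stand-in for tunneling to the application server hosting the content.
        return f"payload-of-{content_id}"

    def evaluate_period(self):
        """Periodic pass: cache popular content, flush content below threshold."""
        for content_id, count in self.requests.items():
            if count >= self.threshold and content_id not in self.store:
                self.store[content_id] = self.fetch_from_origin(content_id)
        for content_id in list(self.store):
            if self.requests[content_id] < self.threshold:
                del self.store[content_id]   # no longer popular: flush it
        self.requests.clear()                # start the next observation period
```

Calling `evaluate_period` on a timer would approximate the "periodic basis" the abstract describes; content that received too few requests in the last period is evicted.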

    Stream-based data deduplication with peer node prediction

    Publication No.: US20190173942A1

    Publication Date: 2019-06-06

    Application No.: US16267549

    Filing Date: 2019-02-05

    IPC Classes: H04L29/08 H04L29/06

    Abstract: Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that the libraries maintained at a particular pair of sending and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers.
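The backfill-on-miss idea can be sketched as a pair of encode/decode functions. The chunking, fingerprinting, and `fetch_from_cdn` callback below are assumptions for illustration; the patent does not specify these interfaces:

```python
import hashlib

def chunk_key(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def encode(stream: list, sender_dict: dict) -> list:
    """Sender side: replace previously seen chunks with their fingerprints."""
    out = []
    for chunk in stream:
        key = chunk_key(chunk)
        if key in sender_dict:
            out.append(("ref", key))      # chunk assumed known to the receiver
        else:
            sender_dict[key] = chunk
            out.append(("raw", chunk))
    return out

def decode(encoded: list, receiver_dict: dict, fetch_from_cdn) -> list:
    """Receiver side: resolve refs locally, backfilling misses on demand."""
    out = []
    for kind, value in encoded:
        if kind == "raw":
            receiver_dict[chunk_key(value)] = value
            out.append(value)
        else:
            if value not in receiver_dict:               # dictionaries out of sync
                receiver_dict[value] = fetch_from_cdn(value)  # backfill on the fly
            out.append(receiver_dict[value])
    return out
```

Note that the decoder never fails on an unknown fingerprint: it falls back to an ordinary CDN-style fetch, which is the property the abstract emphasizes.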

    CACHING SYSTEM
    Type: Invention Application; Status: Pending (Published)

    Publication No.: US20190045024A1

    Publication Date: 2019-02-07

    Application No.: US16154862

    Filing Date: 2018-10-09

    Applicant: Google LLC

    IPC Classes: H04L29/08 G06Q30/02 H04L29/06

    Abstract: This document describes a content caching system for pre-loading digital components. The system includes a communication interface configured to communicate with a remote device over a wireless network, a local content cache, and an evaluation system comprising one or more processors. The one or more processors perform operations including: pre-loading a digital component for rendering in a browser at a time subsequent to the time of the pre-loading; registering a scheme of a network reference for the cached digital component, the scheme comprising a specified portion of the network reference; retrieving, from the local content cache, the pre-loaded digital component associated with the digital component tag comprising the network reference; and rendering, from the local content cache, the pre-loaded digital component in a graphical user interface rather than requesting the digital component from the remote device.
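A minimal sketch of the cache-then-render flow might look like the following; the `PreloadCache` class, the scheme-prefix matching, and the `fetch_remote` fallback are all assumed details, not the patented implementation:

```python
class PreloadCache:
    """Local content cache that serves pre-loaded components by registered scheme."""

    def __init__(self):
        self.cache = {}        # network reference -> pre-loaded component
        self.schemes = set()   # registered scheme prefixes, e.g. "cached-ad://"

    def register_scheme(self, scheme):
        self.schemes.add(scheme)

    def preload(self, reference, component):
        self.cache[reference] = component

    def render(self, reference, fetch_remote):
        # Serve from the local cache when the reference uses a registered scheme
        # and the component was pre-loaded; otherwise fall back to the network.
        if any(reference.startswith(s) for s in self.schemes) and reference in self.cache:
            return self.cache[reference]
        return fetch_remote(reference)
```

The point of the scheme check is that only references explicitly registered for caching bypass the remote device; everything else takes the ordinary network path.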

    DATA PACKET TRANSMISSION METHOD, NETWORK SIDE DEVICE, AND USER EQUIPMENT

    Publication No.: US20180359183A1

    Publication Date: 2018-12-13

    Application No.: US16106325

    Filing Date: 2018-08-21

    IPC Classes: H04L12/747 H04L12/805

    Abstract: Embodiments of the present disclosure disclose a data packet transmission method, including: receiving, by a network device, a first request message sent by user equipment, where the first request message is used to request the network device to allocate data cache space; caching, by the network device in the data cache space, at least a part of a data packet sent by a server device to the user equipment; receiving, by the network device, a second request message sent by the user equipment, where the second request message is used to request the network device to send the cached data packet; and sending, by the network device, a part or all of the cached data packet to the user equipment. A data packet can therefore be cached on the network side, resolving the problem of untimely data packet transmission.
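The two-message exchange (allocate, then deliver) can be sketched as follows; the class and method names, and the `count` parameter for partial delivery, are illustrative assumptions:

```python
class NetworkDevice:
    """Sketch of a network-side packet cache driven by two UE request messages."""

    def __init__(self):
        self.cache = {}   # UE identifier -> list of cached packet fragments

    def handle_allocation_request(self, ue_id):
        # First request message: allocate data cache space for this UE.
        self.cache[ue_id] = []

    def cache_packet(self, ue_id, packet_part):
        # Cache at least part of a packet the server sent toward the UE.
        if ue_id in self.cache:
            self.cache[ue_id].append(packet_part)

    def handle_send_request(self, ue_id, count=None):
        # Second request message: deliver part (count given) or all of the cache.
        parts = self.cache.get(ue_id, [])
        if count is None:
            count = len(parts)
        sent, self.cache[ue_id] = parts[:count], parts[count:]
        return sent
```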

    METHOD AND SYSTEM FOR PREDICTIVE LOADING OF SOFTWARE RESOURCES

    Publication No.: US20180309848A1

    Publication Date: 2018-10-25

    Application No.: US16024000

    Filing Date: 2018-06-29

    IPC Classes: H04L29/08 H04L12/24

    Abstract: A method for predictive loading of software resources in a web application includes predicting a future state of the web application, determining the software resources required by the first predicted future state, and loading those resources. Predicting the future state further includes predicting a first set of possible future states based on a current state, run-time application context, and either use case data or historical application usage data; determining an associated probability for each possible future state in the first set; and identifying, from the first set, a first predicted future state with the highest associated probability.
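The predict-then-load steps can be sketched with a toy transition model. The history format, the frequency-based probability estimate, and the `resources_for` lookup are assumptions made for the sake of a runnable example:

```python
def predict_states(current_state, context, usage_history):
    """Score possible next states from historical usage (hypothetical model)."""
    transitions = usage_history.get((current_state, context), {})
    total = sum(transitions.values()) or 1
    return {state: count / total for state, count in transitions.items()}

def preload_resources(current_state, context, usage_history, resources_for):
    """Pick the highest-probability future state and return its resources."""
    candidates = predict_states(current_state, context, usage_history)
    if not candidates:
        return []                                     # nothing to predict from
    predicted = max(candidates, key=candidates.get)   # highest associated probability
    return resources_for[predicted]                   # resources to load now
```

In a real application the probability model would presumably weigh run-time context more richly; the sketch only shows the shape of the three steps the abstract lists.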

    Efficient content delivery over wireless networks using guaranteed prefetching at selected times-of-day

    Publication No.: US20180219965A1

    Publication Date: 2018-08-02

    Application No.: US15936453

    Filing Date: 2018-03-27

    IPC Classes: H04L29/08 H04W4/18

    Abstract: A method for content delivery includes selecting one or more time intervals. During each selected time interval, given content is prefetched from a content source to a communication terminal using a guaranteed prefetching mode: the given content is tracked continuously on the content source so that changes are detected as they occur, and the communication terminal is kept continuously synchronized with the content source with respect to the given content throughout the interval, notwithstanding those changes, by continuously prefetching at least part of the given content to the terminal. Outside of the selected time intervals, the given content is prefetched using a best-effort prefetching mode, in which the given content is tracked less frequently than in the guaranteed prefetching mode.
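The mode selection by time of day can be sketched as below; the window representation and the specific polling intervals are illustrative assumptions, not values from the patent:

```python
from datetime import time

def prefetch_mode(now, guaranteed_windows):
    """Pick the prefetching mode for the current time of day.

    guaranteed_windows is a list of (start, end) datetime.time pairs, assumed
    not to wrap past midnight for simplicity.
    """
    for start, end in guaranteed_windows:
        if start <= now <= end:
            return "guaranteed"    # track continuously, stay synchronized
    return "best-effort"           # track the same content, but less frequently

def poll_interval_seconds(mode):
    # Illustrative numbers: continuous tracking vs. infrequent tracking.
    return 5 if mode == "guaranteed" else 600
```

A prefetch loop would call `prefetch_mode` before each tracking pass and sleep for `poll_interval_seconds(mode)`, tightening its synchronization only inside the selected windows.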