-
Publication Number: US20230300913A1
Publication Date: 2023-09-21
Application Number: US17696480
Application Date: 2022-03-16
IPC Classification: H04W76/12, H04L67/5681
CPC Classification: H04W76/12, H04L67/2847
Abstract: Systems and methods are provided herein to perform edge computing in 4G and 5G cores based on application usage. Networks identify highly requested content at an access point and may use network data analytics functions to determine which content is popular in the area served by the access point. The highly requested content may be applications, webpages, or similar content. Once the highly requested content has been identified, the network tunnels to at least one application server to download it. The content may then be stored in a dynamic caching server near the access point. The network data analytics operations that identify highly requested content may be performed periodically, and content that no longer meets a predetermined threshold may be flushed from the dynamic caching server.
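The count-then-flush cycle this abstract describes can be sketched in Python. The class, the single popularity threshold, and the per-window counter reset are illustrative assumptions, not details from the patent:

```python
from collections import Counter

class EdgeCache:
    """Toy model of a dynamic caching server near an access point."""

    def __init__(self, popularity_threshold):
        self.threshold = popularity_threshold
        self.request_counts = Counter()  # content_id -> requests this window
        self.cache = {}                  # content_id -> cached payload

    def record_request(self, content_id):
        self.request_counts[content_id] += 1

    def analytics_pass(self, fetch_fn):
        """Periodic analytics pass: cache popular content, flush the rest."""
        for content_id, count in self.request_counts.items():
            if count >= self.threshold and content_id not in self.cache:
                # "Tunnel" to the application server to download the content.
                self.cache[content_id] = fetch_fn(content_id)
        for content_id in list(self.cache):
            if self.request_counts[content_id] < self.threshold:
                del self.cache[content_id]  # no longer popular: flush
        self.request_counts.clear()         # start a fresh measurement window
```

Content that was popular in one window but unrequested in the next drops below the threshold and is flushed on the following pass.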
-
Publication Number: US20190191355A1
Publication Date: 2019-06-20
Application Number: US15846235
Application Date: 2017-12-19
CPC Classification: H04W40/20, H04L67/06, H04L67/18, H04L67/2847, H04W4/029, H04W4/40, H04W4/70, H04W40/026
Abstract: In one embodiment, a system includes: a download server instantiated on a computing device, and a multiplicity of wireless access points (APs), where the download server is operative to: receive a download request from a mobile device, determine a current location for the mobile device, predict a route for the mobile device based at least on the current location, allocate at least one target AP along the route from among the multiplicity of wireless APs, and in response to the download request, forward at least one download file to the at least one target AP, where the at least one target AP is operative to: receive the at least one download file, identify the mobile device, and download at least part of the download file to the mobile device in an mmWave transmission.
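The route-prediction and AP-allocation step can be sketched as follows. The straight-line route model, the `reach` radius, and all names are illustrative assumptions; the patent does not specify a prediction algorithm:

```python
def allocate_target_aps(current, heading, aps, reach=2.0, steps=5):
    """Pick APs within `reach` of a route predicted from the device's
    current location and heading (a simplistic straight-line model).

    aps: dict mapping AP id -> (x, y) position.
    Returns the ids of candidate target APs to pre-load files onto.
    """
    # Predicted future positions along the straight-line route.
    route = [(current[0] + heading[0] * t, current[1] + heading[1] * t)
             for t in range(1, steps + 1)]
    targets = []
    for ap_id, (ax, ay) in aps.items():
        # An AP qualifies if any predicted position is within reach of it.
        if any((ax - x) ** 2 + (ay - y) ** 2 <= reach ** 2 for x, y in route):
            targets.append(ap_id)
    return targets
```

The download server would then forward the requested file to each returned AP ahead of the device's arrival.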
-
Publication Number: US20190191003A1
Publication Date: 2019-06-20
Application Number: US16282744
Application Date: 2019-02-22
CPC Classification: H04L67/2842, G06F16/00, H04L67/2847
Abstract: Systems and methods for delivering fractions of content to user devices before the content is selected or requested (e.g., a pre-delivery of content) are described. In some embodiments, the systems and methods receive an indication that content is available for pre-delivery from a content server to a user device over a network, determine a fraction (e.g., size) of the content available for pre-delivery that satisfies one or more predicted content playback conditions, and cause the determined fraction of the content available for pre-delivery to be delivered to the user device.
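One way to size the pre-delivered fraction against a predicted playback condition is a simple stall-free model: pre-deliver just enough that the remainder can download at the predicted bandwidth while the content plays. This model and the function name are assumptions for illustration, not the patent's method:

```python
def predelivery_fraction(bitrate_bps, predicted_bandwidth_bps):
    """Fraction of the content to pre-deliver so that, at the predicted
    bandwidth, playback never stalls once it starts.

    The remaining (1 - f) of the content must download within the full
    playback time, i.e. (1 - f) * bitrate <= bandwidth, so
    f >= 1 - bandwidth / bitrate.
    """
    if predicted_bandwidth_bps >= bitrate_bps:
        return 0.0  # the network keeps up: nothing needs pre-delivery
    return 1.0 - predicted_bandwidth_bps / bitrate_bps
```

For a 4 Mbps stream over a predicted 2 Mbps link, half the content would be pre-delivered before selection.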
-
Publication Number: US20190173942A1
Publication Date: 2019-06-06
Application Number: US16267549
Application Date: 2019-02-05
CPC Classification: H04L67/108, H04L67/2847, H04L67/42
Abstract: Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring "paired" endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out of sync, the relevant sections are re-synchronized on demand. The approach does not require that the libraries maintained at a particular pair of sending and receiving peers are the same. Rather, the technique enables a peer, in effect, to "backfill" its dictionary on the fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers.
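The on-demand backfill idea can be sketched as a fingerprint-referenced stream whose decoder fetches any chunk it is missing instead of requiring a synchronized dictionary. The function names and the SHA-256 fingerprinting choice are illustrative assumptions:

```python
import hashlib

def encode(data_chunks, sender_library):
    """Sender side: record each chunk in the sender's library and emit
    a stream of fingerprint references instead of raw bytes."""
    stream = []
    for chunk in data_chunks:
        h = hashlib.sha256(chunk).hexdigest()
        sender_library[h] = chunk
        stream.append(("ref", h))
    return stream

def decode(stream, receiver_library, fetch_fn):
    """Receiver side: resolve references against the local library and
    backfill missing chunks on demand via a CDN-style fetch."""
    parts = []
    for _kind, h in stream:
        if h not in receiver_library:
            receiver_library[h] = fetch_fn(h)  # out-of-sync: fetch and backfill
        parts.append(receiver_library[h])
    return b"".join(parts)
```

After the first decode, the receiver's library holds the fetched chunks, so later streams referencing the same fingerprints resolve locally with no fetches.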
-
Publication Number: US20190149586A1
Publication Date: 2019-05-16
Application Number: US16246413
Application Date: 2019-01-11
IPC Classification: H04L29/06, G06F16/957, H04L29/08, H04N21/258, G06F15/167, G06Q10/00, H04N21/61, H04N21/25, H04N21/237, H04N21/222, H04N21/218, G06F16/958
CPC Classification: H04L65/4084, G06F15/167, G06F16/9574, G06F16/972, G06Q10/00, H04L67/18, H04L67/2847, H04L67/2852, H04L67/306, H04N21/2181, H04N21/2223, H04N21/237, H04N21/252, H04N21/25841, H04N21/25891, H04N21/6125
Abstract: One embodiment of the present invention sets forth a method for updating content stored in a cache residing at an internet service provider (ISP) location. The method includes receiving popularity data associated with a first plurality of content assets, where the popularity data indicate the popularity of each content asset in the first plurality of content assets across a user base that spans multiple geographic regions; generating a manifest that includes a second plurality of content assets based on the popularity data and a geographic location associated with the cache, where each content asset included in the manifest is determined to be popular among users proximate to the geographic location or users with preferences similar to users proximate to the geographic location; and transmitting the manifest to the cache, where the cache is configured to update one or more content assets stored in the cache based on the manifest.
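The manifest-generation and cache-update steps can be sketched as follows. The per-region request-count structure and the single popularity threshold are illustrative assumptions; the patent's popularity model may be richer (e.g., preference similarity):

```python
def generate_manifest(popularity, region, threshold):
    """Build a manifest of assets popular near `region`.

    popularity: dict mapping asset -> {region: request_count}.
    Returns a sorted list of asset names to transmit to the cache.
    """
    return sorted(asset for asset, by_region in popularity.items()
                  if by_region.get(region, 0) >= threshold)

def update_cache(cache, manifest, fetch_fn):
    """Cache side: evict assets absent from the manifest, fill missing ones."""
    listed = set(manifest)
    for asset in list(cache):
        if asset not in listed:
            del cache[asset]           # no longer popular in this region
    for asset in manifest:
        if asset not in cache:
            cache[asset] = fetch_fn(asset)  # pull the newly popular asset
    return cache
```

A central service would run `generate_manifest` per ISP cache location and push the result; each cache then reconciles its contents against the manifest.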
-
Publication Number: US20190045024A1
Publication Date: 2019-02-07
Application Number: US16154862
Application Date: 2018-10-09
Applicant: Google LLC
Inventors: Tuna Toksoz, Thomas Graham Price, Anurag Agrawal
CPC Classification: H04L67/2847, G06Q30/0277, H04L67/02, H04L67/025, H04L67/1097, H04L67/2819, H04L67/2857, H04L67/42
Abstract: This document describes a content caching system for pre-loading digital components. The system includes a communication interface configured to communicate with a remote device over a wireless network, a local content cache, and an evaluation system comprising one or more processors. The operations performed by the one or more processors include: pre-loading a digital component for rendering in a browser at a time subsequent to the time of the pre-loading; registering a scheme of a network reference for the cached digital component, the scheme comprising a specified portion of that network reference; retrieving, from the local content cache, the pre-loaded digital component associated with the digital component tag comprising the network reference; and rendering, from the local content cache, the pre-loaded digital component in a graphical user interface rather than requesting the digital component from the remote device.
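The preload-register-render flow can be sketched as a cache that only intercepts references whose scheme has been registered. The class shape and the scheme-matching rule are illustrative assumptions:

```python
class ContentCache:
    """Local cache that serves pre-loaded digital components whose network
    reference matches a registered scheme, instead of hitting the network."""

    def __init__(self):
        self.schemes = set()  # registered reference schemes
        self.store = {}       # network reference -> pre-loaded component

    def preload(self, scheme, url, component):
        self.schemes.add(scheme)   # register the scheme of the reference
        self.store[url] = component

    def render(self, url, fetch_fn):
        scheme = url.split("://", 1)[0]
        if scheme in self.schemes and url in self.store:
            return self.store[url]  # serve from the local content cache
        return fetch_fn(url)        # fall back to the remote device
```

At render time the browser asks the cache first; only references the cache cannot satisfy trigger a network request.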
-
Publication Number: US20180359183A1
Publication Date: 2018-12-13
Application Number: US16106325
Application Date: 2018-08-21
Inventors: Lingli Pang, Xiaoxiao Zheng, Min Huang
IPC Classification: H04L12/747, H04L12/805
CPC Classification: H04L45/742, H04L29/06, H04L47/36, H04L67/02, H04L67/2842, H04L67/2847
Abstract: Embodiments of the present disclosure disclose a data packet transmission method, including: receiving, by a network device, a first request message sent by user equipment, where the first request message is used to request the network device to allocate data cache space; caching, by the network device in the data cache space, at least a part of a data packet sent by a server device to the user equipment; receiving, by the network device, a second request message sent by the user equipment, where the second request message is used to request the network device to send the cached data packet; and sending, by the network device, a part or all of the cached data packet to the user equipment. A data packet can therefore be cached on the network side, which resolves the problem of untimely data packet transmission.
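The two-message flow in this abstract can be sketched as a small state machine on the network device. The class and method names are illustrative assumptions:

```python
class NetworkDevice:
    """Models the flow: allocate cache space on the first request,
    buffer server traffic, deliver it on the second request."""

    def __init__(self):
        self.cache = {}  # ue_id -> list of cached packets

    def handle_first_request(self, ue_id):
        self.cache[ue_id] = []  # allocate data cache space for this UE

    def receive_from_server(self, ue_id, packet):
        if ue_id in self.cache:
            self.cache[ue_id].append(packet)  # cache at least part of the data

    def handle_second_request(self, ue_id):
        # Send all cached packets to the UE and release the cache space.
        return self.cache.pop(ue_id, [])
```

Packets arriving for a UE that never sent the first request are simply not cached; the second request drains whatever was buffered.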
-
Publication Number: US20180336053A1
Publication Date: 2018-11-22
Application Number: US15600248
Application Date: 2017-05-19
CPC Classification: G06F9/45558, G06F8/63, G06F8/65, G06F2009/45562, H04L67/2847
Abstract: Embodiments include systems and computer program products to perform an operation for managing different virtual machine images as a single virtual machine image. The operation generally includes generating a representation of a virtual machine (VM) image, and generating a first VM instance from the VM image. The representation of the VM image includes a set of artifacts associated with the VM image. The operation also includes receiving an indication of an available software update. Upon determining that the software update is applicable to the representation of the VM image, the operation further includes applying the software update to the first VM instance.
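The applicability check against the image's artifact set can be sketched minimally. The artifact-name matching rule and the data shapes are illustrative assumptions; the patent's representation is likely richer:

```python
def update_applies(image_artifacts, software_update):
    """An update is applicable if it targets an artifact present in the
    VM image's representation (its set of artifacts)."""
    return software_update["target"] in image_artifacts

def apply_update(instance, image_artifacts, software_update):
    """Apply the update to a running VM instance only when it is
    applicable to the image the instance was generated from."""
    if update_applies(image_artifacts, software_update):
        instance["versions"][software_update["target"]] = software_update["version"]
    return instance
```

Updates targeting software absent from the image's artifact set are silently skipped rather than applied to the instance.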
-
Publication Number: US20180309848A1
Publication Date: 2018-10-25
Application Number: US16024000
Application Date: 2018-06-29
CPC Classification: H04L67/34, H04L67/22, H04L67/2847
Abstract: A method for predictive loading of software resources in a web application includes predicting a future state of the web application, determining the software resources required by the first predicted future state, and loading the software resources required by the first predicted future state. Determining the first predicted future state further includes predicting a first set of possible future states based on a current state, run-time application context, and either use case data or historical application usage data; determining an associated probability for each possible future state in the first set of possible future states; and identifying, from the first set of possible future states, the first predicted future state with the highest associated probability.
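The predict-then-load step can be sketched as a maximum-probability lookup over a transition table. The table-driven probability model and all names are illustrative assumptions, not the patent's method:

```python
def predict_and_load(current_state, transitions, resources, loader):
    """Pick the most probable next state and load its required resources.

    transitions: dict state -> {next_state: probability}, standing in for
        the run-time context / historical-usage model.
    resources: dict state -> list of resource names needed by that state.
    loader: callable that fetches one resource.
    """
    candidates = transitions.get(current_state, {})
    if not candidates:
        return None, []  # nothing to predict from this state
    predicted = max(candidates, key=candidates.get)  # highest probability
    loaded = [loader(r) for r in resources.get(predicted, [])]
    return predicted, loaded
```

When the user actually navigates to the predicted state, its resources are already resident; a wrong prediction costs only the speculative load.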
-
Publication Number: US20180219965A1
Publication Date: 2018-08-02
Application Number: US15936453
Application Date: 2018-03-27
Inventors: Daniel Yellin, Ofir Shalvi, David Ben Eli, Eilon Regev, Shimon Moshavi
CPC Classification: H04L67/2847, H04L67/22, H04W4/18
Abstract: A method for content delivery includes selecting one or more time intervals. During each time interval among the selected time intervals, given content is prefetched from a content source to a communication terminal using a guaranteed prefetching mode, by continuously tracking the given content on the content source, so as to detect changes to the given content as they occur during the selected time interval, and maintaining the communication terminal continuously synchronized with the content source with respect to the given content, throughout the selected time interval, notwithstanding the changes that occur during the selected time interval, by continuously prefetching at least part of the given content from the content source to the communication terminal. Outside of the one or more selected time intervals, the given content is prefetched using a best-effort prefetching mode, by tracking the given content less frequently than in the guaranteed prefetching mode.
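The mode selection in this abstract reduces to an interval check, with each mode implying a different tracking frequency. The interval representation and poll periods below are illustrative assumptions:

```python
def prefetch_mode(now, guaranteed_intervals):
    """Return the prefetching mode for time `now`, given a list of
    (start, end) guaranteed time intervals; best-effort applies elsewhere."""
    for start, end in guaranteed_intervals:
        if start <= now <= end:
            return "guaranteed"  # track continuously, stay synchronized
    return "best-effort"         # track the content less frequently

def poll_period_s(mode):
    """Illustrative tracking frequencies: continuous (sub-second) polling
    in guaranteed mode, sparse polling in best-effort mode."""
    return 0.5 if mode == "guaranteed" else 300.0
```

A scheduler would call `prefetch_mode` at each tick and sleep for `poll_period_s` between checks of the content source.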
-