Abstract:
Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out of sync, the relevant sections are re-synchronized on demand. The approach does not require that the libraries maintained at a particular pair of sending and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on the fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers.
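To make the on-demand backfill concrete, the following Python sketch uses invented names (fingerprint-keyed chunks, an in-memory dict standing in for the CDN parent-cache tier, and a simple token stream); it illustrates the idea under those assumptions and is not the actual implementation.

    import hashlib

    # Hypothetical stand-in for a parent cache / origin tier that can serve any
    # chunk by its fingerprint through an ordinary content-delivery request.
    CDN_CHUNK_STORE = {}

    def fingerprint(chunk: bytes) -> str:
        return hashlib.sha256(chunk).hexdigest()

    def encode(data: bytes, sender_dict: dict, chunk_size: int = 64) -> list:
        """Sender: replace chunks already in its dictionary with short references."""
        tokens = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fp = fingerprint(chunk)
            CDN_CHUNK_STORE[fp] = chunk            # chunk stays fetchable on demand
            if fp in sender_dict:
                tokens.append(("ref", fp))         # send a reference, not the bytes
            else:
                sender_dict[fp] = chunk
                tokens.append(("raw", chunk))
        return tokens

    def decode(tokens: list, receiver_dict: dict) -> bytes:
        """Receiver: no symmetric dictionary required; backfill on a cache miss."""
        out = []
        for kind, value in tokens:
            if kind == "raw":
                receiver_dict[fingerprint(value)] = value
                out.append(value)
            else:
                chunk = receiver_dict.get(value)
                if chunk is None:                   # dictionaries out of sync
                    chunk = CDN_CHUNK_STORE[value]  # fetch it like any CDN object
                    receiver_dict[value] = chunk    # backfill the local dictionary
                out.append(chunk)
        return b"".join(out)

    sender_dict, cold_receiver = {}, {}
    payload = b"example payload block " * 16
    encode(payload, sender_dict)                # warms the sender's dictionary
    refs_only = encode(payload, sender_dict)    # second pass sends only references
    assert decode(refs_only, cold_receiver) == payload  # cold peer backfills and decodes

The key point the sketch shows is that the receiver never has to be pre-synchronized: a missed reference is simply another cacheable object to fetch.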
Abstract:
An Internet infrastructure delivery platform (e.g., operated by a service provider) provides an overlay network (a server infrastructure) that is used to facilitate “second screen” end user media experiences. In this approach, first media content, which is typically either live or on-demand, is being rendered on a first content device (e.g., a television, Blu-ray disc or another source). That first media content may be delivered by servers in the overlay network. One or more end user second content devices are then adapted to be associated with the first content source, preferably via the overlay network, to facilitate second screen end user experiences (on the second content devices).
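As a rough illustration of how a second-screen device might be associated with a first-screen playback session through the overlay, this Python sketch uses invented names (a pairing code, an AssociationService, callback-based notifications); the platform's actual association mechanism is not specified here.

    from dataclasses import dataclass, field

    # Hypothetical association service running on overlay servers: it binds
    # second-screen devices to a first-screen playback session and fans out
    # position-synchronized events.
    @dataclass
    class PlaybackSession:
        content_id: str
        position: float = 0.0
        second_screens: list = field(default_factory=list)

    class AssociationService:
        def __init__(self):
            self.sessions = {}

        def register_first_screen(self, pairing_code: str, content_id: str):
            # The first content device (or the overlay server delivering its
            # stream) creates a session keyed by a short pairing code.
            self.sessions[pairing_code] = PlaybackSession(content_id)

        def attach_second_screen(self, pairing_code: str, callback):
            # The second-screen app presents the code shown on the first screen.
            self.sessions[pairing_code].second_screens.append(callback)

        def report_position(self, pairing_code: str, position: float):
            # Playback progress reports keep associated devices synchronized
            # with what is currently rendered on the first screen.
            session = self.sessions[pairing_code]
            session.position = position
            for notify in session.second_screens:
                notify(session.content_id, position)

    svc = AssociationService()
    svc.register_first_screen("ABC123", "movie-42")
    svc.attach_second_screen("ABC123", lambda cid, pos: print(f"{cid} @ {pos:.1f}s"))
    svc.report_position("ABC123", 917.5)   # second screen reacts to the current scene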
Abstract:
A high-performance distributed ledger and transaction computing network fabric over which large numbers of transactions (involving the transformation, conversion or transfer of information or value) are processed concurrently in a scalable, reliable, secure and efficient manner. In one embodiment, the computing network fabric or “core” is configured to support a distributed blockchain network that organizes data in a manner that allows communication, processing and storage of blocks of the chain to be performed concurrently, with little synchronization, at very high performance and low latency, even when the transactions themselves originate from distant sources. This data organization relies on segmenting a transaction space within autonomous but cooperating computing nodes that are configured as a processing mesh. According to an aspect of this disclosure, the content delivery network (CDN) edge network is then used to deliver receipts associated with transactions that are processed into the blockchain.
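The following Python sketch illustrates one way edge-delivered receipts could look; the URL, field names and digest scheme are invented for illustration, not taken from the disclosure. The point is that a receipt published at a deterministic, cache-friendly path can be served by edge servers rather than by the core nodes.

    import hashlib, json

    def make_receipt(tx_id: str, block_id: str, segment: int) -> dict:
        # Compact receipt produced once the transaction is processed into a block.
        return {
            "tx": tx_id,
            "block": block_id,
            "segment": segment,   # which part of the transaction space handled it
            "digest": hashlib.sha256(f"{tx_id}:{block_id}".encode()).hexdigest(),
        }

    def receipt_url(tx_id: str) -> str:
        # Deterministic path: after the first fetch, edge servers can cache the
        # receipt and answer later lookups without touching the core.
        return f"https://receipts.example.net/v1/tx/{tx_id}"

    receipt = make_receipt("0xabc123", "block-9001", segment=7)
    print(receipt_url("0xabc123"))
    print(json.dumps(receipt, indent=2))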
Abstract:
A high-performance distributed ledger and transaction computing network fabric over which large numbers of transactions (involving the transformation, conversion or transfer of information or value) are processed concurrently in a scalable, reliable, secure and efficient manner. In one embodiment, the computing network fabric or “core” is configured to support a distributed blockchain network that organizes data in a manner that allows communication, processing and storage of blocks of the chain to be performed concurrently, with little synchronization, at very high performance and low latency, even when the transactions themselves originate from distant sources. This data organization relies on segmenting a transaction space within autonomous but cooperating computing nodes that are configured as a processing mesh. Each computing node is typically functionally equivalent to all other nodes in the core. The nodes operate on blocks independently from one another while still maintaining a consistent and logically complete view of the blockchain as a whole.
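A minimal Python sketch of segmenting a transaction space across functionally equivalent nodes follows; the segment count, hash choice and static segment-to-node assignment are assumptions made for illustration only.

    import hashlib
    from collections import defaultdict

    NUM_SEGMENTS = 16
    NODES = ["node-a", "node-b", "node-c", "node-d"]

    def segment_of(tx_id: str) -> int:
        # A deterministic hash of the transaction id picks its segment.
        return hashlib.sha256(tx_id.encode()).digest()[0] % NUM_SEGMENTS

    def owner_of(segment: int) -> str:
        # Static segment-to-node assignment, purely for illustration.
        return NODES[segment % len(NODES)]

    work = defaultdict(list)
    for tx in ("tx-1001", "tx-1002", "tx-1003", "tx-1004", "tx-1005"):
        seg = segment_of(tx)
        work[owner_of(seg)].append((seg, tx))

    for node, assigned in sorted(work.items()):
        print(node, assigned)   # each node processes its own segments independently

Because ownership of a transaction is decided by a deterministic function of the transaction itself, nodes can work on their segments concurrently with little cross-node synchronization.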
Abstract:
A traffic on-boarding method is operative at an acceleration server of an overlay network. It begins at the acceleration server when that server receives an assertion generated by an identity provider (IdP), the IdP having generated the assertion upon receiving an authentication request from a service provider (SP), the SP having generated the authentication request upon receiving from a client a request for a protected resource. The acceleration server receives the assertion and forwards it to the SP, which verifies the assertion and returns to the acceleration server a token, together with the protected resource. The acceleration server then returns a response to the requesting client that includes a version of the protected resource that points back to the acceleration server and not the SP. When the acceleration server then receives an additional request from the client, the acceleration server interacts with the service provider using an overlay network optimization.
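The Python sketch below walks through that flow with every party simulated as a plain function; the assertion format, token format and host names are all invented stand-ins, not the actual protocol messages.

    # Minimal sketch of the on-boarding flow described above.
    SP_TOKENS = set()

    def service_provider_verify(assertion: str):
        # The SP verifies the IdP-generated assertion, then hands back a token
        # together with the protected resource.
        if not assertion.startswith("signed:"):
            raise PermissionError("invalid assertion")
        token = "token-" + assertion.split(":", 1)[1]
        SP_TOKENS.add(token)
        return token, "<html>protected resource with links to sp.example.com</html>"

    def acceleration_server_onboard(assertion: str):
        # The acceleration server forwards the assertion to the SP and receives
        # the token plus the protected resource in return.
        token, resource = service_provider_verify(assertion)
        # Rewrite the resource so subsequent client requests point back to the
        # acceleration server, which then reaches the SP over the optimized overlay.
        rewritten = resource.replace("sp.example.com", "accel.overlay.example.net")
        return token, rewritten

    token, page = acceleration_server_onboard("signed:alice-session")
    print(token)
    print(page)   # links now reference the acceleration server, not the SP directly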
Abstract:
The present invention leverages an existing content delivery network infrastructure to provide a system that enhances performance for any application that uses the Internet Protocol (IP) as its underlying transport mechanism. An overlay network comprises a set of edge nodes, intermediate nodes, and gateway nodes, and provides optimized routing of IP packets. Internet application users can use the overlay to obtain improved performance during normal network conditions, to obtain or maintain good performance where normal default BGP routing would otherwise force traffic over congested or poorly performing paths, or to maintain communications with a target server application even during network outages.
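A toy Python sketch of edge-to-gateway path selection follows; the topology, node names and latency figures are made up, and the real system's measurement and routing logic is not described here. It only shows the basic decision: take the overlay path when its measured latency beats the default direct route.

    import itertools

    DIRECT_MS = 180.0                                    # default (BGP-routed) path latency
    EDGE_TO_INTERMEDIATE = {"mid-1": 20.0, "mid-2": 25.0}
    INTERMEDIATE_TO_GATEWAY = {("mid-1", "gw-1"): 60.0, ("mid-2", "gw-1"): 45.0}
    GATEWAY_TO_TARGET = {"gw-1": 30.0}

    def best_overlay_path():
        # Enumerate edge -> intermediate -> gateway -> target paths and pick the fastest.
        candidates = []
        for mid, gw in itertools.product(EDGE_TO_INTERMEDIATE, GATEWAY_TO_TARGET):
            hop = INTERMEDIATE_TO_GATEWAY.get((mid, gw))
            if hop is not None:
                total = EDGE_TO_INTERMEDIATE[mid] + hop + GATEWAY_TO_TARGET[gw]
                candidates.append((total, ["edge", mid, gw, "target"]))
        return min(candidates, key=lambda c: c[0])

    latency, path = best_overlay_path()
    chosen = path if latency < DIRECT_MS else ["edge", "target"]
    print(latency, chosen)   # the overlay path is used only when it outperforms the default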
Abstract:
An Internet infrastructure delivery platform (e.g., operated by a service provider) provides an overlay network (a server infrastructure) that is used to facilitate “second screen” end user media experiences. In this approach, first media content, which is typically either live or on-demand, is being rendered on a first content device (e.g., a television, Blu-ray disc or other source). That first media content may be delivered by servers in the overlay network. One or more end user second content devices are then adapted to be associated with the first content source, preferably via the overlay network, to facilitate second screen end user experiences (on the second content devices).
Abstract:
This patent document describes technology for providing real-time messaging and entity update services in a distributed proxy server network, such as a content delivery network (CDN). Uses include distributing real-time notifications about updates to data stored in and delivered by the network, with both high efficiency and locality of latency. The technology can be integrated into conventional caching proxy servers providing HTTP services, thereby leveraging their existing footprint in the Internet, their existing overlay network topologies and architectures, and their integration with existing traffic management components.
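As a loose illustration of a caching proxy that also pushes entity updates, the Python sketch below combines a simple object cache with local subscriptions; the class, URLs and callback interface are invented, and this is a sketch of the general idea rather than the described system.

    from collections import defaultdict

    class ProxyNode:
        def __init__(self, name: str):
            self.name = name
            self.cache = {}                        # ordinary HTTP object cache
            self.subscribers = defaultdict(list)   # url -> list of notify callbacks

        def get(self, url: str, origin_fetch):
            # Serve from cache, fetching from the origin on a miss.
            if url not in self.cache:
                self.cache[url] = origin_fetch(url)
            return self.cache[url]

        def subscribe(self, url: str, callback):
            # Long-lived subscription held at the nearby proxy, keeping latency local.
            self.subscribers[url].append(callback)

        def publish_update(self, url: str, new_body: str):
            # An entity update both refreshes the cached copy and notifies subscribers.
            self.cache[url] = new_body
            for notify in self.subscribers[url]:
                notify(url, new_body)

    edge = ProxyNode("edge-1")
    edge.get("/scores/game7", lambda u: "0-0")
    edge.subscribe("/scores/game7", lambda u, body: print(f"update {u}: {body}"))
    edge.publish_update("/scores/game7", "3-2")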