Abstract:
A system for dynamically selecting from among a plurality of acceleration techniques implemented in a Content Delivery Network (CDN) using attributes associated with content requests may include a network interface that receives a content request from a client system for content, where the request is associated with one or more attributes. The system may also include an intermediate server that accelerates access to the content stored in the CDN edge servers. The intermediate server may include a processor configured to access the one or more attributes associated with the content request, select one or more acceleration techniques from the plurality of acceleration techniques where the one or more acceleration techniques are selected based on the one or more attributes, and use the one or more acceleration techniques to provide the content to the client system.
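The selection step above can be sketched as a simple mapping from request attributes to techniques. This is a minimal illustration, not the patented implementation; the attribute keys and technique names are assumptions chosen for the example.

```python
# Hypothetical sketch: choose acceleration techniques from request attributes.
# Attribute keys ("content_type", "connection") and technique names are
# illustrative assumptions, not taken from the patent.

def select_acceleration_techniques(attributes):
    """Return a list of acceleration techniques for a content request."""
    techniques = []
    if attributes.get("content_type") == "text/html":
        techniques.append("prefetching")      # fetch embedded resources early
    if attributes.get("connection") == "mobile":
        techniques.append("compression")      # shrink payloads on slow links
    if attributes.get("content_type", "").startswith("video/"):
        techniques.append("tcp_tuning")       # tune transport for streaming
    if not techniques:
        techniques.append("caching")          # default: serve from edge cache
    return techniques
```

The intermediate server would apply each returned technique when serving the request; a rule table like this keeps the policy inspectable and easy to extend.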
Abstract:
A method of generating a quantitative assessment of a connection between a content distributor and a user may include accessing a plurality of social networks on which the content distributor maintains an account, and receiving an input indicating the user. The method may also include passing an indication of the user to the social networks, and receiving data descriptive of connections and interactions of the user's account in the plurality of social networks. The method may additionally include calculating a relationship value score based on the data descriptive of the connections and interactions of the user's account in the plurality of social networks, where the relationship value score indicates a potential for generating new sales for the content distributor through the user. The method may further include adjusting one or more policies that control how content is distributed from the content distributor to the user.
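One plausible form of the scoring and policy-adjustment steps is a weighted sum over per-network connection and interaction counts, with a threshold deciding the distribution policy. The weights, threshold, and policy names below are assumptions for illustration only.

```python
# Hypothetical sketch: per-network connection/interaction counts are combined
# into a relationship value score, which then selects a distribution policy.
# Weights, threshold, and policy names are illustrative assumptions.

def relationship_value_score(connections, interactions, weights=(1.0, 2.0)):
    """Weighted sum over parallel lists of per-network connection counts
    and interaction counts (interactions weighted more heavily here)."""
    w_conn, w_inter = weights
    return sum(w_conn * c + w_inter * i for c, i in zip(connections, interactions))

def adjust_policy(score, threshold=100.0):
    """Map the score to a content-distribution policy (hypothetical names)."""
    return "priority_distribution" if score >= threshold else "standard_distribution"
```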
Abstract:
A point of presence includes edge servers having IP addresses and storing content, a switch/router that receives and forwards requests for content to the edge servers, a health monitoring server that gathers health information from the edge servers, and a dynamic request rerouting (DRR) server. The DRR server is connected with each of the edge servers. If the DRR server obtains a determination that one of the edge servers is down, it advertises a route including the IP address of the down edge server to the switch/router. The switch/router forwards a request for content, originally addressed to the down server, to the DRR server. The DRR server forwards the request to a working server that stores the content. The working server sends the content to the DRR server, the DRR server forwards the content back to the switch/router, and the switch/router responds to the original request with the content.
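The rerouting behavior above can be modeled in a few lines: the DRR server claims the down server's IP route, and requests addressed to that IP are proxied to a healthy server holding the content. This is a minimal sketch of the control logic, assuming an in-memory server table; the real system operates at the routing layer.

```python
# Minimal sketch of dynamic request rerouting (DRR) failover logic.
# The in-memory server table and return tuples are modeling assumptions.

class DRRServer:
    def __init__(self, servers):
        self.servers = servers        # ip -> {"up": bool, "content": set}
        self.claimed = set()          # IPs whose routes the DRR server advertises

    def on_health_report(self, ip, is_up):
        self.servers[ip]["up"] = is_up
        if not is_up:
            self.claimed.add(ip)      # advertise route for the down server's IP
        else:
            self.claimed.discard(ip)  # withdraw the route once it recovers

    def handle(self, dest_ip, obj):
        if dest_ip not in self.claimed:
            return ("direct", dest_ip)          # switch/router reaches edge directly
        for ip, s in self.servers.items():
            if s["up"] and obj in s["content"]:
                return ("rerouted", ip)         # proxy via a working replica
        return ("miss", None)                   # no healthy replica holds the object
```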
Abstract:
A system and method for accelerating web page delivery is disclosed in one embodiment. Web content requests are made to an edge server of a first point of presence (POP) of a content delivery network (CDN). The web content has embedded resource links. The first POP can rewrite the embedded resource links to route requests for the embedded resource links to any POP in the CDN or even the origin server. In some embodiments, the first POP can decide if the first POP and/or another POP referenced in a rewritten embedded resource link should cache and/or accelerate the resource referenced in the embedded resource link.
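The link-rewriting step can be illustrated with a small HTML transform: resource URLs matching a configured prefix are rewritten to point at a chosen POP host, and everything else is left to resolve normally (e.g., to the origin). The prefix-to-host map and hostnames are assumptions for the example.

```python
# Hypothetical sketch of embedded-resource-link rewriting at the first POP.
# The prefix->host routing map and CDN hostnames are illustrative assumptions.
import re

def rewrite_links(html, pop_map):
    """Rewrite src="..." resource URLs so each matching resource is
    requested from a designated POP; unmatched links are left untouched."""
    def repl(m):
        url = m.group(1)
        for prefix, host in pop_map.items():
            if url.startswith(prefix):
                return 'src="https://%s%s"' % (host, url)
        return m.group(0)
    return re.sub(r'src="(/[^"]*)"', repl, html)
```

A production rewriter would use a real HTML parser rather than a regex, and could also consult per-resource policy (cache vs. accelerate) when choosing the target host.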
Abstract:
A method for generating and delivering highlight versions of content for special case delivery through a Content Delivery Network (CDN) may include storing and distributing content in response to user requests using a plurality of edge servers, the plurality of edge servers being organized into a plurality of geographically distributed Points of Presence (POPs) in the CDN comprising a first POP that stores first content. The method may also include receiving a request for the first content and directing the request for the first content to the first POP. The method may additionally include automatically determining that the request for the first content originated from a mobile device and, in response, providing a limited portion of the first content to the mobile device, wherein the limited portion of the first content comprises less than all of the first content.
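The mobile-detection and highlight-serving steps can be sketched as follows. The user-agent token check and the "leading fraction of the bytes" highlight are simplifying assumptions; a real system would use a device database and editorially chosen highlight segments.

```python
# Hypothetical sketch: detect mobile requests and serve a reduced
# "highlight" portion. Token list and truncation rule are assumptions.

def is_mobile(user_agent):
    """Crude User-Agent sniff; real deployments consult device databases."""
    return any(tok in user_agent for tok in ("Mobile", "Android", "iPhone"))

def serve(content, user_agent, highlight_ratio=0.1):
    """Return a leading fraction of the content to mobile devices,
    the full content otherwise (truncation stands in for a real highlight)."""
    if is_mobile(user_agent):
        n = max(1, int(len(content) * highlight_ratio))
        return content[:n]
    return content
```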
Abstract:
A system and a method for accelerating delivery of a webpage by using a preloader file during the delay in fetching the webpage are disclosed. When an end user makes a request through a client computer for a webpage, a Content Delivery Network (CDN) server sends the client a preloader file. The preloader file contains requests for resources that are likely to be part of the webpage. The client downloads the resources, and the resources are saved in a browser cache. The preloader file also directs the client to request the webpage again. While the client is downloading the resources, the CDN server requests the webpage from an origin server. The origin server composes the webpage and delivers it to the CDN server. When the client makes a second request, the CDN server delivers the webpage to the client. When the client renders the webpage, it can retrieve the resources from the browser cache.
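The request/preloader/second-request exchange can be simulated with plain dictionaries standing in for the CDN cache, origin, and browser cache. This is a sketch of the control flow only; real delivery is asynchronous and HTTP-based, and all names below are assumptions.

```python
# Hypothetical simulation of the preloader exchange on a CDN cache miss:
# 1. CDN answers the first request with a preloader listing likely resources.
# 2. Client fetches those resources into its browser cache while the CDN
#    fetches the page from the origin (modeled here as sequential steps).
# 3. The client's second request is answered from the now-warm CDN cache.

def preloader_flow(cdn_cache, origin, browser_cache, likely_resources, url):
    if url not in cdn_cache:
        for res in likely_resources:
            browser_cache[res] = "cached"   # client downloads during the delay
        cdn_cache[url] = origin[url]        # CDN fetches the page meanwhile
        return "preloader"
    return cdn_cache[url]
```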
Abstract:
A content delivery network (CDN) that delivers content objects for others is disclosed. End user computers are directed to an edge server for delivery of a requested content object identified by a uniform resource identifier (URI). When the edge server does not have a copy of the content object identified by the URI, the request is successively passed to ancestor servers within a hierarchy until the content object is found. Different hierarchies can be designated for different URIs or for different times at which requests are received. Once the content object is located in the hierarchical chain, it is passed back down the chain to the edge server for delivery.
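The lookup-and-fill walk up the ancestor chain can be sketched directly. The per-server dictionary caches are a modeling assumption; the key point is that the object is cached at each server on the way back down.

```python
# Minimal sketch of hierarchical cache lookup with cache fill on the
# way back down the chain. Server names and dict caches are assumptions.

def locate(uri, hierarchy, caches):
    """Walk the chain (edge first, then ancestors) until a server holds
    the object, then fill every cache traversed before returning it."""
    path = []
    for server in hierarchy:
        path.append(server)
        if uri in caches[server]:
            obj = caches[server][uri]
            for s in path:                  # pass the object back down
                caches[s][uri] = obj
            return obj
    return None                             # exhausted: fall through to origin
```

Because different hierarchies can apply to different URIs or request times, a caller would first select `hierarchy` from the URI before invoking the walk.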
Abstract:
According to the invention, a content download system for downloading a content file and additional content using a window is disclosed. The content download system includes a content site, a content provider and a third party. The content site presents the window that displays the download progress of the content file, presents the additional content while the content file is downloaded, and allows selection of the additional content. The content provider serves as an origin server for the content file. The third party receives information on the content file and provides the additional content.
Abstract:
A machine-implementable method for managing cloud-based transcoding resources available to a content delivery network includes maintaining, using a server of the content delivery network, a queue of video transcoding jobs that may be executed by internal transcoders of the content delivery network or by external transcoders of a plurality of cloud-based resources. The method further includes determining, at first and second times, corresponding first and second transcoder supply deviations and a trend indicator, and determining whether to activate, deactivate, or make no change to the number of transcoders currently activated for transcoding, based at least on the second transcoder supply deviation and the trend indicator.
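One plausible reading of the decision step: the deviation measures active transcoders minus transcoders the queue needs, the trend is the change in deviation between the two sample times, and a dead band avoids thrashing. The band width and decision rules below are assumptions for illustration.

```python
# Hypothetical sketch of the activate/deactivate/no-change decision.
# deviation = active transcoders - transcoders needed by the queue;
# the +/- 2.0 dead band and the trend rules are illustrative assumptions.

def scaling_decision(deviation_t1, deviation_t2, band=2.0):
    trend = deviation_t2 - deviation_t1     # trend indicator between samples
    if deviation_t2 < -band and trend <= 0:
        return "activate"       # undersupplied and not improving on its own
    if deviation_t2 > band and trend >= 0:
        return "deactivate"     # oversupplied and not shrinking on its own
    return "no_change"          # within band, or the trend is self-correcting
```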
Abstract:
According to the invention, a delivery network for assisting delivery of content objects over an Internet is disclosed. The delivery network includes a network outlet, an interface and a routing function. The network outlet is coupled to a plurality of full-route networks, where each of the plurality of full-route networks is capable of delivering content objects to a plurality of terminal networks. The plurality of terminal networks include a terminal network, where the plurality of terminal networks are coupled to a plurality of end user computers. The interface receives content objects for delivery to the plurality of end user computers. The routing function routes content objects in at least two modes, where a first mode routes content objects based upon a first route path from the network outlet to the terminal network, and a second mode routes at least some content objects using a second route path from the network outlet to the terminal network. The first route path is chosen based upon delivery efficiency. Switching from the first mode to the second mode is triggered when at least a portion of the first route path reaches a predetermined level of use. The first and second route paths are different, and the second route path is less efficient than the first route path.
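The mode switch above amounts to overflow routing: use the efficient path until any link on it hits a utilization threshold, then shift traffic to the less efficient alternate. The link naming, utilization map, and 0.9 threshold are assumptions for this sketch.

```python
# Hypothetical sketch of the two-mode routing function: stay on the
# efficient primary path until any of its links reaches the predetermined
# utilization level, then overflow onto the secondary path.
# Link names and the 0.9 threshold are illustrative assumptions.

def choose_route(primary, secondary, link_utilization, threshold=0.9):
    overloaded = any(link_utilization.get(link, 0.0) >= threshold
                     for link in primary)
    return secondary if overloaded else primary
```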