Abstract:
A cache server receives content and an instruction indicating an event associated with the content that causes a processor to invoke a call out to an adjunct device. The instruction further indicates an operation that the adjunct device is to perform. The cache server detects the event associated with the content, halts a flow of the content in response to detecting the event, passes the content via the call out to the adjunct device to perform the operation, receives from the adjunct device a response and resulting data from the operation, and performs an additional operation on the resulting data based on the response from the adjunct device.
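A minimal sketch of the call-out flow described above, assuming a single event/operation instruction. All class and method names (CacheServer, AdjunctClient, Instruction, etc.) are hypothetical; the abstract does not specify an API.

```python
from dataclasses import dataclass


@dataclass
class Instruction:
    event: str       # event that triggers the call out, e.g. "content_received"
    operation: str   # operation the adjunct device is to perform


class AdjunctClient:
    """Stand-in for the adjunct device reached via the call out."""

    def perform(self, operation: str, content: bytes) -> tuple[str, bytes]:
        # Pretend the adjunct transforms the content and returns a response
        # code plus the resulting data.
        return "ok", content.upper()


class CacheServer:
    def __init__(self, adjunct: AdjunctClient, instruction: Instruction):
        self.adjunct = adjunct
        self.instruction = instruction

    def handle_content(self, event: str, content: bytes) -> bytes:
        if event != self.instruction.event:
            return content                       # no matching event: pass through
        # Event detected: halt the flow and call out to the adjunct device.
        response, result = self.adjunct.perform(self.instruction.operation, content)
        # Perform an additional operation on the resulting data based on the response.
        if response == "ok":
            return self._cache_and_forward(result)
        return content                           # fall back to the original content

    def _cache_and_forward(self, data: bytes) -> bytes:
        # Placeholder for caching / delivering the processed data downstream.
        return data


# Example: an instruction that calls out when content is received.
server = CacheServer(AdjunctClient(), Instruction("content_received", "transform"))
print(server.handle_content("content_received", b"hello"))   # b'HELLO'
```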
Abstract:
A network is configured to utilize available bandwidth to conduct bulk data transfers without substantially affecting the successful transmission of time-sensitive traffic in the network. To avoid interfering with that traffic, the packets carrying data for bulk data transfers are associated with a low priority class such that the routers of the network will preferentially drop these packets over packets associated with the normal traffic of the network. As such, when the normal traffic peaks or there are link failures or equipment failures, the normal traffic is preferentially transmitted over the bulk-transfer traffic, and thus the bulk-transfer traffic dynamically adapts to changes in the available bandwidth of the network. Further, to reduce the impact of dropped packets for the bulk-transfer traffic, the packets of the bulk-transfer traffic are encoded at or near the source component using a loss-resistant transport protocol so that the dropped packets can be reproduced at a downstream link.
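A minimal sketch of the two mechanisms described above: tagging bulk-transfer packets with a low-priority class and adding redundancy so a dropped packet can be rebuilt downstream. The specific priority value (DSCP CS1, commonly used for scavenger-class traffic) and the single-parity XOR code are illustrative assumptions; the abstract names neither.

```python
import functools
import socket

LOW_PRIORITY_TOS = 0x20  # DSCP CS1 shifted into the IPv4 TOS byte


def make_bulk_socket() -> socket.socket:
    """UDP socket whose packets routers may preferentially drop."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LOW_PRIORITY_TOS)
    return s


def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def encode_block(packets: list[bytes]) -> list[bytes]:
    """Append one XOR parity packet so any single loss in the block is recoverable."""
    size = max(len(p) for p in packets)
    padded = [p.ljust(size, b"\x00") for p in packets]
    parity = functools.reduce(_xor, padded)
    return padded + [parity]


def recover_missing(block: list) -> list:
    """Rebuild the single missing packet in a block from the surviving ones."""
    present = [p for p in block if p is not None]
    rebuilt = functools.reduce(_xor, present)
    return [p if p is not None else rebuilt for p in block]


# Example: one packet of the encoded block is dropped and then reconstructed.
block = encode_block([b"chunk-1", b"chunk-2", b"chunk-3"])
block[1] = None                       # simulate a drop at a congested router
print(recover_missing(block)[1])      # b'chunk-2'
```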
Abstract:
A system for providing a multi-delivery-method policy-controlled client proxy is disclosed. The system may receive a request for a network service from a client. Based on the request for the network service, the system may detect the presence of a client proxy associated with the client. If a client proxy is detected, the system may provide a data object that includes information indicating that the client proxy is a primary source for content that may be requested by the client. The system may redirect, based on the data object, a request for the content received from the client to the client proxy. The system may then obtain, via the client proxy, the content by utilizing a delivery method that is selected based on a policy. Finally, the system may provide, via the client proxy, the content to the client.
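A minimal sketch of the redirect-and-policy-selection flow described above. The policy fields, the available delivery methods, and the class names are assumptions for illustration; the abstract does not define them.

```python
from dataclasses import dataclass


@dataclass
class Policy:
    prefer_peer_to_peer: bool = True


class ClientProxy:
    """Local proxy that the data object names as the client's primary content source."""

    def __init__(self, policy: Policy):
        self.policy = policy
        self.cache: dict[str, bytes] = {}

    def select_delivery_method(self, url: str) -> str:
        # Policy-driven choice among the available delivery methods.
        if url in self.cache:
            return "local-cache"
        if self.policy.prefer_peer_to_peer:
            return "peer-to-peer"
        return "cdn"

    def fetch(self, url: str) -> bytes:
        method = self.select_delivery_method(url)
        content = self._retrieve(url, method)
        self.cache[url] = content
        return content

    def _retrieve(self, url: str, method: str) -> bytes:
        # Placeholder for the actual transfer over the chosen delivery method.
        return f"<{url} via {method}>".encode()


def handle_content_request(proxy_detected: bool, proxy: ClientProxy, url: str) -> bytes:
    # When a client proxy is detected, the content request is redirected to it;
    # otherwise the network serves the content directly.
    if proxy_detected:
        return proxy.fetch(url)
    return b"<served directly by the network>"


proxy = ClientProxy(Policy(prefer_peer_to_peer=True))
print(handle_content_request(True, proxy, "http://example.com/video.mp4"))
```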
Abstract:
Aspects of the subject disclosure may include, for example, identifying a threshold value of a streaming-media key performance indicator (KPI) based on a predetermined target portion of end-user devices that experience acceptable performance. Performance records are obtained for a content delivery network (CDN) adapted to cache and serve media content requested by the end-user devices. Predicted values of the streaming-media KPI are generated according to the performance records of the CDN and compared to the threshold value of the streaming-media KPI to obtain a comparison. An anomaly is detected according to the comparison, to indicate that a predetermined number of the predicted values of the streaming-media KPI fail to satisfy the threshold value. Other embodiments are disclosed.
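A minimal sketch of the threshold-and-count anomaly check described above, assuming higher KPI values are better (e.g., a video start success rate). The percentile-based threshold and the moving-average "prediction" are illustrative assumptions; the abstract does not specify how either value is computed.

```python
def kpi_threshold(device_kpis: list[float], target_fraction: float = 0.9) -> float:
    """Threshold such that roughly the target fraction of devices meets or exceeds it."""
    ranked = sorted(device_kpis)
    index = int(round((1.0 - target_fraction) * (len(ranked) - 1)))
    return ranked[index]


def predict_kpi(performance_records: list[float], window: int = 3) -> list[float]:
    """Toy predictor: trailing moving average over the CDN performance records."""
    predictions = []
    for i in range(len(performance_records)):
        recent = performance_records[max(0, i - window + 1): i + 1]
        predictions.append(sum(recent) / len(recent))
    return predictions


def detect_anomaly(predicted: list[float], threshold: float, min_failures: int = 3) -> bool:
    """Anomaly when a predetermined number of predicted values fail the threshold."""
    failures = sum(1 for value in predicted if value < threshold)
    return failures >= min_failures


# Example: threshold set so ~80% of devices meet it, then checked against predictions.
threshold = kpi_threshold([0.99, 0.97, 0.95, 0.90, 0.85], target_fraction=0.8)
predicted = predict_kpi([0.96, 0.95, 0.80, 0.78, 0.79, 0.81])
print(detect_anomaly(predicted, threshold))   # True: at least 3 predictions fall below
```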