Abstract:
A technique enables resources to be shared among data flows that may have different senders (sources) and/or different receivers (destinations) in a data network. Identifiers are associated with data flows and used to indicate whether resources may be shared between data flows. The identifiers are carried in signaling messages used to reserve resources for data flows. An existing data flow that is associated with an identifier that matches an identifier associated with a new data flow is allowed to share its resources with the new data flow.
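The identifier-matching admission step described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class name, the dictionary layout, and the single-link capacity model are all assumptions made for the example.

```python
class ReservationTable:
    """Tracks reserved flows on one link; a new flow whose identifier
    matches an existing flow's identifier shares that flow's resources
    instead of triggering a fresh allocation (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.flows = {}  # flow_id -> (share_identifier, bandwidth)

    def reserve(self, flow_id, share_id, bandwidth):
        # Sharing case: an existing flow carries a matching identifier,
        # so the new flow rides on the already-reserved resources.
        for existing_id, (sid, bw) in self.flows.items():
            if sid is not None and sid == share_id:
                self.flows[flow_id] = (share_id, bw)
                return True
        # Otherwise, ordinary admission control for a fresh allocation.
        if self.used + bandwidth > self.capacity:
            return False
        self.used += bandwidth
        self.flows[flow_id] = (share_id, bandwidth)
        return True
```

With this model, two flows signaled with the same identifier consume the link's bandwidth only once, which is the point of carrying the identifier in the reservation signaling.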
Abstract:
A technique is provided for one or more network nodes to deterministically select data flows to preempt. In particular, each node employs a set of predefined rules which instructs the node as to which existing data flow should be preempted in order to admit a new high-priority data flow. The rules are precisely defined and are common to all nodes configured in accordance with the present invention. Illustratively, a network node not only selects a data flow to preempt, but additionally may identify other “fate sharing” data flows that may be preempted. As used herein, a group of data flows has a fate-sharing relationship if the application instance(s) containing the data flows functions adequately only when all the fate-shared flows are operational. In a first illustrative embodiment, after a data flow in a fate-sharing group is preempted, network nodes may safely tear down the group's remaining data flows. In a second illustrative embodiment, when a data flow is preempted, all its fate-shared data flows are marked as being “at risk.” Because the at-risk flows are not immediately torn down, it is less likely that resources allocated for the at-risk flows will be freed and then used to establish relatively lower-priority data flows instead of relatively higher-priority data flows.
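A sketch of deterministic victim selection with fate-sharing might look like the following. The specific tie-breaking rules (lowest priority, then smallest bandwidth, then lexicographic flow id) are assumptions chosen for the example; the patent only requires that the rules be precisely defined and common to all nodes. The sketch follows the second embodiment, returning the fate-shared flows as "at risk" rather than tearing them down.

```python
def preempt_for(new_priority, needed_bw, flows):
    """Deterministically pick one existing flow to preempt so a new
    high-priority flow can be admitted, and report its fate-shared
    group mates as at-risk. `flows` is a list of dicts with keys
    id, priority, bw, group (illustrative schema)."""
    # A fixed sort order makes every node pick the same victim.
    order = sorted(flows, key=lambda f: (f["priority"], f["bw"], f["id"]))
    for f in order:
        if f["priority"] < new_priority and f["bw"] >= needed_bw:
            # Mark, but do not tear down, the victim's fate-shared flows.
            at_risk = [g["id"] for g in flows
                       if g["group"] == f["group"] and g["id"] != f["id"]]
            return f["id"], at_risk
    return None, []  # nothing preemptable; the new flow is refused
```

Because the sort key is total and identical at every node, all nodes configured with these rules converge on the same victim without coordination.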
Abstract:
In one embodiment, an intermediate network device includes a communication facility configured to receive a reservation request message that includes a flow spec object. The flow spec object specifies one or more flow parameters that describe a given traffic flow that desires to pass through the intermediate network device. A flow analyzer is configured to compare the one or more flow parameters specified in the flow spec object to one or more constants stored in a memory, to determine a type of traffic of the given traffic flow. The flow analyzer determines the type of traffic independent of any differentiated services codepoint (DSCP) values in packets of the given traffic flow. A traffic scheduler is configured to assign the given traffic flow to a particular per hop behavior (PHB) based on the determined type of traffic for the given traffic flow.
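The comparison of flow-spec parameters against stored constants can be illustrated as below. The signature values, the 10% tolerance, and the PHB mapping are all assumptions for the sketch; the key point is that classification uses only the flow spec, never the DSCP bits in the packets themselves.

```python
# Stored constants describing known traffic types (values are illustrative).
SIGNATURES = {
    "voice": {"rate": 24_000, "packet_size": 120},
    "video": {"rate": 2_000_000, "packet_size": 1400},
}

# Per-hop behaviors assigned by the traffic scheduler per traffic type.
PHB_MAP = {"voice": "EF", "video": "AF41", "best-effort": "BE"}

def classify(flow_spec):
    """Determine traffic type by comparing flow-spec parameters to the
    stored constants, within a tolerance; DSCP is never consulted."""
    for traffic_type, sig in SIGNATURES.items():
        if all(abs(flow_spec.get(k, 0) - v) <= v * 0.1 for k, v in sig.items()):
            return traffic_type
    return "best-effort"
```

Ignoring DSCP matters because endpoints can mark packets arbitrarily; the flow spec in the reservation request is a more trustworthy basis for choosing a PHB.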
Abstract:
A two-phase reservation mechanism for use with computer networks carrying voice or other time- or bandwidth-sensitive traffic. During the first or “resource allocation” phase, network resources sufficient to support the anticipated voice traffic are set aside within the computer network along the route between the sourcing entity and receiving entity. Although the network resources have been set aside, they are specifically not made available to the voice traffic until the second phase of the reservation mechanism, called the “resource available” phase. During the resource available phase, the network resources that were previously set aside are made available to the voice traffic.
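The two phases can be modeled with a small state machine. This is a hypothetical single-node sketch (class and method names are invented for the example); in the actual mechanism the same transition would occur at each node along the route.

```python
class TwoPhaseReservation:
    """Resources move through two states: set aside (phase 1) and
    available to traffic (phase 2). Illustrative sketch."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = 0
        self.pending = {}    # phase 1: set aside, not yet usable
        self.available = {}  # phase 2: open to the voice traffic

    def allocate(self, res_id, bw):
        """Phase 1 ("resource allocation"): set resources aside."""
        if self.allocated + bw > self.capacity:
            return False
        self.allocated += bw
        self.pending[res_id] = bw
        return True

    def make_available(self, res_id):
        """Phase 2 ("resource available"): open set-aside resources."""
        if res_id not in self.pending:
            return False
        self.available[res_id] = self.pending.pop(res_id)
        return True

    def may_send(self, res_id):
        # Traffic is admitted only after the second phase completes.
        return res_id in self.available
```

Splitting reservation into two phases lets a call setup confirm end-to-end that capacity exists before any node commits it to actual traffic.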
Abstract:
A system assigns network traffic flows to appropriate queues and/or queue servicing algorithms based upon one or more flow parameters contained in reservation requests associated with the traffic flows. The system may be disposed at an intermediate network device within a computer network. The intermediate network device includes a reservation engine, a packet classification engine, an admission control entity, a traffic scheduler, and a flow analyzer. The flow analyzer includes or has access to a memory that is preprogrammed with one or more heuristic sets for use in evaluating the flow parameters of reservation requests. When a reservation request that includes one or more flow parameters characterizing the bandwidth and/or forwarding requirements of the anticipated traffic flow is received, the flow analyzer applies the heuristic sets. Depending on which set of heuristics, if any, the parameters satisfy, the flow analyzer selects the appropriate queue and/or queue servicing algorithm for the flow.
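The flow analyzer's heuristic matching can be sketched as an ordered table of predicates over the flow parameters. The thresholds, queue names, and servicing algorithms below are assumptions for illustration only; the patent describes preprogrammed heuristic sets without fixing particular values.

```python
# Each heuristic set pairs a predicate over flow parameters with a
# (queue, servicing algorithm) assignment; values are illustrative.
HEURISTICS = [
    # Low-rate, delay-sensitive flows look like voice.
    (lambda p: p["rate"] <= 64_000 and p["max_delay"] <= 20,
     ("voice_q", "priority")),
    # Higher-rate flows up to a ceiling look like video.
    (lambda p: p["rate"] <= 6_000_000,
     ("video_q", "wfq")),
]

def select_queue(params):
    """Apply the heuristic sets in order; the first satisfied set
    determines the queue and its servicing algorithm."""
    for predicate, assignment in HEURISTICS:
        if predicate(params):
            return assignment
    return ("default_q", "fifo")  # no heuristic satisfied
```

Ordering the heuristics from most to least specific ensures that a flow matching several sets lands in the most appropriate queue.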
Abstract:
A method is provided in one example embodiment and includes receiving video data at an adaptive bitrate (ABR) client that includes a buffer; determining whether a buffer level for the buffer is below a target buffer level; applying a random delay for a fetch interval associated with requesting the video data; and requesting a next segment of the video data after the random delay. The random delay can provide for a plurality of fetch times to become decorrelated from each other.
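One way the random fetch delay could be realized is sketched below. The function name, the jitter bound, and the choice to shrink the interval when the buffer is below target are assumptions for the example; the essential element from the abstract is the uniform random component that decorrelates fetch times across clients.

```python
import random

def next_fetch_delay(buffer_level, target_level, nominal_interval, jitter=2.0):
    """Return seconds to wait before requesting the next segment.
    Below the target buffer level, fetch with no base interval to
    refill; either way, add uniform random jitter so competing ABR
    clients' fetch times drift apart (illustrative sketch)."""
    base = 0.0 if buffer_level < target_level else nominal_interval
    return base + random.uniform(0.0, jitter)
```

Without the jitter, clients sharing a bottleneck tend to synchronize their fetches and each overestimates the available bandwidth during its burst; the random offset breaks that correlation.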
Abstract:
In one embodiment, a method includes identifying a current encoding rate requested by a client device for content received from a content source, setting at a network device a rate limit to limit the rate at which the content is received at the client device based on the current encoding rate, and adjusting the rate limit based on changes in the current encoding rate. The rate limit is set to allow the client device to change the current encoding rate to a next higher available encoding rate.
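The rate-limit computation can be sketched as follows. The bitrate ladder and the headroom factor are invented for the example; the abstract specifies only that the limit is derived from the current encoding rate and leaves room for the client to step up one rung.

```python
# Encoding rates the content source offers, in kb/s (illustrative ladder).
AVAILABLE_RATES = [500, 1000, 2000, 4000, 8000]

def rate_limit_for(current_rate, headroom=1.2):
    """Set the delivery rate limit just high enough that the client's
    bandwidth estimate can justify the next higher encoding rate, but
    no higher (sketch; headroom factor is an assumption)."""
    higher = [r for r in AVAILABLE_RATES if r > current_rate]
    target = higher[0] if higher else current_rate  # already at the top
    return int(target * headroom)

# The limit is re-evaluated whenever the requested encoding rate changes,
# so the cap tracks the client up and down the ladder.
```

Capping delivery this way stops a single client from measuring the whole bottleneck as available and leaping several rungs at once, which smooths quality oscillation across competing clients.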
Abstract:
A method is provided in one example embodiment and includes generating a bandwidth estimation for an adaptive bitrate (ABR) client; evaluating a current state of a buffer of the ABR client; and determining an encoding rate to be used for the ABR client based, at least, on the bandwidth estimation and the current state of the buffer. A fetch interval for the ABR client increases as the buffer becomes more full, while not reaching a level at which the ABR client is consuming data at a same rate at which it is downloading the data.
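A sketch of combining the bandwidth estimate with the buffer state is given below. The linear scaling between half and full budget is an assumption chosen for the example; the abstract requires only that both inputs influence the chosen encoding rate.

```python
def choose_encoding_rate(bw_estimate, buffer_level, target, rates):
    """Pick the highest available encoding rate the budget supports,
    where the budget is the bandwidth estimate scaled down when the
    buffer is low (illustrative: 50% budget empty, 100% budget full)."""
    fill = min(buffer_level / target, 1.0)
    budget = bw_estimate * (0.5 + 0.5 * fill)  # conservative when draining
    eligible = [r for r in rates if r <= budget]
    return max(eligible) if eligible else min(rates)
```

Being conservative at low buffer levels trades a little quality for stall protection, while a full buffer lets the client commit to the rate the bandwidth estimate supports.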
Abstract:
In one embodiment, a method includes receiving data at a cache node in a network of cache nodes, the cache node located on a data path between a source of the data and a network device requesting the data, and determining if the received data is to be cached at the cache node, wherein determining comprises calculating a cost incurred to retrieve the data. An apparatus and logic are also disclosed.
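The cost-based caching decision can be reduced to a simple comparison, sketched below. The cost model (per-retrieval cost times expected future hits, weighed against a fixed insertion cost) and all parameter names are assumptions for the example; the abstract states only that the decision involves calculating the cost incurred to retrieve the data.

```python
def should_cache(retrieval_cost, expected_hits, insertion_cost=1.0):
    """Cache the object at this node only if the retrieval cost saved
    over its expected future hits exceeds the cost of caching it
    (hypothetical cost model for illustration)."""
    return expected_hits * retrieval_cost > insertion_cost
```

Under this model, a cache node far from the source (high retrieval cost) caches more aggressively than one adjacent to it, which is the intuition behind making the decision cost-aware rather than caching everything that passes through.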