Abstract:
Resource-aware dynamic bandwidth control uses information about current network state and receiver performance to avoid, minimize, and/or recover from the effects of network spikes and data processing spikes. Linear models may be used to estimate the time required to process the data packets in a data processing queue and are thus useful for determining whether a data processing spike is occurring. When a data processing spike occurs, an alarm may be sent from a client to a server notifying the server that the client must drop packets. In response, the server can encode and transmit an independent packet suitable for replacing the queued data packets, which the client can then drop, presenting the independent packet to the processor instead.
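The following is a minimal sketch, assuming a simple per-packet linear cost model, of how a receiver might estimate queue drain time and raise the spike alarm described above; the names (QueuedPacket, SpikeDetector, send_alarm) and the coefficients are illustrative assumptions, not taken from the source.

```python
# Hypothetical receiver-side spike detector; names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class QueuedPacket:
    size_bytes: int  # encoded payload size of a packet waiting in the processing queue

class SpikeDetector:
    """Estimates queue drain time with a linear model and flags processing spikes."""

    def __init__(self, per_packet_ms: float, per_byte_ms: float, budget_ms: float):
        # Linear model: t(packet) = per_packet_ms + per_byte_ms * size_bytes
        self.per_packet_ms = per_packet_ms
        self.per_byte_ms = per_byte_ms
        self.budget_ms = budget_ms  # how far behind real time the receiver tolerates

    def estimated_drain_ms(self, queue: list[QueuedPacket]) -> float:
        return sum(self.per_packet_ms + self.per_byte_ms * p.size_bytes for p in queue)

    def processing_spike(self, queue: list[QueuedPacket]) -> bool:
        # A spike is declared when the predicted time to empty the queue
        # exceeds the allowed latency budget.
        return self.estimated_drain_ms(queue) > self.budget_ms

def maybe_alarm(detector: SpikeDetector, queue: list[QueuedPacket], send_alarm) -> None:
    """If a spike is detected, notify the server so it can send an independent packet
    and the queued dependent packets can be dropped."""
    if detector.processing_spike(queue):
        send_alarm({"type": "processing_spike", "queued_packets": len(queue)})
```

In this reading, the linear model's coefficients would be calibrated from observed decode times on the particular receiver, so the same queue length can trigger an alarm on a slow device but not on a fast one.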
Abstract:
Described are the architecture of such a system, algorithms for time synchronization during a multiway conferencing session, methods for mitigating network imperfections such as jitter to improve synchronization, methods for introducing buffering delays to create handicaps for players with faster connections, methods that help players stay synchronized (such as a synchronized metronome during a music conferencing session), and methods for synchronized recording and live delivery of synchronized data to an audience watching the distributed interaction live over the Internet.
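As a minimal sketch of the buffering-delay handicap idea, the function below computes how much extra delay each participant with a faster connection would buffer so that all participants experience roughly the same total latency; the function name, the jitter margin, and the example values are assumptions for illustration only.

```python
# Illustrative handicap equalization: faster participants buffer the difference
# between their latency and the slowest participant's latency (plus a jitter margin).

def added_buffer_delays_ms(measured_latency_ms: dict[str, float],
                           jitter_margin_ms: float = 10.0) -> dict[str, float]:
    """Return the extra buffering delay each participant should apply so that
    every participant experiences approximately the same total delay."""
    target = max(measured_latency_ms.values()) + jitter_margin_ms
    return {player: target - latency for player, latency in measured_latency_ms.items()}

# Example: player A has a 20 ms path and player B a 60 ms path; A is asked to
# buffer about 50 ms (60 + 10 - 20) so both perceive events after a similar delay.
print(added_buffer_delays_ms({"A": 20.0, "B": 60.0}))
```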
Abstract:
A method and system for providing computer-generated output, in particular graphical output. An output capturing and encoding engine is configured to intercept graphical output from an application on a server, organize the output into regions having similar motion and/or graphical characteristics, and convert the data from each region into a format chosen to balance transmission efficiency against display quality or capability at the receiving end.
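A minimal sketch of such a per-region decision follows, assuming the frame is split into fixed-size tiles and grouped by a simple motion measure; the tile size, thresholds, and encoding labels are illustrative assumptions rather than the patented method.

```python
# Hypothetical per-region encoding selection for captured graphical output.
import numpy as np

TILE = 64  # pixels per tile edge (illustrative)

def classify_and_choose(prev_frame: np.ndarray, cur_frame: np.ndarray) -> dict:
    """Split the frame into tiles, measure motion per tile, and pick an encoding
    per tile. Returns a mapping from tile coordinates to the chosen encoding."""
    h, w = cur_frame.shape[:2]
    plan = {}
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            cur = cur_frame[y:y + TILE, x:x + TILE].astype(np.int16)
            prev = prev_frame[y:y + TILE, x:x + TILE].astype(np.int16)
            motion = float(np.mean(np.abs(cur - prev)))
            if motion < 1.0:
                plan[(y, x)] = "skip"           # unchanged region: send nothing
            elif motion < 8.0:
                plan[(y, x)] = "lossless_image" # text/UI-like region: favour fidelity
            else:
                plan[(y, x)] = "video_codec"    # fast-moving region: favour bitrate efficiency
    return plan
```

A practical system would also merge adjacent tiles with the same classification into larger regions before encoding, but the core trade-off is the same: spend fidelity where content is static and detailed, and bitrate efficiency where it moves.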