Abstract:
A video streaming method for transitioning between multiple sequences of coded video data may include receiving and decoding transmission units from a first sequence of coded video data. In response to a request to transition to a second sequence of coded video data, the method may determine whether a time to transition to the second sequence of coded video data can be reduced by transitioning to the second sequence of coded video data via an intermediate sequence of coded video data. If the time can be reduced, the method may include receiving at least one transmission unit from an intermediate sequence of coded video data that corresponds to the request to transition, decoding the transmission unit from the intermediate sequence, and transitioning from the first sequence to the second sequence via the decoded transmission unit from the intermediate sequence.
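A minimal sketch of the transition decision described above is given below. The sequence layout and the names used (Sequence, next_sync_delay, plan_transition) are assumptions made for illustration, not the patented implementation; the sketch simply compares how long a direct switch would wait for a random-access point in the target sequence against how long a switch via an intermediate sequence would wait.

```python
# Hypothetical model: each sequence exposes sync (random-access) frames at a
# fixed interval; switching can only begin at such a frame.
from dataclasses import dataclass

@dataclass
class Sequence:
    name: str
    sync_interval: float   # seconds between random-access (sync) frames

def next_sync_delay(seq: Sequence, t: float) -> float:
    """Time from playback position t until the next sync frame in seq."""
    return seq.sync_interval - (t % seq.sync_interval)

def plan_transition(current_t: float, target: Sequence, intermediate: Sequence):
    """Choose a direct switch or a switch via the intermediate sequence,
    whichever lets decoding of the target content begin sooner."""
    direct_delay = next_sync_delay(target, current_t)
    via_delay = next_sync_delay(intermediate, current_t)
    if via_delay < direct_delay:
        return ("via_intermediate", via_delay)
    return ("direct", direct_delay)

if __name__ == "__main__":
    hi = Sequence("1080p tier", sync_interval=5.0)
    bridge = Sequence("intermediate tier", sync_interval=1.0)
    print(plan_transition(current_t=2.3, target=hi, intermediate=bridge))
```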
Abstract:
Techniques for coding video data are described that maintain high precision coding for low motion video content. Such techniques include determining whether a source video sequence to be coded has low motion content. When the source video sequence contains low motion content, the video sequence may be coded as a plurality of coded frames using a chain of temporal prediction references among the coded frames. Thus, a single frame in the source video sequence is coded as a plurality of frames. Because the coded frames each represent identical content, the quality of coding should improve across the plurality of frames. Optionally, the disclosed techniques may increase the resolution at which video is coded to improve precision and coding quality.
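The following sketch illustrates one way the low-motion path could be organized, assuming numpy frames, a placeholder encode_frame() callable, and an arbitrary threshold and pass count; none of these values come from the disclosure.

```python
import numpy as np

LOW_MOTION_THRESHOLD = 2.0   # mean absolute pixel difference between frames

def is_low_motion(prev: np.ndarray, curr: np.ndarray) -> bool:
    """Classify the content as low motion when consecutive frames barely change."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return float(np.mean(diff)) < LOW_MOTION_THRESHOLD

def code_low_motion_frame(frame: np.ndarray, encode_frame, passes: int = 3):
    """Code one source frame as a chain of temporally predicted coded frames.
    Each pass predicts from the reconstruction of the previous pass, so quality
    can improve across the chain even though the content is identical."""
    coded = []
    reference = None
    for _ in range(passes):
        bitstream, reference = encode_frame(frame, reference)  # placeholder encoder call
        coded.append(bitstream)
    return coded
```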
Abstract:
Techniques are disclosed for overcoming communication lag between interactive operations among devices in a streaming session. According to the techniques, a first device streams video content to a second device, and an annotation entered on a first frame displayed at the second device is communicated back to the first device. Responsive to a communication that identifies the annotation, the first device may identify an element of video content from the first frame to which the annotation applies and determine whether the identified element is present in a second frame of video content currently displayed at the first device. If so, the first device may display the annotation with the second frame at a location where the identified element is present. If not, the first device may display the annotation via an alternate technique.
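A small sketch of the placement decision follows, assuming some feature tracker already reports element positions per frame; the element names and data layout are hypothetical.

```python
def place_annotation(
    annotated_element: str,
    current_frame_elements: dict[str, tuple[int, int]],
) -> tuple[int, int] | None:
    """Return where to draw the annotation in the currently displayed frame,
    or None to signal that an alternate display technique is needed."""
    return current_frame_elements.get(annotated_element)

# Usage: the remote viewer annotated "whiteboard" in frame 120; by the time the
# message arrives, the first device is already displaying frame 132.
frame_132_elements = {"whiteboard": (410, 220), "presenter": (130, 300)}
location = place_annotation("whiteboard", frame_132_elements)
if location is not None:
    print(f"draw annotation at {location}")
else:
    print("element no longer visible; fall back to an alternate technique")
```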
Abstract:
In a video coding system, a common video sequence is coded multiple times to yield respective instances of coded video data. Each instance may be coded according to a set of coding parameters derived from a target bit rate of a respective tier of service. Each tier may be coded according to a constraint that limits the maximum coding rate of the tier to less than the target bit rate of another predetermined tier of service. Coding according to the constraint facilitates dynamic switching among tiers by a requesting client device as its processing resources or communication bandwidth change. Coding systems improved to switch among different coded streams may increase the quality of streamed video while minimizing the transmission and storage size of such content.
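One possible way to derive per-tier rate parameters under such a constraint is sketched below, assuming tiers are ordered by target bit rate and each tier's peak rate is capped just below the next higher tier's target; the 0.95 margin and the headroom for the top tier are illustrative choices only.

```python
from dataclasses import dataclass

@dataclass
class TierParams:
    name: str
    target_bps: int
    max_bps: int          # hard cap handed to the rate controller

def derive_tier_params(targets: dict[str, int], margin: float = 0.95) -> list[TierParams]:
    """Cap each tier's maximum coding rate below the next tier's target bit rate."""
    ordered = sorted(targets.items(), key=lambda kv: kv[1])
    params = []
    for i, (name, target) in enumerate(ordered):
        if i + 1 < len(ordered):
            cap = int(ordered[i + 1][1] * margin)   # stay below the next tier's target
        else:
            cap = int(target * 1.2)                 # top tier: modest headroom only
        params.append(TierParams(name, target, max(cap, target)))
    return params

print(derive_tier_params({"low": 1_000_000, "mid": 3_000_000, "high": 6_000_000}))
```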
Abstract:
In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. To reduce radio power consumption in video conferencing, it is important to introduce sufficient radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that does not significantly disrupt the real-time nature of video conferencing. In addition, data transmission can be synchronized to data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power saving modes such as LTE C-DRX.
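The bundling idea could look roughly like the sketch below, assuming an asyncio transport with a send coroutine; the 60 ms bundling window is an illustrative latency budget, not a value from the text.

```python
import asyncio

BUNDLE_WINDOW_S = 0.060   # hold outgoing media briefly so the radio can sleep longer

class BundlingSender:
    """Collects small media packets and sends them as one bundle, either when the
    bundling window expires or when a receive event has the radio active anyway."""

    def __init__(self, send):
        self._send = send                  # coroutine taking one bytes payload
        self._pending: list[bytes] = []

    def queue(self, packet: bytes) -> None:
        self._pending.append(packet)

    async def flush(self) -> None:
        if self._pending:
            await self._send(b"".join(self._pending))
            self._pending.clear()

    async def run(self) -> None:
        # Periodic flush; callers may also await flush() right after reception to
        # synchronize transmission with the already-active radio (friendly to C-DRX).
        while True:
            await asyncio.sleep(BUNDLE_WINDOW_S)
            await self.flush()
```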
Abstract:
A video coding/decoding system codes data efficiently even when input video data exhibits changes in dynamic range. The system may map pixel values of a first frame from a dynamic range specific to the input image data to a second dynamic range that applies universally to a plurality of frames that have different dynamic ranges defined for them. The system may code the mapped pixel values to reduce the bandwidth of the mapped frame data and thereafter transmit the coded image data to a channel.
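A minimal sketch of the mapping step is shown below, assuming numpy frames and per-frame (lo, hi) source ranges; the 16-bit universal range is an assumption made for illustration.

```python
import numpy as np

UNIVERSAL_MAX = 65535   # shared range applied to every frame before coding

def map_to_universal(frame: np.ndarray, src_lo: float, src_hi: float) -> np.ndarray:
    """Linearly map pixels from the frame's own dynamic range [src_lo, src_hi]
    to the universal range shared by all frames in the sequence."""
    scale = UNIVERSAL_MAX / (src_hi - src_lo)
    return np.clip((frame - src_lo) * scale, 0, UNIVERSAL_MAX).astype(np.uint16)

def unmap_from_universal(frame: np.ndarray, src_lo: float, src_hi: float) -> np.ndarray:
    """Inverse mapping applied after decoding to restore the frame's own range."""
    scale = (src_hi - src_lo) / UNIVERSAL_MAX
    return frame.astype(np.float64) * scale + src_lo
```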
Abstract:
The invention is directed to an efficient way of encoding and decoding video. Embodiments include identifying different coding units that share a similar characteristic. The characteristic can be, for example, quantization values, modes, block sizes, color space, motion vectors, depth, facial and non-facial regions, or filter values. An encoder may group such units together as a coherence group. The encoder may similarly create a table or other data structure of the coding units and extract the commonly repeating characteristic or attribute from them. The encoder may transmit the coherence groups along with the data structure and any coding units that were not part of a coherence group. The decoder may receive the data, exploit the shared characteristic by storing it locally in cache for faster repeated decoding, and decode the coherence group together.
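The grouping step might be organized as in the sketch below, using the quantization value as the shared characteristic; the CodingUnit layout and the minimum group size are hypothetical stand-ins for a real bitstream structure.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CodingUnit:
    index: int
    qp: int              # the shared characteristic in this example
    payload: bytes

def build_coherence_groups(units: list[CodingUnit], min_size: int = 2):
    """Return (groups, leftovers): groups map one extracted QP value to the unit
    indices that share it, so the QP can be signaled once per coherence group."""
    by_qp: dict[int, list[CodingUnit]] = defaultdict(list)
    for u in units:
        by_qp[u.qp].append(u)
    groups = {qp: [u.index for u in us] for qp, us in by_qp.items() if len(us) >= min_size}
    leftovers = [u for qp, us in by_qp.items() if len(us) < min_size for u in us]
    return groups, leftovers
```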
Abstract:
Embodiments of the present invention provide techniques for efficiently coding/decoding video data under circumstances when constraints are imposed on the video data. A frame from a video sequence may be marked as a delayed decoder refresh frame. Frames successive to the delayed decoder refresh frame in coding order may be predictively coded without reference to frames preceding the delayed decoder refresh frame in coding order. The distance between the delayed decoder refresh frame and the successive frames may exceed a distance threshold. Frames successive to a current frame in decoding order may be decoded without reference to frames preceding the current frame in decoding order. The distance between the current frame and the successive frames may exceed a distance threshold.
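The reference restriction implied above could be expressed as in the following sketch, where frames are identified by their position in coding order; the function name and the threshold value are illustrative only.

```python
def allowed_references(
    current: int,
    refresh_frame: int,
    candidates: list[int],
    distance_threshold: int,
) -> list[int]:
    """Once the current frame is far enough past the delayed refresh frame,
    drop every candidate reference that precedes the refresh frame."""
    if current - refresh_frame > distance_threshold:
        return [r for r in candidates if r >= refresh_frame]
    return list(candidates)

# Example: refresh frame at position 100; a frame at position 140 with a
# threshold of 30 may no longer reference anything before frame 100.
print(allowed_references(140, 100, [95, 98, 110, 130], 30))   # -> [110, 130]
```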
Abstract:
Computing devices may implement instant video communication connections for video communications. Connection information for mobile computing devices may be maintained. A request to initiate an instant video communication may be received, and if authorized, the connection information for the particular recipient mobile computing device may be accessed. Video communication data may then be sent to the recipient mobile computing device according to the connection information so that the video communication data may be displayed at the recipient device as it is received. New connection information for different mobile computing devices may be added, or updates to existing connection information may also be performed. Connection information for some mobile computing devices may be removed.
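A hypothetical sketch of the connection-information registry follows; the record fields and the simple authorization rule are assumptions for illustration, not the described system.

```python
from dataclasses import dataclass

@dataclass
class ConnectionInfo:
    device_id: str
    address: str
    port: int

class InstantVideoRegistry:
    def __init__(self):
        self._records: dict[str, ConnectionInfo] = {}
        self._authorized_pairs: set[tuple[str, str]] = set()

    def add_or_update(self, info: ConnectionInfo) -> None:
        self._records[info.device_id] = info

    def remove(self, device_id: str) -> None:
        self._records.pop(device_id, None)

    def authorize(self, caller: str, recipient: str) -> None:
        self._authorized_pairs.add((caller, recipient))

    def lookup(self, caller: str, recipient: str) -> ConnectionInfo | None:
        """Return connection info only for authorized caller/recipient pairs, so
        video data can be streamed and displayed as it is received."""
        if (caller, recipient) in self._authorized_pairs:
            return self._records.get(recipient)
        return None
```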
Abstract:
Techniques are presented for modifying images of an object in video, for example to correct for lens distortion, or to beautify a face. These techniques include extracting and validating features of an object from a source video frame, tracking those features over time, estimating a pose of the object, modifying a 3D model of the object based on the features, and rendering a modified video frame based on the modified 3D model and modified intrinsic and extrinsic matrices. These techniques may be applied in real-time to an object in a sequence of video frames.
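The per-frame pipeline listed above might be wired together as in the sketch below, with every stage stubbed as a plain callable; the stage names mirror the steps in the abstract and do not correspond to any real rendering API.

```python
from typing import Any, Callable

def process_frame(
    frame: Any,
    prev_features: Any,
    extract: Callable, validate: Callable, track: Callable,
    estimate_pose: Callable, modify_model: Callable, render: Callable,
):
    """Run one iteration of the modification pipeline and return the rendered
    frame plus the tracked features carried into the next iteration."""
    features = extract(frame)
    if not validate(features):
        return frame, prev_features          # leave the frame untouched this time
    tracked = track(prev_features, features)
    pose = estimate_pose(tracked)
    model = modify_model(tracked, pose)      # e.g., undo lens distortion, beautify
    out = render(model, pose)                # uses modified intrinsics/extrinsics
    return out, tracked
```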