Abstract:
Techniques are disclosed for overcoming communication lag between interactive operations among devices in a streaming session. According to the techniques, a first device streams video content to a second device, and an annotation is entered on a first frame being displayed at the second device; the annotation is communicated back to the first device. Responsive to a communication that identifies the annotation, the first device may identify an element of video content from the first frame to which the annotation applies and determine whether the identified element is present in a second frame of video content currently displayed at the first device. If so, the first device may display the annotation with the second frame in a location where the identified element is present. If not, the first device may display the annotation via an alternate technique.
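The placement decision can be pictured with a minimal, self-contained Python sketch; it is not the patented implementation, and the frame model, element identifiers, and function names are illustrative assumptions only.

    # Minimal sketch: frames are modelled as dicts mapping element IDs to screen
    # positions; all names and the fallback behaviour are illustrative assumptions.
    def place_annotation(element_id, text, current_frame):
        """Show the annotation with the current frame if its element is still
        visible; otherwise fall back to an alternate presentation."""
        location = current_frame.get(element_id)
        if location is not None:
            return ("overlay", location, text)   # draw at the element's current position
        return ("fallback", None, text)          # e.g. show the annotated first frame as a still

    # The element "speaker_2" was annotated on an earlier frame; by the time the
    # annotation arrives back, it may or may not remain in the displayed frame.
    current_frame = {"speaker_2": (320, 180), "whiteboard": (400, 240)}
    print(place_annotation("speaker_2", "look here", current_frame))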
Abstract:
A video coding system may initiate coding of a new coding session with reference to an “inferred key frame” that is known to both an encoder and a decoder before a coding session begins. The inferred key frame need not be transmitted between the encoder and decoder via the channel. Instead, the inferred key frame may be stored locally at the encoder and the decoder. Frames coded at the onset of a video coding session may be coded with reference to the inferred key frame, which increases the likelihood that a decoder will receive a frame it can decode properly and accelerates the rate at which the decoder generates recovered video data. Inferred key frames may be used as prediction references to recover from transmission errors.
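A toy sketch of the idea, assuming a trivial residual coder and a constant inferred reference (neither is part of the disclosure), illustrates how the first frame of a session can be coded without sending a key frame over the channel:

    import numpy as np

    # Both endpoints hold the inferred key frame locally; its content here is an
    # arbitrary assumption for illustration.
    INFERRED_KEY = np.full((4, 4), 128, dtype=np.int16)

    def encode_first_frame(frame):
        return frame.astype(np.int16) - INFERRED_KEY        # transmit only a residual

    def decode_first_frame(residual):
        return (INFERRED_KEY + residual).astype(np.uint8)   # reconstruct from the local reference

    frame = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    assert np.array_equal(decode_first_frame(encode_first_frame(frame)), frame)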
Abstract:
A method of adaptive chroma downsampling is presented. The method comprises converting a source image to a converted image in an output color format, applying a plurality of downsample filters to the converted image, estimating a distortion for each filter, and choosing the filter that produces the minimum distortion. The distortion estimation includes applying an upsample filter, and a pixel is output based on the chosen filter. Methods for closed-loop conversions are also presented.
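The filter-selection loop might look like the following sketch; the two candidate filters, the nearest-neighbour upsampler, and the mean-squared-error metric are stand-ins chosen for brevity, not the filters of the disclosure.

    import numpy as np

    def down_avg(plane):            # candidate 1: average neighbouring chroma samples
        return (plane[:, 0::2] + plane[:, 1::2]) / 2.0

    def down_left(plane):           # candidate 2: keep the left (co-sited) sample
        return plane[:, 0::2].astype(float)

    def upsample(plane):            # upsampler used inside the distortion estimate
        return np.repeat(plane, 2, axis=1)

    def choose_filter(chroma, filters=(down_avg, down_left)):
        # Estimate distortion per filter by upsampling its output, then keep the minimum.
        best = min(filters, key=lambda f: np.mean((upsample(f(chroma)) - chroma) ** 2))
        return best, best(chroma)

    chroma = np.random.randint(0, 256, (4, 8)).astype(float)
    chosen, downsampled = choose_filter(chroma)
    print(chosen.__name__, downsampled.shape)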
Abstract:
In a video coding system, a common video sequence is coded multiple times to yield respective instances of coded video data. Each instance may be coded according to a set of coding parameters derived from a target bit rate of a respective tier of service. Each tier may be coded according to a constraint that limits a maximum coding rate of the tier to be less than a target bit rate of another predetermined tier of service. Coding according to the constraint facilitates dynamic switching among tiers as a requesting client device's processing resources or communication bandwidth change. Improved coding systems that switch among different coded streams may increase the quality of streamed video while minimizing the transmission and storage size of such content.
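Purely as an illustration (the rate figures and the choice of the "other predetermined tier" are assumptions, here taken to be the next-higher tier), the constraint can be checked as follows:

    # Each entry is (target_kbps, max_coded_kbps) for a tier, ordered low to high.
    tiers = [
        (500, 450),
        (1000, 900),
        (2000, 1800),
    ]

    def constraint_satisfied(tiers):
        # Every tier's peak coding rate must stay below the target rate of the
        # next-higher tier (the assumed reference tier in this sketch).
        return all(max_rate < tiers[i + 1][0]
                   for i, (_, max_rate) in enumerate(tiers[:-1]))

    print(constraint_satisfied(tiers))   # True for the example figures above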
Abstract:
In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. In order to reduce radio power consumption in video conferencing, it is important to introduce sufficient radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that does not significantly disrupt the real-time nature of video conferencing. In addition, the data transmission can be synchronized to the data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power-saving modes such as LTE C-DRX.
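A conceptual sketch of the bundling behaviour is shown below; the latency budget and the flush-on-reception rule are assumed values for illustration, not an LTE implementation.

    BUNDLE_BUDGET_MS = 60   # assumed latency tolerance for bundling

    class Bundler:
        def __init__(self):
            self.pending = []
            self.oldest_ms = None

        def queue(self, packet, now_ms):
            if not self.pending:
                self.oldest_ms = now_ms
            self.pending.append(packet)

        def maybe_flush(self, now_ms, rx_event=False):
            """Send the bundle when the budget expires or when reception wakes the
            radio, aligning transmit with receive so the radio can sleep in between."""
            if self.pending and (rx_event or now_ms - self.oldest_ms >= BUNDLE_BUDGET_MS):
                bundle, self.pending = self.pending, []
                return bundle
            return None

    b = Bundler()
    b.queue("audio#1", now_ms=0)
    b.queue("video#1", now_ms=20)
    print(b.maybe_flush(now_ms=35, rx_event=True))   # reception triggers an aligned transmit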
Abstract:
A video coding/decoding system codes data efficiently even when input video data exhibits changes in dynamic range. The system may map pixel values of a first frame from a dynamic range specific to the input image data to a second dynamic range that applies universally to a plurality of frames that have different dynamic ranges defined for them. The system may code the mapped pixel values to reduce the bandwidth of the mapped frame data and thereafter transmit the coded image data to a channel.
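A minimal sketch of the mapping step, assuming a linear transfer and example ranges (neither is specified by this abstract), is:

    # Shared range used for every coded frame; the value is an assumption.
    UNIVERSAL_RANGE = (0.0, 10000.0)

    def map_to_universal(pixels, frame_range):
        """Remap pixel values from a frame-specific range into the shared range."""
        lo, hi = frame_range
        u_lo, u_hi = UNIVERSAL_RANGE
        scale = (u_hi - u_lo) / (hi - lo)
        return [u_lo + (p - lo) * scale for p in pixels]

    # A frame mastered over 0..100 and one over 0..4000 land in the same space
    # before coding, so the coder never sees a dynamic-range discontinuity.
    print(map_to_universal([0, 50, 100], (0, 100)))
    print(map_to_universal([0, 2000, 4000], (0, 4000)))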
Abstract:
Embodiments of the invention provide techniques for upsampling a video sequence for coding. According to the method, an estimate of camera motion may be obtained from motion sensor data. Video data may be analyzed to detect motion within frames output from a camera that is not induced by the camera motion. When non-camera motion falls within a predetermined operational limit, video upsampling processes may be engaged. In another embodiment, video upsampling may be performed by twice estimating image content for a hypothetical new frame using two different sources as inputs. A determination may be made whether the two estimates of the frame match each other sufficiently well. If so, the two estimates may be merged to yield a final estimated frame, and the new frame may be integrated into a stream of video data.
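The two-estimate consistency check from the second embodiment can be sketched as follows; the matching metric, threshold, and averaging merge are illustrative assumptions:

    import numpy as np

    MATCH_THRESHOLD = 5.0   # assumed mean-absolute-difference tolerance

    def synthesize_frame(estimate_a, estimate_b):
        """Merge the two independent estimates of the new frame if they agree; else skip."""
        if np.mean(np.abs(estimate_a - estimate_b)) <= MATCH_THRESHOLD:
            return (estimate_a + estimate_b) / 2.0   # merge, e.g. by averaging
        return None                                  # estimates disagree: do not insert the frame

    prev_based = np.full((2, 2), 100.0)   # e.g. projected forward from the previous frame
    next_based = np.full((2, 2), 104.0)   # e.g. projected backward from the next frame
    print(synthesize_frame(prev_based, next_based))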
Abstract:
Techniques for encoding data based at least in part upon an awareness of the decoding complexity of the encoded data and the ability of a target decoder to decode the encoded data are disclosed. In some embodiments, a set of data is encoded based at least in part upon a state of a target decoder to which the encoded set of data is to be provided. In some embodiments, a set of data is encoded based at least in part upon the states of multiple decoders to which the encoded set of data is to be provided.
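One way to picture decoder-aware encoding is the following sketch, in which the encoder picks a complexity profile from the headroom reported by each target decoder; the field names, thresholds, and profiles are assumptions, not the disclosed embodiment.

    def pick_encoding_profile(decoder_states):
        """With multiple target decoders, encode for the least capable one."""
        headroom = min(s["complexity_headroom"] for s in decoder_states)  # 0.0 .. 1.0
        if headroom > 0.6:
            return "high_complexity"     # e.g. more reference frames, costlier prediction
        if headroom > 0.3:
            return "medium_complexity"
        return "low_complexity"          # keep the weakest decoder running in real time

    states = [{"complexity_headroom": 0.7}, {"complexity_headroom": 0.35}]
    print(pick_encoding_profile(states))   # "medium_complexity"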
Abstract:
The invention is directed to an efficient way of encoding and decoding video. Embodiments include identifying different coding units that share a similar characteristic. The characteristic can be, for example: quantization values, modes, block sizes, color space, motion vectors, depth, facial and non-facial regions, and filter values. An encoder may then group the units together as a coherence group. An encoder may similarly create a table or other data structure of the coding units. An encoder may then extract the commonly repeating characteristic or attribute from the coding units. The encoder may transmit the coherence groups along with the data structure, as well as the other coding units that were not part of a coherence group. The decoder may receive the data, store the shared characteristic locally in a cache for faster repeated decoding, and decode the coherence group together.
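The grouping step can be sketched as below; the choice of quantization value as the shared characteristic and the field names are illustrative assumptions:

    from collections import defaultdict

    def build_coherence_groups(units, key="qp", min_size=2):
        """Group units by a shared attribute and strip the repeated attribute from each member."""
        by_attr = defaultdict(list)
        for u in units:
            by_attr[u[key]].append({k: v for k, v in u.items() if k != key})
        groups = {attr: members for attr, members in by_attr.items() if len(members) >= min_size}
        leftovers = [u for u in units if u[key] not in groups]
        return groups, leftovers   # the groups plus an attribute table travel together

    units = [{"id": 0, "qp": 28}, {"id": 1, "qp": 28}, {"id": 2, "qp": 34}]
    groups, rest = build_coherence_groups(units)
    print(groups)   # {28: [{'id': 0}, {'id': 1}]} -- qp 28 is stored once for the group
    print(rest)     # [{'id': 2, 'qp': 34}] is transmitted as an ordinary unit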
Abstract:
Error mitigation techniques are provided for a video coding system in which input frames are selected for coding either as Random Access Pictures (“RAP frames”) or as non-RAP frames. Coded RAP frames may include RAP identifiers that set an ID context for subsequent frames. Coded non-RAP frames may include RAP identifiers that match the RAP identifiers that were included in the coded RAP frames. Thus, in the absence of transmission errors, a coded non-RAP frame should include a RAP identifier that matches the identifier of the preceding RAP frame. If the identifier of a non-RAP frame does not match the identifier of the RAP frame that immediately preceded it, this indicates that a RAP frame was lost during transmission. In this case, the decoder may engage error recovery processes.
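Decoder-side detection can be sketched in a few lines; the frame record layout is an assumption made only for this example.

    def check_frames(frames):
        """Flag a lost RAP frame when a non-RAP frame's RAP identifier is unexpected."""
        last_rap_id = None
        for f in frames:
            if f["is_rap"]:
                last_rap_id = f["rap_id"]                 # RAP frame sets the ID context
            elif f["rap_id"] != last_rap_id:
                return f"loss detected at frame {f['n']}: start error recovery"
        return "no loss detected"

    stream = [
        {"n": 0, "is_rap": True,  "rap_id": 1},
        {"n": 1, "is_rap": False, "rap_id": 1},
        {"n": 2, "is_rap": False, "rap_id": 2},   # references RAP 2, which never arrived
    ]
    print(check_frames(stream))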