Abstract:
A method of managing resources on a terminal includes determining a number of downloaded video streams active at the terminal, prioritizing the active video streams, assigning a decoding quality level to each active video stream based on its priority assignment, and apportioning reception bandwidth to each active video stream based on its assigned quality level.
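For illustration, a minimal Python sketch of the management loop described above; the Stream fields, the priority policy (ordering by stream id), and the four-tier quality mapping are assumptions for the example, not details of the disclosed method.

    from dataclasses import dataclass

    @dataclass
    class Stream:
        stream_id: int
        priority: int = 0          # higher value = more important
        quality_level: int = 0     # decoding quality tier derived from priority
        bandwidth_kbps: float = 0.0

    def apportion_bandwidth(streams, total_kbps):
        """Prioritize streams, assign quality levels, apportion bandwidth."""
        # Prioritize: a stand-in policy that simply orders by stream id.
        for rank, s in enumerate(sorted(streams, key=lambda s: s.stream_id)):
            s.priority = len(streams) - rank
            s.quality_level = min(3, s.priority)  # clamp to four quality tiers
        # Apportion reception bandwidth in proportion to quality level.
        total_quality = sum(s.quality_level for s in streams) or 1
        for s in streams:
            s.bandwidth_kbps = total_kbps * s.quality_level / total_quality
        return streams

    for s in apportion_bandwidth([Stream(1), Stream(2), Stream(3)], 6000):
        print(s)  # the highest-priority stream receives the largest share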
Abstract:
Frame packing techniques are disclosed for multi-directional images and video. According to an embodiment, a multi-directional source image is reformatted into a format in which image data from opposing fields of view are represented in respective regions of the packed image as flat image content. Image data from a multi-directional field of view of the source image between the opposing fields of view are represented in another region of the packed image as equirectangular image content. It is expected that use of the formatted frame will lead to coding efficiencies when the formatted image is processed by predictive video coding techniques and the like.
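For illustration, a short numpy sketch of one possible packed layout under the scheme described above: two opposing (polar) fields of view stored side by side as flat regions, with the remaining band stored beneath them as equirectangular content. The region sizes and stacking order are assumptions, not the disclosed format.

    import numpy as np

    def pack_frame(top_flat, bottom_flat, equirect_band):
        """Pack two flat opposing views above an equirectangular band."""
        polar_row = np.concatenate([top_flat, bottom_flat], axis=1)
        assert polar_row.shape[1] == equirect_band.shape[1], "widths must match"
        return np.concatenate([polar_row, equirect_band], axis=0)

    # Dummy single-channel planes (rows x cols):
    top = np.zeros((256, 256), dtype=np.uint8)
    bottom = np.zeros((256, 256), dtype=np.uint8)
    band = np.zeros((256, 512), dtype=np.uint8)
    print(pack_frame(top, bottom, band).shape)  # (512, 512)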
Abstract:
Techniques are disclosed for coding and decoding video data using object recognition and object modeling as a basis of coding and error recovery. A video decoder may decode coded video data received from a channel. The video decoder may perform object recognition on decoded video data obtained therefrom and, when an object is recognized in the decoded video data, the video decoder may generate a model representing the recognized object. It may store data representing the model locally. The video decoder may communicate the model data to an encoder, where it may form a basis of error mitigation and recovery. The video decoder also may monitor deviation patterns in the object model and associated patterns in audio content; if video decoding is suspended due to operational errors, the video decoder may generate simulated video data by analyzing audio data received during the suspension period and developing video data from the object model and the deviations associated with patterns detected in the audio data.
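For illustration, a high-level Python sketch of the decoder-side flow; the class, its stand-in recognition step, and the amplitude-based deviation cue are hypothetical simplifications of the described behavior.

    class ObjectModelDecoder:
        """Sketch: model a recognized object; simulate video from audio cues."""

        def __init__(self):
            self.model = None  # model data stored locally

        def on_decoded_frame(self, frame):
            obj = frame.get("face")  # stand-in for real object recognition
            if obj is not None:
                self.model = {"object": obj}  # update the stored object model
            return frame

        def on_decode_suspended(self, audio_chunk):
            # While decoding is suspended, derive a deviation cue from the
            # still-arriving audio (stand-in: amplitude as mouth openness)
            # and render simulated video from the stored model.
            if self.model is None:
                return None
            deviation = min(1.0, abs(audio_chunk["amplitude"]))
            return {"simulated": True, "object": self.model["object"],
                    "mouth_open": deviation}

    dec = ObjectModelDecoder()
    dec.on_decoded_frame({"face": "speaker-1"})
    print(dec.on_decode_suspended({"amplitude": 0.4}))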
Abstract:
Techniques are disclosed for coding and decoding video captured as cube map images. According to these techniques, padded reference images are generated for use in predicting input data. A reference image is stored in a cube map format. A padded reference image is generated from the reference image in which image data of a first view contained in the reference image is replicated and placed adjacent to a second view contained in the cube map image. When coding a pixel block of an input image, a prediction search may be performed between the input pixel block and content of the padded reference image. When the prediction search identifies a match, the pixel block may be coded with respect to the matching data from the padded reference image. The presence of replicated data in the padded reference image is expected to increase the likelihood that adequate prediction matches will be identified for input pixel block data, which will increase the overall efficiency of the video coding.
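For illustration, a small numpy sketch of padded-reference generation: pixel data from a neighboring cube face is replicated alongside a face so that a prediction search can match content crossing the face seam. The face layout, neighbor choice, and pad width are assumptions for the example.

    import numpy as np

    def pad_face(faces, face_idx, neighbor_idx, pad=8):
        """Extend a cube face on its right edge with replicated neighbor data."""
        strip = faces[neighbor_idx][:, :pad]  # first columns of the neighbor
        return np.concatenate([faces[face_idx], strip], axis=1)

    faces = [np.full((64, 64), i, dtype=np.uint8) for i in range(6)]
    padded = pad_face(faces, face_idx=0, neighbor_idx=1)
    print(padded.shape)  # (64, 72): original face plus replicated padding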
Abstract:
Techniques are disclosed for overcoming communication lag between interactive operations among devices in a streaming session. According to the techniques, a first device streams video content to a second device, and an annotation entered on a first frame being displayed at the second device is communicated back to the first device. Responsive to a communication that identifies the annotation, the first device may identify an element of video content from the first frame to which the annotation applies and determine whether the identified element is present in a second frame of video content currently displayed at the first device. If so, the first device may display the annotation with the second frame at a location where the identified element is present. If not, the first device may display the annotation via an alternate technique.
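For illustration, a minimal Python sketch of the relocation decision; the element-tracking representation (an id-to-location map) and the pinned-overlay fallback are hypothetical choices, not the disclosed techniques.

    def place_annotation(annotation, element_id, current_frame_elements):
        """current_frame_elements maps visible element ids to (x, y) locations."""
        if element_id in current_frame_elements:
            # The annotated element is still on screen: track it.
            x, y = current_frame_elements[element_id]
            return {"mode": "tracked", "x": x, "y": y, "text": annotation}
        # Element left the frame: fall back to an alternate display technique.
        return {"mode": "pinned-overlay", "text": annotation}

    # The annotated element moved between frames but is still visible:
    print(place_annotation("look here", "ball-7", {"ball-7": (120, 88)}))
    # The element is gone from the current frame:
    print(place_annotation("look here", "ball-7", {}))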
Abstract:
Chroma deblock filtering of reconstructed video samples may be performed to remove blockiness artifacts and reduce color artifacts without over-smoothing. In a first method, chroma deblocking may be performed for boundary samples of a smallest transform size, regardless of partitions and coding modes. In a second method, chroma deblocking may be performed when a boundary strength is greater than 0. In a third method, chroma deblocking may be performed regardless of boundary strengths. In a fourth method, the type of chroma deblocking to be performed may be signaled in a slice header by a flag. Furthermore, luma deblock filtering techniques may be applied to chroma deblock filtering.
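For illustration, a schematic Python sketch of how the four options might combine: a slice-header flag (the fourth method) selects which of the first three decision rules governs filtering at a given boundary. The flag names and conventions are assumptions; the filter itself is omitted.

    def should_chroma_deblock(slice_flag, boundary_strength, is_min_tu_boundary):
        """Decide whether to chroma-deblock one boundary, per the slice flag."""
        if slice_flag == "min_tu":      # method 1: every smallest-TU boundary
            return is_min_tu_boundary
        if slice_flag == "bs_gt_0":     # method 2: boundary strength > 0
            return boundary_strength > 0
        if slice_flag == "always":      # method 3: regardless of strength
            return True
        return False

    print(should_chroma_deblock("bs_gt_0", boundary_strength=1,
                                is_min_tu_boundary=False))  # True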
Abstract:
An encoding system may include a video source that provides video data to be coded, a video coder, a transmitter, and a controller to manage operation of the system. The controller may control the video coder to code and compress the video data from the video source, based upon one or more motion prediction parameters. The transmitter may transmit the coded video data. A decoding system may decode the video data based upon the motion prediction parameters.
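For illustration, a structural Python sketch of the system; the class names, the parameter set, and the pass-through channel are placeholders for the components described above.

    class Controller:
        """Manages operation: supplies motion prediction parameters."""
        def __init__(self, search_range=16, ref_frames=1):
            self.params = {"search_range": search_range, "ref_frames": ref_frames}

    class Coder:
        def code(self, frame, params):
            # Stand-in for motion-compensated coding driven by the parameters.
            return {"coded": frame, "params": params}

    class Transmitter:
        def send(self, coded):
            return coded  # stand-in for the channel

    controller = Controller(search_range=32)
    received = Transmitter().send(Coder().code("frame-0", controller.params))
    print(received["params"])  # the decoder decodes using these parameters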
Abstract:
The invention is directed to an efficient way of encoding and decoding video. Embodiments include identifying different coding units that share a similar characteristic. The characteristic can be, for example: quantization values, modes, block sizes, color space, motion vectors, depth, facial and non-facial regions, or filter values. An encoder may then group the units together as a coherence group. The encoder may similarly create a table or other data structure of the coding units, and may extract the commonly repeating characteristic or attribute from the coding units. The encoder may transmit the coherence groups along with the data structure, together with other coding units that were not part of any coherence group. The decoder may receive the data, store the shared characteristic locally in cache for faster repeated decoding, and decode the coherence group together.
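For illustration, a minimal Python sketch of coherence grouping keyed on one example characteristic (a quantization value); the table layout and field names are assumptions for the example.

    from collections import defaultdict

    def build_coherence_groups(coding_units, key="qp"):
        """Group units by a shared attribute and extract it into a table."""
        groups = defaultdict(list)
        for cu in coding_units:
            # Factor the shared attribute out of each unit in the group.
            groups[cu[key]].append({k: v for k, v in cu.items() if k != key})
        table = {gid: value for gid, value in enumerate(groups)}
        payload = {gid: groups[value] for gid, value in table.items()}
        return table, payload

    cus = [{"id": 0, "qp": 28}, {"id": 1, "qp": 28}, {"id": 2, "qp": 34}]
    table, payload = build_coherence_groups(cus)
    print(table)    # {0: 28, 1: 34} -- each shared value stored once
    print(payload)  # group members with the shared value factored out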
Abstract:
Coding techniques for input video may include assigning picture identifiers to input frames in either long-form or short-form formats. If a network error has occurred that results in loss of previously-coded video data, a new input frame may be assigned a picture identifier that is coded in a long-form coding format. If no network error has occurred, the input frame may be assigned a picture identifier that is coded in a short-form coding format. Long-form coding may guard against loss of picture-identifier synchronization between an encoder and a decoder.
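For illustration, a short Python sketch of the identifier policy: after a reported network error, the next frame carries a long-form (full) picture identifier; otherwise a short-form identifier, here taken as the low bits, suffices. The field width is an assumption.

    def code_picture_id(pic_id, error_occurred, short_bits=8):
        """Choose long-form or short-form coding for a picture identifier."""
        if error_occurred:
            return {"form": "long", "value": pic_id}  # full identifier
        return {"form": "short", "value": pic_id % (1 << short_bits)}  # low bits

    print(code_picture_id(4097, error_occurred=False))  # short form: value 1
    print(code_picture_id(4098, error_occurred=True))   # long form after loss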