Abstract:
A video coding/decoding system codes data efficiently even when input video data exhibits changes in dynamic range. The system may map pixel values of an input frame from a dynamic range specific to that frame to a second dynamic range that applies universally to a plurality of frames that have different dynamic ranges defined for them. The system may code the mapped pixel values to reduce the bandwidth of the mapped frame data and thereafter transmit the coded image data to a channel.
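The mapping step described above can be sketched as a simple linear remap. This is a minimal illustration, not the patented method; the function name, the linear form, and the example ranges are all assumptions.

```python
def map_to_universal_range(pixels, frame_min, frame_max,
                           universal_min=0.0, universal_max=1.0):
    """Linearly remap pixel values from a frame-specific dynamic range
    [frame_min, frame_max] to a universal range shared by all frames."""
    scale = (universal_max - universal_min) / (frame_max - frame_min)
    return [universal_min + (p - frame_min) * scale for p in pixels]

# A frame using a limited (16-235) range, remapped to the universal [0, 1] range.
frame = [16, 128, 235]
mapped = map_to_universal_range(frame, 16, 235)
```

After mapping, every frame's pixel values occupy the same numeric range, so a single codec configuration can code frames that originally had different dynamic ranges.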
Abstract:
Embodiments of the invention provide techniques for upsampling a video sequence for coding. According to the method, an estimate of camera motion may be obtained from motion sensor data. Video data may be analyzed to detect motion within frames output from a camera that is not induced by camera motion. When non-camera motion falls within a predetermined operational limit, video upsampling processes may be engaged. In another embodiment, video upsampling may be performed by twice estimating image content for a hypothetical new frame using two different sources as inputs. A determination may be made whether the two estimates of the frame match each other sufficiently well. If so, the two estimates may be merged to yield a final estimated frame, and the new frame may be integrated into a stream of video data.
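The dual-estimate check in the second embodiment can be sketched as follows. This is a toy model under stated assumptions: frames are flat lists of pixel values, the two "sources" are stand-ins (the preceding and following frames), and mean absolute difference is an assumed match metric, none of which comes from the abstract itself.

```python
def try_upsample(prev_frame, next_frame, match_threshold=5.0):
    """Estimate a hypothetical in-between frame twice, from two different
    sources; merge the estimates only if they agree sufficiently well."""
    est_forward = list(prev_frame)    # assumed estimate from the earlier source
    est_backward = list(next_frame)   # assumed estimate from the later source
    # Mean absolute difference as a simple agreement metric.
    mad = sum(abs(a - b) for a, b in zip(est_forward, est_backward)) / len(est_forward)
    if mad > match_threshold:
        return None                   # estimates disagree: do not insert a frame
    # Merge the two estimates by averaging to yield the final estimated frame.
    return [(a + b) / 2 for a, b in zip(est_forward, est_backward)]
```

When the two estimates disagree beyond the threshold, the upsampler simply declines to synthesize the new frame rather than risk inserting a visible artifact.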
Abstract:
The invention is directed to an efficient technique for encoding and decoding video. Embodiments include identifying different coding units that share a similar characteristic. The characteristic can be, for example: quantization values, modes, block sizes, color space, motion vectors, depth, facial and non-facial regions, or filter values. An encoder may group such units together as a coherence group. An encoder may similarly create a table or other data structure of the coding units. An encoder may then extract the commonly repeating characteristic or attribute from the coding units. The encoder may transmit the coherence groups along with the data structure, as well as other coding units that were not part of a coherence group. The decoder may receive the data, exploit the shared characteristic by storing it locally in a cache for faster repeated decoding, and decode the coherence group together.
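The grouping and extraction steps can be sketched as follows. This is a hypothetical illustration: the function name, the dictionary-based representation of coding units, and the use of a quantization parameter as the shared characteristic are assumptions, not details from the abstract.

```python
from collections import defaultdict

def build_coherence_groups(coding_units, key):
    """Group coding units that share the attribute named by `key`, and
    extract the shared value into a table so it need be sent only once
    per coherence group rather than once per unit."""
    groups = defaultdict(list)
    for unit in coding_units:
        # Strip the shared attribute from each unit; it moves to the table.
        groups[unit[key]].append({k: v for k, v in unit.items() if k != key})
    table = {i: value for i, value in enumerate(groups)}
    coherent = {i: groups[value] for i, value in table.items()}
    return table, coherent
```

A decoder receiving the table can cache each shared value once and apply it to every unit in the corresponding coherence group.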
Abstract:
Error mitigation techniques are provided for a video coding system in which input frames are selected for coding either as Random Access Pictures (“RAP frames”) or as non-RAP frames. Coded RAP frames may include RAP identifiers that set an ID context for subsequent frames. Coded non-RAP frames may include RAP identifiers that match the RAP identifiers that were included in the coded RAP frames. Thus, in the absence of transmission errors, a coded non-RAP frame should include a RAP identifier that matches the identifier of the preceding RAP frame. If the identifier of a non-RAP frame does not match the identifier of the RAP frame that immediately preceded it, this indicates that a RAP frame was lost during transmission. In this case, the decoder may engage error recovery processes.
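The identifier-matching check can be sketched as a scan over received frames. This is a minimal sketch, assuming frames are dictionaries with `is_rap` and `rap_id` fields (field names are hypothetical).

```python
def detect_lost_rap(frames):
    """Scan a coded stream in decode order; return indices of non-RAP
    frames whose RAP identifier does not match the most recently
    received RAP frame, which indicates a RAP frame was lost."""
    current_rap_id = None
    lost = []
    for i, frame in enumerate(frames):
        if frame["is_rap"]:
            current_rap_id = frame["rap_id"]   # RAP frame sets the ID context
        elif frame["rap_id"] != current_rap_id:
            lost.append(i)                     # mismatch: a RAP frame was dropped
            current_rap_id = frame["rap_id"]   # resync to the new ID context
    return lost
```

A decoder would treat each flagged index as a trigger for its error recovery processes, for example by requesting a refresh from the encoder.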
Abstract:
Embodiments of the present invention generate estimates of device motion from two data sources on a computing device—a motion sensor and a camera. The device may compare the estimates to each other to determine if they agree. If they agree, the device may confirm that device motion estimates based on the motion sensor are accurate and may output those estimates to an application within the device. If the device motion estimates disagree, the device may alter the motion estimates obtained from the motion sensor before outputting them to the application.
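The compare-and-correct behavior can be sketched as below. The scalar motion values, the agreement tolerance, and the averaging correction are all assumptions for illustration; the abstract does not specify how the sensor estimate is altered.

```python
def reconcile_motion(sensor_est, camera_est, tolerance=0.1):
    """Compare motion estimates from two sources. If they agree within
    the tolerance, confirm and output the motion-sensor estimate;
    otherwise alter it (here, by blending toward the camera estimate)."""
    if abs(sensor_est - camera_est) <= tolerance:
        return sensor_est                      # estimates agree: sensor is trusted
    return (sensor_est + camera_est) / 2       # hypothetical correction when they disagree
```

The corrected value is what would be handed to an application on the device in place of the raw sensor estimate.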
Abstract:
Embodiments of the present invention provide techniques for efficiently coding/decoding video data during circumstances when constraints are imposed on the video data. A frame from a video sequence may be marked as a delayed decoder refresh frame. Frames successive to the delayed decoder refresh frame in coding order may be predictively coded without reference to frames preceding the delayed decoder refresh frame in coding order. The distance between the delayed decoder refresh frame and the successive frames may exceed a distance threshold. Frames successive to a current frame in decoding order may be decoded without reference to frames preceding the current frame in decoding order. The distance between the current frame and the successive frames may exceed a distance threshold.
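The reference-restriction rule can be sketched as a filter over candidate reference frames. This is an illustrative reading of the abstract, with frame positions modeled as integer indices in coding order; the function and parameter names are hypothetical.

```python
def valid_references(frame_idx, ddr_idx, distance_threshold, candidates):
    """Once a frame's distance past the delayed decoder refresh (DDR)
    frame exceeds the threshold, drop any candidate reference that
    precedes the DDR frame in coding order."""
    if frame_idx - ddr_idx > distance_threshold:
        return [c for c in candidates if c >= ddr_idx]
    return list(candidates)
```

Within the threshold distance, frames may still reference pre-refresh frames; beyond it, the prediction chain is cut at the refresh point, which is what lets a decoder join the stream at the delayed refresh frame.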
Abstract:
A system obtains a data set representing immersive video content for display at a display time, including first data representing the content according to a first level of detail, and second data representing the content according to a second, higher level of detail. During one or more first times prior to the display time, the system causes at least a portion of the first data to be stored in a video buffer. During one or more second times prior to the display time, the system generates a prediction of a viewport for displaying the content to a user at the display time, identifies a portion of the second data corresponding to the prediction of the viewport, and causes the identified portion of the second data to be stored in the video buffer. At the display time, the system causes the content to be displayed to the user using the video buffer.
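The two-stage buffering strategy can be sketched as follows. Tiles are modeled here as dictionary entries keyed by tile ID; this representation and the function name are assumptions for illustration.

```python
def fill_buffer(low_detail_tiles, high_detail_tiles, predicted_viewport):
    """Stage a video buffer ahead of display time: store the low-detail
    data broadly, then add high-detail data only for the tiles that the
    predicted viewport is expected to cover."""
    buffer = {"low": dict(low_detail_tiles), "high": {}}
    for tile_id in predicted_viewport:
        if tile_id in high_detail_tiles:
            buffer["high"][tile_id] = high_detail_tiles[tile_id]
    return buffer
```

At display time, the renderer can draw the viewport from the high-detail entries and fall back to the low-detail data if the viewport prediction was wrong, so the user never sees a hole in the content.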
Abstract:
In communication applications, aggregate source image data at a transmitter exceeds the data that is needed to display a rendering of a viewport at a receiver. Improved streaming techniques include estimating a location of a viewport at a future time. According to such techniques, the viewport may represent a portion of an image from a multi-directional video to be displayed at the future time, and tile(s) of the image may be identified in which the viewport is estimated to be located. In these techniques, the image data of tile(s) in which the viewport is estimated to be located may be requested at a first service tier, and the other tiles in which the viewport is not estimated to be located may be requested at a second service tier, lower than the first service tier.
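The tier assignment can be sketched as a partition of the image's tiles. This is a minimal sketch; tile identifiers and tier labels are hypothetical.

```python
def assign_service_tiers(all_tiles, predicted_viewport_tiles):
    """Partition an image's tiles by the viewport prediction: tiles the
    viewport is estimated to cover are requested at the first (higher)
    service tier, all remaining tiles at the second (lower) tier."""
    tier1 = [t for t in all_tiles if t in predicted_viewport_tiles]
    tier2 = [t for t in all_tiles if t not in predicted_viewport_tiles]
    return {"tier1": tier1, "tier2": tier2}
```

Because every tile is still requested at some tier, a mispredicted viewport degrades quality rather than producing missing image data.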
Abstract:
A technique for transmitting data in a copresence environment includes initiating a virtual communication session between a local device and remote devices in a shared copresence environment, where each of the remote devices transmits a sending quality data stream in the virtual communication session. A region of interest for the local device is determined that includes a portion of the copresence environment. The local device subscribes to a first quality data stream for the remote devices represented in the region of interest, and a second quality data stream for the remote devices not represented in the region of interest.
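The subscription rule can be sketched as a per-device mapping. The quality labels and the set-based region-of-interest test are assumptions for illustration.

```python
def choose_subscriptions(remote_devices, region_of_interest):
    """For each remote device, subscribe to the first (higher) quality
    stream if the device is represented in the local device's region of
    interest, and to the second (lower) quality stream otherwise."""
    return {dev: ("first" if dev in region_of_interest else "second")
            for dev in remote_devices}
```

As the local user's region of interest shifts within the copresence environment, re-running this mapping moves devices between the two subscription levels, keeping bandwidth concentrated where the user is looking.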