Abstract:
Techniques are disclosed for coding video in applications where regions of video are inactive on a frame-to-frame basis. According to the techniques, coding processes update and reconstruct only a subset of the pixel blocks within a frame, while the remaining pixel blocks are retained from a previously coded frame stored in a coder's or decoder's reference frame buffer. The technique is called Backward Reference Updating (or “BRU”) for convenience. At a desired pixel block granularity, based on the activity between a current frame to be coded and its reference frame(s), BRU performs prediction, transform, quantization, and reconstruction only on regions that are determined to be active. The reconstructed pixels in these active regions are placed directly onto a specified reference frame in memory instead of creating a new frame. Therefore, fewer memory transfers need to be performed.
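A minimal sketch of the in-place update is shown below; the 16×16 block size, the mean-absolute-difference activity measure, and the threshold are illustrative assumptions rather than details taken from the disclosure:

```python
import numpy as np

BLOCK = 16                 # assumed pixel-block granularity
ACTIVITY_THRESHOLD = 2.0   # assumed mean-absolute-difference threshold

def bru_update(reference: np.ndarray, current: np.ndarray) -> list[tuple[int, int]]:
    """Reconstruct only 'active' blocks and write them back into the
    reference frame buffer, instead of allocating a new frame."""
    h, w = current.shape
    active_blocks = []
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            ref_blk = reference[y:y+BLOCK, x:x+BLOCK]
            cur_blk = current[y:y+BLOCK, x:x+BLOCK]
            # Simple activity measure: mean absolute difference vs. reference.
            if np.mean(np.abs(cur_blk.astype(int) - ref_blk.astype(int))) > ACTIVITY_THRESHOLD:
                # Stand-in for predict / transform / quantize / reconstruct.
                reconstructed = cur_blk
                # Place reconstructed pixels directly into the reference buffer.
                reference[y:y+BLOCK, x:x+BLOCK] = reconstructed
                active_blocks.append((y, x))
            # Inactive blocks are simply retained from the stored reference.
    return active_blocks
```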
Abstract:
In communication applications, aggregate source image data at a transmitter exceeds the data needed to display a rendering of a viewport at a receiver. Improved streaming techniques are disclosed that include estimating a location of a viewport at a future time. According to such techniques, the viewport may represent a portion of an image from a multi-directional video to be displayed at the future time, and tile(s) of the image may be identified in which the viewport is estimated to be located. In these techniques, the image data of the tile(s) in which the viewport is estimated to be located may be requested at a first service tier, and the other tile(s), in which the viewport is not estimated to be located, may be requested at a second service tier, lower than the first service tier.
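The two-tier request decision can be sketched as follows for a multi-directional frame split into horizontal tiles; the linear viewport predictor, tile layout, and viewport width are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Tile:
    index: int
    x0: float   # normalized horizontal extent of the tile [x0, x1)
    x1: float

def estimate_viewport_center(center: float, velocity: float, dt: float) -> float:
    """Linear prediction of the viewport position at a future time
    (an illustrative estimator; the disclosure does not prescribe one)."""
    return (center + velocity * dt) % 1.0

def assign_service_tiers(tiles, predicted_center, viewport_width=0.25):
    """Tiles overlapping the predicted viewport go to the first (higher)
    service tier; all remaining tiles go to the second (lower) tier."""
    lo = (predicted_center - viewport_width / 2) % 1.0
    hi = (predicted_center + viewport_width / 2) % 1.0
    high_tier, low_tier = [], []
    for t in tiles:
        if lo <= hi:
            overlaps = t.x1 >= lo and t.x0 <= hi
        else:                                   # viewport wraps around 1.0
            overlaps = t.x1 >= lo or t.x0 <= hi
        (high_tier if overlaps else low_tier).append(t.index)
    return high_tier, low_tier

tiles = [Tile(i, i / 8, (i + 1) / 8) for i in range(8)]
center = estimate_viewport_center(center=0.10, velocity=0.05, dt=2.0)
print(assign_service_tiers(tiles, center))   # e.g. ([0, 1, 2], [3, 4, 5, 6, 7])
```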
Abstract:
Techniques are disclosed for improved video coding with virtual reference frames. A motion vector for prediction of a pixel block from a reference may be constrained based on the reference. In an aspect, if the reference is a temporally interpolated virtual reference frame whose corresponding time is close to the time of the current pixel block, the motion vector for prediction may be constrained in magnitude and/or precision. In another aspect, a bitstream syntax for encoding the constrained motion vector may also be constrained. In this manner, the techniques proposed herein contribute to improved coding efficiencies.
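A sketch of the motion-vector constraint is shown below; the time-proximity threshold, magnitude cap, and precision step are hypothetical values chosen only to illustrate the idea:

```python
def constrain_mv(mv_x, mv_y, is_virtual_ref, time_delta,
                 near_time_threshold=1, mv_cap=8, coarse_precision=4):
    """Clamp magnitude and coarsen precision of a motion vector when the
    reference is a temporally interpolated virtual frame whose time is
    close to that of the current pixel block."""
    if not (is_virtual_ref and abs(time_delta) <= near_time_threshold):
        return mv_x, mv_y   # no constraint for ordinary references

    def constrain(c):
        c = max(-mv_cap, min(mv_cap, c))                    # limit magnitude
        return (c // coarse_precision) * coarse_precision   # limit precision

    return constrain(mv_x), constrain(mv_y)

print(constrain_mv(37, -11, is_virtual_ref=True, time_delta=0))    # -> (8, -8)
print(constrain_mv(37, -11, is_virtual_ref=False, time_delta=0))   # -> (37, -11)
```

Because the constrained vector is drawn from a smaller, coarser range, the bitstream syntax that carries it can likewise be restricted to fewer codewords.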
Abstract:
A video classification, indexing, and retrieval system is disclosed that classifies and retrieves video along multiple indexing dimensions. A search system may field queries identifying desired parameters of video, search an indexed database for videos that match the query parameters, and create clips, extracted from the responsive videos, that are provided in response to the query. In this manner, different queries may cause different clips to be created from a single video, each clip tailored to the parameters of the query that is received.
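A toy sketch of the query-driven clip creation follows; the segment schema, labels, and index contents are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    video_id: str
    start: float        # seconds
    end: float
    labels: set         # classification labels along several indexing dimensions

# A toy index; a real system would index along many dimensions.
INDEX = [
    Segment("vid1", 0.0, 12.0, {"outdoor", "dog"}),
    Segment("vid1", 30.0, 55.0, {"indoor", "cooking"}),
    Segment("vid2", 5.0, 20.0, {"outdoor", "beach"}),
]

def search_and_clip(query_labels: set):
    """Return (video_id, start, end) clip descriptors for indexed segments
    that match all query parameters; different queries yield different
    clips from the same video."""
    return [(s.video_id, s.start, s.end)
            for s in INDEX if query_labels <= s.labels]

print(search_and_clip({"outdoor"}))            # clips from vid1 and vid2
print(search_and_clip({"indoor", "cooking"}))  # a different clip from vid1
```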
Abstract:
Techniques are disclosed for generating virtual reference frames that may be used for prediction of input video frames. The virtual reference frames may be derived from already-coded reference frames and thereby incur reduced signaling overhead. Moreover, signaling of virtual reference frames may be avoided until an encoder selects the virtual reference frame as a prediction reference for a current frame. In this manner, the techniques proposed herein contribute to improved coding efficiencies.
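The sketch below illustrates the idea with a simple temporally weighted blend standing in for motion-compensated interpolation, and a sum-of-absolute-differences cost standing in for the encoder's mode decision; both are assumptions for illustration only:

```python
import numpy as np

def make_virtual_reference(ref_a, ref_b, t_a, t_b, t_cur):
    """Derive a virtual reference from two already-coded references by a
    temporally weighted blend (a simplified stand-in for motion-compensated
    temporal interpolation)."""
    w = (t_cur - t_a) / (t_b - t_a)
    return ((1 - w) * ref_a + w * ref_b).astype(ref_a.dtype)

def choose_reference(current, candidates):
    """Encoder-side selection: pick the candidate with the lowest SAD.
    The virtual reference only needs to be signaled if it is chosen."""
    costs = {name: np.abs(current.astype(int) - ref.astype(int)).sum()
             for name, ref in candidates.items()}
    return min(costs, key=costs.get)

ref_a = np.full((8, 8), 10, dtype=np.uint8)
ref_b = np.full((8, 8), 30, dtype=np.uint8)
current = np.full((8, 8), 20, dtype=np.uint8)
virtual = make_virtual_reference(ref_a, ref_b, t_a=0, t_b=2, t_cur=1)
best = choose_reference(current, {"ref_a": ref_a, "ref_b": ref_b, "virtual": virtual})
print(best)   # "virtual" -- signaled only because it was selected
```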
Abstract:
Some embodiments provide a method for initiating a video conference using a first mobile device. The method presents, during an audio call through a wireless communication network with a second device, a selectable user-interface (UI) item on the first mobile device for switching from the audio call to the video conference. The method receives a selection of the selectable UI item. The method initiates the video conference without terminating the audio call. The method terminates the audio call before allowing the first and second devices to present audio and video data exchanged through the video conference.
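The call-state handoff can be sketched as a small state machine; the class and method names below are hypothetical and only mirror the sequence described above:

```python
from enum import Enum, auto

class CallState(Enum):
    AUDIO_CALL = auto()
    VIDEO_CONFERENCE_PENDING = auto()   # conference initiated, audio call still up
    VIDEO_CONFERENCE_ACTIVE = auto()

class CallSession:
    def __init__(self):
        self.state = CallState.AUDIO_CALL
        self.audio_call_active = True

    def on_video_ui_item_selected(self):
        """User selected the video-conference UI item during the audio call:
        initiate the conference without tearing down the audio call."""
        self.state = CallState.VIDEO_CONFERENCE_PENDING

    def on_video_conference_ready(self):
        """Terminate the audio call before presenting conference audio/video."""
        self.audio_call_active = False
        self.state = CallState.VIDEO_CONFERENCE_ACTIVE

session = CallSession()
session.on_video_ui_item_selected()
assert session.audio_call_active            # audio persists during setup
session.on_video_conference_ready()
assert not session.audio_call_active        # audio ended before presentation
```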
Abstract:
A filtering system for video coders and decoders is disclosed. The system includes a feature detector having an input for samples reconstructed from coded video data representing a color component of source video and an output for data identifying a feature recognized therefrom; an offset calculator having an input for the feature identification data from the feature detector and an output for a filter offset; and a filter having inputs for the filter offset from the offset calculator and for the reconstructed samples, and an output for filtered samples. The filtering system is expected to improve operation of video coder/decoder filtering systems by selecting filter offsets from analysis of recovered video data in the same color plane as the samples that will be filtered.
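A simplified sketch of the three-stage pipeline follows, using an edge-style neighbor comparison as the feature detector and a per-category mean error as the offset calculator; in practice the offsets would be derived at the encoder (which has the source) and signaled to the decoder. The specific classifier and statistics are illustrative assumptions:

```python
import numpy as np

def detect_features(recon: np.ndarray) -> np.ndarray:
    """Feature detector: classify each interior sample of one color plane
    by comparing it to its horizontal neighbors.
    Categories: 0 = local minimum, 1 = local maximum, 2 = other."""
    left, center, right = recon[:, :-2], recon[:, 1:-1], recon[:, 2:]
    categories = np.full(center.shape, 2, dtype=np.int8)
    categories[(center < left) & (center < right)] = 0
    categories[(center > left) & (center > right)] = 1
    return categories

def compute_offsets(recon, source, categories):
    """Offset calculator: for each feature category, the mean error between
    source and reconstruction gives the filter offset."""
    err = source[:, 1:-1].astype(float) - recon[:, 1:-1].astype(float)
    return {c: float(err[categories == c].mean())
            for c in (0, 1, 2) if np.any(categories == c)}

def apply_offsets(recon, categories, offsets):
    """Filter: add the per-category offset to each classified sample."""
    out = recon.astype(float).copy()
    for c, off in offsets.items():
        mask = np.zeros_like(out, dtype=bool)
        mask[:, 1:-1] = categories == c
        out[mask] += off
    return out

recon = np.random.randint(0, 255, (16, 16)).astype(float)
source = recon + np.random.normal(0, 2, recon.shape)    # stand-in for coding error
cats = detect_features(recon)
filtered = apply_offsets(recon, cats, compute_offsets(recon, source, cats))
```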
Abstract:
Systems and methods are disclosed for video compression that utilize neural networks for predictive video coding. The processes employed combine multiple banks of neural networks with codec system components to carry out the coding and decoding of video data.
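A schematic sketch of how banks of predictors might sit alongside conventional codec components is shown below; the bank structure, selection rule, and the trivial "networks" are placeholders, not the disclosed models:

```python
import numpy as np

class NeuralPredictor:
    """Stand-in for one trained network in a bank; here it merely scales
    its input (the real networks would be learned models)."""
    def __init__(self, gain):
        self.gain = gain
    def predict(self, context):
        return context * self.gain

# Multiple banks of predictors, e.g. selected per block by content class.
BANKS = {
    "smooth":  [NeuralPredictor(1.0), NeuralPredictor(0.9)],
    "texture": [NeuralPredictor(1.1), NeuralPredictor(1.2)],
}

def code_block(block, context, bank_id):
    """Predictive coding step: pick the bank's best predictor, then hand the
    residual to conventional codec components (transform/quantize, omitted)."""
    predictions = [p.predict(context) for p in BANKS[bank_id]]
    errors = [np.abs(block - pred).sum() for pred in predictions]
    best = int(np.argmin(errors))
    residual = block - predictions[best]
    return best, residual    # predictor index and residual are what get coded
```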
Abstract:
Systems and methods are disclosed for reshaping HDR video content to improve compression efficiency while using standard encoding/decoding techniques. Input HDR video frames, e.g., represented in an IPT color space, may be reshaped before the encoding/decoding process and the corresponding reconstructed HDR video frames may then be reverse reshaped. The disclosed reshaping methods may be combinations of scene-based or segment-based methods.
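A minimal forward/reverse reshaping round trip is sketched below, applying an assumed power-law curve to the I channel of an IPT frame; real reshaping functions would be derived per scene or per segment:

```python
import numpy as np

GAMMA = 1.5   # illustrative reshaping exponent; actual curves are content-derived

def reshape(ipt_frame: np.ndarray) -> np.ndarray:
    """Forward reshaping of the intensity (I) channel of an IPT frame,
    normalized to [0, 1], applied before standard encoding."""
    out = ipt_frame.copy()
    out[..., 0] = np.power(out[..., 0], 1.0 / GAMMA)
    return out

def reverse_reshape(ipt_frame: np.ndarray) -> np.ndarray:
    """Reverse reshaping applied to the reconstructed frame after decoding."""
    out = ipt_frame.copy()
    out[..., 0] = np.power(out[..., 0], GAMMA)
    return out

frame = np.random.rand(4, 4, 3)           # toy HDR frame, channels = (I, P, T)
roundtrip = reverse_reshape(reshape(frame))
print(np.allclose(frame, roundtrip))      # True (ignoring codec distortion)
```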
Abstract:
Coded video data may be transmitted between an encoder and a decoder using multiple FEC codes and/or packets for error detection and correction. Only a subset of the FEC packets need be transmitted between the encoder and decoder. The FEC packets of each FEC group may take, as inputs, data packets of a current FEC group and also an untransmitted FEC packet of a preceding FEC group. Due to relationships among the FEC packets, when transmission errors arise and data packets are lost, there remain opportunities for a decoder to recover lost data packets from earlier-received FEC groups when later-received FEC groups are decoded. This opportunity to recover data packets from earlier FEC groups may be useful in video coding and other systems, in which later-received data often cannot be decoded unless earlier-received data is decoded properly.
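A minimal XOR-parity example of the chained recovery is sketched below; the packet sizes, group sizes, and single-parity code are illustrative assumptions:

```python
def xor_bytes(*packets: bytes) -> bytes:
    """XOR equal-length packets together (a simple parity FEC)."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

# Group k-1: two data packets, plus an UNTRANSMITTED parity over them.
d1_prev, d2_prev = b"PKT1", b"PKT2"
u_prev = xor_bytes(d1_prev, d2_prev)        # never sent on the wire

# Group k: its transmitted FEC packet folds in u_prev as an extra input.
d1_cur, d2_cur = b"PKT3", b"PKT4"
f_cur = xor_bytes(d1_cur, d2_cur, u_prev)   # transmitted

# Suppose d2_prev was lost and group k-1 could not be repaired on its own.
# Once group k arrives intact, the receiver reconstructs u_prev ...
u_prev_rx = xor_bytes(f_cur, d1_cur, d2_cur)
# ... and uses it to recover the data packet lost in the earlier group.
d2_prev_recovered = xor_bytes(u_prev_rx, d1_prev)
assert d2_prev_recovered == d2_prev
```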