Abstract:
Systems and methods for improved playback of a video stream are presented. Video snippets are identified that include a number of consecutive frames for playback. Snippets may be evenly spaced in time within the video stream or may be content adaptive; for example, the first frame of a snippet may be selected as the first frame of a scene or other appropriate stopping point. Scene detection, object detection, motion detection, video metadata, or other information generated during encoding or decoding of the video stream may aid in appropriate snippet selection.
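The content-adaptive selection described above can be illustrated with a minimal sketch. The function name, the snippet spacing, and the `max_shift` snap window are illustrative assumptions, not part of the disclosure; the idea is simply to place evenly spaced snippet starts and then snap each one to a nearby scene-change frame when one exists.

```python
def choose_snippet_starts(num_frames, snippet_len, scene_changes, max_shift=30):
    """Pick evenly spaced snippet start frames, then snap each start to the
    nearest scene-change frame within max_shift frames (illustrative sketch)."""
    starts = []
    # spacing of 4x the snippet length is an arbitrary illustrative choice
    for start in range(0, num_frames - snippet_len + 1, snippet_len * 4):
        nearby = [s for s in scene_changes if abs(s - start) <= max_shift]
        if nearby:
            # snap to the closest detected scene boundary
            start = min(nearby, key=lambda s: abs(s - start))
        starts.append(start)
    return starts
```

With `scene_changes=[115, 500]`, the nominal start at frame 120 snaps to 115 and the start at 480 snaps to 500, while starts with no nearby boundary are left evenly spaced.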
Abstract:
Image and video processing techniques are disclosed for processing the components of a color space individually by determining limits for each component based on the relationships among the components of the color space. These limits may then be used to clip each component so that its values fall within the determined range for that component. In this manner, more efficient processing of images and/or video may be achieved.
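A minimal sketch of per-component clipping is shown below. The fixed limits used here are the nominal 8-bit "video range" bounds for Y and Cb/Cr; a real system, as the abstract indicates, would derive component limits from the relationships among the components rather than use fixed constants.

```python
import numpy as np

def clip_components(ycbcr, y_range=(16, 235), c_range=(16, 240)):
    """Clip each color component to its own valid range.
    Assumption: input is an 8-bit Y'CbCr array with the component axis last.
    The fixed ranges stand in for limits a real system would derive."""
    out = ycbcr.astype(np.int32).copy()
    out[..., 0] = np.clip(out[..., 0], *y_range)   # luma
    out[..., 1] = np.clip(out[..., 1], *c_range)   # Cb
    out[..., 2] = np.clip(out[..., 2], *c_range)   # Cr
    return out.astype(np.uint8)
```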
Abstract:
A video coding system may include an encoder that performs motion-compensated prediction on a video signal in a second format converted from an input format of the video signal. The video coding system may also include a decoder to decode portions of the encoded video, and a filtering system that filters portions of the decoded video, for example, by deblocking filtering or SAO filtering, using parameters derived from the video signal in the input format. A prediction system may include another format converter that converts the decoded video to the input format. The prediction system may select parameters of the motion-compensated prediction based at least in part on a comparison of the video signal in the input format to decoded video in the input format.
Abstract:
Video coding systems and methods protect against banding artifacts in decoded image content. According to the method, a video coder may identify, from content of pixel blocks of a frame of video data, which pixel blocks are likely to exhibit banding artifacts from the video coding/decoding processes. The video coder may assemble regions of the frame that are likely to exhibit banding artifacts based on the identified pixel blocks' locations with respect to each other. The video coder may apply anti-banding processing to pixel blocks within one or more of the identified regions and, thereafter, may code the processed frame by a compression operation.
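One simple way to identify banding-prone pixel blocks, sketched below, is to flag blocks whose pixel variance is very low (smooth gradients band most visibly after quantization). The block size and variance threshold are illustrative assumptions; the abstract's region assembly step would then group adjacent flagged blocks before applying anti-banding processing.

```python
import numpy as np

def banding_prone_blocks(frame, block=16, var_thresh=4.0):
    """Flag low-variance (smooth-gradient) blocks as likely to exhibit
    banding artifacts. Illustrative sketch: a single-channel frame,
    a fixed block size, and a fixed variance threshold are assumed."""
    h, w = frame.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = frame[by * block:(by + 1) * block,
                        bx * block:(bx + 1) * block]
            flags[by, bx] = blk.var() < var_thresh
    return flags
```

Adjacent `True` entries in the returned map would be merged into regions, and anti-banding processing (e.g., dithering) applied to those regions before compression.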
Abstract:
Judder artifacts are remedied in a video coding system by employing frame rate conversion at an encoder. According to the disclosure, a source video sequence may be coded as base layer coded video at a first frame rate. An encoder may identify a portion of the coded video sequence that likely will exhibit judder effects when decoded. For those portions that likely will exhibit judder effects, video data representing the portion of the source video may be coded at a higher frame rate than a frame rate of the coded base layer data as enhancement layer data. Moreover, an encoder may generate metadata representing “FRC hints”—techniques that a decoder should employ when performing decoder-side frame rate conversion. An encoding terminal may transmit the base layer coded video and either the enhancement layer coded video or the FRC hints to a decoder. Thus, encoder infrastructure may mitigate judder artifacts that may arise during decoding.
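The per-portion decision described above can be sketched as follows. The motion threshold, the bitrate budget, and the decision rule are illustrative assumptions only: portions with little motion stay at the base layer, high-motion portions receive higher-frame-rate enhancement-layer data while budget remains, and the rest fall back to FRC hints for decoder-side conversion.

```python
def plan_judder_mitigation(portion_motion, motion_thresh=8.0, el_budget=2):
    """For each portion (average motion magnitude, pixels/frame), choose a
    judder mitigation: base layer only, enhancement-layer frames, or FRC
    hints. Threshold and budget values are illustrative assumptions."""
    plan = []
    for motion in portion_motion:
        if motion <= motion_thresh:
            plan.append("base")          # low motion: no visible judder expected
        elif el_budget > 0:
            plan.append("enhancement")   # code extra frames at a higher rate
            el_budget -= 1
        else:
            plan.append("frc_hints")     # signal decoder-side FRC instead
    return plan
```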
Abstract:
A system may include a receiver, a decoder, a post-processor, and a controller. The receiver may receive encoded video data. The decoder may decode the encoded video data. The post-processor may perform post-processing on frames of a decoded video sequence from the decoder. The controller may adjust post-processing of a current frame based upon at least one condition parameter detected at the system.
Abstract:
A scalable coding system codes video as a base layer representation and an enhancement layer representation. A base layer coder may code an LDR representation of a source video. A predictor may predict an HDR representation of the source video from the coded base layer data. A comparator may generate prediction residuals which represent a difference between an HDR representation of the source video and the predicted HDR representation of the source video. A quantizer may quantize the residuals down to an LDR representation. An enhancement layer coder may code the LDR residuals. In other embodiments, the enhancement layer coder may code LDR-converted HDR video directly.
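The residual path in the abstract can be illustrated with a small numeric sketch. The bit depths, the rounding rule, and the offset used to center residuals into an unsigned range are illustrative assumptions; the point is that higher-precision HDR prediction residuals are quantized down so that a standard LDR enhancement-layer coder can compress them.

```python
import numpy as np

def enhancement_residuals(hdr, predicted_hdr, ldr_bits=8, hdr_bits=12):
    """Form HDR prediction residuals and quantize them to LDR precision.
    Assumptions: 12-bit HDR samples, 8-bit LDR output, round-to-nearest
    quantization, and a +128 offset to center residuals for coding."""
    resid = hdr.astype(np.int32) - predicted_hdr.astype(np.int32)
    shift = hdr_bits - ldr_bits                         # 12-bit -> 8-bit: 4
    q = np.right_shift(resid + (1 << (shift - 1)), shift)  # round to nearest
    return np.clip(q + 128, 0, 255).astype(np.uint8)       # unsigned LDR range
```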
Abstract:
In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. To reduce radio power consumption in video conferencing, it is important to introduce sufficient radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that does not significantly disrupt the real-time nature of video conferencing. In addition, the data transmission can be synchronized to the data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power saving modes such as LTE C-DRX.
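The transmit-to-receive synchronization idea can be sketched as a simple scheduler. The function name, the latency bound, and the window model are illustrative assumptions: each outgoing packet is delayed to the next expected reception window, if one falls within the latency bound, so that the radio wakes once for both directions and can otherwise remain inactive.

```python
def schedule_tx(packet_times, rx_windows, max_delay=0.1):
    """Delay each outgoing packet (times in seconds) to the next reception
    window within max_delay, aligning TX with RX to lengthen radio inactive
    time. Illustrative sketch; the 100 ms bound is an assumed latency limit."""
    scheduled = []
    for t in packet_times:
        candidates = [w for w in rx_windows if t <= w <= t + max_delay]
        # align with the earliest feasible RX window, else send immediately
        scheduled.append(candidates[0] if candidates else t)
    return scheduled
```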
Abstract:
Computing devices may implement dynamic transitions from video messages to video communications. Video communication data for a video message may be received at a recipient device. The video communication data may be displayed as it is received, and recorded for subsequent playback. An indication of a selection to establish a video communication with the sender of the video message may be received, or an indication that display of the video communication is to be ceased may be received. If a video communication is to be established, then a video communication connection with the sender of the video message may be created so that subsequent video communication data may be sent via the established connection.
Abstract:
An encoding system may include a video source that captures video images, a video coder, and a controller to manage operation of the system. The video coder may encode the video images into encoded video data using a plurality of subgroup parameters corresponding to a plurality of subgroups of pixels within a group. The controller may set the subgroup parameters for at least one of the subgroups of pixels in the video coder based upon at least one parameter corresponding to the group. A decoding system may decode the video data based upon the motion prediction parameters.