Abstract:
Methods and apparatus for contextual video content adaptation are disclosed. Video content is adapted based on any number of criteria, such as a target device type, viewing conditions, network conditions, or various use cases. A target adaptation of content may be defined for a specified video source. For example, based on receiving a request from a portable device for a live sports feed, a shortened, reduced-resolution version of the live sports feed video may be defined for the portable device. The source content may be accessed and adapted (e.g., adapted temporally, spatially, etc.), and an adapted version of the content generated. For example, the source content may be cropped to a particular spatial region of interest and/or reduced in length to a particular scene. The generated adaptation may be transmitted to a device in response to the request, or stored to a storage device.
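The abstract above can be illustrated with a minimal sketch of defining a target adaptation from request context. The device classes, bandwidth threshold, and parameter names below are assumptions for the example, not taken from the disclosure.

```python
# Hypothetical sketch: map a content request's context to spatial/temporal
# adaptation parameters (e.g., a shortened, reduced-resolution version for
# a portable device). Thresholds and field names are illustrative only.

def define_adaptation(device_type, network_kbps):
    """Return target adaptation parameters for a specified video source."""
    if device_type == "portable" or network_kbps < 1000:
        # Shortened and reduced-resolution version for constrained targets.
        return {"max_height": 480, "max_duration_s": 60}
    # Full version for unconstrained targets.
    return {"max_height": 1080, "max_duration_s": None}
```

The generated adaptation could then be transmitted in response to the request or written to storage, as the abstract describes.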
Abstract:
An encoder may include a luma transform, a transformer, and a chroma transform. The luma transform may determine a linear luminance value based upon a plurality of primary color values of a pixel. The transformer may generate a transformed luminance value based upon the linear luminance value, and a plurality of transformed color values based upon corresponding ones of the primary color values of the pixel. The chroma transform may determine a plurality of chroma values based upon the corresponding transformed color values and the transformed luminance value of the pixel.
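A minimal sketch of this pipeline, assuming BT.2020 luma weights and a simple gamma as the transfer function (both chosen here for illustration, not specified by the abstract):

```python
# Illustrative sketch of the luma transform -> transformer -> chroma
# transform pipeline. Coefficients are BT.2020 luma weights and the
# transfer function is an example gamma; neither is from the disclosure.

GAMMA = 1.0 / 2.4  # example transfer-function exponent

def luma_transform(r, g, b):
    """Linear luminance from linear primary color values (BT.2020 weights)."""
    return 0.2627 * r + 0.6780 * g + 0.0593 * b

def transfer(v):
    """Example non-linear transform applied to luminance and color values."""
    return v ** GAMMA

def chroma_transform(r_t, b_t, y_t):
    """Chroma values from transformed colors and transformed luminance."""
    return (b_t - y_t, r_t - y_t)  # (Cb-like, Cr-like), unscaled

def encode_pixel(r, g, b):
    y_t = transfer(luma_transform(r, g, b))
    cb, cr = chroma_transform(transfer(r), transfer(b), y_t)
    return y_t, cb, cr
```

Note that the luminance is computed in the linear domain before the non-linear transform, which is the ordering the abstract describes.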
Abstract:
A system may include a receiver, a decoder, a post-processor, and a controller. The receiver may receive encoded video data. The decoder may decode the encoded video data. The post-processor may perform post-processing on frames of the decoded video sequence from the decoder. The controller may adjust post-processing of a current frame based upon at least one condition parameter detected at the system.
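A sketch of the controller's role, under the assumption that the detected condition parameter is battery level and that post-processing has a scalar strength (both assumptions for illustration):

```python
# Illustrative controller: adjust post-processing of the current frame
# based on a condition parameter detected at the system. Battery level
# and the specific thresholds are assumed examples.

def postprocess_strength(base_strength, battery_pct):
    """Scale back post-processing when the detected condition is constrained."""
    if battery_pct < 20:
        return 0.0                  # skip post-processing entirely
    if battery_pct < 50:
        return base_strength * 0.5  # reduced-effort post-processing
    return base_strength            # full post-processing
```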
Abstract:
Computing devices may implement dynamic detection of pause and resume for video communications. Video communication data may be captured at a participant device in a video communication. The video communication data may be evaluated to detect a pause or resume event for the transmission of the video communication data. Various types of video, audio, and other sensor analysis may be used to detect when a pause event or a resume event may be triggered. For triggered pause events, at least some of the video communication data may no longer be transmitted as part of the video communication. For triggered resume events, a pause state may cease and all of the video communication data may be transmitted.
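The detection described above can be sketched as a small state machine. The specific signals (motion level, audio level, face visibility) and thresholds are illustrative assumptions:

```python
# Minimal sketch of dynamic pause/resume detection for a video
# communication. Signal names and thresholds are hypothetical examples
# of the "video, audio, and other sensor analysis" the abstract mentions.

MOTION_THRESHOLD = 0.05
AUDIO_THRESHOLD = 0.10

class PauseDetector:
    def __init__(self):
        self.paused = False

    def evaluate(self, motion_level, audio_level, face_visible):
        """Return 'pause', 'resume', or None for one evaluation interval."""
        inactive = (motion_level < MOTION_THRESHOLD
                    and audio_level < AUDIO_THRESHOLD
                    and not face_visible)
        if inactive and not self.paused:
            self.paused = True
            return "pause"    # stop transmitting at least some video data
        if not inactive and self.paused:
            self.paused = False
            return "resume"   # transmit all video communication data again
        return None
```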
Abstract:
Computing devices may implement dynamic display of video communication data. Video communication data for a video communication may be received at a computing device where another application is currently displaying image data on an electronic display. A display location may be determined for the video communication data according to display attributes that are configured by the other application at runtime. Once determined, the video communication data may then be displayed in the determined location. In some embodiments, the video communication data may be integrated with other data displayed on the electronic display for the other application.
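One way to picture the display-location determination is as a search for a region that avoids areas the other application has reserved. The rectangle representation and corner-first strategy below are assumptions for illustration:

```python
# Illustrative sketch: determine a display location for incoming
# video-communication data, given display attributes configured by another
# application at runtime (here, a hypothetical list of reserved regions).

def choose_location(screen, reserved, size):
    """Return the first screen corner (x, y) whose rectangle of the given
    size avoids all reserved regions, or None if every corner overlaps."""
    w, h = size
    corners = [(0, 0), (screen[0] - w, 0), (0, screen[1] - h),
               (screen[0] - w, screen[1] - h)]

    def overlaps(corner, region):
        ax, ay = corner
        bx, by, bw, bh = region
        return ax < bx + bw and bx < ax + w and ay < by + bh and by < ay + h

    for c in corners:
        if not any(overlaps(c, r) for r in reserved):
            return c
    return None
```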
Abstract:
A method and system of using a pre-encoder to improve encoder efficiency. The encoder may conform to ITU-T H.265 and the pre-encoder may conform to ITU-T H.264. The pre-encoder may receive source video data and provide information regarding various coding modes, candidate modes, and a selected mode for coding the source video data. In an embodiment, the encoder may directly use the mode selected by the pre-encoder. In another embodiment, the encoder may receive both the source video data and information regarding the various coding modes (e.g., motion information, macroblock size, intra prediction direction, rate-distortion cost, and block pixel statistics) to simplify and/or refine its mode decision process. For example, the information provided by the pre-encoder may indicate unlikely modes, which the encoder need not test, thus saving power and time.
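The mode-pruning idea in the last sentence can be sketched as follows. The mode names, the hint structure, and the cost function are hypothetical; they stand in for the pre-encoder's side information:

```python
# Illustrative sketch: an encoder pruning its mode-decision search using
# hints from a pre-encoder. Mode names and the hint format are assumed.

def prune_modes(all_modes, pre_encoder_hints):
    """Keep only modes the pre-encoder did not flag as unlikely."""
    unlikely = set(pre_encoder_hints.get("unlikely_modes", []))
    return [m for m in all_modes if m not in unlikely]

def select_mode(all_modes, pre_encoder_hints, cost_fn):
    """Evaluate only the surviving candidates, saving power and time."""
    candidates = prune_modes(all_modes, pre_encoder_hints)
    return min(candidates, key=cost_fn)
```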
Abstract:
During video coding, frame rate conversion (FRC) capabilities of a decoder may be estimated. Based on the estimated FRC capabilities, an encoder may select a frame rate for a video coding session and may alter a frame rate of source video to match the selected frame rate. Thereafter, the resultant video may be coded and output to a channel. By incorporating knowledge of a decoder's FRC capabilities as source video is being coded, an encoder may reduce the frame rate of source video opportunistically. Bandwidth that is conserved by avoiding coding of video data in excess of the selected frame rate may be directed to coding of the remaining video at a higher bitrate, which can lead to increased quality of the coding session as a whole.
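A minimal sketch of the frame-rate selection and the resulting bit reallocation, assuming the decoder's FRC capability is summarized as a maximum up-conversion factor (an assumption; the abstract does not specify how capability is expressed):

```python
# Illustrative sketch: pick a coded frame rate based on an estimated FRC
# capability, then spend the conserved bandwidth on the remaining frames.
# The minimum frame rate and capability model are assumed examples.

def select_coding_frame_rate(source_fps, decoder_frc_factor, min_fps=15.0):
    """Lowest coded frame rate the decoder's FRC could restore to source."""
    candidate = source_fps / max(decoder_frc_factor, 1.0)
    return max(candidate, min_fps)

def per_frame_budget(total_bitrate, coded_fps):
    """Bits available per coded frame; fewer coded frames -> higher quality
    per frame at the same channel bitrate."""
    return total_bitrate / coded_fps
```

Coding at half the source rate, for example, doubles the per-frame bit budget at the same channel bitrate, which is the quality gain the abstract describes.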
Abstract:
Systems and methods are provided for capturing high quality video data, including data having a high dynamic range, for use with conventional encoders and decoders. High dynamic range data is captured using multiple groups of pixels, where each group is captured with a different exposure time. The pixels that are captured at different exposure times may be determined adaptively based on the content of the image, the parameters of the encoding system, or on the available resources within the encoding system. The transition from a single exposure to using two different exposure times may be implemented gradually.
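A sketch of the adaptive assignment and the gradual transition, using per-row brightness as the content signal (the brightness map, exposure values, and threshold are illustrative assumptions):

```python
# Illustrative sketch: assign exposure times to pixel groups adaptively
# based on image content, and ramp in the second exposure gradually.
# Exposure values, the dark threshold, and the row-wise grouping are
# assumptions for the example.

def assign_exposures(brightness, short_exp=1.0, long_exp=4.0, dark_thresh=0.25):
    """Give dark pixel groups the long exposure, bright groups the short one."""
    return [long_exp if b < dark_thresh else short_exp for b in brightness]

def blend_ratio(step, total_steps):
    """Fraction of the way through the gradual single-to-dual-exposure
    transition, clamped to 1.0 once the transition completes."""
    return min(step / total_steps, 1.0)
```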
Abstract:
In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. In order to reduce the radio power consumption in video conferencing, it is important to introduce sufficient radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that does not significantly disrupt the real-time nature of video conferencing. In addition, the data transmission can be synchronized to the data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power saving modes such as LTE C-DRX.
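The bundling idea can be sketched as grouping packets into bursts under a latency bound, so the radio can stay inactive between bursts. The 100 ms bound below is an example value, not taken from the abstract:

```python
# Illustrative sketch of data bundling for radio power saving: accumulate
# packets and transmit them in bursts, such that no packet waits longer
# than a latency bound. The bound is an assumed example value.

def bundle_packets(packet_times_ms, max_latency_ms=100):
    """Group packet arrival times into transmit bursts."""
    bursts, current, start = [], [], None
    for t in packet_times_ms:
        if start is None:
            start = t
        if t - start > max_latency_ms:
            bursts.append(current)   # flush: oldest packet hit the bound
            current, start = [], t
        current.append(t)
    if current:
        bursts.append(current)
    return bursts
```

Fewer, longer-spaced bursts lengthen the radio inactive time between transmissions, which is what lets power saving modes such as C-DRX engage.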
Abstract:
Video encoders often produce banding artifacts in areas with smooth gradients and low levels of detail/noise. In this disclosure, a video encoding system identifies the banded areas and adjusts coding parameters accordingly. The video coder may include a pre-coding banding detector and a post-coding banding detector. The pre-coding detector may identify regions in the input picture with smooth gradients that are likely to have banding artifacts after encoding. The post-coding detector may identify regions in the reconstructed picture with visible banding. Usage of the pre-coding detector and/or the post-coding detector depends on how the encoder operates. In a single-pass encoding, or during the first pass of a multi-pass encoding, the pre-coding detection maps are used. During picture re-encoding, or during later passes of a multi-pass encoding, the post-coding detection maps are used.
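A pre-coding detector of the kind described can be sketched in one dimension: flag windows of samples whose adjacent differences are all small, i.e. smooth gradients that are likely to band after quantization. The window size and step threshold are illustrative assumptions:

```python
# Illustrative 1-D sketch of a pre-coding banding detector: flag regions
# with smooth, small gradients as likely to show banding after encoding.
# Window size and the maximum step are assumed example parameters.

def detect_banding_prone(samples, window=4, max_step=2):
    """Return start indices of windows whose gradients are small and smooth."""
    flagged = []
    for i in range(len(samples) - window + 1):
        steps = [abs(samples[i + j + 1] - samples[i + j])
                 for j in range(window - 1)]
        if all(s <= max_step for s in steps):
            flagged.append(i)
    return flagged
```

The resulting map could then drive the coding-parameter adjustment (e.g., finer quantization) in the flagged regions, per the abstract.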