Abstract:
Computing devices may implement dynamic detection of pause and resume for video communications. Video communication data may be captured at a participant device in a video communication. The video communication data may be evaluated to detect a pause or resume event for the transmission of the video communication data. Various types of video, audio, and other sensor analysis may be used to detect when a pause event or a resume event may be triggered. For triggered pause events, at least some of the video communication data may no longer be transmitted as part of the video communication. For triggered resume events, the pause state may cease and all of the video communication data may again be transmitted.
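A minimal sketch of the pause/resume decision described above, assuming hypothetical inputs from video analysis (face detection), audio analysis (speech detection), and device sensors; the signal names and thresholds are illustrative, not taken from the source:

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    face_detected: bool      # video analysis: is a participant in frame?
    speech_detected: bool    # audio analysis: is anyone speaking?
    device_face_down: bool   # other sensor analysis (e.g., accelerometer)

def evaluate_events(snapshot: SensorSnapshot, paused: bool) -> str:
    """Return 'pause', 'resume', or 'none' for the current snapshot."""
    inactive = ((not snapshot.face_detected and not snapshot.speech_detected)
                or snapshot.device_face_down)
    if not paused and inactive:
        return "pause"       # stop sending at least some of the media
    if paused and not inactive:
        return "resume"      # leave the pause state, send all media again
    return "none"

# Illustrative capture loop: the transmitter consults the detector per frame.
paused = False
for snap in [SensorSnapshot(True, True, False),
             SensorSnapshot(False, False, False),
             SensorSnapshot(True, False, False)]:
    event = evaluate_events(snap, paused)
    if event == "pause":
        paused = True
    elif event == "resume":
        paused = False
    print(event, "-> paused =", paused)
```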
Abstract:
Computing devices may implement dynamic display of video communication data. Video communication data for a video communication may be received at a computing device where another application is currently displaying image data on an electronic display. A display location may be determined for the video communication data according to display attributes that are configured by the other application at runtime. The video communication data may then be displayed in the determined location. In some embodiments, the video communication data may be integrated with other data displayed on the electronic display for the other application.
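As one way to picture the placement decision, the sketch below assumes a hypothetical set of runtime display attributes (a preferred slot and reserved regions registered by the foreground application); none of these names come from the source:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

@dataclass
class DisplayAttributes:
    screen: Rect                 # full display bounds
    reserved: List[Rect]         # regions the application wants kept clear
    preferred: Optional[Rect]    # optional slot the application offers

def overlaps(a: Rect, b: Rect) -> bool:
    return not (a.x + a.w <= b.x or b.x + b.w <= a.x or
                a.y + a.h <= b.y or b.y + b.h <= a.y)

def choose_location(attrs: DisplayAttributes, size: tuple) -> Rect:
    """Pick a location for incoming video per app-configured attributes."""
    w, h = size
    if attrs.preferred is not None:
        return attrs.preferred   # integrate video where the application asked
    # Otherwise try screen corners that avoid the reserved regions.
    corners = [Rect(attrs.screen.w - w, attrs.screen.h - h, w, h),
               Rect(0, attrs.screen.h - h, w, h),
               Rect(attrs.screen.w - w, 0, w, h),
               Rect(0, 0, w, h)]
    for c in corners:
        if not any(overlaps(c, r) for r in attrs.reserved):
            return c
    return corners[0]

attrs = DisplayAttributes(screen=Rect(0, 0, 1920, 1080),
                          reserved=[Rect(0, 780, 1920, 300)],  # app toolbar
                          preferred=None)
print(choose_location(attrs, (320, 180)))  # lands in a clear corner
```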
Abstract:
A method and system of using a pre-encoder to improve encoder efficiency. The encoder may conform to ITU-T H.265 and the pre-encoder may conform to ITU-T H.264. The pre-encoder may receive source video data and provide information regarding various coding modes, candidate modes, and a selected mode for coding the source video data. In an embodiment, the encoder may directly use the mode selected by the pre-encoder. In another embodiment, the encoder may receive both the source video data and information regarding the various coding modes (e.g., motion information, macroblock size, intra prediction direction, rate-distortion cost, and block pixel statistics) to simplify and/or refine its mode decision process. For example, the information provided by the pre-encoder may indicate unlikely modes, which the encoder need not test, thus saving power and time.
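To illustrate the second embodiment, the sketch below shows how pre-encoder rate-distortion information might prune the encoder's mode search; the mode labels, cost values, and pruning ratio are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PreEncoderHints:
    selected_mode: str           # mode the H.264 pre-encoder chose
    rd_cost: Dict[str, float]    # per-mode estimated rate-distortion cost

def prune_candidates(all_modes: List[str], hints: PreEncoderHints,
                     ratio: float = 3.0) -> List[str]:
    """Drop modes the pre-encoder found far costlier than its best choice,
    so the encoder need not test them."""
    best = hints.rd_cost[hints.selected_mode]
    return [m for m in all_modes
            if hints.rd_cost.get(m, float("inf")) <= ratio * best]

hints = PreEncoderHints("inter_16x16",
                        {"inter_16x16": 100.0, "skip": 120.0,
                         "inter_8x8": 180.0, "intra_4x4": 450.0})
print(prune_candidates(["inter_16x16", "inter_8x8", "intra_4x4", "skip"],
                       hints))   # intra_4x4 is flagged unlikely and skipped
```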
Abstract:
During video coding, frame rate conversion (FRC) capabilities of a decoder may be estimated. Based on the estimated FRC capabilities, an encoder may select a frame rate for a video coding session and may alter a frame rate of source video to match the selected frame rate. Thereafter, the resultant video may be coded and output to a channel. By incorporating knowledge of a decoder's FRC capabilities as source video is being coded, an encoder may reduce the frame rate of source video opportunistically. Bandwidth that is conserved by avoiding coding of video data in excess of the selected frame rate may be directed to coding of the remaining video at a higher bitrate, which can lead to increased quality of the coding session as a whole.
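A minimal sketch of the rate-selection arithmetic, under the assumption that a decoder whose FRC can reach the source rate lets the encoder code half the frames and spend the saved bits on the rest; the policy and numbers are illustrative:

```python
def select_frame_rate(source_fps: float, decoder_frc_max_fps: float) -> float:
    """Reduce the coded frame rate only when the decoder's FRC can
    interpolate the dropped frames back up to the source rate."""
    if decoder_frc_max_fps >= source_fps:
        return source_fps / 2
    return source_fps

def convert_and_budget(source_fps, decoder_frc_max_fps, total_kbps):
    coded_fps = select_frame_rate(source_fps, decoder_frc_max_fps)
    keep_every = round(source_fps / coded_fps)   # keep 1 of every N frames
    per_frame_kbits = total_kbps / coded_fps     # saved bits raise quality
    return coded_fps, keep_every, per_frame_kbits

print(convert_and_budget(60.0, 60.0, 3000))  # (30.0, 2, 100.0): halved rate
print(convert_and_budget(30.0, 24.0, 3000))  # FRC too weak: keep 30 fps
```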
Abstract:
Systems and methods are provided for capturing high quality video data, including data having a high dynamic range, for use with conventional encoders and decoders. High dynamic range data is captured using multiple groups of pixels, where each group is captured using a different exposure time. The pixels that are captured at different exposure times may be determined adaptively based on the content of the image, the parameters of the encoding system, or the available resources within the encoding system. The transition from a single exposure to two different exposure times may be implemented gradually.
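The sketch below illustrates one way the adaptive grouping and the gradual transition might look, assuming row-based exposure groups and a linear ramp of the exposure ratio; both choices are illustrative, not from the source:

```python
def assign_exposures(row_means, threshold=0.5):
    """Adaptively split rows into two exposure groups by content:
    bright regions get the short exposure, dark regions the long one."""
    return ["short" if mean > threshold else "long" for mean in row_means]

def exposure_ratio(frame_index, ramp_frames=30, target_ratio=4.0):
    """Gradual transition from a single exposure (ratio 1.0) toward two
    distinct exposure times (ratio target_ratio)."""
    t = min(frame_index / ramp_frames, 1.0)
    return 1.0 + t * (target_ratio - 1.0)

print(assign_exposures([0.9, 0.2, 0.6, 0.1]))        # per-row groups
print([exposure_ratio(i) for i in (0, 15, 30, 45)])  # 1.0 ... 4.0
```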
Abstract:
In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. To reduce radio power consumption in video conferencing, it is important to introduce sufficient radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that does not significantly disrupt the real-time nature of video conferencing. In addition, data transmission can be synchronized to data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power saving modes such as LTE C-DRX.
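As a rough sketch of the buffering, bundling, and reception-synchronized transmission, the code below bundles outgoing packets within a latency bound and releases each bundle at the next reception instant so the radio can sleep in between; the window length and timings are illustrative:

```python
def schedule_bursts(packet_times_ms, rx_times_ms, bundle_ms=100):
    """Bundle packets for up to bundle_ms, then transmit at the first
    reception at or after the bundle deadline, maximizing the radio
    inactive time available to power saving modes such as LTE C-DRX."""
    bursts, pending, window_start = [], [], None
    rx = sorted(rx_times_ms)
    for t in sorted(packet_times_ms):
        if window_start is None:
            window_start = t
        pending.append(t)
        if t - window_start >= bundle_ms:
            deadline = window_start + bundle_ms
            tx_at = next((r for r in rx if r >= deadline), deadline)
            bursts.append((tx_at, len(pending)))   # (time, packets bundled)
            pending, window_start = [], None
    if pending:
        bursts.append((window_start + bundle_ms, len(pending)))
    return bursts

# Packets every 20 ms, receptions every 160 ms: a few synchronized bursts
# replace twenty separate radio wake-ups.
print(schedule_bursts(range(0, 400, 20), range(0, 500, 160)))
```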
Abstract:
Chroma deblock filtering of reconstructed video samples may be performed to remove blockiness artifacts and reduce color artifacts without over-smoothing. In a first method, chroma deblocking may be performed for boundary samples of a smallest transform size, regardless of partitions and coding modes. In a second method, chroma deblocking may be performed when a boundary strength is greater than 0. In a third method, chroma deblocking may be performed regardless of boundary strengths. In a fourth method, the type of chroma deblocking to be performed may be signaled in a slice header by a flag. Furthermore, luma deblock filtering techniques may be applied to chroma deblock filtering.
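The four alternatives reduce to a simple per-boundary gate; the sketch below labels them method 1 through 4 for illustration (the numbering follows the order above, and the argument names are assumptions):

```python
def should_chroma_deblock(method: int, boundary_strength: int,
                          is_smallest_tu_boundary: bool,
                          slice_header_flag: bool = False) -> bool:
    """Decide whether to chroma-deblock one boundary under each method."""
    if method == 1:
        # Filter boundaries of the smallest transform size, regardless
        # of partitions and coding modes.
        return is_smallest_tu_boundary
    if method == 2:
        # Filter only when the boundary strength is greater than 0.
        return boundary_strength > 0
    if method == 3:
        # Filter regardless of boundary strength.
        return True
    if method == 4:
        # Type of filtering is signaled per slice by a header flag.
        return slice_header_flag
    raise ValueError("unknown method")

for bs in (0, 1, 2):
    print(bs, should_chroma_deblock(2, bs, is_smallest_tu_boundary=True))
```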
Abstract:
Techniques are disclosed for managing memory allocations when coding video data according to multiple codec configurations. According to these techniques, devices may negotiate parameters of a coding session that include parameters of a plurality of different codec configurations that may be used during the coding session. A device may estimate sizes of decoded picture buffers for each of the negotiated codec configurations and allocate in its memory a portion of memory sized according to a largest size of the estimated decoded picture buffers. Thereafter, the devices may exchange coded video data. The exchange may involve decoding coded data of reference pictures and storing the decoded reference pictures in the allocated memory. During the coding session, the devices may toggle among the different negotiated codec configurations. As they do, reallocations of memory may be avoided.
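A minimal sketch of the sizing and allocation step, assuming 8-bit 4:2:0 pictures so each decoded frame occupies width x height x 1.5 bytes; the configuration names and formula are illustrative:

```python
def estimate_dpb_bytes(width, height, max_ref_frames, bytes_per_pixel=1.5):
    """Rough decoded-picture-buffer size for one codec configuration."""
    return int(width * height * bytes_per_pixel * max_ref_frames)

def allocate_for_session(configs):
    """Allocate one region sized for the largest negotiated configuration,
    so toggling among configurations mid-session needs no reallocation."""
    sizes = {name: estimate_dpb_bytes(**p) for name, p in configs.items()}
    pool = bytearray(max(sizes.values()))  # stand-in for device memory
    return sizes, pool

configs = {
    "hevc_720p": dict(width=1280, height=720, max_ref_frames=6),
    "avc_1080p": dict(width=1920, height=1080, max_ref_frames=4),
}
sizes, pool = allocate_for_session(configs)
print(sizes, "-> allocated", len(pool), "bytes")
```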
Abstract:
Techniques are disclosed for coding and decoding video captured as cube map images. According to these techniques, padded reference images are generated for use in predicting input data. A reference image is stored in a cube map format. A padded reference image is generated from the reference image in which image data of a first view contained in the reference image is replicated and placed adjacent to a second view contained in the cube map image. When coding a pixel block of an input image, a prediction search may be performed between the input pixel block and content of the padded reference image. When the prediction search identifies a match, the pixel block may be coded with respect to the matching data from the padded reference image. The presence of replicated data in the padded reference image is expected to increase the likelihood that adequate prediction matches will be identified for input pixel block data, which will increase the overall efficiency of the video coding.
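The sketch below shows the padding step for one face boundary, assuming each cube face is an N x N grid of pixel values and ignoring the per-face rotations a real cube map layout would require:

```python
def pad_face_right(face, right_neighbor, pad):
    """Build a padded copy of `face` whose right margin is replicated
    from the adjacent cube-map face, so a prediction search can match
    content that crosses the face boundary."""
    padded = []
    for y in range(len(face)):
        padded.append(list(face[y]) + list(right_neighbor[y][:pad]))
    return padded

front = [[10 * y + x for x in range(4)] for y in range(4)]
right = [[100 + 10 * y + x for x in range(4)] for y in range(4)]
for row in pad_face_right(front, right, pad=2):
    print(row)   # each row ends with 2 columns copied from the right face
```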
Abstract:
Coding techniques for image data may cause a still image to be converted to a “phantom” video sequence, which is coded by motion compensated prediction techniques. Thus, coded video data obtained from the coding operation may include temporal prediction references between frames of the video sequence. Metadata may be generated that identifies allocations of content from the still image to the frames of the video sequence. The coded data and the metadata may be transmitted to another device, whereupon it may be decoded by motion compensated prediction techniques and converted back to still image data. Other techniques may involve coding an image in both a base layer representation and at least one coded enhancement layer representation. The enhancement layer representation may be coded predictively with reference to the base layer representation. The coded base layer representation may be partitioned into a plurality of individually-transmittable segments and stored. Prediction references of elements of the enhancement layer representation may be confined to segments of the base layer representation that correspond to the locations of those elements. That is, when a pixel block of an enhancement layer maps to a given segment of the base layer representation, its prediction references are confined to that segment and do not reference portions of the base layer representation found in other segments.
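As an illustration of the first technique only, the sketch below allocates rows of a still image to frames of a phantom sequence and records the metadata needed to reassemble the image after decoding; the tiling scheme is an assumption, and the coding itself is omitted:

```python
def image_to_phantom_sequence(image, tile_h):
    """Split a still image (a list of pixel rows) into frames of tile_h
    rows, recording which rows each frame was allocated from."""
    frames, metadata = [], []
    for i, top in enumerate(range(0, len(image), tile_h)):
        frames.append(image[top:top + tile_h])
        metadata.append({"frame": i,
                         "rows": (top, min(top + tile_h, len(image)))})
    return frames, metadata

def phantom_sequence_to_image(frames, metadata):
    """Invert the allocation after decoding: reassemble the still image."""
    image = []
    for meta in sorted(metadata, key=lambda m: m["rows"][0]):
        image.extend(frames[meta["frame"]])
    return image

img = [[y] * 4 for y in range(6)]
frames, meta = image_to_phantom_sequence(img, tile_h=2)
assert phantom_sequence_to_image(frames, meta) == img
print(meta)
```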