Abstract:
Coding techniques for input video may include assigning picture identifiers to input frames in either long-form or short-form formats. If a network error has occurred that results in loss of previously-coded video data, a new input frame may be assigned a picture identifier that is coded in a long-form coding format. If no network error has occurred, the input frame may be assigned a picture identifier that is coded in a short-form coding format. Long-form coding may mitigate loss of synchronization of picture identifiers between an encoder and a decoder.
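The selection between the two formats can be sketched as follows. This is a minimal illustration, not the patented method: the bit widths, function name, and the modulo-based short form are all assumptions.

```python
FULL_ID_BITS = 16   # long-form identifier width (assumption)
SHORT_ID_BITS = 4   # short-form identifier width (assumption)

def code_picture_id(picture_id, network_error_occurred):
    """Return (format, coded_value) for the next input frame's identifier.

    After a reported network error the full identifier is coded so the
    decoder can resynchronize; otherwise only the low-order bits are coded.
    """
    if network_error_occurred:
        return ("long", picture_id % (1 << FULL_ID_BITS))
    return ("short", picture_id % (1 << SHORT_ID_BITS))
```

The short form saves bits in the common error-free case, while the long form restores an unambiguous identifier context after a loss.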
Abstract:
Error mitigation techniques are provided for a video coding system in which input frames are selected for coding either as Random Access Pictures (“RAP frames”) or as non-RAP frames. Coded RAP frames may include RAP identifiers that set an ID context for subsequent frames. Coded non-RAP frames may include RAP identifiers that match the RAP identifiers that were included in the coded RAP frames. Thus, in the absence of transmission errors, a coded non-RAP frame should include a RAP identifier that matches the identifier of the preceding RAP frame. If the identifier of a non-RAP frame does not match the identifier of the RAP frame that immediately preceded it, this indicates that a RAP frame was lost during transmission. In this case, the decoder may engage error recovery processes.
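The decoder-side check can be sketched as below. The frame representation and field names are illustrative assumptions; the point is only that a non-RAP frame whose RAP identifier differs from the current context reveals a lost RAP frame.

```python
def detect_lost_rap(frames):
    """Yield (frame, lost) pairs, flagging non-RAP frames whose RAP
    identifier does not match the decoder's current RAP context."""
    current_rap_id = None
    for frame in frames:
        if frame["is_rap"]:
            current_rap_id = frame["rap_id"]  # RAP frame sets the ID context
            yield frame, False
        else:
            # Mismatch implies the RAP frame carrying this id was lost,
            # so the decoder may engage error recovery.
            yield frame, frame["rap_id"] != current_rap_id

stream = [
    {"is_rap": True,  "rap_id": 1},
    {"is_rap": False, "rap_id": 1},
    {"is_rap": False, "rap_id": 2},  # the RAP frame with id 2 never arrived
]
flags = [lost for _, lost in detect_lost_rap(stream)]
```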
Abstract:
Computing devices may implement dynamic transitions from video messages to video communications. Video communication data for a video message may be received at a recipient device. The video communication data may be displayed as it is received, and recorded for subsequent playback. Either an indication of a selection to establish a video communication with the sender of the video message, or an indication that display of the video communication is to cease, may be received. If a video communication is to be established, then a video communication connection with the sender of the video message may be created so that subsequent video communication data may be sent via the established connection.
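One way to picture the recipient-side flow is the sketch below. The function and action names are hypothetical; it shows only that incoming message data is both displayed and recorded, and that a "connect" selection promotes the session to a live connection.

```python
def handle_video_message(frames, user_action):
    """Display and record incoming video-message frames, then either
    establish a live connection with the sender or just keep the recording."""
    displayed, recorded = [], []
    for frame in frames:
        displayed.append(frame)   # show the data as it is received
        recorded.append(frame)    # keep a copy for subsequent playback
    if user_action == "connect":
        # Subsequent video communication data flows over this connection.
        return {"connection": "established", "recording": recorded}
    return {"connection": None, "recording": recorded}

result = handle_video_message(["frame1", "frame2"], "connect")
```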
Abstract:
Computing devices may implement dynamic detection of pause and resume for video communications. Video communication data may be captured at a participant device in a video communication. The video communication data may be evaluated to detect a pause or resume event for the transmission of the video communication data. Various types of video, audio, and other sensor analysis may be used to detect when a pause event or a resume event may be triggered. For triggered pause events, at least some of the video communication data may no longer be transmitted as part of the video communication. For triggered resume events, a pause state may cease and all of the video communication data may be transmitted.
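A minimal sketch of the trigger logic, assuming a per-frame activity score produced by some video/audio/sensor analysis; the thresholds and hysteresis are assumptions, not values from the source.

```python
PAUSE_THRESHOLD = 0.2    # below this, a pause event triggers (assumption)
RESUME_THRESHOLD = 0.5   # above this, a resume event triggers (assumption)

def transmit_decisions(activity_scores):
    """Return, per frame, whether its data is transmitted.

    While paused, data is withheld; a resume event ends the pause state
    and full transmission resumes.
    """
    paused = False
    decisions = []
    for score in activity_scores:
        if not paused and score < PAUSE_THRESHOLD:
            paused = True          # pause event triggered
        elif paused and score > RESUME_THRESHOLD:
            paused = False         # resume event triggered
        decisions.append(not paused)
    return decisions
```

Using separate pause and resume thresholds avoids rapid toggling when the activity score hovers near a single cutoff.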
Abstract:
Methods and systems provide efficient sample adaptive offset (SAO) signaling by reducing the number of bits consumed for signaling SAO compared with conventional methods. In an embodiment, a single flag is used if SAO is off for a coding unit in a first scanning direction with respect to a given coding unit. In an embodiment, further bits may be saved if some neighboring coding units are not present, i.e., the given coding unit is at an edge. For example, a flag may be skipped, e.g., not signaled, if the given coding unit does not have a neighbor. In an embodiment, a syntax element of one or more flags may signal whether SAO filtering is performed in a coding unit. Based on the syntax element, a merge flag may be skipped to save bits. In an embodiment, SAO syntax may be signaled at a slice level.
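The flag-skipping idea can be sketched as a bit-counting exercise. This is a hypothetical model, not the claimed syntax: merge flags for a neighbor are written only when that neighbor exists and the SAO-on syntax element indicates filtering is performed.

```python
def sao_flags_to_signal(sao_enabled, has_left_neighbor, has_up_neighbor):
    """Return the list of SAO-related flags that must be written to the
    bitstream for a given coding unit (illustrative names)."""
    flags = ["sao_on_flag"]          # signals whether SAO filtering is performed
    if not sao_enabled:
        return flags                 # merge flags skipped entirely: bits saved
    if has_left_neighbor:
        flags.append("merge_left_flag")
    if has_up_neighbor:              # skipped at picture edges with no neighbor
        flags.append("merge_up_flag")
    return flags
```

Edge coding units and SAO-off coding units thus consume fewer signaling bits than an unconditional scheme would.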
Abstract:
A wireless device described herein can use information on data flow, in addition to indications from the physical network, to decide on suitable bandwidth usage for audio and video information. This data flow information is further used to determine an efficient network route to use for high-quality reception and transmission of audio and video data, as well as the appropriate time to switch between available network routes to improve bandwidth performance.
Abstract:
Some embodiments provide a method for conducting a video conference between a first mobile device and a second device. The first mobile device includes first and second cameras. The method selects the first camera for capturing images. The method transmits images captured by the first camera to the second device. The method receives a selection of the second camera for capturing images during the video conference. The method terminates the transmission of images captured by the first camera and transmits images captured by the second camera of the first mobile device to the second device during the video conference.
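The switch-over sequence can be sketched with a toy device model. The class, camera names, and frame format are all hypothetical; the sketch only shows that transmission from the first camera stops before the second camera's images are transmitted.

```python
class MobileDevice:
    """Toy model of a dual-camera mobile device in a video conference."""

    def __init__(self):
        self.cameras = ["front", "back"]
        self.active = None

    def select_camera(self, name):
        # Selecting a new camera implicitly terminates transmission
        # of images from the previously active camera.
        self.active = name

    def transmit_frame(self):
        return f"frame-from-{self.active}"

device = MobileDevice()
device.select_camera("front")          # first camera selected for capture
first = device.transmit_frame()
device.select_camera("back")           # selection received mid-conference
second = device.transmit_frame()
```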
Abstract:
Systems and methods for improved playback of a video stream are presented. Video snippets are identified that include a number of consecutive frames for playback. Snippets may be evenly temporally spaced in the video stream or may be content adaptive. The first frame of a snippet may be selected as the first frame of a scene or other appropriate stopping point. Scene detection, object detection, motion detection, video metadata, or other information generated during encoding or decoding of the video stream may aid in appropriate snippet selection.
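A minimal sketch of such a snippet picker, under assumed parameters: snippets start at evenly spaced nominal positions, and each start is snapped to the nearest preceding scene-change frame when one is available.

```python
def pick_snippets(total_frames, snippet_len, spacing, scene_changes):
    """Return (start, end) frame ranges for playback snippets.

    scene_changes: frame indices where a new scene begins, e.g. from
    scene detection run during encoding or decoding.
    """
    scene_set = sorted(scene_changes)
    snippets = []
    for start in range(0, total_frames - snippet_len + 1, spacing):
        # Snap to the closest scene change at or before the nominal start,
        # so a snippet begins with the first frame of a scene when possible.
        snapped = max((s for s in scene_set if s <= start), default=start)
        snippets.append((snapped, snapped + snippet_len))
    return snippets
```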
Abstract:
In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. In order to reduce radio power consumption in video conferencing, it is important to introduce enough radio-inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that does not significantly disrupt the real-time nature of video conferencing. In addition, the data transmission can be synchronized to the data reception in a controlled manner, which can result in an even longer radio-inactive time and thus take advantage of radio power-saving modes such as LTE C-DRX.
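The bundling idea can be sketched as follows. The latency budget and packet representation are assumptions; the point is that flushing buffered packets in bundles, rather than one at a time, lengthens the radio-idle gaps between transmissions.

```python
LATENCY_BUDGET_MS = 100   # maximum added delay per bundle (assumption)

def bundle_packets(packets):
    """Group (timestamp_ms, payload) packets into bundles whose time
    span never exceeds the latency budget."""
    bundles, current, start = [], [], None
    for ts, payload in packets:
        if current and ts - start > LATENCY_BUDGET_MS:
            bundles.append(current)   # flush: budget would be exceeded
            current, start = [], None
        if start is None:
            start = ts
        current.append(payload)
    if current:
        bundles.append(current)
    return bundles
```

Fewer, larger transmissions let the radio stay in an inactive (e.g., C-DRX) state between bundles, at the cost of a bounded latency increase.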
Abstract:
A system may include a receiver, a decoder, a post-processor, and a controller. The receiver may receive encoded video data. The decoder may decode the encoded video data. The post-processor may perform post-processing on frames of a decoded video sequence from the decoder. The controller may adjust post-processing of a current frame based upon at least one condition parameter detected at the system.
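The controller's adjustment can be sketched as a simple scaling rule. The condition names and scaling factors are hypothetical examples of "condition parameters detected at the system".

```python
def postprocess_strength(base_strength, conditions):
    """Scale the post-processing strength for the current frame
    according to detected system conditions (illustrative names)."""
    strength = base_strength
    if conditions.get("low_battery"):
        strength *= 0.5     # halve filtering effort to save power
    if conditions.get("high_cpu_load"):
        strength *= 0.5     # halve again under CPU pressure
    return strength
```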