Abstract:
Coding techniques for input video may include assigning picture identifiers to input frames in either long-form or short-form formats. If a network error has occurred that results in loss of previously-coded video data, a new input frame may be assigned a picture identifier that is coded in a long-form coding format. If no network error has occurred, the input frame may be assigned a picture identifier that is coded in a short-form coding format. Long-form coding may mitigate loss of synchronization of picture identifiers between an encoder and a decoder.
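The selection between long-form and short-form identifiers can be sketched as follows. The 16-bit/4-bit widths and the function name are hypothetical illustrations, not values from the abstract:

```python
def assign_picture_id(frame_count, network_error_seen):
    """Assign a picture identifier, using long-form coding after an error.

    Hypothetical sketch: 16-bit long-form vs. 4-bit short-form IDs.
    """
    pic_id = frame_count % (1 << 16)      # full-width picture identifier
    if network_error_seen:
        # Long-form: code all 16 bits so the decoder can resynchronize
        # after previously-coded data was lost.
        return ("long", pic_id)
    # Short-form: code only the low 4 bits; the decoder infers the rest
    # from its running identifier context.
    return ("short", pic_id % (1 << 4))
```

In this sketch the encoder falls back to short-form coding once the error state clears, saving bits in the common error-free case.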
Abstract:
Error mitigation techniques are provided for a video coding system in which input frames are selected for coding either as Random Access Pictures (“RAP frames”) or as non-RAP frames. Coded RAP frames may include RAP identifiers that set an ID context for subsequent frames. Coded non-RAP frames may include RAP identifiers that match the RAP identifiers that were included in the coded RAP frames. Thus, in the absence of transmission errors, a coded non-RAP frame should include a RAP identifier that matches the identifier of the preceding RAP frame. If the identifier of a non-RAP frame does not match the identifier of the RAP frame that immediately preceded it, this indicates that a RAP frame was lost during transmission. In this case, the decoder may engage error recovery processes.
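The decoder-side mismatch check can be sketched as below. The frame representation and function name are hypothetical; the logic follows the abstract's matching rule:

```python
def check_rap_continuity(frames):
    """Detect lost RAP frames from coded RAP identifiers.

    `frames` is a list of (is_rap, rap_id) tuples in decode order.
    Returns indices of non-RAP frames whose RAP identifier does not
    match the most recently decoded RAP frame, each of which signals
    a RAP frame lost in transmission.
    """
    mismatches = []
    current_rap_id = None
    for i, (is_rap, rap_id) in enumerate(frames):
        if is_rap:
            current_rap_id = rap_id      # RAP frame sets the ID context
        elif rap_id != current_rap_id:
            mismatches.append(i)         # mismatch: a RAP frame was lost
    return mismatches
```

A real decoder would trigger error recovery (e.g. requesting a refresh) at each returned index rather than merely recording it.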
Abstract:
Computing devices may implement dynamic transitions from video messages to video communications. Video communication data for a video message may be received at a recipient device. The video communication data may be displayed as it is received, and recorded for subsequent playback. An indication of a selection to establish a video communication with the sender of the video message may be received, or an indication may be received that display of the video communication is to cease. If a video communication is to be established, then a video communication connection with the sender of the video message may be created so that subsequent video communication data may be sent via the established connection.
Abstract:
Computing devices may implement dynamic detection of pause and resume for video communications. Video communication data may be captured at a participant device in a video communication. The video communication data may be evaluated to detect a pause or resume event for the transmission of the video communication data. Various types of video, audio, and other sensor analysis may be used to detect when a pause event or a resume event may be triggered. For triggered pause events, at least some of the video communication data may no longer be transmitted as part of the video communication. For triggered resume events, a pause state may cease and all of the video communication data may be transmitted.
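A minimal sketch of the pause/resume trigger, assuming simple audio-level and motion-level analysis with hypothetical thresholds (the abstract does not specify the analysis or threshold values):

```python
def transmit_decision(state, audio_level, motion_level,
                      silence_thresh=0.05, still_thresh=0.02):
    """Toggle a pause state from audio/motion analysis.

    Returns (new_state, send_video) where state is "active" or
    "paused" and send_video says whether video data is transmitted.
    """
    if state == "active" and audio_level < silence_thresh \
            and motion_level < still_thresh:
        return "paused", False   # pause event: stop sending video data
    if state == "paused" and (audio_level >= silence_thresh
                              or motion_level >= still_thresh):
        return "active", True    # resume event: send all data again
    return state, state == "active"
```

In practice the pause path might still transmit audio or a placeholder frame ("at least some of the video communication data may no longer be transmitted"), which this sketch simplifies to a single boolean.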
Abstract:
Methods and systems provide efficient sample adaptive offset (SAO) signaling by reducing the number of bits consumed for signaling SAO compared with conventional methods. In an embodiment, a single flag is used if SAO is off for a coding unit in a first scanning direction with respect to a given coding unit. In an embodiment, further bits may be saved if some neighboring coding units are not present, i.e., the given coding unit is at an edge. For example, a flag may be skipped, e.g., not signaled, if the given coding unit does not have a neighbor. In an embodiment, a syntax element, such as one or more flags, may signal whether SAO filtering is performed in a coding unit. Based on the syntax element, a merge flag may be skipped to save bits. In an embodiment, SAO syntax may be signaled at a slice level.
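The flag-skipping conditions can be sketched as below. The flag names echo HEVC-style SAO merge flags, but the skipping rule shown (skip when the neighbor is absent or its SAO is off) is a hypothetical reading of the abstract, not a normative bitstream rule:

```python
def sao_flags_to_signal(has_left, has_above, left_sao_on, above_sao_on):
    """Decide which SAO merge flags to signal for a coding unit.

    A merge flag is skipped (not signaled) when the neighbor is absent
    (the coding unit is at an edge) or when SAO is off for that
    neighbor, saving bits in the coded stream.
    """
    flags = []
    if has_left and left_sao_on:
        flags.append("sao_merge_left_flag")
    if has_above and above_sao_on:
        flags.append("sao_merge_up_flag")
    return flags   # an empty list means no merge flags consume bits
```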
Abstract:
A method for implementing a quantizer in a multimedia compression and encoding system is disclosed. In the quantizer system of the present invention, several new quantization techniques are disclosed. In one embodiment, adjacent macroblocks are grouped together into macroblock groups. The macroblock groups are then assigned a common quantizer value. The common quantizer value may be selected based upon how the macroblocks are encoded, the type of macroblocks within the macroblock group (intra-blocks or inter-blocks), the history of the motion vectors associated with the macroblocks in the macroblock group, the residuals of the macroblocks in the macroblock group, and the energy of the macroblocks in the macroblock group. The quantizer value may be adjusted in a manner that is dependent on the current quantizer value. Specifically, if the quantizer value is at the low end of the quantizer scale, then only small adjustments are made. If the quantizer value is at the high end, then larger adjustments may be made to the quantizer. Finally, in one embodiment, the quantizer is implemented along with an inverse quantizer for efficient operation.
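The value-dependent adjustment can be sketched as follows. The step schedule (1/2/4) and the 1–31 quantizer range are hypothetical illustrations of "small adjustments at the low end, larger adjustments at the high end":

```python
def adjust_quantizer(q, direction, q_min=1, q_max=31):
    """Adjust a quantizer with a step size that depends on its value.

    direction is +1 (coarser quantization) or -1 (finer quantization).
    Small steps are taken at the low (fine) end of the scale and
    larger steps at the high (coarse) end.
    """
    step = 1 if q <= 10 else (2 if q <= 20 else 4)   # hypothetical schedule
    q += step * direction
    return max(q_min, min(q_max, q))                 # clamp to valid range
```

This keeps rate-control adjustments perceptually proportionate: a step of 1 matters much more at q=3 than at q=28.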
Abstract:
A wireless device described herein can use information on data flow, in addition to indications from the physical network, to decide on suitable bandwidth usage for audio and video information. This data flow information is further used to determine an efficient network route to use for high-quality reception and transmission of audio and video data, as well as the appropriate time to switch between available network routes to improve bandwidth performance.
Abstract:
Some embodiments provide a method for conducting a video conference between a first mobile device and a second device. The first mobile device includes first and second cameras. The method selects the first camera for capturing images. The method transmits images captured by the first camera to the second device. The method receives a selection of the second camera for capturing images during the video conference. The method terminates the transmission of images captured by the first camera and transmits images captured by the second camera of the first mobile device to the second device during the video conference.
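The camera-switch state can be sketched as a small sender object. The class and method names are hypothetical; the behavior mirrors the abstract (transmit from one camera, then switch on selection):

```python
class ConferenceSender:
    """Track which of two local cameras feeds the video conference."""

    def __init__(self):
        self.active_camera = 1   # first camera selected initially

    def select_camera(self, camera):
        # Terminates transmission from the old camera and begins
        # transmitting images from the newly selected camera.
        self.active_camera = camera

    def frame_to_transmit(self, cam1_frame, cam2_frame):
        return cam1_frame if self.active_camera == 1 else cam2_frame
```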
Abstract:
Scalable video coding and multiplexing compatible with non-scalable decoders is disclosed. In some embodiments, video data is received and encoded in a manner that renders at least a base layer compatible with a non-scalable video encoding standard, including by assigning for at least the base layer default values to one or more scalability parameters. In some embodiments, video data is received and encoded to produce encoded video data that includes a base layer that conforms to a non-scalable video encoding standard and one or more subordinate non-scalable layers, which subordinate layers do not by themselves conform to the non-scalable video encoding standard but which can be combined with the base layer to produce a result that does conform to the non-scalable video encoding standard, such that the result can be decoded by a non-scalable decoder. Identification data identifying those portions of the encoded video data that are associated with a subordinate non-scalable layer is included in the encoded video data.
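The role of the per-portion identification data can be sketched as a filter step. The dict-based unit representation and function name are hypothetical:

```python
def units_to_decode(encoded_units, scalable_decoder):
    """Select encoded units for a decoder using layer identification data.

    Each unit carries identification data naming its layer. A
    non-scalable decoder keeps only base-layer units (which conform to
    the non-scalable standard by themselves); a scalable decoder
    combines the base layer with the subordinate layers.
    """
    if scalable_decoder:
        return list(encoded_units)   # base + subordinate layers combined
    return [u for u in encoded_units if u["layer"] == "base"]
```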
Abstract:
Systems and methods for improved playback of a video stream are presented. Video snippets are identified that include a number of consecutive frames for playback. Snippets may be evenly temporally spaced in the video stream or may be content adaptive. The first frame of a snippet may be selected as the first frame of a scene or other appropriate stopping point. Scene detection, object detection, motion detection, video metadata, or other information generated during encoding or decoding of the video stream may aid in appropriate snippet selection.
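Snippet selection can be sketched as even spacing with content-adaptive snapping to scene boundaries. The snapping window and function name are hypothetical choices for illustration:

```python
def select_snippet_starts(num_frames, snippet_len, spacing, scene_starts=()):
    """Pick snippet start frames for fast playback of a video stream.

    Candidate starts are evenly temporally spaced; each is then snapped
    to a nearby following scene boundary when one exists, so a snippet
    begins at the first frame of a scene (content-adaptive selection).
    """
    starts = []
    for start in range(0, num_frames - snippet_len + 1, spacing):
        # Snap forward within half the spacing if a scene begins there.
        snapped = next((s for s in sorted(scene_starts)
                        if start <= s <= start + spacing // 2), start)
        starts.append(snapped)
    return starts
```

Here `scene_starts` stands in for the scene-detection, object-detection, or metadata signals the abstract says may aid selection.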