Abstract:
Techniques are disclosed for coding and decoding video captured as cube map images. According to these techniques, padded reference images are generated for use in predicting input data. A reference image is stored in a cube map format. A padded reference image is generated from the reference image in which image data of a first view contained in the reference image is replicated and placed adjacent to a second view contained in the cube map image. When coding a pixel block of an input image, a prediction search may be performed between the input pixel block and content of the padded reference image. When the prediction search identifies a match, the pixel block may be coded with respect to the matching data from the padded reference image. The presence of replicated data in the padded reference image is expected to increase the likelihood that adequate prediction matches will be identified for input pixel block data, which in turn increases the overall efficiency of the video coding.
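The padding idea above can be illustrated with a minimal sketch: rows from a neighboring cube-map face are replicated past a face boundary so a motion search near the seam can find continuous content. The face layout, the single seam handled, and all names here are illustrative assumptions, not the disclosed implementation.

```python
# Sketch only: extend one cube-map face with replicated rows from the
# face adjacent to its lower edge. A real codec would pad every seam
# and account for the rotation between faces.

def pad_face(face, neighbor, pad=2):
    """Return `face` with the first `pad` rows of `neighbor` appended
    below it, forming a padded reference region across one seam."""
    return face + neighbor[:pad]

front = [[1, 2], [3, 4]]    # toy 2x2 "front" face (luma samples)
bottom = [[5, 6], [7, 8]]   # face adjacent to front's lower edge

padded = pad_face(front, bottom, pad=1)
# A prediction search over `padded` can now match blocks that
# straddle the front/bottom seam.
```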
Abstract:
Methods for organizing media data by automatically segmenting media data into hierarchical layers of scenes are described. The media data may include metadata and content having still image, video or audio data. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data, may be based on the metadata and/or image data. Classification information, such as the type or quality of a scene, may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to organize the media data.
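The prioritization step described above can be sketched as a scoring function over per-segment metadata. The metadata field names (`faces_detected`, `motion`, `blurry`) and the weights are assumptions chosen for illustration; the abstract does not specify a particular formula.

```python
# Sketch: derive a priority score for each media segment from
# hypothetical content-based metadata, then rank segments by it.

def score_segment(meta):
    """Toy priority score from illustrative metadata fields."""
    score = 0.0
    if meta.get("faces_detected"):
        score += 2.0                          # faces raise priority
    score += min(meta.get("motion", 0.0), 1.0)  # cap motion's contribution
    if meta.get("blurry"):
        score -= 1.0                          # poor focus lowers priority
    return score

segments = [
    {"id": 0, "motion": 0.2, "faces_detected": True},
    {"id": 1, "motion": 0.9, "blurry": True},
]
ranked = sorted(segments, key=score_segment, reverse=True)
```

Scores like these could then feed the hierarchical scene grouping the abstract describes.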
Abstract:
Techniques are disclosed for overcoming communication lag between interactive operations among devices in a streaming session. According to the techniques, a first device streams video content to a second device, and an annotation is entered on a first frame being displayed at the second device; the annotation is communicated back to the first device. Responsive to a communication that identifies the annotation, the first device may identify an element of video content from the first frame to which the annotation applies and determine whether the identified element is present in a second frame of video content currently displayed at the first device. If so, the first device may display the annotation with the second frame in a location where the identified element is present. If not, the first device may display the annotation via an alternate technique.
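The decision the first device makes can be sketched as a lookup: if the annotated element is still present in the currently displayed frame, the annotation is placed at its new location; otherwise a fallback path is taken. The per-frame element maps and names below are hypothetical.

```python
# Sketch: map an annotation to its element's position in the current
# frame, or signal that a fallback display technique is needed.

def place_annotation(annotated_elem, current_frame_elems):
    """Return the (x, y) location for the annotation in the current
    frame, or None to trigger an alternate display technique."""
    return current_frame_elems.get(annotated_elem)

# Element positions detected in two frames (illustrative):
frame_100 = {"player_7": (120, 80), "ball": (200, 150)}
frame_130 = {"player_7": (140, 90)}   # the ball has left the frame

loc = place_annotation("ball", frame_130)   # None -> use fallback
```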
Abstract:
A system may include a receiver, a decoder, a post-processor, and a controller. The receiver may receive encoded video data. The decoder may decode the encoded video data. The post-processor may perform post-processing on frames of a decoded video sequence from the decoder. The controller may adjust post-processing of a current frame based upon at least one condition parameter detected at the system.
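The controller's role can be sketched as a function from detected condition parameters to a post-processing strength. The specific conditions (low battery, high temperature) and the halving policy are assumptions for illustration only.

```python
# Sketch: scale post-processing strength (e.g., deblocking or
# sharpening intensity) based on device conditions detected at runtime.

def postproc_strength(battery_low, thermal_high, base=1.0):
    """Return a post-processing strength in [0, base], reduced when
    illustrative device conditions call for lighter processing."""
    strength = base
    if battery_low:
        strength *= 0.5   # conserve power
    if thermal_high:
        strength *= 0.5   # reduce heat
    return strength
```

A post-processor could multiply its filter intensity by this value per frame.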
Abstract:
Computing devices may implement instant video communication connections for video communications. Connection information for mobile computing devices may be maintained. A request to initiate an instant video communication may be received, and if authorized, the connection information for the particular recipient mobile computing device may be accessed. Video communication data may then be sent to the recipient mobile computing device according to the connection information so that the video communication data may be displayed at the recipient device as it is received. New connection information for different mobile computing devices may be added, or updates to existing connection information may also be performed. Connection information for some mobile computing devices may be removed.
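The maintenance of per-device connection information can be sketched as a small registry with add/update, removal, and authorized lookup. The class and method names, and the use of an authorization set, are assumptions made for this sketch.

```python
# Sketch: registry of connection information for mobile devices,
# consulted when an authorized instant video communication starts.

class ConnectionRegistry:
    """Toy store of per-device connection info (names are illustrative)."""

    def __init__(self):
        self._info = {}
        self._authorized = set()

    def upsert(self, device_id, info):
        """Add new connection info or update existing info."""
        self._info[device_id] = info

    def remove(self, device_id):
        self._info.pop(device_id, None)

    def authorize(self, device_id):
        self._authorized.add(device_id)

    def lookup(self, device_id):
        """Return connection info only for authorized recipients."""
        if device_id not in self._authorized:
            raise PermissionError("instant connection not authorized")
        return self._info[device_id]

reg = ConnectionRegistry()
reg.upsert("deviceA", {"addr": "10.0.0.5", "port": 5223})
reg.authorize("deviceA")
```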
Abstract:
Methods and systems provide video compression to reduce a “popping” effect caused by differences in quality between a refresh frame (e.g., I-frame or IDR-frame) and neighboring frame(s). In an embodiment, a quality of a pre-definable number of frames preceding a refresh frame (“preceding frames”) may be increased. A quantization parameter may be decreased for a region within at least one of the preceding frame(s). In an embodiment, preceding frames may be coded according to an open group of pictures (“GOP”) structure. In an embodiment, a refresh frame may be first coded as a P-frame, then coded based on the coded P-frame. In an embodiment, preceding frames may be first coded based on the refresh frame, then coded based on the coded frames without referencing the refresh frame. Each of these methods may increase consistency of quality in a sequence of frames, and, correspondingly, minimize or remove I-frame popping.
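The first embodiment above, decreasing the quantization parameter for frames preceding a refresh frame, can be sketched as a QP ramp. The ramp length, step size, and function name are assumptions; the abstract does not fix these values.

```python
# Sketch: lower QP (i.e., raise quality) over the last few frames
# before an I/IDR refresh frame, smoothing the quality jump that
# causes visible "popping".

def ramp_qp(base_qp, frames_before_idr, ramp_len=3, step=2):
    """Return a QP per frame for the run leading up to a refresh frame,
    ramping QP down over the final `ramp_len` frames."""
    qps = []
    for i in range(frames_before_idr):
        remaining = frames_before_idr - i
        if remaining <= ramp_len:
            qps.append(base_qp - step * (ramp_len - remaining + 1))
        else:
            qps.append(base_qp)
    return qps

# ramp_qp(30, 5) -> [30, 30, 28, 26, 24]: quality rises toward the IDR.
```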
Abstract:
Systems and processes for improved video editing, summarization and navigation based on generation and analysis of metadata are described. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data, may be based on the metadata and/or image data. Classification information, such as the type or quality of a scene, may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to automatically remove undesirable portions of the video, generate suggestions during editing or automatically generate summary video.
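The summarization use of the prioritization scores can be sketched as a filter-and-reassemble step: segments scoring below a threshold are dropped, and the rest are kept in time order. The field names (`score`, `start`) and the threshold are illustrative assumptions.

```python
# Sketch: build a summary video from scored segments by removing
# low-priority portions and preserving temporal order.

def build_summary(segments, keep_threshold=1.0):
    """Keep segments whose priority score meets the threshold,
    ordered by start time (fields are illustrative)."""
    kept = [s for s in segments if s["score"] >= keep_threshold]
    return sorted(kept, key=lambda s: s["start"])

clips = [
    {"start": 8.0, "score": 0.4},   # dropped: low priority
    {"start": 2.0, "score": 1.5},
    {"start": 5.0, "score": 2.1},
]
summary = build_summary(clips)
```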
Abstract:
A method and system are provided for caching and streaming media content, including predictively delivering and/or acquiring content. In the system, client devices may be communicatively coupled in a network and may access and share cached content. Video segments making up a media stream may be selectively delivered to the clients such that a complete media stream may be formed from the different segments delivered to the different clients. Video segments may be pushed by the server to a client or requested by the client according to a prioritization scheme, including downloading: partial items on a client's subscription log, lower quality version(s) of content before higher quality version(s), higher bitrate segments before lower bitrate segments, summaries of full-length content, and advertisements and splash screens common to multiple video clips.
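The prioritization scheme can be sketched as ordering candidate downloads by category rank. The category names and their relative ranks below are assumptions for illustration; the abstract lists the categories but not a concrete encoding.

```python
# Sketch: order download candidates by an illustrative priority rank
# derived from the categories in the prioritization scheme.

PRIORITY = {
    "subscription_partial": 0,  # partial items on a subscription log
    "low_quality": 1,           # lower quality before higher quality
    "summary": 2,               # summaries of full-length content
    "high_quality": 3,
}

def download_order(items):
    """Return items sorted by category priority (unknown kinds last)."""
    return sorted(items, key=lambda it: PRIORITY.get(it["kind"], 99))

queue = [
    {"kind": "high_quality"},
    {"kind": "summary"},
    {"kind": "low_quality"},
]
ordered = download_order(queue)
```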