Abstract:
A system and method for enabling the removal of decoded pictures from a decoded picture buffer as soon as the decoded pictures are no longer needed for prediction reference and future output. An indication of whether a picture may be used for inter-layer prediction reference is introduced into the bitstream, together with a decoded picture buffer management method that uses the indication. The present invention includes a process for marking a picture as being used for inter-layer reference or unused for inter-layer reference, a process for storing decoded pictures into the decoded picture buffer, a process for marking reference pictures, and processes for outputting decoded pictures and removing them from the decoded picture buffer.
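The following Python sketch illustrates the kind of buffer management described above; the class and attribute names (Picture, used_for_ilp, needed_for_output, and so on) are assumptions made for illustration, not terms defined by the method itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Picture:
    poc: int                     # picture order count
    used_for_reference: bool     # marked as used for (temporal) prediction reference
    used_for_ilp: bool           # marked as used for inter-layer prediction reference
    needed_for_output: bool      # still waiting to be output

class DecodedPictureBuffer:
    def __init__(self) -> None:
        self.pictures: List[Picture] = []

    def store(self, pic: Picture) -> None:
        """Storage process: place a freshly decoded picture into the buffer."""
        self.pictures.append(pic)

    def mark_unused_for_ilp(self, poc: int) -> None:
        """Marking process: clear the inter-layer indication once the picture
        can no longer serve as an inter-layer prediction reference."""
        for pic in self.pictures:
            if pic.poc == poc:
                pic.used_for_ilp = False

    def remove_unneeded(self) -> None:
        """Removal process: a picture leaves the buffer as soon as it is no
        longer needed for prediction reference (temporal or inter-layer) and
        no longer needed for output."""
        self.pictures = [p for p in self.pictures
                         if p.used_for_reference or p.used_for_ilp
                         or p.needed_for_output]
```

The removal test mirrors the condition in the abstract: a picture is dropped only once it serves neither temporal reference, inter-layer reference, nor output.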
Abstract:
The present invention discloses methods, devices and systems for effective and improved scalable coding and/or decoding of video data based on Fine Grain Scalability (FGS) information. According to a first aspect of the present invention, a method for scalable encoding of video data is provided. Said method comprises the following operations: obtaining said video data; generating a base layer based on said obtained video data; generating at least one corresponding scalable enhancement layer depending on said video data and said base layer, wherein said at least one enhancement layer comprises FGS information based on one or more enhancement FGS-slices, said FGS-slices describing certain regions within said base layer; defining at least one of said one or more generated enhancement FGS-slices in such a manner that said at least one generated enhancement FGS-slice covers a different region than the region covered by the corresponding slice in the base layer picture; and encoding said base layer and said at least one enhancement layer, resulting in encoded video data.
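A minimal sketch of the slice-coverage relationship described above, with regions modeled as sets of macroblock addresses; the function names and the particular way the enhancement region is derived (shifted and resized) are illustrative assumptions only.

```python
def base_layer_slice_region(first_mb: int, num_mbs: int) -> set:
    """Region covered by a base-layer slice, modeled as a run of macroblock addresses."""
    return set(range(first_mb, first_mb + num_mbs))

def enhancement_fgs_slice_region(base_region: set, shift: int, size: int) -> set:
    """Define an enhancement FGS-slice whose region deliberately differs from
    the corresponding base-layer slice, here simply shifted and resized."""
    start = min(base_region) + shift
    return set(range(start, start + size))

base = base_layer_slice_region(first_mb=0, num_mbs=20)        # base-layer slice covers MBs 0..19
enh = enhancement_fgs_slice_region(base, shift=10, size=25)   # FGS-slice covers MBs 10..34
assert enh != base   # the enhancement FGS-slice covers a different region
```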
Abstract:
Methods and devices for video encoding and decoding, where video data is obtained, followed by generating a base layer based thereon, the base layer comprising at least one picture; generating at least one enhancement layer based on the obtained video data, the enhancement layer comprising at least one picture; generating a dependency identifier for each of the base and enhancement layers, each dependency identifier being associated with a reference number; determining a respective sequence parameter set for each of the base layer and the enhancement layer having different dependency identifier values, wherein one sequence parameter set is used for a number of base and enhancement layers whose sequence parameter set parameters are substantially the same; and encoding the base layer and the at least one enhancement layer using the determined sequence parameter sets.
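The sharing of sequence parameter sets across layers can be sketched as below, assuming a simplified model in which each layer's parameter-set contents are hashable and layers with matching contents reuse a single sequence parameter set; the data shapes are assumptions for illustration.

```python
def assign_sequence_parameter_sets(layers):
    """layers: list of (dependency_id, sps_params) tuples, sps_params hashable.

    Returns the list of unique sequence parameter sets and a mapping from each
    layer's dependency identifier to the index of the SPS it references."""
    sps_list = []        # unique sequence parameter sets, in order of first use
    layer_to_sps = {}    # dependency_id -> index into sps_list
    for dependency_id, sps_params in layers:
        if sps_params in sps_list:
            sps_id = sps_list.index(sps_params)
        else:
            sps_id = len(sps_list)
            sps_list.append(sps_params)
        layer_to_sps[dependency_id] = sps_id
    return sps_list, layer_to_sps

# Base layer plus two enhancement layers; the first enhancement layer happens
# to share the base layer's SPS parameters, so only two SPSs are emitted.
layers = [(0, ("1920x1080", "profile_a")),
          (1, ("1920x1080", "profile_a")),
          (2, ("3840x2160", "profile_b"))]
sps_list, mapping = assign_sequence_parameter_sets(layers)
```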
Abstract:
A method for indicating the size, shape and location of a region within a digital picture, the picture being divided into a set of blocks. A value for at least one size parameter, which is indicative of the number of blocks within said region, is defined, and a value for at least one shape evolution parameter, which is indicative of a selection order of the blocks in said region, is selected. Then, preferably, the values for said at least one size parameter and said at least one shape evolution parameter are encoded into a bitstream of a video sequence in order to indicate the size, shape and location of the region within the picture.
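A rough sketch of how a region could be reconstructed from a size parameter and a shape evolution parameter; the two selection orders used here (raster scan and column-wise scan from a start block) are illustrative assumptions, not the orders the method actually defines.

```python
def region_blocks(pic_w_blocks: int, pic_h_blocks: int,
                  start_block: int, size: int, evolution: int) -> set:
    """Return the block addresses forming the region.

    size      -- size parameter: number of blocks in the region
    evolution -- shape evolution parameter: selects the block selection order
                 (0 = raster scan from the start block, 1 = column-wise scan
                 from the start block's column)
    """
    if evolution == 0:
        order = list(range(start_block, pic_w_blocks * pic_h_blocks))
    else:
        start_col = start_block % pic_w_blocks
        order = [row * pic_w_blocks + col
                 for col in range(start_col, pic_w_blocks)
                 for row in range(pic_h_blocks)]
    return set(order[:size])

# e.g. a 12-block region in an 8x6-block picture, grown in raster order from block 10
region = region_blocks(pic_w_blocks=8, pic_h_blocks=6,
                       start_block=10, size=12, evolution=0)
```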
Abstract:
A method of encoding video data including at least one primary picture and at least one redundant picture corresponding to the information content of the primary picture. A reference picture list of the at least one redundant picture includes multiple reference pictures. The video sequence is encoded such that a number of reference pictures are disabled from the reference picture list of the at least one redundant picture, the number being at least one, but less than the total number of the reference pictures on the reference picture list.
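A hedged sketch of the pruning constraint described above: some, but not all, entries of the redundant picture's reference picture list are disabled. The picture names and the choice of which entries to disable are illustrative only.

```python
def prune_redundant_ref_list(reference_list, disabled_indices):
    """Disable selected entries of a redundant picture's reference picture list.

    At least one entry must be disabled, but fewer than the whole list, so the
    redundant picture still has at least one usable reference."""
    assert 1 <= len(disabled_indices) < len(reference_list)
    return [ref for i, ref in enumerate(reference_list)
            if i not in disabled_indices]

primary_refs = ["pic_3", "pic_2", "pic_1", "pic_0"]   # reference list of the primary picture
redundant_refs = prune_redundant_ref_list(primary_refs, {1, 2})
# the redundant picture now predicts only from "pic_3" and "pic_0"
```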
Abstract:
A method, electronic device, computer program product, system and circuit assembly are provided for allocating one or more redundant pictures by taking into consideration the information content of the primary pictures, with which the redundant pictures would be associated. In particular, primary pictures that are determined to be more sensitive to transmission loss or corruption may be allocated one or more redundant pictures, while those that are less sensitive may not be so allocated. By selectively allocating redundant pictures to only those primary pictures that are more sensitive, the method disclosed reduces the amount of overhead associated with redundant pictures and increases the coding efficiency, without sacrificing the integrity of the video data.
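The selective allocation idea can be sketched as follows; the sensitivity metric, its values, and the threshold are assumptions made for the sketch rather than part of the disclosed method.

```python
def allocate_redundant_pictures(primary_pictures, sensitivity, threshold=0.5):
    """primary_pictures: list of picture identifiers
    sensitivity: dict mapping picture id -> estimated impact of losing it
    Returns the set of picture ids that are allocated a redundant picture."""
    return {pic for pic in primary_pictures
            if sensitivity.get(pic, 0.0) > threshold}

pics = ["idr_0", "p_1", "p_2", "p_3"]
sens = {"idr_0": 0.9, "p_1": 0.2, "p_2": 0.7, "p_3": 0.1}
redundant = allocate_redundant_pictures(pics, sens)   # {"idr_0", "p_2"}
```

Only the loss-sensitive pictures carry the overhead of a redundant copy, which is the coding-efficiency argument the abstract makes.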
Abstract:
A file format design supports storage of multi-source multimedia presentations via the inclusion of indications as to whether a presentation is a multi-source presentation, i.e., whether, for at least one media type, it contains tracks that are from different sources and should be played simultaneously. If a multi-source presentation exists, additional indications may be provided, including: an indication of the multi-source presentation type being stored; indications regarding the source of each track and which tracks have the same source; and indications of different parties' information, such as phone numbers. Thus, a player may play back a recorded presentation in the same or substantially the same manner as it was presented during the actual session, and may automatically manipulate the presentation to be more informative or efficient. The file format design further supports storage of other types of multi-source presentations that render more than one media stream for at least one type of media.
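The following sketch shows the kinds of indications such a design could store; the field and type names below are invented for illustration and do not correspond to actual box or track names in any standardized file format.

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List

@dataclass
class Track:
    track_id: int
    media_type: str    # e.g. "audio" or "video"
    source_id: int     # which source/party the track originates from
    party_info: str    # e.g. a party's phone number or display name

def group_tracks_by_source(tracks: List[Track]) -> Dict[int, List[int]]:
    """Tracks sharing a source_id come from the same party; in a multi-source
    presentation, same-media-type tracks from different sources are meant to
    be played simultaneously."""
    groups: Dict[int, List[int]] = defaultdict(list)
    for t in tracks:
        groups[t.source_id].append(t.track_id)
    return dict(groups)

presentation = {
    "is_multi_source": True,
    "multi_source_type": "recorded_conference",   # illustrative presentation-type label
    "tracks": [
        Track(1, "audio", source_id=0, party_info="party A"),
        Track(2, "audio", source_id=1, party_info="party B"),
        Track(3, "video", source_id=1, party_info="party B"),
    ],
}
sources = group_tracks_by_source(presentation["tracks"])   # {0: [1], 1: [2, 3]}
```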
Abstract:
In one example, a video coder is configured to code information indicative of whether view synthesis prediction is enabled for video data. When the information indicates that view synthesis prediction is enabled for the video data, the video coder may generate a view synthesis picture using the video data and code at least a portion of a current picture relative to the view synthesis picture. This portion of the current picture may comprise, for example, a block (e.g., a PU, a CU, a macroblock, or a partition of a macroblock), a slice, a tile, a wavefront, or the entirety of the current picture. On the other hand, when the information indicates that view synthesis prediction is not enabled for the video data, the video coder may code the current picture using at least one of intra-prediction, temporal inter-prediction, and inter-view prediction without reference to any view synthesis pictures.
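A hedged sketch of the decision described above: a signalled indication gates whether a view-synthesis reference picture is generated and used. The codec object and its methods (synthesize_view, code) are placeholders, not the API of any real coder.

```python
def code_picture(current_picture, reference_views, vsp_enabled, codec):
    """Code a picture with or without a view-synthesis reference, depending on
    the signalled indication."""
    if vsp_enabled:
        # Generate a synthesized reference picture from already-coded views
        # (e.g. by depth-image-based rendering) and allow blocks, slices,
        # tiles, wavefronts or the whole picture to be coded against it.
        vsp_picture = codec.synthesize_view(reference_views)
        return codec.code(current_picture, extra_reference=vsp_picture)
    # Otherwise use only intra, temporal inter and inter-view prediction,
    # with no view-synthesis reference.
    return codec.code(current_picture, extra_reference=None)
```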
Abstract:
An encoder for encoding a video signal, wherein the encoder is configured to generate an encoded scalable data stream comprising a base layer and at least one enhancement layer, wherein the encoder is further configured to generate information associated with each of the base layer and the at least one enhancement layer.
Abstract:
An adapter is provided that has both an electrical coupling configuration that complies with the RJ-45 wiring standard for electrical communications and an optical coupling configuration for optical communications. The adapter is configured as an interface for at least two modular connector assemblies to enable the modular connector assemblies to communicate with each other either optically or electrically, depending on whether the plugs of the assemblies are configured to have optical or electrical communications capabilities.