Abstract:
Several implementations relate to view synthesis with heuristic view merging for 3D Video (3DV) applications. According to one aspect, a first candidate pixel from a first warped reference view and a second candidate pixel from a second warped reference view are assessed based on at least one of a backward synthesis process to assess a quality of the first and second candidate pixels, a hole distribution around the first and second candidate pixels, or an amount of energy around the first and second candidate pixels above a specified frequency. The assessing occurs as part of merging at least the first and second warped reference views into a single synthesized view. Based on the assessing, a result is determined for a given target pixel in the single synthesized view. The result may be a value determined for the given target pixel, or a marking of the given target pixel as a hole.
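The merging step described above can be sketched in code. This is a minimal illustration, not the claimed method: it assumes a sentinel value marks holes in each warped view, and it uses one of the listed heuristics (the hole distribution around each candidate) to choose between two valid candidates, marking the target pixel as a hole only when both candidates are holes. The window size and sentinel are illustrative assumptions.

```python
import numpy as np

HOLE = -1  # assumed sentinel marking disoccluded (hole) pixels in a warped view

def merge_warped_views(view1, view2, hole_radius=1):
    """Merge two warped reference views into a single synthesized view.

    Heuristic sketch: prefer the candidate whose local neighborhood
    contains fewer holes; if both candidates are holes, the target
    pixel stays marked as a hole.
    """
    h, w = view1.shape
    out = np.full((h, w), HOLE, dtype=view1.dtype)
    for y in range(h):
        for x in range(w):
            c1, c2 = view1[y, x], view2[y, x]
            if c1 == HOLE and c2 == HOLE:
                continue  # target pixel remains a hole
            if c1 == HOLE:
                out[y, x] = c2
            elif c2 == HOLE:
                out[y, x] = c1
            else:
                # count holes in a small window around each candidate
                y0, y1 = max(0, y - hole_radius), y + hole_radius + 1
                x0, x1 = max(0, x - hole_radius), x + hole_radius + 1
                holes1 = np.sum(view1[y0:y1, x0:x1] == HOLE)
                holes2 = np.sum(view2[y0:y1, x0:x1] == HOLE)
                out[y, x] = c1 if holes1 <= holes2 else c2
    return out
```

The other listed heuristics (backward synthesis quality, high-frequency energy) would slot in as alternative or additional scores in the same per-pixel decision.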
Abstract:
Various implementations are described. Several implementations relate to view synthesis with heuristic view blending for 3D Video (3DV) applications. According to one aspect, at least one reference picture, or a portion thereof, is warped from at least one reference view location to a virtual view location to produce at least one warped reference. A first candidate pixel and a second candidate pixel are identified in the at least one warped reference. The first candidate pixel and the second candidate pixel are candidates for a target pixel location in a virtual picture from the virtual view location. A value for a pixel at the target pixel location is determined based on values of the first and second candidate pixels.
Abstract:
Several implementations relate, for example, to depth encoding and/or filtering for 3D video (3DV) coding formats. A sparse dyadic mode for partitioning macroblocks (MBs) along edges in a depth map is provided, as well as techniques for trilateral (or bilateral) filtering of depth maps that may include adaptive selection between filters sensitive to changes in video intensity and/or changes in depth. One implementation partitions a depth picture, and then refines the partitions based on a corresponding image picture. Another implementation filters a portion of a depth picture based on values for a range of pixels in the portion. For a given pixel in the portion that is being filtered, the filter weights a value of a particular pixel in the range by a weight that is based on one or more of location distance, depth difference, and image difference.
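The three-factor weighting described in the last sentence can be sketched as a trilateral filter. This is an illustrative implementation under assumptions not stated in the abstract: Gaussian kernels for all three terms, and example sigma values; the actual weighting functions may differ.

```python
import numpy as np

def trilateral_filter(depth, image, radius=2,
                      sigma_s=2.0, sigma_d=10.0, sigma_i=10.0):
    """Filter a depth map with weights based on spatial (location) distance,
    depth difference, and co-located image (intensity) difference.

    Gaussian kernels and sigma values are illustrative assumptions.
    """
    h, w = depth.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    # one weight factor per cue: location, depth, image
                    w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    w_d = np.exp(-(depth[ny, nx] - depth[y, x]) ** 2
                                 / (2 * sigma_d ** 2))
                    w_i = np.exp(-(image[ny, nx] - image[y, x]) ** 2
                                 / (2 * sigma_i ** 2))
                    weight = w_s * w_d * w_i
                    acc += weight * depth[ny, nx]
                    wsum += weight
            out[y, x] = acc / wsum
    return out
```

Dropping the image term `w_i` reduces this to the bilateral variant also mentioned in the abstract; the adaptive selection between intensity-sensitive and depth-sensitive filters could be realized by switching which terms are active per region.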
Abstract:
Implementations are provided that relate, for example, to view tiling in video encoding and decoding. A particular method includes accessing a video picture that includes multiple pictures combined into a single picture, accessing information indicating how the multiple pictures in the accessed video picture are combined, decoding the video picture to provide a decoded representation of at least one of the multiple pictures, and providing the accessed information and the decoded video picture as output. Some other implementations format or process the information that indicates how multiple pictures included in a single video picture are combined into the single video picture, and format or process an encoded representation of the combined multiple pictures.
Abstract:
Noise, either in the form of comfort noise or film grain, is added to a three dimensional image in accordance with image depth information to reduce human sensitivity to coding artifacts, thereby improving subjective image quality.
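A hedged sketch of depth-guided noise injection follows. The abstract only states that noise is added "in accordance with image depth information"; the specific choice here, scaling Gaussian comfort-noise strength by normalized depth, is an assumption for illustration.

```python
import numpy as np

def add_depth_weighted_noise(image, depth, max_sigma=5.0, seed=0):
    """Add comfort noise whose strength scales with normalized depth.

    Illustrative assumption: noise standard deviation grows linearly
    with normalized depth, up to max_sigma.
    """
    rng = np.random.default_rng(seed)
    span = max(depth.max() - depth.min(), 1e-9)
    d = (depth - depth.min()) / span          # normalize depth to [0, 1]
    noise = rng.standard_normal(image.shape) * (max_sigma * d)
    return image + noise
```

Film grain could be substituted for the Gaussian term by drawing from a grain model instead of `standard_normal`.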
Abstract:
A remote control device is operative to enable and facilitate user control of video systems that are operative to provide one or more three-dimensional (3D) viewing effects. According to an exemplary embodiment, the remote control device includes a user input terminal having an input element operative to receive user inputs to adjust at least one of a volume setting and a channel setting of a video system, and further operative to receive user inputs to adjust a 3D viewing effect of the video system. A transmitter is operative to transmit control signals to the video system in response to the user inputs.
Abstract:
A quality of a virtual image for a synthetic viewpoint in a 3D scene is determined. The 3D scene is acquired by texture images, and each texture image is associated with a depth image acquired by a camera arranged at a real viewpoint. A texture noise power is based on the acquired texture images and reconstructed texture images corresponding to a virtual texture image. A depth noise power is based on the depth images and reconstructed depth images corresponding to a virtual depth image. The quality of the virtual image is based on a combination of the texture noise power and the depth noise power, and the virtual image is rendered from the reconstructed texture images and the reconstructed depth images.
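The combination of texture and depth noise powers can be sketched as follows. This is a hypothetical reading: noise power is taken as the mean squared error between acquired and reconstructed images, and the two powers are combined additively into a PSNR-style score; the actual combination in the source may differ.

```python
import numpy as np

def virtual_view_quality(texture_refs, texture_recs, depth_refs, depth_recs,
                         peak=255.0):
    """Estimate virtual-image quality from texture and depth noise powers.

    Noise power here is MSE between acquired and reconstructed images
    (an assumption); the powers are summed and mapped to a PSNR-style
    score in dB.
    """
    tex_power = np.mean([np.mean((a - b) ** 2)
                         for a, b in zip(texture_refs, texture_recs)])
    dep_power = np.mean([np.mean((a - b) ** 2)
                         for a, b in zip(depth_refs, depth_recs)])
    total = tex_power + dep_power
    if total == 0:
        return float('inf')  # perfect reconstruction
    return 10.0 * np.log10(peak ** 2 / total)
```

Such a metric lets the quality of the rendered virtual image be predicted from the reconstructed textures and depths alone, without access to a ground-truth virtual view.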
Abstract:
An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depths associated with each pixel of a selected image is determined. For each candidate depth, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth with a least cost is selected to produce an optimal depth for the pixel. Then, the virtual image is synthesized based on the optimal depth of each pixel and the texture images. The method also applies first and second depth enhancements before and during view synthesis to correct errors or suppress noise due to the estimation or acquisition of the dense depth images and sparse depth features.
Abstract:
An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depth values associated with each pixel of a selected image is determined. For each candidate depth value, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth value with a least cost is selected to produce an optimal depth value for the pixel. Then, the virtual image is synthesized based on the optimal depth value of each pixel and the texture images.
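The per-pixel selection step above can be sketched generically. The interface is hypothetical: `candidate_depths` is a list of depth maps (one per candidate value) and `cost_fn` returns a per-pixel synthesis-cost map; the source's actual cost function (which estimates synthesis quality of the virtual image) is not specified here.

```python
import numpy as np

def select_optimal_depths(candidate_depths, cost_fn):
    """Pick, per pixel, the candidate depth with the least synthesis cost.

    candidate_depths: list of (H, W) depth maps, one per candidate value.
    cost_fn: maps a depth map to a per-pixel (H, W) cost map (assumed API).
    """
    costs = np.stack([cost_fn(d) for d in candidate_depths])   # (K, H, W)
    depths = np.stack(candidate_depths)                        # (K, H, W)
    best = np.argmin(costs, axis=0)                            # (H, W)
    h, w = best.shape
    # gather the winning candidate's depth at every pixel
    return depths[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```

The resulting optimal depth map would then drive warping of the texture images to synthesize the virtual view.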