Abstract:
A frame sequence of moving picture data is divided into a tile image sequence 250, and the color space of the tile image sequence 250 is converted to generate a YCbCr image sequence 252 (S10). Each frame is reduced to ½ in the vertical and horizontal directions (S12), and a compression process is carried out to generate compression data 260 of a reference image (S14). The compression data 260 of the reference image is decoded and decompressed in the same manner as at the time of image display to restore a YCbCr image as the reference image, and a difference image sequence 262 is generated from the reference image and the original YCbCr image sequence 252 (S16). Then, compression data 266 of a difference image is generated (S18), and compression data 268 is generated for every four frames of a tile image by connecting the compression data 260 of the reference image and the compression data 266 of the difference image (S20).
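The reference/difference flow above (S10 through S18) can be sketched as follows. This is a minimal illustration, not the patented codec: the BT.601 conversion, the 2×2 averaging, and the coarse quantizer standing in for the actual compression method are all assumptions.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr conversion (S10)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def halve(img):
    """Reduce to 1/2 vertically and horizontally by 2x2 averaging (S12)."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img.reshape(h, 2, w, 2, -1).mean(axis=(1, 3))

def quantize(img, step=16.0):
    """Stand-in for the compression/decompression round trip (S14)."""
    return np.round(img / step) * step

def compress_tile(rgb_tile):
    ycbcr = rgb_to_ycbcr(rgb_tile)            # S10
    reference = quantize(halve(ycbcr))        # S12 + S14
    # Decode as at display time: enlarge the reference back to full size (S16).
    restored = np.repeat(np.repeat(reference, 2, axis=0), 2, axis=1)
    difference = ycbcr - restored             # S16
    return reference, quantize(difference, step=8.0)  # S18
```

In the actual scheme the two compressed streams would then be connected per four frames (S20); here they are simply returned as a pair.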
Abstract:
An information processor includes a detection plane definition portion that defines a detection plane in a 3D space of the camera coordinate system of a first camera and calculates the vertex coordinates of a detection area by projecting the detection plane onto the plane of a left image shot by the first camera. A feature quantity calculation portion generates feature point images of the left and right images. A parallax correction area derivation portion derives a parallax correction area, which is obtained by moving an area of the right image identical to the detection area of the left image to the left by as much as the parallax appropriate to the position of the detection plane in the depth direction. A matching portion performs block matching on the feature point images of each area, thus deriving highly rated feature points. A position information output portion generates information to be used by an output information generation section based on the matching result and outputs that information.
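The leftward shift of the right-image area follows directly from stereo geometry: the parallax for a plane at a given depth is focal length × baseline / depth. A minimal sketch, where `focal_px` and `baseline_m` are assumed camera parameters not given in the source:

```python
def disparity_px(depth_m, focal_px=525.0, baseline_m=0.06):
    """Stereo parallax (in pixels) for a detection plane at depth_m."""
    return focal_px * baseline_m / depth_m

def parallax_correction_area(detection_area, depth_m):
    """Move the right-image area identical to the left-image detection
    area to the left by the parallax for the plane's depth."""
    x0, y0, x1, y1 = detection_area
    d = disparity_px(depth_m)
    return (x0 - d, y0, x1 - d, y1)
```

Block matching would then be run between the left detection area and this shifted right-image area.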
Abstract:
Frames of a moving image are configured as a hierarchical structure in which each frame is represented at a plurality of resolutions. In the hierarchical data representing the frame at each time step, some layers are set as original image layers and the other layers are set as difference image layers. When an area is to be displayed at the resolution of a difference image layer, the image of the corresponding area held in a lower-resolution original image layer is enlarged to the resolution of the difference image layer, and its pixel values are added to the respective pixel values of the difference image of the area. The layer set as a difference image layer is switched to another layer as time passes.
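The reconstruction step can be sketched in a few lines, assuming a 2× resolution step between adjacent layers and nearest-neighbor enlargement (the actual enlargement filter is not specified in the source):

```python
import numpy as np

def enlarge_2x(img):
    """Nearest-neighbor enlargement up to the difference layer's resolution."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def reconstruct(original_layer_area, difference_layer_area):
    """Displayed pixels = enlarged lower-resolution original + differences."""
    return enlarge_2x(original_layer_area) + difference_layer_area
```

Because difference images are typically sparse and compress well, which layers hold differences can be rotated over time, as the abstract describes.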
Abstract:
An imaging device includes a first camera and a second camera and shoots the same object under different shooting conditions. A shot-image data acquirer of an image analyzer acquires data of two images simultaneously shot by the imaging device. A correcting section aligns the luminance value distributions of the two images by correcting either one of them. A correction table managing section switches and generates the correction table to be used according to the function implemented by the information processing device. A correction table storage stores the correction table, which shows the correspondence relationship between the luminance values before and after correction. A depth image generator performs stereo matching using the two images and generates a depth image.
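One common way to build such a luminance correspondence table is histogram matching; the source does not name the method, so the following is only an assumed sketch of how a before/after-correction table could be generated and applied:

```python
import numpy as np

def build_correction_table(src_lum, dst_lum, levels=256):
    """For each input luminance value, find the output value whose
    cumulative frequency in the target image is closest (histogram matching)."""
    src_cdf = np.cumsum(np.bincount(src_lum.ravel(), minlength=levels)) / src_lum.size
    dst_cdf = np.cumsum(np.bincount(dst_lum.ravel(), minlength=levels)) / dst_lum.size
    # table[v] = luminance after correction for input luminance v
    return np.searchsorted(dst_cdf, src_cdf).clip(0, levels - 1)

def correct(img, table):
    """Apply the stored before/after correspondence to one of the two images."""
    return table[img]
```

After correction, the two luminance distributions agree well enough for stereo matching to produce a usable depth image.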
Abstract:
An image storage section 48 stores shot image data with a plurality of resolutions transmitted from an imaging device. Depth images 152 with a plurality of resolutions are generated using stereo images at a plurality of resolution levels taken from the shot image data (S10). Next, template matching is performed using a reference template image 154 that represents a desired shape and size, thereby extracting, for each distance range associated with one of the resolutions, a candidate area for a target picture having that shape and size (S12). A more detailed analysis is performed on the extracted candidate areas using the shot image stored in the image storage section 48 (S14). In some cases, a further image analysis is performed based on the analysis result using a shot image with a higher resolution level (S16a and S16b).
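The reason one fixed-size reference template can cover a different distance range at each resolution is that halving the resolution doubles the distance at which a target of fixed real size appears at the template's pixel size. A sketch of that association, with assumed focal length, template size, and target width:

```python
def apparent_width_px(real_width_m, depth_m, focal_px=525.0):
    """Projected width of the target at a given depth (pinhole model)."""
    return focal_px * real_width_m / depth_m

def resolution_level_for(depth_m, template_px=32, real_width_m=0.2, focal_px=525.0):
    """Pick the resolution level (0 = finest) at which a target at depth_m
    appears close to the template size; each level halves the resolution."""
    level = 0
    width = apparent_width_px(real_width_m, depth_m, focal_px)
    while width > template_px * 1.5:  # picture too large: use a coarser level
        width /= 2.0
        level += 1
    return level
```

Near targets thus match at coarse levels and far targets at fine levels, which is why each resolution is associated with one distance range.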
Abstract:
A viewpoint detection unit detects a user viewing a stereoscopic image, which includes parallax images of a subject as viewed from a predetermined position defined as the reference view position, and tracks the viewpoint of the detected user. If the speed of movement of the viewpoint becomes equal to or higher than a predetermined level, a motion parallax correction unit determines an amount of motion parallax correction for the parallax images on the basis of the amount of movement of the viewpoint, so as to generate a stereoscopic image corrected for motion parallax. If the speed of movement of the viewpoint subsequently becomes lower than the predetermined level, the motion parallax correction unit generates a stereoscopic image by changing the amount of motion parallax correction in steps until the parallax images return to those as seen from the reference view position.
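The per-frame behavior of the correction unit can be sketched as a small state update. The threshold and step size below are illustrative assumptions; the source specifies only that the return happens "in steps":

```python
SPEED_THRESHOLD = 5.0  # assumed threshold, viewpoint units per frame
RETURN_STEP = 0.5      # assumed step size for the gradual return

def update_correction(current, viewpoint_speed, viewpoint_offset):
    """One frame of motion parallax correction.
    Fast movement: correct according to the viewpoint's movement.
    Slow movement: step the correction back toward zero (reference view)."""
    if abs(viewpoint_speed) >= SPEED_THRESHOLD:
        return viewpoint_offset
    if current > 0:
        return max(0.0, current - RETURN_STEP)
    return min(0.0, current + RETURN_STEP)
```

Stepping rather than snapping back avoids a visible jump when the viewpoint comes to rest away from the reference view position.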
Abstract:
A tile image sequence 250 obtained by dividing a frame into a predetermined size is further divided into another predetermined size on the image plane to generate voxels (for example, a voxel 252) (S10). If redundancy exists in the space direction or the time direction, the data is reduced in that direction (S12), and the sequences in the time direction are deployed on a two-dimensional plane (S14). The voxel images are placed on an image plane of a predetermined size to generate one integrated image 258 (S16). Using the grouping pattern that exhibits the minimum quantization error, the pixels of each group are collectively placed in the region of each voxel image (integrated image 262) (S18). The integrated image 262 after re-placement is compressed in accordance with a predetermined compression method to generate a compressed image 266 and reference information 264 for determining the position of a needed pixel (S20).
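Steps S12 and S14 can be sketched for a single voxel under one simple reading of "redundancy in the time direction", namely frames identical to their predecessor; the actual redundancy criterion is not specified in the source:

```python
import numpy as np

def reduce_time_redundancy(voxel):
    """voxel: array of shape (frames, h, w). Drop each frame that is
    identical to the previous one (assumed redundancy rule, S12)."""
    kept = [voxel[0]]
    for frame in voxel[1:]:
        if not np.array_equal(frame, kept[-1]):
            kept.append(frame)
    return np.stack(kept)

def deploy_to_plane(voxel):
    """Deploy the remaining time sequence on a 2D plane (S14),
    here as a simple horizontal layout."""
    return np.hstack(list(voxel))
```

The deployed voxel images would then be packed into the integrated image and regrouped for quantization (S16 through S18).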
Abstract:
There is provided an image processing apparatus including a stereo matching unit configured to obtain right and left disparity images by using stereo matching, based on a pair of images captured by right and left cameras, respectively, a filter processing unit configured to perform filter processing on the disparity images, and a first merging unit configured to make a comparison, in the disparity images that have undergone the filter processing, between disparity values at mutually corresponding positions in the right and left disparity images and to merge the disparity values of the right and left disparity images based on a comparison result.
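The comparison-and-merge step resembles a standard left-right consistency check: a left disparity d at (x, y) should agree with the right disparity at (x − d, y). The exact merging rule is not given in the source, so the fallback to the smaller value below is an assumption:

```python
import numpy as np

def merge_disparities(disp_left, disp_right, tolerance=1):
    """Compare disparity values at mutually corresponding positions in the
    left and right disparity maps (integer disparities assumed) and merge."""
    h, w = disp_left.shape
    merged = np.zeros_like(disp_left)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d  # corresponding position in the right disparity map
            if 0 <= xr < w and abs(disp_right[y, xr] - d) <= tolerance:
                merged[y, x] = d  # consistent: keep the value
            elif 0 <= xr < w:
                merged[y, x] = min(d, disp_right[y, xr])  # assumed fallback
            else:
                merged[y, x] = d
    return merged
```

Consistent positions survive the merge unchanged, while occluded or mismatched positions are resolved by the fallback rule.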
Abstract:
A hard disk drive stores hierarchical image data, a speed map that holds, for each tile image of a predetermined size obtained by partitioning the image, an index of the processing time required to render that tile image, and scenario data that defines viewpoint shifting. In the control unit of an information processing apparatus having a function of displaying an image, an input information acquisition unit acquires information on the user's input operation via an input device. A loading unit loads the data necessary for image display from the hard disk drive. A shifting condition adjustment unit adjusts the viewpoint shifting speed based on the speed map. A frame coordinate determination unit sequentially determines the frame coordinates of the display area. A decoding unit decodes compressed image data. A display image processing unit renders a display image.
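One plausible adjustment rule, sketched below under stated assumptions (the source says only that the speed is adjusted based on the speed map): cap the shifting speed so that the slowest tile along the planned path can still be decoded and rendered within a frame budget.

```python
def adjust_shift_speed(requested_speed, tile_times, frame_budget=1.0 / 60.0):
    """tile_times: processing-time indices (seconds) from the speed map for
    the tiles the viewpoint shift will cross. Slow the shift in proportion
    to how far the worst tile exceeds the per-frame budget (assumed rule)."""
    worst = max(tile_times)
    if worst <= frame_budget:
        return requested_speed
    return requested_speed * frame_budget / worst
```

Slowing the shift over expensive tiles gives the decoding unit time to keep up, avoiding dropped frames during viewpoint movement.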
Abstract:
An input information acquisition unit of an information processing device acknowledges a user input. An imaging condition control unit initiates imaging using an imaging condition determined according to the user input or a result of analyzing a captured image. An imaging condition storage unit stores an imaging condition table that maps target functions to imaging conditions. First and second image analysis units acquire images captured by first and second cameras installed in the imaging device and perform necessary image analysis. An information integration unit integrates the images captured by the pair of cameras and the results of the analysis. An image data generation unit generates data for an image to be output as a result of the process.
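The imaging condition table can be sketched as a simple mapping from target function to per-camera conditions. The function names and condition fields below are illustrative assumptions, not values from the source:

```python
# Hypothetical imaging condition table: target function -> conditions
# for the first and second cameras.
IMAGING_CONDITION_TABLE = {
    "depth_measurement":  {"cam1": {"exposure": "normal"},
                           "cam2": {"exposure": "normal"}},
    "high_dynamic_range": {"cam1": {"exposure": "short"},
                           "cam2": {"exposure": "long"}},
}

def conditions_for(function_name):
    """Look up the imaging conditions for the selected target function."""
    return IMAGING_CONDITION_TABLE[function_name]
```

The imaging condition control unit would consult this table when the user input or image analysis selects a function, then configure both cameras accordingly.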