Abstract:
A viewpoint detection unit detects a user viewing a stereoscopic image, which includes a parallax image of a subject as viewed from a predetermined position defined as a reference view position, and tracks a viewpoint of the detected user. If the speed of movement of the viewpoint becomes equal to or higher than a predetermined level, a motion parallax correction unit determines an amount of motion parallax correction for the parallax image on the basis of the amount of movement of the viewpoint, so as to generate a stereoscopic image corrected for motion parallax. If the speed of movement of the viewpoint subsequently becomes lower than the predetermined level, the motion parallax correction unit generates a stereoscopic image by changing the amount of motion parallax correction in steps until the parallax images return to those as seen from the reference view position.
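The speed-gated behavior described above can be sketched as a small per-frame update rule. This is a minimal illustration, not the patented implementation; the threshold, the decay step, and the function names are assumptions for the example.

```python
# Hypothetical sketch of viewpoint-speed-gated motion parallax correction.
# Threshold and decay values are illustrative, not taken from the abstract.

SPEED_THRESHOLD = 0.05   # viewpoint speed above which correction is applied
DECAY_STEP = 0.2         # fraction of correction removed per frame on return

def update_correction(correction, viewpoint_speed, viewpoint_movement):
    """Return the new parallax-correction amount for this frame."""
    if viewpoint_speed >= SPEED_THRESHOLD:
        # Viewpoint is moving: correction follows the amount of movement.
        return viewpoint_movement
    # Viewpoint is (almost) still: step the correction back toward zero
    # so the parallax images return to the reference view position.
    return correction * (1.0 - DECAY_STEP) if abs(correction) > 1e-4 else 0.0
```

Stepping the correction back gradually, rather than snapping to zero, matches the abstract's "changing the amount of motion parallax correction in steps".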
Abstract:
A tile image sequence 250, obtained by dividing each frame into a predetermined size, is further divided into another predetermined size on the image plane to generate voxels (for example, a voxel 252) (S10). If redundancy exists in the space direction or the time direction, the data is reduced in that direction (S12), and the sequences in the time direction are deployed on a two-dimensional plane (S14). The voxel images are placed on an image plane of a predetermined size to generate one integrated image 258 (S16). Using the grouping pattern that exhibits the minimum quantization error, the pixels of each group are collectively placed in the region of each voxel image (integrated image 262) (S18). The integrated image 262 after the rearrangement is compressed in accordance with a predetermined compression method to generate a compressed image 266 and reference information 264 for determining the position of a needed pixel (S20).
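The first three steps (S10, S12, S14) can be sketched as follows. This is a simplified toy version under stated assumptions: frames are plain 2D lists, the redundancy test only checks whether all time slices of a voxel are identical, and function names are invented for illustration.

```python
# Toy sketch of S10-S14: cut frames into voxels, reduce identical
# time slices, and lay the remaining slices out on a 2D plane.

def make_voxels(frames, size):
    """Cut each frame of a sequence into size x size blocks (S10)."""
    h, w = len(frames[0]), len(frames[0][0])
    voxels = []
    for y in range(0, h, size):
        for x in range(0, w, size):
            voxels.append([[row[x:x + size] for row in f[y:y + size]]
                           for f in frames])
    return voxels

def reduce_time_redundancy(voxel):
    """Keep one slice if all slices along the time axis are identical (S12)."""
    return voxel[:1] if all(s == voxel[0] for s in voxel) else voxel

def deploy_to_plane(voxel):
    """Place the time slices side by side on one 2D plane (S14)."""
    return [sum((s[r] for s in voxel), []) for r in range(len(voxel[0]))]
```

A real implementation would use a tolerance-based redundancy test and pack the deployed voxel images into the integrated image of S16; this sketch only shows the data flow.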
Abstract:
A hard disk drive stores hierarchical image data, a speed map that holds, for each tile image of a predetermined size obtained by partitioning the image, an index of the processing time required to render that tile image, and scenario data that defines viewpoint shifting. In a control unit of an information processing apparatus having a function of displaying an image, an input information acquisition unit acquires information on the user's input operation via an input device. A loading unit loads the data necessary for image display from the hard disk drive. A shifting condition adjustment unit adjusts the viewpoint shifting speed based upon the speed map. A frame coordinate determination unit sequentially determines the frame coordinates of a display area. A decoding unit decodes compressed image data. A display image processing unit renders a display image.
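The role of the shifting condition adjustment unit can be illustrated with a small sketch: the viewpoint shifts more slowly over tiles whose rendering-time index is high, so decoding and rendering can keep up. The index scale, the base speed, and the scaling formula are assumptions made for this example only.

```python
# Hypothetical sketch of speed-map-based viewpoint shifting adjustment.

BASE_SPEED = 100.0  # display-area shift in pixels per frame at the fastest setting

def adjust_shift_speed(speed_map, tile_index):
    """Scale the viewpoint shifting speed by the tile's processing-time index."""
    cost = speed_map.get(tile_index, 1.0)  # 1.0 = cheapest tile to render
    return BASE_SPEED / max(cost, 1.0)
```

Slowing down over expensive tiles is one plausible reading of "adjusts the viewpoint shifting speed based upon the speed map"; the abstract does not specify the exact policy.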
Abstract:
An input information acquisition unit of an information processing device acknowledges a user input. An imaging condition control unit initiates imaging using an imaging condition determined according to the user input or to a result of analyzing a captured image. An imaging condition storage unit stores an imaging condition table that maps target functions to imaging conditions. First and second image analysis units acquire images captured by first and second cameras installed in the imaging device and perform the necessary image analysis. An information integration unit integrates the images captured by the pair of cameras and the results of the analysis. An image data generation unit generates data for an image to be output as a result of the process.
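The imaging condition table can be sketched as a simple mapping from target functions to per-camera settings. The function names and condition fields below are invented examples; the abstract only states that such a table exists.

```python
# Illustrative imaging condition table: target function -> conditions
# for the two cameras. All entries are made-up placeholders.

IMAGING_CONDITIONS = {
    "face_tracking": {"cam1": {"exposure": "auto", "res": "low"},
                      "cam2": {"exposure": "auto", "res": "high"}},
    "depth_estimation": {"cam1": {"exposure": "fixed", "res": "high"},
                         "cam2": {"exposure": "fixed", "res": "high"}},
}

def conditions_for(function_name):
    """Look up the imaging conditions for a requested target function."""
    return IMAGING_CONDITIONS[function_name]
```

Keeping the mapping in a table, rather than hard-coding it, matches the abstract's separation of the storage unit from the control unit that applies the conditions.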
Abstract:
[Object] To perform more stable and highly accurate attitude estimation. [Solution] The attitude optimization unit optimizes the articulation positions, angles, number of articulations, and other attitude parameters of a human body model (tree structure) so as to match the region in which a human body can exist, switching among a plurality of optimization techniques and using the optimum technique. Note that the optimization techniques differ in 1. initial value, 2. algorithm, and 3. constraint, and optimization is performed by switching among these three. For example, the present disclosure can be applied to an image processing device that performs image processing for optimizing the articulation positions and angles of a human body model.
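The switching idea can be sketched as running several candidate optimization techniques and keeping the one with the lowest mismatch against the region where a human body can exist. The candidate list and the error metric below are placeholders; the actual techniques (different initial values, algorithms, constraints) are not specified beyond the abstract.

```python
# Hypothetical sketch of switching among pose-optimization techniques
# and keeping the best result. Candidates and error metric are placeholders.

def optimize_pose(candidates, error_fn):
    """Run each candidate technique and return the lowest-error result."""
    best_params, best_err = None, float("inf")
    for technique in candidates:
        params = technique()      # one optimization run
        err = error_fn(params)    # mismatch with the human-body region
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```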
Abstract:
A first estimating unit estimates at least one of a position and an attitude of a predetermined object on the basis of an image of the periphery of the object obtained from an imaging device, and generates an estimation result that does not include an accumulated error. A second estimating unit estimates at least one of the position and the attitude of the object on the basis of the image, and generates an estimation result that includes an accumulated error. A correcting unit compares the estimation result of the first estimating unit with the estimation result of the second estimating unit and, on the basis of the comparison, corrects the subsequent estimation results of the second estimating unit, that is, those following the estimation result used for the comparison. An App executing unit performs predetermined data processing on the basis of the estimation result of the second estimating unit as corrected by the correcting unit.
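The correction scheme above can be illustrated with a minimal one-dimensional sketch: the offset between the drift-free estimate (first unit) and the drifting estimate (second unit) at the comparison instant is applied to the second unit's later estimates. The 1-D position, the additive offset model, and the function names are simplifying assumptions for this example.

```python
# Minimal 1-D sketch of drift correction between two estimators.

def compute_offset(drift_free_pos, drifting_pos):
    """Offset between the two estimates at the comparison instant."""
    return drift_free_pos - drifting_pos

def correct(subsequent_pos, offset):
    """Apply the offset to a later estimate of the second unit."""
    return subsequent_pos + offset
```

In practice position and attitude would be corrected jointly (e.g., as a pose transform), but the additive 1-D form shows the compare-then-correct structure the abstract describes.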