Abstract:
A game controller includes a plurality of LEDs formed on the rear of a case. The plurality of LEDs are arranged two-dimensionally within their layout area. The game controller has a plurality of PWM control units which are provided inside the case and control the lighting of the plurality of LEDs, respectively. The PWM control units control the lighting of the LEDs based on a control signal from a game apparatus. The game apparatus acquires a captured image of the game controller, and acquires the position of the game controller in the captured image based on the positions of the LEDs in the captured image.
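As an illustration of the last step, the position of the controller in a captured image can be estimated from the bright pixels produced by the LED array. The following is a minimal sketch, not the patented algorithm: the threshold and the simple centroid method are assumptions for illustration.

```python
def led_centroid(image, threshold=200):
    """Return the (x, y) centroid of pixels brighter than `threshold`.

    `image` is a list of rows of grayscale values (0-255). The centroid
    of the lit LED pixels serves as the controller's position estimate.
    """
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None  # no LEDs visible in the frame
    return (xs / n, ys / n)

# Four bright LED spots arranged two-dimensionally in a dark frame.
frame = [[0] * 8 for _ in range(8)]
for x, y in [(2, 2), (5, 2), (2, 5), (5, 5)]:
    frame[y][x] = 255

print(led_centroid(frame))  # → (3.5, 3.5)
```

A real implementation would segment individual LED blobs (e.g. by connected components) rather than pool all bright pixels, but the centroid already shows how LED pixel positions yield a controller position.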
Abstract:
An image processing device includes: an input information obtaining section for obtaining input information for changing a display region in an image as a display object; a display image processing section for generating, as a display image, an image inside the display region determined on the basis of the input information; and a display section for displaying the generated display image on a display. When the input information obtaining section obtains input information for scaling the display image, the display image processing section scales the display image according to the input information, and performs image manipulation that makes the visibility of a region within a predetermined range around the focus serving as the center of scaling in the image plane different from the visibility of other regions.
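The scaling-plus-visibility manipulation might look like the following sketch, under assumed simplifications: nearest-neighbour scaling stands in for the scaling step, and halving brightness outside a radius around the focus stands in for the visibility change; neither is specified by the abstract.

```python
def scale_and_highlight(image, factor, focus, radius):
    """Nearest-neighbour scale `image` (rows of grayscale values) by
    `factor`, then halve the brightness of pixels farther than `radius`
    from `focus` (coordinates in the scaled image), so the region around
    the center of scaling remains visually distinct."""
    h, w = len(image), len(image[0])
    out = []
    fx, fy = focus
    for y in range(int(h * factor)):
        row = []
        for x in range(int(w * factor)):
            v = image[int(y / factor)][int(x / factor)]
            if (x - fx) ** 2 + (y - fy) ** 2 > radius ** 2:
                v //= 2  # reduce visibility outside the focus region
            row.append(v)
        out.append(row)
    return out

scaled = scale_and_highlight([[100, 100], [100, 100]], 2, (0, 0), 1.5)
```

In `scaled`, pixels near the focus keep value 100 while the rest drop to 50, making the focus region stand out after scaling.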
Abstract:
An original image to be edited is displayed using hierarchical data. When a user draws a figure in a region of the image as an edit action, an image data updating unit generates a layer having a hierarchical structure composed of the rendered region only. More specifically, the image of the region to be edited is used as the lowermost hierarchical level, and the hierarchical levels above it are generated by reducing the lowermost level as appropriate, so as to produce hierarchical data. During image display, when the updated region is found to be contained in a frame to be displayed anew, the image of the layer is displayed superposed on the frame rendered from the original hierarchical data.
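Building the edit layer's hierarchy amounts to repeatedly reducing the edited region, with the region itself as the lowermost level. A sketch under assumed simplifications (square, power-of-two grayscale region; reduction by 2x2 averaging, which the abstract does not specify):

```python
def build_edit_layer(region):
    """Build hierarchical data for an edited region: the region itself
    is the lowermost level, and each level above halves the resolution
    by averaging 2x2 blocks of the level below."""
    levels = [region]
    while len(levels[-1]) > 1:
        src = levels[-1]
        n = len(src) // 2
        levels.append([
            [sum(src[2 * y + dy][2 * x + dx]
                 for dy in (0, 1) for dx in (0, 1)) // 4
             for x in range(n)]
            for y in range(n)])
    return levels  # levels[0] = full detail, levels[-1] = coarsest

hierarchy = build_edit_layer([[8] * 4 for _ in range(4)])
```

Because the hierarchy covers only the rendered region, the edit can be stored and superposed without regenerating the original image's full hierarchical data.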
Abstract:
An imaging device includes a first camera and a second camera and shoots the same object under different shooting conditions. A shot-image data acquirer of an image analyzer acquires, from the imaging device, data of two images shot simultaneously. A correcting section aligns the luminance value distributions of the two images by correcting either one of them. A correction table managing section switches between, and generates, the correction tables to be used according to the function implemented by the information processing device. A correction table storage stores the correction table, which shows the correspondence relationship between the luminance values before and after correction. A depth image generator performs stereo matching using the two images and generates a depth image.
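A correction table of the kind described maps each input luminance value to a corrected one. As a hedged sketch, the table below aligns only the mean luminance of the target image with that of the reference image via a gain; the actual per-function table generation would be more elaborate.

```python
def build_correction_table(ref_vals, target_vals, levels=256):
    """Build a lookup table mapping each luminance value (0..levels-1)
    of the target image to a corrected value, so that the target's mean
    luminance matches the reference image's mean. A simple gain model
    stands in for the patented table generation."""
    gain = (sum(ref_vals) / len(ref_vals)) / (sum(target_vals) / len(target_vals))
    # Clamp to the valid luminance range after applying the gain.
    return [min(levels - 1, round(v * gain)) for v in range(levels)]

table = build_correction_table([100, 100], [50, 50])
```

Applying `table[v]` to every pixel of the darker image brings its luminance distribution toward the reference before stereo matching, which improves correspondence search between images shot under different conditions.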
Abstract:
An image storage section 48 stores shot image data with a plurality of resolutions transmitted from an imaging device. Depth images 152 with a plurality of resolutions are generated using stereo images with a plurality of resolution levels from the shot image data (S10). Next, template matching is performed using a reference template image 154 that represents a desired shape and size, thus extracting a candidate area for a target picture having the shape and size for each distance range associated with one of the resolutions (S12). A more detailed analysis is performed on the extracted candidate areas using the shot image stored in the image storage section 48 (S14). In some cases, a further image analysis is performed based on the analysis result using a shot image with a higher resolution level (S16a and S16b).
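The template matching in step S12 can be illustrated with a basic sum-of-absolute-differences search; the SAD criterion and exhaustive scan are assumptions for illustration, not the method of the abstract.

```python
def match_template(image, template):
    """Return the (x, y) position in `image` (rows of grayscale values)
    where `template` matches best, by minimizing the sum of absolute
    differences over all placements."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            sad = sum(abs(image[y + dy][x + dx] - template[dy][dx])
                      for dy in range(th) for dx in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (x, y)
    return best_pos
```

Because picture size in the image varies with distance, running this search with one fixed-size reference template at each resolution level effectively covers one distance range per level, as the abstract describes.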
Abstract:
A viewpoint detection unit detects a user viewing a stereoscopic image, which includes a parallax image of a subject as viewed from a predetermined position defined as a reference view position, and tracks a viewpoint of the detected user. If the speed of movement of the viewpoint becomes equal to or higher than a predetermined level, a motion parallax correction unit determines an amount of motion parallax correction for the parallax image, on the basis of the amount of movement of the viewpoint, so as to generate a stereoscopic image corrected for motion parallax. If the speed of movement of the viewpoint subsequently becomes lower than the predetermined level, the unit generates a stereoscopic image by changing the amount of motion parallax correction in steps until the parallax images return to those as seen from the reference view position.
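The two-phase behaviour can be sketched as one per-frame update: track the viewpoint while it moves fast, then decay the correction stepwise once it slows. The threshold, step size, and proportional gain below are invented constants, not values from the abstract.

```python
THRESHOLD = 5.0  # assumed viewpoint speed threshold (units/s)
STEP = 0.5       # assumed per-frame decrement of the correction amount

def update_correction(correction, viewpoint_speed, viewpoint_delta, k=0.1):
    """One frame of the motion parallax corrector: while the viewpoint
    moves at or above THRESHOLD, the correction amount follows the
    viewpoint movement; once it slows, the correction is reduced in
    steps until it reaches zero, i.e. the reference view position."""
    if viewpoint_speed >= THRESHOLD:
        return k * viewpoint_delta         # correction tracks the movement
    return max(0.0, correction - STEP)     # ramp back to the reference view
```

Ramping down in steps rather than snapping back avoids a visible jump in the stereoscopic image when the viewer stops moving.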
Abstract:
A tile image sequence 250, obtained by dividing a frame into a predetermined size, is further divided into another predetermined size on the image plane to generate voxels (for example, a voxel 252) (S10). If redundancy exists in the space direction or the time direction, the data is reduced in that direction (S12), and sequences in the time direction are deployed on a two-dimensional plane (S14). The voxel images are placed on an image plane of a predetermined size to generate one integrated image 258 (S16). Using the grouping pattern that exhibits the minimum quantization error, the pixels of each group are collectively placed in the region of each voxel image (integrated image 262) (S18). The rearranged integrated image 262 is compressed in accordance with a predetermined compression method to generate a compressed image 266 and reference information 264 for determining the position of a needed pixel (S20).
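Steps S12 and S14 can be sketched for a single voxel: drop temporally redundant frames, then deploy the surviving frames side by side on one 2D plane. Detecting redundancy as exact frame equality and deploying horizontally are simplifying assumptions.

```python
def compress_voxel(frames):
    """Reduce a voxel (a short time sequence of tile blocks, each a list
    of rows) along the time direction by keeping only frames that differ
    from their predecessor, then deploy the remaining frames side by
    side on a two-dimensional plane."""
    reduced = [frames[0]]
    for f in frames[1:]:
        if f != reduced[-1]:       # time-direction redundancy: drop repeats
            reduced.append(f)
    # Deploy the time sequence horizontally on one image plane.
    return [sum((f[y] for f in reduced), []) for y in range(len(frames[0]))]

plane = compress_voxel([[[1, 2], [3, 4]],
                        [[1, 2], [3, 4]],   # identical frame is dropped
                        [[5, 6], [7, 8]]])
```

Packing many such voxel planes into one integrated image lets a standard 2D compression method act on the whole video block at once, with the reference information recording where each voxel's pixels landed.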
Abstract:
There is provided an image processing apparatus including a stereo matching unit configured to obtain right and left disparity images by using stereo matching, based on a pair of images captured by right and left cameras, respectively, a filter processing unit configured to perform filter processing on the disparity images, and a first merging unit configured to make a comparison, in the disparity images that have undergone the filter processing, between disparity values at mutually corresponding positions in the right and left disparity images and to merge the disparity values of the right and left disparity images based on a comparison result.
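The merging step is a left-right consistency check: a disparity survives only if the left map and the right map agree at corresponding positions. A sketch on a single scanline, assuming the common convention that left disparity `d` at column `x` corresponds to column `x - d` in the right map; the tolerance and the invalid marker are invented.

```python
def merge_disparities(left, right, tol=1):
    """Merge filtered left/right disparity scanlines: keep a pixel's
    disparity only when the left value and the right value at the
    position it points to agree within `tol`; otherwise mark the pixel
    invalid (-1)."""
    w = len(left)
    merged = []
    for x, d in enumerate(left):
        xr = x - d                      # corresponding column in the right map
        if 0 <= xr < w and abs(right[xr] - d) <= tol:
            merged.append(d)
        else:
            merged.append(-1)           # left-right inconsistency
    return merged
```

Disagreements typically arise at occlusions, where a point is visible to only one camera, so the merge suppresses exactly the matches stereo matching gets wrong.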
Abstract:
A hard disk drive stores hierarchical image data; a speed map holding, for each tile image of a predetermined size obtained by partitioning the image, an index of the processing time required to render that tile image; and scenario data which defines viewpoint shifting. In a control unit of an information processing apparatus having a function of displaying an image, an input information acquisition unit acquires information with respect to the user's input operation via an input device. A loading unit loads the data necessary for image display from the hard disk drive. A shifting condition adjustment unit adjusts the viewpoint shifting speed based upon the speed map. A frame coordinate determination unit sequentially determines the frame coordinates of a display area. A decoding unit decodes compressed image data. A display image processing unit renders a display image.
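The shifting condition adjustment might reduce to capping the viewpoint speed by the cost of the tiles along the path. This is a speculative sketch: the per-frame budget and the use of the worst-case tile index are assumptions.

```python
def adjust_shift_speed(requested_speed, tile_indices, budget=1.0):
    """Scale down the requested viewpoint shifting speed when the speed
    map reports expensive tiles along the shift path, so that rendering
    keeps up. `tile_indices` are the speed-map processing-time indices
    of the tiles the shift will traverse; `budget` is an assumed
    per-frame rendering budget in the same units."""
    cost = max(tile_indices)            # worst per-tile processing time index
    if cost * requested_speed <= budget:
        return requested_speed          # cheap tiles: shift at full speed
    return budget / cost                # expensive tiles: slow the shift
```

Slowing the shift over heavy tiles keeps the scenario-driven viewpoint movement smooth instead of stuttering when decode and render times spike.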
Abstract:
An input information acquisition unit of an information processing device receives a user input. An imaging condition control unit initiates imaging using an imaging condition determined according to the user input or a result of analyzing a captured image. An imaging condition storage unit stores an imaging condition table that maps target functions to imaging conditions. First and second image analysis units acquire images captured by first and second cameras installed in the imaging device and perform the necessary image analysis. An information integration unit integrates the images captured by the pair of cameras and the results of analysis. An image data generation unit generates data for an image to be output as a result of the process.
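The imaging condition table is essentially a lookup from target function to per-camera conditions. The function names and condition fields below are invented for illustration; the abstract does not specify them.

```python
# Hypothetical imaging-condition table mapping a target function to the
# shooting conditions of each of the two cameras (all names invented).
CONDITION_TABLE = {
    "depth_map":     {"cam1": {"exposure": "normal"},
                      "cam2": {"exposure": "normal"}},
    "face_tracking": {"cam1": {"exposure": "short"},
                      "cam2": {"exposure": "long"}},
}

def conditions_for(function):
    """Look up the imaging conditions for the target function selected
    by the user input or by analysis of a captured image."""
    return CONDITION_TABLE[function]

print(conditions_for("face_tracking")["cam2"]["exposure"])  # → long
```

Keeping the mapping in a table lets the imaging condition control unit switch both cameras' settings by a single lookup whenever the target function changes.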