Abstract:
A frame sequence of moving picture data is divided into a tile image sequence, and the color space of the tile image sequence is converted to generate a YCbCr image sequence. Each frame is reduced to ½ in the vertical and horizontal directions, and a compression process is carried out to generate compression data of a reference image. The compression data of the reference image is decoded and decompressed in the same manner as at image display time to restore a YCbCr image as the reference image, and a difference image sequence is generated from the reference image and the original YCbCr image sequence. Then, compression data of a difference image is generated, and compression data obtained by connecting the compression data of the reference image and the compression data of the difference image is generated for every four frames of the tile image sequence.
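The reference/difference scheme above can be sketched on a single channel plane. This is a minimal illustration, not the patented codec: 4-bit quantization stands in for the unspecified compression step, and 2×2 averaging stands in for the ½ reduction.

```python
def compress_frame(plane):
    """Sketch on one channel of a YCbCr frame (list of rows of 0-255 ints).
    A real codec replaces the 4-bit quantization used here as a stand-in."""
    h, w = len(plane), len(plane[0])
    # reduce to 1/2 vertically and horizontally by 2x2 averaging
    ref = [[(plane[2*y][2*x] + plane[2*y][2*x+1] +
             plane[2*y+1][2*x] + plane[2*y+1][2*x+1]) // 4
            for x in range(w // 2)] for y in range(h // 2)]
    ref_q = [[v // 16 for v in row] for row in ref]  # "compress" the reference
    # decode exactly as at display time: dequantize, upsample back to full size
    dec = [[ref_q[y // 2][x // 2] * 16 for x in range(w)] for y in range(h)]
    # difference image between the original and the decoded reference
    diff = [[plane[y][x] - dec[y][x] for x in range(w)] for y in range(h)]
    return ref_q, diff
```

Decoding the reference with the same steps as the display side guarantees that adding the difference image back reproduces the original exactly.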
Abstract:
Methods and apparatus provide for: obtaining a data sequence representative of a three-dimensional parameter space; forming a plurality of coding units by dividing the data sequence in three dimensions; generating, for each of the plurality of coding units, (i) a palette defined by two representative values and (ii) a plurality of indices, each index representing a respective original data point as a value, determined by linear interpolation, that is one of the representative values or an intermediate value between them; and setting the palette and the plurality of indices of each of the coding units as compressed data.
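A two-value palette with linearly interpolated indices can be sketched as below. This is an illustrative one-dimensional sketch, assuming min/max as the two representatives and four interpolation levels; the abstract does not fix either choice.

```python
def encode_unit(values, levels=4):
    """Sketch: the palette is the (min, max) pair of the coding unit;
    each index maps an original data point to the nearest of `levels`
    values interpolated linearly between the two representatives."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return (lo, hi), [0] * len(values)
    idx = [round((v - lo) / (hi - lo) * (levels - 1)) for v in values]
    return (lo, hi), idx

def decode_unit(palette, indices, levels=4):
    """Reconstruct each data point by interpolating between the palette."""
    lo, hi = palette
    return [lo + (hi - lo) * i / (levels - 1) for i in indices]
```

Only the two representatives and a small index per data point need to be stored, which is the source of the compression.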
Abstract:
A game controller includes a plurality of LEDs formed on the rear of a case. The plurality of LEDs are arranged two-dimensionally in a layout area. The game controller has a plurality of PWM control units which are provided inside the case and control the lighting of the respective LEDs. The PWM control units control the lighting of the LEDs based on a control signal from a game apparatus. The game apparatus acquires a captured image of the game controller and determines the position of the game controller in the captured image based on the positions of the LEDs in the captured image.
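One simple way to locate the controller from the LED positions is a brightness centroid; this is a hedged sketch only, as the abstract does not specify the position-acquisition method, and the `threshold` parameter is an illustrative assumption.

```python
def led_centroid(image, threshold=200):
    """Sketch: estimate the controller position in a captured image as
    the centroid of pixels at or above `threshold`, assumed to be the
    lit LEDs. `threshold` is illustrative, not from the abstract."""
    pts = [(x, y) for y, row in enumerate(image)
           for x, v in enumerate(row) if v >= threshold]
    if not pts:
        return None  # no pixel bright enough to be an LED
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
```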
Abstract:
An image processing device includes: an input information obtaining section for obtaining input information for changing a display region in an image as a display object; a display image processing section for generating, as a display image, an image inside the display region determined on the basis of the input information; and a display section for displaying the generated display image on a display. When the input information obtaining section obtains input information for scaling the display image, the display image processing section scales the display image according to the input information and performs image manipulation that makes the visibility of a region in a predetermined range, including the focus serving as the center of scaling in the image plane, different from the visibility of other regions.
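The visibility manipulation is not specified in the abstract; as one hypothetical instance, the sketch below dims everything outside a square region around the scaling focus so that the focus region stands out.

```python
def emphasize_focus(image, focus, radius):
    """Sketch of one possible visibility manipulation (an assumption,
    not the patented method): halve the brightness of pixels outside a
    square of half-side `radius` around the scaling focus."""
    fx, fy = focus
    return [[v if abs(x - fx) <= radius and abs(y - fy) <= radius else v // 2
             for x, v in enumerate(row)]
            for y, row in enumerate(image)]
```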
Abstract:
An original image to be edited is displayed using hierarchical data. When a user draws a figure in a region of the image as an edit action, an image data updating unit generates a layer having a hierarchical structure composed of the rendered region only. More specifically, the image of the region to be edited is used as the lowermost hierarchical level, and the hierarchical levels above it are generated by reducing the lowermost level as appropriate, so as to produce hierarchical data. During image display, when it is determined that the updated region is contained in a frame to be displayed anew, the image of the layer is displayed by superposing it on the frame generated from the original hierarchical data.
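Building the layer's hierarchy from the edited region can be sketched as a reduction pyramid. This sketch assumes a square, power-of-two region and 2×2 averaging as the reduction; the abstract leaves the reduction method open.

```python
def build_layer(region):
    """Sketch: use the edited region as the lowermost hierarchical level
    and derive each upper level by halving the one below it (2x2
    averaging) until a single pixel remains."""
    levels = [region]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        h, w = len(prev), len(prev[0])
        levels.append([[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                         prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) // 4
                        for x in range(w // 2)] for y in range(h // 2)])
    return levels  # lowermost level first, coarsest level last
```

Because the hierarchy covers only the rendered region, the edit can be stored and superposed without regenerating the full original hierarchical data.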
Abstract:
An imaging device includes a first camera and a second camera and shoots the same object under different shooting conditions. A shot-image data acquirer of an image analyzer acquires data of two images shot simultaneously by the imaging device. A correcting section aligns the luminance value distributions of the two images by correcting one of them. A correction table managing section switches or generates the correction table to be used according to the function implemented by the information processing device. A correction table storage stores the correction table showing the correspondence relationship between the luminance values before and after correction. A depth image generator performs stereo matching using the two images and generates a depth image.
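A before/after luminance correction table can be built by histogram matching, which is one standard way to align two luminance distributions (the abstract does not name the method used, so this is an illustrative sketch).

```python
def correction_table(src_hist, ref_hist):
    """Sketch: build a before->after luminance correction table by
    histogram matching. Each source level maps to the first reference
    level whose cumulative frequency reaches that of the source level."""
    def cdf(hist):
        total, acc, out = sum(hist), 0, []
        for count in hist:
            acc += count
            out.append(acc / total)
        return out
    src_cdf, ref_cdf = cdf(src_hist), cdf(ref_hist)
    return [next(i for i, r in enumerate(ref_cdf) if r >= s)
            for s in src_cdf]
```

Applying the resulting table to one image brings its luminance distribution toward the other's before stereo matching.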
Abstract:
An image storage section 48 stores shot image data with a plurality of resolutions transmitted from an imaging device. Depth images 152 with a plurality of resolutions are generated using stereo images at a plurality of resolution levels taken from the shot image data (S10). Next, template matching is performed using a reference template image 154 that represents a desired shape and size, thus extracting a candidate area for a target picture having that shape and size for each distance range associated with one of the resolutions (S12). A more detailed analysis is performed on the extracted candidate areas using the shot images stored in the image storage section 48 (S14). In some cases, a further image analysis is performed, based on the analysis result, using a shot image at a higher resolution level (S16a and S16b).
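The template-matching step (S12) can be sketched with a sum-of-absolute-differences search; the actual matching measure and threshold are assumptions here. Matching the same fixed-size template at each resolution level covers a different distance range, since a picture's apparent size scales with its distance from the camera.

```python
def match_template(image, template, threshold):
    """Sketch: exhaustive template matching by sum of absolute
    differences (SAD) on one resolution level; positions scoring at or
    below `threshold` become candidate areas for the target picture."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    candidates = []
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = sum(abs(image[y + j][x + i] - template[j][i])
                      for j in range(th) for i in range(tw))
            if sad <= threshold:
                candidates.append((x, y))
    return candidates
```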
Abstract:
A viewpoint detection unit detects a user viewing a stereoscopic image, which includes parallax images of a subject as viewed from a predetermined position defined as a reference view position, and tracks the viewpoint of the detected user. If the speed of movement of the viewpoint becomes equal to or higher than a predetermined level, a motion parallax correction unit determines an amount of motion parallax correction for the parallax images on the basis of the amount of movement of the viewpoint, so as to generate a stereoscopic image corrected for motion parallax. If the speed of movement of the viewpoint subsequently becomes lower than the predetermined level, the unit generates stereoscopic images by changing the amount of motion parallax correction in steps until the parallax images return to those as seen from the reference view position.
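The stepwise return of the correction amount can be sketched as a per-frame update rule; the function and parameter names below are illustrative assumptions, not taken from the abstract.

```python
def update_correction(current, target, speed, threshold, step):
    """Sketch: while the viewpoint moves at or above `threshold`, track
    the full correction amount derived from the viewpoint movement;
    once it slows down, change the amount in steps toward zero instead
    of snapping straight back to the reference view position."""
    if speed >= threshold:
        return target
    if abs(current) <= step:
        return 0.0  # parallax images are back at the reference view
    return current - step if current > 0 else current + step
```

Stepping the amount back gradually avoids an abrupt jump in the displayed parallax when the viewer stops moving.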
Abstract:
A tile image sequence 250, obtained by dividing a frame into pieces of a predetermined size, is further divided into another predetermined size on the image plane to generate voxels (for example, a voxel 252) (S10). If redundancy exists in the space direction or the time direction, the data is reduced in that direction (S12), and the sequences in the time direction are deployed on a two-dimensional plane (S14). The voxel images are placed on an image plane of a predetermined size to generate one integrated image 258 (S16). The pixels are then grouped according to the grouping pattern that exhibits the minimum quantization error, and the pixels of each group are placed together in the region of each voxel image (integrated image 262) (S18). The integrated image 262 after the rearrangement is compressed in accordance with a predetermined compression method to generate a compressed image 266 and reference information 264 for locating a needed pixel (S20).
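Steps S12 and S14 can be sketched for one voxel. This sketch makes a simplifying assumption: temporal redundancy is detected only when all frames are identical, and the surviving frames are deployed side by side on the plane.

```python
def pack_voxel(voxel):
    """Sketch of S12 and S14: if the frames of a voxel are all identical
    (redundancy in the time direction), keep only the first one; then
    deploy the remaining frame sequence on a two-dimensional plane."""
    frames = voxel[:1] if all(f == voxel[0] for f in voxel) else voxel
    # concatenate the frames horizontally on one image plane
    return [sum((f[y] for f in frames), []) for y in range(len(frames[0]))]
```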
Abstract:
A hard disk drive stores hierarchical image data, a speed map, and scenario data that defines viewpoint shifting. The speed map holds, for each tile image of a predetermined size obtained by partitioning the image, an index of the processing time required to render that tile image. In a control unit of an information processing apparatus having a function of displaying an image, an input information acquisition unit acquires information on the user's input operation via an input device. A loading unit loads the data necessary for image display from the hard disk drive. A shifting condition adjustment unit adjusts the viewpoint shifting speed based upon the speed map. A frame coordinate determination unit sequentially determines the frame coordinates of the display area. A decoding unit decodes compressed image data. A display image processing unit renders the display image.
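The speed-map-based adjustment can be sketched as capping the shifting speed by the cost of the slowest tile on the viewpoint's path. All names and units below are illustrative assumptions; the abstract does not give the adjustment rule.

```python
def adjust_shift_speed(tiles, speed_map, budget, requested):
    """Sketch: cap the viewpoint shifting speed using the speed map so
    that the most expensive tile entering the display area can still be
    rendered within the per-frame time `budget`."""
    worst = max(speed_map[t] for t in tiles)  # slowest tile on the path
    return min(requested, budget / worst)
```

Slowing the viewpoint where tiles are expensive to render keeps decoding and rendering in step with the display.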