Abstract:
A game controller includes a plurality of LEDs formed on the rear of a case. The plurality of LEDs are arranged two-dimensionally within a layout area. The game controller has a plurality of PWM control units which are provided inside the case and control the lighting of the plurality of LEDs, respectively. The PWM control units control the lighting of the LEDs based on a control signal from a game apparatus. The game apparatus acquires a captured image of the game controller, and acquires the position of the game controller in the captured image based on the positions of the LEDs in the captured image.
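As an informal illustration only, the position acquisition described above could be realized along the following lines, assuming the captured image is available as a grayscale NumPy array; the brightness threshold, centroid approach, and function name are hypothetical and not taken from the abstract:

```python
import numpy as np

def estimate_controller_position(frame: np.ndarray, threshold: int = 240):
    """Estimate the controller position in a captured frame from bright LED pixels.

    frame: 2D grayscale image (uint8). Returns the (x, y) centroid of pixels at or
    above the brightness threshold, or None if no LED-like pixels are found.
    """
    ys, xs = np.nonzero(frame >= threshold)      # candidate LED pixels
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())    # centroid approximates the LED layout area
```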
Abstract:
An image acquisition section of an information processor acquires stereo images from an imaging device. An input information acquisition section acquires an instruction input from a user. A depth image acquisition portion of a position information generation section uses the stereo images to generate a depth image representing the position distribution, in the depth direction, of subjects existing in the field of view of the imaging device. A matching portion first adjusts the size of a reference template image in accordance with the position of each subject in the depth direction represented by the depth image, and then performs template matching on the depth image, thus identifying the position of a target having a given shape and size in the three-dimensional space. An output information generation section generates output information by performing necessary processes based on the target position.
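A minimal sketch of the depth-scaled template matching described above, assuming OpenCV and a floating-point depth image; the reference depth, scaling rule, and function names are illustrative assumptions rather than details of the abstract:

```python
import cv2
import numpy as np

def match_target(depth_image: np.ndarray, template: np.ndarray,
                 subject_depth: float, reference_depth: float = 1.0):
    """Resize the reference template according to the subject's depth, then match.

    Apparent size falls off with distance, so the template is scaled by
    reference_depth / subject_depth before template matching on the depth image.
    Returns the top-left corner of the best match and its matching score.
    """
    scale = reference_depth / subject_depth
    scaled = cv2.resize(template, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_LINEAR)
    result = cv2.matchTemplate(depth_image.astype(np.float32),
                               scaled.astype(np.float32), cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score
```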
Abstract:
A frame sequence of moving picture data is divided into a tile image sequence 250, and the color space of the tile image sequence 250 is converted to generate a YCbCr image sequence 252 (S10). Each frame is reduced to ½ its size in the vertical and horizontal directions (S12), and a compression process is carried out to generate compression data 260 of a reference image (S14). The compression data 260 of the reference image is decoded and decompressed in the same manner as upon image display to restore a YCbCr image as the reference image, and a difference image sequence 262 is generated from the reference image and the original YCbCr image 252 (S16). Then, compression data 266 of a difference image is generated (S18), and compression data 268 obtained by connecting the compression data 260 of the reference image and the compression data 266 of the difference image is generated for every four frames of a tile image (S20).
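The reference/difference pipeline can be sketched roughly as follows, assuming one YCbCr frame as a NumPy array; coarse quantization stands in here for the unspecified compression step and is purely an assumption:

```python
import numpy as np

def make_reference_and_difference(ycbcr: np.ndarray, q: int = 16):
    """Build reference-image compression data and the matching difference image.

    ycbcr: one frame, shape (H, W, 3), float32. The reference image is the frame
    reduced to 1/2 in both directions; quantization by q is a placeholder for the
    compression process. The difference image is the original minus the decoded,
    re-enlarged reference, mirroring what a decoder reconstructs at display time.
    """
    reduced = ycbcr[::2, ::2, :]                          # 1/2 vertically and horizontally
    compressed = np.round(reduced / q).astype(np.int16)   # placeholder compression
    decoded = compressed.astype(np.float32) * q           # decode as at display time
    restored = decoded.repeat(2, axis=0).repeat(2, axis=1)
    difference = ycbcr - restored[:ycbcr.shape[0], :ycbcr.shape[1], :]
    return compressed, difference
```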
Abstract:
An information processor includes a detection plane definition portion that defines a detection plane in a 3D space of a camera coordinate system of a first camera and calculates vertex coordinates of a detection area by projecting the detection plane onto the plane of a left image shot by the first camera. A feature quantity calculation portion generates feature point images of the left and right images. A parallax correction area derivation portion derives a parallax correction area. The parallax correction area is obtained by moving an area of the right image identical to the detection area of the left image to the left by as much as the parallax appropriate to the position of the detection plane in the depth direction. A matching portion performs block matching on the feature point images of each area, thus deriving a highly rated feature point. A position information output portion generates information to be used by an output information generation section based on the matching result and outputs that information.
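One way the parallax correction area could be derived, shown as a small sketch; the focal length, baseline, and the standard stereo relation d = f·B / Z are assumptions introduced for illustration:

```python
def parallax_correction_area(detection_area, plane_depth_m,
                             focal_px=800.0, baseline_m=0.06):
    """Shift the right-image area left by the parallax expected at the detection plane.

    detection_area: (x, y, w, h) of the detection area in the left image.
    The parallax is approximated by d = focal_px * baseline_m / plane_depth_m.
    """
    x, y, w, h = detection_area
    disparity = focal_px * baseline_m / plane_depth_m   # expected parallax in pixels
    return (x - int(round(disparity)), y, w, h)
```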
Abstract:
Frames of a moving image are configured as a hierarchical structure where each frame is represented with a plurality of resolutions. In hierarchical data representing a frame at each time step, some layers are set as original image layers and the other layers are set as difference image layers. When an area is to be displayed at the resolution of a difference image layer, the pixel values of the corresponding area retained by a lower-resolution original image layer, with that image enlarged to the resolution of the difference image layer, are added to the respective pixel values of the difference image of the area. The layer set as a difference image layer is switched to another layer as time passes.
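A minimal sketch of the reconstruction step for an area held as a difference image layer, assuming the two layers differ by an integer resolution factor and simple pixel repetition is used for enlargement (both assumptions):

```python
import numpy as np

def reconstruct_difference_layer(lower_res_original: np.ndarray,
                                 difference_layer: np.ndarray) -> np.ndarray:
    """Add the enlarged lower-resolution original layer to the difference layer.

    The corresponding area of the original image layer is enlarged (here by pixel
    repetition) to the difference layer's resolution, then added pixel by pixel.
    """
    sy = difference_layer.shape[0] // lower_res_original.shape[0]
    sx = difference_layer.shape[1] // lower_res_original.shape[1]
    enlarged = lower_res_original.repeat(sy, axis=0).repeat(sx, axis=1)
    return enlarged[:difference_layer.shape[0], :difference_layer.shape[1]] + difference_layer
```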
Abstract:
A tile image sequence, obtained by dividing a frame into a predetermined size, is further divided into another predetermined size on the image plane to generate voxels. If redundancy exists in the space direction or the time direction, the data is reduced in that direction, and the sequences in the time direction are deployed on a two-dimensional plane. The voxel images are placed on an image plane of a predetermined size to generate one integrated image. In the grouping pattern which exhibits the minimum quantization error, pixels are collectively placed in the region of each voxel image for each group (integrated image). The integrated image after this rearrangement is compressed in accordance with a predetermined compression method to generate a compressed image and reference information for determining the position of a needed pixel.
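As a rough sketch of the voxel deployment step only (redundancy reduction and the error-minimizing grouping are omitted), assuming a tile's frame sequence as a (T, H, W) array with H and W multiples of the block size:

```python
import numpy as np

def deploy_voxels(tile_seq: np.ndarray, block: int = 4) -> np.ndarray:
    """Divide a tile-image sequence into voxels and lay each voxel's frames out in 2D.

    Each voxel covers a block x block area over all T frames; its T slices are
    placed side by side horizontally, and the voxels are stacked vertically to
    form one integrated image.
    """
    t, h, w = tile_seq.shape
    rows = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            voxel = tile_seq[:, by:by + block, bx:bx + block]     # (T, block, block)
            rows.append(np.hstack([voxel[i] for i in range(t)]))  # deploy time on the plane
    return np.vstack(rows)
```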
Abstract:
An image pickup apparatus includes: an image data production unit configured to produce data of a plurality of kinds of images from an image frame, obtained by picking up an image of an object as a moving picture, for each of the pixel strings which constitute a row; and an image sending unit configured to extract a pixel string in a region requested by a host terminal from within the data of each of the plurality of kinds of images and connect the pixel strings to each other, for each unit number of pixels for connection determined on the basis of a given rule, to produce a stream and then transmit the stream to the host terminal. The image sending unit switchably determines whether the unit number of pixels for connection is to be set to a fixed value or a variable value in response to the kind of each image.
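A toy sketch of the stream construction, assuming the same requested column range for every image kind and plain Python lists for pixel rows; the per-kind connection units are passed in directly rather than derived from any particular rule:

```python
def build_stream(images, region, unit_pixels):
    """Connect pixel strings of several image kinds, row by row, into one stream.

    images: list of 2D pixel arrays (lists of rows), one per image kind, all with
    the same number of rows. region: (x0, x1) column range requested by the host
    terminal. unit_pixels: connection unit per kind, so a fixed or variable unit
    can be chosen for each kind of image.
    """
    x0, x1 = region
    stream = []
    for rows in zip(*images):                  # walk all kinds one row at a time
        for row, unit in zip(rows, unit_pixels):
            pixel_string = list(row[x0:x1])
            for i in range(0, len(pixel_string), unit):
                stream.append(pixel_string[i:i + unit])   # one connection unit per entry
    return stream
```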
Abstract:
A frame sequence of moving picture data is divided into a tile image sequence, and the color space of the tile image sequence is converted to generate a YCbCr image sequence. Each frame is reduced to ½ its size in the vertical and horizontal directions, and a compression process is carried out to generate compression data of a reference image. The compression data of the reference image is decoded and decompressed in the same manner as upon image display to restore a YCbCr image as the reference image, and a difference image sequence is generated from the reference image and the original YCbCr image. Then, compression data of a difference image is generated, and compression data obtained by connecting the compression data of the reference image and the compression data of the difference image is generated for every four frames of a tile image.
Abstract:
There is provided an image processing apparatus including a stereo matching unit configured to obtain right and left disparity images by using stereo matching, based on a pair of images captured by right and left cameras, respectively, a filter processing unit configured to perform filter processing on the disparity images, and a first merging unit configured to make a comparison, in the disparity images that have undergone the filter processing, between disparity values at mutually corresponding positions in the right and left disparity images and to merge the disparity values of the right and left disparity images based on a comparison result.
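The comparison-and-merge step can be illustrated with a left-right consistency check, one common way to realize it; the tolerance and the choice of zero as the invalid marker are assumptions for this sketch:

```python
import numpy as np

def merge_disparities(disp_left: np.ndarray, disp_right: np.ndarray,
                      tolerance: float = 1.0) -> np.ndarray:
    """Merge left and right disparity maps by comparing corresponding positions.

    For a pixel (x, y) in the left map with disparity d, the corresponding right-map
    position is (x - d, y). Where the two disparities agree within the tolerance the
    left value is kept; otherwise the pixel is marked invalid (0).
    """
    h, w = disp_left.shape
    merged = np.zeros_like(disp_left)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = int(round(x - d))
            if 0 <= xr < w and abs(disp_right[y, xr] - d) <= tolerance:
                merged[y, x] = d
    return merged
```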
Abstract:
Methods and apparatus provide for: obtaining a data sequence representative of a three-dimensional parameter space; forming a plurality of coding units by dividing the subject data sequence in three dimensions; generating, for each of the plurality of coding units, (i) a palette defined by two representative values and (ii) a plurality of indices, each index representing a respective original data point as a value, determined by linear interpolation, that is one of, or an intermediate value between, the representative values; and setting the palette and the plurality of indices for each of the coding units as compressed data.
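A small sketch of the palette-and-index idea for a single coding unit, assuming the two representative values are the unit's minimum and maximum and that four interpolation levels are used (both assumptions):

```python
import numpy as np

def compress_coding_unit(block: np.ndarray, levels: int = 4):
    """Compress one coding unit into a two-value palette plus interpolation indices.

    The palette holds the block's minimum and maximum; each data point is mapped to
    the nearest of `levels` values linearly interpolated between the two.
    """
    lo, hi = float(block.min()), float(block.max())
    if hi == lo:
        indices = np.zeros(block.shape, dtype=np.uint8)
    else:
        indices = np.round((block - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    return (lo, hi), indices

def decompress_coding_unit(palette, indices, levels: int = 4):
    """Rebuild approximate data points from the palette and indices."""
    lo, hi = palette
    return lo + indices.astype(np.float32) / (levels - 1) * (hi - lo)
```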