Abstract:
A game controller includes a plurality of LEDs formed on the rear of a case. The plurality of LEDs are arranged two-dimensionally in its layout area. The game controller has a plurality of PWM control units which are provided inside the case and control the lighting of the plurality of LEDs, respectively. The PWM control units control the lighting of the LEDs based on a control signal from a game apparatus. The game apparatus acquires a captured image of the game controller, and acquires the position of the game controller in the captured image based on the positions of the LEDs in the captured image.
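The abstract describes locating the controller from the LED positions in a captured image. A minimal sketch of one plausible approach, taking the controller position as the centroid of the detected LED blobs (the function name and centroid method are illustrative assumptions, not from the patent):

```python
def controller_position(led_positions):
    """Estimate the controller's position in the captured image as the
    centroid of the detected LED coordinates (hypothetical approach)."""
    if not led_positions:
        raise ValueError("no LEDs detected")
    n = len(led_positions)
    x = sum(p[0] for p in led_positions) / n
    y = sum(p[1] for p in led_positions) / n
    return (x, y)

# Four LEDs arranged two-dimensionally on the rear of the case
leds = [(100, 50), (140, 50), (100, 80), (140, 80)]
print(controller_position(leds))  # (120.0, 65.0)
```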
Abstract:
An image processing device includes: an input information obtaining section for obtaining input information for changing a display region in an image as a display object; a display image processing section for generating an image inside the display region determined on the basis of the input information as a display image; and a display section for displaying the generated display image on a display, wherein when the input information obtaining section obtains input information for scaling the display image, the display image processing section scales the display image according to the input information, and performs image manipulation that makes the visibility of a region within a predetermined range around the focus, i.e., the center of scaling in the image plane, different from the visibility of the other regions.
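One way to make the region around the scaling focus visually distinct from the rest of the image is to dim pixels outside a radius of the focus. The following is a minimal sketch under that assumption, operating on a grayscale image as a 2D list (the function and parameter names are hypothetical):

```python
def emphasize_focus(image, focus, radius, dim=0.5):
    """Keep pixels within `radius` of `focus` unchanged and dim the rest,
    so the region around the center of scaling stays visually distinct."""
    fx, fy = focus
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, v in enumerate(row):
            if (x - fx) ** 2 + (y - fy) ** 2 <= radius ** 2:
                new_row.append(v)          # inside the focus region: keep
            else:
                new_row.append(int(v * dim))  # outside: reduce brightness
        out.append(new_row)
    return out

img = [[100, 100], [100, 100]]
print(emphasize_focus(img, (0, 0), 1))  # [[100, 100], [100, 50]]
```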
Abstract:
An input information acquisition unit of an information processing device acknowledges a user input. An imaging condition control unit initiates imaging using an imaging condition determined according to the user input or to a result of analyzing a captured image. An imaging condition storage unit stores an imaging condition table that maps target functions to imaging conditions. First and second image analysis units acquire images captured by first and second cameras installed in the imaging device and perform the necessary image analysis. An information integration unit integrates the images captured by the pair of cameras and the results of the analysis. An image data generation unit generates data for an image to be output as a result of the process.
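The imaging condition table maps target functions to imaging conditions. A minimal sketch of such a table as a dictionary (the function names and condition fields are illustrative assumptions, not taken from the patent):

```python
# Hypothetical imaging-condition table: each target function maps to the
# camera settings used while that function is active.
IMAGING_CONDITIONS = {
    "face_tracking": {"exposure": "auto",  "resolution": (1280, 720), "fps": 30},
    "marker_detect": {"exposure": "short", "resolution": (640, 480),  "fps": 60},
}

def condition_for(function, table=IMAGING_CONDITIONS):
    """Look up the imaging condition for a target function; None if absent."""
    return table.get(function)

print(condition_for("marker_detect")["fps"])  # 60
```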
Abstract:
An image acquisition section of an information processor acquires stereo images from an imaging device. An input information acquisition section acquires an instruction input from a user. A depth image acquisition portion of a position information generation section uses the stereo images to generate a depth image representing the position distribution, in the depth direction, of subjects existing in the field of view of the imaging device. A matching portion first adjusts the size of a reference template image in accordance with the position of each subject in the depth direction represented by the depth image, then performs template matching on the depth image, thus identifying the position of a target having a given shape and size in three-dimensional space. An output information generation section generates output information by performing necessary processes based on the target position.
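Adjusting the template size by subject depth relies on apparent size scaling inversely with distance: an object twice as far away appears half as large. A minimal sketch of that size adjustment (function name and the pinhole-style scaling assumption are illustrative):

```python
def template_size_at_depth(ref_size, ref_depth, depth):
    """Scale a reference template inversely with subject distance, so the
    template matches the apparent size of the target at `depth`."""
    scale = ref_depth / depth
    w, h = ref_size
    return (round(w * scale), round(h * scale))

# A 40x40 template calibrated at 1 m shrinks to 20x20 for a subject at 2 m
print(template_size_at_depth((40, 40), 1.0, 2.0))  # (20, 20)
```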
Abstract:
A frame sequence of moving picture data is divided into a tile image sequence 250, and the color space of the tile image sequence 250 is converted to generate a YCbCr image sequence 252 (S10). Each frame is reduced to ½ in both the vertical and horizontal directions (S12), and a compression process is carried out to generate compression data 260 of a reference image (S14). The compression data 260 of the reference image is decoded and decompressed in the same manner as at image display time to restore a YCbCr image as the reference image, and a difference image sequence 262 is generated from the reference image and the original YCbCr images 252 (S16). Then, compression data 266 of a difference image is generated (S18), and compression data 268, obtained by connecting the compression data 260 of the reference image and the compression data 266 of the difference image, is generated for every four frames of a tile image (S20).
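The pipeline reduces each frame to half size, and the difference image is formed between the original and the restored (re-enlarged) reference image. A minimal sketch of those two steps on a grayscale 2D list, with 2x2 averaging standing in for the reduction and nearest-neighbour enlargement standing in for decoding (the helper names and interpolation choices are assumptions):

```python
def half_scale(img):
    """Reduce an image to 1/2 in each direction by averaging 2x2 blocks."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def upscale2(img):
    """Enlarge by 2 with nearest neighbour, standing in for the decoder."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def difference_image(original, reference):
    """Subtract the restored reference from the original, pixel by pixel."""
    up = upscale2(reference)
    return [[o - u for o, u in zip(orow, urow)]
            for orow, urow in zip(original, up)]

frame = [[1, 3], [5, 7]]
ref = half_scale(frame)             # [[4]]
print(difference_image(frame, ref))  # [[-3, -1], [1, 3]]
```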
Abstract:
An information processor includes a detection plane definition portion that defines a detection plane in the 3D space of the camera coordinate system of a first camera and calculates the vertex coordinates of a detection area by projecting the detection plane onto the plane of a left image shot by the first camera. A feature quantity calculation portion generates feature point images of the left and right images. A parallax correction area derivation portion derives a parallax correction area, obtained by moving, to the left, the area of the right image identical to the detection area of the left image by as much as the parallax appropriate to the position of the detection plane in the depth direction. A matching portion performs block matching on the feature point images of each area, thus deriving highly rated feature points. A position information output portion generates information to be used by an output information generation section based on the matching result and outputs that information.
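The parallax appropriate to a depth can be sketched with the standard stereo relation d = f·B/Z (disparity in pixels from focal length in pixels, baseline, and depth); shifting the right-image area by that amount gives the parallax correction area. The function names and rectangle convention below are illustrative assumptions:

```python
def parallax_pixels(focal_px, baseline_m, depth_m):
    """Stereo disparity for a plane at depth Z: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

def parallax_correction_area(detection_area, disparity):
    """Shift the left image's detection area leftward in the right image
    by the disparity for the detection plane's depth (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = detection_area
    return (x0 - disparity, y0, x1 - disparity, y1)

d = parallax_pixels(600, 0.06, 1.2)          # 30.0 pixels
print(parallax_correction_area((100, 50, 160, 90), d))
```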
Abstract:
Frames of a moving image are configured in a hierarchical structure in which each frame is represented at a plurality of resolutions. In the hierarchical data representing a frame at each time step, some layers are set as original image layers and the other layers as difference image layers. When an area is to be displayed at the resolution of a difference image layer, the image of the corresponding area retained by the original image layer of lower resolution is enlarged to the resolution of the difference image layer, and its pixel values are added to the respective pixel values of the difference image of the area. The layer set as a difference image layer is switched to another layer as time passes.
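The reconstruction step, enlarging the lower-resolution original layer and adding the difference values, can be sketched as follows for a 2:1 resolution ratio with nearest-neighbour enlargement (both are assumptions; the patent does not fix these details):

```python
def reconstruct_layer(lower, diff):
    """Rebuild a difference-layer image: enlarge the lower-resolution
    original layer by 2 (nearest neighbour) and add the stored differences."""
    out = []
    for y, drow in enumerate(diff):
        lrow = lower[y // 2]                 # source row in the lower layer
        out.append([lrow[x // 2] + d for x, d in enumerate(drow)])
    return out

lower = [[10]]                # original image layer (low resolution)
diff = [[1, 2], [3, 4]]       # difference image layer (high resolution)
print(reconstruct_layer(lower, diff))  # [[11, 12], [13, 14]]
```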
Abstract:
A tile image sequence obtained by dividing a frame into a predetermined size is further divided into another predetermined size on the image plane to generate voxels. If redundancy exists in a space direction or a time direction, the data is reduced in that direction, and the sequences in the time direction are deployed on a two-dimensional plane. The voxel images are placed on an image plane of a predetermined size to generate one integrated image. Using the grouping pattern that exhibits the minimum quantization error, the pixels of each group are collectively placed in the region of each voxel image of the integrated image. The integrated image after this re-placement is compressed in accordance with a predetermined compression method to generate a compressed image together with reference information for determining the position of a needed pixel.
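The redundancy reduction in the time direction can be illustrated with a trivial case: if every frame of a voxel is identical, the time axis carries no information and a single frame suffices. A minimal sketch under that assumption (the representation of a voxel as a list of 2D tiles over time, and the tags returned, are hypothetical):

```python
def reduce_voxel(voxel):
    """If all frames of a voxel (list of 2D tiles over time) are identical,
    the time direction is redundant and one frame represents the voxel."""
    if all(frame == voxel[0] for frame in voxel[1:]):
        return [voxel[0]], "time-reduced"
    return voxel, "full"

static = [[[1, 2]], [[1, 2]], [[1, 2]]]   # same tile in every frame
moving = [[[1, 2]], [[2, 1]]]             # tile changes over time
print(reduce_voxel(static)[1])  # time-reduced
print(reduce_voxel(moving)[1])  # full
```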
Abstract:
An image pickup apparatus includes: an image data production unit configured to produce data of a plurality of kinds of images, for each pixel string constituting a row, from an image frame obtained by picking up an image of an object as a moving picture; and an image sending unit configured to extract, from the data of each of the plurality of kinds of images, the pixel strings in a region requested by a host terminal, connect the pixel strings to each other in units of a number of pixels for connection determined on the basis of a given rule to produce a stream, and transmit the stream to the host terminal. The image sending unit switchably determines whether the unit number of pixels for connection is set to a fixed value or a variable value according to the kind of each image.
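Connecting pixel strings from several image kinds into one stream, a per-kind number of pixels at a time, can be sketched as a round-robin interleave (the function name, the flat-list representation of each image's pixel data, and the per-kind unit list are illustrative assumptions):

```python
def interleave(images, units):
    """Interleave pixel data from several image kinds into one stream,
    taking `units[k]` pixels from image k per round. Each unit may be a
    fixed value or chosen per image kind, as the abstract describes."""
    pos = [0] * len(images)
    stream = []
    while any(pos[k] < len(images[k]) for k in range(len(images))):
        for k, img in enumerate(images):
            stream.extend(img[pos[k]:pos[k] + units[k]])
            pos[k] += units[k]
    return stream

# Two image kinds: 2 pixels of the first, then 1 of the second, per round
print(interleave([[1, 2, 3, 4], [9, 9]], [2, 1]))  # [1, 2, 9, 3, 4, 9]
```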