Abstract:
An image processing device includes: an input information obtaining section for obtaining input information for changing a display region in an image as a display object; a display image processing section for generating, as a display image, an image inside the display region determined on the basis of the input information; and a display section for displaying the generated display image on a display, wherein, when the input information obtaining section obtains input information for scaling the display image, the display image processing section scales the display image according to the input information and performs image manipulation that makes the visibility of a region within a predetermined range centered on the focus of scaling in the image plane different from the visibility of other regions.
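As an illustration of the final step, the following Python sketch scales an image about a focus point and then blurs everything outside a radius around the focus so that its visibility differs from the rest. The function name, the FOCUS_RADIUS constant, and the choice of blur as the visibility manipulation are assumptions, not the device's specified method.

```python
# A minimal sketch, assuming blur as the "visibility" manipulation.
import numpy as np
from PIL import Image, ImageFilter

FOCUS_RADIUS = 120  # hypothetical radius of the high-visibility region, in pixels

def scale_about_focus(img: Image.Image, focus: tuple, factor: float) -> Image.Image:
    """Scale the image about `focus`, then blur the region outside the
    focus area so that its visibility differs from the rest."""
    w, h = img.size
    scaled = img.resize((int(w * factor), int(h * factor)), Image.BILINEAR)
    # Keep the focus point at the same screen position after scaling.
    fx, fy = int(focus[0] * factor), int(focus[1] * factor)
    left, top = fx - focus[0], fy - focus[1]
    view = scaled.crop((left, top, left + w, top + h))
    # Differentiate visibility: blur the periphery, keep the focus sharp.
    blurred = view.filter(ImageFilter.GaussianBlur(radius=4))
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((xx - focus[0]) ** 2 + (yy - focus[1]) ** 2) <= FOCUS_RADIUS ** 2
    out = np.where(mask[..., None], np.asarray(view), np.asarray(blurred))
    return Image.fromarray(out.astype(np.uint8))
```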
Abstract:
An input information acquisition unit of an information processing device receives user input. An imaging condition control unit initiates imaging using an imaging condition determined according to the user input or to a result of analyzing a captured image. An imaging condition storage unit stores an imaging condition table that maps target functions to imaging conditions. First and second image analysis units acquire images captured by first and second cameras installed in the imaging device and perform the necessary image analysis. An information integration unit integrates the images captured by the pair of cameras and the results of analysis. An image data generation unit generates data for an image output as a result of the process.
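The imaging condition table can be pictured as a mapping from target functions to per-camera conditions. The sketch below is hypothetical throughout: the function names, the ImagingCondition fields, and the conditions_for helper are placeholders for whatever the device actually stores.

```python
# A minimal sketch of an imaging condition table; all names are assumptions.
from dataclasses import dataclass

@dataclass
class ImagingCondition:
    exposure_ms: float   # exposure time per frame
    gain_db: float       # sensor gain
    resolution: tuple    # (width, height)

# Hypothetical table: one condition per camera for each target function.
IMAGING_CONDITION_TABLE = {
    "track_bright_marker": (ImagingCondition(2.0, 0.0, (640, 480)),
                            ImagingCondition(2.0, 0.0, (640, 480))),
    "face_recognition":    (ImagingCondition(16.0, 6.0, (1280, 720)),
                            ImagingCondition(16.0, 6.0, (1280, 720))),
}

def conditions_for(function: str):
    """Look up the imaging conditions to apply to the first and second
    cameras when the user input or image analysis selects `function`."""
    return IMAGING_CONDITION_TABLE[function]
```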
Abstract:
An image acquisition section of an information processor acquires stereo images from an imaging device. An input information acquisition section acquires an instruction input from a user. A depth image acquisition portion of a position information generation section uses the stereo images to generate a depth image representing the position distribution, in the depth direction, of subjects existing in the field of view of the imaging device. A matching portion first adjusts the size of a reference template image in accordance with the position of each subject in the depth direction represented by the depth image, and then performs template matching on the depth image, thus identifying the position of a target having a given shape and size in three-dimensional space. An output information generation section generates output information by performing the necessary processes based on the target position.
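The size adjustment exploits the fact that a subject's apparent size scales inversely with its depth. A minimal sketch using OpenCV, assuming a hypothetical reference depth REF_Z (the depth at which the template matches at full scale) and the helper name match_at_depth:

```python
# A minimal sketch of depth-adjusted template matching; REF_Z is an assumption.
import cv2
import numpy as np

REF_Z = 1.0  # hypothetical depth (metres) at which the template matches at scale 1

def match_at_depth(depth_image: np.ndarray, template: np.ndarray, subject_z: float):
    """Scale the template inversely with subject depth (apparent size is
    proportional to 1/Z), then run template matching on the depth image."""
    scale = REF_Z / subject_z
    h, w = template.shape[:2]
    resized = cv2.resize(template, (max(1, int(w * scale)), max(1, int(h * scale))))
    result = cv2.matchTemplate(depth_image, resized, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score
```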
Abstract:
A frame sequence of moving picture data is divided into a tile image sequence 250, and the color space of the tile image sequence 250 is converted to generate a YCbCr image sequence 252 (S10). Each frame is reduced to one half in the vertical and horizontal directions (S12), and a compression process is carried out to generate compression data 260 of a reference image (S14). The compression data 260 of the reference image is decoded and decompressed in the same manner as upon image display to restore a YCbCr image as the reference image, and a difference image sequence 262 is generated from the reference image and the original YCbCr images 252 (S16). Then, compression data 266 of a difference image is generated (S18), and compression data 268, obtained by connecting the compression data 260 of the reference image and the compression data 266 of the difference image, is generated for every four frames of a tile image (S20).
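The flow from S10 to S20 can be summarized in a short sketch. Here compress and decode are identity placeholders standing in for the actual codec, and rgb_to_ycbcr uses the standard ITU-R BT.601 conversion; everything else follows the step numbers above.

```python
# A minimal sketch of the S10-S20 flow; the codec is a placeholder.
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.168736, -0.331264, 0.5],
                  [0.5, -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 128.0
    return ycbcr

def compress(img):   # placeholder for the actual codec
    return img

def decode(data):    # placeholder for the matching decoder
    return data

def compress_tile_sequence(frames_rgb):
    """frames_rgb: four HxWx3 frames of one tile image."""
    ycbcr = [rgb_to_ycbcr(f.astype(np.float64)) for f in frames_rgb]   # S10
    reduced = [f[::2, ::2] for f in ycbcr]                             # S12: 1/2 each direction
    ref_data = [compress(f) for f in reduced]                          # S14
    # S16: restore the reference exactly as the display side would,
    # then enlarge and subtract to obtain the difference images.
    restored = [np.repeat(np.repeat(decode(d), 2, axis=0), 2, axis=1) for d in ref_data]
    diffs = [o - r[:o.shape[0], :o.shape[1]] for o, r in zip(ycbcr, restored)]
    diff_data = [compress(d) for d in diffs]                           # S18
    return {"reference": ref_data, "difference": diff_data}            # S20: per 4 frames
```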
Abstract:
An information processor includes a detection plane definition portion that defines a detection plane in the three-dimensional space of the camera coordinate system of a first camera and calculates the vertex coordinates of a detection area by projecting the detection plane onto the plane of a left image shot by the first camera. A feature quantity calculation portion generates feature point images of the left and right images. A parallax correction area derivation portion derives an area as a parallax correction area. The parallax correction area is obtained by moving, to the left, an area of the right image identical to the detection area of the left image by as much as the parallax appropriate to the position of the detection plane in the depth direction. A matching portion performs block matching on the feature point images of each area, thus deriving highly rated feature points. A position information output portion generates information to be used by an output information generation section based on the matching result and outputs that information.
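The leftward shift follows from standard stereo geometry: for focal length f, baseline B, and depth Z, the disparity is d = fB/Z. A minimal sketch, with hypothetical focal length and baseline values:

```python
# A minimal sketch of deriving the parallax correction area; the focal
# length and baseline constants are assumptions.
FOCAL_LENGTH_PX = 800.0   # assumed focal length in pixels
BASELINE_M = 0.06         # assumed distance between the two cameras

def parallax_correction_area(detection_area, plane_depth_m):
    """detection_area: (x, y, w, h) in the left image. Returns the
    corresponding area of the right image, moved to the left by the
    disparity appropriate to the detection plane's depth."""
    disparity = FOCAL_LENGTH_PX * BASELINE_M / plane_depth_m  # d = f*B/Z
    x, y, w, h = detection_area
    return (x - disparity, y, w, h)
```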
Abstract:
Frames of a moving image are configured in a hierarchical structure where each frame is represented at a plurality of resolutions. In the hierarchical data representing a frame at each time step, some layers are set as original image layers, and the other layers are set as difference image layers. When an area is to be displayed at the resolution of a difference image layer, the pixel values of the corresponding area retained by a lower-resolution original image layer, with that image enlarged to the resolution of the difference image layer, are added to the respective pixel values of the difference image of the area. The layer set as a difference image layer is switched to another layer as time passes.
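Reconstruction at a difference image layer is therefore an enlarge-and-add. A minimal numpy sketch, using nearest-neighbour enlargement as a stand-in for whatever interpolation the renderer actually uses:

```python
# A minimal sketch of reconstructing an area held as a difference layer.
import numpy as np

def reconstruct_area(diff_area: np.ndarray, original_lower: np.ndarray) -> np.ndarray:
    """diff_area: pixels of the difference image for the requested area.
    original_lower: the same area from a lower-resolution original image
    layer. The lower-resolution image is enlarged to the difference
    layer's resolution and added pixel by pixel."""
    fy = diff_area.shape[0] // original_lower.shape[0]
    fx = diff_area.shape[1] // original_lower.shape[1]
    enlarged = np.repeat(np.repeat(original_lower, fy, axis=0), fx, axis=1)
    return enlarged[:diff_area.shape[0], :diff_area.shape[1]] + diff_area
```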
Abstract:
An information processor acquires a stereo image from an imaging device. A detection plane definition portion defines a detection plane in the three-dimensional space of the camera coordinate system of a first camera. A feature quantity calculation portion generates feature point images of the left and right images. A parallax correction area derivation portion derives an area as a parallax correction area, which is obtained by moving, to the left, an area of the right image identical to the detection area of the left image. A matching portion performs matching on the feature point images of each area, thus deriving highly rated feature points. A position information output portion generates information to be used by an output information generation section based on the matching result.
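As one way to picture the matching step, the following sketch block-matches two equal-sized binary feature point images and scores each block by the count of co-occurring feature points; the block size and the scoring rule are illustrative assumptions.

```python
# A minimal sketch of block matching over feature point images.
import numpy as np

BLOCK = 8  # hypothetical block size in pixels

def block_match(left_feat: np.ndarray, right_feat: np.ndarray):
    """left_feat/right_feat: equal-sized binary feature point images of
    the detection area and the parallax correction area. Returns the
    most highly rated block position and its score."""
    best, best_score = None, -1
    h, w = left_feat.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            score = int(np.sum(left_feat[y:y+BLOCK, x:x+BLOCK] &
                               right_feat[y:y+BLOCK, x:x+BLOCK]))
            if score > best_score:
                best, best_score = (x, y), score
    return best, best_score
```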
Abstract:
An information processing device includes: an information processing section configured to detect the figure of a target object in an image captured from a movie of the target object and perform information processing on the detected figure; a main data generating section configured to generate data of a main image to be displayed as a result of the information processing; an auxiliary data generating section configured to generate data of an auxiliary image including the captured image; and an output data transmitting section configured to transmit the main image data and the auxiliary image data to an output device in relation to each other such that the main image and the auxiliary image are displayed together.
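One way to transmit the two kinds of data "in relation to each other" is to bundle them with a shared frame number so the output device can composite them into one displayed frame. The packet layout and field names below are hypothetical.

```python
# A minimal sketch of pairing main and auxiliary image data; the
# OutputPacket layout is an assumption.
from dataclasses import dataclass

@dataclass
class OutputPacket:
    frame_no: int        # ties the two images to the same moment
    main_image: bytes    # result of the information processing
    aux_image: bytes     # the captured image itself

def send(output_device, frame_no: int, main_image: bytes, aux_image: bytes):
    """Bundle the main and auxiliary images so the output device can
    display them together in a single frame."""
    output_device.transmit(OutputPacket(frame_no, main_image, aux_image))
```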
Abstract:
An input information obtaining portion of a control section obtains requests that a user inputs from an input device, including display region moving requests to enlarge, reduce, or scroll an image displayed on a display device and requests to generate or erase a viewport, change the size of a viewport, or move a viewport. A viewport control portion successively determines the number, arrangement, and size of viewports accordingly. A display region determining portion determines the region of the image to be displayed next in each viewport. A loading portion determines which tile images need to be newly loaded, and loads their data from a hard disk drive. A decoding portion decodes the data of the tile images used for rendering the image in each viewport. A display image processing portion updates the display region independently for each viewport.
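The loading portion's job can be sketched as set arithmetic over tile indices: compute the tiles covering each viewport's display region and load the ones not yet resident. The tile size and the Viewport fields below are assumptions.

```python
# A minimal sketch of per-viewport tile selection; TILE and the
# Viewport fields are assumptions.
from dataclasses import dataclass

TILE = 256  # hypothetical tile edge length in pixels

@dataclass
class Viewport:
    x: float      # top-left of the display region in image coordinates
    y: float
    width: float
    height: float

def tiles_for(vp: Viewport) -> set:
    """Tile indices needed to render this viewport's display region."""
    x0, y0 = int(vp.x) // TILE, int(vp.y) // TILE
    x1, y1 = int(vp.x + vp.width) // TILE, int(vp.y + vp.height) // TILE
    return {(tx, ty) for tx in range(x0, x1 + 1) for ty in range(y0, y1 + 1)}

def tiles_to_load(vp: Viewport, loaded: set) -> set:
    """Tiles that must be newly loaded from storage for this viewport."""
    return tiles_for(vp) - loaded
```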
Abstract:
An image pickup apparatus includes: an image data production unit configured to produce data of a plurality of kinds of images from a picked up image and successively output the data; an image synthesis unit configured to cyclically connect the data of the plurality of kinds of images for each pixel string within a range set in advance for each of the kinds of the images and output the connected data as a stream to produce a virtual synthetic image; and an image sending unit configured to accept, from a host terminal, a data transmission request that designates a rectangular region in the virtual synthetic image, extract and connect data from the stream and transmit the connected data as a new stream to the host terminal.
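The virtual synthetic image can be pictured as a cyclic row interleave: one pixel string from each kind of image in turn, concatenated into a single stream, from which a requested rectangle is cut back out. The fixed one-row-per-kind cycle in this sketch is an illustrative assumption.

```python
# A minimal sketch of the virtual synthetic image stream; the
# one-row-per-kind cycle and equal image heights are assumptions.
import numpy as np

def build_stream(images):
    """images: list of HxW arrays (the plural kinds of images). Rows are
    connected cyclically: kind0-row0, kind1-row0, ..., kind0-row1, ..."""
    rows = []
    for r in range(images[0].shape[0]):
        for img in images:
            rows.append(img[r])
    return rows  # each entry is one pixel string of the virtual image

def extract_region(stream_rows, n_kinds, kind, x, y, w, h):
    """Serve a host request: cut the rectangle (x, y, w, h) of one kind
    of image back out of the stream and return it as a new stream."""
    out = [stream_rows[r * n_kinds + kind][x:x + w] for r in range(y, y + h)]
    return np.concatenate(out)
```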