Abstract:
A pixel set formation unit in an image analysis unit of an image processing device forms pixel sets from original images subject to analysis. A principal analysis unit of the image analysis unit performs principal component analysis in units of pixel sets. A synthesis unit synthesizes the results of the per-pixel-set analysis so as to generate images of the eigenvectors having the size of the original images. An image generation unit displays the images of the eigenvectors and stores, in a generated image storage unit, data for an image generated by using the images of the eigenvectors.
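The per-pixel-set analysis described above can be sketched as follows. This is a minimal illustration, not the device's actual implementation: the function name, the block-shaped pixel sets, and the use of SVD to obtain the principal components are all assumptions.

```python
import numpy as np

def eigenvector_images(images, block=8, n_components=1):
    """Sketch: principal component analysis in units of pixel sets.

    images: array of shape (n_images, H, W); H and W are assumed
    divisible by `block`.  Each pixel set is the same block taken
    from every original image.  Returns eigenvector images having
    the size of the original images, one per component.
    """
    n, h, w = images.shape
    out = np.zeros((n_components, h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Flatten the block so each image contributes one sample vector.
            patch = images[:, y:y+block, x:x+block].reshape(n, -1)
            patch = patch - patch.mean(axis=0)
            # PCA via SVD; rows of vt are eigenvectors of the covariance.
            _, _, vt = np.linalg.svd(patch, full_matrices=False)
            for c in range(n_components):
                # Synthesis step: write each block's eigenvector back
                # into its position in the full-size image.
                out[c, y:y+block, x:x+block] = vt[c].reshape(block, block)
    return out
```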
Abstract:
The present invention provides an information processing device, an information processing method, a program, and an information storage medium with which the accuracy of detecting whether or not an object is in contact with a subject is improved compared with conventional techniques. A frame image acquiring section (32) acquires plural frame images that include a subject region in which an image of a subject appears and that are taken at timings different from each other. A subject partial region identifying section (38) identifies, for each of the frame images, plural subject partial regions that are each part of the subject region and differ from each other in position within the subject region. A partial region feature identifying section (40) identifies, on the basis of the image feature of the image occupying each of the mutually associated subject partial regions in the frame images, partial region features indicating variation in that image feature. A contact determining section (42) determines whether or not the object is in contact with the subject on the basis of a relationship among the partial region features each associated with a respective one of the plural subject partial regions.
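The determination flow above can be sketched as follows. Everything concrete here is an assumption for illustration: mean luminance as the image feature, temporal standard deviation as the partial region feature, and a "one region varies much more than the rest" rule as the relationship used for the contact decision.

```python
import numpy as np

def detect_contact(frames, regions, ratio_threshold=3.0):
    """Sketch of the contact decision (hypothetical criteria).

    frames: list of 2-D luminance arrays taken at different timings.
    regions: list of (y0, y1, x0, x1) subject partial regions.
    The partial region feature of each region is the temporal
    variation (std. dev.) of its mean luminance; contact is judged
    present when one region varies far more strongly than the others.
    """
    feats = []
    for (y0, y1, x0, x1) in regions:
        series = [f[y0:y1, x0:x1].mean() for f in frames]
        feats.append(np.std(series))  # partial region feature
    feats = np.array(feats)
    others = np.delete(feats, feats.argmax())
    baseline = others.mean() + 1e-9   # avoid division by zero
    return feats.max() / baseline > ratio_threshold
```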
Abstract:
In an image processing device, an information processing section performs information processing according to an instruction input by a user. An alpha buffer generating block of an image processing section generates an alpha buffer representing, in an image plane, the alpha value of each pixel when designated objects formed by a plurality of objects are collectively regarded as one object. A rendering block reads each piece of model data from an image data storage section, and renders an image that also includes objects other than the designated objects. A shading processing block approximately calculates degrees of occlusion of ambient light on the image plane, and subjects the rendered image to shading processing on the basis of a result of the calculation.
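The alpha buffer generation can be sketched as follows. The function name and the "over" compositing rule for merging coverage are assumptions; the abstract only states that the designated objects are collectively regarded as one object.

```python
import numpy as np

def merged_alpha_buffer(object_alphas):
    """Sketch: composite per-object alpha layers into one buffer.

    object_alphas: iterable of (H, W) arrays in [0, 1], one per
    designated object.  The combined per-pixel alpha
    1 - prod(1 - a_i) is the coverage the group would have if it
    were regarded as a single object.
    """
    buf = None
    for a in object_alphas:
        # Standard "over" accumulation of coverage.
        buf = a.copy() if buf is None else buf + a * (1.0 - buf)
    return buf
```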
Abstract:
Thresholds b1, b2, b3, and b4 are set in the luminance of a raw image. In an image 84a of the lowermost layer, the region in which the luminance is equal to or higher than b1 is left. In an image 84b above it, the region in which the luminance is equal to or higher than b2 is left. In an image 84c above that, the region in which the luminance is equal to or higher than b3 is left. In an image 84d of the uppermost layer, the region in which the luminance is equal to or higher than b4 is left. In each of these images, the alpha value of the remaining region is set to 0. The images are integrated with the color information of the raw image to generate final slice images. A display image is generated by stacking the generated slice images sequentially from the lowermost layer at predetermined intervals and performing drawing according to the viewpoint.
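The slice-image generation described above can be sketched as follows, assuming (for illustration only) a binary alpha channel and an RGBA layout for the final slices.

```python
import numpy as np

def make_slice_images(raw_luminance, raw_color, thresholds):
    """Sketch of the layered slice-image generation.

    raw_luminance: (H, W) luminance of the raw image.
    raw_color: (H, W, 3) color information of the raw image.
    thresholds: ascending luminance thresholds (b1, b2, ...).
    For the k-th layer, only the region with luminance >= b_k is
    kept; the alpha value of the other region is set to 0.  Each
    slice integrates the raw color with that alpha channel (RGBA).
    """
    slices = []
    for b in thresholds:
        alpha = (raw_luminance >= b).astype(raw_color.dtype)
        rgba = np.dstack([raw_color, alpha])
        slices.append(rgba)
    return slices
```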
Abstract:
In an image processing apparatus, a sensor output data acquisition unit acquires data of layered image information from a sensor group. A slice image generation unit generates data of a two-dimensional image for each of the slice plane surfaces in which the distribution is acquired. An image axis conversion unit generates similar data of a two-dimensional image for a plurality of plane surfaces that are perpendicular to an axis different from the sensor axis. A slice image management unit of a display processing unit manages a slice image that is used for drawing according to the position of a viewpoint or the like. A memory for drawing sequentially stores data of an image necessary for drawing. An additional object image storage unit stores image data of an object that is additionally displayed, such as a cursor. An image drawing unit draws a three-dimensional object using data of the slice image.
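The image-axis conversion can be sketched as follows. It assumes, for illustration, that the sensor-axis slice images have been stacked into a 3-D volume whose axis 0 is the sensor axis; re-slicing along another axis is then an axis permutation of that volume.

```python
import numpy as np

def convert_image_axis(slice_stack, new_axis):
    """Sketch of the image-axis conversion.

    slice_stack: (D, H, W) volume assembled from the slice images
    acquired along the sensor axis (axis 0).  Returns the list of
    two-dimensional images for the plane surfaces perpendicular to
    `new_axis`.
    """
    moved = np.moveaxis(slice_stack, new_axis, 0)
    return [moved[i] for i in range(moved.shape[0])]
```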
Abstract:
There is provided an image processing apparatus including: an image information acquiring part configured to acquire two-dimensional image information including a texture image; a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes; a polygon model information updating part configured to update the position information about the at least one vertex and the other vertexes included in the polygon model information, on the basis of predetermined relations reflecting vertex movement information representing the movement of at least one of the vertexes; and a mapping part configured to map the texture image on the polygon model based on the updated polygon model information.
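The vertex-position update can be sketched as follows. The "predetermined relations" are not specified in the abstract, so a simple linear relation, per-vertex weights that scale the moved vertex's displacement, is assumed purely for illustration.

```python
import numpy as np

def update_vertex_positions(positions, displacement, weights):
    """Sketch: propagate one vertex's movement to the other vertexes
    through predetermined (here: linear, hypothetical) relations.

    positions: (N, 3) vertex position information.
    displacement: (3,) movement vector of the moved vertex.
    weights: (N,) coefficients, 1.0 for the moved vertex and
    smaller values for vertexes that should follow it less.
    Each vertex i is shifted by weights[i] * displacement.
    """
    return positions + np.outer(weights, displacement)
```

The texture image would then be mapped using the updated positions; the mapping step itself is omitted here.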
Abstract:
There is provided an information processing apparatus including a display controller that controls a display of an object to be displayed such that, when the object is displayed stereoscopically in accordance with 3D image data that can be displayed stereoscopically, a display format of the object positioned outside a fusional area of an observer differs from a display format of the object positioned inside the fusional area.
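The inside/outside decision can be sketched as follows. The abstract does not say how the fusional area is determined or which display formats are used, so the angular-disparity criterion, the numeric parameters, and the two format labels below are all assumptions for illustration.

```python
import math

def horizontal_disparity_deg(depth_m, screen_m, eye_sep_m=0.065):
    """Angular disparity (deg) of an object at depth_m relative to
    the screen plane at screen_m, for an assumed eye separation."""
    def vergence(d):
        return 2.0 * math.degrees(math.atan(eye_sep_m / (2.0 * d)))
    return vergence(depth_m) - vergence(screen_m)

def display_format(depth_m, screen_m, fusional_limit_deg=1.0):
    """Sketch: objects whose disparity keeps them inside the
    (hypothetical) fusional area keep the normal format; objects
    outside it get a different format."""
    inside = abs(horizontal_disparity_deg(depth_m, screen_m)) <= fusional_limit_deg
    return "normal" if inside else "suppressed"
```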