Abstract:
An information processing system that acquires image data captured by an image capturing device; identifies a density of distribution of a plurality of feature points in the acquired image data; and controls a display to display guidance information based on the density of the distribution of the plurality of feature points.
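For illustration only, a minimal sketch of how such density-based guidance could be computed: ORB keypoints are counted per grid cell, and a guidance message is chosen when too many cells are sparse. The detector, grid size, thresholds, and messages are assumptions, not the patented implementation.

```python
# Minimal sketch (not the patented implementation): estimate feature-point
# density on a grid and pick a guidance message when texture is too sparse.
# Assumes OpenCV (cv2) and NumPy; the image source and thresholds are
# illustrative placeholders.
import cv2
import numpy as np

def guidance_from_density(gray, grid=(4, 4), min_points_per_cell=10):
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(gray, None)

    h, w = gray.shape
    counts = np.zeros(grid, dtype=int)
    for kp in keypoints:
        x, y = kp.pt
        row = min(int(y * grid[0] / h), grid[0] - 1)
        col = min(int(x * grid[1] / w), grid[1] - 1)
        counts[row, col] += 1

    sparse_cells = int((counts < min_points_per_cell).sum())
    if sparse_cells > grid[0] * grid[1] // 2:
        return "Move the camera toward a more textured area"
    return "Enough feature points detected"

# A synthetic textured image stands in for a camera frame.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
print(guidance_from_density(frame))
```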
Abstract:
A method is provided for generating output image data. The method comprises receiving image data representing an input image, the input image containing at least one facial image. The method further comprises recognizing the facial image in the image data, and recognizing facial features of the facial image. The method further comprises generating data representing a makeup image based on the recognized facial features, the makeup image providing information assisting in the application of makeup. The method also comprises generating output image data representing the makeup image superimposed on the facial image.
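As a rough illustration of the superimposition step only (not the claimed method), the sketch below detects a face with an OpenCV Haar cascade, approximates cheek positions from the bounding box in place of true facial-feature recognition, and alpha-blends a makeup-guide overlay onto the facial image. The cascade path, input file name, and blend factors are assumptions.

```python
# Rough sketch, not the claimed method: detect a face, approximate cheek
# regions from the bounding box, and alpha-blend a "blush" overlay as a
# stand-in for a makeup guide image.
import cv2
import numpy as np

def overlay_makeup_guide(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    overlay = bgr_image.copy()
    for (x, y, w, h) in faces:
        # Crude cheek positions relative to the face box (illustrative only;
        # a real system would use recognized facial features).
        for cx in (x + w // 4, x + 3 * w // 4):
            cv2.ellipse(overlay, (cx, y + 2 * h // 3), (w // 8, h // 12),
                        0, 0, 360, (80, 80, 220), thickness=-1)
    # Superimpose the makeup image on the facial image.
    return cv2.addWeighted(overlay, 0.4, bgr_image, 0.6, 0)

image = cv2.imread("face.jpg")          # placeholder input path
if image is not None:
    cv2.imwrite("face_with_guide.jpg", overlay_makeup_guide(image))
```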
Abstract:
An information processing system that acquires video data captured by an image pickup unit; detects an object from the video data; detects a condition corresponding to the image pickup unit; and controls a display to display content associated with the object at a position other than a detected position of the object based on the condition corresponding to the image pickup unit.
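A toy sketch of the placement rule described here, under the assumption that the "condition of the image pickup unit" is camera shake estimated from optical flow: when the estimated motion is strong, the content is anchored to a fixed screen position rather than to the detected object. The thresholds, fallback position, and simulated frames are illustrative.

```python
# Illustrative sketch of the idea (not the patented system): if the camera
# appears to be moving strongly, anchor the label to a fixed screen corner
# instead of to the detected object's position.
import cv2
import numpy as np

def label_position(object_box, prev_gray, curr_gray, shake_threshold=2.0):
    # Estimate global camera motion from sparse optical flow.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return (10, 30)  # fall back to a fixed position
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    flow = (nxt - pts)[status.flatten() == 1]
    motion = float(np.linalg.norm(flow, axis=2).mean()) if len(flow) else 0.0

    x, y, w, h = object_box
    if motion > shake_threshold:
        return (10, 30)          # display content away from the object
    return (x, max(y - 10, 0))   # display content next to the object

prev = (np.random.rand(480, 640) * 255).astype(np.uint8)
curr = np.roll(prev, 3, axis=1)          # simulated camera pan
print(label_position((200, 150, 80, 80), prev, curr))
```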
Abstract:
There is provided an image processing device including a superimposition display position determining unit which determines, based on an environment map, the position of an object having a predetermined flat or curved surface among the objects imaged in an input image; a superimposition display image generating unit which generates a superimposition display image by setting superimposition display data at the position determined by the superimposition display position determining unit; an image superimposing unit which superimposes the superimposition display image on a visual field of a user; an operating object recognizing unit which recognizes an operating object imaged in the input image; and a process executing unit which executes a process corresponding to an item selected based on the position of the operating object recognized by the operating object recognizing unit.
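The following schematic sketch, in plain Python, shows one way the menu-on-a-surface interaction could be wired together. The surface box and fingertip position stand in for the environment-map and operating-object recognition steps, which are not implemented here; all names are hypothetical.

```python
# Schematic sketch only: place menu items on a detected flat surface and run
# the process for the item touched by a recognized operating object (e.g. a
# fingertip). The surface box and fingertip coordinates are placeholders for
# the environment-map and recognition steps in the abstract.
from typing import Callable, Dict, Tuple

MenuRegion = Tuple[int, int, int, int]   # x, y, width, height in image pixels

def build_menu_on_surface(surface_box: MenuRegion,
                          items: Dict[str, Callable[[], None]]):
    x, y, w, h = surface_box
    slot_w = w // max(len(items), 1)
    return {name: (x + i * slot_w, y, slot_w, h)
            for i, name in enumerate(items)}

def handle_selection(fingertip, regions, items):
    fx, fy = fingertip
    for name, (x, y, w, h) in regions.items():
        if x <= fx < x + w and y <= fy < y + h:
            items[name]()          # execute the process for the selected item
            return name
    return None

items = {"Play": lambda: print("playing"), "Stop": lambda: print("stopped")}
regions = build_menu_on_surface((100, 200, 300, 60), items)   # surface from env. map
print(handle_selection((180, 230), regions, items))           # fingertip position
```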
Abstract:
The present technology relates to a control device and a control method that make it possible to estimate a self-position reliably with lower power consumption. An activation determination unit selects some of a plurality of cameras as activation cameras to be used for self-position estimation. On the basis of that selection result, an activation switching unit sets the cameras selected as activation cameras to an activation state and causes them to photograph images, while suspending the activation of the other cameras. A self-position estimation unit performs self-position estimation on the basis of the images photographed by the activation cameras. Further, the activation determination unit selects the activation cameras again at a predetermined timing. The present technology can be applied to a self-position estimation system.
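A toy version of the selection loop, assuming a placeholder per-camera usefulness score: only the top-scoring cameras stay active, the rest are suspended, and the selection is redone at a fixed frame interval. None of the scoring or timing details come from the abstract.

```python
# Toy sketch of the selection loop described above (not the claimed
# algorithm): keep only the highest-scoring cameras active and re-run the
# selection every `reselect_every` frames.
import random

class Camera:
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.active = False

    def score(self):
        # Placeholder for "how useful is this view for self-position
        # estimation", e.g. the number of trackable feature points.
        return random.random()

def select_activation_cameras(cameras, keep=2):
    ranked = sorted(cameras, key=lambda c: c.score(), reverse=True)
    for cam in cameras:
        cam.active = cam in ranked[:keep]    # suspend the rest

cameras = [Camera(i) for i in range(4)]
reselect_every = 30
for frame in range(90):
    if frame % reselect_every == 0:          # predetermined timing
        select_activation_cameras(cameras)
    active_ids = [c.cam_id for c in cameras if c.active]
    # self-position estimation would use only images from `active_ids` here
print("active cameras after last selection:", active_ids)
```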
Abstract:
A first estimating unit estimates at least one of a position and an attitude of a predetermined object on the basis of an image of the periphery of the object obtained from an imaging device, and generates an estimation result that does not include an accumulated error. A second estimating unit estimates at least one of the position and the attitude of the object on the basis of the image, and generates an estimation result that includes an accumulated error. A correcting unit compares the estimation results of the first and second estimating units with each other and, on the basis of the result of the comparison, corrects a subsequent estimation result of the second estimating unit, that is, an estimation result produced after the one used for the comparison. An app executing unit performs predetermined data processing on the basis of the estimation result of the second estimating unit as corrected by the correcting unit.
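A one-dimensional toy example of the correction idea, with synthetic numbers: a drifting estimate is periodically compared with a drift-free one, and the offset found in that comparison is applied to the estimates that follow.

```python
# 1-D toy illustration (not the actual estimator): a drift-free but
# infrequent estimate is compared with a drifting odometry-style estimate,
# and the offset found in the comparison corrects the subsequent results.
# All numbers are synthetic.
def run_correction_demo(true_positions, drift_per_step=0.05, compare_every=10):
    correction = 0.0    # offset derived from the latest comparison
    corrected = []
    for step, truth in enumerate(true_positions):
        odom = truth + drift_per_step * (step + 1)   # simulated drifting estimate
        if step % compare_every == 0:
            drift_free = truth                       # e.g. relocalization result
            correction = drift_free - odom           # compare the two estimates
        corrected.append(odom + correction)          # correct subsequent results
    return corrected

truth = [0.1 * i for i in range(30)]
estimates = run_correction_demo(truth)
print("final error without/with correction:",
      round(abs((truth[-1] + 0.05 * 30) - truth[-1]), 3),
      round(abs(estimates[-1] - truth[-1]), 3))
```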
Abstract:
There is provided an information processing apparatus with which self-position estimation with high robustness is possible, the information processing apparatus including a tracking unit, a region estimation unit, and an estimation processing unit. The tracking unit acquires an image captured by an image capture unit disposed at a moving object that moves while rotating, and matches characteristic points between the image captured before the movement and the image captured after the movement. The region estimation unit acquires information about the movement and, on the basis of that information, estimates regions in which the two-dimensional positions of the characteristic points, as viewed from the moving object, change little before and after the movement. The estimation processing unit performs self-position estimation of the moving object using the characteristic points within those regions that were matched by the tracking unit.
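A loose sketch of the region-gating idea under assumed camera intrinsics and rotation input: the reported rotation is used to predict each tracked point's image displacement, and only points whose predicted motion stays small are kept for matching and self-position estimation. Intrinsics, the rotation source, and the threshold are illustrative assumptions, not the claimed method.

```python
# Loose sketch of the region-gating idea: use the reported rotation to
# predict how far each tracked point should move in the image, and keep only
# points whose predicted motion is small for subsequent matching.
import numpy as np

def predicted_displacement(points, rotation, K):
    # points: (N, 2) pixel coords; rotation: 3x3 matrix from motion info.
    ones = np.ones((points.shape[0], 1))
    rays = np.linalg.inv(K) @ np.hstack([points, ones]).T     # back-project
    rotated = K @ (rotation @ rays)
    rotated = (rotated[:2] / rotated[2]).T                    # re-project
    return np.linalg.norm(rotated - points, axis=1)

def stable_point_mask(points, rotation, K, max_shift_px=15.0):
    return predicted_displacement(points, rotation, K) < max_shift_px

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # assumed intrinsics
yaw = np.deg2rad(2.0)                                         # assumed rotation
R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
              [0, 1, 0],
              [-np.sin(yaw), 0, np.cos(yaw)]])
pts = np.random.rand(50, 2) * [640, 480]
mask = stable_point_mask(pts, R, K)
print(int(mask.sum()), "of", len(pts), "points kept for matching")
```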