Abstract:
There is provided a robot device including: an instruction acquisition unit that acquires, from a user, an instruction prompting the robot device to establish joint attention on a target; a position/posture estimation unit that, in response to acquisition of the instruction, estimates a position and posture of an optical indication device operated by the user to indicate the target by irradiation of a beam; and a target specifying unit that specifies a direction of the target indicated by the beam based on the estimated position and posture, and specifies the target on an environment map representing the surrounding environment based on the specified direction.
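As an illustration of the flow above, the following is a minimal Python sketch, not the patent's implementation, of specifying a target on an environment map from an estimated pointer pose: the beam is modeled as a ray, and the object nearest to that ray is chosen. The map representation and all names (specify_target, environment_map) are assumptions introduced for this example.

    import numpy as np

    def specify_target(pointer_position, pointer_direction, environment_map):
        """pointer_position / pointer_direction: 3-vectors from the
        position/posture estimate; environment_map: dict mapping an
        object id to its 3-D position (a hypothetical representation)."""
        d = pointer_direction / np.linalg.norm(pointer_direction)
        best_id, best_dist = None, np.inf
        for obj_id, pos in environment_map.items():
            v = np.asarray(pos, dtype=float) - pointer_position
            t = float(np.dot(v, d))
            if t <= 0:  # object lies behind the pointer
                continue
            dist = float(np.linalg.norm(v - t * d))  # distance from the ray
            if dist < best_dist:
                best_id, best_dist = obj_id, dist
        return best_id

    print(specify_target(np.array([0.0, 0.0, 1.0]),
                         np.array([1.0, 0.0, -0.2]),
                         {"cup": [2.0, 0.1, 0.6], "book": [1.5, 1.0, 0.8]}))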
Abstract:
An image processing apparatus includes a parameter input unit, a tap extraction unit, a predictive coefficient calculation unit, and a pixel value operation unit. The parameter input unit receives a parameter including an output phase, the size of an output pixel, and a variable used for a condensing model. The tap extraction unit extracts a tap including a pixel value of a focus pixel which corresponds to the output phase and pixel values of neighboring pixels of the focus pixel. The predictive coefficient calculation unit calculates a predictive coefficient to be multiplied by each of the elements of the tap. The pixel value operation unit calculates a value of the output pixel by performing a product-sum operation of the calculated predictive coefficients and the elements of the tap.
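The product-sum step lends itself to a short example. Below is a minimal sketch, assuming the predictive coefficients have already been calculated for the given parameter (output phase, output pixel size, condensing-model variable); the cross-shaped tap, array names, and example values are illustrative, not taken from the patent.

    import numpy as np

    def predict_output_pixel(image, y, x, offsets, coefficients):
        """Extract a tap (the focus pixel plus neighbors at the given
        offsets, clamped at the borders) and return its product-sum
        with the matching predictive coefficients."""
        h, w = image.shape
        tap = np.array([image[min(max(y + dy, 0), h - 1),
                              min(max(x + dx, 0), w - 1)]
                        for dy, dx in offsets], dtype=np.float64)
        return float(np.dot(coefficients, tap))

    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # cross-shaped tap
    coefficients = np.array([0.6, 0.1, 0.1, 0.1, 0.1])    # example values
    image = np.arange(25, dtype=np.float64).reshape(5, 5)
    print(predict_output_pixel(image, 2, 2, offsets, coefficients))  # 12.0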
Abstract:
There is provided an image processing device including: a data storage unit that stores object identification data for identifying objects operable by a user and feature data indicating a feature of the appearance of each object; an environment map storage unit that stores an environment map representing positions of one or more objects existing in a real space, generated based on an input image obtained by imaging the real space with an imaging device and on the feature data stored in the data storage unit; and a selecting unit that selects, out of the objects included in the environment map stored in the environment map storage unit, at least one object recognized as operable based on the object identification data, as a candidate object, that is, a possible operation target for the user.
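A minimal sketch of the selection step, assuming the environment map is a list of recognized objects and the object identification data is a set of operable identifiers; the data layout and all names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class MapObject:
        object_id: str
        position: tuple  # (x, y, z) in the real space

    def select_candidates(environment_map, operable_ids):
        """Keep every object on the environment map whose identifier
        appears in the operable-object identification data."""
        return [obj for obj in environment_map if obj.object_id in operable_ids]

    env_map = [MapObject("tv", (0.0, 2.0, 1.0)), MapObject("wall", (0.0, 3.0, 1.5))]
    print(select_candidates(env_map, {"tv", "lamp"}))  # only "tv" is operable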
Abstract:
There is provided an image processing device including: a data storage unit storing feature data indicating a feature of the appearance of an object; an environment map generating unit for generating an environment map representing positions of one or more objects existing in a real space, based on an input image obtained by imaging the real space with an imaging device and on the feature data stored in the data storage unit; and an output image generating unit for generating an output image in which an erasing target object is erased from the input image, based on the position of the erasing target object, specified out of the objects in the environment map that are present in the input image, and the position of the imaging device.
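For illustration only, here is a heavily simplified sketch of the erasing step: the object's position in the camera frame (derived from the object and imaging-device positions) is projected into the image with a pinhole model to locate its region, which is then filled from surrounding pixels. The pinhole projection, fixed box size, and median fill are all simplifying assumptions, not the patent's method.

    import numpy as np

    def erase_object(image, object_pos_cam, focal_px, half_box=8):
        """image: 2-D grayscale array; object_pos_cam: object position
        (x, y, z), z > 0, in the camera coordinate frame; focal_px:
        focal length in pixels."""
        h, w = image.shape
        x, y, z = object_pos_cam
        u = int(round(w / 2 + focal_px * x / z))  # projected column
        v = int(round(h / 2 + focal_px * y / z))  # projected row
        out = image.copy()
        mask = np.zeros_like(image, dtype=bool)
        mask[max(v - half_box, 0):min(v + half_box, h),
             max(u - half_box, 0):min(u + half_box, w)] = True
        out[mask] = np.median(image[~mask])  # crude stand-in for inpainting
        return out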
Abstract:
A motion-vector-setting section (31) sets a first motion vector in units of pixels in a target image. An exposure-time-ratio-setting section (32) sets, in units of images, an exposure time ratio, which is a ratio between the time interval of the target image and the exposure time. A motion-blur-amount-setting section (33) sets a motion blur amount in units of pixels based on the exposure time ratio and the first motion vector. Based on the motion blur amount, a processing-region-setting section (36) sets processing regions, and a processing-coefficient-setting section (37) sets processing coefficients. A pixel-value-generating section (38) generates pixel values that correspond to the target pixel from pixel values in the processing region and the processing coefficients. A motion-blur-adding section (41) adds a motion blur to an image containing the generated pixel values, based on an input second motion vector and the first motion vector. An image-moving section (42) moves the motion-blur-added image along a counter vector of the second motion vector. A more realistic arbitrary-viewpoint image can thus be generated.
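The blur-amount step reduces to a simple per-pixel scaling, shown in the minimal sketch below; the array shapes, names, and the direction of the ratio (exposure time over frame interval) are assumptions for this example.

    import numpy as np

    def motion_blur_amount(motion_vectors, exposure_time_ratio):
        """motion_vectors: (H, W, 2) per-pixel motion vectors in pixels
        per frame; exposure_time_ratio: scalar per-image ratio, assumed
        here to be exposure time divided by the frame interval."""
        return motion_vectors * exposure_time_ratio

    vectors = np.full((2, 2, 2), 10.0)       # 10-pixel motion everywhere
    print(motion_blur_amount(vectors, 0.5))  # shutter open half the frame -> 5 px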
Abstract:
A motion-vector detector determines the centroid of pixels on a reference frame that are identified by position information stored in a database in association with a feature address corresponding to a feature of a target pixel. The motion-vector detector detects, as the motion vector of the target pixel, a vector whose starting point is the pixel on the reference frame corresponding to the target pixel on the current frame and whose end point is the determined centroid. The present invention can be applied to an apparatus for generating motion vectors and allows prompt detection of a motion vector.
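A minimal sketch of the centroid-based detection, assuming a quantized intensity stands in for the patent's feature address; the database layout and names are illustrative.

    import numpy as np
    from collections import defaultdict

    def build_feature_database(reference_frame, levels=16):
        """Map each quantized feature value to the positions where it
        occurs on the reference frame."""
        db = defaultdict(list)
        quantized = (reference_frame.astype(float) * levels / 256).astype(int)
        for (y, x), feature in np.ndenumerate(quantized):
            db[int(feature)].append((y, x))
        return db

    def detect_motion_vector(database, current_frame, y, x, levels=16):
        """Vector from the target pixel's position on the reference frame
        to the centroid of the reference pixels sharing its feature."""
        feature = int(current_frame[y, x] * levels / 256)
        positions = database.get(feature)
        if not positions:
            return (0.0, 0.0)                # no pixel shares this feature
        cy, cx = np.mean(positions, axis=0)  # centroid of matching pixels
        return (cy - y, cx - x)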
Abstract:
An image processing apparatus includes: a blur removing processing section configured to carry out a blur removing process on an input image using a plurality of blur removal coefficients, for removing blur of a plurality of different blur amounts, to produce a plurality of different blur removal result images; a feature detection section configured to detect a feature from each of the different blur removal result images; a blur amount class determination section configured to determine, from the features, blur amount classes representing classes of the blur amounts; and a prediction processing section configured to carry out a mathematical operation on pixel values of predetermined pixels of the input image and prediction coefficients learned in advance and corresponding to the blur amount classes, to produce an output image from which the blur is removed.
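A minimal sketch of this flow on a 1-D signal, with placeholder removal kernels and a crude feature (local activity) standing in for the patent's feature detection; the coefficient table and all names are assumptions.

    import numpy as np

    def blur_removal_results(signal, removal_kernels):
        """Apply each blur-removal kernel (one per candidate blur amount)."""
        return [np.convolve(signal, k, mode="same") for k in removal_kernels]

    def blur_amount_class(results):
        """Pick the class whose result shows the highest local activity,
        a crude stand-in for the patent's feature detection."""
        activity = [np.abs(np.diff(r)).mean() for r in results]
        return int(np.argmax(activity))

    def predict_pixel(signal, i, class_id, coefficient_table, radius=2):
        """Product-sum of input pixels around i with the prediction
        coefficients learned for the determined blur amount class."""
        tap = signal[max(i - radius, 0):i + radius + 1]
        coefficients = coefficient_table[class_id][:len(tap)]
        return float(np.dot(coefficients, tap))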
Abstract:
The scanner apparatus, which can be stored in the pedestal supporting the display apparatus, has slide members on its sides that engage with slide members provided on the pedestal, so that the apparatus can be drawn out freely from the pedestal. In addition, the upper unit, which incorporates an automatic document feeder, is rotatable upward on a rotary shaft provided on the side of the apparatus while the apparatus is drawn out from the pedestal. When a jam occurs, the scanner apparatus is drawn out from the pedestal and the upper unit is rotated upward to open a part of the sheet feeding path, so that the jammed sheet can be removed.
Abstract:
A motion-vector-setting section (31) sets a motion vector in units of pixels in a target image. Based on the motion vector, a target-pixel-setting section (35) sets a target pixel for each image in plural images to be processed. A motion-blur-amount-setting section (33) sets a motion blur amount in units of pixels based on the motion vector and the exposure time ratio set in units of images by the exposure-time-ratio-setting section (32). A processing-region-setting section (36) sets processing regions corresponding to the target pixel for each of the plural images based on the motion blur amount. A processing-coefficient-setting section (37) sets processing coefficients based on the motion blur amount. A pixel-value-generating section (38) generates motion-blur-removed pixel values that correspond to the target pixel by linear combination of the pixel values of pixels in the processing region and the processing coefficients, so that they can be output from an integration section (39) as one pixel value. By making significant use of information in the time direction, motion-blur-removing processing can be performed accurately.
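A minimal sketch of the per-image linear combination and the integration into one pixel value; the simple averaging used for the integration step, the array shapes, and the example values are assumptions.

    import numpy as np

    def blur_removed_pixel(processing_regions, coefficient_sets):
        """processing_regions: one 1-D pixel array per image in the plural
        images; coefficient_sets: the matching processing coefficients.
        Each linear combination gives a per-image estimate; the estimates
        are integrated (here, averaged) into one output pixel value."""
        estimates = [float(np.dot(c, r))
                     for r, c in zip(processing_regions, coefficient_sets)]
        return sum(estimates) / len(estimates)

    regions = [np.array([10.0, 12.0, 14.0]), np.array([11.0, 12.0, 13.0])]
    coeffs = [np.array([0.2, 0.6, 0.2]), np.array([0.2, 0.6, 0.2])]
    print(blur_removed_pixel(regions, coeffs))  # one integrated pixel value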
Abstract:
A shooting-information-detecting section (31) detects shooting information from an image pick-up section (10). A motion-detecting section (33) detects the motion direction of an image over the whole screen based on the motion direction of the image pick-up section contained in the shooting information. A processing-region-setting section (36) sets a processing region, corresponding to a target pixel in a target image to be predicted, in at least one of that target image and a peripheral image thereof. A processing-coefficient-setting section (37) sets a motion-blur-removing processing coefficient that corresponds to the motion direction detected by the motion-detecting section (33). A pixel-value-generating section (38) generates a pixel value that corresponds to the target pixel based on the pixel value of a pixel in the processing region set by the processing-region-setting section (36) and the processing coefficient set by the processing-coefficient-setting section (37). Motion-blur-removing processing can be performed accurately.
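A minimal sketch of the direction-dependent coefficient lookup and pixel generation, assuming the motion direction is quantized into eight 45-degree bins; the table layout, placeholder coefficients, and names are illustrative.

    import numpy as np

    def select_coefficients(motion_direction_rad, coefficient_table):
        """coefficient_table: one coefficient array per 45-degree bin of
        the motion direction detected from the shooting information."""
        bin_index = int(round(motion_direction_rad / (np.pi / 4))) % 8
        return coefficient_table[bin_index]

    def generate_pixel_value(processing_region, coefficients):
        """Product-sum of the processing-region pixels with the selected
        motion-blur-removing processing coefficients."""
        return float(np.dot(coefficients, processing_region))

    table = [np.array([0.25, 0.5, 0.25])] * 8  # placeholder coefficients
    region = np.array([8.0, 10.0, 12.0])
    print(generate_pixel_value(region, select_coefficients(0.0, table)))  # 10.0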