Abstract:
An image processing apparatus includes: an obtaining unit configured to obtain an imprint image in which an imprint is captured, the imprint having code information added to a frame portion; an identifying unit configured to identify, as a read area from which the code information is read, a frame image area corresponding to the imprint of the frame portion in the imprint image obtained by the obtaining unit; and a reading unit configured to read the code information from the read area identified by the identifying unit.
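The frame-area identification can be pictured with a small OpenCV sketch. The Hough-circle parameters, the 0.8 inner-radius ratio, and the extract_frame_read_area name are illustrative assumptions, and decoding of the code information itself is not shown.

```python
# Hypothetical sketch (OpenCV): locate the circular frame of an imprint and
# mask out an annular "read area" from which code information could be read.
import cv2
import numpy as np

def extract_frame_read_area(imprint_bgr):
    gray = cv2.cvtColor(imprint_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=gray.shape[0] // 2,
                               param1=100, param2=50,
                               minRadius=gray.shape[0] // 6,
                               maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None  # no frame-like circle found
    cx, cy, r = [int(v) for v in np.round(circles[0, 0])]

    # Approximate the frame portion as an annulus around the outer circle.
    mask = np.zeros_like(gray)
    cv2.circle(mask, (cx, cy), r, 255, thickness=-1)
    cv2.circle(mask, (cx, cy), int(r * 0.8), 0, thickness=-1)
    read_area = cv2.bitwise_and(gray, gray, mask=mask)
    return read_area  # decoding the embedded code is outside this sketch
```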
Abstract:
The image processing device 1 includes: an image acquiring unit 54 that acquires a first image and a second image captured by an image capture unit while the image capture unit is moved substantially in a single direction; a characteristic point tracking unit 551 that calculates, for a plurality of characteristic points included in each of the acquired images, vectors of the characteristic points between the images; a distribution calculation unit 552 that calculates a distribution condition of the calculated vectors; and a moving amount calculation unit 553 that calculates a representative vector for adjusting a composite position between the images by weighting the calculated vectors based on a calculation result of the distribution calculation unit 552.
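A minimal sketch of this flow, assuming OpenCV feature tracking and a Gaussian weighting that favours vectors near the centre of the distribution; the weighting scheme and function name are illustrative assumptions, not the patented method.

```python
import cv2
import numpy as np

def representative_vector(first_gray, second_gray):
    # Characteristic points in the first image, tracked into the second.
    pts = cv2.goodFeaturesToTrack(first_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(first_gray, second_gray, pts, None)
    ok = status.ravel() == 1
    vectors = (nxt[ok] - pts[ok]).reshape(-1, 2)   # per-point motion vectors

    # Distribution condition: distance of each vector from the median vector.
    median = np.median(vectors, axis=0)
    spread = np.linalg.norm(vectors - median, axis=1)

    # Weight vectors near the centre of the distribution more heavily (assumed).
    sigma = max(spread.std(), 1e-6)
    weights = np.exp(-(spread ** 2) / (2 * sigma ** 2))
    rep = (vectors * weights[:, None]).sum(axis=0) / weights.sum()
    return rep   # used to adjust the composite position between the images
```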
Abstract:
An information processing apparatus performs positioning processing in which it acquires, for light sources that are included in an image captured with an imaging device from among multiple light sources (position indicators) whose coordinate positions in a three-dimensional space are known, the coordinate positions of those light sources inside the image, and derives at least one of a coordinate position and an orientation of an own device (moving body) in the three-dimensional space based on the coordinate positions in the three-dimensional space of the light sources included in the image and their coordinate positions inside the image. To derive at least one of the coordinate position and the orientation of the own device, the information processing apparatus uses information on light sources for which an angle of the light source, as seen from the imaging device, with respect to a predetermined direction, or a prospective angle between light sources in a horizontal direction, satisfies a predetermined condition.
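One way to picture the angle condition, assuming a pinhole camera model with OpenCV's solvePnP standing in for the derivation of position and orientation; the 30-degree threshold and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def select_and_position(world_pts, image_pts, camera_matrix, max_angle_deg=30.0):
    # Angle between each light source's viewing ray and the optical axis.
    uv1 = np.hstack([image_pts, np.ones((len(image_pts), 1))])
    rays = (np.linalg.inv(camera_matrix) @ uv1.T).T
    angles = np.degrees(np.arccos(rays[:, 2] / np.linalg.norm(rays, axis=1)))

    keep = angles <= max_angle_deg          # the "predetermined condition"
    if keep.sum() < 4:
        return None                         # not enough light sources retained
    ok, rvec, tvec = cv2.solvePnP(world_pts[keep].astype(np.float64),
                                  image_pts[keep].astype(np.float64),
                                  camera_matrix, None)
    return (rvec, tvec) if ok else None     # orientation / position of own device
```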
Abstract:
A position obtaining device includes a processor. The processor: in response to a condition being met, derives a first attitude angle as an attitude angle of the device based on first light sources and their positions on an image obtained by a first camera; in response to the attitude angle of the device being known, derives a three-dimensional position of the device based on two or more second light sources and their positions on an image obtained by a second camera, and, in response to a predetermined number or more of second light sources being captured in the image, derives the three-dimensional position and a second attitude angle as the attitude angle of the device; and integrates the result of the first attitude angle with the result of the three-dimensional position and the second attitude angle to estimate the attitude angle and the three-dimensional position of the device.
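A rough sketch of the two-camera flow, with OpenCV's solvePnP standing in for the patented derivations; the intrinsics K1/K2, the min_second threshold, and the crude rotation-vector averaging used as "integration" are all assumptions for illustration.

```python
import cv2
import numpy as np

def pnp(world_pts, image_pts, K):
    ok, rvec, tvec = cv2.solvePnP(np.float64(world_pts), np.float64(image_pts), K, None)
    return (rvec.ravel(), tvec.ravel()) if ok else None

def estimate_pose(first_lights, second_lights, K1, K2, min_second=4):
    # First camera: first attitude angle from the first light sources.
    first = pnp(*first_lights, K1)                       # (world_pts, image_pts)
    first_rvec = first[0] if first else None

    # Second camera: 3-D position, plus a second attitude angle when a
    # predetermined number of second light sources or more are captured.
    second = pnp(*second_lights, K2) if len(second_lights[0]) >= min_second else None

    if first_rvec is not None and second is not None:
        # "Integration" (crude assumption): average the small rotation
        # vectors and keep the second camera's position.
        return (first_rvec + second[0]) / 2.0, second[1]
    if second is not None:
        return second
    return (first_rvec, None) if first_rvec is not None else None
```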
Abstract:
A position information acquisition device for acquiring position information of a position acquisition target arranged in a space includes a processor configured to: detect light based on identification information that is included in common in captured images of the space captured from a plurality of mutually different shooting directions; acquire a three-dimensional position in the space of the position information acquisition target identified by the identification information, based on detection positions of the detected light in the captured images and on position information of the image capturing devices during capturing; acquire reliability degree information of the acquired three-dimensional position of the position information acquisition target, based on information relating to an imaging state of each image capturing device during capturing of the captured images; and store the acquired reliability degree information in a storage.
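A sketch of the multi-view part, assuming OpenCV triangulation with known projection matrices and a reliability score based on reprojection error; the abstract only says reliability comes from the imaging state during capture, so the error-based measure is an assumption.

```python
import cv2
import numpy as np

def triangulate_with_reliability(P1, P2, pt1, pt2):
    # pt1, pt2: detected positions (x, y) of the identified light in each view.
    X_h = cv2.triangulatePoints(P1, P2,
                                np.float64(pt1).reshape(2, 1),
                                np.float64(pt2).reshape(2, 1))
    X = (X_h[:3] / X_h[3]).ravel()                   # 3-D position in the space

    # Reliability: reproject the point and measure the pixel error in each view.
    def reproj_err(P, pt):
        proj = P @ np.append(X, 1.0)
        return np.linalg.norm(proj[:2] / proj[2] - np.float64(pt))
    err = max(reproj_err(P1, pt1), reproj_err(P2, pt2))
    reliability = 1.0 / (1.0 + err)                  # stored alongside the position
    return X, reliability
```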
Abstract:
A digital camera includes an image capturing unit, an image composition unit, and a display control unit. The image capturing unit captures frames at predetermined time intervals. The image composition unit sequentially combines at least a part of the image data of a plurality of frames sequentially captured by the image capturing unit at the predetermined time intervals. The display control unit performs control to sequentially display the image data combined by the image composition unit while the image data of the frames are being captured by the image capturing unit at the predetermined time intervals.
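A minimal sketch of capture-while-compositing, assuming OpenCV video capture, an incremental averaging composition, and an approximate 100 ms interval; these are illustrative choices rather than the camera's actual method.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
composite = None
count = 0
while count < 50:
    ok, frame = cap.read()
    if not ok:
        break
    count += 1
    acc = frame.astype(np.float32)
    # Sequentially combine the new frame into the running composite (mean).
    composite = acc if composite is None else composite + (acc - composite) / count
    # Display the combined data while frames are still being captured.
    cv2.imshow("composite", composite.astype(np.uint8))
    if cv2.waitKey(100) & 0xFF == ord('q'):   # ~100 ms between frames (assumed)
        break
cap.release()
cv2.destroyAllWindows()
```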
Abstract:
An image capture apparatus (1) includes: an image capture unit (17) that successively acquires images; an image combining unit (91) that combines the acquired images in a predetermined range so as to generate a wide image; a display control unit (51) that controls a display to simultaneously display a live-view image and the predetermined range such that a relative size between the live-view image and the predetermined range is visible on the display; a view angle setting unit (53) that changes the relative size while the live-view image and the predetermined range are being displayed on the display; and an image combining control unit (92) that controls the image combining unit so as to generate the wide image, an angle of view of which depends on the changed relative size.
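One way to picture the relative-size display, as a hypothetical sketch: the live-view frame and the predetermined range are drawn at their relative size on an overview canvas, and the wide image's angle of view is derived from that ratio. The 62-degree live-view field of view, the linear scaling, and all drawing parameters are assumptions.

```python
import cv2
import numpy as np

def overview_canvas(relative_size, live_fov_deg=62.0, canvas_w=400, canvas_h=120):
    # relative_size: width of the live-view frame relative to the wide range (0..1].
    canvas = np.full((canvas_h, canvas_w, 3), 255, np.uint8)
    # Predetermined range (extent of the wide image to be generated).
    cv2.rectangle(canvas, (10, 10), (canvas_w - 10, canvas_h - 10), (0, 0, 255), 2)
    # Live-view frame drawn at the relative size inside that range.
    lw = int((canvas_w - 20) * relative_size)
    x0 = (canvas_w - lw) // 2
    cv2.rectangle(canvas, (x0, 20), (x0 + lw, canvas_h - 20), (0, 128, 0), 2)
    # The wider the range relative to the live view, the wider the result
    # (linear approximation; real optics do not scale linearly in angle).
    wide_fov_deg = live_fov_deg / relative_size
    return canvas, wide_fov_deg
```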
Abstract:
An image processing apparatus 1 includes: an adjustment unit 55 that adjusts adjacent captured images in panoramic image generation processing; and a panoramic image generation unit 56 that generates data of a panoramic image by combining the adjacent captured images based on a result of this adjustment. The adjustment unit 55 includes: a characteristic point trace unit 552 that calculates vectors of a plurality of corresponding characteristic points in the adjacent captured images; and a position adjustment unit 554 that adjusts the adjacent captured images based on the calculated vectors of the corresponding characteristic points while avoiding a moving object captured in the images.
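A minimal sketch, assuming OpenCV, of discounting a moving subject during adjustment: vectors that disagree with the dominant background motion are treated as belonging to the moving object and excluded before the shift between adjacent shots is estimated. The median test and the 3-pixel tolerance are assumptions.

```python
import cv2
import numpy as np

def alignment_shift(left_gray, right_gray, tol_px=3.0):
    pts = cv2.goodFeaturesToTrack(left_gray, 300, 0.01, 8)
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(left_gray, right_gray, pts, None)
    good = st.ravel() == 1
    vecs = (nxt[good] - pts[good]).reshape(-1, 2)

    # Background motion is taken as the median vector; points moving
    # differently (the moving object) are rejected.
    background = np.median(vecs, axis=0)
    static = np.linalg.norm(vecs - background, axis=1) < tol_px
    return vecs[static].mean(axis=0)   # shift used to combine the two shots
```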
Abstract:
A high-precision positioning result in self-positioning is provided. An information processing apparatus acquires a first coordinate position inside an image for each light source that is included in an image captured with an imaging device, from among multiple light sources whose coordinate positions in a three-dimensional space are known, and derives at least a coordinate position or an orientation of an own device in the three-dimensional space based on the coordinate position of the light source in the three-dimensional space and the first coordinate position. The apparatus estimates a height of the imaging device in the three-dimensional space and performs positioning processing using information on the estimated height, acquires a second coordinate position of the light source on the image corresponding to the positioning processing result, and corrects the height of the imaging device based on the first coordinate position and the second coordinate position.
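A rough numerical sketch of the correction loop, under strong simplifying assumptions: the camera looks straight up at ceiling light sources, positioning estimates only the horizontal position for the guessed height, and the height is corrected from the scale mismatch between the observed (first) and reprojected (second) image positions. The intrinsics and function names are illustrative.

```python
import numpy as np

FX, CX, CY = 800.0, 320.0, 240.0     # assumed camera intrinsics (pixels)

def project(world_pts, cam_xy, height):
    # Pinhole projection of ceiling lights for a camera looking straight up.
    rel = world_pts[:, :2] - cam_xy
    depth = world_pts[:, 2] - height
    return FX * rel / depth[:, None] + np.array([CX, CY])

def refine_height(world_pts, first_px, height_guess, iters=10):
    height = height_guess
    for _ in range(iters):
        # Positioning with the current height: average camera (x, y) estimate.
        cam_xy = np.mean(world_pts[:, :2] - (first_px - np.array([CX, CY]))
                         * (world_pts[:, 2:3] - height) / FX, axis=0)
        second_px = project(world_pts, cam_xy, height)     # second positions
        # Scale mismatch between first (observed) and second (reprojected)
        # image positions reveals a wrong camera height.
        r1 = np.linalg.norm(first_px - first_px.mean(0), axis=1).mean()
        r2 = np.linalg.norm(second_px - second_px.mean(0), axis=1).mean()
        z = world_pts[:, 2].mean()
        height = z - (z - height) * r2 / (r1 + 1e-9)       # corrected height
    return height, cam_xy
```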
Abstract:
A position coordinates setter sets position coordinates, which are the coordinates of an image region of an LED in an image. An image region determiner determines whether or not an image region having a predetermined luminance value is present at the position coordinates. If an image region having the predetermined luminance value is not present at the position coordinates, the image region determiner determines whether the image region at the position coordinates is overexposed or underexposed. If the image region is overexposed, the exposure controller repeatedly shortens the exposure time by one step until the luminance value of the image region at the position coordinates becomes the predetermined luminance value; if the image region is underexposed, the exposure controller repeatedly lengthens the exposure time by one step until the luminance value of the image region at the position coordinates becomes the predetermined luminance value.
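A minimal sketch of the one-step-at-a-time control; the Camera object, its capture() and set_exposure_step() methods, the luminance_at() accessor, and the luminance band are all hypothetical.

```python
TARGET_LO, TARGET_HI = 180, 220        # acceptable luminance band (assumed)

def adjust_exposure(camera, position, max_iters=20):
    for _ in range(max_iters):
        frame = camera.capture()                       # hypothetical API
        luminance = frame.luminance_at(position)       # hypothetical API
        if TARGET_LO <= luminance <= TARGET_HI:
            return True                                # predetermined value reached
        if luminance > TARGET_HI:                      # overexposed region
            camera.set_exposure_step(camera.exposure_step - 1)  # shorter by one step
        else:                                          # underexposed region
            camera.set_exposure_step(camera.exposure_step + 1)  # longer by one step
    return False
```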