Abstract:
A depth acquisition device capable of accurately acquiring the depth of an image is provided. A depth acquisition device (1) includes a memory (200) and a processor (110). The processor (110) performs: acquiring timing information indicating a timing at which a light source (101) irradiates a subject with infrared light; acquiring, from the memory (200), an infrared light image generated by imaging a scene including the subject with the infrared light according to the timing indicated by the timing information; acquiring, from the memory (200), a visible light image generated by imaging substantially the same scene as that of the infrared light image, with visible light, from substantially the same viewpoint and at substantially the same time as the imaging of the infrared light image; detecting a flare region from the infrared light image; and estimating a depth of the flare region based on the infrared light image, the visible light image, and the flare region.
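The abstract does not specify how the flare region is detected or how its depth is estimated; as a hedged sketch, one plausible reading is to flag near-saturated infrared pixels as flare, then fill their depths from non-flare pixels weighted by similarity in the co-registered visible light image (a joint-bilateral-style fill). The threshold, the Gaussian weighting, and both function names are illustrative assumptions, not the patent's method:

```python
import math

def detect_flare_region(ir_image, saturation_threshold=0.95):
    # Assumption: flare shows up as near-saturated IR intensity.
    return [[v >= saturation_threshold for v in row] for row in ir_image]

def estimate_flare_depth(depth, flare_mask, visible, sigma=0.1):
    # Replace each flare pixel's depth with a visible-similarity-weighted
    # average of the depths at non-flare pixels.
    h, w = len(depth), len(depth[0])
    valid = [(i, j) for i in range(h) for j in range(w) if not flare_mask[i][j]]
    out = [row[:] for row in depth]
    for i in range(h):
        for j in range(w):
            if flare_mask[i][j]:
                ws = [math.exp(-((visible[a][b] - visible[i][j]) ** 2)
                               / (2 * sigma ** 2)) for a, b in valid]
                out[i][j] = (sum(wt * depth[a][b]
                                 for wt, (a, b) in zip(ws, valid)) / sum(ws))
    return out
```

Guiding the fill by the visible image reflects the abstract's use of both modalities: flare corrupts the infrared measurement, while the visible image of the same scene remains a reliable similarity cue.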
Abstract:
An imaging apparatus includes an imaging device, a first imaging optical system and a second imaging optical system that form respective input images from mutually different viewpoints onto the imaging device, and a first modulation mask and a second modulation mask that modulate the input images formed by the first imaging optical system and the second imaging optical system. The imaging device captures a superposed image composed of the two input images that have been formed by the first imaging optical system and the second imaging optical system, modulated by the first modulation mask and the second modulation mask, and optically superposed on each other, and the first modulation mask and the second modulation mask have mutually different optical transmittance distribution characteristics.
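The capture described here can be written as a per-pixel forward model: the sensor reads the sum of the two input images, each attenuated by its mask's optical transmittance at that pixel. A minimal sketch (the function name and list-of-lists image representation are illustrative):

```python
def superpose(image1, image2, mask1, mask2):
    # Per-pixel sensor reading = t1*x1 + t2*x2, where t1 and t2 are the
    # optical transmittances of the two modulation masks. The masks having
    # mutually different transmittance distributions is what makes the two
    # superposed inputs distinguishable downstream.
    return [[m1 * a + m2 * b
             for a, b, m1, m2 in zip(r1, r2, rm1, rm2)]
            for r1, r2, rm1, rm2 in zip(image1, image2, mask1, mask2)]
```

For example, a pixel where input 1 has value 1.0 under transmittance 0.8 and input 2 has value 2.0 under transmittance 0.2 reads 0.8·1.0 + 0.2·2.0 = 1.2.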
Abstract:
A depth acquisition device (1) capable of accurately acquiring a depth to a subject includes a memory (200) and a processor (110a) performing: acquiring, from the memory (200), intensities of infrared light measured by pixels in an imaging element imaging infrared light emitted from a light source and reflected by the subject; generating a depth image by calculating a distance to the subject for each pixel based on the intensities; acquiring, from the memory (200), a visible light image generated by imaging, with visible light, substantially the same scene from substantially the same viewpoint at substantially the same timing as those of the infrared light image generated by the imaging based on the intensities; detecting a low-reflection region showing an object having low reflectivity from the infrared light image in accordance with the infrared light image and the visible light image; correcting a region in the depth image corresponding to the low-reflection region in accordance with the visible light image; and outputting the depth image with the corrected region.
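One way to read the detection step, sketched under assumptions the abstract does not spell out: a pixel that is dark in infrared yet clearly visible in the visible light image likely shows a low-reflectivity object rather than an unlit or empty area. The thresholds and the median-based correction below are stand-ins for the patent's visible-light-guided correction:

```python
def detect_low_reflection(ir, visible, ir_thresh=0.1, vis_thresh=0.3):
    # Assumption: low IR return + adequate visible brightness => the IR
    # signal is weak because of low reflectivity, not absence of a surface.
    return [[(a < ir_thresh) and (b >= vis_thresh) for a, b in zip(ra, rb)]
            for ra, rb in zip(ir, visible)]

def correct_depth(depth, mask):
    # Replace unreliable depths with the median of the reliable ones --
    # a simple stand-in for the correction described in the abstract.
    reliable = sorted(d for row, mrow in zip(depth, mask)
                      for d, m in zip(row, mrow) if not m)
    med = reliable[len(reliable) // 2]
    return [[med if m else d for d, m in zip(row, mrow)]
            for row, mrow in zip(depth, mask)]
```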
Abstract:
A crossing point detector includes a memory and a crossing point detection unit that reads out a square image from a captured image in the memory, and detects a crossing point of two boundary lines in a checker pattern depicted in the square image. The crossing point detection unit decides multiple parameters of a numerical model that optimize an evaluation value based on differences between corresponding pixel values of the numerical model and the square image, and computes the position of the crossing point of the two straight lines expressed by the decided parameters, thereby detecting the crossing point with subpixel precision. The numerical model simulates the pixel integration effect of imaging a step edge: a straight line divides a pixel into regions, a pixel value is assigned to each region, and the value of the overall pixel is decided from them.
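Two pieces of this abstract can be sketched directly: the pixel-integration model (the observed value of a pixel cut by a step edge is the area-weighted mix of the two side values) and the final subpixel step (intersecting the two fitted lines). The line parameterization `a*x + b*y = c` is an assumption; the patent's numerical model and optimizer are not reproduced here:

```python
def step_edge_pixel_value(frac_above, v_above, v_below):
    # Pixel-integration model: a straight line splits the pixel; the
    # observed value is the area-weighted mix of the two side values.
    return frac_above * v_above + (1.0 - frac_above) * v_below

def crossing_point(a1, b1, c1, a2, b2, c2):
    # Intersection of a1*x + b1*y = c1 and a2*x + b2*y = c2, computed in
    # continuous coordinates, hence with subpixel precision.
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

In the full method, the line parameters fed to `crossing_point` would come from optimizing the model-vs-image evaluation value over the square image; only that last geometric step is shown here.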
Abstract:
A three-dimensional motion obtaining apparatus (1) includes: a light source (110); a charge amount obtaining circuit (120) that includes pixels and obtains, for each of the pixels, a first charge amount under a first exposure pattern and a second charge amount under a second exposure pattern having an exposure period that at least partially overlaps an exposure period of the first exposure pattern; and a processor (130) that controls a light emission pattern for the light source (110), the first exposure pattern, and the second exposure pattern. The processor (130) estimates a distance to a subject for each of the pixels on the basis of the light emission pattern and on the basis of the first charge amount and the second charge amount of each of the pixels obtained by the charge amount obtaining circuit (120), and estimates an optical flow for each of the pixels on the basis of the first exposure pattern, the second exposure pattern, and the first charge amount and the second charge amount obtained by the charge amount obtaining circuit (120).
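For the distance-estimation half of this abstract, a common two-tap pulsed time-of-flight relation gives the flavor: with overlapping exposure windows, the later tap's share of the total collected charge encodes the round-trip delay. This exact formula is an assumption (the standard textbook relation), not necessarily the patent's estimator, and the optical-flow half is not sketched:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(q1, q2, pulse_width):
    # Two-tap pulsed ToF (assumed model): q1 is charge under the exposure
    # aligned with the emitted pulse, q2 under the delayed, overlapping
    # exposure. The fraction q2/(q1+q2) of the pulse width is the round-trip
    # delay; halving converts round-trip distance to subject distance.
    return 0.5 * C * pulse_width * q2 / (q1 + q2)
```

With a 20 ns pulse, equal charges on both taps place the subject at roughly 1.5 m, and the estimate is per pixel, matching the abstract's pixel-wise distance estimation.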
Abstract:
A camera calibration method, which calculates camera parameters of two cameras using calibration points, includes: (a1) acquiring three-dimensional coordinate sets of the calibration points and image coordinate pairs of the calibration points in a camera image of each camera; (a2) acquiring multiple camera parameters of each camera; (a3) for each calibration point, calculating a view angle-corresponding length corresponding to a view angle of the two cameras viewing the calibration point; (a4) for each calibration point, calculating a three-dimensional position of a measurement point corresponding to a three-dimensional position of the calibration point using parallax of the calibration point between the two cameras; (a5) for each calibration point, weighting a difference between the three-dimensional coordinate set of the calibration point and the three-dimensional position of the measurement point corresponding to the calibration point using the view angle-corresponding length corresponding to the calibration point; and (a6) updating the camera parameters based on the weighted difference.
Abstract:
A camera calibration method which calculates camera parameters of at least three cameras acquires a three-dimensional coordinate set of a calibration point and an image coordinate pair of the calibration point in each camera image, acquires camera parameters of each camera, calculates a view angle-corresponding length (L1b, L1a) corresponding to a view angle of each pair of cameras (21, 22) viewing the calibration point (P1), calculates a three-dimensional position of a measurement point corresponding to a three-dimensional position of the calibration point for each camera pair (21, 22) using parallax of the calibration point between the cameras in the pair, weights the three-dimensional position of each measurement point using the view angle-corresponding length (L1b, L1a) corresponding to the measurement point (P1), calculates a three-dimensional position of a unified point of the weighted measurement points, and updates the camera parameters based on the three-dimensional coordinate set of the calibration point and the three-dimensional position of the unified point.
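The "unified point" step can be read as fusing the per-camera-pair measurement points into one estimate by a weighted average. The abstract does not give the weight formula; taking the weight as the reciprocal of the view angle-corresponding length (shorter length, presumed more reliable triangulation, heavier weight) is an assumption of this sketch:

```python
def unify_points(points, lengths):
    # Fuse per-camera-pair measurement points (3-D tuples) into one
    # "unified point" by weighted averaging. Weight = 1/length is an
    # assumed reliability model, not the patent's stated formula.
    weights = [1.0 / l for l in lengths]
    total = sum(weights)
    return tuple(sum(w * p[i] for w, p in zip(weights, points)) / total
                 for i in range(3))
```

The parameter update then compares this single unified point against the calibration point's known coordinates, rather than comparing every camera pair's measurement separately.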