Abstract:
A calibration method of an image capture system includes: an image capture device of at least one image capture device capturing an image that includes a plurality of intersection coordinates among a plurality of geometric blocks of a test pattern and color information of each geometric block of the plurality of geometric blocks; an operation unit executing a first operation on the plurality of intersection coordinates within the image to generate a plurality of geometric calibration parameters; the operation unit executing a second operation on the color information of each geometric block within the image to generate a plurality of color calibration parameters; and a calibration unit calibrating the image capture device according to the plurality of geometric calibration parameters and the plurality of color calibration parameters.
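The abstract does not spell out the two operations, so the following is only a minimal sketch under one plausible reading: the "first operation" is taken to be a least-squares homography fit between the test pattern's reference intersection coordinates and the detected ones, and the "second operation" a least-squares 3x3 color matrix between measured and reference block colors. All function names and data layouts here are assumptions, not details from the patent.

```python
import numpy as np

def geometric_calibration(image_points, reference_points):
    """First operation (assumed): fit a 3x3 homography mapping the pattern's
    reference intersection coordinates to the detected image coordinates."""
    A = []
    for (X, Y), (x, y) in zip(reference_points, image_points):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)          # geometric calibration parameters

def color_calibration(measured_rgb, reference_rgb):
    """Second operation (assumed): least-squares 3x3 matrix mapping the
    measured mean color of each geometric block to its known color."""
    M, _, _, _ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
    return M.T                           # color calibration parameters
```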
Abstract:
An image processing device applied to an RGB-IR sensor includes an interpolation unit and a color correction unit. Pixels of the RGB-IR sensor are arranged into a plurality of Bayer pattern units. The interpolation unit generates interpolation values of a red color component, a green color component, a blue color component, and an IR component of each pixel of each Bayer pattern unit of the plurality of Bayer pattern units according to gray levels of red pixels, green pixels, blue pixels, and IR pixels located at predetermined positions of the plurality of Bayer pattern units. The color correction unit generates correction values of the red color component, the green color component, and the blue color component of each pixel according to a correction matrix corresponding to the pixel and the interpolation values.
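As a rough illustration of the two stages, the sketch below assumes the mosaic is given as a raw frame plus per-component boolean masks, uses a simple box-average interpolation, and applies a single 3x4 correction matrix for all pixels (the abstract allows a different matrix per pixel, omitted here for brevity); none of these choices come from the patent itself.

```python
import numpy as np
from scipy.signal import convolve2d

def interpolate_components(raw, masks):
    """Interpolation unit (simplified): estimate R, G, B and IR at every
    pixel by averaging the nearest mosaic samples of each component."""
    kernel = np.ones((3, 3))
    planes = {}
    for name, mask in masks.items():   # masks: {"R": HxW bool, "G": ..., "B": ..., "IR": ...}
        samples = np.where(mask, raw, 0.0)
        num = convolve2d(samples, kernel, mode="same", boundary="symm")
        den = np.maximum(convolve2d(mask.astype(float), kernel, mode="same", boundary="symm"), 1e-6)
        planes[name] = num / den
    return planes

def correct_colors(planes, ccm_3x4):
    """Color correction unit (simplified): corrected [R, G, B] = 3x4 matrix
    times the interpolated [R, G, B, IR] vector of each pixel, e.g. to remove
    IR contamination from the visible channels."""
    stacked = np.stack([planes[c] for c in ("R", "G", "B", "IR")], axis=-1)
    h, w, _ = stacked.shape
    return (stacked.reshape(-1, 4) @ ccm_3x4.T).reshape(h, w, 3)
```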
Abstract:
A calibration system for a multi-camera system is disclosed. The calibration system includes a connection device, a storage device, and a processor. The processor is configured to: control each camera of the multi-camera system to capture a calibration image of a calibration board having a pattern that includes multiple conventional features and at least one non-conventional feature, wherein the FOV of the calibration image of at least one camera does not contain at least one conventional feature of the pattern; detect the conventional features and the non-conventional feature in the calibration image and record their positions in the storage device; transform the position of each conventional feature into absolute coordinates relative to reference coordinates by using the position of the non-conventional feature as the reference coordinates; and, according to the absolute coordinates of the transformed conventional features, match the conventional features in the calibration images captured by the cameras to calibrate the cameras.
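The key idea is that the non-conventional feature anchors a coordinate system shared by all cameras, so features can be matched even when no single camera sees the whole board. The sketch below assumes detected features are already expressed on the board plane with a regular pitch; the data layout and helper names are hypothetical.

```python
import numpy as np

def to_absolute(features, anchor, pitch):
    """Express conventional features as integer grid coordinates relative to
    the non-conventional feature (the reference coordinates)."""
    return np.round((np.asarray(features) - np.asarray(anchor)) / pitch).astype(int)

def match_across_cameras(abs_a, abs_b):
    """Pair features from two cameras that map to the same absolute grid
    coordinate (the matching step, simplified)."""
    index_b = {tuple(p): i for i, p in enumerate(abs_b)}
    return [(i, index_b[tuple(p)]) for i, p in enumerate(abs_a) if tuple(p) in index_b]
```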
Abstract:
An image device utilizing non-planar projection images to generate a depth map includes two image capturers and a depth engine. The two image capturers are used for generating two non-planar projection images, wherein when each non-planar projection image of the two non-planar projection images is projected into a space corresponding to the image capturer that generates the non-planar projection image, the projection positions of each row of pixels of the non-planar projection image in the space and the optical centers of the two image capturers share a plane. The depth engine is used for generating a depth map according to the two non-planar projection images.
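Because each row of both projection images lies on a common plane with the two optical centers, corresponding points stay on the same row and the depth engine can match along rows. The sketch below shows only a brute-force row-wise block matcher on such row-aligned images; converting the resulting (angular) disparity to depth depends on the projection surface and is not shown.

```python
import numpy as np

def row_wise_disparity(left, right, max_disp=64, block=5):
    """Brute-force SAD block matching along each row of two row-aligned
    non-planar projection images (grayscale, same shape)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = float(np.argmin(costs))
    return disp
```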
Abstract:
An image system for generating depth maps and color images includes a plurality of image sensors, at least one image processor, and at least one depth map generator. An image processor of the at least one image processor is coupled to at least one image sensor of the plurality of image sensors for generating luminance information represented by a first bit number and at least one color image represented by a second bit number according to at least one image captured by the at least one image sensor, wherein the at least one color image corresponds to the at least one image. A depth map generator of the at least one depth map generator is coupled to the image processor for generating a depth map corresponding to the at least one image according to the luminance information represented by the first bit number.
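One way to read this split is that the depth map generator benefits from a higher-bit luminance signal while the color output can stay at a standard display bit depth. The sketch below assumes 10-bit luminance, 8-bit color, and a 16-bit linear RGB input; these numbers are illustrative, not taken from the abstract.

```python
import numpy as np

FIRST_BITS = 10    # assumed bit number of the luminance fed to the depth map generator
SECOND_BITS = 8    # assumed bit number of the color image

def split_luminance_and_color(raw_rgb16):
    """raw_rgb16: HxWx3 linear RGB, uint16. Returns (10-bit luma, 8-bit RGB)."""
    luma = (0.299 * raw_rgb16[..., 0] + 0.587 * raw_rgb16[..., 1]
            + 0.114 * raw_rgb16[..., 2])
    luma_hi = np.clip(luma / 65535.0 * (2 ** FIRST_BITS - 1),
                      0, 2 ** FIRST_BITS - 1).astype(np.uint16)
    rgb_lo = (raw_rgb16 >> (16 - SECOND_BITS)).astype(np.uint8)
    return luma_hi, rgb_lo
```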
Abstract:
A remote control system includes an object detection unit, an object determination unit, and a static gesture processing unit. The object detection unit detects an object corresponding to an operator according to a depth image including the operator and a face detection result corresponding to the operator. The object determination unit utilizes a combination of a gesture database, a color image of the object, and a two-dimensional image corresponding to the depth image to determine a gesture formed by the object when the operator moves the object to a predetermined position. The operator moves the object to the predetermined position within a first predetermined period and then pulls the object back. The static gesture processing unit generates a first control command to control an electronic device according to at least one static gesture determined by the object determination unit.
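The sketch below illustrates only the three units' division of labour; every threshold, the gesture database format, and the command table are assumptions rather than details from the abstract.

```python
import numpy as np

def detect_hand_object(depth, face_box, near_margin=300):
    """Object detection unit (simplified): keep pixels clearly closer to the
    camera than the operator's face (depth in millimetres, face_box = (x, y, w, h))."""
    x, y, w, h = face_box
    face_depth = np.median(depth[y:y + h, x:x + w])
    return depth < (face_depth - near_margin)          # boolean object mask

def classify_static_gesture(mask, gesture_db):
    """Object determination unit (simplified): nearest-template lookup in a
    database of binary gesture masks of the same size."""
    scores = {name: (mask == tpl).mean() for name, tpl in gesture_db.items()}
    return max(scores, key=scores.get)

def static_gesture_to_command(gesture, command_table):
    """Static gesture processing unit (simplified): map the recognised static
    gesture to a control command for the electronic device."""
    return command_table.get(gesture)
```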
Abstract:
An embodiment of the present invention provides an image device for generating velocity maps. The image device includes an image capturing group, a depth map generator, an optical flow generator, and a velocity map generator. The image capturing group includes at least one image capturer, and each image capturer of the image capturing group captures a first image at a first time and a second image at a second time. The depth map generator generates a first depth map according to the first image and a second depth map according to the second image. The optical flow generator generates a first optical flow according to the first image and the second image. The velocity map generator generates a first velocity map according to the first depth map, the second depth map, and the first optical flow, wherein the first velocity map corresponds to the first image.
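A natural way to form a velocity map is to back-project each pixel at both times and divide the 3D displacement by the frame interval. The sketch below assumes a pinhole model with known intrinsics and a nearest-neighbour lookup of the second depth map at the flow-displaced position; these details are not stated in the abstract.

```python
import numpy as np

def velocity_map(depth1, depth2, flow, fx, fy, cx, cy, dt):
    """depth1/depth2: HxW depth maps at the first/second time; flow: HxWx2
    optical flow (in pixels) from the first image to the second image."""
    h, w = depth1.shape
    u1, v1 = np.meshgrid(np.arange(w), np.arange(h))
    u2, v2 = u1 + flow[..., 0], v1 + flow[..., 1]
    u2c = np.clip(np.round(u2).astype(int), 0, w - 1)
    v2c = np.clip(np.round(v2).astype(int), 0, h - 1)
    z1, z2 = depth1, depth2[v2c, u2c]
    # Back-project both samples to 3D with the assumed pinhole model
    p1 = np.stack([(u1 - cx) * z1 / fx, (v1 - cy) * z1 / fy, z1], axis=-1)
    p2 = np.stack([(u2 - cx) * z2 / fx, (v2 - cy) * z2 / fy, z2], axis=-1)
    return (p2 - p1) / dt          # HxWx3 velocity map aligned with the first image
```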
Abstract:
An image device for generating depth images includes at least two image capturers and a rotating device. When the rotating device rotates the at least two image capturers, multiple images captured by the at least two image capturers are utilized to generate a depth image, wherein a view angle corresponding to the depth image is not less than a view angle of each image capturer of the at least two image capturers.
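One reading of the arrangement is that depth computed at each rotation angle is accumulated into a wider composite. The sketch below treats the capturers and the stereo matcher as caller-supplied callables and indexes depth columns by absolute yaw; all of this is an assumed illustration, not the patent's method.

```python
import numpy as np

def swept_depth(angles_deg, capture_pair, stereo_depth, fov_deg, cols_per_deg=10):
    """capture_pair(angle) -> (left, right); stereo_depth(left, right) -> HxW depth.
    Returns a dict of depth columns keyed by quantized absolute yaw, covering a
    view angle at least as wide as a single capture."""
    composite = {}
    for angle in angles_deg:
        depth = stereo_depth(*capture_pair(angle))
        h, w = depth.shape
        yaws = angle - fov_deg / 2 + np.arange(w) * (fov_deg / w)
        for col, yaw in enumerate(yaws):
            composite[int(round(yaw * cols_per_deg))] = depth[:, col]
    return composite
```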
Abstract:
An image capture device includes a light source, an image capture circuit, and a processor. The light source is used for emitting light. The image capture circuit is used for capturing an image corresponding to the emitted light. The processor is coupled to the light source and the image capture circuit for selectively adjusting the intensity of the light emitted by the light source according to luminance corresponding to the image and a target value.
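The adjustment can be pictured as a simple feedback loop that compares a luminance statistic of the captured image with the target value. The sketch below assumes a proportional update, a mean-luminance statistic, and normalized intensity bounds; none of these specifics appear in the abstract.

```python
import numpy as np

def adjust_light_intensity(image, current_intensity, target_luma,
                           gain=0.5, lo=0.0, hi=1.0):
    """Move the emitted light intensity toward the value that brings the
    image's mean luminance to the target (proportional step, assumed)."""
    luma = float(np.mean(image))
    error = (target_luma - luma) / max(target_luma, 1e-6)
    return float(np.clip(current_intensity * (1.0 + gain * error), lo, hi))
```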