Abstract:
The invention relates to an image processing apparatus for determining a depth of a pixel of a reference image of a plurality of images representing a visual scene relative to a plurality of locations, wherein the plurality of locations define a two-dimensional grid with rows and columns and wherein the location of the reference image is associated with a reference row and a reference column of the grid. The image processing apparatus comprises a depth determiner configured to determine a first depth estimate on the basis of the reference image and a first subset of the plurality of images for determining the depth of the pixel of the reference image, wherein the images of the first subset are associated with locations in a row of the grid different from the reference row and in a column of the grid different from the reference column.
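As a concrete illustration of the subset selection described above, the following Python sketch picks out images whose grid locations lie off both the reference row and the reference column; the GridImage structure and the function name are assumptions made for this example, not terms from the abstract.

    from dataclasses import dataclass

    @dataclass
    class GridImage:
        row: int        # grid row of the capture location (assumed layout)
        col: int        # grid column of the capture location
        pixels: object  # image data, e.g. a 2-D array

    def select_first_subset(images, ref_row, ref_col):
        """Return the images associated with a row different from the reference
        row and a column different from the reference column."""
        return [im for im in images if im.row != ref_row and im.col != ref_col]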
Abstract:
The present disclosure describes structured-stereo imaging assemblies including separate imagers for different wavelengths. The imaging assembly can include, for example, multiple imager sub-arrays, each of which includes a first imager to sense light of a first wavelength or range of wavelengths and a second imager to sense light of a different second wavelength or range of wavelengths. Images acquired from the imagers can be processed to obtain depth information and/or improved accuracy. Various techniques are described that can facilitate determining whether any of the imagers or sub-arrays are misaligned.
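Hedged sketch of one possible misalignment check, under the assumption that the two imagers of a sub-array are rectified so that correctly matched features should share the same image row; the feature matches and the pixel threshold are illustrative inputs, not details taken from the abstract.

    import numpy as np

    def vertical_misalignment(matches, threshold_px=1.0):
        """matches: (N, 4) array of matched feature pairs (x1, y1, x2, y2)
        between two imagers of a sub-array. For an aligned, rectified pair
        y1 and y2 should agree, so a large median |y1 - y2| suggests that
        one of the imagers has shifted out of alignment."""
        dy = np.abs(matches[:, 1] - matches[:, 3])
        offset = float(np.median(dy))
        return offset, offset > threshold_px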
Abstract:
The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute to the segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth, and current depth versus threshold background depth modalities. Each modality may contribute a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
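A minimal sketch, assuming each modality yields a per-pixel foreground probability map, of how such factors could be combined before being handed to a global segmentation stage; the modality names and the equal default weights are assumptions for illustration only.

    import numpy as np

    def fuse_modalities(probs, weights=None):
        """probs: dict mapping a modality name (e.g. 'rgb_bg', 'chroma', 'ir_bg',
        'depth_vs_bg', 'depth_vs_threshold') to an HxW array of per-pixel
        foreground probabilities. Returns a weighted average suitable as the
        data term of a global segmentation framework such as a graph cut."""
        if weights is None:
            weights = {name: 1.0 for name in probs}
        total = sum(weights[name] for name in probs)
        return sum(weights[name] * probs[name] for name in probs) / total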
Abstract:
Embodiments may take the form of three-dimensional image sensing devices configured to capture an image including one or more objects. In one embodiment, the three-dimensional image sensing device includes a first imaging device configured to capture a first image and extract depth information for the one or more objects. Additionally, the image sensing device includes a second imaging device configured to capture a second image and determine an orientation of a surface of the one or more objects.
Abstract:
The present invention relates to a method for displaying a virtual image of three-dimensional objects in an area, using stereo recordings (23) of the area for storing (24) a pixel and a height for each point of the area. The object of the invention is to provide a method capable of displaying vertical surfaces, or even surfaces inclined slightly downwards and inwards. This object is achieved by a method using at least three different stereo recordings taken from different solid angles. For each solid angle, at least one database containing pixel-pointwise texture and height data is established. The data for displaying the virtual image are combined (26) from the different databases depending on the direction in which the virtual image is to be displayed (27).
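The direction-dependent combination can be illustrated with a small Python sketch that simply selects the database whose recording direction is closest to the requested viewing direction; the data layout and the nearest-direction rule are assumptions for this example, not the method as claimed.

    import numpy as np

    def pick_database(databases, view_dir):
        """databases: list of (recording_direction, texture_height_db) pairs,
        where recording_direction is a unit 3-vector for the solid angle of a
        stereo recording and texture_height_db holds per-point texture and
        height data. Returns the database best matching the viewing direction."""
        v = np.asarray(view_dir, dtype=float)
        v /= np.linalg.norm(v)
        scores = [float(np.dot(d, v)) for d, _ in databases]
        return databases[int(np.argmax(scores))][1]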
Abstract:
A system and method for acquiring three-dimensional (3-D) images of a scene. The system includes a projection device for projecting a locally unique pattern (LUP) onto a scene, and sensors for imaging the scene containing the LUP at two or more viewpoints. A computing device matches corresponding pixels in the images by using the local uniqueness of the pattern to produce a disparity map. A range map can then be generated by triangulating points in the imaged scene.
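A minimal sketch of the final triangulation step, assuming a rectified two-camera setup in which matching the locally unique pattern has already produced a disparity map; the focal length and baseline parameters are illustrative.

    import numpy as np

    def disparity_to_range(disparity, focal_px, baseline_m, eps=1e-6):
        """Convert a disparity map (pixels) into a range map (metres) by
        triangulation: depth = focal_length * baseline / disparity.
        Pixels with no valid disparity are left at zero."""
        disparity = np.asarray(disparity, dtype=float)
        depth = np.zeros_like(disparity)
        valid = disparity > eps
        depth[valid] = focal_px * baseline_m / disparity[valid]
        return depth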
Abstract:
A digital imaging system (10) is described that facilitates locating anchors or targets (17) in images of a scene. In one aspect, the digital imaging system makes use of differences between the properties of the surfaces of the targets and the properties of the surfaces of the objects that are to be mensurated, reconstructed, etc., to facilitate providing uniform illumination of the targets when recording a set of images of the scene, thereby reducing noise that may arise in determining the locations of the targets if they were illuminated by structured illumination. In a second aspect, the digital imaging system makes use of one or more of a plurality of algorithms to determine the locations of the targets in the images of the scene and, in turn, on the respective objects. In this aspect, the digital imaging system records two sets of images, including a baseline set and a working set.
Abstract:
A method for generating information regarding a 3D object from at least one 2D projection thereof. The method includes providing at least one 2D projection (40) of a 3D object, generating an array of numbers (50, 60) described by α_ijk = v′_i b_jk − v″_j a_ik (i, j, k = 1, 2, 3), where a_jk and b_jk are elements of matrices A and B respectively and v′_i and v″_i are elements of vectors v′ and v″ respectively, wherein the matrices (50) and vectors (60) together describe camera parameters of three views (102) of the 3D object, and employing the array to generate information regarding the 3D object (70).
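The array of numbers can be tabulated directly from the formula. The Python sketch below assumes the reconstructed index placement α_ijk = v′_i b_jk − v″_j a_ik and treats A, B, v′ and v″ as given 3x3 matrices and 3-vectors; the variable names are chosen for this example.

    import numpy as np

    def trilinear_array(A, B, v1, v2):
        """Tabulate alpha[i, j, k] = v1[i] * B[j, k] - v2[j] * A[i, k] for
        i, j, k = 0..2, where A and B are 3x3 matrices and v1, v2 stand for
        the vectors v' and v'' describing the camera parameters of the views."""
        A = np.asarray(A, dtype=float)
        B = np.asarray(B, dtype=float)
        v1 = np.asarray(v1, dtype=float)
        v2 = np.asarray(v2, dtype=float)
        return np.einsum('i,jk->ijk', v1, B) - np.einsum('j,ik->ijk', v2, A)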