Abstract:
An electronic device [100] estimates a depth map of an environment based on stereo depth images [410, 415] captured by depth cameras [114, 115] having exposure times that are offset from each other, in conjunction with illuminators [118, 119] pulsing illumination patterns [305, 310] into the environment. A processor [220] of the electronic device matches small sections [430, 432] of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch "cube"). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining the cost function between two stereo image patches as the difference between their binary codes. The processor minimizes the matching cost to generate a disparity map, then refines the disparity map by rejecting outliers using a decision tree with learned pixel offsets and applying subpixel refinement, yielding a depth map of the environment.
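A minimal sketch of the patch-matching cost described above, assuming a census-style binarization (each pixel compared against the patch mean) and a Hamming-distance cost; the abstract does not specify the exact binary-code construction, so patch_to_code and the brute-force disparity search below are illustrative stand-ins.

```python
import numpy as np

def patch_to_code(patch: np.ndarray) -> np.ndarray:
    """Binarize a spatio-temporal patch (T x H x W) against its mean."""
    return (patch > patch.mean()).ravel()

def matching_cost(code_a: np.ndarray, code_b: np.ndarray) -> int:
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(code_a != code_b))

def best_disparity(left_stack, right_stack, y, x, patch=5, max_disp=64):
    """Brute-force disparity search minimizing the binary matching cost.

    left_stack/right_stack: (T, H, W) arrays of consecutive depth images,
    so each patch is a spatio-temporal 'cube' spanning time and space.
    """
    h = patch // 2
    ref = patch_to_code(left_stack[:, y - h:y + h + 1, x - h:x + h + 1])
    costs = []
    for d in range(max_disp):
        if x - d - h < 0:
            break
        cand = patch_to_code(
            right_stack[:, y - h:y + h + 1, x - d - h:x - d + h + 1])
        costs.append(matching_cost(ref, cand))
    return int(np.argmin(costs)) if costs else 0
```

Using patches that span consecutive frames tends to make the codes more robust to per-frame sensor noise, which is presumably the motivation for the spatio-temporal "cube" formulation.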
Abstract:
Systems and methods are disclosed for making three-dimensional models of the inside of an ear canal using a projected pattern. A system comprises a probe adapted to be inserted into the ear canal. The probe comprises a narrow portion adapted to fit inside the ear canal and a wide portion, which may be formed by a tapered stop, adapted to be wider than the ear canal. An illumination subsystem projects a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the three-dimensional surface of the ear canal. An imaging subsystem captures a series of individual images of the pattern of light projected onto the surface of the ear canal. A computer subsystem calculates digital three-dimensional representations from the individual images and stitches them together to generate a digital three-dimensional model of the ear canal.
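The abstract leaves the stitching method unspecified; the sketch below shows one conventional building block, a Kabsch/Procrustes rigid alignment of corresponding 3-D points between consecutive frames, purely as an illustration of how per-frame representations could be merged. Correspondences are assumed to be given.

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Return R, t minimizing ||R @ src_i + t - dst_i|| over matched points.

    src, dst: (N, 3) arrays of corresponding 3-D points from two frames.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def stitch(model: np.ndarray, frame: np.ndarray, R, t) -> np.ndarray:
    """Transform a frame's points into the model frame and append them."""
    return np.vstack([model, frame @ R.T + t])
```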
Abstract:
An exemplary depth capture system (system) emits, from a first fixed position with respect to a real-world scene and within a first frequency band, a first structured light pattern onto surfaces of objects included in the real-world scene. The system also emits, from a second fixed position with respect to the real-world scene and within a second frequency band, a second structured light pattern onto the surfaces of the objects. The system detects the first and second structured light patterns using one or more optical sensors by way of first and second optical filters, respectively. The first and second optical filters are each configured to pass only one of the structured light patterns and to block the other. Based on the detection of the structured light patterns, the system generates depth data representative of the surfaces of the objects included in the real-world scene.
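As an illustration only (the processing chain is not detailed in the abstract): once each band-limited pattern has been isolated by its optical filter, per-pixel depth can be recovered by standard structured-light triangulation and the two bands fused. The focal-length and baseline parameters below are assumed placeholders, not values from the source.

```python
import numpy as np

def depth_from_shift(shift_px: np.ndarray, focal_px: float, baseline_m: float):
    """Standard structured-light triangulation: z = f * b / shift."""
    with np.errstate(divide="ignore"):
        return np.where(shift_px > 0, focal_px * baseline_m / shift_px, np.inf)

def fuse_bands(depth_a: np.ndarray, depth_b: np.ndarray) -> np.ndarray:
    """Combine the two independent band measurements: average where both are
    finite, otherwise fall back to whichever band produced a value."""
    both = np.isfinite(depth_a) & np.isfinite(depth_b)
    return np.where(both, 0.5 * (depth_a + depth_b),
                    np.where(np.isfinite(depth_a), depth_a, depth_b))
```

Because the two bands are measured independently from different fixed positions, fusing them can fill occlusions that either projector alone would leave in shadow.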
Abstract:
A foot scanning apparatus is disclosed. The foot scanning apparatus includes a main body; a transparent support installed on the main body to support a foot; a first imaging unit rotatably installed at an upper portion of the main body to acquire a first image of the upper side of the foot; a second imaging unit installed at a lower portion of the main body so as to move linearly and acquire a second image of the underside of the foot; and a controller that controls operation of the first and second imaging units and registers the first image and the second image to generate a three-dimensional model of the foot.
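A rough sketch of the capture-and-merge flow, assuming hypothetical top_camera/bottom_camera driver objects with rotate(), move(), and capture() methods and known rig extrinsics; none of these APIs come from the source.

```python
import numpy as np

def scan_foot(top_camera, bottom_camera, n_angles=12, n_positions=8):
    """Sweep the rotating top camera and the linearly moving bottom camera,
    collecting the first and second image sets described above."""
    top_images = []
    for i in range(n_angles):
        top_camera.rotate(360.0 * i / n_angles)    # degrees around the foot
        top_images.append(top_camera.capture())
    bottom_images = []
    for j in range(n_positions):
        bottom_camera.move(j / (n_positions - 1))  # normalized rail position
        bottom_images.append(bottom_camera.capture())
    return top_images, bottom_images

def merge_clouds(top_cloud, bottom_cloud, R_rig, t_rig):
    """Registration step: bring the bottom camera's points into the top
    camera's frame via the fixed rig extrinsics, then concatenate."""
    return np.vstack([top_cloud, bottom_cloud @ R_rig.T + t_rig])
```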
Abstract:
The proposed method for operating a laser rangefinder (10), in particular a handheld laser rangefinder (10), proceeds from a method in which a laser distance measuring unit of the laser rangefinder (10) determines a first distance (28a) to a first target point (30a) by emitting a laser beam (20a) in a first distance measuring direction (24a), and subsequently at least one second distance (24b) to a second, sighted target point (30b) is determined. According to the invention, an image (34b, 52a, 52b) of at least the target surroundings (36a, b) of the second target point (30b), recorded by means of a camera (32) of the laser rangefinder (10), is output on a display (14) of the laser rangefinder (10), wherein at least part of a connecting line (50) is displayed superimposed on the image (34b, 52a, 52b), the connecting line joining the first target point (30a) and the second target point (30b) in the output image (34b, 52a, 52b). A laser rangefinder (10) for carrying out the method is furthermore proposed.
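To make the overlay step concrete: a sketch, under an assumed pinhole camera model with made-up intrinsics, of projecting the two measured target points into image coordinates so the connecting line can be drawn; the source does not supply these values.

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy in pixels); placeholder values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_cam: np.ndarray) -> tuple[int, int]:
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    u = K @ point_cam
    return int(round(u[0] / u[2])), int(round(u[1] / u[2]))

def connecting_line(p1_cam, p2_cam):
    """Pixel endpoints of the line joining the first and second target
    points, ready to be rasterized onto the displayed camera image."""
    return (project(np.asarray(p1_cam, float)),
            project(np.asarray(p2_cam, float)))
```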
Abstract:
Vehicle monitoring employs three-dimensional (3D) information in a region adjacent to a vehicle to visually highlight objects that are closer to the vehicle than a threshold distance. A vehicle monitoring system includes a 3D scanner to scan the region adjacent to the vehicle and provide a 3D model including a spatial configuration of objects located within the scanned region. The vehicle monitoring system further includes an electronic display to display a portion of the scanned region using the 3D model and to visually highlight an object within the displayed portion that is located less than the threshold distance from the vehicle.
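A minimal sketch of the highlighting decision, assuming the 3D model is available as one point cloud per segmented object in the vehicle's coordinate frame; the segmentation itself is outside the abstract's scope and assumed done upstream.

```python
import numpy as np

def highlight_mask(points: np.ndarray, threshold_m: float) -> np.ndarray:
    """Boolean mask of points within threshold_m of the vehicle origin."""
    return np.linalg.norm(points, axis=1) < threshold_m

def objects_to_highlight(objects: dict[int, np.ndarray],
                         threshold_m: float) -> list[int]:
    """Ids of objects with any point closer than the threshold distance,
    i.e. the objects the display should visually highlight."""
    return [oid for oid, pts in objects.items()
            if highlight_mask(pts, threshold_m).any()]
```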
Abstract:
To determine depth of an object within a volume, structured light is projected into the volume. The structured light comprises a pattern over which the intensity of the light varies. A sensor detects light from the volume and uses variations in intensity of the detected light to correlate the detected light with the pattern. Based on the correlation, the depth of objects within the volume is determined.
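One plausible reading of the correlation step, sketched below under assumptions not in the source: a 1-D slice of the detected intensity is cross-correlated against the known projected pattern, and the best-matching shift feeds a standard triangulation.

```python
import numpy as np

def pattern_shift(detected: np.ndarray, pattern: np.ndarray) -> int:
    """Shift (in samples) maximizing normalized cross-correlation."""
    d = (detected - detected.mean()) / (detected.std() + 1e-9)
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-9)
    corr = np.correlate(d, p, mode="full")
    return int(np.argmax(corr)) - (len(p) - 1)

def depth(shift: int, focal_px: float, baseline_m: float) -> float:
    """Triangulated depth from the recovered shift (placeholder geometry)."""
    return focal_px * baseline_m / max(shift, 1)
```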
Abstract:
A detection system (S) for detecting and determining the integrity of pharmaceutical/parapharmaceutical articles comprises a conveyor device (1) for conveying and advancing articles, the conveyor device having an advancement section (10) along which the articles advance on a flat plane, in a line one after another, in an advancement direction (A). The system (S) further comprises a processor (E) for data processing; at least a colour matrix video camera (2) for acquiring images of the articles advancing along the advancement section; a laser projector (P) able to emit and project a laser beam (L) so that the laser beam (L) crosses the advancement section (10); and a high-speed linear three-dimensional video camera (3) for acquiring images of the cross-sectional profiles of the articles as they cross the laser beam.
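As a hedged illustration of what the linear 3-D camera computes per frame (the real camera does this internally): locate the laser stripe's row in each image column and stack the per-frame profiles into a height map as the article advances. Sub-pixel peak fitting and metric calibration are omitted.

```python
import numpy as np

def profile_from_frame(frame: np.ndarray) -> np.ndarray:
    """frame: (rows, cols) grayscale image containing the laser stripe.
    Returns the stripe's row per column, i.e. one cross-section profile."""
    return np.argmax(frame, axis=0)

def article_surface(frames: list[np.ndarray]) -> np.ndarray:
    """Stack per-frame profiles into a height map as the article advances
    through the laser beam, one profile per acquisition instant."""
    return np.stack([profile_from_frame(f) for f in frames])
```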
Abstract:
Systems, apparatuses, and methods are provided for developing a fingerprint database and extracting feature geometries for determining the geographic location of an end-user device. A device collects, or a processor receives, a depth map of a location in a path network (S101). A physical structure is identified within the depth map (S103). The depth map is divided, at the physical structure, into a horizontal plane at an elevation from the road level (S105). A two-dimensional feature geometry is extracted from the horizontal plane of the depth map using a linear regression algorithm, a curvilinear regression algorithm, or a machine learning algorithm (S107).
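A simplified sketch of steps S105 and S107, assuming the depth map has been converted to a point cloud with z as elevation; the horizontal "plane" is approximated by a thin z-band, and the feature geometry is a least-squares line fit (the curvilinear and machine-learned variants are analogous).

```python
import numpy as np

def horizontal_slice(points: np.ndarray, elevation: float,
                     tol: float = 0.1) -> np.ndarray:
    """Keep points whose height is within tol of the chosen elevation
    above road level (all units in meters); points is an (N, 3) array."""
    return points[np.abs(points[:, 2] - elevation) < tol]

def fit_line_feature(slice_pts: np.ndarray) -> tuple[float, float]:
    """Linear-regression feature geometry y = a*x + b over the slice's
    ground-plane footprint; returns the slope a and intercept b."""
    a, b = np.polyfit(slice_pts[:, 0], slice_pts[:, 1], deg=1)
    return float(a), float(b)
```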