Abstract:
A structured light three-dimensional (3D) depth map based on content filtering is disclosed. In a particular embodiment, a method includes receiving, at a receiver device, image data that corresponds to a structured light image. The method further includes processing the image data to decode depth information based on a pattern of projected coded light. The depth information corresponds to a depth map. The method also includes performing one or more filtering operations on the image data. An output of the one or more filtering operations includes filtered image data. The method further includes performing a comparison of the depth information to the filtered image data and modifying the depth information based on the comparison to generate a modified depth map.
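The compare-and-modify step described above might be sketched as follows. This is a minimal illustration, not the patented method: the function name, window size, and threshold are assumptions, and a median filter over the depth map itself stands in for the unspecified filtering operations on the image data.

```python
import numpy as np

def refine_depth_map(depth, window=3, threshold=10.0):
    """Hypothetical sketch: median-filter the data and modify depth
    values that disagree strongly with their filtered neighborhood."""
    h, w = depth.shape
    pad = window // 2
    padded = np.pad(depth, pad, mode="edge")
    refined = depth.copy()
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            local_median = np.median(patch)
            # Comparison step: replace depth values that deviate
            # from the filtered data by more than the threshold.
            if abs(depth[y, x] - local_median) > threshold:
                refined[y, x] = local_median
    return refined
```

For example, an isolated decoding spike in an otherwise flat depth region would be pulled back to the local median, producing the "modified depth map" the abstract refers to.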
Abstract:
Described are methods and apparatus for adjusting images of a stereoscopic image pair. The methods and apparatus may capture first and second images with first and second imaging sensors. The two imaging sensors have intrinsic and extrinsic parameters. A normalized focal distance of a reference imaging sensor may also be determined based on the intrinsic and extrinsic parameters. A calibration matrix is then adjusted based on the normalized focal distance. The calibration matrix may be applied to an image captured by an image sensor.
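The calibration-matrix adjustment could be sketched as scaling the focal-length entries of a standard 3x3 intrinsic matrix by the normalized focal distance. This is an assumption about what "adjusted based on the normalized focal distance" means; the matrix layout and values are illustrative only.

```python
import numpy as np

def adjust_calibration(K, normalized_focal):
    """Hypothetical sketch: scale the focal-length entries (fx, fy)
    of a 3x3 intrinsic calibration matrix by a normalized focal
    distance, leaving the principal point unchanged."""
    K_adj = K.astype(float).copy()
    K_adj[0, 0] *= normalized_focal  # fx
    K_adj[1, 1] *= normalized_focal  # fy
    return K_adj

# Illustrative intrinsic matrix: focal length 1000 px,
# principal point at (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
K_adjusted = adjust_calibration(K, 0.95)
```

The adjusted matrix would then be applied when rectifying an image from one of the sensors so the pair shares a consistent effective focal length.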
Abstract:
A method operational on a transmitter device is provided for projecting a composite code mask. A composite code mask on a tangible medium is obtained, where the composite code mask includes a code layer combined with a carrier layer. The code layer may include uniquely identifiable spatially-coded codewords defined by a plurality of symbols. The carrier layer may be independently ascertainable and distinct from the code layer and includes a plurality of reference objects that are robust to distortion upon projection. At least one of the code layer and carrier layer may be pre-shaped by a synthetic point spread function prior to projection. At least a portion of the composite code mask is projected, by the transmitter device, onto a target object to help a receiver ascertain depth information for the target object with a single projection of the composite code mask.
Abstract:
Methods and apparatus for sharing a bus between multiple imaging sensors include, in some aspects, a device having at least two imaging sensors, an electronic hardware processor, and an imaging sensor controller. A first clock line and a first data line operably couple the electronic hardware processor to the imaging sensor controller, and a second clock line operably couples the imaging sensor controller to the first imaging sensor and the second imaging sensor. A second data line operably couples the imaging sensor controller to the first imaging sensor. A third data line operably couples the imaging sensor controller to the second imaging sensor. The imaging sensor controller is configured to use the second clock line and second data line to send a first command to the first imaging sensor, and to use the second clock line and third data line to send a second command to the second imaging sensor.
Abstract:
An interactive display, including a cover glass having a front surface that includes a viewing area, provides an input/output (I/O) interface for a user of an electronic device. An arrangement includes a processor, a light source, and a camera disposed outside the periphery of the viewing area, coplanar with or behind the cover glass. The camera receives scattered light resulting from interaction of light outputted from the interactive display with an object, the scattered light being received by the cover glass from the object and directed toward the camera. The processor determines, from image data output by the camera, an azimuthal angle of the object with respect to an optical axis of the camera and/or a distance of the object from the camera.
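Recovering the azimuthal angle from camera image data could, under a simple pinhole assumption, reduce to the angle of the object's image point relative to the principal point. This sketch is an assumption about the geometry, not the patented computation; the function name and parameters are hypothetical.

```python
import math

def azimuth_from_pixel(px, py, cx, cy):
    """Hypothetical sketch: azimuthal angle (radians) of an object's
    image point (px, py) about the camera's optical axis, which
    projects to the principal point (cx, cy)."""
    return math.atan2(py - cy, px - cx)
```

A point directly to the right of the principal point would yield an azimuth of 0, and a point directly above it an azimuth of pi/2, consistent with the usual atan2 convention.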
Abstract:
Systems and methods for correcting stereo yaw of a stereoscopic image sensor pair using autofocus feedback are disclosed. A stereo depth of an object in an image is estimated from the disparity of the object between the images captured by each sensor of the image sensor pair. An autofocus depth to the object is found from the autofocus lens position. If the difference between the stereo depth and the autofocus depth is nonzero, one of the images is warped and the disparity is recalculated until the stereo depth and the autofocus depth to the object are substantially the same.
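The warp-and-recalculate loop might be sketched as a simple feedback iteration: adjust a yaw-correction parameter until the stereo depth it produces matches the autofocus depth. The proportional update, gain, and stopping tolerance are assumptions for illustration; the actual warp and depth models are not specified in the abstract.

```python
def correct_yaw(stereo_depth_fn, autofocus_depth, yaw=0.0, step=0.01,
                tol=1e-4, max_iters=100):
    """Hypothetical sketch: iterate a yaw correction until the stereo
    depth (a function of the applied yaw warp) substantially matches
    the autofocus depth.

    stereo_depth_fn: callable mapping a yaw correction to the stereo
    depth recomputed from the warped image's disparity (assumed)."""
    for _ in range(max_iters):
        error = stereo_depth_fn(yaw) - autofocus_depth
        if abs(error) < tol:
            break
        # Sign and gain of this proportional step depend on the
        # stereo geometry; chosen here so the toy model converges.
        yaw += step * error
    return yaw
```

With a toy linear depth model, the loop converges geometrically to the yaw at which the two depth estimates agree.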
Abstract:
Techniques are described for determining a contact location on a touch screen panel. The techniques transmit an optical signal that includes digital bits through the touch screen, and determine for which digital bits the optical power level was reduced. Based on the determined digital bits, the techniques determine the contact location on the touch screen panel.
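The bit-to-location mapping described above might be sketched as follows, assuming each transmitted bit corresponds to a known position on the panel and a touch attenuates the bits crossing it. The threshold, data layout, and averaging step are illustrative assumptions, not the patented technique.

```python
def contact_location(power_levels, nominal_power, bit_positions, drop=0.5):
    """Hypothetical sketch: find which transmitted digital bits saw a
    reduced optical power level and map them to a panel location.

    power_levels: measured power per bit.
    bit_positions: (x, y) panel position associated with each bit
    (assumed known from the scan pattern)."""
    touched = [i for i, p in enumerate(power_levels)
               if p < nominal_power * drop]
    if not touched:
        return None  # no bits attenuated: no contact detected
    # Estimate the contact point as the centroid of affected bits.
    xs = [bit_positions[i][0] for i in touched]
    ys = [bit_positions[i][1] for i in touched]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

A finger covering bits 2 and 3 of a five-bit horizontal scan would then be localized midway between those two bit positions.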