Abstract:
A range sensor and a method thereof. The range sensor includes a light source configured to project a plurality of sheets of light at an angle within a field of view (FOV); an image sensor, wherein the image sensor is offset from the light source; collection optics; and a controller connected to the light source, the image sensor, and the collection optics, and configured to simultaneously determine a range of a distant object based on direct time-of-flight (TOF) and a range of a near object based on triangulation.
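To make the two ranging modes concrete, the following is a minimal sketch (not from the patent; the function names, baseline, and focal length are assumptions) of how a controller could report a direct-TOF range for a distant object and a triangulated range for a near one:

```python
# Minimal sketch of the two ranging modes; all constants are illustrative.
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_time_s: float) -> float:
    """Direct TOF: range is half the round-trip optical path."""
    return C * round_trip_time_s / 2.0

def range_from_triangulation(baseline_m: float, focal_px: float,
                             disparity_px: float) -> float:
    """Triangulation using the light-source/image-sensor offset as the
    baseline B: z = f * B / d for disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# Far objects return a measurable TOF; near objects produce a large,
# well-resolved disparity, so both ranges can be computed in parallel.
far_range = range_from_tof(200e-9)                          # ~30 m
near_range = range_from_triangulation(0.05, 1400.0, 35.0)   # ~2 m
```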
Abstract:
A Dynamic Vision Sensor (DVS) pose-estimation system includes a DVS, a transformation estimator, an inertial measurement unit (IMU), and a camera-pose estimator based on sensor fusion. The DVS detects DVS events and forms frames based on a number of accumulated DVS events. The transformation estimator estimates a 3D transformation of the DVS camera based on an estimated depth and on matched confidence-level values within a camera-projection model, such that at least one of a plurality of DVS events detected during a first frame corresponds to a DVS event detected during a second, subsequent frame. The IMU detects inertial movements of the DVS with respect to world coordinates between the first and second frames. The camera-pose estimator estimates the camera pose by combining the change in the pose of the camera-projection model between the first frame and the second frame, obtained from the estimated transformation, with the detected inertial movements of the DVS.
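As a rough illustration of the sensor-fusion step (a sketch under assumptions; the abstract does not specify the weighting scheme or the 6-DoF increment representation), the pose change estimated from matched DVS events can be blended with the IMU-integrated motion:

```python
import numpy as np

def fuse_pose_increment(dvs_delta: np.ndarray, imu_delta: np.ndarray,
                        dvs_confidence: float) -> np.ndarray:
    """Blend two 6-DoF pose increments [tx, ty, tz, rx, ry, rz].
    dvs_confidence in [0, 1] stands in for the abstract's
    confidence-level values; the linear blend is an assumption."""
    w = float(np.clip(dvs_confidence, 0.0, 1.0))
    return w * dvs_delta + (1.0 - w) * imu_delta

# Between the first and second frames:
dvs_delta = np.array([0.010, 0.0, 0.020, 0.0010, 0.0, 0.0020])  # event matching
imu_delta = np.array([0.012, 0.0, 0.018, 0.0012, 0.0, 0.0019])  # IMU integration
pose_increment = fuse_pose_increment(dvs_delta, imu_delta, dvs_confidence=0.7)
```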
Abstract:
A method of reconstructing a three-dimensional image using a structured-light pattern system is provided as follows. A class identifier of an observed pixel in an image captured by a camera is extracted. The observed pixel has a coordinate (x, y) on the captured image. A first relative position of the x coordinate of the observed pixel in a tile domain of the captured image is calculated. A second relative position of one of a plurality of dots in a tile domain of a reference image is calculated using the extracted class identifier. A disparity of the observed pixel is calculated using the first relative position and the second relative position.
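A minimal sketch of the disparity step (the tile width, lookup table, and names are assumptions; the abstract does not give the tile geometry):

```python
TILE_W = 16  # assumed tile width in pixels

# Assumed lookup: class identifier -> the reference dot's x position
# within its tile on the reference image.
REF_DOT_X_IN_TILE = {0: 2, 1: 5, 2: 9, 3: 13}

def disparity(x_observed: int, class_id: int) -> int:
    """Disparity from the two in-tile ('tile domain') relative positions."""
    first_rel = x_observed % TILE_W            # observed pixel, in-tile x
    second_rel = REF_DOT_X_IN_TILE[class_id]   # matched reference dot
    return first_rel - second_rel

# Depth then follows from the usual z = f * B / disparity relation.
d = disparity(x_observed=203, class_id=1)      # (203 % 16) - 5 = 6
```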
Abstract:
An image sensor includes a plurality of a first type of diodes and a time-resolving sensor. The time-resolving sensor outputs first and second reset signals and first and second measurement signals. The two reset signals respectively represent the reset-charge levels of a first floating diffusion and a second floating diffusion. The measurement signals are output in response to the diodes detecting at least one incident photon. First and second time-of-flight (TOF) signals are formed by respectively subtracting the first and second reset signals from the first and second measurement signals. A first ratio of the magnitude of the first TOF signal to the sum of the magnitudes of the first and second TOF signals is proportional to the TOF of the detected photon, and a second ratio of the magnitude of the second TOF signal to the sum of the magnitudes of the first and second TOF signals is proportional to the TOF of the detected photon.
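Numerically, the TOF recovery described here can be sketched as follows (the gating-window constant and names are assumptions, not from the abstract):

```python
def tof_from_signals(meas1: float, reset1: float,
                     meas2: float, reset2: float,
                     gate_window_s: float) -> float:
    """Form the two TOF signals by subtracting the reset levels, then
    map the ratio S1 / (S1 + S2) onto an assumed gating window."""
    s1 = meas1 - reset1   # first TOF signal
    s2 = meas2 - reset2   # second TOF signal
    return (s1 / (s1 + s2)) * gate_window_s

# Example: ratio 0.25 of a 100 ns window -> 25 ns round trip (~3.7 m).
tof = tof_from_signals(0.9, 0.1, 2.5, 0.1, gate_window_s=100e-9)
```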
Abstract:
A Time-of-Flight (TOF) technique is combined with analog amplitude modulation within each pixel in a pixel array using multiple Single Photon Avalanche Diodes (SPADs) in conjunction with a single Pinned Photo Diode (PPD) in each pixel. A SPAD may be shared among multiple neighboring pixels. The TOF information is added to the received light signal by an analog-domain single-ended-to-differential converter inside the pixel itself. The spatiotemporal correlation among the outputs of multiple adjacent SPADs in a pixel is used to control the operation of the PPD to facilitate recording of TOF values and the range of an object. Erroneous range measurements due to ambient light are prevented by stopping the charge transfer from the PPD, and hence recording a TOF value, only when two or more SPADs in the pixel are triggered within a pre-defined time interval. An autonomous navigation system with multi-SPAD pixels provides improved vision for drivers under difficult driving conditions.
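The ambient-light rejection logic reduces to a coincidence test; a minimal sketch (the window length and SPAD count are assumed values):

```python
def coincident(trigger_times_s: list[float],
               window_s: float = 2e-9, min_spads: int = 2) -> bool:
    """True if at least `min_spads` SPAD triggers fall within `window_s`.
    Only then is the PPD charge transfer stopped and a TOF recorded;
    isolated triggers are treated as ambient or dark counts."""
    times = sorted(trigger_times_s)
    for i in range(len(times) - min_spads + 1):
        if times[i + min_spads - 1] - times[i] <= window_s:
            return True
    return False

coincident([10.0e-9, 10.8e-9])   # True: correlated laser return
coincident([10.0e-9, 55.0e-9])   # False: uncorrelated ambient hits
```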
Abstract:
An image sensor includes a plurality of nanoantennas that satisfy sub-wavelength conditions. Each of the nanoantennas includes a diode and a transistor. Each diode is either a PN diode or a PIN diode.
Abstract:
A method and a system are disclosed for detecting a depth of an object illuminated by at least one first light pulse. Detection, by a first row of pixels of a 2D pixel array, of light reflected from the object illuminated by the at least one first light pulse is enabled for a first predetermined period of time, during which the first row of pixels forms an epipolar line of a scanning line of the first light pulse. Enabling of the detection by the first row of pixels for the first predetermined period of time occurs a second predetermined period of time after a beginning of a pulse cycle T of the at least one first light pulse. Detection signals are generated corresponding to the detected light reflected from the object, and the generated detection signals are used to determine a depth of the object.
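The timing relationship can be sketched as follows (the cycle length and both predetermined periods are assumed values, not given by the abstract):

```python
T = 1e-6          # assumed pulse cycle, s
GATE = 200e-9     # assumed first predetermined period (row-enable window)
DELAY = 300e-9    # assumed second predetermined period after cycle start

def row_enable_window(cycle_start_s: float) -> tuple[float, float]:
    """(open, close) times of the detection gate for the pixel row that
    forms the epipolar line of the current scanning line."""
    open_t = cycle_start_s + DELAY
    return open_t, open_t + GATE

# The matched row only integrates inside its gate, which suppresses
# ambient light arriving outside the expected return window.
gate = row_enable_window(cycle_start_s=0.0)   # (3e-07, 5e-07)
```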
Abstract:
Methods, systems, and devices are disclosed that include a substrate and a computational device mounted on the substrate, with one or more nanoelements formed within a first surface of the substrate, the one or more nanoelements having a diameter of 100 nm to 5,000 nm. In some embodiments, a heat dissipator may be mounted on the substrate, the heat dissipator having at least one nanostructure, with the one or more nanoelements forming a thermally conductive pathway between the computational device and the heat dissipator.
Abstract:
A device includes an electronic integrated circuit, the electronic integrated circuit including an optical demultiplexer and at least one photodetector optically coupled to the optical demultiplexer. The optical demultiplexer may have at least one nanostructured layer able to receive an incoming optical signal and separate the incoming optical signal into a first separated optical signal and a second separated optical signal. The device may have a first photodetector and a second photodetector, where the first photodetector may receive the first separated optical signal and the second photodetector may receive the second separated optical signal.
Abstract:
Provided are systems, methods, and apparatuses for non-scattering nanostructures of silicon pixel image sensors. In one or more examples, the systems, devices, and methods include forming a metal layer on a substrate layer of the pixel, the metal layer to reflect electromagnetic radiation incident on the pixel; forming a photodetector on a silicon layer of the pixel, the photodetector to generate photoelectrons based on the electromagnetic radiation; and forming a passivation layer over the silicon layer, the passivation layer including a thin film dielectric. In one or more examples, the systems, devices, and methods include forming a nanostructure on the passivation layer, the nanostructure to allow the electromagnetic radiation to pass through the nanostructure and steer the electromagnetic radiation linearly towards the photodetector, and forming a microlens on the nanostructure, the microlens including at least one of a flat coat layer or a curved lensing layer.