Abstract:
A Time-of-Flight (TOF) technique is combined with analog amplitude modulation within each pixel in a pixel array using multiple Single Photon Avalanche Diodes (SPADs) in conjunction with a single Pinned Photo Diode (PPD) in each pixel. A SPAD may be shared among multiple neighboring pixels. The TOF information is added to the received light signal by the analog domain-based single-ended to differential converter inside the pixel itself. The spatial-temporal correlation among outputs of multiple, adjacent SPADs in a pixel is used to control the operation of the PPD to facilitate recording of TOF values and range of an object. Erroneous range measurements due to ambient light are prevented by stopping the charge transfer from the PPD—and, hence, recording a TOF value—only when two or more SPADs in the pixel are triggered within a pre-defined time interval. An autonomous navigation system with multi-SPAD pixels provides improved vision for drivers under difficult driving conditions.
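The coincidence gate described above can be pictured with a short sketch. The following Python snippet is a minimal, hypothetical illustration (the window length, threshold, and function names are assumed, not taken from the abstract): a TOF value is recorded only when at least two SPAD triggers in the same pixel fall within a pre-defined interval, so uncorrelated ambient-light triggers are rejected.

```python
# Hypothetical sketch of the coincidence gate: a TOF value is recorded only
# when at least two SPADs in the same pixel fire within a pre-defined time
# window, which suppresses uncorrelated ambient-light triggers.

COINCIDENCE_WINDOW_NS = 5.0   # assumed pre-defined interval
MIN_TRIGGERED_SPADS = 2       # assumed coincidence threshold

def coincident_tof(spad_trigger_times_ns):
    """Return the TOF value to record, or None if the event is rejected.

    `spad_trigger_times_ns` holds the trigger time of each SPAD in the pixel
    for the current laser pulse (None if a SPAD did not fire).
    """
    times = sorted(t for t in spad_trigger_times_ns if t is not None)
    # Slide over the sorted trigger times looking for a group of triggers
    # that all fall inside the coincidence window.
    for i in range(len(times) - MIN_TRIGGERED_SPADS + 1):
        group = times[i:i + MIN_TRIGGERED_SPADS]
        if group[-1] - group[0] <= COINCIDENCE_WINDOW_NS:
            # Correlated detection: stop the PPD charge transfer here and
            # record the earliest trigger in the group as the TOF value.
            return group[0]
    return None  # uncorrelated (likely ambient) triggers: no TOF recorded

# Example: two of three triggered SPADs fire within the window -> TOF recorded.
print(coincident_tof([12.3, 12.9, 40.0, None]))   # -> 12.3
print(coincident_tof([12.3, None, 40.0, None]))   # -> None (rejected)
```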
Abstract:
A Dynamic Vision Sensor (DVS) pose-estimation system includes a DVS, a transformation estimator, an inertial measurement unit (IMU) and a camera-pose estimator based on sensor fusion. The DVS detects DVS events and forms frames based on a number of accumulated DVS events. The transformation estimator estimates a 3D transformation of the DVS camera based on an estimated depth and on matching confidence-level values within a camera-projection model such that at least one of a plurality of DVS events detected during a first frame corresponds to a DVS event detected during a second, subsequent frame. The IMU detects inertial movements of the DVS with respect to world coordinates between the first and second frames. The camera-pose estimator estimates a change in a pose of the camera-projection model between the first frame and the second frame by combining the estimated 3D transformation and the detected inertial movements of the DVS.
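As a rough illustration of the sensor-fusion step, the sketch below blends a DVS-derived motion estimate with the IMU measurement using a simple confidence weight. The linear blend of small rotation vectors, the function name, and the confidence parameter are assumptions for illustration only; the abstract does not specify the fusion rule.

```python
import numpy as np

# Minimal sketch of the fusion step, assuming the DVS-based transformation and
# the IMU measurement are both expressed as small rotation vectors plus
# translations between the first and second frames. The simple confidence
# weighting below is illustrative, not the patented estimator.

def fuse_pose(dvs_rotvec, dvs_trans, imu_rotvec, imu_trans, dvs_confidence):
    """Blend the DVS-estimated motion with the IMU-measured motion.

    `dvs_confidence` in [0, 1] reflects how well DVS events from the first
    frame matched events from the second frame under the camera-projection
    model; low confidence shifts the estimate toward the IMU.
    """
    w = float(np.clip(dvs_confidence, 0.0, 1.0))
    fused_rot = w * np.asarray(dvs_rotvec) + (1.0 - w) * np.asarray(imu_rotvec)
    fused_trans = w * np.asarray(dvs_trans) + (1.0 - w) * np.asarray(imu_trans)
    return fused_rot, fused_trans

# Example: DVS match confidence is moderate, so the result is a blend.
rot, trans = fuse_pose([0.01, 0.0, 0.02], [0.05, 0.0, 0.0],
                       [0.012, 0.0, 0.018], [0.04, 0.0, 0.0],
                       dvs_confidence=0.6)
```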
Abstract:
A client device configured with a neural network includes a processor, a memory, a user interface, a communications interface, a power supply and an input device, wherein the memory includes a trained neural network received from a server system that has trained and configured the neural network for the client device. A server system and a method of training a neural network are disclosed.
Abstract:
A system and a method determine a traveling time for a light pulse between a light pulse source and a pixel of a light sensor array based on a “Find Frequent Items in a Data Stream” technique. In one embodiment, raw timestamp data output from a pixel as a data stream may be temporarily stored, processed twice and then discarded to provide an exact determination of a traveling-time estimate. In another embodiment, the raw timestamp data is processed once and discarded to provide an approximate determination of a traveling-time estimate. The traveling-time estimate may be updated during processing, and the most-frequently occurring timestamp is available when processing of the data stream is complete. There is no need to keep the raw data in a memory, thereby reducing the memory requirement associated with determining the traveling time of a light pulse.
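A well-known member of the "find frequent items in a data stream" family is the Boyer-Moore-style majority vote, which keeps only a candidate value and a counter rather than the raw stream. The sketch below (function names assumed) mirrors the two modes in the abstract: a single pass yields an approximate candidate, and a second pass over the re-read stream counts its occurrences, which is exact whenever the true echo timestamp occurs in a strict majority of the readings.

```python
# Hedged sketch of applying a streaming frequent-item approach to the
# per-pixel timestamp stream. Only a candidate timestamp and a counter are
# kept, so raw timestamps never need to stay in memory.

def majority_candidate(timestamps):
    """One pass: return the current most-frequent-timestamp candidate."""
    candidate, count = None, 0
    for t in timestamps:
        if count == 0:
            candidate, count = t, 1
        elif t == candidate:
            count += 1
        else:
            count -= 1
    return candidate   # approximate answer after a single pass

def exact_most_frequent(timestamps):
    """Two passes: verify the candidate against the (re-read) stream."""
    candidate = majority_candidate(timestamps)
    occurrences = sum(1 for t in timestamps if t == candidate)
    return candidate, occurrences

# Example: echoes from the target dominate the stream, ambient hits are noise.
stream = [42, 17, 42, 42, 93, 42, 42, 8, 42]
print(exact_most_frequent(stream))   # -> (42, 6)
```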
Abstract:
A structured-light pattern for a structured-light system includes a base light pattern having a row of a plurality of sub-patterns extending in a first direction. Each sub-pattern is adjacent to at least one other sub-pattern. Each sub-pattern is different from each other sub-pattern. Each sub-pattern includes n dots in a sub-row and n dots in a sub-column in which n is an integer. Each dot is substantially a same size. Each sub-row extends in the first direction, and each sub-column extends in a second direction that is substantially orthogonal to the first direction. The dots that are aligned in a sub-column are offset in the second direction from the dots of the base light pattern that are aligned in an adjacent sub-column. In one embodiment, a size of each sub-pattern in the second direction is larger than a size of each sub-pattern in the first direction by a stretching factor.
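The geometry of the dot pattern can be sketched as follows. The snippet only illustrates how dots in adjacent sub-columns are offset in the second direction and how a stretching factor enlarges each sub-pattern vertically; the pitch, offset, and stretch values are assumed, and the uniqueness of each sub-pattern is not modeled.

```python
# Illustrative sketch only: generate (x, y) dot centers for one row of n x n
# sub-patterns in which dots of adjacent sub-columns are offset in the second
# (vertical) direction and each sub-pattern is stretched vertically.

def dot_centers(num_subpatterns, n=4, pitch=1.0, column_offset=0.5,
                stretch=1.5):
    """Return a list of (x, y) dot centers for the base light pattern row."""
    centers = []
    for s in range(num_subpatterns):          # sub-patterns along direction 1
        x0 = s * n * pitch                    # left edge of this sub-pattern
        for col in range(n):                  # sub-columns within it
            x = x0 + col * pitch
            # Adjacent sub-columns are shifted against each other vertically.
            y_shift = column_offset if (s * n + col) % 2 else 0.0
            for row in range(n):              # dots within a sub-column
                y = (row * pitch + y_shift) * stretch
                centers.append((x, y))
    return centers

dots = dot_centers(num_subpatterns=3)
print(len(dots))   # 3 sub-patterns * 4 * 4 dots = 48
```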
Abstract:
A system and a method are disclosed for a structured-light system to estimate depth in an image. An image is received in which the image is of a scene onto which a reference light pattern has been projected. The projection of the reference light pattern includes a predetermined number of particular sub-patterns. A patch of the received image and a sub-pattern of the reference light pattern are matched based on either a hardcode template matching technique or a probability that the patch corresponds to the sub-pattern. If a lookup table is used, the table may be a probability matrix, may contain precomputed correlation scores or may contain precomputed class IDs. An estimate of depth of the patch is determined based on a disparity between the patch and the sub-pattern.
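The following is a minimal sketch of the matching and depth steps, assuming a plain correlation score stands in for the hardcode-template or lookup-table matching described above and that depth follows the usual triangulation relation depth = focal_length x baseline / disparity; the calibration numbers are placeholders.

```python
import numpy as np

# Hedged sketch: correlate an image patch against reference sub-patterns, then
# triangulate depth from the resulting horizontal disparity. The matcher and
# the calibration values are illustrative assumptions.

def best_match(patch, subpatterns):
    """Return the index of the reference sub-pattern most correlated with `patch`."""
    p = (patch - patch.mean()).ravel()
    scores = [float(np.dot(p, (s - s.mean()).ravel())) for s in subpatterns]
    return int(np.argmax(scores))

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth from the patch-to-sub-pattern disparity."""
    return focal_length_px * baseline_m / disparity_px

# Example with assumed calibration values.
patch = np.random.rand(4, 4)
subpatterns = [np.random.rand(4, 4) for _ in range(8)]
idx = best_match(patch, subpatterns)
depth_m = depth_from_disparity(disparity_px=12.0,
                               focal_length_px=800.0, baseline_m=0.075)
```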
Abstract:
A structured-light pattern for a structured-light system includes a base light pattern that includes a row of a plurality of sub-patterns extending in a first direction. Each sub-pattern is adjacent to at least one other sub-pattern, and each sub-pattern is different from each other sub-pattern. Each sub-pattern includes a first number of portions in a sub-row and a second number of portions in a sub-column. Each sub-row extends in the first direction and each sub-column extends in a second direction that is substantially orthogonal to the first direction. Each portion may be a first-type portion or a second-type portion. A size of a first-type portion is larger in the first direction and in the second direction than a size of a second-type portion in the first direction and in the second direction. In one embodiment, a first-type portion is a black portion and the second-type portion is a white portion.
Abstract:
A color-edge contrast preserver includes a demosaicing module, a color-correcting module, a converter module and a chromatic-denoising module. The demosaicing module may demosaic a red-white-blue (RWB) pixel image of an image. The color-correcting module may color correct the demosaiced RWB pixel image and may produce a red-green-blue (RGB) pixel image from the color-corrected demosaiced RWB pixel image. The converter module may convert the RGB pixel image to a hue-saturation-value (HSV) pixel image and may generate a similarity kernel ΔY. The chromatic-denoising module may denoise a red pixel image and a blue pixel image of the RWB pixel image using the similarity kernel ΔY.
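One way to picture the chromatic-denoising step is a luminance-guided weighted average: weights computed from differences in the HSV value channel (standing in for the similarity kernel ΔY) constrain how strongly the red and blue planes are smoothed across edges. The window radius, sigma, and function name below are assumptions for illustration, not the patented kernel.

```python
import numpy as np

# Hedged sketch of chroma denoising guided by a luminance-similarity kernel:
# weights derived from value-channel differences (a stand-in for dY) steer a
# weighted average of the red and blue planes, so edges preserved in luminance
# also constrain the chroma smoothing.

def chroma_denoise(chan, value_plane, radius=2, sigma=0.1):
    """Denoise one chroma plane using similarity weights from the value plane."""
    h, w = chan.shape
    out = np.empty_like(chan)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            dY = value_plane[i0:i1, j0:j1] - value_plane[i, j]
            weights = np.exp(-(dY ** 2) / (2.0 * sigma ** 2))
            out[i, j] = np.sum(weights * chan[i0:i1, j0:j1]) / np.sum(weights)
    return out

# Example usage on small random planes.
v = np.random.rand(16, 16)
r_denoised = chroma_denoise(np.random.rand(16, 16), v)
b_denoised = chroma_denoise(np.random.rand(16, 16), v)
```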
Abstract:
A stack-type image sensor may include a photodiode and a meta-filter. The photodiode may include a first photodiode configured to absorb first light of a first wavelength band and a second photodiode disposed on the first photodiode and configured to absorb second light of a second wavelength band. The meta-filter may include a first meta-filter disposed in a lower portion of the first photodiode and configured to reflect the first light of the first wavelength band to the first photodiode.
Abstract:
An image sensor includes a first array of pixels exhibiting a first sensitivity and a second array of pixels exhibiting a second sensitivity, wherein the first array of pixels is electrically separated from the second array of pixels. Methods of use and devices using the image sensor are disclosed.