Abstract:
An apparatus and a method are provided. The apparatus includes a light source configured to project light in a changing pattern that reduces the light's noticeability; collection optics through which light passes and forms an epipolar plane with the light source; and an image sensor configured to receive light passed through the collection optics to acquire image information and depth information simultaneously. The method includes projecting light by a light source in a changing pattern that reduces the light's noticeability; passing light through collection optics and forming an epipolar plane between the collection optics and the light source; and receiving in an image sensor light passed through the collection optics to acquire image information and depth information simultaneously.
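The abstract does not specify how the changing pattern is generated; a minimal sketch of one possibility (the grid size, seeding scheme, and function names below are illustrative assumptions, not taken from the source) is to re-permute the scan order of the projected spots on every frame, so that the projected energy is spread over time and no fixed figure is visible, while each spot still lies on its fixed epipolar line:

    import numpy as np

    def scan_order(num_spots, frame_index, seed=0):
        """Return a pseudo-random ordering of the projected spots for one frame.

        Re-shuffling the scan order every frame spreads the projected
        energy over time, making the pattern harder to notice, while each
        spot still lands on its fixed epipolar line for depth recovery.
        """
        rng = np.random.default_rng(seed + frame_index)  # fresh permutation per frame
        return rng.permutation(num_spots)

    # First three frames of a 16-spot scan line, each in a different order.
    for frame in range(3):
        print(frame, scan_order(16, frame))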
Abstract:
A client device configured with a neural network includes a processor, a memory, a user interface, a communications interface, a power supply and an input device, wherein the memory includes a trained neural network received from a server system that has trained and configured the neural network for the client device. A server system and a method of training a neural network are disclosed.
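As a minimal sketch of the division of labor the abstract describes, assuming nothing about the actual network architecture or transport (the logistic-regression "network", the file-based hand-off, and all names below are illustrative): the server trains the parameters, and the client only loads them and runs inference:

    import numpy as np

    # Server side: train a tiny logistic-regression "network" by gradient descent.
    def train_on_server(x, y, lr=0.1, epochs=2000):
        w, b = np.zeros(x.shape[1]), 0.0
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # forward pass
            err = p - y                             # gradient of the log loss
            w -= lr * (x.T @ err) / len(y)
            b -= lr * err.mean()
        return w, b

    # Client side: no training code, only loading and inference.
    def infer_on_client(w, b, x):
        return (1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5).astype(int)

    x = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 0, 1, 1])
    w, b = train_on_server(x, y)      # training happens on the server
    np.savez("model.npz", w=w, b=b)   # stands in for transmission to the client
    m = np.load("model.npz")
    print(infer_on_client(m["w"], m["b"], x))  # expected: [0 0 1 1]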
Abstract:
Using the same image sensor to capture both a two-dimensional (2D) image of a three-dimensional (3D) object and 3D depth measurements for the object. A laser point-scans the surface of the object with light spots, which are detected by a pixel array in the image sensor to generate the 3D depth profile of the object using triangulation. Each row of pixels in the pixel array forms an epipolar line of the corresponding laser scan line. Timestamping provides a correspondence between the pixel location of a captured light spot and the respective scan angle of the laser to remove any ambiguity in triangulation. An Analog-to-Digital Converter (ADC) in the image sensor produces a multi-bit output in the 2D mode and a binary output in the 3D mode to generate timestamps. Strong ambient light is rejected by switching the image sensor from a 3D linear mode to a 3D logarithmic mode.
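A worked sketch of the triangulation step, under a simplified geometry that is an assumption of this example rather than the patent's exact equation: the projector and camera are separated by a known baseline, the scan angle comes from the laser controller, and the camera's view angle comes from the pixel column that the timestamp ties to that scan angle:

    import math

    def depth_from_triangulation(theta_deg, q_pixels, baseline_m, focal_px):
        """Depth of a laser spot from the scan angle and the imaged pixel column.

        Illustrative geometry: projector and camera separated by `baseline_m`
        along the x axis, both angles measured from the optical axis, and the
        camera view angle recovered from the pixel offset `q_pixels` and the
        focal length expressed in pixel units, `focal_px`.
        """
        tan_theta = math.tan(math.radians(theta_deg))  # laser scan angle
        tan_beta = q_pixels / focal_px                 # camera view angle
        return baseline_m / (tan_theta + tan_beta)

    # Example: 10 cm baseline, spot imaged 120 px off-center, laser at 20 deg.
    print(depth_from_triangulation(20.0, 120, 0.10, 1400.0))  # ~0.22 m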
Abstract:
Using the same image sensor to capture a two-dimensional (2D) image and three-dimensional (3D) depth measurements for a 3D object. A laser point-scans the surface of the object with light spots, which are detected by a pixel array in the image sensor to generate the 3D depth profile of the object using triangulation. Each row of pixels in the pixel array forms an epipolar line of the corresponding laser scan line. Timestamping provides a correspondence between the pixel location of a captured light spot and the respective scan angle of the laser to remove any ambiguity in triangulation. An Analog-to-Digital Converter (ADC) in the image sensor operates as a Time-to-Digital Converter (TDC) to generate timestamps. A timestamp calibration circuit is provided on board to record the propagation delay of each column of pixels in the pixel array and to provide the necessary corrections to the timestamp values generated during 3D depth measurements.
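A minimal sketch of how such a calibration might be applied (the zero-reference convention and all names are illustrative assumptions): timestamp an event known to reach every column simultaneously, store each column's offset from the fastest column, and subtract that offset from later 3D-mode timestamps:

    import numpy as np

    def calibrate_column_delays(raw_ts_of_simultaneous_event):
        """Per-column propagation delay, measured by timestamping an event
        that is known to reach every column at the same instant.

        The fastest column is taken as the zero reference; every other
        column's offset from it is the correction to subtract later.
        """
        raw = np.asarray(raw_ts_of_simultaneous_event, dtype=float)
        return raw - raw.min()

    def corrected_timestamp(raw_ts, column, delays):
        """Apply the stored per-column correction to a 3D-mode timestamp."""
        return raw_ts - delays[column]

    # Example with illustrative numbers: a test pulse hits all 8 columns at
    # once but is registered at slightly different counter values per column.
    delays = calibrate_column_delays([5, 7, 6, 9, 5, 8, 6, 7])
    print(corrected_timestamp(142, column=3, delays=delays))  # 142 - 4 = 138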
Abstract:
The Time-of-Flight (TOF) technique is combined with analog amplitude modulation within each pixel in an image sensor. The pixel may be a two-tap pixel or a one-tap pixel. Two photoelectron receiver circuits in the pixel receive respective analog modulating signals. The distribution of the received photoelectron charge between these two circuits is controlled by the difference (or ratio) of the two analog modulating voltages. The differential signals generated in this manner within the pixel are modulated in the time domain for TOF measurement. Thus, the TOF information is added to the received light signal by an analog-domain single-ended-to-differential converter inside the pixel itself. The TOF-based measurement of range and its resolution are controllable by changing the duration of modulation. An autonomous navigation system with these features may provide improved vision for drivers under difficult driving conditions such as low light, fog, bad weather, or strong ambient light.
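A worked example of how a two-tap charge split can encode arrival time, under an assumed complementary-ramp modulation (the ramp shape and all names below are illustrative; the abstract only requires that the split be controlled by the difference or ratio of the two analog voltages):

    C = 299_792_458.0  # speed of light, m/s

    def tof_from_charge_ratio(q_a, q_b, t_mod):
        """Recover time of flight from the two-tap charge split.

        Assumed modulation: over the window `t_mod`, tap A's modulating
        voltage ramps down while tap B's ramps up, so light arriving at
        time t splits its photocharge in proportion (1 - t/t_mod) : (t/t_mod).
        The ratio of the two collected charges then gives t directly,
        independent of the received light amplitude.
        """
        return t_mod * q_b / (q_a + q_b)

    def distance_m(q_a, q_b, t_mod):
        return 0.5 * C * tof_from_charge_ratio(q_a, q_b, t_mod)

    # Example: a 100 ns window; the pixel collects 3x as much charge on
    # tap A as on tap B, so the pulse arrived a quarter of the way in.
    print(distance_m(q_a=3.0, q_b=1.0, t_mod=100e-9))  # ~3.75 m

Consistent with the abstract's last claim about modulation duration, t_mod sets the measurement: a longer window extends the unambiguous range (0.5 * c * t_mod), while a shorter window maps the same charge-ratio precision onto a smaller time span, giving finer range resolution.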
Abstract:
Using the same image sensor to capture a two-dimensional (2D) image and three-dimensional (3D) depth measurements for a 3D object. A laser point-scans the surface of the object with light spots, which are detected by a pixel array in the image sensor to generate the 3D depth profile of the object using triangulation. Each row of pixels in the pixel array forms an epipolar line of the corresponding laser scan line. Timestamping provides a correspondence between the pixel location of a captured light spot and the respective scan angle of the laser to remove any ambiguity in triangulation. An Analog-to-Digital Converter (ADC) in the image sensor operates as a Time-to-Digital Converter (TDC) to generate timestamps. When the epipolar line is misaligned or curved, multiple TDC arrays acquire timestamps of multiple pixels (in multiple rows) substantially simultaneously. Multiple timestamp values are reconciled to obtain a single timestamp value for a light spot.
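The abstract leaves the reconciliation rule open; below is a minimal sketch using the median as one robust illustrative choice (the function name and the None-for-no-trigger convention are assumptions):

    import statistics

    def reconcile_timestamps(candidates):
        """Collapse the timestamps captured for one light spot by several
        rows' TDCs into a single value.

        The median is used here as one robust illustrative choice, since
        it discards a stray early or late trigger from a misaligned row.
        """
        valid = [t for t in candidates if t is not None]  # rows that fired
        return statistics.median(valid) if valid else None

    # Example: three adjacent rows timestamp the same spot; one row
    # triggered late on a reflection.
    print(reconcile_timestamps([1041, 1043, 1102]))  # -> 1043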
Abstract:
A unit pixel of a stacked image sensor includes a stacked photoelectric conversion unit and first and second signal generating units. The stacked photoelectric conversion unit includes first, second and third photoelectric conversion elements that are stacked on each other. The first, second and third photoelectric conversion elements collect first, second and third photocharges based on first, second and third components of incident light. The first signal generating unit generates a first pixel signal based on the first photocharges and a first signal node, and generates a second pixel signal based on the second photocharges and the first signal node. The second signal generating unit generates a third pixel signal based on the third photocharges and a second signal node. At least a portion of the second signal generating unit is shared by the first signal generating unit.
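A small behavioral sketch of the shared-node readout order implied by the abstract (the signal names, the reset step, and the unity conversion gain are illustrative assumptions; this models charge-to-voltage conversion only):

    def read_stacked_pixel(q1, q2, q3, conversion_gain=1.0):
        """Illustrative readout order for the shared-node unit pixel.

        The first signal node is used twice, for the first and then the
        second photocharges, which is what sharing part of the second
        signal generating unit with the first makes possible; the third
        photocharge is read out through the second signal node.
        """
        signals = []
        node1 = q1 * conversion_gain  # transfer first photocharge to node 1
        signals.append(node1)         # first pixel signal
        node1 = q2 * conversion_gain  # reset, then second photocharge to node 1
        signals.append(node1)         # second pixel signal
        node2 = q3 * conversion_gain  # third photocharge to node 2
        signals.append(node2)         # third pixel signal
        return signals

    # Example: photocharges from the three stacked conversion elements.
    print(read_stacked_pixel(q1=120.0, q2=80.0, q3=200.0))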