Abstract:
Systems, methods, and non-transitory media are provided for low-power visual tracking systems. An example method can include receiving one or more images captured by each image sensor system from a set of image sensor systems on a first device, the one or more images capturing a set of patterns on a second device, wherein the first device has lower power requirements than the second device, the set of patterns having a predetermined configuration on the second device; determining, from the one or more images captured by each image sensor system, a set of pixels corresponding to the set of patterns on the second device; determining, based on the set of pixels corresponding to the set of patterns, a location and relative pose in space of each pattern; and determining, based on the location and relative pose of each pattern, a pose of the first device relative to the second device.
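As a purely illustrative sketch of how such a pipeline could be organized (the abstract does not prescribe an implementation), the following Python code assumes a hypothetical pattern detector detect_pattern_pixels, per-sensor calibration matrices, and a known 3D layout of the patterns on the second device, and uses a generic perspective-n-point solve as a stand-in for the pose-determination step:

# Illustrative sketch only; not the patented method itself.
import numpy as np
import cv2

def estimate_device_pose(images, camera_matrices, pattern_layout_3d, detect_pattern_pixels):
    """Estimate the pose of the first (low-power) device relative to the second device.

    images                : one image per image sensor system on the first device
    camera_matrices       : 3x3 intrinsic matrix per sensor (assumed calibration data)
    pattern_layout_3d     : dict pattern_id -> (N, 3) pattern points in the second
                            device's frame (the predetermined configuration)
    detect_pattern_pixels : callable(image) -> dict pattern_id -> (N, 2) pixel coords
    """
    poses = []
    for image, K in zip(images, camera_matrices):
        # Step 1: find the pixels corresponding to each pattern in this sensor's image.
        detections = detect_pattern_pixels(image)
        for pattern_id, pixels in detections.items():
            object_points = pattern_layout_3d[pattern_id].astype(np.float32)
            image_points = pixels.astype(np.float32)
            # Step 2: recover the location and relative pose of this pattern from
            # its pixel observations (PnP used here purely as an example).
            ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
            if ok:
                poses.append((rvec, tvec))
    # Step 3: combine per-pattern results into a single relative pose. A real system
    # would fuse rotations and weight by detection quality; averaging translations
    # here is only a crude stand-in.
    translations = np.array([t.ravel() for _, t in poses])
    return translations.mean(axis=0) if len(poses) else None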
Abstract:
In one example, an image sensor module comprises one or more covers having at least a first opening and a second opening, a first lens mounted in the first opening and having a first field of view (FOV) centered at a first axis having a first orientation, a second lens mounted in the second opening and having a second FOV centered at a second axis having a second orientation different from the first orientation, a first image sensor housed within the one or more covers and configured to detect light via the first lens, and a second image sensor housed within the one or more covers and configured to detect light via the second lens. The first image sensor and the second image sensor are configured to provide, based on the detected light, image data of a combined FOV larger than each of the first FOV and the second FOV.
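As a rough illustration of the combined-FOV idea (not taken from the source), assuming the two fields of view lie in a common plane and overlap, the union of the two fields spans roughly half of each FOV plus the angle between the two optical axes:

# Minimal sketch: combined field of view of two lenses whose optical axes diverge
# by a known angle, assuming both FOVs lie in the same plane and overlap.
def combined_fov_deg(fov1_deg: float, fov2_deg: float, axis_separation_deg: float) -> float:
    """Angular span covered by the union of two overlapping fields of view."""
    span = fov1_deg / 2.0 + axis_separation_deg + fov2_deg / 2.0
    # If the FOVs do not overlap, the union is not one contiguous span;
    # this sketch simply reports the outer envelope in that case too.
    return span

# Example: two 80-degree lenses with axes 60 degrees apart cover roughly 140 degrees,
# larger than either FOV alone.
print(combined_fov_deg(80.0, 80.0, 60.0))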
Abstract:
Various aspects of the present disclosure generally relate to a sensor module. In some aspects, a sensor module may include a collar configured to be attached to a camera module for a user device. The collar may include a first opening that is configured to align with an aperture of a camera of the camera module, and a second opening. The sensor module may include a sensor embedded in the collar. The sensor may be aligned with the second opening of the collar. Numerous other aspects are provided.
Abstract:
An interactive display, including a cover glass having a front surface that includes a viewing area, provides an input/output (I/O) interface for a user of an electronic device. An arrangement includes a processor, a light source, and a camera disposed outside the periphery of the viewing area, coplanar with or behind the cover glass. The camera receives scattered light resulting from interaction of light output from the interactive display with an object, the scattered light being received by the cover glass from the object and directed toward the camera. The processor determines, from image data output by the camera, an azimuthal angle of the object with respect to an optical axis of the camera and/or a distance of the object from the camera.
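As a generic illustration of how an azimuthal and off-axis angle could be read off the camera's image data under a simple pinhole model (the abstract does not specify this mapping; cx, cy, and focal_px are assumed calibration values):

import math

def azimuth_and_polar_from_pixel(u, v, cx, cy, focal_px):
    """Map a pixel (u, v) to angles about the camera's optical axis (pinhole model).

    cx, cy   : principal point in pixels (assumed known from calibration)
    focal_px : focal length in pixels (assumed known from calibration)
    Returns (azimuthal angle, off-axis polar angle) in degrees.
    """
    dx, dy = u - cx, v - cy
    azimuth = math.degrees(math.atan2(dy, dx))                        # angle around the optical axis
    polar = math.degrees(math.atan2(math.hypot(dx, dy), focal_px))    # angle away from the optical axis
    return azimuth, polar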
Abstract:
An image sensor includes a planar sensor array; a lens configured to form an optical image on the planar sensor array and characterized by a locus of focal points on a curved surface; and a cover glass with multiple thickness levels or multiple cover glasses of different sizes. The one or more cover glasses are configured to shift the locus of focal points at large field angles, such that the planar sensor array intersects the locus of focal points at multiple points across a large field of view (FOV), yielding multiple zones of best focus on the planar sensor array.
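As a textbook-level illustration of why glass thickness shifts where best focus lands (the abstract itself does not state this formula): in the paraxial approximation, a plane-parallel glass of thickness t and refractive index n placed in a converging beam displaces the focal point away from the lens by approximately

\Delta z \approx t \left( 1 - \frac{1}{n} \right)

so sensor regions covered by different glass thicknesses have their focal points shifted by different amounts, allowing the planar array to intersect the curved locus of focal points in more than one zone. For example, with t = 0.4 mm and n = 1.5, \Delta z \approx 0.13 mm.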
Abstract:
An optical stylus may be capable of providing active illumination for a touch/proximity sensing apparatus. The optical stylus also may be capable of determining a tilt angle of the optical stylus and/or an amount of pressure exerted upon the optical stylus. In some examples, an optical stylus may determine a tilt angle and/or pressure according to changes in optical flux distributions inside the optical stylus. In some examples, an optical stylus may include a deformable tip. The deformable tip and/or associated features may be capable of altering optical flux distributions inside the optical stylus in response to applied pressure and/or optical stylus tilt. In some implementations, the optical flux provided to the light guide by the optical stylus may vary according to pressure applied to the optical stylus.
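Purely as an illustration of mapping an internal optical flux distribution to tilt and pressure (the ring-of-detectors layout, the linear pressure model, and all names below are assumptions, not the disclosed design):

import math

def tilt_and_pressure_from_flux(flux_samples, baseline_total_flux, pressure_gain):
    """Illustrative mapping from an internal flux distribution to tilt and pressure.

    flux_samples        : list of (angle_rad, flux) pairs from detectors assumed to be
                          arranged around the stylus tip
    baseline_total_flux : total flux with the stylus vertical and unpressed
    pressure_gain       : assumed calibration constant (pressure per unit flux change)
    """
    total = sum(flux for _, flux in flux_samples)
    # Tilting shifts the flux distribution toward one side; use its circular centroid.
    cx = sum(flux * math.cos(a) for a, flux in flux_samples) / max(total, 1e-9)
    cy = sum(flux * math.sin(a) for a, flux in flux_samples) / max(total, 1e-9)
    tilt_direction = math.degrees(math.atan2(cy, cx))
    tilt_magnitude = math.hypot(cx, cy)   # zero when flux is evenly distributed
    # Pressure on the deformable tip changes the total flux; assume a linear model.
    pressure = (total - baseline_total_flux) * pressure_gain
    return tilt_direction, tilt_magnitude, pressure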
Abstract:
A touch- and hover-sensitive sensor system is provided. The system may include a planar light guide that has a plurality of light sources located along a first edge of the light guide and a plurality of light sensors located along a second edge of the light guide, orthogonal to the first edge. The light guide may include light-turning arrangements configured to redirect light passing through a first side of the light guide from, and along, orthogonal directions within the light guide. A controller may illuminate proper subsets of the light sources; light that is emitted from the first side and that encounters an object, e.g., a fingertip, may be reflected back into the first side and then redirected to the light sensors. Based on which light sensors detect the highest redirected, reflected light intensity and which light sources are active, the controller may determine the XY or XYZ location of the object.
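A minimal sketch of the scan-and-peak idea described above, assuming hypothetical activate and read_sensors callables and leaving the mapping from indices to physical positions to the guide's geometry:

def locate_object(light_sources, light_sensors, activate, read_sensors, threshold):
    """Estimate the XY location of an object over the light guide (illustrative only).

    light_sources : indices of sources along the first edge (one axis of the guide)
    light_sensors : indices of sensors along the orthogonal edge (the other axis)
    activate      : callable(source_index) -> None, turns on a single source
    read_sensors  : callable() -> list of intensities, one per sensor
    """
    best = None
    for src in light_sources:
        activate(src)
        intensities = read_sensors()
        peak_sensor = max(light_sensors, key=lambda s: intensities[s])
        peak_value = intensities[peak_sensor]
        if peak_value > threshold and (best is None or peak_value > best[0]):
            # The active source index gives one coordinate, the peak sensor index the other.
            best = (peak_value, src, peak_sensor)
    if best is None:
        return None
    _, x_index, y_index = best
    return x_index, y_index   # map indices to physical XY via the guide's geometry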
Abstract:
In some aspects, a head mounted device may include an eye portion configured to face an eye of a user wearing the head mounted device, where the eye portion includes a display. The head mounted device may include at least one light emitter configured to emit light for illuminating at least a portion of a nose of the user. The head mounted device may include at least one image sensor configured to capture an image of at least a portion of the nose of the user for use in determining a shift or a rotation of the head mounted device relative to the user. Numerous other aspects are provided.
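One generic way to estimate a shift between a reference nose image and a current one is FFT-based phase correlation; this is a standard technique shown only for illustration, not necessarily the method used by the device:

import numpy as np

def estimate_shift(reference_image, current_image):
    """Estimate the (row, col) translation between two grayscale images via phase correlation."""
    f_ref = np.fft.fft2(reference_image)
    f_cur = np.fft.fft2(current_image)
    cross_power = f_ref * np.conj(f_cur)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap indices in the upper half of each axis around to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
    return tuple(shifts)   # sign convention depends on argument order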
Abstract:
An image sensor device includes two or more image sensor arrays (or two or more regions of an image sensor array) and a low-power processor in a same package for capturing two or more images of an object, such as an eye of a user, using light in two or more wavelength bands, such as the visible, near-infrared, and short-wave infrared bands. The image sensor device includes one or more lens assemblies and/or a beam splitter for forming an image of the object on each of the two or more image sensor arrays. The image sensor device also includes one or more filters configured to select light from multiple wavelength bands for imaging by the respective image sensor arrays.
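As a hedged sketch of how the described configuration (multiple arrays, wavelength bands, filters, shared package) might be represented in software, with all names and values purely illustrative:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorArrayConfig:
    name: str
    pass_band_nm: Tuple[int, int]      # wavelength band selected by the array's filter
    resolution: Tuple[int, int]

@dataclass
class ImageSensorDevice:
    # Two or more arrays (or regions of one array) packaged with a low-power processor.
    arrays: List[SensorArrayConfig] = field(default_factory=list)
    shares_lens_via_beam_splitter: bool = True

# Example configuration (illustrative values only).
device = ImageSensorDevice(arrays=[
    SensorArrayConfig("visible", (400, 700), (640, 480)),
    SensorArrayConfig("near_infrared", (750, 1000), (640, 480)),
    SensorArrayConfig("short_wave_infrared", (1000, 1700), (320, 240)),
])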