Abstract:
Example methods and systems are disclosed for displaying one or more indications of (i) the direction of a source of sound and (ii) the intensity level of the sound. A method may involve receiving audio data corresponding to sound detected by a wearable computing system. Further, the method may involve analyzing the audio data to determine both (i) a direction from the wearable computing system to the source of the sound and (ii) an intensity level of the sound. Still further, the method may involve causing the wearable computing system to display one or more indications of (i) the direction of the source of the sound and (ii) the intensity level of the sound.
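As a rough illustration of how such an analysis might be performed (the abstract does not specify a technique), the sketch below estimates direction from the time difference of arrival between two microphones and intensity from the RMS signal level; the microphone spacing, sample rate, and all function names are assumptions introduced here.

    import numpy as np

    SPEED_OF_SOUND_M_S = 343.0  # assumed speed of sound in air

    def estimate_direction_deg(left, right, mic_spacing_m, sample_rate_hz):
        # Cross-correlate the two channels to find the inter-microphone
        # delay, then convert that delay into a bearing off the array axis.
        corr = np.correlate(left, right, mode="full")
        lag_samples = np.argmax(corr) - (len(right) - 1)
        delay_s = lag_samples / sample_rate_hz
        # Clamp so arcsin stays in its domain despite measurement noise.
        ratio = np.clip(delay_s * SPEED_OF_SOUND_M_S / mic_spacing_m, -1.0, 1.0)
        return float(np.degrees(np.arcsin(ratio)))

    def estimate_intensity_db(samples, reference=1.0):
        # Report loudness as RMS level in decibels relative to `reference`.
        rms = np.sqrt(np.mean(np.square(samples)))
        return 20.0 * np.log10(max(rms, 1e-12) / reference)

A display layer could then render an arrow whose angle tracks the first value and whose size tracks the second.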
Abstract:
Disclosed are methods and devices for rendering interactions between virtual and physical objects on a substantially transparent display. In one embodiment, the method includes displaying a user-interface, which includes a view region, on a substantially transparent display of a wearable computing device. The method further includes displaying a virtual object in the view region at a focal length along a first line of sight and detecting a physical object at a physical distance along a second line of sight. The method still further includes determining that a relationship between the focal length and the physical distance is such that the virtual object and the physical object appear substantially co-located in a user-view through the view region and, responsive to the determination, initiating a collision action between the virtual object and the physical object.
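A minimal sketch of the co-location test described above, reduced to a distance comparison; the tolerance value and type names are assumptions, and the check that the two lines of sight actually intersect is omitted for brevity.

    from dataclasses import dataclass

    CO_LOCATION_TOLERANCE_M = 0.05  # assumed threshold for "substantially co-located"

    @dataclass
    class VirtualObject:
        focal_length_m: float  # focal length at which the object is rendered

    @dataclass
    class PhysicalObject:
        distance_m: float  # measured distance along its line of sight

    def appears_co_located(virtual, physical):
        # The relationship from the abstract: the rendered focal length and
        # the measured physical distance are close enough that both objects
        # look co-located through the view region.
        return abs(virtual.focal_length_m - physical.distance_m) <= CO_LOCATION_TOLERANCE_M

    def update(virtual, physical, initiate_collision_action):
        if appears_co_located(virtual, physical):
            initiate_collision_action(virtual, physical)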
Abstract:
An imaging device includes an image sensor and an array of wafer lenses. The image sensor has rows and columns of pixels partitioned into an array of sensor subsections. The array of wafer lenses is disposed over the image sensor. Each of the wafer lenses in the array of wafer lenses is optically positioned to focus image light onto a corresponding sensor subsection in the array of sensor subsections. Each sensor subsection includes unlit pixels that do not receive the image light focused by the wafer lenses, as well as lit pixels that do receive the image light focused by the wafer lenses. A rectangular subset of the lit pixels from each sensor subsection is arranged to capture images.
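A sketch of how the rectangular lit subsets might be carved out of a full sensor readout; the grid shape, the lit-window coordinates, and the function name are illustrative assumptions, not taken from the disclosure.

    import numpy as np

    def extract_lit_subimages(frame, grid, lit_window):
        # frame:      2-D array of raw pixel values from the full sensor
        # grid:       (rows, cols) of wafer lenses / sensor subsections
        # lit_window: (row0, col0, height, width) of the lit rectangle
        #             inside each subsection; pixels outside it are unlit
        n_rows, n_cols = grid
        sub_h = frame.shape[0] // n_rows
        sub_w = frame.shape[1] // n_cols
        r0, c0, h, w = lit_window
        subimages = []
        for i in range(n_rows):
            for j in range(n_cols):
                section = frame[i * sub_h:(i + 1) * sub_h,
                                j * sub_w:(j + 1) * sub_w]
                subimages.append(section[r0:r0 + h, c0:c0 + w])
        return subimages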
Abstract:
Reducing light damage in a shutterless imaging device includes receiving a signal from a hardware device and analyzing the signal to predict a use demand of the shutterless imaging device. In response to the analysis of the signal from the hardware device, a lens of the shutterless imaging device is adjusted. Adjusting the lens spreads out energy of far-field image light incident on an image sensor of the shutterless imaging device.
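One way the control loop might look, sketched under assumptions: the use-demand prediction is reduced to a probability, the threshold is arbitrary, and the lens driver class and its positions are hypothetical stand-ins for hardware-specific actuator calls.

    class Lens:
        # Hypothetical lens driver with two preset actuator positions.
        IN_FOCUS = 0
        DEFOCUSED = 1

        def set_position(self, position):
            ...  # drive the actuator (hardware-specific)

    def protect_sensor(lens, predicted_use_probability, threshold=0.2):
        # When the hardware signal suggests the camera is unlikely to be
        # used soon, defocus so far-field light (e.g. direct sunlight) is
        # spread across many pixels instead of concentrated on a few.
        if predicted_use_probability < threshold:
            lens.set_position(Lens.DEFOCUSED)
        else:
            lens.set_position(Lens.IN_FOCUS)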
Abstract:
Exemplary methods and systems relate to detecting physical objects near a substantially transparent head-mounted display (HMD) system and activating a collision-avoidance action to alert a user of the detected objects. Detection techniques may include receiving data from distance and/or relative movement sensors and using this data as a basis for determining an appropriate collision-avoidance action. Exemplary collision-avoidance actions may include de-emphasizing virtual objects displayed on the HMD to provide a less cluttered view of the physical objects through the substantially transparent display and/or presenting new virtual objects.
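A sketch of the de-emphasizing variant of the collision-avoidance action; the warning distance and the per-object opacity attribute are assumptions made for illustration.

    def apply_collision_avoidance(virtual_objects, nearest_obstacle_m,
                                  warn_distance_m=1.5):
        # Fade rendered objects as a physical obstacle approaches, so the
        # real scene stays visible through the transparent display.
        if nearest_obstacle_m >= warn_distance_m:
            return
        fade = nearest_obstacle_m / warn_distance_m  # 0.0 = fully faded
        for obj in virtual_objects:
            obj.opacity = min(obj.opacity, fade)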
Abstract:
Exemplary methods and systems provide for eye-tracking. An exemplary method may involve causing a projection of a beam of light onto an eye and receiving data regarding the reflection of that beam off of the eye. The method further includes correlating a pupil of the eye with the darkest region in the data, that is, the region that is darker relative to the other regions of the reflection data. Once the pupil has been correlated and the pupil location is known, the method includes executing instructions to follow the pupil as the eye moves.
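A minimal sketch of the darkest-region step, assuming the reflection data arrives as a 2-D intensity image; the smoothing window size is an assumption used only to suppress single-pixel noise.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def locate_pupil(reflection_image, window=15):
        # Smooth the reflection image, then take the darkest location of
        # the smoothed result as the pupil centre.
        smoothed = uniform_filter(reflection_image.astype(float), size=window)
        row, col = np.unravel_index(np.argmin(smoothed), smoothed.shape)
        return row, col

Following the pupil as the eye moves then reduces to calling locate_pupil on each new reflection frame.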
Abstract:
A wearable computing device includes a head-mounted display (HMD) that provides a field of view in which at least a portion of the environment of the wearable computing device is viewable. The HMD is operable to display images superimposed over the field of view. When the wearable computing device determines that a target device is within its environment, the wearable computing device obtains target device information related to the target device. The target device information may include information that defines a virtual control interface for controlling the target device and an identification of a defined area of the target device on which the virtual control interface is to be provided. The wearable computing device controls the HMD to display the virtual control interface as an image superimposed over the defined area of the target device in the field of view.
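A sketch of how the target device information might be organized and consumed; the registry contents, device identifier, field names, and the hmd.draw_overlay call are all hypothetical placeholders for whatever lookup and rendering machinery a real system provides.

    # Hypothetical registry of target device information; a real system
    # would obtain this over a network or from the device itself.
    TARGET_DEVICE_INFO = {
        "thermostat-01": {
            "controls": ["temp_up", "temp_down"],
            # Normalised (x, y, width, height) of the defined area on the
            # device over which the interface should be superimposed.
            "overlay_area": (0.10, 0.20, 0.30, 0.15),
        },
    }

    def on_target_device_detected(device_id, hmd):
        info = TARGET_DEVICE_INFO.get(device_id)
        if info is None:
            return
        # `hmd.draw_overlay` stands in for the rendering call that keeps
        # the interface registered to the device's defined area.
        hmd.draw_overlay(info["controls"], anchor=info["overlay_area"])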
Abstract:
A camera system includes an image sensor, a stop aperture, an infrared cut filter disposed between the image sensor and the stop aperture, and a lens assembly. The lens assembly has a field of view ranging between 80 degrees and 110 degrees and is disposed between the infrared cut filter on an image side of the lens assembly and the stop aperture on an object side of the lens assembly. The lens assembly includes six lenses. Four of the six lenses have positive optical power and two of the six lenses have negative optical power. The six lenses include first, second, third, fourth, fifth, and sixth lenses having first inline, second inline, third inline, fourth inline, fifth inline, and sixth inline relative positions, respectively, along an optical path through the lens assembly.
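The lens stack can be written down as a small configuration structure. One assignment consistent with the abstract is sketched below (four positive and two negative elements); the abstract does not say which inline positions carry negative power, so this particular ordering is illustrative only.

    from dataclasses import dataclass

    @dataclass
    class LensElement:
        inline_position: int  # order along the optical path
        power: str            # "positive" or "negative"

    LENS_ASSEMBLY = [
        LensElement(1, "positive"),
        LensElement(2, "negative"),
        LensElement(3, "positive"),
        LensElement(4, "positive"),
        LensElement(5, "negative"),
        LensElement(6, "positive"),
    ]

    # Sanity check against the abstract: four positive, two negative.
    assert sum(e.power == "positive" for e in LENS_ASSEMBLY) == 4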
Abstract:
A camera system includes a single pixel photo-sensor disposed in or on a substrate to acquire image data. A micro-lens is adjustably positioned above the single pixel photo-sensor to focus external scene light onto the single pixel photo-sensor. An actuator is coupled to the micro-lens to adjust a position of the micro-lens relative to the single pixel photo-sensor. A controller controls the actuator to sequentially reposition the micro-lens to focus the external scene light incident from different angles onto the single pixel photo-sensor. Readout circuitry is coupled to sequentially readout the image data associated with each of the different angles from the single pixel photo-sensor.
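A sketch of the sequential capture loop described above; position_lens and read_pixel are placeholder hooks for the actuator controller and readout circuitry, and the row/column scan order is an assumption.

    import numpy as np

    def capture_image(height, width, position_lens, read_pixel):
        # Build an image one pixel at a time: steer the micro-lens so
        # light arriving from each sample angle lands on the single
        # photo-sensor, then read that sensor value out.
        image = np.zeros((height, width))
        for r in range(height):
            for c in range(width):
                position_lens(r, c)         # actuator repositions the lens
                image[r, c] = read_pixel()  # one scene sample per position
        return image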