Abstract:
A near-eye display system includes a display panel, a beam steering assembly facing the display panel, a display controller, and a beam steering controller. The beam steering assembly imparts one of a plurality of net deflection angles to incident light. The display controller drives the display panel to display a sequence of images, and the beam steering controller controls the beam steering assembly to impart a different net deflection angle for each displayed image of the sequence. When displayed within the visual perception interval, the sequence of images may be perceived as a single image having a resolution greater than that of the display panel, or having larger apparent pixel sizes that conceal the black space between pixels of the display. Alternatively, the sequence of images may represent a lightfield, with the angular information carried in the net deflection angles imparted to the images as they are projected.
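A minimal sketch of the subframe/deflection synchronization this abstract describes, assuming hypothetical BeamSteerer and DisplayPanel driver objects (none of these names, angles, or timings come from the patent):

```python
import time

class BeamSteerer:
    """Stub for the beam steering controller (illustrative only)."""
    def set_angle(self, deg):
        print(f"net deflection angle -> {deg:.2f} deg")

class DisplayPanel:
    """Stub for the display controller driving the panel (illustrative only)."""
    def show(self, image):
        print(f"showing subframe {image!r}")

PERCEPTION_INTERVAL_S = 1.0 / 60.0  # all subframes land within one perceived frame

def present_superframe(subframes, angles, steerer, panel):
    # Pair each subframe with a distinct net deflection angle; the viewer
    # integrates the slightly shifted subframes into one perceived image.
    slot = PERCEPTION_INTERVAL_S / len(subframes)
    for image, angle in zip(subframes, angles):
        steerer.set_angle(angle)
        panel.show(image)
        time.sleep(slot)

present_superframe(["A", "B", "C", "D"], [0.0, 0.05, 0.10, 0.15],
                   BeamSteerer(), DisplayPanel())
```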
Abstract:
A method and apparatus for gesture interaction with an image displayed on a painted wall is described. The method may include capturing image data of the image displayed on the painted wall and a user motion performed relative to the image. The method may also include analyzing the captured image data to determine a sequence of one or more physical movements of the user relative to the image displayed on the painted wall. The method may also include determining, based on the analysis, that the user motion is indicative of a gesture associated with the image displayed on the painted wall, and controlling a connected system in response to the gesture.
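The capture-analyze-gesture-control pipeline could look roughly like the following sketch; the movement threshold, gesture table, and helper names are illustrative assumptions, not details from the abstract:

```python
# Gesture table and movement threshold are illustrative assumptions.
GESTURES = {("right", "right"): "next_slide", ("left", "left"): "prev_slide"}

def movement_sequence(centroids, min_px=10):
    """Reduce per-frame hand positions to a coarse sequence of moves."""
    moves = []
    for (x0, _), (x1, _) in zip(centroids, centroids[1:]):
        if abs(x1 - x0) > min_px:
            moves.append("right" if x1 > x0 else "left")
    return tuple(moves)

def control_connected_system(command):
    print(f"sending command: {command}")  # stand-in for the connected system

# Centroids as would come from analyzing captured image data of the user.
centroids = [(10, 50), (40, 52), (80, 51)]
gesture = GESTURES.get(movement_sequence(centroids))
if gesture:
    control_connected_system(gesture)
```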
Abstract:
A method and apparatus for generating dynamic signage using a painted surface display system is described. The method may include capturing image data with at least a camera of a painted surface display system. The method may also include analyzing the image data to determine a real-world context proximate to a painted surface, wherein the painted surface is painted with a photo-active paint. The method may also include determining electronic signage data based on the determined real-world context. The method may also include generating a sign image from the determined electronic signage data, and driving a spatial electromagnetic modulator to emit electromagnetic stimulation in the form of the sign image to cause the photo-active paint to display the sign image.
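A rough sketch of the context-to-signage flow, with stand-in functions for the analysis, rendering, and modulator-driving steps (the context labels, signage content, and all interfaces are assumptions):

```python
# Context labels, signage content, and interfaces are illustrative assumptions.
SIGNAGE_BY_CONTEXT = {
    "crowd_waiting": "Queue starts here",
    "store_closed": "Opening hours: 9am-6pm",
}

def analyze_context(image_data):
    return "crowd_waiting"  # stand-in for the image-analysis step

def render_sign(text):
    return f"<bitmap of '{text}'>"  # stand-in for sign-image generation

def drive_modulator(sign_image):
    # The spatial electromagnetic modulator emits stimulation in the form
    # of the sign image; the photo-active paint then displays it.
    print(f"emitting {sign_image} onto the painted surface")

context = analyze_context(image_data=None)
drive_modulator(render_sign(SIGNAGE_BY_CONTEXT[context]))
```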
Abstract:
A method and apparatus for gesture interaction with a photo-active painted surface is described. The method may include driving a spatial electromagnetic modulator to emit electromagnetic stimulation in the form of an image to cause photo-active paint to display the image. The method may also include capturing, with at least a camera of a painted surface display system, image data of the image displayed on the photo-active paint applied to a surface and a user motion performed relative to the image. The method may also include analyzing the captured image data to determine a sequence of one or more physical movements of the user relative to the image displayed on the photo-active paint. The method may also include determining, based on the analysis, that the user motion is indicative of a gesture, and driving the spatial electromagnetic modulator to update the image displayed on the photo-active paint in response to the gesture.
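The closed loop this abstract adds over the previous one, display, observe, update, might be sketched as follows, with stubbed detection and display steps (the gesture names and paging behavior are illustrative):

```python
def display(image):
    print(f"modulator emits: {image}")  # photo-active paint shows the image

def detect_gesture(camera_frames):
    return "swipe_left"  # stand-in for the capture-and-analyze steps

# Closed loop: show an image, watch the user, update what the paint displays.
pages = ["page 1", "page 2", "page 3"]
index = 0
display(pages[index])
for _ in range(2):  # two illustrative interaction rounds
    gesture = detect_gesture(camera_frames=None)
    if gesture == "swipe_left":
        index = min(index + 1, len(pages) - 1)
    elif gesture == "swipe_right":
        index = max(index - 1, 0)
    display(pages[index])  # drive the modulator to update the image
```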
Abstract:
Methods and systems for acquiring sensor data using multiple acquisition modes are described. An example method involves receiving, by a co-processor and from an application processor, a request for sensor data. The request identifies at least two sensors of a plurality of sensors for which data is requested. The at least two sensors are configured to acquire sensor data in a plurality of acquisition modes, and the request further identifies for the at least two sensors respective acquisition modes for acquiring data that are selected from among the plurality of acquisition modes. In response to receiving the request, the co-processor causes the at least two sensors to acquire data in the respective acquisition modes. The co-processor receives first sensor data from a first sensor and second sensor data from a second sensor, and the co-processor provides the first sensor data and the second sensor data to the application processor.
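One plausible shape for the request and the co-processor's dispatch, sketched with illustrative sensor names and mode labels (the patent does not specify field names or modes):

```python
from dataclasses import dataclass

@dataclass
class SensorRequest:
    """Request from the application processor (field names are illustrative)."""
    sensor_ids: list[str]   # at least two sensors
    modes: dict[str, str]   # sensor_id -> chosen acquisition mode

SUPPORTED_MODES = {
    "imu": {"low_power", "high_rate"},
    "camera": {"full_res", "binned"},
}

def handle_request(req: SensorRequest) -> dict[str, str]:
    # The co-processor puts each named sensor into its requested mode,
    # collects the data, and returns it to the application processor.
    results = {}
    for sid in req.sensor_ids:
        mode = req.modes[sid]
        assert mode in SUPPORTED_MODES[sid], f"{sid} cannot acquire in {mode}"
        results[sid] = f"<{sid} data acquired in {mode} mode>"  # stand-in read
    return results

print(handle_request(SensorRequest(["imu", "camera"],
                                   {"imu": "high_rate", "camera": "binned"})))
```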
Abstract:
A modular display including a monolithic backlight module to generate lamp light and a plurality of display panels disposed over the monolithic backlight module, tiled such that each display panel abuts another display panel along at least one edge thereof to form a seam. Each display panel includes a light modulation layer disposed adjacent to the monolithic backlight module to modulate the lamp light received from a first side and to output a display image from a second side, and seam-concealing optics disposed over the second side of the light modulation layer. Other embodiments are disclosed and claimed.
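As a back-of-envelope view of what the seam-concealing optics must accomplish (an assumption for illustration; the abstract leaves the optics unspecified): if each panel's active area has width $w$, each seam has width $s$, and each panel's image is stretched to cover half of each adjacent seam, the border optics need a lateral magnification of

$$M = \frac{w + s}{w} = 1 + \frac{s}{w},$$

so a 1 mm seam between 250 mm panels calls for only about 0.4% magnification at the edges.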
Abstract:
A display system includes a wedge optical element, a photoactive layer, a light director, and a light modulator. The wedge optical element has a clear substrate. The photoactive layer receives emitted light that generates an image. The light director is disposed between the photoactive layer and the wedge optical element. The light modulator generates the emitted light and is optically coupled to the wedge optical element to direct the emitted light to an angled side of the wedge optical element. The angled side of the wedge optical element is configured to reflect the emitted light toward a backside of the photoactive layer to generate an image viewable by a user on a frontside of the photoactive layer. The light director is disposed to receive the emitted light from the angled side of the wedge optical element and direct it to propagate substantially normal to the backside of the photoactive layer.
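The turning geometry can be made concrete with the law of reflection (a simplified sketch assuming the emitted light initially travels parallel to the backside of the photoactive layer): a reflector tilted at the wedge angle $\alpha$ to the incoming ray deviates it by $2\alpha$, leaving a residual angle to the backside normal of

$$\delta = 90^{\circ} - 2\alpha,$$

which is the deviation the light director must supply; for a thin wedge (small $\alpha$) the light director does nearly all of the turning toward normal incidence.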
Abstract:
Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include, as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment, as supported by the data from the plurality of sensors and by feature-following of the selected features in the image data.
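A compact sketch of motion-aware feature selection and following, using OpenCV's corner detector and pyramidal Lucas-Kanade tracker as stand-ins for the unspecified feature machinery (the selection policy and the motion_speed threshold are assumptions):

```python
import cv2
import numpy as np

def select_features(gray, motion_speed):
    """Pick corners to follow; under fast device motion, ask for fewer,
    stronger corners (an illustrative policy, not the patent's rule)."""
    max_corners = 50 if motion_speed < 0.5 else 15
    quality = 0.01 if motion_speed < 0.5 else 0.05
    return cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                   qualityLevel=quality, minDistance=7)

# Synthetic frames stand in for the device camera.
prev = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
curr = np.roll(prev, 2, axis=1)  # simulate a small sideways device motion

pts = select_features(prev, motion_speed=0.2)  # motion_speed from IMU data
tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
flow = (tracked - pts)[status.flatten() == 1]
if flow.size:
    # Displacements of followed features feed the position/velocity estimate.
    print("mean pixel displacement:", flow.mean(axis=0))
```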
Abstract:
Embodiments of an apparatus, system, and method for creating light projection solutions for user guidance are described herein. A user may request that projected light be used to assist in a plurality of operations involving objects in the physical space around the user. A user can use voice commands and hand gestures to request that a projector project light or images on or near objects involved in one or more operations. Embodiments of the disclosure perform an image recognition process to scan the physical space around the user and to identify any user gesture performed (e.g., a user pointing at a plurality of objects, or a user holding an object). A steerable projector may then be actuated to project light or image data based on the user's request and a plurality of operations associated with the objects.
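A toy sketch of the request-to-projection path: take a parsed request, locate the indicated object, and compute aim angles for the steerable projector (the mounting position, coordinate frame, and all function names are assumptions):

```python
import math

def pan_tilt_for(target_xyz, projector_xyz=(0.0, 0.0, 2.5)):
    """Aim angles for a steerable projector from simple geometry
    (mounting height and frame are illustrative assumptions)."""
    dx, dy, dz = (t - p for t, p in zip(target_xyz, projector_xyz))
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt

def handle_request(voice_command, pointed_object_xyz):
    # Speech parsing and gesture/object recognition are stand-ins here.
    pan, tilt = pan_tilt_for(pointed_object_xyz)
    print(f"'{voice_command}': steering projector to pan={pan:.1f} deg, "
          f"tilt={tilt:.1f} deg and projecting highlight")

handle_request("light up that box", pointed_object_xyz=(1.2, 0.8, 0.0))
```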