Abstract:
A method and apparatus for gesture interaction with an image displayed on a painted wall are described. The method may include capturing image data of the image displayed on the painted wall and a user motion performed relative to the image. The method may also include analyzing the captured image data to determine a sequence of one or more physical movements of the user relative to the image displayed on the painted wall. The method may also include determining, based on the analysis, that the user motion is indicative of a gesture associated with the image displayed on the painted wall, and controlling a connected system in response to the gesture.
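The analysis step described above, reducing captured motion to a movement sequence and matching it against gestures associated with the displayed image, can be sketched as follows. The direction tokens, the `GESTURES` table, and the action names are illustrative assumptions, not taken from the abstract:

```python
def to_directions(points):
    """Reduce a sequence of (x, y) user positions to coarse movement tokens."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            dirs.append("right" if dx > 0 else "left")
        else:
            dirs.append("down" if dy > 0 else "up")
    return dirs

# Hypothetical gesture vocabulary associated with the displayed image.
GESTURES = {
    ("right", "right"): "next_slide",
    ("left", "left"): "previous_slide",
}

def recognize(points):
    """Map a user motion to a control action for the connected system, or None."""
    return GESTURES.get(tuple(to_directions(points)))
```

A motion of two rightward steps would map to the hypothetical `next_slide` action; unrecognized sequences yield no action.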
Abstract:
Example methods and systems for adjusting a sensor viewpoint to a virtual viewpoint are provided. An example method may involve receiving a first plurality of frames from a first camera having a first viewpoint; receiving a second plurality of frames from a second camera having a second viewpoint; transforming frames in the first plurality of frames from the first viewpoint to a virtual viewpoint within the device, based on an offset from the first camera to the virtual viewpoint; determining, in the second plurality of frames, one or more features and a movement of the one or more features relative to the second viewpoint; transforming the movement of the one or more features from the second viewpoint to the virtual viewpoint, based on an offset from the second camera to the virtual viewpoint; adjusting the transformed frames by an amount that is proportional to the transformed movement; and providing the adjusted, transformed frames of the first plurality of frames for display.
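A minimal sketch of the transform-and-adjust steps, assuming a pure-translation offset model (the actual method may involve a full rigid-body transform); all function names are hypothetical:

```python
import numpy as np

def to_virtual(points, offset):
    """Translate coordinates from a camera viewpoint to the virtual
    viewpoint by the camera-to-virtual offset (pure translation)."""
    return points + offset

def feature_movement(frames):
    """Mean displacement of tracked feature positions between the first
    and last frame of the second plurality of frames."""
    return frames[-1].mean(axis=0) - frames[0].mean(axis=0)

def stabilize(frame_points, movement, gain=1.0):
    """Adjust the transformed frame by an amount proportional to the
    transformed movement."""
    return frame_points - gain * movement
```

Here the second camera's feature movement, once expressed at the virtual viewpoint, is used to shift the first camera's transformed frames before display.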
Abstract:
Methods and systems for cross-validating sensor data are described. An example method involves receiving image data and first timing information associated with the image data, and receiving sensor data and second timing information associated with the sensor data. The method further involves determining a first estimation of motion of the mobile device based on the image data and the first timing information, and determining a second estimation of the motion of the mobile device based on the sensor data and the second timing information. Additionally, the method involves determining whether the first estimation is within a threshold variance of the second estimation. The method then involves providing an output indicative of a validity of the first timing information and the second timing information based on whether the first estimation is within the threshold variance of the second estimation.
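The cross-validation logic above can be illustrated with a toy one-dimensional example; the velocity model and the threshold value are illustrative assumptions:

```python
def estimate_velocity(positions, timestamps):
    """Motion estimate: net displacement divided by elapsed time,
    using the timing information associated with the data."""
    dt = timestamps[-1] - timestamps[0]
    return (positions[-1] - positions[0]) / dt

def cross_validate(image_estimate, sensor_estimate, threshold=0.1):
    """Output indicative of timing validity: True when the image-based
    and sensor-based motion estimates agree within the threshold."""
    return abs(image_estimate - sensor_estimate) <= threshold
```

When the two independently timed estimates diverge beyond the threshold, the timing information of one stream is flagged as suspect.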
Abstract:
A display panel to form a multi-panel display includes a rectangular pixel region with pixels for displaying images and an electronic housing including display logic. The electronic housing includes first, second, third, and fourth interconnects coupled to carry power and image signals to other electronic housings of other display panels. The first, second, third, and fourth interconnects are disposed on the same sides of the rectangular pixel region as the first, second, third, and fourth edges of the rectangular pixel region, respectively. The third edge is mechanically coupled to overhang the third interconnect by a first offset distance, and the fourth edge is mechanically coupled to overhang the fourth interconnect by a second offset distance. The first interconnect extends beyond the first edge by the first offset distance, and the second interconnect extends beyond the second edge by the second offset distance.
Abstract:
A system includes a first electronic device and a second electronic device. The first electronic device is to display a first augmented reality overlay on imagery of a local environment captured by the first electronic device, the first augmented reality overlay including a depiction of a virtual object from a first perspective that is based on a position and orientation of the first electronic device using a three-dimensional mapping of the local environment. The second electronic device is to display a second augmented reality overlay on imagery of the local environment captured by the second electronic device, the second augmented reality overlay including a depiction of the virtual object from a second perspective that is based on a position and orientation of the second electronic device using the three-dimensional mapping.
Abstract:
A method and apparatus for enabling themes using photo-active surface paint is described. The method may include capturing image data with at least a camera of a painted surface display system. The method may also include analyzing the image data to determine a real-world context proximate to a painted surface, wherein the surface is painted with a photo-active paint. The method may also include selecting a theme based on the determined real-world context. The method may also include generating a theme image, and driving a spatial electromagnetic modulator to emit electromagnetic stimulation in the form of the theme image to cause the photo-active paint to display the theme image.
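The context-to-theme selection step might look like the following sketch; the context labels, input features, thresholds, and theme names are all invented for illustration:

```python
def classify_context(brightness, motion_level):
    """Toy real-world-context classifier over features extracted from
    the camera's image data; thresholds are illustrative."""
    if brightness > 0.7:
        return "daytime"
    return "party" if motion_level > 0.5 else "evening"

# Hypothetical mapping from sensed context to a displayable theme.
THEMES = {
    "daytime": "calm_blue",
    "evening": "warm_sunset",
    "party": "vivid_neon",
}

def select_theme(context, default="neutral"):
    """Select the theme image to drive onto the photo-active paint."""
    return THEMES.get(context, default)
```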
Abstract:
Methods, apparatuses, and systems for adaptive light projection are described herein. According to embodiments of the disclosure, optical data of a physical space around a user is received. Light from a light source is then projected onto a projection area that is determined based, at least in part, on the received optical data. User commands may include requests to locate and/or track objects within the physical space around the user, and may combine an audible user command with a physical user gesture, for example a gesture identifying an object or a surface to receive the projected light.
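The projection-area determination could, for instance, favor the flattest surface visible in the optical data. This sketch scores patches of a depth map by depth variance as a flatness proxy; the patch size and scoring rule are illustrative assumptions:

```python
import numpy as np

def choose_projection_area(depth_map, patch=4):
    """Return the top-left index of the depth-map patch with the lowest
    depth variance, i.e. the flattest candidate projection surface."""
    best, best_var = None, float("inf")
    h, w = depth_map.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            var = depth_map[i:i + patch, j:j + patch].var()
            if var < best_var:
                best, best_var = (i, j), var
    return best
```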
Abstract:
A display panel for use with a multi-panel display is disclosed. The display panel includes an array of display pixels surrounded by a bezel, the array of display pixels for emitting a display image having a first size, and an optical expansion layer disposed over the array of display pixels to magnify the display image to appear to have a second size larger than the first size and to at least partially conceal the bezel. The optical expansion layer includes a first array of microlenses optically coupled to the array of display pixels to cause light from the display pixels to diverge; a second array of microlenses having complementary optical power to the first array of microlenses; and an optically transparent offset layer disposed between the first and second arrays of microlenses. Other embodiments are disclosed and claimed.
Abstract:
A method for controller tracking with multiple degrees of freedom includes generating depth data at an electronic device based on a local environment proximate the electronic device. A set of positional data is generated for at least one spatial feature associated with a controller based on a pose of the electronic device, as determined using the depth data, relative to the at least one spatial feature associated with the controller. A set of rotational data is received that represents three degrees-of-freedom (3DoF) orientation of the controller within the local environment, and a six degrees-of-freedom (6DoF) position of the controller within the local environment is tracked based on the set of positional data and the set of rotational data.
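Fusing the depth-derived positional data with the controller-reported 3DoF orientation into a 6DoF pose can be sketched as below, assuming a simplified yaw-only device rotation; the function names and frame conventions are hypothetical:

```python
import math

def rotate_yaw(v, yaw):
    """Rotate a 3D vector about the vertical axis by yaw radians."""
    x, y, z = v
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * z, y, s * x + c * z)

def controller_pose_6dof(device_position, device_yaw,
                         feature_offset, controller_orientation):
    """6DoF controller pose: position from the depth-derived spatial
    feature expressed in the device frame, orientation from the
    controller's own 3DoF report."""
    ox, oy, oz = rotate_yaw(feature_offset, device_yaw)
    px, py, pz = device_position
    return ((px + ox, py + oy, pz + oz), controller_orientation)
```

The positional and rotational streams remain independent: depth data anchors the three translational degrees of freedom while the controller supplies the three rotational ones.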
Abstract:
Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. Finally, the processor may provide the digital image to an application processor of the mobile device.
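The embedding step, packing buffered IMU samples into pixels of the digital image, can be sketched as follows; the float32 little-endian layout and the row-packing scheme are illustrative assumptions:

```python
import struct

def embed_samples(samples, width):
    """Serialize IMU samples as float32 bytes and pack them into extra
    8-bit grayscale pixel rows appended to a digital image."""
    raw = b"".join(struct.pack("<f", s) for s in samples)
    raw += b"\x00" * (-len(raw) % width)  # pad the last row to full width
    return [list(raw[i:i + width]) for i in range(0, len(raw), width)]

def extract_samples(rows, count):
    """Application-processor side: recover the samples from the pixel rows."""
    raw = bytes(b for row in rows for b in row)
    return [struct.unpack_from("<f", raw, 4 * i)[0] for i in range(count)]
```

Carrying the sensor bytes inside the image keeps the IMU data and the pixels on a single transport path to the application processor, so both timeframes arrive together.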