Abstract:
Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include, as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment as supported by the data from the plurality of sensors and feature-following of the selected features in the image data.
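A minimal sketch of the kind of motion-informed feature selection described above, assuming OpenCV corner detection and a gyroscope reading; the thresholds, the motion_data layout, and the edge heuristic are illustrative assumptions, not the claimed method.

```python
# Minimal sketch (not the claimed method): choose image features to follow,
# biased by the device's motion data. Thresholds and the motion_data layout
# are assumptions for illustration only.
import numpy as np
import cv2  # assumed available for corner detection


def select_features(gray_frame, motion_data, max_features=100):
    """Pick corners to follow, skipping regions expected to leave the view.

    motion_data: dict with a 3-vector gyroscope reading in rad/s.
    """
    corners = cv2.goodFeaturesToTrack(gray_frame, max_features, 0.01, 10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    corners = corners.reshape(-1, 2)

    # Crude heuristic: if the device yaws one way, features near the leading
    # edge of the frame will exit the view first, so prefer features elsewhere.
    h, w = gray_frame.shape
    yaw_rate = motion_data.get("gyro", np.zeros(3))[2]
    if yaw_rate > 0.1:
        keep = corners[:, 0] < 0.8 * w
    elif yaw_rate < -0.1:
        keep = corners[:, 0] > 0.2 * w
    else:
        keep = np.ones(len(corners), dtype=bool)
    return corners[keep]
```

The selected features would then feed a tracker whose output, together with the other sensor data, supports the position and velocity estimates mentioned above.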
Abstract:
Embodiments of a tiled multi-panel display are disclosed that include first and second display panels, each including a substrate with a main portion having a main thickness, an abutting edge with a thickness less than the main thickness, and a taper surface extending between the main portion and the abutting edge, and an array of display pixels disposed in the main portion and extending at least partially around the taper surface. The abutting edge of the first display panel abuts the abutting edge of the second display panel to form a seam, and a seam-concealing optical element is disposed in a void formed by the first and second taper surfaces. Other embodiments are disclosed and claimed.
Abstract:
A multi-panel display includes at least one anchoring platform, a plurality of display panels, vibration mechanisms, and control logic. The anchoring platform(s) are to be secured to a fixed surface. The plurality of display panels is aligned to form the multi-panel display and the display panels are substantially rectangular. The vibration mechanisms are configured to vibrate the plurality of display panels along a vibration axis. The vibration mechanisms are coupled to the anchoring platform(s), and the vibration axis is common to each of the display panels in the plurality of display panels. The control logic is coupled to drive the vibration mechanisms and configured to drive the plurality of display panels to display images corresponding with positions along the vibration axis to disguise seams between the plurality of display panels.
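As a rough illustration of the control idea described above (not the claimed control logic), the image rendered on each panel can be counter-shifted by the panel's instantaneous displacement along the shared vibration axis, so the displayed content appears stationary while the physical seams sweep back and forth. The sinusoidal motion model and parameter names below are assumptions.

```python
# Illustrative only: counter-shift rendered content against a sinusoidal
# panel vibration along the common axis so displayed imagery stays put.
import math


def panel_displacement_px(t, amplitude_px, frequency_hz, phase=0.0):
    """Panel position along the vibration axis at time t (seconds)."""
    return amplitude_px * math.sin(2.0 * math.pi * frequency_hz * t + phase)


def render_offset_px(t, amplitude_px, frequency_hz, phase=0.0):
    """Offset to apply to the rendered frame so content appears stationary."""
    return -panel_displacement_px(t, amplitude_px, frequency_hz, phase)
```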
Abstract:
A user portable device includes a device chassis comprising at least one opening at a surface of the device chassis and a sensor assembly aligned with the at least one opening. The sensor assembly includes a mounting structure and a plurality of sensors mounted to the mounting structure. The sensors include at least two sensors utilized by the user portable device based on a specified geometric configuration between the at least two sensors. The user portable device further includes a mounting fastener that mounts the sensor assembly to the device chassis so as to isolate the sensor assembly from deformation of the surface of the device chassis along one or more axes during user handling, and thus aid in preventing alteration of a baseline geometric configuration of one or more sensors of the sensor assembly due to the chassis deformation.
Abstract:
Methods and systems for acquiring sensor data using multiple acquisition modes are described. An example method involves receiving, by a co-processor and from an application processor, a request for sensor data. The request identifies at least two sensors of a plurality of sensors for which data is requested. The at least two sensors are configured to acquire sensor data in a plurality of acquisition modes, and the request further identifies for the at least two sensors respective acquisition modes for acquiring data that are selected from among the plurality of acquisition modes. In response to receiving the request, the co-processor causes the at least two sensors to acquire data in the respective acquisition modes. The co-processor receives first sensor data from a first sensor and second sensor data from a second sensor, and the co-processor provides the first sensor data and the second sensor data to the application processor.
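A minimal sketch of the request/dispatch exchange described above, assuming a simple in-process representation; the field names, mode strings, and the acquire callback are hypothetical, not the actual co-processor interface.

```python
# Hypothetical sketch of a co-processor handling a multi-sensor request;
# field names, mode strings, and the acquire callback are assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SensorRequest:
    sensor_id: str
    acquisition_mode: str  # e.g. "continuous", "single-shot", "batched"


def handle_request(requests: List[SensorRequest],
                   acquire: Callable[[str, str], bytes]) -> Dict[str, bytes]:
    """Acquire data from each requested sensor in its requested mode and
    return the results, keyed by sensor id, for the application processor."""
    return {r.sensor_id: acquire(r.sensor_id, r.acquisition_mode)
            for r in requests}


# Example call (read_sensor is a hypothetical driver function):
# handle_request([SensorRequest("imu", "batched"),
#                 SensorRequest("camera0", "single-shot")],
#                acquire=read_sensor)
```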
Abstract:
An image generating system includes an electromagnetic (“EM”) modulator, a camera module and a logic engine. The EM modulator is positioned to direct EM waves to a photoactive surface to stimulate the photoactive surface to generate an image. The camera module is positioned to monitor the photoactive surface to generate image data. The logic engine is communicatively coupled to the camera module and configured to receive the image data from the camera module and analyze the image data. The logic engine is communicatively coupled to the EM modulator to command the EM modulator where to direct the EM waves in response to the image data.
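The feedback loop described above can be sketched as a simple per-pixel correction, under the assumption that the captured image and the desired image are comparable arrays; the gain and the function name below are illustrative, not the claimed logic engine.

```python
# Illustrative closed-loop step: compare the captured image of the
# photoactive surface with the target and compute where the EM modulator
# should add or remove energy. The gain and names are assumptions.
import numpy as np


def next_modulation(target, captured, gain=0.5):
    """Per-pixel drive correction: under-exposed regions get more EM energy,
    over-exposed regions get less."""
    error = target.astype(float) - captured.astype(float)
    return np.clip(gain * error, -1.0, 1.0)
```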
Abstract:
Methods and systems for detecting frame tears are described. As one example, a mobile device may include at least one camera, a sensor, a co-processor, and an application processor. The co-processor is configured to generate a digital image including image data from the at least one camera and sensor data from the sensor. The co-processor is further configured to embed a frame identifier corresponding to the digital image in at least two corner pixels of the digital image. The application processor is configured to receive the digital image from the co-processor, determine a first value embedded in a first corner pixel of the digital image, and determine a second value embedded in a second corner pixel of the digital image. The application processor is also configured to provide an output indicative of a validity of the digital image based on a comparison between the first value and the second value.
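A minimal sketch of the tear check described above, assuming a single-channel 8-bit image and that the same identifier byte is written to two opposite corners; the exact encoding is an assumption, not the patented layout.

```python
# Sketch only: the co-processor stamps a frame id into two corner pixels;
# the application processor flags a tear if the two values disagree.
# The single-channel uint8 layout is an assumption.
import numpy as np


def embed_frame_id(image, frame_id):
    """Write an 8-bit frame identifier into two opposite corner pixels."""
    stamped = image.copy()
    stamped[0, 0] = frame_id & 0xFF
    stamped[-1, -1] = frame_id & 0xFF
    return stamped


def frame_is_valid(image):
    """A mismatch means the buffer changed mid-transfer (a frame tear)."""
    return int(image[0, 0]) == int(image[-1, -1])
```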
Abstract:
Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. The processor may then provide the digital image to an application processor of the mobile device.
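A rough sketch of packing buffered sensor samples into pixels of the outgoing digital image, assuming an 8-bit image with its first row reserved for sensor bytes and a little-endian float encoding; both are assumptions for illustration, not the actual pixel layout.

```python
# Illustration only: serialize buffered sensor samples (e.g. IMU readings)
# into the first row of an 8-bit image; the reserved row and the float
# encoding are assumptions.
import struct
import numpy as np
from typing import List


def embed_sensor_data(image, samples):
    """Pack float samples into pixel bytes of the first image row."""
    payload = struct.pack("<%df" % len(samples), *samples)
    if len(payload) > image.shape[1]:
        raise ValueError("sensor payload does not fit in one pixel row")
    out = image.copy()
    out[0, :len(payload)] = np.frombuffer(payload, dtype=np.uint8)
    return out


def extract_sensor_data(image, count) -> List[float]:
    """Application-processor side: recover the packed samples."""
    raw = image[0, :4 * count].tobytes()
    return list(struct.unpack("<%df" % count, raw))
```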