Abstract:
Example methods and systems for adjusting a sensor viewpoint to a virtual viewpoint are provided. An example method may involve receiving a first plurality of frames from a first camera of a device, the first camera having a first viewpoint; receiving a second plurality of frames from a second camera of the device, the second camera having a second viewpoint; transforming, from the first viewpoint to a virtual viewpoint within the device, frames in the first plurality of frames based on an offset from the first camera to the virtual viewpoint; determining, in the second plurality of frames, one or more features and a movement, relative to the second viewpoint, of the one or more features; transforming, from the second viewpoint to the virtual viewpoint, the movement of the one or more features based on an offset from the second camera to the virtual viewpoint; adjusting the transformed frames by an amount that is proportional to the transformed movement; and providing for display the adjusted and transformed frames of the first plurality of frames.
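A minimal Python (NumPy) sketch of the transform-and-adjust pipeline described above, assuming pure-translation pixel offsets between each camera and the virtual viewpoint; the function names and the proportional gain are illustrative, not taken from the abstract.

import numpy as np

def shift_frame(frame, offset_px):
    # Re-express a frame at the virtual viewpoint by shifting it by the
    # (dx, dy) pixel offset from the camera to the virtual viewpoint
    # (a pure-translation approximation of the warp).
    dx, dy = offset_px
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def mean_feature_motion(prev_pts, curr_pts):
    # Frame-to-frame movement of tracked features (Nx2 pixel arrays).
    return np.mean(curr_pts - prev_pts, axis=0)

def adjust_frames(frames_cam1, motion_cam2_px, offset1_px, gain=1.0):
    # Warp first-camera frames to the virtual viewpoint, then shift them
    # by an amount proportional to the second camera's feature motion.
    # A rigid camera-to-viewpoint offset leaves frame-to-frame motion
    # unchanged, so the transformed movement equals motion_cam2_px.
    shift = tuple(np.round(gain * motion_cam2_px).astype(int))
    return [shift_frame(shift_frame(f, offset1_px), shift)
            for f in frames_cam1]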
Abstract:
Methods and systems for cross-validating sensor data are described. An example method involves receiving image data and first timing information associated with the image data, and receiving sensor data and second timing information associated with the sensor data. The method further involves determining a first estimation of motion of the mobile device based on the image data and the first timing information, and determining a second estimation of the motion of the mobile device based on the sensor data and the second timing information. Additionally, the method involves determining whether the first estimation is within a threshold variance of the second estimation. The method then involves providing an output indicative of a validity of the first timing information and the second timing information based on whether the first estimation is within the threshold variance of the second estimation.
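One way to read the cross-validation step, sketched in Python with NumPy; the speed-based motion estimates and the scalar threshold are illustrative assumptions, as the abstract does not fix the form of the motion estimate.

import numpy as np

def motion_from_images(positions_m, image_timestamps):
    # First estimation: mean device speed from visually tracked
    # positions (Nx3, metres) and the first timing information.
    dist = np.linalg.norm(np.diff(positions_m, axis=0), axis=1).sum()
    return dist / (image_timestamps[-1] - image_timestamps[0])

def motion_from_imu(accel_mag, imu_timestamps):
    # Second estimation: mean speed from crudely integrated
    # accelerometer magnitudes and the second timing information.
    dt = np.diff(imu_timestamps)
    vel = np.cumsum(accel_mag[:-1] * dt)
    return float(np.mean(np.abs(vel)))

def timing_valid(est_visual, est_inertial, threshold):
    # Output indicative of validity: the timing information is taken as
    # consistent when the two estimates agree within the threshold.
    return abs(est_visual - est_inertial) <= threshold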
Abstract:
An electronic device balances gain and exposure at an imaging sensor of the device based on detected image-capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. Balancing the gain and exposure enhances the quality of images captured by the imaging sensor, which in turn improves support for location-based functionality.
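A hypothetical heuristic, in Python, for the gain/exposure trade-off described above; the blur-risk formula, the 33 ms base exposure, and the lux normalization are invented for illustration and are not from the abstract.

def balance_gain_exposure(angular_rate, scene_distance_m, predicted_lux,
                          base_exposure_ms=33.0, max_gain=8.0):
    # Shorten exposure when motion blur is likely: fast device motion
    # or a nearby scene increases apparent pixel motion.
    blur_risk = angular_rate + 1.0 / max(scene_distance_m, 0.1)
    exposure_ms = base_exposure_ms / (1.0 + blur_risk)
    # Raise gain to recover the light lost to the shorter exposure,
    # backing off when the predicted illumination is already bright.
    brightness = min(predicted_lux / 1000.0, 1.0)
    gain = (base_exposure_ms / exposure_ms) * (1.0 - 0.5 * brightness)
    return exposure_ms, min(max(gain, 1.0), max_gain)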
Abstract:
An electronic device includes one or more imaging sensors (e.g., imaging cameras) and one or more non-image sensors, such as an inertial measurement unit (IMU), that can provide information indicative of the pose of the electronic device. The electronic device estimates its pose based on two independent sources of pose information: pose information generated at a relatively high rate based on non-visual information generated by the non-image sensors, and pose information generated at a relatively low rate based on imagery captured by the one or more imaging sensors. To achieve both a high pose-estimation rate and a high degree of pose-estimation accuracy, the electronic device adjusts a pose estimate based on the non-visual pose information at a high rate, and at a lower rate spatially smooths the pose estimate based on the visual pose information.
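A complementary-filter-style sketch in Python of the two-rate update; the blend factor alpha and the velocity-based IMU update are simplifying assumptions standing in for whatever estimator the device actually uses.

import numpy as np

class PoseEstimator:
    # Two independent sources: high-rate non-visual updates and
    # low-rate visual corrections.
    def __init__(self, alpha=0.1):
        self.position = np.zeros(3)
        self.alpha = alpha  # weight given to each visual correction

    def imu_update(self, velocity, dt):
        # High rate: dead-reckon from IMU-derived velocity.
        self.position = self.position + velocity * dt

    def visual_update(self, visual_position):
        # Lower rate: pull the estimate toward the visual pose rather
        # than snapping to it, spatially smoothing the trajectory.
        self.position = self.position + self.alpha * (visual_position - self.position)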
Abstract:
Methods and systems for sensor calibration are described. An example method involves receiving image data from a first sensor and sensor data associated with the image data from a second sensor. The image data includes data representative of a target object. The method further involves determining an object identification for the target object based on the received image data. Additionally, the method includes retrieving object data based on the object identification, where the object data includes data related to a three-dimensional representation of the target object. The method also includes determining a predicted sensor value based on the object data and the image data. Further, the method includes determining a sensor calibration value based on a difference between the received sensor data and the predicted sensor value. Moreover, the method includes adjusting the second sensor based on the sensor calibration value.
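A Python sketch assuming the second sensor is a depth sensor and a pinhole camera model; the abstract does not specify the sensor type, so the names and the model here are illustrative.

def predicted_depth_m(model_size_m, observed_size_px, focal_px):
    # Predicted sensor value from the retrieved 3-D object data and the
    # image: a pinhole model gives distance = f * real_size / pixel_size.
    return focal_px * model_size_m / observed_size_px

def calibration_value(received_depth_m, predicted_m):
    # Difference between the received sensor data and the predicted value.
    return predicted_m - received_depth_m

def adjust_sensor(raw_depth_m, cal_value_m):
    # Adjust the second sensor's output by the calibration value.
    return raw_depth_m + cal_value_m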
Abstract:
Methods and systems for acquiring sensor data using multiple acquisition modes are described. An example method involves receiving, by a co-processor and from an application processor, a request for sensor data. The request identifies at least two sensors of a plurality of sensors for which data is requested. The at least two sensors are configured to acquire sensor data in a plurality of acquisition modes, and the request further identifies, for the at least two sensors, respective acquisition modes for acquiring data, selected from among the plurality of acquisition modes. In response to receiving the request, the co-processor causes the at least two sensors to acquire data in the respective acquisition modes. The co-processor receives first sensor data from a first sensor and second sensor data from a second sensor, and the co-processor provides the first sensor data and the second sensor data to the application processor.
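A minimal Python sketch of the request flow, with the mode strings and the acquire(mode) driver interface assumed for illustration.

from dataclasses import dataclass
from typing import Dict

@dataclass
class SensorRequest:
    # Identifies the sensors to read and an acquisition mode for each,
    # e.g. {"imu": "continuous", "camera": "single_capture"}.
    modes: Dict[str, str]

class CoProcessor:
    def __init__(self, sensors):
        self.sensors = sensors  # name -> driver exposing acquire(mode)

    def handle_request(self, request: SensorRequest) -> Dict[str, bytes]:
        # Cause each identified sensor to acquire data in its requested
        # mode, then hand the results back to the application processor.
        return {name: self.sensors[name].acquire(mode)
                for name, mode in request.modes.items()}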
Abstract:
A method for controller tracking with multiple degrees of freedom includes generating depth data at an electronic device based on a local environment proximate the electronic device. A set of positional data is generated for at least one spatial feature associated with a controller based on a pose of the electronic device, as determined using the depth data, relative to the at least one spatial feature associated with the controller. A set of rotational data is received that represents three degrees-of-freedom (3DoF) orientation of the controller within the local environment, and a six degrees-of-freedom (6DoF) position of the controller within the local environment is tracked based on the set of positional data and the set of rotational data.
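A Python (NumPy) sketch of the 6DoF fusion, assuming the device pose is given as a position vector plus a 3x3 rotation matrix and the controller reports its 3DoF orientation as a quaternion; the representation choices are assumptions.

import numpy as np

def track_6dof(device_position, device_rotation, feature_offset, controller_quat):
    # Positional data: the controller's spatial feature, located in the
    # device frame from depth data, rotated into the world frame.
    position = device_position + device_rotation @ feature_offset
    # Rotational data: the controller's own 3DoF orientation report.
    return {"position": position,            # 3 translational DoF
            "orientation": controller_quat}  # + 3 rotational DoF = 6DoF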
Abstract:
An electronic device includes at least one sensor, a display, and a processor. The processor is configured to determine a dimension of a physical object along an axis based on a change in position of the electronic device when the electronic device is moved from a first end of the physical object along the axis to a second end of the physical object along the axis. A method includes capturing and displaying imagery of a physical object at an electronic device, and receiving user input identifying at least two points of the physical object in the displayed imagery. The method further includes determining, at the electronic device, at least one dimensional aspect of the physical object based on the at least two points of the physical object using a three-dimensional mapping of the physical object.
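Both measurement paths sketched in Python with NumPy; the point indices standing in for user taps on the displayed imagery are an illustrative simplification.

import numpy as np

def dimension_from_motion(position_first_end, position_second_end):
    # Dimension along the axis of motion: the device's change in
    # position between the two ends of the physical object.
    return float(np.linalg.norm(position_second_end - position_first_end))

def dimension_from_points(mapping_points, idx_a, idx_b):
    # Dimensional aspect between two user-identified points, looked up
    # in the three-dimensional mapping of the object.
    return float(np.linalg.norm(mapping_points[idx_b] - mapping_points[idx_a]))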
Abstract:
Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image, and may then provide the digital image to an application processor of the mobile device.
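A Python (NumPy) sketch of one possible embedding layout, packing the buffered sensor bytes into extra pixel rows appended to the digital image; the row-padding scheme is an assumption, not the format specified by the abstract.

import numpy as np

def embed_sensor_data(image_rgb, sensor_bytes):
    # Append the buffered IMU bytes as extra pixel rows so image data
    # and sensor data travel to the application processor in one buffer.
    h, w, c = image_rgb.shape
    payload = np.frombuffer(sensor_bytes, dtype=np.uint8)
    rows = int(np.ceil(payload.size / (w * c)))
    padded = np.zeros(rows * w * c, dtype=np.uint8)
    padded[:payload.size] = payload
    return np.vstack([image_rgb, padded.reshape(rows, w, c)])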