Abstract:
A portable device includes a sensor, a video capture module, a processor, and a computer-readable memory that stores instructions. When executed on the processor, the instructions operate to cause the sensor to generate raw sensor data indicative of a physical quantity, cause the video capture module to capture video imagery of a reference object concurrently with the sensor generating raw sensor data when the portable device is moving relative to the reference object, and cause the processor to calculate correction parameters for the sensor based on the captured video imagery of the reference object and the raw sensor data.
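The correction step described above can be sketched as a least-squares fit between the raw sensor readings and reference values recovered from the video imagery. This is a minimal illustration, not the claimed implementation; the linear (scale and bias) correction model and the function names are assumptions.

```python
def fit_correction(raw, reference):
    """Least-squares fit of corrected = scale * raw + bias.

    `raw` holds the sensor's raw readings; `reference` holds the
    ground-truth values derived from video tracking of the
    reference object while the device moves relative to it.
    """
    n = len(raw)
    mean_r = sum(raw) / n
    mean_f = sum(reference) / n
    cov = sum((x - mean_r) * (y - mean_f) for x, y in zip(raw, reference))
    var = sum((x - mean_r) ** 2 for x in raw)
    scale = cov / var          # multiplicative correction parameter
    bias = mean_f - scale * mean_r  # additive correction parameter
    return scale, bias


def apply_correction(raw, scale, bias):
    # Apply the computed correction parameters to later raw readings.
    return [scale * x + bias for x in raw]
```

For example, readings of `[1, 2, 3]` against video-derived references of `[2.1, 4.1, 6.1]` yield a scale of 2.0 and a bias of 0.1.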
Abstract:
A technique for providing search results may include determining a first entity type, a second entity type, and a relationship type based on a compositional query. The technique may also include identifying nodes of a knowledge graph corresponding to entity references of the first entity type and entity references of the second entity type. The technique may also include determining from the knowledge graph an attribute value corresponding to the relationship type for each entity reference of the first entity type and for each entity reference of the second entity type. The technique may also include comparing the attribute value of each entity reference of the first entity type with the attribute value of each entity reference of the second entity type. The technique may also include determining one or more resultant entity references from the entity references of the first entity type based on the comparing.
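The query flow above (identify nodes by entity type, look up an attribute value per entity reference, compare across the two types, and keep the winning first-type references) can be sketched over a toy in-memory knowledge graph. The dictionary representation, the sample data, and the "greater than every second-type value" comparison are illustrative assumptions, not the patented method.

```python
# Toy knowledge graph: node id -> {"type": entity type, attr: value}.
graph = {
    "mountain_a": {"type": "mountain", "height_m": 4810},
    "mountain_b": {"type": "mountain", "height_m": 3798},
    "building_x": {"type": "building", "height_m": 828},
}


def compositional_search(graph, first_type, second_type, attr):
    # 1. Identify nodes corresponding to each entity type.
    first = [n for n, d in graph.items() if d["type"] == first_type]
    second = [n for n, d in graph.items() if d["type"] == second_type]
    # 2-3. Determine the attribute value for each entity reference
    #      and compare across the two types.
    # 4. Keep first-type references whose value exceeds every
    #    second-type value (one possible comparison semantics).
    return [n for n in first
            if all(graph[n][attr] > graph[m][attr] for m in second)]
```

A compositional query such as "mountains taller than buildings" would then resolve to both mountain nodes in this sample graph.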
Abstract:
Embodiments include a computer-implemented method for generating a three-dimensional (3D) model. The method includes receiving first and second sets of sensed position data indicative of a position of a camera device at or near a time when the camera device is used to acquire first and second images of an image pair, respectively, determining a sensed rotation matrix and/or a sensed translation vector for the image pair using the first and second sets of sensed position data, identifying a calculated camera transformation including a calculated translation vector and a calculated rotation matrix, generating a sensed camera transformation including the sensed rotation matrix and/or the sensed translation vector, and, if the sensed camera transformation is associated with a lower error than the calculated camera transformation, using the sensed camera transformation to generate the 3D model.
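The selection logic described in the method can be sketched as follows: derive a sensed translation vector from the two sets of position data, then prefer the sensed transformation only when it explains the image pair with lower error than the calculated one. The function names, the subtraction-based translation, and the error callback are assumptions for illustration, not the claimed implementation.

```python
def sensed_translation(pos_first, pos_second):
    # Translation vector between the sensed camera positions for
    # the first and second images of the image pair.
    return [b - a for a, b in zip(pos_first, pos_second)]


def select_transformation(sensed, calculated, error_fn):
    # Use the sensed camera transformation only if it is associated
    # with a lower error than the calculated camera transformation;
    # otherwise fall back to the calculated transformation.
    return sensed if error_fn(sensed) < error_fn(calculated) else calculated
```

Here `error_fn` stands in for whatever reconstruction-error measure (e.g. reprojection error) an embodiment might use to score a candidate transformation.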