Abstract:
Techniques are presented for constructing a digital representation of a physical environment. In some embodiments, a method includes obtaining image data indicative of the physical environment; receiving gesture input data from a user corresponding to at least one location in the physical environment, based on the obtained image data; detecting at least one discontinuity in the physical environment near the at least one location corresponding to the received gesture input data; and generating a digital surface corresponding to a surface in the physical environment, based on the received gesture input data and the at least one discontinuity.
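The flow described above can be sketched in miniature: a user gesture indicates a location, the system looks for depth discontinuities near it, and the surface is grown out to those edges. This is an illustrative sketch over a 1-D row of depth samples, not the patent's implementation; the function names and the 0.25 m threshold are assumptions.

```python
# Hypothetical sketch: detect a depth discontinuity near a user-indicated
# location and derive a surface extent from it. Names and thresholds are
# illustrative assumptions, not the disclosed implementation.

def find_discontinuity(depth_row, location, threshold=0.25):
    """Return the index nearest `location` where adjacent depth samples
    differ by more than `threshold` (a candidate surface edge)."""
    edges = [i for i in range(len(depth_row) - 1)
             if abs(depth_row[i + 1] - depth_row[i]) > threshold]
    if not edges:
        return None
    return min(edges, key=lambda i: abs(i - location))

def surface_extent(depth_row, location, threshold=0.25):
    """Grow a flat-surface segment outward from the gesture location,
    stopping at depth discontinuities on either side."""
    left = location
    while left > 0 and abs(depth_row[left] - depth_row[left - 1]) <= threshold:
        left -= 1
    right = location
    while right < len(depth_row) - 1 and abs(depth_row[right + 1] - depth_row[right]) <= threshold:
        right += 1
    return (left, right)

# A tabletop at ~1.0 m with a wall behind it at ~3.0 m; the user taps index 3:
depth = [3.0, 3.0, 1.0, 1.02, 1.01, 1.0, 3.0, 3.0]
print(surface_extent(depth, 3))  # → (2, 5): the table spans indices 2..5
```

A real system would run an analogous search over a full depth image and fit a planar digital surface to the bounded region.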
Abstract:
An example system includes a first computing device comprising a first graphics processing unit (GPU) implemented in circuitry, and a second computing device comprising a second GPU implemented in circuitry. The first GPU is configured to determine graphics primitives of a computer graphics scene that are visible from a camera viewpoint, generate a primitive atlas that includes data representing the graphics primitives that are visible from the camera viewpoint, and shade the visible graphics primitives in the primitive atlas to produce a shaded primitive atlas. The second GPU is configured to render an image using the shaded primitive atlas.
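The division of labor above can be modeled with plain objects: the first stage culls and shades into an atlas, and the second stage renders by atlas lookup rather than re-shading. This is a minimal sketch under assumed names and a toy visibility/shading model, not GPU code.

```python
# Minimal sketch of the two-GPU split: "first GPU" builds a shaded
# primitive atlas, "second GPU" renders from it. All names, the depth-only
# visibility test, and the scalar lighting model are illustrative assumptions.

def visible(primitive, camera_z=0.0):
    # Trivial visibility proxy: the primitive lies in front of the camera.
    return primitive["depth"] > camera_z

def build_shaded_atlas(scene, light=0.8):
    """'First GPU': cull, then shade surviving primitives into an atlas
    keyed by primitive id."""
    atlas = {}
    for prim in scene:
        if visible(prim):
            shaded = tuple(round(c * light, 3) for c in prim["albedo"])
            atlas[prim["id"]] = shaded
    return atlas

def render(atlas, draw_order):
    """'Second GPU': assemble the image by looking shaded results up in
    the atlas instead of re-shading them."""
    return [atlas[pid] for pid in draw_order if pid in atlas]

scene = [
    {"id": "floor", "depth": 2.0, "albedo": (0.5, 0.5, 0.5)},
    {"id": "behind_cam", "depth": -1.0, "albedo": (1.0, 0.0, 0.0)},
]
atlas = build_shaded_atlas(scene)
print(render(atlas, ["floor", "behind_cam"]))  # → [(0.4, 0.4, 0.4)]
```

The key property the sketch preserves is that shading cost is paid once, on the first device, and the atlas is the only data that must cross to the second.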
Abstract:
An example system includes a first computing device comprising a first graphics processing unit (GPU) implemented in circuitry, and a second computing device comprising a second GPU implemented in circuitry. The first GPU is configured to perform a first portion of an image rendering process to generate intermediate graphics data and send the intermediate graphics data to the second computing device. The second GPU is configured to perform a second portion of the image rendering process to render an image from the intermediate graphics data. The first computing device may be a video game console, and the second computing device may be a virtual reality (VR) headset that warps the rendered image to produce a stereoscopic image pair.
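The console/headset split can be caricatured as follows: the first stage emits intermediate data (here a 1-D mono scanline), and the second stage warps it into a left/right pair. The horizontal-shift "warp" is a deliberately crude stand-in for real reprojection; every name here is an assumption.

```python
def first_stage(scene_width):
    """'Console GPU': perform the first portion of rendering, producing
    intermediate graphics data (here, a 1-D mono scanline)."""
    return list(range(scene_width))

def warp_to_stereo(frame, disparity=1):
    """'Headset GPU': shift the mono frame by +/- `disparity` samples to
    form a stereoscopic pair, padding at the edges (a crude stand-in for
    depth-aware reprojection warping)."""
    left = frame[disparity:] + frame[-1:] * disparity
    right = frame[:1] * disparity + frame[:-disparity]
    return left, right

mono = first_stage(5)            # [0, 1, 2, 3, 4]
left, right = warp_to_stereo(mono)
print(left, right)               # → [1, 2, 3, 4, 4] [0, 0, 1, 2, 3]
```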
Abstract:
Disclosed are a system, apparatus, and method for 3D object segmentation within an environment. Image frames are obtained from one or more depth cameras or at different times, and planar segments are extracted from data obtained from the image frames. Candidate segments comprising a non-planar object surface are identified from the extracted planar segments. In one aspect, certain extracted planar segments are identified as comprising a non-planar object surface and are referred to as candidate segments. Confidence in preexisting candidate segments is adjusted in response to determining correspondence with a candidate segment. In one aspect, one or more preexisting candidate segments are determined to comprise a surface of a preexisting non-planar object hypothesis. Confidence in the non-planar object hypothesis is updated in response to determining correspondence with a candidate segment.
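The confidence-update loop described above can be sketched simply: each incoming candidate segment either reinforces a corresponding existing hypothesis or spawns a new one. The centroid-distance correspondence test and the gain/decay constants are illustrative assumptions, not the disclosed method.

```python
# Hypothetical sketch of hypothesis confidence maintenance for non-planar
# object segmentation. Correspondence test and constants are assumptions.

def corresponds(seg_a, seg_b, max_dist=0.2):
    """Illustrative correspondence test: segment centroids are close."""
    return abs(seg_a["centroid"] - seg_b["centroid"]) <= max_dist

def update_hypotheses(hypotheses, candidate, gain=0.2, decay=0.05):
    """Raise confidence of any hypothesis the candidate corresponds to,
    mildly decay the rest, and spawn a new hypothesis if nothing matched."""
    matched = False
    for hyp in hypotheses:
        if corresponds(hyp["segment"], candidate):
            hyp["confidence"] = min(1.0, hyp["confidence"] + gain)
            matched = True
        else:
            hyp["confidence"] = max(0.0, hyp["confidence"] - decay)
    if not matched:
        hypotheses.append({"segment": candidate, "confidence": 0.5})
    return hypotheses

# A first sighting creates a hypothesis; a nearby re-sighting reinforces it.
hyps = update_hypotheses([], {"centroid": 1.0})
hyps = update_hypotheses(hyps, {"centroid": 1.1})
print(hyps[0]["confidence"])  # → 0.7
```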
Abstract:
Methods, systems, computer-readable media, and apparatuses for incremental object detection using a staged process and a band-pass feature extractor are presented. At each stage of the staged process, a different band of features from a plurality of bands of features in image data can be extracted using a dual-threshold local binary pattern operator, and compared with features of a target object within the band for a partial decision. The staged process exits if a rejection decision is made at any stage of the staged process. If no rejection decision is made in each stage of the staged process, the target object is detected. Features extracted at each stage may be from a different image for some applications.
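The early-exit structure above lends itself to a compact sketch: a dual-threshold operator sets a neighbor's bit only when its contrast with the center falls inside a band, and the cascade rejects as soon as one stage's code strays too far from its template. The specific operator definition, thresholds, and Hamming-distance test are assumptions for illustration.

```python
# Illustrative staged detector with a band-pass, dual-threshold LBP-style
# operator on a 3x3 patch. The exact operator and templates here are
# assumptions, not the disclosed definitions.

def dual_threshold_lbp(patch, t_low=2, t_high=8):
    """A neighbor sets its bit only if it exceeds the center by more than
    t_low but less than t_high -- a band-pass on local contrast."""
    center = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if t_low < n - center < t_high:
            code |= 1 << bit
    return code

def staged_detect(patch, stage_templates, tolerance=2):
    """Cascade: each stage extracts one band of features and compares it
    with the target's template for that band; a mismatch exits early."""
    for t_low, t_high, expected in stage_templates:
        code = dual_threshold_lbp(patch, t_low, t_high)
        if bin(code ^ expected).count("1") > tolerance:
            return False      # rejection decision: exit the staged process
    return True               # no stage rejected: target detected

patch = [[5, 9, 5], [9, 0, 9], [5, 9, 5]]
print(dual_threshold_lbp(patch))              # → 85 (corner bits set)
print(staged_detect(patch, [(2, 8, 85)]))     # → True
```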
Abstract:
Apparatuses and methods for fast visual simultaneous localization and mapping are described. In one embodiment, a three-dimensional (3D) target is initialized immediately from a first reference image and prior to processing a subsequent image. In one embodiment, one or more subsequent reference images are processed, and the 3D target is tracked in six degrees of freedom. In one embodiment, the 3D target is refined based on the one or more processed subsequent images.
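The initialize-then-refine idea can be illustrated in two lines of logic: back-project the first image's keypoints at an assumed depth so tracking can begin immediately, then overwrite those depths once later images constrain them. The names and flat-depth assumption are illustrative, not the disclosed formulation.

```python
# Hypothetical sketch: immediate 3-D target initialization from a single
# reference image, refined later. Assumed-depth scheme is illustrative.

def initialize_target(first_image_keypoints, assumed_depth=1.0):
    """Instantiate a 3-D target immediately from one image by back-
    projecting keypoints at an assumed depth (to be refined later)."""
    return [(x, y, assumed_depth) for (x, y) in first_image_keypoints]

def refine_target(target, observed_depths):
    """Replace assumed depths once subsequent images constrain them."""
    return [(x, y, d) for (x, y, _), d in zip(target, observed_depths)]

target = initialize_target([(1, 2), (3, 4)])
print(target)                            # → [(1, 2, 1.0), (3, 4, 1.0)]
print(refine_target(target, [0.9, 1.4]))  # → [(1, 2, 0.9), (3, 4, 1.4)]
```

The point the sketch preserves is that tracking need not wait for a second image: a usable (if coarse) 3-D target exists from frame one.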
Abstract:
Methods, systems, computer-readable media, and apparatuses for calibrating an event-based camera are presented. One example method includes the steps of receiving a calibration image comprising intensity information for a set of image elements defining the calibration image; iteratively, until a threshold number of different image elements have been projected: selecting a portion of the calibration image corresponding to a subset of image elements of the set of image elements, the subset comprising less than all image elements in the set of image elements, and comprising at least one image element not previously selected; projecting the selected portion of the calibration image onto a sensor of the event-based camera; detecting, by the sensor, the selected portion of the calibration image; generating a set of detected pixels corresponding to the detecting; and discontinuing projection of the selected portion; and determining, for a position of the event-based camera, at least one calibration parameter using the generated sets of detected pixels.
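The iterative loop in the method above has a simple skeleton: repeatedly project a small, partly novel portion, record what the sensor detects, and stop once enough distinct image elements have been shown. This sketch abstracts the projector and sensor behind caller-supplied functions; all names are assumptions.

```python
# Skeleton of the iterative portion-projection loop for event-camera
# calibration. `select_portion` and `detect` stand in for the projector
# and sensor; the concrete strategies here are illustrative assumptions.

def calibrate(image_elements, select_portion, detect, threshold):
    """Project portions one at a time until at least `threshold` distinct
    elements have been shown; return the accumulated detections, from
    which calibration parameters would then be estimated."""
    projected = set()
    detections = []
    while len(projected) < threshold:
        portion = select_portion(image_elements, projected)
        detections.append(detect(portion))   # sensor sees only this portion
        projected.update(portion)            # then projection is discontinued
    return detections

def next_unseen(elements, seen, k=3):
    """Toy selection strategy: the next k not-yet-projected elements."""
    return [e for e in elements if e not in seen][:k]

dets = calibrate(list(range(10)), next_unseen, lambda p: list(p), threshold=7)
print(dets)  # → [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

Restricting each projection to a small novel subset is what lets an event-based sensor, which responds only to intensity change, register each element distinctly.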
Abstract:
Disclosed are example methods, apparatuses, and articles of manufacture for determining and providing a suitability of an image target for Color Transfer. In an example embodiment, a method, which may be implemented using a computing device, may comprise: receiving image data representative of the image target; determining a suitability of the image target for Color Transfer based, at least in part, on one or more colors of the image data; and providing an indication indicative of the suitability of the image target for Color Transfer.
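A toy version of the suitability determination: score the target by the variety of coarsely quantized colors it contains, on the premise that a target dominated by a single color gives color transfer little to work with. The quantization grid, the score formula, and the premise itself are illustrative assumptions, not the disclosed criterion.

```python
# Hypothetical suitability score for color transfer, based on the share
# of distinct coarsely-quantized colors. Constants are assumptions.

def suitability_for_color_transfer(pixels, min_distinct=3):
    """Quantize each RGB pixel to a 4x4x4 grid and score by how many
    distinct quantized colors appear, saturating at 1.0."""
    quantized = {tuple(c // 64 for c in p) for p in pixels}
    return min(1.0, len(quantized) / min_distinct)

colorful = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
flat = [(10, 10, 10)] * 4
print(suitability_for_color_transfer(colorful))  # → 1.0
print(suitability_for_color_transfer(flat))      # → ~0.33
```

The indication the abstract mentions could then be as simple as thresholding this score before offering the target to the user.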
Abstract:
Disclosed are a system, apparatus, and method for monocular visual simultaneous localization and mapping that handles general 6DOF and panorama camera movements. A 3D map of an environment containing features with finite or infinite depth observed in regular or panorama keyframes is received. The camera is tracked in 6DOF from finite, infinite, or mixed feature sets. Upon detection of a panorama camera movement towards unmapped scene regions, a reference panorama keyframe with infinite features is created and inserted into the 3D map. When panoramic camera movement extends toward unmapped scene regions, the reference keyframe is extended with further dependent panorama keyframes. Panorama keyframes are robustly localized in 6DOF with respect to finite 3D map features. Localized panorama keyframes contain 2D observations of infinite map features that are matched with 2D observations in other localized keyframes. 2D-2D correspondences are triangulated, resulting in new finite 3D map features.
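The final step, triangulating 2D-2D correspondences of infinite features into finite 3D map features, reduces in the planar case to intersecting two bearing rays from localized keyframes. This is a 2-D geometric sketch of that step only; the full method works with 6DOF poses, and all names here are assumptions.

```python
import math

def triangulate(cam_a, ang_a, cam_b, ang_b):
    """Intersect two bearing rays in the plane: each localized keyframe
    observes the same (previously depth-less) feature at a known angle,
    and the crossing point yields a finite map feature."""
    # Ray i: p = cam_i + t * (cos ang_i, sin ang_i)
    dax, day = math.cos(ang_a), math.sin(ang_a)
    dbx, dby = math.cos(ang_b), math.sin(ang_b)
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        return None  # parallel rays: the feature stays at infinite depth
    t = ((cam_b[0] - cam_a[0]) * dby - (cam_b[1] - cam_a[1]) * dbx) / denom
    return (cam_a[0] + t * dax, cam_a[1] + t * day)

# Keyframes at the origin and at (2, 0), both observing the point (1, 1):
pt = triangulate((0, 0), math.atan2(1, 1), (2, 0), math.atan2(1, -1))
print(pt)  # ≈ (1.0, 1.0)
```

The parallel-ray case mirrors the abstract's distinction: without sufficient baseline between keyframes, an observation contributes only an infinite (bearing-only) feature, and triangulation must wait for more camera motion.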