Abstract:
Methods and apparatus are described that enable augmented or virtual reality based on a light field. A view-dependent geometric proxy is used by a mobile device, such as a smartphone, during the process of inserting a virtual object from the light field into the real-world images being acquired. For example, a mobile device includes a processor and a camera coupled to the processor. The processor is configured to define a view-dependent geometric proxy, record images with the camera to produce recorded frames and, based on the view-dependent geometric proxy, render the recorded frames with an inserted light field virtual object.
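As a rough illustration of the view-dependent aspect described above, the sketch below picks, for the current camera pose, the light-field view whose capture direction best matches the viewing direction toward the proxy. The function name and nearest-view policy are assumptions for illustration only; practical light-field renderers typically blend several nearby views rather than selecting one.

```python
import numpy as np

def nearest_lightfield_view(camera_dir, view_dirs):
    """Hypothetical nearest-view selection: return the index of the
    light-field capture direction most aligned with the direction from
    the current camera toward the view-dependent geometric proxy."""
    camera_dir = camera_dir / np.linalg.norm(camera_dir)
    dots = [np.dot(camera_dir, v / np.linalg.norm(v)) for v in view_dirs]
    return int(np.argmax(dots))  # largest cosine = best-matching view
```

The selected (or blended) view is what gets composited into the recorded frame at the proxy's location.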
Abstract:
An embodiment method for computationally adjusting images from a multi-camera system includes receiving calibrated image sequences, with each of the calibrated image sequences corresponding to a camera in a camera array and having one or more image frames. A target camera model is computed for each camera in the camera array according to target camera poses or target camera intrinsic matrices for the respective camera. The computation generates a transformation matrix for each camera. The transformation matrix for each camera is applied to the calibrated image sequence corresponding to that camera, warping each image frame of the calibrated image sequence to generate target image sequences.
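The per-camera transformation and warping steps above can be sketched as follows. The sketch assumes a rotation-only adjustment between the calibrated and target camera models, so the transformation is the homography H = K_tgt · R_tgt · R_srcᵀ · K_src⁻¹; the function names, the nearest-neighbour sampling, and the rotation-only assumption are all illustrative simplifications, not the embodiment's actual model.

```python
import numpy as np

def target_transform(K_src, R_src, K_tgt, R_tgt):
    """3x3 transformation matrix mapping source pixels to the target
    camera model (assumes a pure rotation between the two models)."""
    return K_tgt @ R_tgt @ R_src.T @ np.linalg.inv(K_src)

def warp_frame(frame, H):
    """Warp one image frame by homography H, using inverse mapping so
    every target pixel is looked up in the source (nearest-neighbour)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    tgt = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ tgt                  # target -> source coords
    src = (src[:2] / src[2]).round().astype(int)  # dehomogenise, snap
    out = np.zeros_like(frame)
    ok = (0 <= src[0]) & (src[0] < w) & (0 <= src[1]) & (src[1] < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = frame[src[1][ok], src[0][ok]]
    return out
```

Applying `warp_frame` with the same `H` to every frame of a camera's calibrated sequence yields that camera's target image sequence.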
Abstract:
Embodiments are provided for achieving multi-view video foreground-background segmentation with spatial-temporal graph cuts. A multi-view segmentation algorithm constructs a four-dimensional (4D) graph cut by adding links across neighboring views over space and across consecutive frames over time. The segmentation uses both the color values of each input image and the image difference between the input image and the background image to obtain an initial graph cut, before adding the temporal and spatial links. By using the background subtraction results as the initial segmentation seed, no user annotation is needed to perform multi-view segmentation.
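Two pieces of the pipeline above lend themselves to a small sketch: the background-subtraction seed, and the extra spatial/temporal links that make the graph 4D. The threshold, the identity pixel correspondences between views and frames, and both function names are assumptions for illustration; a real system would use calibrated cross-view correspondences and feed these seeds and links into a max-flow solver.

```python
import numpy as np

def initial_seeds(frame, background, tau=25):
    """Background-subtraction seed for the graph cut: pixels whose
    colour differs from the background image by more than an assumed
    threshold tau become foreground seeds (True)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    if diff.ndim == 3:               # colour image: max over channels
        diff = diff.max(axis=2)
    return diff > tau

def spatio_temporal_links(num_views, num_frames, h, w):
    """Enumerate the extra edges the 4D cut adds: each pixel is linked
    to the same pixel in the next frame (time) and in the neighbouring
    view (space). Identity correspondences are a simplification."""
    links = []
    for v in range(num_views):
        for t in range(num_frames):
            for y in range(h):
                for x in range(w):
                    if t + 1 < num_frames:   # temporal link
                        links.append(((v, t, y, x), (v, t + 1, y, x)))
                    if v + 1 < num_views:    # spatial (cross-view) link
                        links.append(((v, t, y, x), (v + 1, t, y, x)))
    return links
```

Because the seeds come from background subtraction rather than user scribbles, the cut can be initialised fully automatically, as the abstract notes.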
Abstract:
Methods and devices permit a user to insert multiple virtual objects into a real world video scene. Some inserted objects may be statically tied to the scene, while other objects are designated as moving with certain moving objects in the scene. Markers are not used to insert the virtual objects. Users of separate mobile devices can share their inserted virtual objects to create a multi-user, multi-object augmented reality (AR) experience.
Abstract:
An apparatus is configured to perform a method of parallax tolerant video stitching. The method includes determining a plurality of video sequences to be stitched together; performing a spatial-temporal localized warping computation process on the video sequences to determine a plurality of target warping maps; warping a plurality of frames among the video sequences into a plurality of target virtual frames using the target warping maps; performing a spatial-temporal content-based seam finding process on the target virtual frames to determine a plurality of target seam maps; and stitching the video sequences together using the target seam maps.
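Of the stages listed above, the content-based seam finding step can be illustrated compactly. The sketch below finds a minimum-cost vertical seam through a per-pixel cost map (for instance, the colour difference in the overlap of two warped frames) via dynamic programming; this is a simplified single-frame stand-in for the spatial-temporal seam maps the method actually computes, and the function name is hypothetical.

```python
import numpy as np

def seam_map(cost):
    """Minimum-cost vertical seam through a cost map, by dynamic
    programming: accumulate costs top-down, then backtrack the
    cheapest 8-connected path. Returns seam[y] = cut column in row y."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, h):                       # accumulate downward
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = [int(np.argmin(acc[-1]))]            # cheapest end of a path
    for y in range(h - 2, -1, -1):              # backtrack upward
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]
```

In the full method, the cost would also penalise temporal inconsistency so the seam stays stable across frames, which is what makes the seam finding spatial-temporal rather than per-frame.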
Abstract:
An apparatus is configured to perform a method for generalized view morphing. The method includes determining a camera plane based on a predetermined view point of a virtual camera associated with a desired virtual image, the camera plane comprising at least three real cameras; pre-warping at least three image planes such that all of the image planes are parallel to the camera plane, each image plane associated with one of the real cameras positioned in the camera plane; determining a virtual image plane by performing a linear interpolation morphing on the at least three image planes; and post-warping the virtual image plane to a predetermined pose.
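The linear interpolation morphing step can be sketched for the three-camera case: once the image planes are pre-warped to be parallel to the camera plane, the virtual image is a blend of the three images weighted by the barycentric coordinates of the virtual viewpoint with respect to the three camera centers. The sketch assumes pixel correspondences are already aligned by the pre-warp and uses 2D coordinates in the camera plane; the function name is illustrative.

```python
import numpy as np

def morph_virtual_view(images, camera_centers, virtual_center):
    """Linear-interpolation morph of three pre-warped (parallel) image
    planes, weighted by the barycentric coordinates of the virtual
    viewpoint in the triangle of real camera centers."""
    c0, c1, c2 = (np.asarray(c, float) for c in camera_centers)
    # Solve virtual = w0*c0 + w1*c1 + w2*c2 with w0 + w1 + w2 = 1.
    A = np.stack([c1 - c0, c2 - c0], axis=1)   # 2x2: cameras lie in a plane
    w12 = np.linalg.solve(A, np.asarray(virtual_center, float) - c0)
    w = np.array([1 - w12.sum(), w12[0], w12[1]])
    return sum(wi * img.astype(float) for wi, img in zip(w, images))
```

Placing the virtual viewpoint at one of the real camera centers reproduces that camera's image exactly, which is the expected boundary behaviour of view morphing; the post-warp then rotates this interpolated plane to the predetermined virtual pose.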