Abstract:
Instructions indicative of changing a view of a virtual object may be received by a device. At least a portion of the virtual object may be viewable from a viewpoint that is at a given distance from a surface of the virtual object. The device may cause a change of the view along a rotational path around the virtual object in response to the receipt of the instructions based on the given distance being greater than a threshold distance. The device may cause a change of the view along a translational path indicative of a shape of the surface of the virtual object in response to the receipt of the instructions based on the given distance being less than the threshold distance.
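The distance-gated navigation described above can be sketched in a few lines. The following 2D toy (all function and parameter names are hypothetical, not taken from the patent, and the "surface" is simplified to a circle around the object's center) orbits the viewpoint along a rotational path when it is farther from the object than the threshold, and translates it along a path following the surface shape when it is closer:

```python
import math

def update_view(viewpoint, center, threshold, angle_step, pan_step):
    """Distance-dependent view control (2D sketch; names are illustrative).

    Farther than `threshold` from the object: move along a rotational
    (orbital) path around it. Closer: move along a translational path
    that follows the surface contour, preserving the current distance.
    """
    dx, dy = viewpoint[0] - center[0], viewpoint[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist > threshold:
        # Rotational path: step the viewpoint's angle around the center.
        a = math.atan2(dy, dx) + angle_step
        return (center[0] + dist * math.cos(a), center[1] + dist * math.sin(a))
    # Translational path: slide along the local tangent, then re-project
    # so the viewpoint keeps its distance from the (circular) surface.
    tx, ty = -dy / dist, dx / dist  # unit tangent to the surface
    nx, ny = viewpoint[0] + pan_step * tx, viewpoint[1] + pan_step * ty
    ndx, ndy = nx - center[0], ny - center[1]
    scale = dist / math.hypot(ndx, ndy)
    return (center[0] + ndx * scale, center[1] + ndy * scale)
```

With a threshold of 5, a viewpoint at distance 10 orbits (its distance from the center is unchanged), while a viewpoint at distance 2 pans along the surface and likewise keeps its distance, which is the behavior the abstract describes.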
Abstract:
Methods and systems are provided for controlling a three-dimensional (3D) model on a head-mountable display (HMD). The HMD can receive a 3D model of an object, where the 3D model includes three-dimensional shape and texture information about the object, the three-dimensional shape and texture information specified with respect to at least a first axis, a second axis, and a third axis, where each of the first axis, the second axis, and the third axis differs from the others. The HMD can display a view of the 3D model. The HMD can receive an input gesture. The HMD can determine whether the input gesture includes a 3D model gesture. After determining that the input gesture does include a 3D model gesture, the HMD can update the view of the 3D model based on the input gesture and can display the updated view of the 3D model.
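The gesture-dispatch flow in this abstract (receive an input gesture, test whether it is a 3D-model gesture, and update the view only if it is) might look like the following sketch. The gesture encoding, the recognized gesture kinds, and all function names here are illustrative assumptions, not part of the disclosed method:

```python
# Hypothetical gesture-handling sketch for an HMD viewing a 3D model.
ROTATE, ZOOM, PAN = "rotate", "zoom", "pan"
MODEL_GESTURES = {ROTATE, ZOOM, PAN}

def is_3d_model_gesture(gesture):
    """Decide whether an input gesture targets the 3D model."""
    return gesture.get("kind") in MODEL_GESTURES

def update_view(view, gesture):
    """Return an updated view (yaw/pitch/scale/offset) for a model gesture."""
    new = dict(view)
    kind = gesture["kind"]
    if kind == ROTATE:
        new["yaw"] += gesture.get("dx", 0.0)
        new["pitch"] += gesture.get("dy", 0.0)
    elif kind == ZOOM:
        new["scale"] *= gesture.get("factor", 1.0)
    elif kind == PAN:
        new["x"] = new.get("x", 0.0) + gesture.get("dx", 0.0)
        new["y"] = new.get("y", 0.0) + gesture.get("dy", 0.0)
    return new

def handle_input(view, gesture):
    """Update the view only when the gesture is a 3D-model gesture;
    other gestures leave the displayed view unchanged."""
    if is_3d_model_gesture(gesture):
        view = update_view(view, gesture)
    return view
```

A non-model gesture (say, a system menu swipe) passes through `handle_input` without touching the view, mirroring the "determine whether the input gesture includes a 3D model gesture" step.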
Abstract:
Methods and an apparatus for centering swivel views are disclosed. An example method involves a computing device identifying movement of a pixel location of a 3D object within a sequence of images. Each image of the sequence of images may correspond to a view of the 3D object from a different angular orientation. Based on the identified movement of the pixel location of the 3D object, the computing device may estimate movement parameters of at least one function that describes a location of the 3D object in an individual image. The computing device may also determine, for one or more images of the sequence of images, a respective modification to the image using the estimated parameters of the at least one function. The computing device may then adjust the pixel location of the 3D object within the one or more images based on the respective modification for the image.
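One concrete way to realize this kind of centering, assuming (hypothetically, beyond what the abstract states) that the views evenly sample a full turn and that an off-axis object traces a sinusoidal pixel path across the sequence, is to fit a function x_i ≈ c + a·cos θ_i + b·sin θ_i to the tracked pixel locations and then shift each image so the modeled location collapses to the constant term c:

```python
import math

def estimate_offsets(xs):
    """Fit x_i ≈ c + a*cos(theta_i) + b*sin(theta_i) by least squares.

    For angles that evenly sample a full circle, the basis functions are
    orthogonal, so the normal equations reduce to the closed forms below.
    """
    n = len(xs)
    thetas = [2 * math.pi * i / n for i in range(n)]
    c = sum(xs) / n
    a = 2.0 / n * sum(x * math.cos(t) for x, t in zip(xs, thetas))
    b = 2.0 / n * sum(x * math.sin(t) for x, t in zip(xs, thetas))
    return c, a, b

def centering_shifts(xs):
    """Per-image horizontal shift (the 'respective modification') that
    moves the modeled object location in each image to the constant c."""
    n = len(xs)
    c, a, b = estimate_offsets(xs)
    thetas = [2 * math.pi * i / n for i in range(n)]
    return [-(a * math.cos(t) + b * math.sin(t)) for t in thetas]
```

Applying each shift to its image cancels the sinusoidal drift, leaving the object at a fixed pixel location across the swivel sequence; the same fit could be run independently for the vertical coordinate.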