Abstract:
A method and apparatus of a device that generates a rearranged audio track is described. In an exemplary embodiment, the device receives an input audio track having a first duration, the input audio track having a plurality of points. The device further generates a transition matrix of the input audio track, wherein the transition matrix indicates a similarity metric between different pairs of the plurality of points. In addition, the device determines a set of jump points using the different pairs of the plurality of points. The device additionally generates the rearranged audio track using the set of jump points, wherein the rearranged audio track has a second duration and the second duration is different from the first duration.
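The mechanism lends itself to a short sketch. The following Python fragment (NumPy only) builds a toy transition matrix from per-point feature vectors, collects jump points whose pairwise distance clears a threshold, and then walks the track, taking occasional jumps, until the output reaches a different duration than the input. The feature representation and the `jump_threshold`, `min_gap`, and jump-probability values are illustrative assumptions, not details from the abstract.

```python
# A minimal sketch of the retiming idea; thresholds and the 20% jump
# probability are illustrative assumptions, not from the abstract.
import numpy as np

def transition_matrix(features):
    """Pairwise distance between analysis points (lower = more similar).
    `features` is an (n_points, n_dims) array of per-point audio features."""
    diff = features[:, None, :] - features[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def find_jump_points(dist, jump_threshold, min_gap=4):
    """Pairs (i, j) where playback can jump from point i to point j:
    points that sound similar but are separated in time."""
    n = dist.shape[0]
    return [(i, j) for i in range(n) for j in range(n)
            if abs(i - j) >= min_gap and dist[i, j] < jump_threshold]

def rearrange(n_points, jumps, target_len, rng=None):
    """Walk the track from point 0, taking an available jump 20% of the
    time, until the output reaches the requested (different) duration."""
    rng = rng if rng is not None else np.random.default_rng(0)
    by_src = {}
    for i, j in jumps:
        by_src.setdefault(i, []).append(j)
    path, pos = [0], 0
    while len(path) < target_len:
        choices = by_src.get(pos, [])
        if choices and rng.random() < 0.2:
            pos = int(rng.choice(choices))   # seamless jump
        else:
            pos = (pos + 1) % n_points       # normal playback (wraps)
        path.append(pos)
    return path

# Toy usage: 64 random 12-dim feature frames, retimed to 100 output points.
feats = np.random.default_rng(1).normal(size=(64, 12))
jumps = find_jump_points(transition_matrix(feats), jump_threshold=3.5)
path = rearrange(64, jumps, target_len=100)
```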
Abstract:
In various implementations, a method includes obtaining a plurality of source images, stabilizing the plurality of source images to generate a plurality of stabilized images, and averaging the plurality of stabilized images to generate a synthetic long exposure image. In various implementations, stabilizing the plurality of source images includes: selecting one of the plurality of source images to serve as a reference frame; and registering others of the plurality of source images to the reference frame by applying a perspective transformation to the others of the plurality of source images.
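As a concrete illustration, the sketch below registers each source image to a chosen reference frame with a perspective transformation and then averages the stabilized stack into a single synthetic long exposure image. The use of OpenCV's ORB features and RANSAC homography estimation for registration is an assumption on my part; the abstract does not specify how the transformation is estimated.

```python
# A minimal sketch of stabilize-then-average; ORB + RANSAC homography
# registration is an assumed stand-in for the actual method.
import cv2
import numpy as np

def register_to_reference(ref, img):
    """Estimate a perspective transform mapping `img` onto `ref` and
    warp `img` accordingly. Frames are 8-bit BGR arrays."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))

def synthetic_long_exposure(frames, ref_index=0):
    """Register every source frame to the reference frame, then average
    the stabilized stack into one long-exposure-style image."""
    ref = frames[ref_index]
    stabilized = [ref] + [register_to_reference(ref, f)
                          for i, f in enumerate(frames) if i != ref_index]
    return np.mean(np.stack(stabilized), axis=0).astype(np.uint8)
```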
Abstract:
Techniques to permit a digital image capture device to stabilize a video stream in real-time (during video capture operations) are presented. In general, techniques are disclosed for stabilizing video images using an overscan region and a look-ahead technique enabled by buffering a number of video input frames before generating a first stabilized video output frame. (Capturing a larger image than is displayed creates a buffer of pixels around the edge of an image; overscan is the term given to this buffer of pixels.) More particularly, techniques are disclosed for buffering an initial number of input frames so that a “current” frame can use motion data from both “past” and “future” frames to adjust the strength of a stabilization metric value so as to keep the current frame within its overscan. This look-ahead and look-behind capability permits a smoother stabilizing regime with fewer abrupt adjustments.
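One way to picture the buffering mechanism is the sketch below: per-frame motion estimates are queued until the "current" frame has both past and future neighbors, an ideal correction toward the local mean camera path is computed, and its strength is scaled back whenever it would push the displayed crop outside the overscan margin. The buffer length, overscan width, and mean-path smoothing are illustrative choices, not values from the disclosure.

```python
# A minimal sketch of look-ahead stabilization; buffer size, overscan
# width, and the mean-path smoothing are illustrative assumptions.
from collections import deque
import numpy as np

OVERSCAN = 40        # pixels of extra capture around the displayed image
LOOKAHEAD = 15       # future frames buffered before output starts

def stabilize_stream(motions, window=LOOKAHEAD):
    """motions: per-frame (dx, dy) camera translation estimates.
    Yields (frame_index, correction) once enough future frames
    have been buffered for the current frame."""
    buf = deque()
    for i, m in enumerate(np.asarray(motions, dtype=float)):
        buf.append(m)
        if len(buf) < 2 * window + 1:
            continue                      # still filling the look-ahead buffer
        past_and_future = np.stack(buf)
        current = buf[window]
        # Ideal correction: move the current frame toward the local mean path.
        correction = past_and_future.mean(axis=0) - current
        # Reduce stabilization strength so the crop stays within overscan.
        scale = min(1.0, OVERSCAN / (np.abs(correction).max() + 1e-9))
        yield i - window, correction * scale
        buf.popleft()
```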
Abstract:
A user interface enables a user to calibrate the position of a three-dimensional model with a real-world environment represented by that model. Using a device's sensor, the device's location and orientation are determined. A video image of the device's environment is displayed on the device's display. The device overlays a representation of an object from a virtual reality model on the video image. The position of the overlaid representation is determined based on the device's location and orientation. In response to user input, the device adjusts a position of the overlaid representation relative to the video image.
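A bare-bones version of this interaction, assuming a simple pinhole camera model, might look like the sketch below: the overlay position is computed by projecting a model point through the sensed device pose, and user input nudges a 2-D offset until the overlay visually aligns with the video. The projection model, focal length, and `nudge` interface are all illustrative assumptions.

```python
# A minimal sketch of the calibration loop; pinhole projection and the
# nudge-offset interface are illustrative assumptions.
import numpy as np

FOCAL = 1000.0                     # assumed focal length in pixels
CENTER = np.array([960.0, 540.0])  # image center for a 1920x1080 view

def project(point_world, device_pos, device_rot):
    """Project a 3-D model point into the live video image using the
    device's sensed position and orientation (camera-to-world rotation)."""
    p_cam = device_rot.T @ (point_world - device_pos)
    return CENTER + FOCAL * p_cam[:2] / p_cam[2]

class OverlayCalibrator:
    """Keeps a user-adjustable 2-D offset between the rendered model
    and the video image until the two visually align."""
    def __init__(self):
        self.offset = np.zeros(2)

    def nudge(self, dx, dy):       # called in response to user input
        self.offset += (dx, dy)

    def overlay_position(self, point_world, device_pos, device_rot):
        return project(point_world, device_pos, device_rot) + self.offset
```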
Abstract:
Techniques and devices for creating a Forward-Reverse Loop output video and other output video variations are described. A pipeline may include obtaining input video and determining a start frame within the input video and a frame length parameter based on a temporal discontinuity minimization. The selected start frame and the frame length parameter may provide a reversal point within the Forward-Reverse Loop output video. The Forward-Reverse Loop output video may include a forward segment that begins at the start frame and ends at the reversal point and a reverse segment that starts after the reversal point and plays back one or more frames in the forward segment in reverse order. The pipeline for generating the Forward-Reverse Loop output video may be part of a shared resource architecture that generates other types of output video variations, such as AutoLoop output videos and Long Exposure output videos.
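A rough sketch of the selection and playback ordering follows. It scores candidate (start frame, length) pairs by the frame-to-frame change at the two points where playback direction flips, which is one plausible reading of "temporal discontinuity minimization" rather than the actual cost function, and then emits the forward-then-reverse frame order.

```python
# A minimal sketch of Forward-Reverse Loop construction; the cost
# function below is an assumed proxy for the patented criterion.
import numpy as np

def frame_diffs(frames):
    """Mean absolute difference between each pair of consecutive frames."""
    f = np.asarray(frames, dtype=float)
    return np.abs(np.diff(f, axis=0)).mean(axis=tuple(range(1, f.ndim)))

def pick_segment(frames, min_len=8, max_len=60):
    """Choose (start, length) so that motion is smallest where the
    playback direction reverses."""
    diffs = frame_diffs(frames)
    best, best_cost = None, np.inf
    for start in range(len(frames) - min_len):
        for length in range(min_len, min(max_len, len(frames) - start)):
            cost = diffs[start] + diffs[start + length - 1]
            if cost < best_cost:
                best, best_cost = (start, length), cost
    return best

def forward_reverse_indices(start, length):
    """Playback order: forward through the segment, then back, skipping
    the endpoints on the return trip so they are not shown twice."""
    fwd = list(range(start, start + length))
    return fwd + fwd[-2:0:-1]
```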
Abstract:
Techniques and devices for creating an AutoLoop output video include performing postgate operations. The AutoLoop output video is created from a set of frames. After generating the AutoLoop output video based on a plurality of loop parameters and at least a portion of the frames, postgate operations determine one or more dynamism metrics based on a variability metric and a dynamic range metric for a plurality of pixels within the video loop. Postgate operations compare the dynamism metrics to one or more postgate threshold values and reject the video loop based on the comparison of the dynamism metrics to the postgate threshold values.
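Read literally, the postgate check can be sketched as follows: compute a per-pixel variability metric and dynamic range metric over the loop's frames, aggregate them into dynamism metrics, and reject the loop when they fall below thresholds. The mean aggregation and the specific threshold values are my assumptions.

```python
# A minimal sketch of a postgate dynamism check; aggregation by mean
# and the threshold values are illustrative assumptions.
import numpy as np

def postgate(loop_frames, var_threshold=2.0, range_threshold=10.0):
    """Return True to keep the generated loop, False to reject it as
    too static. `loop_frames` is a (time, height, width[, channels])
    array of the frames in the candidate video loop."""
    f = np.asarray(loop_frames, dtype=float)
    variability = f.std(axis=0)              # per-pixel variation over time
    dyn_range = f.max(axis=0) - f.min(axis=0) # per-pixel dynamic range
    # Dynamism metrics: how much of the frame moves, and by how much.
    mean_var = variability.mean()
    mean_range = dyn_range.mean()
    return mean_var >= var_threshold and mean_range >= range_threshold
```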