Abstract:
Methods and systems for processing a video for stabilization are described. A recorded video may be stabilized by removing at least a portion of the shake introduced in the video. An original camera path for the camera used to record the video may be determined, and a modified (e.g., smoothed) camera path may be estimated from it. A crop window size may be selected, a crop window transform may accordingly be determined, and the crop window transform may be applied to the original video to provide a modified video from a viewpoint of the modified camera path.
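A minimal sketch of this pipeline (not the patented method itself) using OpenCV: inter-frame motion is estimated from tracked features, the accumulated original camera path is low-pass filtered to obtain a modified path, and each frame is warped and cropped toward that modified path. The crop ratio, smoothing radius, and function names below are assumptions for illustration.

    import cv2
    import numpy as np

    def stabilize(path_in, path_out, crop_ratio=0.9, smooth_radius=15):
        cap = cv2.VideoCapture(path_in)
        ok, prev = cap.read()
        h, w = prev.shape[:2]
        writer = cv2.VideoWriter(path_out, cv2.VideoWriter_fourcc(*"mp4v"),
                                 cap.get(cv2.CAP_PROP_FPS), (w, h))
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        transforms, frames = [], [prev]          # per-frame (dx, dy, d_angle)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 30)
            nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            m, _ = cv2.estimateAffinePartial2D(pts[st == 1], nxt[st == 1])
            if m is None:
                m = np.float32([[1, 0, 0], [0, 1, 0]])
            transforms.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])])
            frames.append(frame)
            prev_gray = gray
        transforms = np.array(transforms)
        path = np.cumsum(transforms, axis=0)     # original camera path
        kernel = np.ones(2 * smooth_radius + 1) / (2 * smooth_radius + 1)
        smoothed = np.vstack([np.convolve(path[:, i], kernel, mode="same")
                              for i in range(3)]).T   # modified (smoothed) path
        for frame, (dx, dy, da) in zip(frames[1:], transforms + (smoothed - path)):
            m = np.array([[np.cos(da), -np.sin(da), dx],
                          [np.sin(da),  np.cos(da), dy]])
            out = cv2.warpAffine(frame, m, (w, h))
            cw, ch = int(w * crop_ratio), int(h * crop_ratio)   # crop window size
            x0, y0 = (w - cw) // 2, (h - ch) // 2
            writer.write(cv2.resize(out[y0:y0 + ch, x0:x0 + cw], (w, h)))
        writer.release()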
Abstract:
Methods and systems for video retargeting and view selection using motion saliency are described. Salient features may be extracted from multiple videos. Each video may be retargeted by estimating a crop path and applying it to the video, generating a modified video that preserves the salient features. An action score may be assigned to portions or frames of each modified video to represent its motion content. Selecting a view from among the modified videos may then be formulated as a constrained optimization whose objective function includes maximizing the action score, with constraints that account for, for example, optimal transitioning from a view in one video to a view in another.
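The view-selection step can be sketched as a small dynamic program, assuming per-frame action scores have already been computed for each retargeted video; the fixed switching penalty standing in for the transition constraints is an assumption.

    import numpy as np

    def select_views(action_scores, switch_penalty=0.5):
        """action_scores: array of shape (num_videos, num_frames).
        Returns, per frame, the index of the video whose view is shown."""
        v, t = action_scores.shape
        best = np.zeros((v, t))                 # best cumulative score ending in view i
        back = np.zeros((v, t), dtype=int)
        best[:, 0] = action_scores[:, 0]
        for f in range(1, t):
            for i in range(v):
                # penalise switching away from the previous frame's view
                costs = best[:, f - 1] - switch_penalty * (np.arange(v) != i)
                back[i, f] = int(np.argmax(costs))
                best[i, f] = action_scores[i, f] + costs[back[i, f]]
        views = [int(np.argmax(best[:, -1]))]   # backtrack the optimal sequence
        for f in range(t - 1, 0, -1):
            views.append(int(back[views[-1], f]))
        return views[::-1]

    # Example: two cameras, five frames; camera 1 has the action mid-sequence.
    scores = np.array([[1.0, 1.0, 0.2, 0.2, 1.0],
                       [0.1, 0.3, 1.0, 1.0, 0.4]])
    print(select_views(scores))                 # -> [0, 0, 1, 1, 0]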
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for video segmentation. One of the methods includes receiving a digital video; performing hierarchical graph-based video segmentation on at least one frame of the digital video to generate a boundary representation for the at least one frame; generating a vector representation from the boundary representation for the at least one frame, wherein generating the vector representation includes generating, from a boundary in the boundary representation, a polygon composed of at least three vectors, each vector comprising two vertices connected by a line segment; linking the vector representation to the at least one frame of the digital video; and storing the vector representation with the at least one frame of the digital video.
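The boundary-to-vector step might look like the following sketch, which assumes the hierarchical segmentation has already produced a per-frame label map: each region boundary is traced as a contour and simplified into a polygon whose edges are the stored vectors. Helper names and the simplification tolerance are illustrative, not taken from the abstract.

    import cv2
    import numpy as np

    def vectorize_boundaries(label_map, epsilon=2.0):
        """label_map: 2-D integer array of region labels for one frame.
        Returns {label: list of polygons}, each polygon being a list of
        ((x0, y0), (x1, y1)) vectors: two vertices joined by a line segment."""
        polygons = {}
        for label in np.unique(label_map):
            mask = (label_map == label).astype(np.uint8)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for contour in contours:
                # simplify the boundary into a polygon with few vertices
                pts = cv2.approxPolyDP(contour, epsilon, True).reshape(-1, 2)
                if len(pts) < 3:                 # a polygon needs >= 3 vectors
                    continue
                vectors = [(tuple(pts[i]), tuple(pts[(i + 1) % len(pts)]))
                           for i in range(len(pts))]
                polygons.setdefault(int(label), []).append(vectors)
        return polygons

    # The result can be linked to its frame, e.g. metadata[frame_index] = polygons,
    # and stored alongside the video.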
Abstract:
An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, so the stabilization works on all forms of video footage, including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings from the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system can detect and correct high frequency jitter artifacts, low frequency shake artifacts, and rolling shutter artifacts, and can handle significant foreground motion, poor lighting, scene cuts, and both long and short videos.
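One plausible reading of a cascaded motion model, sketched below: fit progressively richer motion models between consecutive frames (translation, then similarity, then homography) and keep the richest model that is still well supported by the matched features, falling back to a simpler one otherwise. The inlier threshold and helper name are assumptions, not details from the abstract.

    import cv2
    import numpy as np

    def choose_correction(prev_pts, curr_pts, inlier_thresh=0.6):
        """prev_pts, curr_pts: (N, 2) float32 arrays of matched feature locations
        in consecutive frames. Returns (model_name, 3x3 matrix) for the richest
        motion model that is adequately supported by the matches."""
        n = len(prev_pts)
        # 1. translation: median displacement, always available as a fallback
        tx, ty = np.median(curr_pts - prev_pts, axis=0)
        translation = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)
        # 2. similarity (rotation + uniform scale + translation)
        sim, sim_inl = cv2.estimateAffinePartial2D(prev_pts, curr_pts)
        # 3. full homography
        hom, hom_inl = cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, 3.0)
        if hom is not None and hom_inl.sum() / n >= inlier_thresh:
            return "homography", hom
        if sim is not None and sim_inl.sum() / n >= inlier_thresh:
            return "similarity", np.vstack([sim, [0, 0, 1]])
        return "translation", translation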
Abstract:
A computer-implemented method, computer program product, and computing system are provided for interacting with images having similar content. In an embodiment, a method may include identifying a plurality of photographs as including a common characteristic. The method may also include generating a flipbook media item including the plurality of photographs. The method may further include associating one or more interactive control features with the flipbook media item.
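As a rough illustration only, a set of photographs already identified as sharing a common characteristic could be assembled into an animated GIF to act as the flipbook media item; play, pause, and scrub controls would then be attached by the hosting application. The helper below is hypothetical and uses Pillow.

    from PIL import Image

    def make_flipbook(photo_paths, out_path="flipbook.gif", frame_ms=150):
        """Assemble photographs already identified as sharing a common
        characteristic into a looping animated GIF flipbook."""
        frames = [Image.open(p).convert("RGB").resize((640, 480))
                  for p in photo_paths]
        frames[0].save(out_path, save_all=True, append_images=frames[1:],
                       duration=frame_ms, loop=0)
        return out_path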
Abstract:
Methods and systems for rolling shutter removal are described. A computing device may be configured to determine, in a frame of a video, distinguishable features. The frame may include sets of pixels captured asynchronously. The computing device may be configured to determine, for a pixel representing a feature in the frame, a corresponding pixel representing the feature in a consecutive frame, and to determine, for a set of pixels including the pixel in the frame, a projective transform that may represent motion of the camera. The computing device may be configured to determine, for the set of pixels in the frame, a mixture transform based on a combination of the projective transform and respective projective transforms determined for other sets of pixels. Accordingly, the computing device may be configured to estimate a motion path of the camera to account for distortion associated with the asynchronous capturing of the sets of pixels.
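A simplified sketch of the mixture-transform idea, assuming each frame is split into horizontal blocks of rows captured at different times: a homography (projective transform) is estimated per block from matched features, and each block's effective transform is a weighted blend of its own homography with its neighbours'. The Gaussian weighting and direct matrix averaging are assumptions, not the formulation described above.

    import cv2
    import numpy as np

    def mixture_transforms(prev_pts, curr_pts, frame_height, num_blocks=10, sigma=1.5):
        """prev_pts, curr_pts: (N, 2) arrays of matched feature locations in two
        consecutive frames. Returns one blended 3x3 transform per block of rows."""
        bounds = np.linspace(0, frame_height, num_blocks + 1)
        homs = []
        for b in range(num_blocks):
            sel = (prev_pts[:, 1] >= bounds[b]) & (prev_pts[:, 1] < bounds[b + 1])
            h = None
            if sel.sum() >= 4:                  # findHomography needs 4+ matches
                h, _ = cv2.findHomography(prev_pts[sel], curr_pts[sel],
                                          cv2.RANSAC, 3.0)
            homs.append(h if h is not None else np.eye(3))
        homs = np.array(homs)
        mixed = []
        for b in range(num_blocks):
            # blend each block's homography with its neighbours' (Gaussian weights)
            w = np.exp(-0.5 * ((np.arange(num_blocks) - b) / sigma) ** 2)
            w /= w.sum()
            mixed.append((w[:, None, None] * homs).sum(axis=0))
        return mixed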
Abstract:
Implementations generally relate to generating compositional media content. In some implementations, a method includes receiving a plurality of photos from a user and determining one or more composition types from the photos. The method also includes generating one or more compositions from the photos based on the one or more determined composition types. The method also includes providing the one or more generated compositions to the user.
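A hypothetical sketch of the first two steps, using only capture timestamps as the shared signal: photos taken in a rapid burst suggest an animation composition, smaller clusters suggest a collage. Real composition-type detectors would also inspect image content; the thresholds and type names below are illustrative.

    from datetime import timedelta

    def determine_composition_types(photos, burst_gap=timedelta(seconds=2)):
        """photos: list of dicts with 'path' and 'timestamp' (datetime).
        Returns (composition_type, photo_group) pairs from a burst heuristic."""
        photos = sorted(photos, key=lambda p: p["timestamp"])
        groups, current = [], [photos[0]]
        for prev, curr in zip(photos, photos[1:]):
            if curr["timestamp"] - prev["timestamp"] <= burst_gap:
                current.append(curr)
            else:
                groups.append(current)
                current = [curr]
        groups.append(current)
        compositions = []
        for group in groups:
            if len(group) >= 5:
                compositions.append(("animation", group))   # rapid burst of frames
            elif len(group) >= 2:
                compositions.append(("collage", group))
        return compositions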
Abstract:
Systems and methods are disclosed for tracking regions within a media item. A method includes identifying a region in a first frame of a media item using a first user-specified position; calculating, based on the first user-specified position and on tracking data, an estimated position of the region within a second frame of the media item and an estimated position of the region within a third frame of the media item; adjusting the estimated position of the region within the second frame to a second user-specified position; blending, by a processing device, the estimated position within the third frame based on the second user-specified position of the second frame to generate a blended position within the third frame; and storing, in a data store, the blended position within the third frame.
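A minimal sketch of the blending step, assuming positions are 2-D points and that the correction observed at a user-adjusted frame is propagated to other frames with a weight that decays with frame distance; the decay scheme and helper names are assumptions.

    import numpy as np

    def blend_positions(estimated, user_fixes, decay=0.5):
        """estimated: {frame_index: (x, y)} positions from the tracker.
        user_fixes: {frame_index: (x, y)} user-specified positions.
        Returns blended positions: each tracker estimate plus a correction
        whose weight decays with distance from the user-adjusted frames."""
        blended = {}
        for f, pos in estimated.items():
            if f in user_fixes:
                blended[f] = tuple(user_fixes[f])
                continue
            correction, weight_sum = np.zeros(2), 0.0
            for uf, upos in user_fixes.items():
                w = decay ** abs(f - uf)         # closer fixes weigh more
                correction += w * (np.asarray(upos, float)
                                   - np.asarray(estimated[uf], float))
                weight_sum += w
            if weight_sum > 0:
                correction /= weight_sum
            blended[f] = tuple(np.asarray(pos, float) + correction)
        return blended

    # Frame 1 is the identified region, frame 2 was adjusted by the user,
    # frame 3 keeps its estimate plus a blended share of the correction.
    est = {1: (100, 100), 2: (110, 102), 3: (120, 104)}
    fixes = {1: (100, 100), 2: (115, 100)}
    print(blend_positions(est, fixes))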
Abstract:
Systems and methods are disclosed for tracking and distorting regions within a media item. A method includes identifying a region in a first frame of a media item using a first user-specified position; calculating, based on tracking data, an estimated position of the region within a second frame of the media item and an estimated position of the region within a third frame of the media item; adjusting, based on user input, the estimated position of the region within the second frame to a second user-specified position; blending the estimated position within the third frame based on the second user-specified position of the second frame to generate a blended position within the third frame; and modifying the third frame to distort the region underlying the blended position.
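The final, distorting step could be sketched as below, given a blended position such as the one produced in the tracking sketch above: a fixed-size box around the blended position in the third frame is pixelated. The box size and the choice of pixelation as the distortion are assumptions; a blur or local warp could be substituted.

    import cv2
    import numpy as np

    def distort_region(frame, center, box=60, block=8):
        """Pixelate a box-shaped region of `frame` centred on the blended
        position `center` = (x, y); returns the modified frame."""
        h, w = frame.shape[:2]
        x, y = int(center[0]), int(center[1])
        x0, x1 = max(0, x - box // 2), min(w, x + box // 2)
        y0, y1 = max(0, y - box // 2), min(h, y + box // 2)
        roi = frame[y0:y1, x0:x1]
        # downscale then upscale with nearest-neighbour to pixelate the region
        small = cv2.resize(roi, (max(1, (x1 - x0) // block),
                                 max(1, (y1 - y0) // block)))
        frame[y0:y1, x0:x1] = cv2.resize(small, (x1 - x0, y1 - y0),
                                         interpolation=cv2.INTER_NEAREST)
        return frame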