Abstract:
A target image captured from a fisheye lens or other lens with known distortion parameters may be transformed to align it to a reference image. Corresponding features may be detected in the target image and the reference image. The features may be transformed to a spherical coordinate space. In the spherical space, images may be re-pointed or rotated in three dimensions to align all or a subset of the features of the target image to the corresponding features of the reference image. For example, in a sequence of images, background features of the target image in the spherical image space may be aligned to background features of the reference image in the spherical image space to compensate for camera motion while preserving foreground motion. An inverse transformation may then be applied to bring the images back into the original image space.
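A minimal sketch of this pipeline, assuming an equidistant fisheye model (r = f·θ) and substituting OpenCV's ORB features and the Kabsch algorithm for the abstract's unspecified feature detector and rotation solver; `pixel_to_sphere`, `estimate_rotation`, and the calibration parameters `f`, `cx`, `cy` are illustrative names, not from the source:

```python
import numpy as np
import cv2

def pixel_to_sphere(pts, f, cx, cy):
    """Map fisheye pixel coordinates to unit vectors on the sphere,
    assuming an equidistant projection model (r = f * theta)."""
    x = pts[:, 0] - cx
    y = pts[:, 1] - cy
    r = np.hypot(x, y)
    theta = r / f                       # angle from the optical axis
    phi = np.arctan2(y, x)              # azimuth around the axis
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)

def estimate_rotation(src, dst):
    """Least-squares 3-D rotation aligning unit vectors src to dst
    (Kabsch algorithm via SVD)."""
    H = src.T @ dst
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def align(target, reference, f, cx, cy):
    """Estimate the rotation that re-points the target onto the reference."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(target, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Lift the matched features into the spherical coordinate space and
    # solve for the rotation aligning target features to reference features.
    return estimate_rotation(pixel_to_sphere(p1, f, cx, cy),
                             pixel_to_sphere(p2, f, cx, cy))
```

Applying the returned rotation to each pixel's spherical vector and projecting back through the same fisheye model would implement the inverse transformation into the original image space.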
Abstract:
A processing device generates composite images from a sequence of images. The composite images may be used as frames of video. A foreground/background segmentation is performed at selected frames to extract a plurality of foreground object images depicting a foreground object at different locations as it moves across a scene. The foreground object images are stored to a foreground object list. The foreground object images in the foreground object list are overlaid onto subsequent video frames that follow the respective frames from which they were extracted, thereby generating a composite video.
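A minimal sketch of this compositing loop, using OpenCV's MOG2 background subtractor as a stand-in for the abstract's unspecified foreground/background segmentation; `keyframe_interval` and the morphological cleanup are illustrative assumptions:

```python
import cv2
import numpy as np

def composite_video(frames, keyframe_interval=10):
    """Segment the foreground at selected frames, store the cutouts in a
    foreground object list, and overlay every stored cutout onto each
    subsequent frame to build the composite video."""
    subtractor = cv2.createBackgroundSubtractorMOG2()
    object_list = []                 # stored (mask, pixels) cutouts
    out = []
    for i, frame in enumerate(frames):
        mask = subtractor.apply(frame)
        composite = frame.copy()
        # Overlay every foreground object image extracted from earlier frames.
        for obj_mask, obj_pixels in object_list:
            composite[obj_mask > 0] = obj_pixels[obj_mask > 0]
        if i % keyframe_interval == 0:
            # Selected frame: keep only confident foreground (MOG2 marks
            # shadows as 127), clean it up, and store the cutout.
            fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
            fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,
                                  np.ones((5, 5), np.uint8))
            object_list.append((fg, frame.copy()))
        out.append(composite)
    return out
```

Because each cutout is extracted after the overlay step, it first appears in the frames that follow the frame it was taken from, matching the overlay order described above.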
Abstract:
Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. In one example, best scenes are identified based on motion values associated with frames or portions of frames of a video. Motion values are determined for each frame, and portions of the video that include the frames with the most motion are identified as best scenes. Best scenes may also be identified based on the motion profile of a video, which is a measure of global or local motion within frames throughout the video. For example, best scenes are identified from portions of the video that include steady global motion. A video summary can be generated including one or more of the identified best scenes.
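A minimal sketch of motion-based scene selection, assuming dense Farneback optical flow as the per-frame motion measure in place of camera metadata; the window length and overlap-suppression scheme are illustrative:

```python
import cv2
import numpy as np

def motion_values(frames):
    """Per-frame motion score: mean dense optical-flow magnitude
    between consecutive frames."""
    scores = [0.0]
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        scores.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev = gray
    return np.array(scores)

def best_scenes(frames, scene_len=30, count=3):
    """Pick non-overlapping fixed-length windows whose summed motion
    is highest; these are the candidate best scenes."""
    scores = motion_values(frames)
    window = np.convolve(scores, np.ones(scene_len), mode="valid")
    picks = []
    while len(picks) < count and window.max() > -np.inf:
        start = int(window.argmax())
        picks.append((start, start + scene_len))
        # Suppress overlapping windows before choosing the next scene.
        lo = max(0, start - scene_len + 1)
        window[lo:start + scene_len] = -np.inf
    return sorted(picks)
```

Concatenating the frame ranges returned by `best_scenes` would yield the kind of video summary the abstract describes.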
Abstract:
Methods and apparatus for the generation of interpolated frames of video data. In one embodiment, the interpolated frames of video data are generated by obtaining two or more frames of video data from a video sequence; determining frame errors for the obtained frames; determining whether the frame errors exceed a threshold value; performing a multi-pass operation; performing a single-pass operation; performing frame blending; performing edge correction; and generating the interpolated frame of image data.
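A minimal sketch of the error-gated choice between motion-compensated interpolation and plain frame blending; the Farneback flow, the half-way backward warp, and the threshold value are assumptions, and the multi-pass and edge-correction stages of the abstract are omitted:

```python
import cv2
import numpy as np

def interpolate_frame(f0, f1, error_threshold=20.0):
    """Generate a frame halfway between f0 and f1, falling back to
    blending when the frame error is too large for motion estimation
    to be trusted."""
    g0 = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    # Frame error: mean absolute intensity difference between the frames.
    error = float(np.abs(g0.astype(np.float32) -
                         g1.astype(np.float32)).mean())
    if error > error_threshold:
        # Error exceeds the threshold: blend instead of warping.
        return cv2.addWeighted(f0, 0.5, f1, 0.5, 0)
    # Otherwise warp f0 halfway along the estimated motion field
    # (an approximate backward warp using the forward flow).
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g0.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(f0, map_x, map_y, cv2.INTER_LINEAR)
```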