Abstract:
Techniques for animating a view of a composite image based on metadata related to the capture of the underlying source images. According to certain implementations, the metadata may include timing or sensor data collected or generated during capture of the component source images. For example, the timing data may indicate an order or sequence in which the source images were captured. Accordingly, the view may pan across the corresponding regions of the composite panoramic image in that sequence, for example, using the Ken Burns effect. In another example, sensor data from gyroscopes or accelerometers may be used to simulate the movement of the image capture device used to generate the source images. In another implementation, the source images may be associated with varying focal lengths or zoom levels. Accordingly, certain implementations may vary a zoom level, based on the metadata, while panning between source photos.
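As a hedged illustration of the capture-order pan-and-zoom described above (not part of the abstract itself), the following sketch orders panorama regions by capture timestamp and derives a zoom factor from recorded focal length. The SourceImage structure, its field names, and the timing defaults are assumptions introduced here for illustration, not details from the disclosure:

```python
# A minimal sketch, assuming each source image carries a capture timestamp,
# the rectangle it occupies within the stitched panorama, and the focal
# length recorded at capture. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class SourceImage:
    timestamp: float        # capture time, e.g. seconds since epoch
    region: tuple           # (x, y, width, height) within the panorama
    focal_length_mm: float  # lens focal length recorded at capture

def build_keyframes(sources, seconds_per_image=2.0, base_focal_mm=28.0):
    """Order the panorama regions by capture time and emit pan/zoom keyframes.

    The zoom factor is derived from the recorded focal length relative to a
    base focal length, so regions shot "zoomed in" are shown magnified.
    """
    keyframes = []
    for i, src in enumerate(sorted(sources, key=lambda s: s.timestamp)):
        x, y, w, h = src.region
        keyframes.append({
            "time": i * seconds_per_image,                # when to arrive at this region
            "center": (x + w / 2, y + h / 2),             # pan target
            "zoom": src.focal_length_mm / base_focal_mm,  # zoom varies with metadata
        })
    return keyframes

if __name__ == "__main__":
    shots = [
        SourceImage(timestamp=10.0, region=(0, 0, 800, 600), focal_length_mm=28.0),
        SourceImage(timestamp=12.5, region=(700, 0, 800, 600), focal_length_mm=56.0),
        SourceImage(timestamp=11.2, region=(350, 0, 800, 600), focal_length_mm=28.0),
    ]
    for kf in build_keyframes(shots):
        print(kf)
```

An animation layer would then interpolate the view between successive keyframes to produce the Ken Burns-style pan.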
Abstract:
The disclosed technology includes switching between a normal or standard-lens UI and a panoramic or wide-angle photography UI responsive to a zoom gesture. In one implementation, a user gesture corresponding to a “zoom-out” command, when received at a mobile computing device associated with a minimum zoom state, may trigger a switch from a standard lens photo capture UI to a wide-angle photography UI. In another implementation, a user gesture corresponding to a “zoom-in” command, when received at a mobile computing device associated with a nominal wide-angle state, may trigger a switch from a wide-angle photography UI to a standard lens photo capture UI.
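The gesture-driven mode switch can be read as a small state machine. The sketch below is one possible rendering of that behavior; the mode names, the CameraController class, and the zoom bounds are assumptions for illustration, not elements of the disclosure:

```python
# A minimal sketch of the zoom-gesture UI switch, assuming a camera state
# with a zoom level clamped to [MIN_ZOOM, MAX_ZOOM]. Names are hypothetical.
MIN_ZOOM, MAX_ZOOM = 1.0, 8.0
STANDARD, WIDE_ANGLE = "standard_lens_ui", "wide_angle_ui"

class CameraController:
    def __init__(self):
        self.mode = STANDARD
        self.zoom = MIN_ZOOM

    def on_zoom_gesture(self, factor):
        """factor < 1.0 is a pinch 'zoom-out'; factor > 1.0 is a 'zoom-in'."""
        if self.mode == STANDARD:
            if factor < 1.0 and self.zoom <= MIN_ZOOM:
                # Zoom-out while already at minimum zoom: switch to the
                # wide-angle / panoramic photography UI.
                self.mode = WIDE_ANGLE
            else:
                self.zoom = max(MIN_ZOOM, min(MAX_ZOOM, self.zoom * factor))
        elif self.mode == WIDE_ANGLE:
            if factor > 1.0:
                # Zoom-in at the nominal wide-angle state: return to the
                # standard lens photo capture UI.
                self.mode = STANDARD

ctrl = CameraController()
ctrl.on_zoom_gesture(0.8)   # pinch out at minimum zoom
print(ctrl.mode)            # -> wide_angle_ui
ctrl.on_zoom_gesture(1.25)  # pinch in
print(ctrl.mode)            # -> standard_lens_ui
```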
Abstract:
The disclosed technology includes automatically suggesting audio, video, or other media accompaniments to media content based on identified objects in the media content. Media content may include images, audio, video, or a combination. In one implementation, one or more images representative of the media content may be extracted. A visual search may be run across the images to identify objects or characteristics present in or associated with the media content. Keywords may be generated based on the identified objects and characteristics. The keywords may be used to determine suitable audio tracks to accompany the media content, for example by performing a search based on the keywords. The determined tracks may be presented to a user, or automatically arranged to match the media content. In another implementation, an aural search may be run across samples of the media content's audio data to similarly identify objects and characteristics of the media content.
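One possible sketch of the visual-search-to-keywords pipeline follows. The detect_objects stub stands in for a real visual search service, and the in-memory track catalog, tag fields, and ranking rule are all assumptions made here for illustration:

```python
# A minimal sketch of the suggestion pipeline. In practice the labels would
# come from a visual search over representative frames; here detection is
# stubbed with fixed labels and the catalog is a toy list. Names are
# hypothetical, not from the disclosure.
def detect_objects(image_path):
    # Stand-in for a visual search over a representative image.
    return ["beach", "sunset", "surfboard"]

def keywords_from_labels(labels):
    # Map detected objects to search keywords; a real system might expand
    # these with synonyms or mood terms.
    return set(labels)

TRACKS = [
    {"title": "Tide Lines", "tags": {"beach", "relaxed"}},
    {"title": "Golden Hour", "tags": {"sunset", "warm"}},
    {"title": "City Rush", "tags": {"traffic", "urban"}},
]

def suggest_tracks(image_path, catalog=TRACKS):
    keywords = keywords_from_labels(detect_objects(image_path))
    # Rank tracks by keyword overlap; keep any track with at least one match.
    scored = [(len(t["tags"] & keywords), t) for t in catalog]
    return [t["title"] for score, t in sorted(scored, key=lambda s: -s[0]) if score > 0]

print(suggest_tracks("vacation_frame.jpg"))  # -> ['Tide Lines', 'Golden Hour']
```

The aural-search variant would follow the same shape, with the detection step replaced by a classifier run over audio samples rather than extracted images.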