Abstract:
Techniques are described for determining motion saliency in video content using center-surround receptive fields. In some implementations, images or frames from a video may be apportioned into non-overlapping regions, for example, by applying a rectilinear grid. For each grid region, or cell, motion consistency may be measured between the center and surround area of that cell across frames of the video. Consistent motion across the center-surround area may indicate that the corresponding region has low variation. The larger the difference between center and surround motions in a cell, the more likely the region has high motion saliency.
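The per-cell center-surround comparison above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes a dense optical-flow field is already available (e.g. from an optical-flow estimator), and the grid size and center fraction are illustrative choices not taken from the abstract.

```python
import numpy as np

def motion_saliency(flow, grid=8, center_frac=0.5):
    """Center-surround motion saliency per grid cell (illustrative sketch).

    flow: (H, W, 2) array of per-pixel motion vectors between two frames.
    Returns a (grid, grid) map; a large value means the cell's center moves
    differently from its surround, i.e. high motion saliency.
    """
    H, W, _ = flow.shape
    ch, cw = H // grid, W // grid
    sal = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = flow[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            # Central sub-window of the cell (half the cell by default).
            mh = max(1, int(ch * center_frac) // 2)
            mw = max(1, int(cw * center_frac) // 2)
            cy, cx = ch // 2, cw // 2
            center = cell[cy - mh:cy + mh, cx - mw:cx + mw]
            c_mean = center.reshape(-1, 2).mean(axis=0)
            # Surround = rest of the cell, selected by masking out the center.
            mask = np.ones(cell.shape[:2], dtype=bool)
            mask[cy - mh:cy + mh, cx - mw:cx + mw] = False
            s_mean = cell[mask].mean(axis=0)
            # Consistent center/surround motion -> small difference -> low saliency.
            sal[i, j] = np.linalg.norm(c_mean - s_mean)
    return sal
```

A uniformly moving (or static) region yields near-zero saliency everywhere, while a small object moving against its surround lights up only the cell containing it.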
Abstract:
Implementations of the disclosed technology include techniques for autonomously collecting image data and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues, and an image object may be generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
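The sample-then-winnow flow above can be sketched with a bounded heap: as frames arrive, each is scored and the least interesting kept photo is continuously discarded. This is a hypothetical illustration, assuming some interestingness-scoring function `score` (the abstract does not specify how photos are ranked or how many survive).

```python
import heapq

def summarize_session(frames, score, keep=3):
    """Winnow sampled frames to the `keep` most interesting (illustrative).

    frames: iterable of sampled images (any objects).
    score: hypothetical interestingness function -- an assumption, not
           specified by the source.
    Returns the surviving photos in capture order, as a session summary.
    """
    kept = []  # min-heap of (score, capture_index, frame)
    for idx, frame in enumerate(frames):
        heapq.heappush(kept, (score(frame), idx, frame))
        if len(kept) > keep:
            heapq.heappop(kept)  # drop the currently least interesting photo
    # Present the survivors in the order they were captured.
    return [f for _, _, f in sorted(kept, key=lambda t: t[1])]
```

Because winnowing happens per frame rather than at session end, memory stays bounded no matter how long the live preview runs, matching the "further winnowed as a photo session continues" behavior.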
Abstract:
An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
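The decision rule in this method — advance when the pronation is slower than the preceding supination, go back when it is faster — can be sketched as below. The event tuples are a toy abstraction: a real wearable would derive supination/pronation segments and their peak accelerations from raw motion-sensor samples.

```python
def classify_wrist_gesture(events):
    """Map a supination-then-pronation movement to a card action (sketch).

    events: list of (kind, peak_acceleration) tuples, kind being
    'supination' or 'pronation' -- an assumed simplification of the
    motion data described in the abstract.
    Returns 'next', 'previous', or None if no matching movement is found.
    """
    for (k1, a1), (k2, a2) in zip(events, events[1:]):
        if k1 == 'supination' and k2 == 'pronation':
            if a2 < a1:
                return 'next'      # gentler return -> display next content card
            if a2 > a1:
                return 'previous'  # sharper return -> display previous content card
    return None
```

Keying on the *relative* acceleration of the two phases, rather than absolute thresholds, lets one wrist-turn motion carry two opposite navigation commands.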