Abstract:
A content annotation tool is disclosed. In a configuration, a portion of a movie may be obtained from a database. Entities, such as an actor, background music, or text, may be automatically identified in the movie. A user, such as a content producer, may associate and/or provide supplemental content for an identified entity to the database. A selection of one or more of the automatically identified entities may be received. A database entry may be generated that links the identified entity with the supplemental content. The selected entities and/or the supplemental content associated therewith may be presented to an end user.
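The linking step described above can be sketched as a single database row associating an identified entity with producer-supplied supplemental content. This is a minimal illustration; the table schema and the `annotate` helper are hypothetical, not taken from the disclosure:

```python
# Hypothetical sketch: link an automatically identified entity with
# supplemental content via a database entry.
import sqlite3

def annotate(db, entity_id, entity_type, supplemental):
    """Create a database entry linking an identified entity (e.g. an
    actor or a piece of background music) with supplemental content."""
    db.execute(
        "INSERT INTO annotations (entity_id, entity_type, supplemental) "
        "VALUES (?, ?, ?)",
        (entity_id, entity_type, supplemental),
    )
    db.commit()

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE annotations (entity_id TEXT, entity_type TEXT, supplemental TEXT)"
)
annotate(db, "actor_42", "actor", "https://example.com/actor-bio")
row = db.execute("SELECT * FROM annotations").fetchone()
```

An end-user client could then look up `annotations` rows for the entities visible in the currently playing portion of the movie.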
Abstract:
An example method includes receiving a first image and a second image of a face of a user, where one or both images have been granted a match by facial recognition. The method further includes detecting a liveness gesture based on at least one of a yaw angle of the second image relative to the first image and a pitch angle of the second image relative to the first image, where the yaw angle corresponds to a transition along a horizontal axis, and where the pitch angle corresponds to a transition along a vertical axis. The method further includes generating a liveness score based on a yaw angle magnitude and/or a pitch angle magnitude, comparing the liveness score to a threshold value, and determining, based on the comparison, whether to deny authentication to the user with respect to accessing one or more functionalities controlled by a computing device.
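The scoring and comparison steps can be sketched as below, assuming the liveness score is a simple weighted sum of the yaw and pitch angle magnitudes; the weights and the threshold value are illustrative assumptions, not taken from the abstract:

```python
def liveness_score(yaw_deg, pitch_deg, yaw_weight=1.0, pitch_weight=1.0):
    """Score a liveness gesture from the magnitudes of the yaw
    (horizontal) and pitch (vertical) angles between two face images.
    The weighted-sum form is an illustrative assumption."""
    return yaw_weight * abs(yaw_deg) + pitch_weight * abs(pitch_deg)

def deny_authentication(yaw_deg, pitch_deg, threshold=10.0):
    """Deny authentication when the detected gesture is too small to
    indicate a live user (e.g. a static photo held to the camera)."""
    return liveness_score(yaw_deg, pitch_deg) < threshold
```

A photo spoof would produce near-zero yaw and pitch between the two frames and thus fall below the threshold, while a deliberate head turn or nod would pass.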
Abstract:
In one example, a method includes determining, by a first motion module of a computing device and based on first motion data measured by a first motion sensor at a first time, that the computing device has moved, wherein a display operatively coupled to the computing device is deactivated at the first time; responsive to determining that the computing device has moved, activating a second motion module; determining, by the second motion module, second motion data measured by a second motion sensor, wherein determining the second motion data uses a greater quantity of power than determining the first motion data; determining a statistic of a group of statistics based on the second motion data; and responsive to determining that at least one of the group of statistics satisfies a threshold, activating the display.
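The two-stage, power-aware wake logic can be sketched as follows. The particular statistics (mean, variance, maximum) and the single shared threshold are illustrative assumptions, as is the callback-based interface to the higher-power sensor:

```python
def wake_display(low_power_moved, read_high_power_samples, threshold):
    """Decide whether to activate the display.

    Stage 1: the low-power motion module reports whether the device moved.
    Stage 2: only if it did, sample the higher-power motion sensor and
    compute a group of statistics; wake the display if any statistic
    satisfies the threshold.
    """
    if not low_power_moved:
        return False  # cheap sensor saw no motion; expensive one stays off
    samples = read_high_power_samples()  # higher power draw happens here
    mean = sum(samples) / len(samples)
    variance = sum((s - mean) ** 2 for s in samples) / len(samples)
    stats = (mean, variance, max(samples))
    return any(s >= threshold for s in stats)
```

Gating the expensive sensor read behind the cheap one is the point of the design: the high-power path runs only after the low-power module has already detected movement.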
Abstract:
In general, video segmentation techniques are described. According to various examples, the video segmentation techniques may be based on video content. An example method includes determining a number of segments into which to divide video content, dividing the video content into the determined number of segments, identifying a boundary frame associated with each of the segments, and adjusting the respective boundary frame associated with a first segment of the segments to generate an adjusted boundary frame associated with the first segment, wherein the adjusting is based on one or more entity representations associated with the adjusted boundary frame.
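The dividing and boundary-adjustment steps can be sketched as below, assuming evenly spaced initial boundaries and a snap-to-nearest adjustment toward frames that carry an entity representation; both choices are illustrative, not stated in the abstract:

```python
def segment_boundaries(num_frames, num_segments):
    """Divide video content into evenly sized segments; return the
    boundary frame index for each segment (an illustrative choice)."""
    step = num_frames // num_segments
    return [i * step for i in range(num_segments)]

def adjust_boundary(boundary, entity_frames, window=5):
    """Adjust a boundary toward the nearest frame within `window` that
    has an associated entity representation (e.g. a detected face or a
    recognized title card); keep the boundary if none is nearby."""
    candidates = [f for f in entity_frames if abs(f - boundary) <= window]
    if not candidates:
        return boundary
    return min(candidates, key=lambda f: abs(f - boundary))
```

Snapping boundaries to entity-bearing frames keeps each segment aligned with content (a scene's cast, on-screen text) rather than with an arbitrary frame count.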
Abstract:
A transformed video and a source video may be synchronized according to implementations disclosed herein to provide tag information to the device receiving the transformed version of the video. A synchronization signal may be computed for each of the source video and the transformed video using a statistic such as mean pixel intensity. The synchronization signals computed for the transformed video and the source video may be compared to determine a transformed video reference point location for the requested tag information. The requested tag information may be provided to the device receiving the transformed version of the video.
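The signal computation and comparison can be sketched as follows, assuming the statistic is per-frame mean pixel intensity and the comparison is a sum-of-squared-differences search over candidate offsets; both are assumptions beyond what the abstract states:

```python
def mean_intensity_signal(frames):
    """Per-frame mean pixel intensity; each frame is a 2-D list of
    pixel values. This is one example of the statistic described."""
    return [sum(map(sum, f)) / (len(f) * len(f[0])) for f in frames]

def best_offset(source_sig, transformed_sig):
    """Slide the transformed video's signal along the source video's
    signal and return the offset with the smallest sum of squared
    differences, i.e. the synchronization reference point."""
    n, m = len(source_sig), len(transformed_sig)
    best, best_err = 0, float("inf")
    for off in range(n - m + 1):
        err = sum(
            (source_sig[off + i] - transformed_sig[i]) ** 2 for i in range(m)
        )
        if err < best_err:
            best, best_err = off, err
    return best
```

Because mean intensity is largely preserved under common transformations (rescaling, recompression), matching the two signals locates where in the source timeline the transformed clip begins, so tags anchored to source timestamps can be remapped onto the transformed video.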