Abstract:
Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. A video summary can be generated that includes one or more of the identified best scenes. The video summary can be generated using a video summary template with slots corresponding to video clips selected from among sets of candidate video clips. Best scenes can also be identified by receiving, from a user during capture of the video, an indication of an event of interest within the video. Metadata patterns representing activities identified within video clips can be identified within other videos, which can subsequently be associated with the identified activities.
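The flow described above can be sketched in code. This is a minimal illustration under assumed structures, not the patent's actual implementation: metadata is modeled as timestamped sensor values, an "event of interest" as a threshold crossing, and the summary template as a fixed number of slots filled from the highest-scoring candidate clips. All names (`Clip`, `events_of_interest`, `best_scenes`, `fill_template`) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical clip structure; fields are illustrative assumptions.
@dataclass
class Clip:
    start: float   # seconds into the video
    end: float
    score: float   # interest score derived from metadata

def events_of_interest(metadata, threshold=2.0):
    """Return timestamps where a metadata signal (e.g., accelerometer
    magnitude) exceeds a threshold -- a stand-in for event detection."""
    return [t for t, value in metadata if value >= threshold]

def best_scenes(events, window=2.0):
    """Build one candidate scene per event: a window around its timestamp."""
    return [Clip(start=max(0.0, t - window), end=t + window, score=1.0)
            for t in events]

def fill_template(slots, candidates):
    """Fill each template slot with the highest-scoring unused candidate."""
    remaining = sorted(candidates, key=lambda c: c.score, reverse=True)
    return [remaining.pop(0) for _ in range(min(slots, len(remaining)))]

# Toy metadata: (timestamp, signal value) pairs.
metadata = [(1.0, 0.5), (3.0, 2.5), (7.0, 3.1), (9.0, 1.0)]
events = events_of_interest(metadata)          # events at t=3.0 and t=7.0
scenes = best_scenes(events)                   # candidate clips around them
summary = fill_template(slots=2, candidates=scenes)
```

A real system would score candidates on richer metadata (speed, faces, audio levels) and would constrain slots by duration and ordering, but the select-then-fill structure is the same.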
Abstract:
A peripheral device (e.g., a small wearable device) may operate in conjunction with a camera to enable in-the-moment capture and control. The peripheral device may receive voice commands and use voice recognition to generate a control signal for the camera, enabling users to participate freely in their activities while seamlessly controlling the camera hands-free. Additionally, the peripheral device may operate as a wireless microphone source, capturing high-quality audio instead of or in addition to the audio captured by the camera. This may improve audio quality in certain operating conditions, such as narrating and interviewing.
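The voice-command path above can be sketched as a simple pipeline: recognized speech is mapped to a known command phrase, which is then encoded as a control signal for the camera. The command phrases, signal encoding, and function names below are illustrative assumptions, not the patent's actual protocol.

```python
from typing import Optional

# Hypothetical phrase-to-command table; real devices would use a trained
# voice-recognition model rather than exact string matching.
COMMANDS = {
    "start recording": "CMD_RECORD_START",
    "stop recording": "CMD_RECORD_STOP",
    "take photo": "CMD_PHOTO",
    "tag moment": "CMD_TAG",
}

def recognize(utterance: str) -> Optional[str]:
    """Stand-in for on-device voice recognition: normalize the utterance
    and look it up among the known command phrases."""
    return COMMANDS.get(utterance.strip().lower())

def control_signal(utterance: str) -> Optional[bytes]:
    """Encode the recognized command as bytes to send over the wireless
    link to the camera; return None if nothing was recognized."""
    cmd = recognize(utterance)
    return cmd.encode("ascii") if cmd else None
```

Keeping recognition on the peripheral (rather than streaming raw audio to the camera) is what makes the hands-free control loop low-latency and robust to the camera being out of reach.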