-
Publication No.: US11342003B1
Publication Date: 2022-05-24
Application No.: US16711797
Filing Date: 2019-12-12
Applicant: Amazon Technologies, Inc.
Inventor: Christian Garcia Siagian , Christian Ciabattoni , David Niu , Lawrence Kyuil Chang , Gordon Zheng , Ritesh Pase , Shiva Krishnamurthy , Ramakanth Mudumba
Abstract: Disclosed are various embodiments for segmenting and classifying video content using sounds. In one embodiment, a plurality of segments of a video content item are generated by analyzing audio accompanying the video content item. A subset of the plurality of segments that correspond to music segments is selected based at least in part on an audio characteristic of the subset of the plurality of segments. Individual segments of the subset of the plurality of segments are processed to determine whether a classification applies to the individual segments. A list of segments of the video content item to which the classification applies is generated.
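As a rough illustration of the workflow this abstract describes (split the accompanying audio into segments, keep the subset whose audio characteristic indicates music, then classify each kept segment), the sketch below uses librosa. The silence-based splitting, the spectral-flatness threshold, and the classify_segment() stub are assumptions made for illustration, not the patented method.

```python
# Minimal sketch of the music-segment workflow, assuming the audio track has
# already been extracted from the video (e.g. to a WAV file). Thresholds and
# the classifier stub are illustrative assumptions.
import librosa
import numpy as np

def segment_and_classify_by_music(audio_path, flatness_threshold=0.2):
    y, sr = librosa.load(audio_path, sr=None, mono=True)

    # 1. Generate candidate segments by splitting on low-energy (quiet) gaps.
    intervals = librosa.effects.split(y, top_db=30)  # (start, end) sample pairs

    # 2. Select the subset whose audio characteristic suggests music;
    #    spectral flatness stands in for that characteristic here.
    music_segments = []
    for start, end in intervals:
        clip = y[start:end]
        flatness = float(np.mean(librosa.feature.spectral_flatness(y=clip)))
        if flatness < flatness_threshold:   # tonal content -> likely music
            music_segments.append((start / sr, end / sr))

    # 3. Process each music segment with a classifier and return the matches.
    return [seg for seg in music_segments if classify_segment(audio_path, seg)]

def classify_segment(audio_path, segment):
    """Hypothetical classifier stub; a real system might apply a trained model."""
    return True
```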
-
Publication No.: US11120839B1
Publication Date: 2021-09-14
Application No.: US16711841
Filing Date: 2019-12-12
Applicant: Amazon Technologies, Inc.
Inventor: Christian Garcia Siagian , Christian Ciabattoni , David Niu , Lawrence Kyuil Chang , Gordon Zheng , Ritesh Pase , Shiva Krishnamurthy , Ramakanth Mudumba
Abstract: Disclosed are various embodiments for segmenting and classifying video content using conversation. In one embodiment, a plurality of segments of a video content item are generated by analyzing audio accompanying the video content item. A subset of the plurality of segments that correspond to conversation segments is selected. Individual segments of the subset of the plurality of segments are processed to determine whether a classification applies to the individual segments. A list of segments of the video content item to which the classification applies is generated.
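A comparable sketch for the conversation-oriented variant, assuming the video's audio has been exported as 16 kHz, 16-bit mono PCM. webrtcvad serves only as a stand-in voice-activity detector; the minimum-duration heuristic and the classify_segment() stub are illustrative assumptions rather than the claimed method.

```python
# Minimal sketch of the conversation-segment workflow, assuming 16-bit mono
# PCM audio at a sample rate supported by webrtcvad (8/16/32/48 kHz).
import wave
import webrtcvad

def conversation_segments(pcm_wav_path, frame_ms=30, aggressiveness=2):
    with wave.open(pcm_wav_path, "rb") as wf:
        sample_rate = wf.getframerate()
        pcm = wf.readframes(wf.getnframes())

    vad = webrtcvad.Vad(aggressiveness)
    frame_bytes = int(sample_rate * frame_ms / 1000) * 2   # 16-bit samples

    # 1. Mark each fixed-length frame as speech or non-speech, and close a
    #    segment whenever speech stops.
    segments, start = [], None
    for i in range(0, len(pcm) - frame_bytes, frame_bytes):
        t = i / 2 / sample_rate
        if vad.is_speech(pcm[i:i + frame_bytes], sample_rate):
            start = t if start is None else start
        elif start is not None:
            segments.append((start, t))
            start = None

    # 2. Keep segments long enough to plausibly be conversation, then classify.
    conversation = [s for s in segments if s[1] - s[0] > 2.0]
    return [s for s in conversation if classify_segment(pcm_wav_path, s)]

def classify_segment(path, segment):
    """Hypothetical classifier stub; a real system might run speech-to-text
    and a text classifier over the segment."""
    return True
```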
-
Publication No.: US10904476B1
Publication Date: 2021-01-26
Application No.: US16712294
Filing Date: 2019-12-12
Applicant: Amazon Technologies, Inc.
Inventor: Christian Garcia Siagian , Charles Effinger , David Niu , Yang Yu , Narayan Sundaram , Arjun Cholkar , Ramakanth Mudumba
Abstract: Techniques for automated up-sampling of media files are provided. In some examples, a title associated with a media file, a metadata file associated with the title, and the media file may be received. The media file may be partitioned into one or more scene files, each scene file including a plurality of frame images in a sequence. One or more up-sampled scene files may be generated, each corresponding to a scene file of the one or more scene files. An up-sampled media file may be generated by combining at least a subset of the one or more up-sampled scene files. Generating one or more up-sampled scene files may include identifying one or more characters in a frame image of the plurality of frame images, based at least in part on implementation of a facial recognition algorithm including deep learning features in a neural network.
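The abstract outlines a pipeline of partitioning a media file into scene files, up-sampling each scene, and recombining the results. The sketch below approximates that pipeline with OpenCV: scene cuts come from a simple frame-difference heuristic, cv2.resize stands in for a learned super-resolution model, and the facial-recognition step that guides character identification is omitted. File names and thresholds are placeholders, not values from the patent.

```python
# Minimal sketch of the up-sampling pipeline: detect scene boundaries, up-sample
# frames, and write a combined output. All heuristics here are assumptions.
import cv2
import numpy as np

def upsample_media(in_path, out_path, scale=2, cut_threshold=30.0):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * scale
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * scale
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    prev_gray, scene_index = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # 1. Partition: a large mean frame difference marks a new scene.
        #    A real system would emit separate scene files here.
        if prev_gray is not None and np.mean(cv2.absdiff(gray, prev_gray)) > cut_threshold:
            scene_index += 1
        prev_gray = gray

        # 2. Up-sample: per-scene model selection (e.g. keyed on recognized
        #    characters) could happen here; cv2.resize is only a placeholder.
        out.write(cv2.resize(frame, (w, h), interpolation=cv2.INTER_CUBIC))

    cap.release()
    out.release()

# Placeholder file names for illustration.
upsample_media("title.mp4", "title_upsampled.mp4")
```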
-