-
Publication number: US12217770B1
Publication date: 2025-02-04
Application number: US17305127
Application date: 2021-06-30
Applicant: Amazon Technologies, Inc.
Inventor: Lokesh Amarnath Ravindranathan , Kaustav Nandy , Manivel Sethu , Yongjun Wu , Imran Khan , Shivam Agarwal , Yash Pandya
Abstract: Some implementations include methods for generating a visualization emphasis object for players and may include receiving a video clip associated with a sporting event in which a plurality of players participate on a playing field with a play object. The players in a frame of the video clip may be detected. Players who are on the playing field may be identified from the detected players. Each of the players identified as being on the playing field may be associated with a rectangular bounding box that provides an outline of that player. A player who has possession of the play object may be identified. A visualization emphasis object may be generated and placed on the player who has possession of the play object. The visualization emphasis object may have a size proportional to the height of the bounding box associated with the player having possession of the play object.
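A minimal sketch of the emphasis-sizing step described above, assuming axis-aligned bounding boxes in pixel coordinates and hypothetical `on_field` / `has_possession` flags per detected player; the emphasis object is modeled as an ellipse at the player's feet whose radius scales with the box height:

```python
from dataclasses import dataclass

@dataclass
class PlayerDetection:
    # Axis-aligned bounding box in pixel coordinates (top-left origin).
    x: float
    y: float
    width: float
    height: float
    on_field: bool          # player identified as being on the playing field
    has_possession: bool    # player identified as possessing the play object

@dataclass
class EmphasisObject:
    # Ellipse placed at the player's feet; size is proportional to box height.
    center_x: float
    center_y: float
    radius_x: float
    radius_y: float

def build_emphasis(players, scale=0.4, aspect=0.35):
    """Return an emphasis ellipse for the on-field player with possession, or None."""
    for p in players:
        if p.on_field and p.has_possession:
            radius_x = scale * p.height        # proportional to bounding-box height
            radius_y = aspect * radius_x       # flattened ellipse for a ground marker
            return EmphasisObject(
                center_x=p.x + p.width / 2.0,  # horizontal center of the box
                center_y=p.y + p.height,       # bottom edge of the box (the feet)
                radius_x=radius_x,
                radius_y=radius_y,
            )
    return None

if __name__ == "__main__":
    frame_players = [
        PlayerDetection(x=120, y=80, width=40, height=110, on_field=True, has_possession=False),
        PlayerDetection(x=300, y=95, width=45, height=130, on_field=True, has_possession=True),
    ]
    print(build_emphasis(frame_players))
```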
-
Publication number: US12211275B1
Publication date: 2025-01-28
Application number: US17657302
Application date: 2022-03-30
Applicant: Amazon Technologies, Inc.
Inventor: Kaustav Nandy , Lokesh Amarnath Ravindranathan , Shivam Agarwal , Yash Pandya , Imran Khan , Manivel Sethu , Abhinav Aggarwal
IPC: G06V20/40
Abstract: Techniques for reducing the latency of annotating a replay video segment may include receiving a video segment with content involving multiple individuals. An annotation task is performed concurrently with a tracking task. The annotation task receives annotation data to indicate which of the individuals is an individual of interest in a subset of frames of the video segment, and the tracking task tracks the individuals in the video segment by generating bounding objects corresponding to the individuals. The annotation data can be associated with the bounding objects to detect a bounding object for the individual of interest, and a visualization emphasis object is generated based on the detected bounding object in a replay video segment to identify the individual of interest.
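A minimal sketch of the concurrency structure, assuming a hypothetical annotation source that returns a point click for the individual of interest in a few frames and a hypothetical tracker that returns per-frame boxes keyed by track id; the two tasks run in parallel and the click is matched to the track whose box contains it:

```python
from concurrent.futures import ThreadPoolExecutor

def run_annotation_task(video_segment):
    # Hypothetical stand-in: an operator marks the individual of interest
    # with a point (x, y) in a small subset of frames.
    return {5: (210, 150)}  # frame_index -> click location

def run_tracking_task(video_segment):
    # Hypothetical stand-in: a tracker returns per-frame boxes keyed by track id,
    # each box as (x, y, width, height).
    return {
        5: {"track_a": (40, 60, 50, 120), "track_b": (190, 100, 55, 130)},
        6: {"track_a": (42, 61, 50, 120), "track_b": (195, 102, 55, 130)},
    }

def contains(box, point):
    x, y, w, h = box
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def find_track_of_interest(video_segment):
    # Run annotation and tracking concurrently to reduce end-to-end latency.
    with ThreadPoolExecutor(max_workers=2) as pool:
        annotation_future = pool.submit(run_annotation_task, video_segment)
        tracking_future = pool.submit(run_tracking_task, video_segment)
        annotations = annotation_future.result()
        tracks = tracking_future.result()

    # Associate each annotated point with the bounding object that contains it.
    for frame_index, point in annotations.items():
        for track_id, box in tracks.get(frame_index, {}).items():
            if contains(box, point):
                return track_id  # an emphasis object can follow this track in the replay
    return None

if __name__ == "__main__":
    print(find_track_of_interest("replay_segment.mp4"))
```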
-
Publication number: US11151386B1
Publication date: 2021-10-19
Application number: US16809421
Application date: 2020-03-04
Applicant: Amazon Technologies, Inc.
Inventor: Abhinav Aggarwal , Heena Bansal , Lokesh Amarnath Ravindranathan , Yash Pandya , Muhammad Raffay Hamid , Manivel Sethu
IPC: G06K9/00 , H04N21/44 , G06F16/78 , G06F16/75 , G06F16/783
Abstract: Systems, methods, and computer-readable media are disclosed for automated identification and tagging of video content. Example methods may include determining a first set of frames in video content, determining a first set of faces that appear in the first set of frames, and extracting first image content corresponding to respective faces of the first set of faces. Methods may include classifying the first image content into a first set of clusters, generating a second set of clusters comprising a second set of faces, wherein the second set of faces has fewer faces than the first set of faces, and determining a first actor identifier associated with a first face in the second set of clusters. Some methods may include determining second image content in the second set of clusters comprising the first face, and automatically associating the first actor identifier with the second image content.
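A minimal sketch of the cluster-then-merge flow, assuming face crops are already reduced to hypothetical fixed-length embeddings; a greedy threshold clustering stands in for the first clustering pass, cluster centroids are merged into a smaller second set, and an actor identifier assigned to one face is propagated to every face in its merged cluster:

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def greedy_cluster(embeddings, threshold):
    """Assign each embedding to the first cluster whose seed is within threshold."""
    clusters = []  # list of (seed_embedding, [member indices])
    for i, emb in enumerate(embeddings):
        for seed, members in clusters:
            if cosine_distance(seed, emb) < threshold:
                members.append(i)
                break
        else:
            clusters.append((emb, [i]))
    return clusters

def label_faces(face_embeddings, labeled_face_index, actor_id,
                first_threshold=0.3, merge_threshold=0.2):
    # First pass: fine-grained clusters over all detected faces.
    first_clusters = greedy_cluster(face_embeddings, first_threshold)
    # Second pass: merge cluster seeds so the second set has fewer distinct faces.
    seeds = [seed for seed, _ in first_clusters]
    merged = greedy_cluster(seeds, merge_threshold)
    # Propagate the actor identifier to every face in the merged cluster
    # that contains the face already associated with that identifier.
    labels = {}
    for _, seed_indices in merged:
        member_faces = [i for ci in seed_indices for i in first_clusters[ci][1]]
        if labeled_face_index in member_faces:
            for face_index in member_faces:
                labels[face_index] = actor_id
    return labels

if __name__ == "__main__":
    faces = [(1.0, 0.0), (0.98, 0.05), (0.0, 1.0), (0.02, 0.99)]
    print(label_faces(faces, labeled_face_index=0, actor_id="actor_123"))
```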
-
Publication number: US10897658B1
Publication date: 2021-01-19
Application number: US16394757
Application date: 2019-04-25
Applicant: Amazon Technologies, Inc.
Inventor: Manivel Sethu , Lokesh Amarnath Ravindranathan , Yongjun Wu
Abstract: Methods and apparatus are described for automating aspects of the annotation of a media presentation. Events are identified that relate to entities associated with the scenes of the media presentation. These events are time coded relative to the media timeline of the media presentation and might represent, for example, the appearance of a particular cast member or playback of a particular music track. The video frames of the media presentation are processed to identify visually similar intervals that may serve as or be used to identify contexts (e.g., scenes) within the media presentation. Relationships between the event data and the visually similar intervals or contexts are used to identify portions of the media presentation during which the occurrence of additional meaningful events is unlikely. This information may be surfaced to a human operator tasked with annotating the content as an indication that part of the media presentation may be skipped.
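A minimal sketch of the skip-detection step, assuming visually similar intervals have already been computed as (start, end) pairs on the media timeline and events are time codes in seconds; intervals that overlap no events are surfaced as candidates the operator may skip:

```python
def find_skippable_intervals(similar_intervals, event_times):
    """Return visually similar intervals during which no time-coded event occurs.

    similar_intervals: list of (start_seconds, end_seconds) pairs.
    event_times: list of event time codes in seconds (cast appearances,
        music-track playback, and similar entity events).
    """
    skippable = []
    for start, end in similar_intervals:
        has_event = any(start <= t <= end for t in event_times)
        if not has_event:
            # No known event overlaps this interval, so additional meaningful
            # events are unlikely and the operator may skip it.
            skippable.append((start, end))
    return skippable

if __name__ == "__main__":
    intervals = [(0, 45), (45, 130), (130, 200)]
    events = [20.0, 150.5]  # e.g., a cast member appears, a music track starts
    print(find_skippable_intervals(intervals, events))  # [(45, 130)]
```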
-
Publication number: US12169983B1
Publication date: 2024-12-17
Application number: US17712730
Application date: 2022-04-04
Applicant: Amazon Technologies, Inc.
Inventor: Yash Pandya , Abhinav Aggarwal , Lokesh Amarnath Ravindranathan , Laxmi Shivaji Ahire , Manivel Sethu , Kaustav Nandy , Nihal Shandilya
IPC: G06V40/16 , G06T7/77 , G06V10/72 , G06V10/774
Abstract: Systems and techniques are described for ranking and selecting headshots from a collection of images. The ranking techniques use heuristics to rank headshots extracted from a set of images based on features of the faces within the images. The ranking may be used to generate a training dataset for a machine learning model that determines quality scores for faces within images. The ranked images may then be stored for later access in reference to a video or set of images containing the individual.
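A minimal sketch of the heuristic ranking, assuming each extracted headshot carries hypothetical per-face features (face size, sharpness, frontal pose score) and assumed heuristic weights; a weighted sum orders the headshots, and the ranked pairs could seed a training set for a quality-score model:

```python
from dataclasses import dataclass

@dataclass
class Headshot:
    image_id: str
    face_area: float       # fraction of the crop occupied by the face, 0..1
    sharpness: float       # normalized blur measure, 0..1 (higher is sharper)
    frontalness: float     # 0..1, how frontal the head pose is

# Assumed heuristic weights; in practice these would be tuned.
WEIGHTS = {"face_area": 0.3, "sharpness": 0.4, "frontalness": 0.3}

def heuristic_score(h: Headshot) -> float:
    """Weighted combination of face features used to rank extracted headshots."""
    return (WEIGHTS["face_area"] * h.face_area
            + WEIGHTS["sharpness"] * h.sharpness
            + WEIGHTS["frontalness"] * h.frontalness)

def rank_headshots(headshots):
    """Rank headshots best-first; ranked pairs can label a quality-score model."""
    return sorted(((heuristic_score(h), h) for h in headshots),
                  key=lambda pair: pair[0], reverse=True)

if __name__ == "__main__":
    candidates = [
        Headshot("frame_0012", face_area=0.45, sharpness=0.9, frontalness=0.8),
        Headshot("frame_0304", face_area=0.30, sharpness=0.5, frontalness=0.4),
    ]
    for score, shot in rank_headshots(candidates):
        print(f"{shot.image_id}: {score:.2f}")
```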
-
Publication number: US11790695B1
Publication date: 2023-10-17
Application number: US17322753
Application date: 2021-05-17
Applicant: Amazon Technologies, Inc.
Inventor: Abhinav Aggarwal , Yash Pandya , Laxmi Shivaji Ahire , Lokesh Amarnath Ravindranathan , Manivel Sethu , Muhammad Raffay Hamid
IPC: G06K9/00 , H04N21/44 , G06F16/78 , G06F16/75 , G06F16/783 , G06V40/16 , G06V20/40 , G06F18/23 , G06F18/21
CPC classification number: G06V40/173 , G06F18/2178 , G06F18/23 , G06V20/40 , G06V40/179
Abstract: Devices, systems, and methods are provided for enhanced video annotations using image analysis. A method may include identifying, by a first device, first faces of first video frames, and second faces of second video frames. The method may include determining a first score for the first video frames, the first score indicative of a first number of faces to label, the first number of faces represented by the first video frames, and determining a second score for the second video frames, the second score indicative of a second number of faces to label. The method may include selecting the first video frames for face labeling, and receiving a first face label for the first face. The method may include generating a second face label for the second faces. The method may include sending the first face label and the second face label to a second device for presentation.
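A minimal sketch of the frame-selection and label-propagation steps, assuming each candidate frame set is summarized by hypothetical detected-face records carrying an embedding; the set offering the most faces to label is scored highest and sent for manual labeling, and labels from the chosen set are then generated for faces in the other sets by embedding similarity:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def score_frame_set(faces):
    """Score a frame set by how many faces it offers to label."""
    return len(faces)

def select_and_propagate(frame_sets, received_labels, similarity_threshold=0.8):
    """Pick the frame set with the highest score for manual labeling, then
    generate labels for faces in the remaining sets by embedding similarity.

    frame_sets: dict of set_id -> list of (face_id, embedding) pairs.
    received_labels: dict of face_id -> label, supplied for the selected set.
    """
    selected = max(frame_sets, key=lambda s: score_frame_set(frame_sets[s]))
    labeled = [(fid, emb, received_labels[fid])
               for fid, emb in frame_sets[selected] if fid in received_labels]

    generated = {}
    for set_id, faces in frame_sets.items():
        if set_id == selected:
            continue
        for face_id, emb in faces:
            # Assign the label of the most similar manually labeled face.
            best = max(labeled, key=lambda item: cosine_similarity(item[1], emb))
            if cosine_similarity(best[1], emb) >= similarity_threshold:
                generated[face_id] = best[2]
    return selected, generated

if __name__ == "__main__":
    sets = {
        "frames_a": [("a1", (1.0, 0.0)), ("a2", (0.0, 1.0))],
        "frames_b": [("b1", (0.97, 0.1))],
    }
    print(select_and_propagate(sets, {"a1": "Alice", "a2": "Bob"}))
```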