-
Publication No.: US11127034B1
Publication Date: 2021-09-21
Application No.: US16933873
Filing Date: 2020-07-20
Applicant: Amazon Technologies, Inc.
Inventor: Yashal Shakti Kanungo , Sumit Negi , Aruna Rajan , Meghana S. Shivanand Rajamane
Abstract: Technologies are provided for automated generation of directed content campaigns. The generated campaign can be optimized for performance. In some embodiments, a group of variations of attributes that define a directed content campaign can be generated and allocated traffic weights for respective impressions of the directed content campaign in a media outlet channel. The traffic weights can then be iteratively updated until a termination criterion is satisfied. At each iteration, the traffic weights can be updated by applying a machine-learning model to current performance metric values of respective impressions corresponding to the traffic weights. After termination of the updates to the traffic weights, a particular set of variations having traffic weights exceeding a threshold can be selected as a directed content campaign having satisfactory performance. Those variations can be supplied to a requestor device for subsequent utilization.
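The iterative re-weighting loop in this abstract can be pictured with a short sketch. It is a minimal illustration, not the claimed method: the softmax-style update rule, the simulated click-through rates, the exploration floor, and the 0.2 selection threshold are all assumptions standing in for the machine-learning model and termination criterion that the abstract leaves unspecified.

```python
# Minimal sketch of the iterative traffic-weight update loop described in the
# abstract. The update rule (a softmax over observed click-through rates) and
# the termination test are assumptions; the patent only states that a
# machine-learning model maps current performance metrics to new weights.
import numpy as np

rng = np.random.default_rng(0)
n_variations = 8                                     # hypothetical number of attribute variations
true_ctr = rng.uniform(0.01, 0.05, n_variations)     # simulated ground-truth performance

weights = np.full(n_variations, 1.0 / n_variations)  # initial traffic allocation
for iteration in range(50):
    impressions = rng.multinomial(10_000, weights)   # traffic split by current weights
    clicks = rng.binomial(impressions, true_ctr)
    ctr = np.where(impressions > 0, clicks / np.maximum(impressions, 1), 0.0)

    # "Machine-learning model" placeholder: re-weight variations toward the
    # better-performing ones while keeping a floor of exploration traffic.
    new_weights = np.exp(200 * ctr)
    new_weights /= new_weights.sum()
    new_weights = 0.9 * new_weights + 0.1 / n_variations

    if np.abs(new_weights - weights).max() < 1e-3:   # termination criterion
        weights = new_weights
        break
    weights = new_weights

selected = np.flatnonzero(weights > 0.2)             # variations exceeding the threshold
print("selected variations:", selected, "weights:", weights[selected].round(3))
```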
-
Publication No.: US11917266B1
Publication Date: 2024-02-27
Application No.: US17955295
Filing Date: 2022-09-28
Applicant: Amazon Technologies, Inc.
Inventor: Shilpa Pundi Ananth , Sai Sree Harsha , Pooja Ashok Kumar , Yashal Shakti Kanungo , Sumit Negi , Brittney C. Gannon , Lauren K. Johnson
IPC: H04N21/218 , H04N21/222 , H04N21/235 , H04N21/488 , H04N21/6379 , H04N21/81 , H04N19/46 , G06V10/74 , H04N5/262
CPC classification number: H04N21/8153 , G06V10/761 , H04N5/2628 , H04N19/46 , H04N21/812
Abstract: Devices, systems, and methods are provided for generating and selecting video clips for inclusion in video sequences based on still frame images. A method may include encoding first embeddings for a first video including first images of an item at a first scene, the first embeddings indicative of features of the first scene; encoding second embeddings for a second video including second images of the item at a second scene, the second embeddings indicative of features of the second scene; encoding third embeddings for the first video, the third embeddings indicative of features of a first type of camera shot used for the first images; encoding fourth embeddings for the second video, the fourth embeddings indicative of features of a second type of camera shot used for the second images; and generating, based on the first, second, third, and fourth embeddings, a video sequence for the item.
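A minimal sketch of the clip-selection step this abstract describes: per-clip scene and shot-type embeddings are concatenated and matched against a desired shot progression to assemble the sequence. The random placeholder embeddings, the cosine-similarity scoring, and the two-slot "wide shot then close-up" progression are assumptions; the patent does not disclose the encoders or the scoring rule.

```python
# Minimal sketch of assembling a video sequence from per-clip scene and
# shot-type embeddings. Embeddings are random placeholders standing in for the
# encoder outputs described in the abstract.
import numpy as np

rng = np.random.default_rng(7)
dim = 64

# Pretend these came from the scene and shot-type encoders for two source videos.
clips = {
    "video1_clip": {"scene": rng.standard_normal(dim), "shot": rng.standard_normal(dim)},
    "video2_clip": {"scene": rng.standard_normal(dim), "shot": rng.standard_normal(dim)},
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Hypothetical query embeddings for the desired shot progression (e.g. wide, close-up).
target_slots = [rng.standard_normal(2 * dim) for _ in range(2)]

remaining = dict(clips)
sequence = []
for slot_query in target_slots:
    best_name = max(
        remaining,
        key=lambda name: cosine(
            np.concatenate([remaining[name]["scene"], remaining[name]["shot"]]), slot_query
        ),
    )
    sequence.append(best_name)
    remaining.pop(best_name)   # use each clip at most once

print("assembled sequence:", sequence)
```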
-
Publication No.: US10713821B1
Publication Date: 2020-07-14
Application No.: US16454829
Filing Date: 2019-06-27
Applicant: Amazon Technologies, Inc.
Inventor: Shiv Surya , Arijit Biswas , Sumit Negi , Amrith Rajagopal Setlur
Abstract: Techniques are generally described for context-aware text-to-image synthesis. First text data comprising a description of an object may be received. A recurrent neural network may determine first semantic representation data representing the first text data. A generator trained using a first generative adversarial network (GAN) may determine first image data representing the object using the first semantic representation data. An encoder of a second GAN may generate a first feature representation of the first image data. The first feature representation may be combined with a projection of the first semantic representation data. A decoder of the second GAN may generate second image data representing the first text data.
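The two-stage pipeline in this abstract (RNN text encoder, first-GAN generator, second-GAN encoder/decoder conditioned on a projection of the text encoding) can be sketched as a forward pass in PyTorch. The layer sizes and module choices (GRU, small conv/deconv stacks, 32x32 images) are assumptions, and the discriminators and adversarial training losses are omitted entirely.

```python
# Forward-pass skeleton of the described pipeline, with assumed architectures.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    def __init__(self, vocab=1000, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return h[-1]                                 # (batch, hidden) semantic representation

class Stage1Generator(nn.Module):                    # generator of the first GAN
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, 3 * 32 * 32), nn.Tanh())

    def forward(self, sem):
        return self.net(sem).view(-1, 3, 32, 32)     # coarse 32x32 image

class Stage2Refiner(nn.Module):                      # encoder/decoder of the second GAN
    def __init__(self, hidden=256, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 4, stride=2, padding=1), nn.ReLU())              # 32 -> 16
        self.project = nn.Linear(hidden, feat)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat, 3, 4, stride=2, padding=1), nn.Tanh()) # 16 -> 32

    def forward(self, image, sem):
        feats = self.encoder(image)                              # (B, feat, 16, 16)
        sem_map = self.project(sem)[:, :, None, None].expand_as(feats)
        return self.decoder(torch.cat([feats, sem_map], dim=1))  # refined image

tokens = torch.randint(0, 1000, (2, 12))             # dummy token ids for two captions
sem = TextEncoder()(tokens)
coarse = Stage1Generator()(sem)
refined = Stage2Refiner()(coarse, sem)
print(coarse.shape, refined.shape)                   # torch.Size([2, 3, 32, 32]) twice
```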
-
Publication No.: US12238390B1
Publication Date: 2025-02-25
Application No.: US18400569
Filing Date: 2023-12-29
Applicant: Amazon Technologies, Inc.
Inventor: Shilpa Pundi Ananth , Sai Sree Harsha , Pooja Ashok Kumar , Yashal Shakti Kanungo , Sumit Negi , Brittney C. Gannon , Lauren K. Johnson
IPC: H04N21/218 , G06V10/74 , H04N5/262 , H04N19/46 , H04N21/222 , H04N21/235 , H04N21/488 , H04N21/6379 , H04N21/81
Abstract: Devices, systems, and methods are provided for generating and selecting video clips for inclusion in video sequences based on still frame images. A method may include encoding first embeddings for a first video including first images of an item at a first scene, the first embeddings indicative of features of the first scene; encoding second embeddings for a second video including second images of the item at a second scene, the second embeddings indicative of features of the second scene; encoding third embeddings for the first video, the third embeddings indicative of features of a first type of camera shot used for the first images; encoding fourth embeddings for the second video, the fourth embeddings indicative of features of a second type of camera shot used for the second images; and generating, based on the first, second, third, and fourth embeddings, a video sequence for the item.
-