-
Publication No.: US11640634B2
Publication Date: 2023-05-02
Application No.: US16865572
Application Date: 2020-05-04
Applicant: ADOBE INC.
Inventor: Kumar Ayush , Ayush Chopra , Patel Utkarsh Govind , Balaji Krishnamurthy , Anirudh Singhal
IPC: G06N3/00 , G06N3/088 , G06N3/04 , G06K9/62 , G06Q30/0601
Abstract: Systems, methods, and computer storage media are disclosed for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., to fill in the blank for the partial outfit).
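The score-combination step described in the abstract can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function names, the linear weighting, and the toy candidate tuples are all assumptions.

```python
# Minimal sketch: combine a type/context score and a style score into a
# unified compatibility score, then pick the best-scoring candidate item.

def unified_score(type_context_score: float, style_score: float,
                  weight: float = 0.5) -> float:
    """Combine the two conditional compatibility scores (weighting assumed)."""
    return weight * type_context_score + (1 - weight) * style_score


def pick_best_candidate(candidates):
    """candidates: list of (item_id, type_context_score, style_score)."""
    return max(candidates, key=lambda c: unified_score(c[1], c[2]))[0]


# Toy candidates for a "fill in the blank" query on a partial outfit.
candidates = [("scarf_a", 0.8, 0.4), ("scarf_b", 0.6, 0.9)]
best = pick_best_candidate(candidates)  # "scarf_b" (0.75 vs. 0.60)
```

A learned model would of course produce the two scores; the sketch only shows how a unified score can rank candidates.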
-
Publication No.: US20200258276A1
Publication Date: 2020-08-13
Application No.: US16275261
Application Date: 2019-02-13
Applicant: ADOBE INC.
Inventor: Kumar Ayush , Harsh Vardhan Chopra
Abstract: The present invention enables the automatic generation and recommendation of embedded images. An embedded image includes a visual representation of a context-appropriate object embedded within a scene image. The context and aesthetic properties (e.g., the colors, textures, lighting, position, orientation, and size) of the visual representation of the object may be automatically varied to increase an associated objective compatibility score that is based on the context and aesthetics of the scene image. The scene image may depict a visual representation of a scene, e.g., a background scene. Thus, a scene image may be a background image that depicts a background and/or scene to automatically pair with the object. The object may be a three-dimensional (3D) physical or virtual object. The automatically generated embedded image may be a composite image that includes at least a partially optimized visual representation of a context-appropriate object composited within the scene image.
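The "automatically varied to increase an objective compatibility score" idea can be illustrated with a simple grid search over placement parameters. This is a sketch under stated assumptions: the parameter set (scale, rotation), the toy score function, and `search_variants` are invented for illustration and are not the patented method.

```python
from itertools import product


def search_variants(scales, rotations, score_fn):
    """Grid-search aesthetic variants of an embedded object and keep the
    placement with the highest compatibility score."""
    best, best_score = None, float("-inf")
    for scale, rotation in product(scales, rotations):
        score = score_fn(scale, rotation)
        if score > best_score:
            best, best_score = (scale, rotation), score
    return best, best_score


# Toy objective: prefer a scale near 1.0 and a rotation near 0 degrees.
score = lambda sc, rot: -abs(sc - 1.0) - abs(rot) / 90.0
best, _ = search_variants([0.5, 1.0, 1.5], [-45, 0, 45], score)  # (1.0, 0)
```

In practice the score would come from a learned model of scene context and aesthetics; the sketch shows only the outer optimization loop.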
-
Publication No.: US20190340649A1
Publication Date: 2019-11-07
Application No.: US15972815
Application Date: 2018-05-07
Applicant: Adobe Inc.
Inventor: Kumar Ayush , Gaurush Hiranandani
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating augmented reality representations of recommended products based on style compatibility with real-world surroundings. For example, the disclosed systems can identify a real-world object within a camera feed and can utilize a 2D-3D alignment algorithm to identify a three-dimensional model that matches the real-world object. In addition, the disclosed systems can utilize a style compatibility algorithm to generate recommended products based on style compatibility in relation to the identified three-dimensional model. The disclosed systems can further utilize a color compatibility algorithm to determine product textures which are color compatible with the real-world surroundings and generate augmented reality representations of recommended products to provide as an overlay of the real-world environment of the camera feed.
-
Publication No.: US11238093B2
Publication Date: 2022-02-01
Application No.: US16601773
Application Date: 2019-10-15
Applicant: ADOBE INC.
Inventor: Kumar Ayush , Harnish Lakhani , Atishay Jain
IPC: G06F16/732 , G06N3/08 , H04N19/59 , G06F16/74
Abstract: Systems and methods for content-based video retrieval are described. The systems and methods may break a video into multiple frames, generate a feature vector from the frames based on the temporal relationship between them, and then embed the feature vector into a vector space along with a vector representing a search query. In some embodiments, the video feature vector is converted into a text caption prior to the embedding. In other embodiments, the video feature vector and a sentence vector are each embedded into a common space using a joint video-sentence embedding model. Once the video and the search query are embedded into a common vector space, a distance between them may be calculated. After calculating the distance between the search query and a set of videos, the distances may be used to select a subset of the videos to present as the result of the search.
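The final ranking step, once query and videos share a common vector space, reduces to sorting by distance. A minimal sketch, assuming precomputed embeddings (the vectors and the choice of Euclidean distance are illustrative, not from the patent):

```python
import numpy as np


def rank_videos(query_vec: np.ndarray, video_vecs: dict) -> list:
    """Rank video IDs by Euclidean distance to the query in the shared
    embedding space (closest first)."""
    dists = {vid: float(np.linalg.norm(query_vec - vec))
             for vid, vec in video_vecs.items()}
    return sorted(dists, key=dists.get)


query = np.array([1.0, 0.0])
videos = {"v1": np.array([0.9, 0.1]), "v2": np.array([0.0, 1.0])}
ranking = rank_videos(query, videos)  # ["v1", "v2"]
```

The top of the ranking would then be returned as the search result; any distance or similarity measure defined on the common space could be substituted.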
-
Publication No.: US10789622B2
Publication Date: 2020-09-29
Application No.: US15972815
Application Date: 2018-05-07
Applicant: Adobe Inc.
Inventor: Kumar Ayush , Gaurush Hiranandani
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating augmented reality representations of recommended products based on style compatibility with real-world surroundings. For example, the disclosed systems can identify a real-world object within a camera feed and can utilize a 2D-3D alignment algorithm to identify a three-dimensional model that matches the real-world object. In addition, the disclosed systems can utilize a style compatibility algorithm to generate recommended products based on style compatibility in relation to the identified three-dimensional model. The disclosed systems can further utilize a color compatibility algorithm to determine product textures which are color compatible with the real-world surroundings and generate augmented reality representations of recommended products to provide as an overlay of the real-world environment of the camera feed.
-
Publication No.: US10726629B2
Publication Date: 2020-07-28
Application No.: US16189638
Application Date: 2018-11-13
Applicant: Adobe Inc.
Inventor: Gaurush Hiranandani , Chinnaobireddy Varsha , Sai Varun Reddy Maram , Kumar Ayush , Atanu Ranjan Sinha
Abstract: Certain embodiments involve enhancing personalization of a virtual-commerce environment by identifying an augmented-reality visual of the virtual-commerce environment. For example, a system obtains a data set that indicates a plurality of augmented-reality visuals generated in a virtual-commerce environment and provided for view by a user. The system obtains data indicating a triggering user input that corresponds to a predetermined user input providable by the user as the user views an augmented-reality visual of the plurality of augmented-reality visuals. The system obtains data indicating a user input provided by the user. The system compares the user input to the triggering user input to determine a correspondence (e.g., a similarity) between the user input and the triggering user input. The system identifies a particular augmented-reality visual of the plurality of augmented-reality visuals that is viewed by the user based on the correspondence and stores the identified augmented-reality visual.
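The comparison between a user input and the predetermined triggering input can be sketched as a field-level similarity check. Everything here is an assumption for illustration: the dict representation of inputs, the field names, and the agreement threshold are not specified by the abstract.

```python
def matches_trigger(user_input: dict, trigger: dict,
                    threshold: float = 0.8) -> bool:
    """Declare a correspondence when enough fields of the user's input
    agree with the predetermined triggering input."""
    agree = sum(1 for key in trigger if user_input.get(key) == trigger[key])
    return agree / len(trigger) >= threshold


# Hypothetical triggering input for "user is inspecting this AR visual".
trigger = {"gesture": "tap", "dwell": "long", "zoom": "in"}
hit = matches_trigger({"gesture": "tap", "dwell": "long", "zoom": "in"}, trigger)
```

When the correspondence holds, the system would store the currently viewed augmented-reality visual.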
-
Publication No.: US11663463B2
Publication Date: 2023-05-30
Application No.: US16507300
Application Date: 2019-07-10
Applicant: Adobe Inc.
Inventor: Kumar Ayush , Atishay Jain
IPC: G06N3/08 , G06N3/082 , G06V30/262 , G06F18/213 , G06V10/46 , G06V10/82 , G06V10/44
CPC classification number: G06N3/08 , G06F18/213 , G06N3/082 , G06V10/454 , G06V10/464 , G06V10/82 , G06V30/274
Abstract: A location-sensitive saliency prediction neural network generates location-sensitive saliency data for an image. The location-sensitive saliency prediction neural network includes, at least, a filter module, an inception module, and a location-bias module. The filter module extracts visual features at multiple contextual levels, and generates a feature map of the image. The inception module generates a multi-scale semantic structure, based on multiple scales of semantic content depicted in the image. In some cases, the inception module performs parallel analysis of the feature map, such as by multiple parallel layers, to determine the multiple scales of semantic content. The location-bias module generates a location-sensitive saliency map of location-dependent context of the image based on the multi-scale semantic structure and on a bias map. In some cases, the bias map indicates location-specific weights for one or more regions of the image.
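The role of the bias map ("location-specific weights for one or more regions") can be sketched as an element-wise weighting of a content-based saliency map. This is a toy illustration, not the network described in the patent: the renormalization and the 2x2 arrays are assumptions.

```python
import numpy as np


def location_biased_saliency(content_saliency: np.ndarray,
                             bias_map: np.ndarray) -> np.ndarray:
    """Apply location-specific weights to a content saliency map and
    renormalize so the peak value is 1."""
    weighted = content_saliency * bias_map
    return weighted / weighted.max()


# Toy 2x2 maps: the bias halves the weight of the right-hand column.
saliency = np.array([[0.2, 0.8], [0.4, 0.6]])
bias = np.array([[1.0, 0.5], [1.0, 0.5]])
out = location_biased_saliency(saliency, bias)
```

In the patented network this weighting is produced by a learned location-bias module rather than a fixed array; the sketch only conveys the per-location reweighting idea.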
-
Publication No.: US11158100B2
Publication Date: 2021-10-26
Application No.: US16275261
Application Date: 2019-02-13
Applicant: ADOBE INC.
Inventor: Kumar Ayush , Harsh Vardhan Chopra
Abstract: The present invention enables the automatic generation and recommendation of embedded images. An embedded image includes a visual representation of a context-appropriate object embedded within a scene image. The context and aesthetic properties (e.g., the colors, textures, lighting, position, orientation, and size) of the visual representation of the object may be automatically varied to increase an associated objective compatibility score that is based on the context and aesthetics of the scene image. The scene image may depict a visual representation of a scene, e.g., a background scene. Thus, a scene image may be a background image that depicts a background and/or scene to automatically pair with the object. The object may be a three-dimensional (3D) physical or virtual object. The automatically generated embedded image may be a composite image that includes at least a partially optimized visual representation of a context-appropriate object composited within the scene image.
-
Publication No.: US20210142539A1
Publication Date: 2021-05-13
Application No.: US16679165
Application Date: 2019-11-09
Applicant: Adobe Inc.
Inventor: Kumar Ayush , Surgan Jandial , Abhijeet Kumar , Mayur Hemani , Balaji Krishnamurthy , Ayush Chopra
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
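The final compositing step, combining a warped product image, a model image, and a segmentation mask, can be sketched as a masked pixel replacement. The array shapes and the boolean mask representation are assumptions for illustration; the patent's neural warping and mask-correction stages are not shown.

```python
import numpy as np


def composite_try_on(model_img: np.ndarray, warped_product: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Replace the masked region of the model image with the warped
    product image. mask is a boolean HxW array; images are HxWx3."""
    return np.where(mask[..., None], warped_product, model_img)


# Toy 2x2 images: black model, white product, diagonal mask.
model = np.zeros((2, 2, 3), dtype=np.uint8)
product = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
out = composite_try_on(model, product, mask)
```

In the described pipeline the mask would be the corrected segmentation mask and the product image would already be warped to fit the model's pose.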
-
Publication No.: US10984467B2
Publication Date: 2021-04-20
Application No.: US16281806
Application Date: 2019-02-21
Applicant: Adobe Inc.
Inventor: Kumar Ayush , Harnish Lakhani , Atishay Jain
Abstract: The technology described herein is directed to object compatibility-based identification and replacement of objects in digital representations of real-world environments for contextualized content delivery. In some implementations, an object compatibility and retargeting service that selects and analyzes a viewpoint (received from a user's client device) to identify objects that are the least compatible with other surrounding real-world objects in terms of style compatibility with the surrounding real-world objects and color compatibility with the background is described. The object compatibility and retargeting service also generates recommendations for replacing the least compatible object with objects/products having more style/design compatibility with the surrounding real-world objects and color compatibility with the background. Furthermore, the object compatibility and retargeting service can create personalized catalogues with the recommended objects/products embedded in the viewpoint in place of the least compatible object with similar pose and scale for retargeting the user.
-