Center-biased machine learning techniques to determine saliency in digital images

    Publication Number: US11663463B2

    Publication Date: 2023-05-30

    Application Number: US16507300

    Application Date: 2019-07-10

    Applicant: Adobe Inc.

    Abstract: A location-sensitive saliency prediction neural network generates location-sensitive saliency data for an image. The location-sensitive saliency prediction neural network includes, at least, a filter module, an inception module, and a location-bias module. The filter module extracts visual features at multiple contextual levels and generates a feature map of the image. The inception module generates a multi-scale semantic structure based on multiple scales of semantic content depicted in the image. In some cases, the inception module performs parallel analysis of the feature map, such as with multiple parallel layers, to determine the multiple scales of semantic content. The location-bias module generates a location-sensitive saliency map of location-dependent context of the image based on the multi-scale semantic structure and on a bias map. In some cases, the bias map indicates location-specific weights for one or more regions of the image.
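
    The location-bias mechanism the abstract describes can be sketched as follows. The function names, the Gaussian center prior, and the min-max normalization are illustrative assumptions, not the patented implementation:

    ```python
    import numpy as np

    def center_bias_map(h: int, w: int, sigma: float = 0.3) -> np.ndarray:
        """Gaussian bump centered on the image, a common center-bias prior
        (hypothetical stand-in for the patent's learned bias map)."""
        ys = np.linspace(-1.0, 1.0, h)[:, None]
        xs = np.linspace(-1.0, 1.0, w)[None, :]
        return np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))

    def location_biased_saliency(feature_map: np.ndarray,
                                 bias_map: np.ndarray) -> np.ndarray:
        """Weight per-pixel saliency scores by location-specific weights.

        feature_map: (H, W) raw saliency scores from the semantic branch.
        bias_map: (H, W) location-specific weights.
        Returns an (H, W) saliency map rescaled to [0, 1].
        """
        biased = feature_map * bias_map   # element-wise location weighting
        biased = biased - biased.min()
        rng = biased.max()
        return biased / rng if rng > 0 else np.zeros_like(biased)
    ```

    With a uniform feature map, the resulting saliency simply peaks at the image center, which is the behavior a center-bias prior is meant to encode.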

    Compatibility-based identification of incompatible objects in digital representations of real-world environments

    Publication Number: US10984467B2

    Publication Date: 2021-04-20

    Application Number: US16281806

    Application Date: 2019-02-21

    Applicant: Adobe Inc.

    Abstract: The technology described herein is directed to object compatibility-based identification and replacement of objects in digital representations of real-world environments for contextualized content delivery. In some implementations, an object compatibility and retargeting service selects and analyzes a viewpoint (received from a user's client device) to identify the objects least compatible with the surrounding real-world objects, in terms of both style compatibility with those objects and color compatibility with the background. The service also generates recommendations for replacing the least compatible object with objects/products that have greater style/design compatibility with the surrounding real-world objects and color compatibility with the background. Furthermore, the service can create personalized catalogues in which the recommended objects/products are embedded in the viewpoint, in place of the least compatible object and with similar pose and scale, for retargeting the user.
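
    A minimal sketch of the least-compatible-object selection the abstract describes, assuming objects and the background are represented by feature embeddings and compatibility is measured by cosine similarity (the embeddings, weights, and scoring formula are assumptions for illustration):

    ```python
    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def least_compatible(style_embs, color_embs, bg_color_emb,
                         w_style: float = 0.5, w_color: float = 0.5) -> int:
        """Return the index of the object least compatible with its scene.

        style_embs: per-object style embeddings.
        color_embs: per-object color embeddings.
        bg_color_emb: background color embedding.
        """
        scores = []
        for i in range(len(style_embs)):
            # style compatibility: mean similarity to the other objects
            others = [style_embs[j] for j in range(len(style_embs)) if j != i]
            style = np.mean([cosine(style_embs[i], o) for o in others]) if others else 1.0
            # color compatibility: similarity to the background
            color = cosine(color_embs[i], bg_color_emb)
            scores.append(w_style * style + w_color * color)
        return int(np.argmin(scores))
    ```

    A replacement candidate would then be ranked by the same score, keeping only products whose combined style/color compatibility exceeds the removed object's.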

    MACHINE LEARNING PREDICTIONS OF RECOMMENDED PRODUCTS IN AUGMENTED REALITY ENVIRONMENTS

    Publication Number: US20210133850A1

    Publication Date: 2021-05-06

    Application Number: US16675606

    Application Date: 2019-11-06

    Applicant: ADOBE INC.

    Abstract: Techniques for providing a machine learning prediction of a recommended product to a user using augmented reality include identifying at least one real-world object and a virtual product in an AR viewpoint of the user. The AR viewpoint includes a camera image of the real-world object(s) and an image of the virtual product. The image of the virtual product is inserted into the camera image of the real-world object. A candidate product is predicted from a set of recommendation images using a machine learning algorithm based on, for example, a type of the virtual product, to provide a recommendation that includes both the virtual product and the candidate product. In an embodiment, the recommendation can include different types of products that are complementary to each other. An image of the selected candidate product is inserted into the AR viewpoint along with the image of the virtual product.
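
    The type-conditioned candidate selection could be sketched as below. The complement pairing table, the feature embeddings, and the cosine ranking are hypothetical stand-ins for the patent's machine learning algorithm:

    ```python
    import numpy as np

    def recommend_complementary(virtual_type: str,
                                virtual_emb: np.ndarray,
                                candidates,
                                complement_map: dict):
        """Pick a candidate product complementary to the placed virtual product.

        candidates: list of (product_type, feature_embedding) pairs drawn
            from the recommendation images.
        complement_map: hypothetical pairing of product types,
            e.g. {"sofa": "coffee_table"}.
        Returns the index of the best candidate, or None if no candidate
        has the complementary type.
        """
        target = complement_map.get(virtual_type)
        pool = [(i, emb) for i, (t, emb) in enumerate(candidates) if t == target]
        if not pool:
            return None

        def cos(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # rank complementary candidates by similarity to the virtual product
        return max(pool, key=lambda p: cos(virtual_emb, p[1]))[0]
    ```

    The selected candidate's image would then be composited into the AR viewpoint alongside the virtual product, as the abstract describes.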

    COMPATIBILITY-BASED IDENTIFICATION OF INCOMPATIBLE OBJECTS IN DIGITAL REPRESENTATIONS OF REAL-WORLD ENVIRONMENTS

    Publication Number: US20200273090A1

    Publication Date: 2020-08-27

    Application Number: US16281806

    Application Date: 2019-02-21

    Applicant: Adobe Inc.

    Abstract: The technology described herein is directed to object compatibility-based identification and replacement of objects in digital representations of real-world environments for contextualized content delivery. In some implementations, an object compatibility and retargeting service selects and analyzes a viewpoint (received from a user's client device) to identify the objects least compatible with the surrounding real-world objects, in terms of both style compatibility with those objects and color compatibility with the background. The service also generates recommendations for replacing the least compatible object with objects/products that have greater style/design compatibility with the surrounding real-world objects and color compatibility with the background. Furthermore, the service can create personalized catalogues in which the recommended objects/products are embedded in the viewpoint, in place of the least compatible object and with similar pose and scale, for retargeting the user.

    Video retrieval based on encoding temporal relationships among video frames

    Publication Number: US11238093B2

    Publication Date: 2022-02-01

    Application Number: US16601773

    Application Date: 2019-10-15

    Applicant: ADOBE INC.

    Abstract: Systems and methods for content-based video retrieval are described. The systems and methods may break a video into multiple frames, generate a feature vector from the frames based on the temporal relationship between them, and then embed the feature vector into a vector space along with a vector representing a search query. In some embodiments, the video feature vector is converted into a text caption prior to the embedding. In other embodiments, the video feature vector and a sentence vector are each embedded into a common space using a joint video-sentence embedding model. Once the video and the search query are embedded into a common vector space, a distance between them may be calculated. After calculating the distance between the search query and each video in a set, the distances may be used to select a subset of the videos to present as the result of the search.
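
    Once query and videos share an embedding space, the final retrieval step reduces to a nearest-neighbor ranking, which can be sketched as follows (the embeddings themselves, which the patent derives from temporal frame relationships, are taken as given):

    ```python
    import numpy as np

    def rank_videos(query_emb: np.ndarray, video_embs, k: int = 2):
        """Rank videos by Euclidean distance to the query in the shared
        embedding space and return the indices of the k nearest."""
        dists = [float(np.linalg.norm(query_emb - v)) for v in video_embs]
        order = np.argsort(dists)          # ascending: nearest first
        return [int(i) for i in order[:k]]
    ```

    The returned subset of nearest videos is what would be presented as the search result.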

    CENTER-BIASED MACHINE LEARNING TECHNIQUES TO DETERMINE SALIENCY IN DIGITAL IMAGES

    Publication Number: US20210012201A1

    Publication Date: 2021-01-14

    Application Number: US16507300

    Application Date: 2019-07-10

    Applicant: Adobe Inc.

    Abstract: A location-sensitive saliency prediction neural network generates location-sensitive saliency data for an image. The location-sensitive saliency prediction neural network includes, at least, a filter module, an inception module, and a location-bias module. The filter module extracts visual features at multiple contextual levels and generates a feature map of the image. The inception module generates a multi-scale semantic structure based on multiple scales of semantic content depicted in the image. In some cases, the inception module performs parallel analysis of the feature map, such as with multiple parallel layers, to determine the multiple scales of semantic content. The location-bias module generates a location-sensitive saliency map of location-dependent context of the image based on the multi-scale semantic structure and on a bias map. In some cases, the bias map indicates location-specific weights for one or more regions of the image.
