Generating and applying editing presets

    Publication No.: US12197713B2

    Publication Date: 2025-01-14

    Application No.: US17592341

    Filing Date: 2022-02-03

    Applicant: Adobe Inc.

    Abstract: In implementations of systems for generating and applying editing presets, a computing device implements a preset system to detect objects depicted in a digital image that is displayed in a user interface of an application for editing digital content. Input data is received describing an edited region of the digital image and properties of an editing operation performed in the edited region. The preset system identifies a particular detected object of the detected objects based on a bounding box of the particular detected object and an area of the edited region. An additional digital image is edited by applying the properties of the editing operation to a detected object that is depicted in the additional digital image based on a classification of the detected object and a classification of the particular detected object.
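    The matching and transfer step can be pictured with a small Python sketch: it assumes axis-aligned bounding boxes, a plain dictionary of edit properties, and hypothetical helper names (Detection, build_preset, apply_preset); it illustrates the idea, not the patented implementation.

        # Hypothetical helpers illustrating region-to-object matching and preset transfer.
        from dataclasses import dataclass

        @dataclass
        class Detection:
            label: str      # object classification, e.g. "sky" or "person"
            box: tuple      # (x0, y0, x1, y1) bounding box in pixels

        def overlap_ratio(edit_box, obj_box):
            """Fraction of the edited region that falls inside an object's bounding box."""
            ex0, ey0, ex1, ey1 = edit_box
            ox0, oy0, ox1, oy1 = obj_box
            iw = max(0, min(ex1, ox1) - max(ex0, ox0))
            ih = max(0, min(ey1, oy1) - max(ey0, oy0))
            edit_area = max(1e-9, (ex1 - ex0) * (ey1 - ey0))
            return (iw * ih) / edit_area

        def build_preset(detections, edit_box, edit_properties):
            """Associate the editing operation with the detected object it most overlaps."""
            best = max(detections, key=lambda d: overlap_ratio(edit_box, d.box))
            return {"label": best.label, "properties": edit_properties}

        def apply_preset(preset, other_detections):
            """Pair same-class objects in another image with the stored edit properties."""
            return [(d, preset["properties"]) for d in other_detections
                    if d.label == preset["label"]]

        # An exposure edit drawn over the sky transfers to the sky in a second image.
        image_a = [Detection("sky", (0, 0, 640, 200)), Detection("person", (200, 150, 320, 470))]
        image_b = [Detection("sky", (0, 0, 800, 260))]
        preset = build_preset(image_a, edit_box=(10, 10, 600, 180), edit_properties={"exposure": 0.7})
        print(apply_preset(preset, image_b))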

    Graphics Processing Unit Instancing Control

    Publication No.: US20250014258A1

    Publication Date: 2025-01-09

    Application No.: US18890428

    Filing Date: 2024-09-19

    Applicant: Adobe Inc.

    Inventor: Harish Kumar

    Abstract: Graphics processing unit instancing control techniques are described that overcome conventional challenges to expand functionality made available via a graphics processing unit. In one example, these techniques support ordering of primitives within respective instances of a single draw call made to a graphics processing unit. This is performed by ordering primitives within respective instances that correspond to polygons for rendering. The ordering of the primitives overcomes limitations of conventional techniques and reduces visual artifacts through support of correct overlaps and z-ordering of instances.
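    The ordering idea can be illustrated on the CPU with a short Python sketch that expands a single instanced draw into an explicit (instance, primitive) emission order; the depth-based sort and the helper name emission_order are assumptions for illustration only, not a GPU implementation of the described technique.

        # CPU-side illustration only; emission_order is a hypothetical helper, not a GPU API.
        def emission_order(num_primitives, instance_depths):
            """Order (instance, primitive) pairs so farther instances are emitted first
            and each instance's primitives keep their original order."""
            back_to_front = sorted(range(len(instance_depths)),
                                   key=lambda i: instance_depths[i], reverse=True)
            order = []
            for inst in back_to_front:
                for prim in range(num_primitives):   # primitives stay ordered within the instance
                    order.append((inst, prim))
            return order

        # Three instances of a two-primitive polygon; instance 1 is closest to the camera.
        print(emission_order(num_primitives=2, instance_depths=[0.8, 0.2, 0.5]))
        # -> deepest instance 0 first, then 2, then 1, with primitives ordered inside each.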

    EFFICIENT OBJECT SEGMENTATION
    Invention Application

    Publication No.: US20250005884A1

    Publication Date: 2025-01-02

    Application No.: US18215551

    Filing Date: 2023-06-28

    Applicant: Adobe Inc.

    Abstract: In implementations of systems for efficient object segmentation, a computing device implements a segment system to receive a user input specifying coordinates of a digital image. The segment system computes receptive fields of a machine learning model based on the coordinates of the digital image. The machine learning model is trained on training data to generate segment masks for objects depicted in digital images. The segment system processes a portion of a feature map of the digital image using the machine learning model based on the receptive fields. A segment mask is generated for an object depicted in the digital image based on processing the portion of the feature map of the digital image using the machine learning model.
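    A minimal numpy sketch of the cropping idea follows; the stride, the window size, and the helper name crop_around_click are illustrative assumptions standing in for the model's actual receptive-field computation.

        # Illustrative numbers; crop_around_click is a hypothetical helper.
        import numpy as np

        def crop_around_click(feature_map, click_xy, stride=16, rf_cells=6):
            """feature_map: (C, H, W) features at `stride` pixels per cell; click_xy: (x, y)
            in image pixels; rf_cells: half-width of the processed window in feature cells,
            standing in for the model's receptive field."""
            _, H, W = feature_map.shape
            cx, cy = click_xy[0] // stride, click_xy[1] // stride
            x0, x1 = max(0, cx - rf_cells), min(W, cx + rf_cells + 1)
            y0, y1 = max(0, cy - rf_cells), min(H, cy + rf_cells + 1)
            return feature_map[:, y0:y1, x0:x1], (y0, y1, x0, x1)

        features = np.random.rand(64, 64, 64)               # e.g. a 1024x1024 image at stride 16
        patch, window = crop_around_click(features, click_xy=(500, 300))
        print(patch.shape, window)                           # only this window is run through the mask head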

    Adversarially robust visual fingerprinting and image provenance models

    Publication No.: US12183056B2

    Publication Date: 2024-12-31

    Application No.: US17573041

    Filing Date: 2022-01-11

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize a deep visual fingerprinting model with parameters learned from robust contrastive learning to identify matching digital images and image provenance information. For example, the disclosed systems utilize an efficient learning procedure that leverages training on bounded adversarial examples to more accurately identify digital images (including adversarial images) with a small computational overhead. To illustrate, the disclosed systems utilize a first objective function that iteratively identifies augmentations to increase contrastive loss. Moreover, the disclosed systems utilize a second objective function that iteratively learns parameters of a deep visual fingerprinting model to reduce the contrastive loss. With these learned parameters, the disclosed systems utilize the deep visual fingerprinting model to generate visual fingerprints for digital images, retrieve and match digital images, and provide digital image provenance information.
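    The two nested objectives can be sketched in PyTorch as a small min-max training step; the toy encoder, the NT-Xent-style loss, and all hyperparameters below are stand-ins, not the networks or settings disclosed here.

        # Toy encoder and loss; not the disclosed architecture or hyperparameters.
        import torch
        import torch.nn.functional as F

        encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
        opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

        def contrastive_loss(za, zb, tau=0.1):
            za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
            logits = za @ zb.t() / tau                                # pairwise similarities
            return F.cross_entropy(logits, torch.arange(za.size(0)))  # positives on the diagonal

        def train_step(view_a, view_b, eps=4 / 255, inner_steps=3):
            delta = torch.zeros_like(view_b, requires_grad=True)
            for _ in range(inner_steps):      # inner maximization: perturb the augmented view
                loss = contrastive_loss(encoder(view_a), encoder(view_b + delta))
                grad, = torch.autograd.grad(loss, delta)
                delta = (delta + eps * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
            opt.zero_grad()                   # outer minimization: update encoder parameters
            loss = contrastive_loss(encoder(view_a), encoder(view_b + delta.detach()))
            loss.backward()
            opt.step()
            return loss.item()

        view_a, view_b = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)  # two augmented views
        print(train_step(view_a, view_b))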

    Multi-task equidistant embedding
    Invention Grant

    Publication No.: US12182713B2

    Publication Date: 2024-12-31

    Application No.: US16203263

    Filing Date: 2018-11-28

    Applicant: Adobe Inc.

    Abstract: Systems and techniques for multi-task equidistant embedding are described that process categorical feature data to explore feature interactions. A digital analytics system enforces an equidistant relationship among features within a category while extracting high-order feature interactions by punishing both positive correlations and negative correlations among low-dimensional representations of different features. By enforcing an equidistant embedding, information is retained and accuracy is increased while higher order feature interactions are determined. Further, the digital analytics system shares knowledge among different tasks by connecting a shared network representation common to multiple tasks with exclusive network representations specific to particular tasks.
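    One way to read the "punishing both positive correlations and negative correlations" idea is as a penalty on squared pairwise correlations of the feature embeddings, which drives unit-norm embeddings toward equal pairwise distances; the sketch below encodes that reading and is not the patented objective.

        # One possible reading of the regularizer; equidistant_penalty is a hypothetical name.
        import numpy as np

        def equidistant_penalty(embeddings):
            """embeddings: (num_features, dim) low-dimensional representations of one category."""
            e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
            sim = e @ e.T                                    # pairwise cosine similarities
            off_diag = sim - np.diag(np.diag(sim))
            return np.sum(off_diag ** 2) / 2                 # squaring punishes + and - correlations alike

        rng = np.random.default_rng(0)
        emb = rng.normal(size=(5, 8))                        # five features of one category, dim 8
        print(equidistant_penalty(emb))                      # added to the task loss as a regularizer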

    GENERATING AND COMPOSITING HAIR PIXELS USING GENERATIVE NEURAL NETWORKS

    Publication No.: US20240428482A1

    Publication Date: 2024-12-26

    Application No.: US18338964

    Filing Date: 2023-06-21

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating and compositing pixels of a digital image that depict hair of an individual using generative neural networks. In some embodiments, the disclosed systems receive a modification to a face crop enclosing a face depicted within a digital image. In some cases, the disclosed systems determine, from the modification, modified hair pixels within the face crop of the digital image and unmodified hair pixels outside of the face crop of the digital image. The disclosed systems generate, for the unmodified hair pixels outside of the face crop, replacement hair pixels that resemble the modified hair pixels utilizing a generative neural network. Additionally, the disclosed systems generate a modified digital image by replacing the unmodified hair pixels outside of the face crop with the replacement hair pixels.
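    The compositing step alone can be sketched in numpy as below; the generative network that produces the replacement hair pixels is reduced to a placeholder array, and composite_hair is a hypothetical helper name.

        # composite_hair is a hypothetical helper; the replacement pixels come from the network in practice.
        import numpy as np

        def composite_hair(image, hair_mask, crop_box, replacement):
            """image: (H, W, 3); hair_mask: (H, W) bool; crop_box: (y0, y1, x0, x1);
            replacement: (H, W, 3) generated hair pixels."""
            y0, y1, x0, x1 = crop_box
            inside_crop = np.zeros_like(hair_mask)
            inside_crop[y0:y1, x0:x1] = True
            outside_hair = hair_mask & ~inside_crop          # unmodified hair outside the face crop
            out = image.copy()
            out[outside_hair] = replacement[outside_hair]    # swap in the generated pixels
            return out

        H, W = 128, 128
        image = np.zeros((H, W, 3))
        hair_mask = np.zeros((H, W), dtype=bool)
        hair_mask[20:90, 30:100] = True
        generated = np.ones((H, W, 3))                       # stand-in for the network output
        print(composite_hair(image, hair_mask, (40, 100, 40, 100), generated).sum())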

    CONTEXTUAL QUERY GENERATION
    Invention Application

    Publication No.: US20240427998A1

    Publication Date: 2024-12-26

    Application No.: US18339694

    Filing Date: 2023-06-22

    Applicant: Adobe Inc.

    Abstract: Contextual query generation techniques are described that enable generation of a contextual query for output to a question-answering (QA) model. A content processing system, for instance, configures a language model using in-context learning to generate queries based on semantic contexts of input documents, e.g., based on one or more linguistic cues from text of the input documents. The content processing system receives an input that includes a document having text and a reference query. The content processing system leverages the language model to generate a contextual query based on a semantic context of the text of the document and the reference query. The content processing system then outputs the contextual query and the document to a QA model. Using the QA model, the content processing system generates a response as an answer to the contextual query based on the contextual query and the document.
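    The described flow maps naturally onto a small pipeline sketch in which the language model and QA model are injected as callables; the prompt wording, the in-context example, and the dummy models are assumptions for illustration.

        # Prompt wording, example, and dummy models are assumptions; any LLM / QA model could be injected.
        FEW_SHOT = ("Document: The parser caches tokens per file.\n"
                    "Reference query: how fast?\n"
                    "Contextual query: How fast is the parser when token caching is enabled?\n\n")

        def generate_contextual_query(language_model, document_text, reference_query):
            prompt = (FEW_SHOT +                             # in-context examples condition the LLM
                      f"Document: {document_text}\n"
                      f"Reference query: {reference_query}\n"
                      "Contextual query:")
            return language_model(prompt).strip()

        def answer(qa_model, language_model, document_text, reference_query):
            contextual_query = generate_contextual_query(language_model, document_text, reference_query)
            return qa_model(question=contextual_query, context=document_text)

        fake_llm = lambda prompt: " What does the cache layer store between requests?"
        fake_qa = lambda question, context: {"question": question, "answer": context[:40]}
        print(answer(fake_qa, fake_llm, "The cache layer stores session blobs.", "what is stored?"))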

    DESIGN COMPOSITING USING IMAGE HARMONIZATION

    Publication No.: US20240420394A1

    Publication Date: 2024-12-19

    Application No.: US18334610

    Filing Date: 2023-06-14

    Applicant: ADOBE INC.

    Abstract: Systems and methods are provided for image editing, and more particularly, for harmonizing background images with text. Embodiments of the present disclosure obtain an image including text and a region overlapping the text. In some aspects, the text includes a first color. Embodiments then select a second color that contrasts with the first color, and generate a modified image including the text and a modified region using a machine learning model that takes the image and the second color as input. The modified image is generated conditionally, so as to include the second color in a region corresponding to the text.
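    The color-selection step can be sketched by maximizing a WCAG-style contrast ratio over a small palette, as below; the conditional generation of the modified region is left to the machine learning model described in the abstract and is not shown.

        # Only the contrasting-color choice is shown; pick_contrasting_color is a hypothetical helper.
        def relative_luminance(rgb):
            def channel(c):
                c = c / 255.0
                return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
            r, g, b = (channel(c) for c in rgb)
            return 0.2126 * r + 0.7152 * g + 0.0722 * b

        def contrast_ratio(rgb_a, rgb_b):
            la, lb = relative_luminance(rgb_a), relative_luminance(rgb_b)
            return (max(la, lb) + 0.05) / (min(la, lb) + 0.05)

        def pick_contrasting_color(text_color, palette):
            return max(palette, key=lambda c: contrast_ratio(text_color, c))

        palette = [(255, 255, 255), (0, 0, 0), (30, 60, 160), (240, 200, 40)]
        print(pick_contrasting_color(text_color=(250, 250, 250), palette=palette))  # -> (0, 0, 0)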

    USING GENERATIVE ARTIFICIAL INTELLIGENCE TO OPTIMIZE PRODUCT SEARCH QUERIES

    Publication No.: US20240420205A1

    Publication Date: 2024-12-19

    Application No.: US18336815

    Filing Date: 2023-06-16

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for using generative AI to optimize product search queries. In embodiments described herein, product descriptions and product images for a plurality of products are obtained. A multi-modal style classification model classifies each product into a corresponding style of a plurality of styles based on the product's product description and product image. Relationships of each product to other products in the plurality of products are stored in a knowledge graph based on the corresponding style of each product and the corresponding product description of each product. An image is generated by a text-to-image diffusion model with a set of products of the plurality of products based on the relationships of each product of the plurality of products to other products in the plurality of products.
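    A rough sketch of the knowledge-graph step: products are linked when an (assumed) multi-modal classifier assigns them the same style, and a related set is pulled from the graph to build a prompt for the text-to-image model; the classifier and the diffusion model are placeholders, not the models described in the application.

        # The classifier and diffusion model are placeholders; only the graph construction is sketched.
        from collections import defaultdict
        from itertools import combinations

        def build_style_graph(products, classify_style):
            """products: list of dicts with 'name', 'description', 'image'."""
            styles = {p["name"]: classify_style(p["description"], p["image"]) for p in products}
            graph = defaultdict(set)
            for a, b in combinations(products, 2):
                if styles[a["name"]] == styles[b["name"]]:   # same-style relationship
                    graph[a["name"]].add(b["name"])
                    graph[b["name"]].add(a["name"])
            return graph, styles

        def related_set(graph, seed, k=3):
            return [seed] + sorted(graph[seed])[:k]

        fake_classifier = lambda desc, img: "mid-century" if "walnut" in desc else "industrial"
        catalog = [{"name": "walnut desk", "description": "walnut top", "image": None},
                   {"name": "walnut chair", "description": "walnut legs", "image": None},
                   {"name": "steel lamp", "description": "raw steel", "image": None}]
        graph, styles = build_style_graph(catalog, fake_classifier)
        prompt = "a styled room featuring " + ", ".join(related_set(graph, "walnut desk"))
        print(prompt)                                        # this prompt would condition the diffusion model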

    Learning to Personalize Vision-Language Models through Meta-Personalization

    Publication No.: US20240419726A1

    Publication Date: 2024-12-19

    Application No.: US18210535

    Filing Date: 2023-06-15

    Applicant: Adobe Inc.

    Abstract: Techniques for learning to personalize vision-language models through meta-personalization are described. In one embodiment, one or more processing devices lock a pre-trained vision-language model (VLM) during a training phase. The processing devices train the pre-trained VLM to augment a text encoder of the pre-trained VLM with a set of general named video instances to form a meta-personalized VLM, the meta-personalized VLM to include global category features. The processing devices test the meta-personalized VLM to adapt the text encoder with a set of personal named video instances to form a personal VLM, the personal VLM comprising the global category features personalized with a set of personal instance weights to form a personal instance token associated with the user. Other embodiments are described and claimed.
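    A hedged PyTorch sketch of the final adaptation step follows: the VLM and the learned global category features stay frozen, and only a small vector of personal instance weights is fit so that their combination (the personal instance token) matches embeddings of the user's named videos; the shapes, the loss, and the optimizer are illustrative assumptions.

        # Shapes, loss, and optimizer are illustrative assumptions; everything but the weights is frozen.
        import torch
        import torch.nn.functional as F

        dim, num_categories = 64, 12
        global_category_features = torch.randn(num_categories, dim)          # frozen after meta-training
        personal_weights = torch.zeros(num_categories, requires_grad=True)   # the only trainable part
        opt = torch.optim.Adam([personal_weights], lr=0.05)

        personal_videos = F.normalize(torch.randn(5, dim), dim=1)            # embeddings from the frozen VLM

        for _ in range(100):
            token = torch.softmax(personal_weights, dim=0) @ global_category_features
            token = F.normalize(token, dim=0)                                # the personal instance token
            loss = 1 - (personal_videos @ token).mean()                      # pull it toward the user's videos
            opt.zero_grad()
            loss.backward()
            opt.step()

        print(float(loss))   # the learned token can then stand in for the user in text prompts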
