IDENTIFYING DIGITAL ATTRIBUTES FROM MULTIPLE ATTRIBUTE GROUPS WITHIN TARGET DIGITAL IMAGES UTILIZING A DEEP COGNITIVE ATTRIBUTION NEURAL NETWORK

    Publication No.: US20210073267A1

    Publication Date: 2021-03-11

    Application No.: US16564831

    Filing Date: 2019-09-09

    Applicant: Adobe, Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating tags for an object portrayed in a digital image based on predicted attributes of the object. For example, the disclosed systems can utilize interleaved neural network layers of alternating inception layers and dilated convolution layers to generate a localization feature vector. Based on the localization feature vector, the disclosed systems can generate attribute localization feature embeddings, for example, using a pooling layer such as a global average pooling layer. The disclosed systems can then apply the attribute localization feature embeddings to corresponding attribute group classifiers to generate tags based on predicted attributes. In particular, attribute group classifiers can predict attributes as associated with a query image (e.g., based on a scoring comparison with other potential attributes of an attribute group). Based on the generated tags, the disclosed systems can respond to tag queries and search queries.
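The pooling-then-classify step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature-map dimensions, attribute groups, and random linear classifiers are all assumed for demonstration.

```python
import numpy as np

# Hypothetical shapes; the abstract does not specify dimensions.
rng = np.random.default_rng(0)
feature_map = rng.standard_normal((64, 7, 7))  # C x H x W localization features

# Global average pooling collapses the spatial dims into one embedding per channel.
embedding = feature_map.mean(axis=(1, 2))      # shape (64,)

# One linear classifier per attribute group (group names are illustrative).
attribute_groups = {
    "color": ["red", "blue", "green"],
    "sleeve": ["short", "long"],
}
weights = {g: rng.standard_normal((len(a), 64)) for g, a in attribute_groups.items()}

tags = {}
for group, attrs in attribute_groups.items():
    scores = weights[group] @ embedding          # one score per candidate attribute
    tags[group] = attrs[int(np.argmax(scores))]  # keep the highest-scoring attribute

print(tags)
```

Each group classifier compares candidate attributes within its own group, mirroring the scoring comparison the abstract describes.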

    Form structure similarity detection

    Publication No.: US12124497B1

    Publication Date: 2024-10-22

    Application No.: US18190686

    Filing Date: 2023-03-27

    Applicant: Adobe Inc.

    CPC classification number: G06F16/383 G06F16/332 G06V30/19147 G06V30/412

    Abstract: Form structure similarity detection techniques are described. A content processing system, for instance, receives a query snippet that depicts a query form structure. The content processing system generates a query layout string that includes semantic indicators to represent the query form structure and generates candidate layout strings that represent form structures from a target document. The content processing system calculates similarity scores between the query layout string and the candidate layout strings. Based on the similarity scores, the content processing system generates a target snippet for display that depicts a form structure that is structurally similar to the query form structure. The content processing system is further operable to generate a training dataset that includes image pairs of snippets depicting form structures that are structurally similar. The content processing system utilizes the training dataset to train a machine learning model to perform form structure similarity matching.
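The layout-string comparison can be sketched with a standard sequence-similarity measure. The semantic indicator alphabet below is invented for illustration; the patent does not publish its encoding, and `SequenceMatcher` stands in for whatever similarity score the system actually computes.

```python
from difflib import SequenceMatcher

# Assumed semantic indicators: L=label, T=text field, C=checkbox, S=signature.
query_layout = "LTLTC"
candidates = {
    "page1_block3": "LTLTC",
    "page2_block1": "LTSSC",
    "page4_block2": "CCCC",
}

# Score each candidate layout string against the query; higher means the
# candidate form structure is more similar to the query form structure.
scores = {name: SequenceMatcher(None, query_layout, layout).ratio()
          for name, layout in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # → page1_block3 1.0
```

The best-scoring candidate is the snippet the system would surface as structurally similar.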

    Form structure extraction by predicting associations

    Publication No.: US12086728B2

    Publication Date: 2024-09-10

    Application No.: US18135948

    Filing Date: 2023-04-18

    Applicant: Adobe Inc.

    CPC classification number: G06N5/04 G06N3/08 G06N20/00 G06N20/10 G06V10/82

    Abstract: Techniques described herein extract form structures from a static form to facilitate making that static form reflowable. A method described herein includes accessing low-level form elements extracted from a static form. The method includes determining, using a first set of prediction models, second-level form elements based on the low-level form elements. Each second-level form element includes a respective one or more low-level form elements. The method further includes determining, using a second set of prediction models, high-level form elements based on the second-level form elements and the low-level form elements. Each high-level form element includes a respective one or more second-level form elements or low-level form elements. The method further includes generating a reflowable form based on the static form by, for each high-level form element, linking together the respective one or more second-level form elements or low-level form elements.
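The two-stage association idea can be sketched as below. Simple rules stand in for the two sets of prediction models; the element schema, distance threshold, and pairing rule are all assumptions made for the sketch.

```python
# Low-level elements extracted from a static form (toy data).
low_level = [
    {"id": 0, "kind": "text_run", "y": 10},
    {"id": 1, "kind": "text_run", "y": 12},
    {"id": 2, "kind": "widget", "y": 11},
    {"id": 3, "kind": "text_run", "y": 40},
]

def predict_second_level(elements):
    # Stand-in "model": group text runs on nearby lines into text blocks.
    blocks, current = [], []
    for el in sorted(elements, key=lambda e: e["y"]):
        if el["kind"] != "text_run":
            continue
        if current and el["y"] - current[-1]["y"] > 5:
            blocks.append({"kind": "text_block", "members": current})
            current = []
        current.append(el)
    if current:
        blocks.append({"kind": "text_block", "members": current})
    return blocks

def predict_high_level(second_level, low_level):
    # Stand-in "model": pair each text block with a widget into a form field,
    # linking second-level and low-level elements together.
    widgets = [e for e in low_level if e["kind"] == "widget"]
    return [{"kind": "field", "label": block, "widget": widget}
            for block, widget in zip(second_level, widgets)]

fields = predict_high_level(predict_second_level(low_level), low_level)
print(len(fields))  # → 1
```

The linked high-level fields are what make the form reflowable: each field moves as a unit.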

    SELF-SUPERVISED HIERARCHICAL EVENT REPRESENTATION LEARNING

    Publication No.: US20230154186A1

    Publication Date: 2023-05-18

    Application No.: US17455126

    Filing Date: 2021-11-16

    Applicant: ADOBE INC.

    CPC classification number: G06K9/00718 G06K9/00751 G06N3/088 G06K2009/00738

    Abstract: Systems and methods for video processing are described. Embodiments of the present disclosure generate a plurality of image feature vectors corresponding to a plurality of frames of a video; generate a plurality of low-level event representation vectors based on the plurality of image feature vectors, wherein a number of the low-level event representation vectors is less than a number of the image feature vectors; generate a plurality of high-level event representation vectors based on the plurality of low-level event representation vectors, wherein a number of the high-level event representation vectors is less than the number of the low-level event representation vectors; and identify a plurality of high-level events occurring in the video based on the plurality of high-level event representation vectors.
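The progressive reduction from frames to low-level events to high-level events can be sketched with plain pooling. Mean pooling over fixed windows is a stand-in for the learned encoders; the vector counts and dimensions are assumed, with the only constraint from the abstract being that each level has fewer vectors than the one below it.

```python
import numpy as np

frames = np.arange(32 * 4, dtype=float).reshape(32, 4)  # 32 frame feature vectors

def pool_events(vectors, window):
    # Collapse non-overlapping windows of lower-level vectors into one
    # higher-level representation each.
    n = len(vectors) // window
    return vectors[: n * window].reshape(n, window, -1).mean(axis=1)

low_level_events = pool_events(frames, window=4)      # 32 frames -> 8 vectors
high_level_events = pool_events(low_level_events, 4)  # 8 vectors -> 2 vectors
print(low_level_events.shape, high_level_events.shape)  # → (8, 4) (2, 4)
```

High-level events would then be identified from the two remaining representation vectors, e.g. by a classifier.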

    TEXT CONDITIONED IMAGE SEARCH BASED ON DUAL-DISENTANGLED FEATURE COMPOSITION

    Publication No.: US20220237406A1

    Publication Date: 2022-07-28

    Application No.: US17160862

    Filing Date: 2021-01-28

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
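The dual composition step can be sketched as below. Additive composition is one simple choice; the patent leaves the exact composition function to the embodiment, and the embedding size, random features, and gallery are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8  # assumed embedding size

# Disentangled features (stand-ins for the learned image/text encoders).
image_content, image_style = rng.standard_normal(dim), rng.standard_normal(dim)
text_content, text_style = rng.standard_normal(dim), rng.standard_normal(dim)

# Compose content with content and style with style, keeping the two
# disentangled subspaces separate until retrieval.
global_content = image_content + text_content
global_style = image_style + text_style

# Retrieve the gallery image whose feature is most similar to the composed query.
gallery = rng.standard_normal((5, 2 * dim))
query = np.concatenate([global_content, global_style])
sims = gallery @ query / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(query))
print(int(np.argmax(sims)))
```

Cosine similarity against the composed global feature identifies the target image that reflects the text-specified attribute.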

    Model Training with Retrospective Loss

    Publication No.: US20210256387A1

    Publication Date: 2021-08-19

    Application No.: US16793551

    Filing Date: 2020-02-18

    Applicant: Adobe Inc.

    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
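The retrospective constraint can be sketched as a margin-style loss. The exact functional form below (a `kappa`-scaled difference of distances) is an assumption for illustration; the abstract only specifies the constraint that the new prediction be closer to the ground truth than the previous one.

```python
import numpy as np

def retrospective_loss(pred, prev_pred, target, kappa=2.0):
    # Penalize distance to the ground truth more heavily than distance to the
    # previous prediction, so minimizing the loss pulls the model toward the
    # target and away from where it used to be. kappa > 1 is an assumed scaling.
    return kappa * np.linalg.norm(pred - target) - np.linalg.norm(pred - prev_pred)

target = np.array([1.0, 0.0])
prev_pred = np.array([0.5, 0.5])
better = np.array([0.9, 0.1])  # moved toward the target since the last step
worse = np.array([0.4, 0.6])   # drifted away from the target

assert retrospective_loss(better, prev_pred, target) < retrospective_loss(worse, prev_pred, target)
```

Because the term is model-agnostic (it only touches predictions, not parameters), it can be added after the warm-up iterations of any task-specific training loop.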

    Clustering product media files
    Grant

    Publication No.: US11017016B2

    Publication Date: 2021-05-25

    Application No.: US15940849

    Filing Date: 2018-03-29

    Applicant: Adobe Inc.

    Abstract: A method for clustering product media files is provided. The method includes dividing each media file corresponding to one or more products into a plurality of tiles. Each media file includes an image or a video. Feature vectors are computed for each tile of each media file. One or more patch clusters are generated using the plurality of tiles. Each patch cluster includes tiles having feature vectors similar to each other. The feature vectors of each media file are compared with feature vectors of each patch cluster. Based on the comparison, product groups are then generated. All media files having comparison outputs similar to each other are grouped into one product group. Each product group includes one or more media files for one product. Apparatus for substantially performing the method as described herein is also provided.
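The tile-then-group pipeline can be sketched as follows. The synthetic "images", mean-value tile features, and greedy distance-threshold grouping are all stand-ins for the real media files, learned descriptors, and clustering the abstract describes.

```python
import numpy as np

# Two tiny synthetic "images" of the same product and one of a different
# product; pixel values are placeholders for real media content.
media = {
    "a.jpg": np.full((4, 4), 0.9),
    "b.jpg": np.full((4, 4), 0.88),
    "c.jpg": np.full((4, 4), 0.1),
}

def tile_features(img, size=2):
    # Divide the image into tiles and use each tile's mean as a toy feature
    # vector (a real system would compute a learned descriptor per tile).
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].mean()
                     for i in range(0, h, size) for j in range(0, w, size)])

feats = {name: tile_features(img) for name, img in media.items()}

# Greedy grouping: media whose tile features are close land in one product group.
groups = []
for name, feat in feats.items():
    for group in groups:
        if np.abs(feats[group[0]] - feat).mean() < 0.1:
            group.append(name)
            break
    else:
        groups.append([name])
print(groups)  # → [['a.jpg', 'b.jpg'], ['c.jpg']]
```

Each resulting group collects the media files depicting one product.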

    ELECTRONIC DOCUMENT SEGMENTATION USING DEEP LEARNING

    Publication No.: US20210049357A1

    Publication Date: 2021-02-18

    Application No.: US16539634

    Filing Date: 2019-08-13

    Applicant: Adobe Inc.

    Abstract: Techniques for document segmentation. In an example, a document processing application segments an electronic document image into strips. A first strip overlaps a second strip. The application generates a first mask indicating one or more elements and element types in the first strip by applying a predictive model network to image content in the first strip and a prior mask generated from image content of the first strip. The application generates a second mask indicating one or more elements and element types in the second strip by applying the predictive model network to image content in the second strip and the first mask. The application computes, from a combined mask derived from the first mask and the second mask, an output electronic document that identifies elements in the electronic document and the respective element types.
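The strip-by-strip loop with mask feedback can be sketched as below. A threshold stands in for the predictive model network, and the document size, strip height, and overlap are assumed for the sketch.

```python
import numpy as np

# Toy "document image": top half background (0), bottom half an element (1).
doc = np.vstack([np.zeros((4, 8)), np.ones((4, 8))])
strip_height, overlap = 4, 2

def predict_mask(strip, prior_mask):
    # Stand-in for the predictive model: threshold the strip, and reuse the
    # prior strip's mask in the rows where consecutive strips overlap.
    mask = (strip > 0.5).astype(int)
    if prior_mask is not None:
        mask[:overlap] = np.maximum(mask[:overlap], prior_mask[-overlap:])
    return mask

masks, prior = [], None
for top in range(0, doc.shape[0] - overlap, strip_height - overlap):
    prior = predict_mask(doc[top:top + strip_height], prior)
    masks.append((top, prior))

# Combine the strip masks into one full-document mask.
combined = np.zeros_like(doc, dtype=int)
for top, mask in masks:
    combined[top:top + mask.shape[0]] = np.maximum(
        combined[top:top + mask.shape[0]], mask)
print(combined.sum())  # → 32 (pixels labeled as an element)
```

Feeding each strip's mask into the next strip's prediction is what keeps element labels consistent across strip boundaries.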

    Digital Image Search Training using Aggregated Digital Images

    Publication No.: US20200134056A1

    Publication Date: 2020-04-30

    Application No.: US16177243

    Filing Date: 2018-10-31

    Applicant: Adobe Inc.

    Abstract: Digital image search training techniques and machine-learning architectures are described. In one example, a query digital image is received by a service provider system, which is then used to select at least one positive sample digital image, e.g., having a same product ID. A plurality of negative sample digital images is also selected by the service provider system based on the query digital image, e.g., having different product IDs. The at least one positive sample digital image and the plurality of negative samples are then aggregated by the service provider system into a single aggregated digital image. At least one neural network is then trained by the service provider system using a loss function based on a feature comparison between the query digital image and samples from the aggregated digital image in a single pass.
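The single-pass comparison can be sketched with a contrastive-style loss. Stacking precomputed embeddings mimics the single forward pass over the aggregated image; the normalized-random "embeddings" and the softmax cross-entropy form of the loss are assumptions, since the abstract does not fix the exact loss function.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 16  # assumed feature dimension

def embed(x):
    # Stand-in for the neural network's feature extractor.
    return x / np.linalg.norm(x)

query = embed(rng.standard_normal(dim))
positive = embed(query + 0.1 * rng.standard_normal(dim))        # same product ID
negatives = [embed(rng.standard_normal(dim)) for _ in range(3)]  # other products

# The aggregated image lets one forward pass yield all sample features at once;
# stacking the embeddings here mimics that single pass.
samples = np.vstack([positive] + negatives)
sims = samples @ query

# Softmax cross-entropy with the positive (index 0) as the correct match.
loss = -np.log(np.exp(sims[0]) / np.exp(sims).sum())
print(loss > 0)
```

Minimizing this loss pushes the query embedding toward its positive sample and away from the aggregated negatives.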
