Neural network architecture pruning
    Invention Grant

    Publication Number: US11663481B2

    Publication Date: 2023-05-30

    Application Number: US16799191

    Application Date: 2020-02-24

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
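A minimal sketch of the progressive-pruning idea described above. The patent does not publish its exact importance criterion, so this example assumes a common stand-in (L1 norm of each output channel's filter weights) and a linearly increasing sparsity schedule; the function names are hypothetical:

```python
import numpy as np

def channel_importance(weight):
    # weight: (out_channels, in_channels, kH, kW); score each output
    # channel by the L1 norm of its filter weights (assumed criterion)
    return np.abs(weight).sum(axis=(1, 2, 3))

def progressive_prune_mask(weight, step, total_steps, final_sparsity=0.5):
    # prune a gradually increasing fraction of the lowest-scoring channels,
    # so redundant channels are removed a little at each training iteration
    frac = final_sparsity * min(step / total_steps, 1.0)
    scores = channel_importance(weight)
    k = int(frac * len(scores))
    mask = np.ones(len(scores), dtype=bool)
    if k > 0:
        mask[np.argsort(scores)[:k]] = False
    return mask  # True = keep channel, False = prune
```

At step 0 nothing is pruned; by the final step half the channels (here) are masked out, which mirrors the joint train-and-prune loop the abstract describes.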

    Digital Content Query-Aware Sequential Search

    Publication Number: US20230133522A1

    Publication Date: 2023-05-04

    Application Number: US17513127

    Application Date: 2021-10-28

    Applicant: Adobe Inc.

    Abstract: Digital content search techniques are described that overcome the challenges found in conventional sequence-based techniques through use of a query-aware sequential search. In one example, a search query is received and sequence input data is obtained based on the search query. The sequence input data describes a sequence of digital content and respective search queries. Embedding data is generated based on the sequence input data using an embedding module of a machine-learning model. The embedding module includes a query-aware embedding layer that generates embeddings of the sequence of digital content and respective search queries. A search result is generated referencing at least one item of digital content by processing the embedding data using at least one layer of the machine-learning model.
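A toy sketch of the query-aware embedding layer the abstract describes: each position in the interaction sequence pairs an item of digital content with the search query that surfaced it, and the layer fuses both lookups into one embedding. The table sizes, dimension, and additive fusion are assumptions for illustration, not the patented architecture:

```python
import numpy as np

# hypothetical vocabulary sizes and embedding dimension
NUM_ITEMS, NUM_QUERIES, DIM = 100, 50, 16
rng = np.random.default_rng(0)
item_table = rng.normal(size=(NUM_ITEMS, DIM))
query_table = rng.normal(size=(NUM_QUERIES, DIM))

def query_aware_embed(item_ids, query_ids):
    # one embedding per sequence position, combining the item lookup
    # with the lookup of the query that retrieved that item
    return item_table[np.asarray(item_ids)] + query_table[np.asarray(query_ids)]
```

The resulting (sequence_length, DIM) array is what downstream transformer layers would consume to produce the search result.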

    Locally Constrained Self-Attentive Sequential Recommendation

    Publication Number: US20230116969A1

    Publication Date: 2023-04-20

    Application Number: US17501191

    Application Date: 2021-10-14

    Applicant: Adobe Inc.

    Abstract: Digital content search techniques are described. In one example, the techniques are incorporated as part of a multi-head self-attention module of a transformer using machine learning. A localized self-attention module, for instance, is incorporated as part of the multi-head self-attention module that applies local constraints to the sequence. This is performable in a variety of ways. In a first instance, a model-based local encoder is used, examples of which include a fixed-depth recurrent neural network (RNN) and a convolutional network. In a second instance, a masking-based local encoder is used, examples of which include use of a fixed window, Gaussian initialization, and an adaptive predictor.
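The masking-based local encoder with a fixed window, mentioned as the second instance above, can be sketched as ordinary scaled dot-product attention with scores outside the window forced to negative infinity. Single-head, NumPy-only, with an assumed window size:

```python
import numpy as np

def local_self_attention(Q, K, V, window=2):
    # scaled dot-product attention, but each position may only attend to
    # neighbors within a fixed window (a masking-based local constraint)
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    idx = np.arange(n)
    scores[np.abs(idx[:, None] - idx[None, :]) > window] = -np.inf
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V, w
```

In a multi-head module, one or more heads would use this locally constrained variant while the remaining heads keep global attention.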

    Generating modified digital images utilizing a dispersed multimodal selection model

    Publication Number: US11594077B2

    Publication Date: 2023-02-28

    Application Number: US17025477

    Application Date: 2020-09-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images based on verbal and/or gesture input by utilizing a natural language processing neural network and one or more computer vision neural networks. The disclosed systems can receive verbal input together with gesture input. The disclosed systems can further utilize a natural language processing neural network to generate a verbal command based on verbal input. The disclosed systems can select a particular computer vision neural network based on the verbal input and/or the gesture input. The disclosed systems can apply the selected computer vision neural network to identify pixels within a digital image that correspond to an object indicated by the verbal input and/or gesture input. Utilizing the identified pixels, the disclosed systems can generate a modified digital image by performing one or more editing actions indicated by the verbal input and/or gesture input.
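The selection step, picking one computer-vision network based on the verbal and gesture input, can be sketched as a small dispatcher. The routing table, network names, and gesture heuristic below are hypothetical; the patent's actual selection model is not reproduced here:

```python
# hypothetical command-verb -> specialist-network routing table
ROUTES = {
    "select": "referring_segmentation_net",
    "remove": "salient_object_net",
    "brighten": "pixel_adjustment_net",
}

def select_vision_network(verbal_command, has_gesture):
    # pick a specialist network from the parsed command verb; a gesture
    # (e.g. a tap on the image) biases toward a region-based model
    verb = verbal_command.split()[0].lower()
    net = ROUTES.get(verb, "general_segmentation_net")
    if has_gesture and net == "general_segmentation_net":
        net = "referring_segmentation_net"
    return net
```

The chosen network would then identify the pixels of the referenced object, and the editing action named in the verbal command is applied to those pixels.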

    Semantic image manipulation using visual-semantic joint embeddings

    Publication Number: US11574142B2

    Publication Date: 2023-02-07

    Application Number: US16943511

    Application Date: 2020-07-30

    Applicant: Adobe Inc.

    Abstract: The technology described herein is directed to a reinforcement learning based framework for training a natural media agent to learn a rendering policy without human supervision or labeled datasets. The reinforcement learning based framework feeds the natural media agent a training dataset to implicitly learn the rendering policy by exploring a canvas and minimizing a loss function. Once trained, the natural media agent can be applied to any reference image to generate a series (or sequence) of continuous-valued primitive graphic actions, e.g., sequence of painting strokes, that when rendered by a synthetic rendering environment on a canvas, reproduce an identical or transformed version of the reference image subject to limitations of an action space and the learned rendering policy.
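The loop of emitting continuous stroke actions and minimizing a loss against the reference image can be illustrated with a toy greedy agent. The disc renderer and the sample-and-keep-best policy below are stand-ins for the patent's synthetic rendering environment and learned policy, purely for illustration:

```python
import numpy as np

def render_stroke(canvas, action):
    # toy renderer: action = (x, y, radius, value), all in [0, 1];
    # paints a filled disc on a copy of the canvas
    h, w = canvas.shape
    x, y, r, val = action
    yy, xx = np.mgrid[0:h, 0:w]
    out = canvas.copy()
    out[(xx - x * w) ** 2 + (yy - y * h) ** 2 <= (r * min(h, w) / 2) ** 2] = val
    return out

def greedy_paint(reference, steps=20, candidates=32, seed=0):
    # stand-in for the learned rendering policy: sample candidate stroke
    # actions and keep the one that most reduces the reconstruction loss
    rng = np.random.default_rng(seed)
    canvas = np.zeros_like(reference)
    for _ in range(steps):
        best, best_loss = canvas, np.mean((canvas - reference) ** 2)
        for a in rng.random((candidates, 4)):
            trial = render_stroke(canvas, a)
            loss = np.mean((trial - reference) ** 2)
            if loss < best_loss:
                best, best_loss = trial, loss
        canvas = best
    return canvas
```

A trained agent replaces the random candidate sampling with a policy that proposes each stroke directly, which is what the reinforcement-learning framework above learns without labeled data.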

    Auto-tags with object detection and crops

    Publication Number: US20220343108A1

    Publication Date: 2022-10-27

    Application Number: US17240246

    Application Date: 2021-04-26

    Applicant: Adobe Inc.

    Abstract: Systems and methods for image tagging are described. In some embodiments, images with problematic tags are identified after applying an auto-tagger. The images with problematic tags are then sent to an object detection network. In some cases, the object detection network is trained using a training set selected to improve detection of objects associated with the problematic tags. The output of the object detection network can be merged with the output of the auto-tagger to provide a combined image tagging output. In some cases, the output of the object detection network also includes a bounding box, which can be used to crop the image around a relevant object so that the auto-tagger can be reapplied to a portion of the image.
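The merge-and-crop step can be sketched directly. Assumed shapes: the auto-tagger returns a tag-to-score dict, the detector returns (tag, confidence, bbox) tuples, and confident detections win ties; the threshold and helper names are hypothetical:

```python
def merge_tags(auto_tags, detections, conf_thresh=0.5):
    # auto_tags: {tag: score} from the auto-tagger; detections: list of
    # (tag, confidence, bbox) from the object detector. Confident detections
    # are merged in, keeping the higher score for tags both models produced.
    merged = dict(auto_tags)
    for tag, conf, bbox in detections:
        if conf >= conf_thresh:
            merged[tag] = max(merged.get(tag, 0.0), conf)
    return merged

def crop_to_box(image, bbox):
    # crop around a detected object so the auto-tagger can be re-applied
    # to just that region; bbox = (x0, y0, x1, y1) in pixel coordinates,
    # image = list of pixel rows
    x0, y0, x1, y1 = bbox
    return [row[x0:x1] for row in image[y0:y1]]
```

Re-running the auto-tagger on the cropped region gives it a second chance on objects that were too small or cluttered in the full frame, which is the recovery path the abstract describes for problematic tags.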

    Generating modified digital images using deep visual guided patch match models for image inpainting

    Publication Number: US20220292650A1

    Publication Date: 2022-09-15

    Application Number: US17202019

    Application Date: 2021-03-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
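One way to picture how a guidance map informs patch matching: the patch cost combines appearance distance with a penalty whenever the guide labels (structure, depth, or segmentation) of the two patches disagree. The cost weighting and the exhaustive search below are simplifications of PatchMatch's randomized search, with assumed names:

```python
import numpy as np

def guided_patch_cost(target_patch, source_patch, target_guide, source_guide, lam=10.0):
    # appearance distance plus a penalty for every pixel where the deep
    # visual guide (e.g. segmentation labels) disagrees between patches
    appearance = np.sum((target_patch - source_patch) ** 2)
    guide_mismatch = np.sum(target_guide != source_guide)
    return appearance + lam * guide_mismatch

def best_source_patch(target_patch, target_guide, sources, guides, lam=10.0):
    # exhaustive stand-in for PatchMatch's randomized search: pick the
    # candidate source patch with the lowest guided cost
    costs = [guided_patch_cost(target_patch, s, target_guide, g, lam)
             for s, g in zip(sources, guides)]
    return int(np.argmin(costs))
```

With the guide penalty in place, replacement pixels are drawn from regions that match the structure, depth, or segment of the hole being filled rather than merely its colors.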
