GENERATING AND UTILIZING PRUNED NEURAL NETWORKS

Publication No.: US20230259778A1

    Publication Date: 2023-08-17

    Application No.: US18309367

    Filing Date: 2023-04-28

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
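The progressive pruning loop described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: channel importance is assumed to be the L1 norm of each channel's weights, the per-iteration pruning fraction is an arbitrary choice, and the joint training update that the system interleaves with pruning is omitted.

```python
import numpy as np

def prune_channels(weights, fraction):
    """Drop the lowest-importance output channels, ranked by L1 norm.

    weights: (out_channels, in_features) array; returns the pruned
    matrix and the indices of the channels that were kept.
    """
    importance = np.abs(weights).sum(axis=1)           # L1 norm per channel
    n_keep = max(1, int(round(weights.shape[0] * (1.0 - fraction))))
    kept = np.sort(np.argsort(importance)[-n_keep:])   # strongest channels
    return weights[kept], kept

def progressive_prune(weights, fraction_per_step, steps):
    """Prune a small fraction of channels at each iteration; the patented
    system interleaves this with joint training, which is omitted here."""
    for _ in range(steps):
        weights, _ = prune_channels(weights, fraction_per_step)
    return weights

rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 32))                      # toy layer weights
pruned = progressive_prune(layer, fraction_per_step=0.25, steps=2)
```

Pruning a modest fraction per iteration, rather than all at once, is what lets the remaining weights adapt between steps and is the sense in which the pruning is "progressive."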

    RETOUCHING DIGITAL IMAGES UTILIZING LAYER SPECIFIC DEEP-LEARNING NEURAL NETWORKS

Publication No.: US20230058793A1

    Publication Date: 2023-02-23

    Application No.: US18045730

    Filing Date: 2022-10-11

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to an image retouching system that automatically retouches digital images by accurately correcting face imperfections such as skin blemishes and redness. For instance, the image retouching system automatically retouches a digital image through separating digital images into multiple frequency layers, utilizing a separate corresponding neural network to apply frequency-specific corrections at various frequency layers, and combining the retouched frequency layers into a retouched digital image. As described herein, the image retouching system efficiently utilizes different neural networks to target and correct skin features specific to each frequency layer.

    TEMPORALLY DISTRIBUTED NEURAL NETWORKS FOR VIDEO SEMANTIC SEGMENTATION

Publication No.: US20220270370A1

    Publication Date: 2022-08-25

    Application No.: US17735156

    Filing Date: 2022-05-03

    Applicant: Adobe Inc.

Abstract: A Video Semantic Segmentation System (VSSS) is disclosed that performs accurate and fast semantic segmentation of videos using a set of temporally distributed neural networks. The VSSS receives as input a video signal comprising a contiguous sequence of temporally-related video frames. The VSSS extracts features from the video frames in the contiguous sequence and, based upon the extracted features, selects, from a set of labels, a label to be associated with each pixel of each video frame in the video signal. In certain embodiments, a set of multiple neural networks is used to extract the features for video segmentation, and the extraction of features is distributed among the multiple neural networks in the set. A strong feature representation covering the entirety of the features is produced for each video frame in the sequence by aggregating the output features extracted by the multiple neural networks.
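The temporal distribution of feature extraction can be sketched as follows. The random linear maps standing in for the sub-networks and the round-robin assignment of frames to sub-networks are illustrative assumptions; what the sketch shows is that each frame pays for only one sub-network's computation, while its full representation aggregates the most recent output of every sub-network.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subnets(n, in_dim, out_dim):
    """Each 'sub-network' is a random linear map here; in the VSSS each
    is a lightweight CNN extracting a complementary group of features."""
    return [rng.normal(size=(in_dim, out_dim)) for _ in range(n)]

def distributed_features(frames, subnets):
    """Frame t runs through sub-network t mod N only; the full feature
    representation for frame t aggregates (here, concatenates) the
    partial features of up to the N most recent frames."""
    n = len(subnets)
    partial = [frames[t] @ subnets[t % n] for t in range(len(frames))]
    return [np.concatenate(partial[max(0, t - n + 1): t + 1])
            for t in range(len(frames))]

frames = [rng.normal(size=16) for _ in range(6)]       # toy frame features
features = distributed_features(frames, make_subnets(4, in_dim=16, out_dim=8))
```

Early frames have fewer partial features available (the window has not filled yet); from frame N-1 onward every full representation aggregates all N sub-networks' outputs.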

    Utilizing a neural network having a two-stream encoder architecture to generate composite digital images

Publication No.: US11158055B2

    Publication Date: 2021-10-26

    Application No.: US16523465

    Filing Date: 2019-07-26

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to utilizing a neural network having a two-stream encoder architecture to accurately generate composite digital images that realistically portray a foreground object from one digital image against a scene from another digital image. For example, the disclosed systems can utilize a foreground encoder of the neural network to identify features from a foreground image and further utilize a background encoder to identify features from a background image. The disclosed systems can then utilize a decoder to fuse the features together and generate a composite digital image. The disclosed systems can train the neural network utilizing an easy-to-hard data augmentation scheme implemented via self-teaching. The disclosed systems can further incorporate the neural network within an end-to-end framework for automation of the image composition process.
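The two-stream encode-fuse-decode shape can be sketched as below. The one-layer tanh encoders, concatenation as the fusion operator, and a single linear decoder are stand-in assumptions; the actual networks are convolutional, and the self-teaching training scheme is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_encoder(dim_in, dim_out):
    """One-layer stand-in for a convolutional encoder stream."""
    w = rng.normal(size=(dim_in, dim_out)) * 0.1
    return lambda x: np.tanh(x @ w)

fg_encode = linear_encoder(12, 6)          # foreground stream
bg_encode = linear_encoder(12, 6)          # background stream
decoder_w = rng.normal(size=(12, 12)) * 0.1

def composite(foreground, background):
    """Encode the two inputs with separate streams, fuse the features
    by concatenation, and decode into the composite output."""
    fused = np.concatenate([fg_encode(foreground), bg_encode(background)])
    return fused @ decoder_w

out = composite(rng.normal(size=12), rng.normal(size=12))
```

Keeping the two encoders separate lets each stream specialize (object boundaries for the foreground, scene context for the background) before the decoder fuses them.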

    Robust training of large-scale object detectors with a noisy dataset

Publication No.: US11126890B2

    Publication Date: 2021-09-21

    Application No.: US16388115

    Filing Date: 2019-04-18

Applicant: Adobe Inc.

Abstract: Systems and methods are described for object detection within a digital image using a hierarchical softmax function. The method may include applying a first softmax function of a softmax hierarchy to a digital image based on a first set of object classes that are children of a root node of a class hierarchy, then applying second (and subsequent) softmax functions to the digital image based on second (and subsequent) sets of object classes, where each subsequent set comprises child nodes of an object class from the preceding (parent) set. The method may then include generating an object recognition output using a convolutional neural network (CNN) based at least in part on applying the first and subsequent softmax functions. In some cases, the hierarchical softmax function is the loss function for the CNN.
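The hierarchical softmax itself can be sketched as follows. The two-level toy hierarchy and the logit indexing are illustrative assumptions; the mechanism shown — a leaf's probability is the product of conditional softmaxes over sibling classes along the path to the root — is the standard construction the abstract describes.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy class hierarchy: root -> {animal, vehicle}, animal -> {cat, dog},
# vehicle -> {car, bike}; every non-root node owns one logit.
children = {"root": ["animal", "vehicle"],
            "animal": ["cat", "dog"],
            "vehicle": ["car", "bike"]}
parents = {c: p for p, kids in children.items() for c in kids}
index = {"animal": 0, "vehicle": 1, "cat": 2, "dog": 3, "car": 4, "bike": 5}

def hierarchical_prob(logits, leaf):
    """P(leaf) is the product of conditional softmax probabilities along
    the path to the root: one softmax over each set of sibling classes."""
    prob, node = 1.0, leaf
    while node != "root":
        siblings = children[parents[node]]
        cond = softmax(np.array([logits[index[s]] for s in siblings]))
        prob *= cond[siblings.index(node)]
        node = parents[node]
    return prob

logits = np.zeros(6)   # uniform scores at every level
```

With uniform logits each binary level contributes 1/2, so every leaf gets probability 1/4 and the leaf probabilities sum to 1 — the property that makes the construction usable as a loss.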

    Generating and utilizing pruned neural networks

Publication No.: US11983632B2

    Publication Date: 2024-05-14

    Application No.: US18309367

    Filing Date: 2023-04-28

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.

    Shaping a neural network architecture utilizing learnable sampling layers

Publication No.: US11710042B2

    Publication Date: 2023-07-25

    Application No.: US16782793

    Filing Date: 2020-02-05

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The present disclosure relates to shaping the architecture of a neural network. For example, the disclosed systems can provide a neural network shaping mechanism for at least one sampling layer of a neural network. The neural network shaping mechanism can include a learnable scaling factor between a sampling rate of the at least one sampling layer and an additional sampling function. The disclosed systems can learn the scaling factor based on a dataset while jointly learning the network weights of the neural network. Based on the learned scaling factor, the disclosed systems can shape the architecture of the neural network by modifying the sampling rate of the at least one sampling layer.
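One way to read the learnable scaling factor is as a soft, differentiable blend between a layer's existing sampling path and an additional sampling function, which can be sketched as below. The 1-D average-pooling path, the sigmoid gate, and the specific blending rule are interpretive assumptions, not the claimed mechanism; the sketch only illustrates how a scalar learned jointly with the network weights can end up selecting a sampling rate.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def avg_downsample(x, stride=2):
    """Average-pool a 1-D signal by `stride`, then repeat each value so
    both paths have the same length and can be blended elementwise."""
    n = len(x) - len(x) % stride
    pooled = x[:n].reshape(-1, stride).mean(axis=1)
    return np.repeat(pooled, stride)[:len(x)]

def shaped_sampling_layer(x, alpha):
    """Soft, differentiable blend between the identity path and a
    downsampled path, weighted by the learnable scalar `alpha`; once
    trained, alpha saturates and effectively picks one sampling rate."""
    g = sigmoid(alpha)
    return g * avg_downsample(x) + (1.0 - g) * x

x = np.arange(8, dtype=float)
keep_rate = shaped_sampling_layer(x, alpha=-10.0)   # gate ~0: identity path
halve_rate = shaped_sampling_layer(x, alpha=10.0)   # gate ~1: pooled path
```

Because the blend is differentiable in `alpha`, the scaling factor can be learned by ordinary backpropagation alongside the network weights, and the converged value is then used to fix the layer's sampling rate.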
