Resource-Aware Training for Neural Networks
    Invention Application

    Publication Number: US20200234128A1

    Publication Date: 2020-07-23

    Application Number: US16254406

    Filing Date: 2019-01-22

    Applicant: Adobe Inc.

    Abstract: In implementations of resource-aware training for neural networks, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding them to layers of the neural network that contain survived neurons. The reborn neurons are prevented from connecting to the survived neurons while the reborn neurons are trained.
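
    A minimal sketch of the dead/survived split described above, assuming the per-neuron activation scale is a batch-normalization scale factor (the abstract does not specify how the scale is measured); the rebirth and connection-masking steps are only noted in comments.

```python
# Hypothetical sketch: split the channels of a layer into dead and survived
# neurons by comparing activation scales against a threshold.
import torch
import torch.nn as nn

def split_neurons(bn: nn.BatchNorm2d, threshold: float = 1e-2):
    scales = bn.weight.detach().abs()                            # per-channel activation scales
    dead = (scales < threshold).nonzero(as_tuple=True)[0]        # dead neurons
    survived = (scales >= threshold).nonzero(as_tuple=True)[0]   # survived neurons
    return dead, survived

layer = nn.BatchNorm2d(64)
dead, survived = split_neurons(layer)
# Dead neurons would then be "reborn" by re-adding them to layers that contain
# survived neurons, with their connections to survived neurons masked out
# while the reborn neurons are trained.
```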

    Environment map generation and hole filling

    Publication Number: US10719920B2

    Publication Date: 2020-07-21

    Application Number: US16188479

    Filing Date: 2018-11-13

    Applicant: Adobe Inc.

    Abstract: In some embodiments, an image manipulation application receives a two-dimensional background image and projects the background image onto a sphere to generate a sphere image. Based on the sphere image, an unfilled environment map containing a hole area lacking image content can be generated. A portion of the unfilled environment map can be projected to an unfilled projection image using a map projection. The unfilled projection image contains the hole area. A hole filling model is applied to the unfilled projection image to generate a filled projection image containing image content for the hole area. A filled environment map can be generated by applying an inverse projection of the map projection on the filled projection image and by combining the unfilled environment map with the generated image content for the hole area of the environment map.
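
    A rough pipeline sketch under simplifying assumptions: the environment map is equirectangular, the projected background occupies its lower half, and `hole_filling_model` stands in for the trained model; none of these specifics come from the abstract.

```python
import numpy as np

def make_unfilled_env_map(projected_background, env_h=512, env_w=1024):
    """Build an equirectangular map whose lower half holds the projected
    background; the remaining pixels form the hole area lacking content."""
    env = np.zeros((env_h, env_w, 3), dtype=np.float32)
    hole = np.ones((env_h, env_w), dtype=bool)
    env[env_h // 2:] = projected_background[:env_h // 2]   # assumes matching width
    hole[env_h // 2:] = False
    return env, hole

def fill_environment_map(env, hole, hole_filling_model):
    filled = hole_filling_model(env)   # model predicts content for the whole map
    out = env.copy()
    out[hole] = filled[hole]           # keep known pixels, take predictions in the hole
    return out
```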

    Joint Training Technique for Depth Map Generation

    Publication Number: US20200175700A1

    Publication Date: 2020-06-04

    Application Number: US16204785

    Filing Date: 2018-11-29

    Applicant: Adobe Inc.

    Abstract: A joint training technique for depth map generation, implemented by a depth prediction system as part of a computing device, is described. The depth prediction system is configured to generate a candidate feature map from features extracted from training digital images, generate a candidate segmentation map and a candidate depth map from the candidate feature map, and jointly train portions of the depth prediction system using a loss function. Consequently, the depth prediction system is able to generate a depth map that identifies depths of objects using ordinal depth information and accurately delineates object boundaries within a single digital image.
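
    A minimal joint-training sketch, assuming a shared encoder feeding a segmentation head and a depth head, and a simple weighted sum of cross-entropy and L1 losses; the encoder, heads, and loss weights here are illustrative, not the patented design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDepthSeg(nn.Module):
    def __init__(self, feat_ch=64, num_classes=21):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(feat_ch, num_classes, 1)   # candidate segmentation map
        self.depth_head = nn.Conv2d(feat_ch, 1, 1)            # candidate depth map

    def forward(self, x):
        feats = self.encoder(x)                                # candidate feature map
        return self.seg_head(feats), self.depth_head(feats)

model = JointDepthSeg()
images = torch.randn(2, 3, 128, 128)
seg_gt = torch.randint(0, 21, (2, 128, 128))
depth_gt = torch.rand(2, 1, 128, 128)

seg_pred, depth_pred = model(images)
# Joint loss: segmentation cross-entropy plus a weighted depth term.
loss = F.cross_entropy(seg_pred, seg_gt) + 0.5 * F.l1_loss(depth_pred, depth_gt)
loss.backward()   # gradients update both heads and the shared encoder jointly
```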

    Predicting patch displacement maps using a neural network

    Publication Number: US10672164B2

    Publication Date: 2020-06-02

    Application Number: US15785386

    Filing Date: 2017-10-16

    Applicant: Adobe Inc.

    Abstract: Predicting patch displacement maps using a neural network is described. Initially, a digital image on which an image editing operation is to be performed is provided as input to a patch matcher having an offset prediction neural network. From this image, and based on the image editing operation for which the network is trained, the offset prediction neural network generates an offset prediction formed as a displacement map, which has offset vectors that represent a displacement of pixels of the digital image to different locations for performing the image editing operation. Pixel values of the digital image are copied to the image pixels affected by the operation by determining the offset vectors that correspond to the image pixels affected by the image editing operation and mapping the pixel values of the image pixels represented by the determined offset vectors to the affected pixels. According to this mapping, the pixel values of the affected pixels are set, effective to perform the image editing operation.
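
    A small sketch of applying a predicted displacement map: each pixel affected by the editing operation copies its value from the source location its offset vector points to. The offset-prediction network itself is omitted, and the (dy, dx) offset convention is an assumption.

```python
import numpy as np

def apply_displacement(image, offsets, mask):
    """image: (H, W, C); offsets: (H, W, 2) integer (dy, dx) per pixel;
    mask: (H, W) bool marking pixels affected by the editing operation."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    src_y = np.clip(ys + offsets[ys, xs, 0], 0, h - 1)
    src_x = np.clip(xs + offsets[ys, xs, 1], 0, w - 1)
    out = image.copy()
    out[ys, xs] = image[src_y, src_x]   # map source pixel values to the affected pixels
    return out
```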

    LEARNING COPY SPACE USING REGRESSION AND SEGMENTATION NEURAL NETWORKS

    Publication Number: US20200160111A1

    Publication Date: 2020-05-21

    Application Number: US16191724

    Filing Date: 2018-11-15

    Applicant: ADOBE INC.

    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
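
    A hedged sketch of the segmentation CNN's second stage, pairing boundary refinement layers with bilinear up-sampling layers; the residual form of the refinement block, the channel count, and the number of stages are assumptions.

```python
import torch
import torch.nn as nn

class BoundaryRefine(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.block(x)          # refine mask boundaries residually

class MaskDecoder(nn.Module):
    def __init__(self, ch=64, num_masks=2, stages=3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(BoundaryRefine(ch),
                           nn.Upsample(scale_factor=2, mode='bilinear',
                                       align_corners=False))
             for _ in range(stages)])
        self.out = nn.Conv2d(ch, num_masks, 1)   # pixel-level copy-space masks

    def forward(self, feats):
        for stage in self.stages:
            feats = stage(feats)
        return self.out(feats)
```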

    GENERATING MODIFIED DIGITAL IMAGES UTILIZING A MULTIMODAL SELECTION MODEL BASED ON VERBAL AND GESTURE INPUT

    Publication Number: US20200160042A1

    Publication Date: 2020-05-21

    Application Number: US16192573

    Filing Date: 2018-11-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images based on verbal and/or gesture input by utilizing a natural language processing neural network and one or more computer vision neural networks. The disclosed systems can receive verbal input together with gesture input. The disclosed systems can further utilize a natural language processing neural network to generate a verbal command based on verbal input. The disclosed systems can select a particular computer vision neural network based on the verbal input and/or the gesture input. The disclosed systems can apply the selected computer vision neural network to identify pixels within a digital image that correspond to an object indicated by the verbal input and/or gesture input. Utilizing the identified pixels, the disclosed systems can generate a modified digital image by performing one or more editing actions indicated by the verbal input and/or gesture input.
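
    An illustrative dispatch sketch only: a parsed verbal command selects one of several computer-vision networks, which identifies the pixels of the referenced object inside the gesture region before the edit is applied. The registry keys and matching rules are hypothetical.

```python
def select_vision_network(verbal_command: str, registry: dict):
    # Toy routing rules; a natural language processing neural network would
    # produce the verbal command and drive this selection in practice.
    if "salient" in verbal_command or "main object" in verbal_command:
        return registry["salient_object_network"]
    if "everything like" in verbal_command:
        return registry["object_class_network"]
    return registry["general_segmentation_network"]

def edit_image(image, verbal_command, gesture_region, registry, editor):
    network = select_vision_network(verbal_command, registry)
    pixels = network(image, gesture_region)        # pixels of the referenced object
    return editor(image, pixels, verbal_command)   # apply the spoken editing action
```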

    High resolution style transfer
    Invention Grant

    Publication Number: US10650495B2

    Publication Date: 2020-05-12

    Application Number: US15997386

    Filing Date: 2018-06-04

    Applicant: Adobe Inc.

    Abstract: High resolution style transfer techniques and systems are described that overcome the challenges of transferring high resolution style features from one image to another image, and of the limited availability of training data to perform high resolution style transfer. In an example, a neural network is trained using high resolution style features which are extracted from a style image and are used in conjunction with an input image to apply the style features to the input image to generate a version of the input image transformed using the high resolution style features.
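
    A hedged sketch of extracting style features from a high-resolution style image, using Gram matrices of VGG activations in the classic style-transfer formulation; the abstract does not specify the feature type, so the Gram-matrix choice and layer indices are assumptions.

```python
import torch
import torchvision.models as models

vgg = models.vgg16(weights=None).features.eval()   # pretrained weights omitted here

def gram(feat):
    # (B, C, H, W) activations -> (B, C, C) Gram matrix of channel correlations.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_features(style_img, layers=(3, 8, 15)):
    # style_img: (1, 3, H, W) high-resolution style image tensor.
    feats, x = [], style_img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(gram(x))
    return feats   # used alongside the input image when training the network
```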

    Event image curation
    Invention Grant

    Publication Number: US10565472B2

    Publication Date: 2020-02-18

    Application Number: US15935816

    Filing Date: 2018-03-26

    Applicant: Adobe Inc.

    Abstract: In embodiments of event image curation, a computing device includes memory that stores a collection of digital images associated with a type of event, such as a digital photo album of digital photos associated with the event, or a video of image frames associated with the event. A curation application implements a convolutional neural network, which receives the digital images and a designation of the type of event. The convolutional neural network can then determine an importance rating of each digital image within the collection of the digital images based on the type of the event. The importance rating of a digital image is representative of an importance of the digital image to a person in context of the type of the event. The convolutional neural network generates an output of representative digital images from the collection based on the importance rating of each digital image.
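
    A simple curation sketch: a scoring model rates each image's importance conditioned on the event type, and the top-rated images are returned as representatives. The `scorer` callable is a stand-in for the convolutional neural network.

```python
import torch

def curate(images, event_type_id, scorer, num_representatives=5):
    """images: (N, 3, H, W) tensor; scorer(images, event_type_id) -> (N,) importance."""
    with torch.no_grad():
        ratings = scorer(images, event_type_id)      # importance rating per image
    order = torch.argsort(ratings, descending=True)
    return order[:num_representatives], ratings      # indices of representative images
```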

    UTILIZING A DEEP NEURAL NETWORK-BASED MODEL TO IDENTIFY VISUALLY SIMILAR DIGITAL IMAGES BASED ON USER-SELECTED VISUAL ATTRIBUTES

    Publication Number: US20190354802A1

    Publication Date: 2019-11-21

    Application Number: US15983949

    Filing Date: 2018-05-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a deep neural network-based model to identify similar digital images for query digital images. For example, the disclosed systems utilize a deep neural network-based model to analyze query digital images to generate deep neural network-based representations of the query digital images. In addition, the disclosed systems can generate results of visually-similar digital images for the query digital images based on comparing the deep neural network-based representations with representations of candidate digital images. Furthermore, the disclosed systems can identify visually similar digital images based on user-defined attributes and image masks to emphasize specific attributes or portions of query digital images.
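
    A minimal retrieval sketch comparing a deep embedding of the query image with precomputed candidate embeddings by cosine similarity; the embedding network and any attribute-mask weighting are abstracted away.

```python
import torch
import torch.nn.functional as F

def find_similar(query_embedding, candidate_embeddings, top_k=10):
    """query_embedding: (D,); candidate_embeddings: (N, D)."""
    q = F.normalize(query_embedding, dim=0)
    c = F.normalize(candidate_embeddings, dim=1)
    scores = c @ q                             # cosine similarity per candidate
    return torch.topk(scores, top_k).indices   # indices of the most similar images
```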

    ITERATIVELY APPLYING NEURAL NETWORKS TO AUTOMATICALLY IDENTIFY PIXELS OF SALIENT OBJECTS PORTRAYED IN DIGITAL IMAGES

    Publication Number: US20190340462A1

    Publication Date: 2019-11-07

    Application Number: US15967928

    Filing Date: 2018-05-01

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and computer readable media that iteratively apply a neural network to a digital image at a reduced resolution to automatically identify pixels of salient objects portrayed within the digital image. For example, the disclosed systems can generate a reduced-resolution digital image from an input digital image and apply a neural network to identify a region corresponding to a salient object. The disclosed systems can then iteratively apply the neural network to additional reduced-resolution digital images (based on the identified region) to generate one or more reduced-resolution segmentation maps that roughly indicate pixels of the salient object. In addition, the systems described herein can perform post-processing based on the reduced-resolution segmentation map(s) and the input digital image to accurately determine pixels that correspond to the salient object.
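
    A rough sketch of the iterative loop: segment at reduced resolution, crop to the detected salient region, and repeat, leaving the final full-resolution refinement to post-processing. The network, threshold, and crop rule are stand-ins.

```python
import torch
import torch.nn.functional as F

def iterative_saliency(image, net, low_res=256, iterations=2):
    """image: (1, 3, H, W); net returns a (1, 1, low_res, low_res) saliency map."""
    region, maps = image, []
    for _ in range(iterations):
        small = F.interpolate(region, size=(low_res, low_res), mode='bilinear',
                              align_corners=False)
        seg = net(small)                       # reduced-resolution segmentation map
        maps.append(seg)
        ys, xs = torch.nonzero(seg[0, 0] > 0.5, as_tuple=True)
        if len(ys) == 0:
            break
        # Crop the region containing the salient object for the next pass.
        h, w = region.shape[-2:]
        y0, y1 = int(ys.min()) * h // low_res, (int(ys.max()) + 1) * h // low_res
        x0, x1 = int(xs.min()) * w // low_res, (int(xs.max()) + 1) * w // low_res
        region = region[:, :, y0:y1, x0:x1]
    return maps   # combined with the input image during post-processing
```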
