-
Publication Number: US11930303B2
Publication Date: 2024-03-12
Application Number: US17526998
Filing Date: 2021-11-15
Applicant: Adobe Inc.
CPC Classification: H04N9/3182 , G06T5/92 , H04N9/73 , G06T2207/20081
Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that capture feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
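The abstract above describes a learned model that maps visual and contextual features of an image to predicted editing-parameter values. The following is a minimal illustrative sketch of that idea in PyTorch, not the patented implementation; all layer sizes, feature dimensions, and the number of predicted parameters are hypothetical.

# Illustrative sketch only: map visual + contextual features to predicted
# editing-parameter values (e.g., exposure, contrast), one value per parameter.
import torch
import torch.nn as nn

class ParameterAdjustmentModel(nn.Module):
    def __init__(self, feature_dim=512, context_dim=16, num_params=5):
        super().__init__()
        # Hypothetical head: combines visual features (e.g., from a pretrained
        # image encoder) with contextual features and regresses parameter values.
        self.head = nn.Sequential(
            nn.Linear(feature_dim + context_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_params),
            nn.Tanh(),  # normalized predictions in [-1, 1]
        )

    def forward(self, visual_features, context_features):
        x = torch.cat([visual_features, context_features], dim=-1)
        return self.head(x)

model = ParameterAdjustmentModel()
visual = torch.randn(1, 512)   # stand-in for visual features of the input content
context = torch.randn(1, 16)   # stand-in for contextual features (scene/setting)
predicted = model(visual, context)   # one predicted value per adjustable parameter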
-
Publication Number: US11900902B2
Publication Date: 2024-02-13
Application Number: US17228357
Filing Date: 2021-04-12
Applicant: Adobe Inc.
CPC Classification: G10H1/0008 , G06N3/084 , H03G3/32 , H03G5/025 , H04R5/04 , G10H2250/165
Abstract: Embodiments are disclosed for applying an audio signal processing effect to an unprocessed audio sequence using parameters determined by a deep encoder. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including an unprocessed audio sequence and a request to perform an audio signal processing effect on the unprocessed audio sequence. The one or more embodiments further include analyzing, by a deep encoder, the unprocessed audio sequence to determine parameters for processing the unprocessed audio sequence. The one or more embodiments further include sending the unprocessed audio sequence and the parameters to one or more audio signal processing effects plugins to perform the requested audio signal processing effect using the parameters, and outputting a processed audio sequence after the unprocessed audio sequence has been processed by the one or more audio signal processing effects plugins using the parameters.
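As a rough illustration of the pipeline the abstract describes (a deep encoder predicts effect parameters, which are handed to an effects plugin together with the raw audio), the sketch below uses a toy convolutional encoder and a stand-in gain "plugin". The network shape, parameter count, and gain mapping are assumptions, not the patented design.

# Illustrative sketch only: a deep encoder predicts effect parameters from raw
# audio; the parameters and the unprocessed audio are passed to an effect "plugin".
import torch
import torch.nn as nn

class DeepEncoder(nn.Module):
    def __init__(self, num_params=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=16), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(16, num_params)

    def forward(self, audio):                    # audio: (batch, 1, samples)
        feats = self.conv(audio).squeeze(-1)
        return torch.sigmoid(self.fc(feats))     # parameters normalized to [0, 1]

def gain_plugin(audio, params):
    # Hypothetical stand-in for an audio-effect plugin: applies a predicted gain.
    gain = 0.5 + params[:, 0:1].unsqueeze(-1)    # map [0, 1] -> [0.5, 1.5]
    return audio * gain

encoder = DeepEncoder()
unprocessed = torch.randn(1, 1, 48000)           # one second of audio at 48 kHz
params = encoder(unprocessed)                    # predicted effect parameters
processed = gain_plugin(unprocessed, params)     # processed audio sequence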
-
Publication Number: US11798180B2
Publication Date: 2023-10-24
Application Number: US17186436
Filing Date: 2021-02-26
Applicant: Adobe Inc.
Inventors: Wei Yin , Jianming Zhang , Oliver Wang , Simon Niklaus , Mai Long , Su Chen
CPC Classification: G06T7/50 , G06T7/13 , G06T7/143 , G06T7/30 , G06T7/521 , G06T7/593 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084
Abstract: This disclosure describes one or more implementations of a depth prediction system that generates accurate depth images from single input digital images. In one or more implementations, the depth prediction system enforces different sets of loss functions across mixed-data sources to generate a multi-branch-architecture depth prediction model. For instance, in one or more implementations, the depth prediction system utilizes different data sources having different granularities of ground truth depth data to robustly train the depth prediction model. Further, given the different ground truth depth data granularities from the different data sources, the depth prediction model enforces different combinations of loss functions, including an image-level normalized regression loss function and/or a pair-wise normal loss, among other loss functions.
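One plausible reading of the image-level normalized regression loss mentioned above is a per-image shift-and-scale normalization of both prediction and ground truth before an L1 comparison, which lets depth maps with different scales and shifts be mixed during training. The sketch below illustrates that interpretation only; the exact normalization used in the disclosure may differ.

# Illustrative sketch only: a possible image-level normalized regression loss.
import torch

def image_level_normalized_regression_loss(pred, gt):
    # pred, gt: (batch, H, W) depth maps; each image is normalized independently
    # by its median and mean absolute deviation before comparison.
    def normalize(d):
        flat = d.flatten(1)
        median = flat.median(dim=1, keepdim=True).values
        scale = (flat - median).abs().mean(dim=1, keepdim=True) + 1e-6
        return (flat - median) / scale
    return (normalize(pred) - normalize(gt)).abs().mean()

pred = torch.rand(2, 64, 64)   # stand-in predicted depth maps
gt = torch.rand(2, 64, 64)     # stand-in ground truth depth maps
loss = image_level_normalized_regression_loss(pred, gt)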
-
Publication Number: US11568642B2
Publication Date: 2023-01-31
Application Number: US17068429
Filing Date: 2020-10-12
Applicant: ADOBE INC.
Abstract: Methods and systems are provided for facilitating large-scale augmented reality in relation to outdoor scenes using estimated camera pose information. In particular, camera pose information for an image can be estimated by matching the image to a rendered ground-truth terrain model with known camera pose information. To match images with such renders, data driven cross-domain feature embedding can be learned using a neural network. Cross-domain feature descriptors can be used for efficient and accurate feature matching between the image and the terrain model renders. This feature matching allows images to be localized in relation to the terrain model, which has known camera pose information. This known camera pose information can then be used to estimate camera pose information in relation to the image.
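To illustrate the cross-domain matching described above, the sketch below embeds photo patches and terrain-render patches with two small encoders into a shared descriptor space and matches them by dot-product similarity. The architecture, patch sizes, and matching rule are hypothetical stand-ins, not the disclosed training procedure.

# Illustrative sketch only: cross-domain descriptors for photo/render matching.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, patches):
        return F.normalize(self.net(patches), dim=-1)   # unit-length descriptors

photo_encoder, render_encoder = DescriptorNet(), DescriptorNet()
photo_patches = torch.randn(8, 3, 32, 32)    # patches around photo keypoints
render_patches = torch.randn(8, 3, 32, 32)   # patches around terrain-render keypoints
similarity = photo_encoder(photo_patches) @ render_encoder(render_patches).T
matches = similarity.argmax(dim=1)           # nearest render patch for each photo patch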
-
Publication Number: US20220277514A1
Publication Date: 2022-09-01
Application Number: US17186522
Filing Date: 2021-02-26
Applicant: Adobe Inc.
Inventors: Wei Yin , Jianming Zhang , Oliver Wang , Simon Niklaus , Mai Long , Su Chen
Abstract: This disclosure describes implementations of a three-dimensional (3D) scene recovery system that reconstructs a 3D scene representation of a scene portrayed in a single digital image. For instance, the 3D scene recovery system trains and utilizes a 3D point cloud model to recover accurate intrinsic camera parameters from a depth map of the digital image. Additionally, the 3D point cloud model may include multiple neural networks that target specific intrinsic camera parameters. For example, the 3D point cloud model may include a depth 3D point cloud neural network that recovers the depth shift as well as include a focal length 3D point cloud neural network that recovers the camera focal length. Further, the 3D scene recovery system may utilize the recovered intrinsic camera parameters to transform the single digital image into an accurate and realistic 3D scene representation, such as a 3D point cloud.
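Once a depth shift and focal length have been recovered, a depth map can be unprojected into a 3D point cloud with the standard pinhole camera model. The sketch below shows only that unprojection step, with the recovered values supplied as plain inputs rather than predicted by the disclosed networks; the focal length and shift values are arbitrary placeholders.

# Illustrative sketch only: depth map -> 3D point cloud via pinhole back-projection.
import numpy as np

def depth_to_point_cloud(depth, focal_length, depth_shift=0.0):
    h, w = depth.shape
    z = depth + depth_shift                       # apply the recovered depth shift
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * z / focal_length          # pinhole camera back-projection
    y = (v - h / 2.0) * z / focal_length
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth_map = np.random.rand(480, 640) * 10.0       # stand-in for a predicted depth map
points = depth_to_point_cloud(depth_map, focal_length=500.0, depth_shift=0.2)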
-
Publication Number: US20220182588A1
Publication Date: 2022-06-09
Application Number: US17526998
Filing Date: 2021-11-15
Applicant: Adobe Inc.
Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that capture feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
-
Publication Number: US11354906B2
Publication Date: 2022-06-07
Application Number: US16846544
Filing Date: 2020-04-13
Applicant: Adobe Inc.
Inventors: Federico Perazzi , Zhe Lin , Ping Hu , Oliver Wang , Fabian David Caba Heilbron
Abstract: A Video Semantic Segmentation System (VSSS) is disclosed that performs accurate and fast semantic segmentation of videos using a set of temporally distributed neural networks. The VSSS receives as input a video signal comprising a contiguous sequence of temporally-related video frames. The VSSS extracts features from the video frames in the contiguous sequence and, based upon the extracted features, selects, from a set of labels, a label to be associated with each pixel of each video frame in the video signal. In certain embodiments, a set of multiple neural networks is used to extract the features to be used for video segmentation, and the extraction of features is distributed among the multiple neural networks in the set. A strong feature representation representing the entirety of the features is produced for each video frame in the sequence of video frames by aggregating the output features extracted by the multiple neural networks.
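A rough sketch of the temporally distributed idea: each frame in a sliding window is processed by a different small sub-network, and the partial features are aggregated into one strong representation for the current frame. The number of sub-networks, their layers, and the concatenation-based aggregation below are illustrative assumptions, not the VSSS architecture itself.

# Illustrative sketch only: distribute feature extraction across sub-networks,
# one per frame in a window, then aggregate into a single representation.
import torch
import torch.nn as nn

num_subnets = 4
subnets = nn.ModuleList(
    nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()) for _ in range(num_subnets)
)

frames = [torch.randn(1, 3, 64, 64) for _ in range(num_subnets)]  # a window of frames

# Each sub-network extracts a partial set of features from a different frame.
partial_features = [subnets[i](frames[i]) for i in range(num_subnets)]

# Aggregate the partial features into a strong representation for the latest frame.
strong_features = torch.cat(partial_features, dim=1)   # (1, 8 * num_subnets, 64, 64)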
-
Publication Number: US11178374B2
Publication Date: 2021-11-16
Application Number: US16428201
Filing Date: 2019-05-31
Applicant: Adobe Inc.
Inventors: Stephen DiVerdi , Seth Walker , Oliver Wang , Cuong Nguyen
IPC Classification: H04N13/111 , H04N13/282 , H04N13/383
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific-filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific-filter parameters to render a filtered version of the 360-degree video.
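The sketch below illustrates one way view-specific filter parameters could be interpolated from spatial keyframes, weighting each keyframe by the angular proximity of its stored view direction to the current field of view. The keyframe format, weighting scheme, and parameter names are hypothetical, not taken from the disclosure.

# Illustrative sketch only: interpolate filter parameters from spatial keyframes
# based on how closely each keyframe's view direction matches the current view.
import numpy as np

keyframes = [
    {"direction": np.array([1.0, 0.0, 0.0]), "params": np.array([1.1, 0.9])},  # e.g. gain, saturation
    {"direction": np.array([0.0, 0.0, 1.0]), "params": np.array([0.8, 1.2])},
]

def interpolate_params(view_direction, keyframes, eps=1e-6):
    view = view_direction / np.linalg.norm(view_direction)
    weights, params = [], []
    for kf in keyframes:
        d = kf["direction"] / np.linalg.norm(kf["direction"])
        angle = np.arccos(np.clip(np.dot(view, d), -1.0, 1.0))
        weights.append(1.0 / (angle + eps))       # closer view directions weigh more
        params.append(kf["params"])
    weights = np.array(weights) / np.sum(weights)
    return np.average(np.stack(params), axis=0, weights=weights)

current_view = np.array([0.7, 0.0, 0.7])          # stand-in for the detected field of view
view_specific_params = interpolate_params(current_view, keyframes)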
-
Publication Number: US11158090B2
Publication Date: 2021-10-26
Application Number: US16692503
Filing Date: 2019-11-22
Applicant: Adobe Inc.
Inventors: Tharun Mohandoss , Pulkit Gera , Oliver Wang , Kartik Sethi , Kalyan Sunkavalli , Elya Shechtman , Chetan Nanda
Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
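As a toy illustration of the generator/discriminator roles described above, the sketch below wires up minimal networks with those interfaces: the generator consumes a source and a reference image and emits a modified source, and the discriminator scores how likely the pair is color-matched. The layer choices and image sizes are placeholders, not the trained models from the disclosure.

# Illustrative sketch only: generator/discriminator interfaces for shot matching.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Source and reference images stacked on the channel axis -> modified source.
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, source, reference):
        return self.net(torch.cat([source, reference], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, candidate, reference):
        # Likelihood that the candidate is color-matched with the reference.
        return self.net(torch.cat([candidate, reference], dim=1))

source, reference = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
generator, discriminator = Generator(), Discriminator()
modified_source = generator(source, reference)
match_likelihood = discriminator(modified_source, reference)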
-
Publication Number: US20210160466A1
Publication Date: 2021-05-27
Application Number: US16696160
Filing Date: 2019-11-26
Applicant: Adobe Inc.
Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that capture feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
-