-
Publication Number: US20200312009A1
Publication Date: 2020-10-01
Application Number: US16368548
Application Date: 2019-03-28
Applicant: ADOBE INC.
Inventor: Xin Sun, Nathan Aaron Carr, Alexandr Kuznetsov
Abstract: Images are rendered from deeply learned raytracing parameters. Active learning, via a machine learning (ML) model (e.g., implemented by a deep neural network), is used to automatically determine, infer, and/or predict optimized, or at least somewhat optimized, values for parameters used in raytracing methods. Utilizing deep learning to determine these values is in contrast to conventional methods, which require users to rely on heuristics for parameter value setting. In various embodiments, one or more parameters regarding the termination and splitting of traced light paths in stochastic-based (e.g., Monte Carlo) raytracing are determined via active learning. In some embodiments, one or more parameters regarding the sampling rate of shadow rays are also determined.
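As a rough, hypothetical illustration of learned termination and splitting (not code from the patent), the sketch below shows a toy path tracer whose Russian-roulette survival probability and split count come from a predictor function rather than a fixed throughput heuristic; `predict_rr_params`, `shade`, and `sample_bsdf` are placeholder names, and a real predictor would be a trained neural network.

```python
# Toy sketch: Russian-roulette termination and path splitting driven by a
# predictor instead of a fixed heuristic. All functions are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def predict_rr_params(throughput, depth):
    """Hypothetical stand-in for the ML model: returns (survival_prob, n_splits)."""
    p = float(np.clip(np.max(throughput), 0.05, 1.0))  # placeholder policy
    n = 2 if p > 0.9 and depth < 2 else 1
    return p, n

def shade(throughput):
    """Toy direct-lighting term at the current hit point."""
    return float(np.mean(throughput)) * 0.1

def sample_bsdf():
    """Toy BSDF attenuation for one sampled continuation direction."""
    return rng.uniform(0.3, 0.9, size=3)

def trace_path(throughput, depth, max_depth=8):
    if depth >= max_depth:
        return 0.0
    radiance = shade(throughput)
    p_survive, n_splits = predict_rr_params(throughput, depth)
    if rng.random() >= p_survive:          # terminate the path
        return radiance
    for _ in range(n_splits):              # split into n_splits continuations
        child = throughput * sample_bsdf() / (p_survive * n_splits)
        radiance += trace_path(child, depth + 1, max_depth)
    return radiance

print(trace_path(np.ones(3), depth=0))
```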
-
Publication Number: US10650599B2
Publication Date: 2020-05-12
Application Number: US16029205
Application Date: 2018-07-06
Applicant: Adobe Inc.
Inventor: Xin Sun, Nathan Carr, Hao Qin
Abstract: The present disclosure includes methods and systems for rendering digital images of a virtual environment utilizing full path space learning. In particular, one or more embodiments of the disclosed systems and methods estimate a global light transport function based on sampled paths within a virtual environment. Moreover, in one or more embodiments, the disclosed systems and methods utilize the global light transport function to sample additional paths. Accordingly, the disclosed systems and methods can iteratively update an estimated global light transport function and utilize the estimated global light transport function to focus path sampling on regions of a virtual environment most likely to impact rendering a digital image of the virtual environment from a particular camera perspective.
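The iterative loop can be pictured with a heavily simplified sketch (my own construction, not the disclosed estimator): path space is discretized into bins, noisy contribution samples update a running transport estimate, and that estimate becomes the sampling distribution for the next pass.

```python
# Simplified sketch: refine an estimated sampling distribution over a
# discretized path space and reuse it to draw the next batch of paths.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 64                                   # coarse discretization of path space
true_contrib = np.abs(np.sin(np.linspace(0, 3 * np.pi, n_bins)))  # stand-in scene

estimate = np.ones(n_bins)                    # initial uniform transport estimate
for _ in range(10):
    pdf = estimate / estimate.sum()
    bins = rng.choice(n_bins, size=256, p=pdf)           # sample paths from estimate
    samples = np.maximum(true_contrib[bins] + 0.1 * rng.standard_normal(bins.size), 0.0)
    per_bin, counts = np.zeros(n_bins), np.zeros(n_bins)
    np.add.at(per_bin, bins, samples)
    np.add.at(counts, bins, 1.0)
    observed = np.divide(per_bin, counts, out=np.zeros_like(per_bin), where=counts > 0)
    # blend observations into the running estimate; keep a small floor so no
    # region of path space is ever starved of samples
    estimate = 0.7 * estimate + 0.3 * np.maximum(observed, 1e-3)

print("correlation with true contribution:",
      round(float(np.corrcoef(estimate, true_contrib)[0, 1]), 3))
```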
-
Publication Number: US20190355095A1
Publication Date: 2019-11-21
Application Number: US15980367
Application Date: 2018-05-15
Applicant: Adobe Inc.
Abstract: In some embodiments, a computing device uses a blue noise sampling operation to identify source pixels from an input image, each defining a respective pixel set. Each pixel set is associated with a respective weight matrix for a down-scaling operation. The blue noise sampling operation causes an overlap region between first and second pixel sets. The computing device assigns an overlap pixel in the overlap region to the first weight matrix based on the overlap pixel being closer to the first source pixel than to the second source pixel. The computing device modifies the second weight matrix to exclude the overlap pixel from a portion of the down-scaling operation involving the second weight matrix. The computing device performs the down-scaling operation on the input image by combining the first pixel set into a first target pixel with the first weight matrix and combining the second pixel set into a second target pixel with the modified second weight matrix.
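A minimal sketch of the assignment step follows, with two simplifying assumptions of mine: a jittered grid stands in for the blue noise sampling, and Gaussian footprints stand in for the weight matrices. Pixels in an overlap are assigned to the nearest source pixel and excluded from every other pixel set before the weighted combination.

```python
# Sketch: nearest-source ownership resolves overlaps before downscaling.
import numpy as np

rng = np.random.default_rng(2)
h, w, grid = 32, 32, 8
image = rng.random((h, w))

# stand-in for blue noise sampling: one jittered source pixel per grid cell
src = np.array([(gy * grid + rng.integers(grid), gx * grid + rng.integers(grid))
                for gy in range(h // grid) for gx in range(w // grid)])

ys, xs = np.mgrid[0:h, 0:w]
d2 = (ys[..., None] - src[:, 0]) ** 2 + (xs[..., None] - src[:, 1]) ** 2  # (h, w, n)
owner = d2.argmin(axis=-1)                        # overlap pixels go to the nearer source
weights = np.exp(-d2 / (2.0 * (grid / 2) ** 2))   # stand-in weight matrices

targets = np.zeros(len(src))
for k in range(len(src)):
    wk = weights[..., k] * (owner == k)           # exclude pixels owned by other sets
    targets[k] = (image * wk).sum() / wk.sum()

print(targets.reshape(h // grid, w // grid).round(3))   # down-scaled image
```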
-
Publication Number: US10290146B2
Publication Date: 2019-05-14
Application Number: US15335069
Application Date: 2016-10-26
Applicant: ADOBE INC.
Inventor: Zhili Chen, Xin Sun, Nathan Carr
Abstract: Techniques disclosed herein display depth effects in digital artwork based on movement of a display. In one technique, a first rendering of the digital artwork is displayed on the display. While the first rendering is displayed, a movement of the display is determined based on motion information from a motion sensor associated with the display. Based on the movement of the display, a position of the digital artwork is determined relative to a fixed gaze direction and a fixed light direction in a three-dimensional (3D) model. A second rendering of the digital artwork is then displayed on the display. Displaying the second rendering involves displaying a depth effect based on the variable depth of the digital artwork and the position of the digital artwork relative to the fixed gaze direction and the fixed light direction in the 3D model.
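As an illustrative approximation (my own toy model, not the disclosed rendering pipeline), the sketch below rotates a height-mapped artwork according to a device yaw reading while the gaze and light directions stay fixed, producing a shading change plus a small depth-dependent parallax shift.

```python
# Toy depth effect: fixed gaze and light, artwork pose driven by device motion.
import numpy as np

def rotation_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

rng = np.random.default_rng(3)
h, w = 64, 64
height = rng.random((h, w))                      # per-pixel depth of the artwork
albedo = rng.random((h, w))
gaze = np.array([0.0, 0.0, -1.0])                # fixed in the 3D model
light = np.array([0.5, 0.5, -1.0]) / np.linalg.norm([0.5, 0.5, -1.0])

def render(device_yaw_rad):
    R = rotation_y(device_yaw_rad)               # artwork pose from the motion sensor
    gy, gx = np.gradient(height)                 # height-field surface normals
    normals = np.stack([-gx, -gy, np.ones_like(height)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    normals = normals @ R.T                      # rotate normals with the artwork
    shading = np.clip(normals @ -light, 0.0, 1.0)
    view = R.T @ gaze                            # gaze in the artwork's frame
    shift = (height * view[0] * 8.0).astype(int) # depth-dependent parallax offset
    cols = (np.arange(w) + shift) % w
    return np.take_along_axis(albedo * shading, cols, axis=1)

before, after = render(0.0), render(0.15)
print(float(np.abs(after - before).mean()))      # nonzero: the rendering changed
```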
-
Publication Number: US20250029323A1
Publication Date: 2025-01-23
Application Number: US18354855
Application Date: 2023-07-19
Applicant: Adobe Inc.
Inventor: Krishna Bhargava Mullia Lakshminarayana, Xin Sun, Miloš Hašan, Fujun Luan
Abstract: Techniques for generating compressed representations of the appearance of fiber-based digital assets are described that support computationally efficient and high-fidelity rendering of digital assets that include fiber primitives under a variety of lighting conditions and view directions. A processing device, for instance, receives a digital asset that includes fiber primitives to be included in a three-dimensional digital scene. The processing device generates a compressed representation of the digital asset that maintains the geometry of the digital asset and includes a precomputed light transport. The processing device then inserts the compressed representation into the digital scene, such as at a location relative to one or more light sources, and applies one or more lighting effects to the compressed representation based on the precomputed light transport and the location relative to the one or more light sources.
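The precomputed light transport can be pictured with a precomputed-radiance-transfer style sketch (an assumption on my part about the flavor of representation, not the described encoding): each fiber sample stores a small transport vector over a lighting basis, so relighting the inserted asset reduces to a dot product per sample.

```python
# PRT-style sketch: relight fiber samples from a precomputed transport table.
import numpy as np

rng = np.random.default_rng(4)
n_fiber_samples, n_basis = 10_000, 9             # e.g., a 3-band lighting basis

positions = rng.random((n_fiber_samples, 3))     # preserved fiber geometry
transport = rng.random((n_fiber_samples, n_basis)) * 0.1   # precomputed offline

def relight(transport, light_coeffs):
    """Shade every fiber sample under lighting expressed in the same basis."""
    return transport @ light_coeffs              # per-sample radiance

light_a, light_b = rng.random(n_basis), rng.random(n_basis)  # two environments
print(float(relight(transport, light_a).mean()),
      float(relight(transport, light_b).mean()))
```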
-
Publication Number: US11823313B2
Publication Date: 2023-11-21
Application Number: US17332773
Application Date: 2021-05-27
Applicant: Adobe Inc.
Inventor: Xin Sun, Sohrab Amirghodsi, Nathan Carr, Michal Lukac
CPC classification number: G06T11/60, G06V10/758
Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media for generating a modified digital image by identifying patch matches within a digital image utilizing a Gaussian mixture model. For example, the systems described herein can identify sample patches and corresponding matching portions within a digital image. The systems can also identify transformations between the sample patches and the corresponding matching portions. Based on the transformations, the systems can generate a Gaussian mixture model, and the systems can modify a digital image by replacing a target region with target matching portions identified in accordance with the Gaussian mixture model.
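A minimal sketch of the mixture-model step, simplified to 2D translation offsets only and using scikit-learn's GaussianMixture; the offsets here are synthetic and `propose_sources` is a hypothetical helper, not the disclosed system's interface.

```python
# Sketch: fit a Gaussian mixture over patch-match offsets, then sample it to
# propose source locations for pixels in a target (hole) region.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# synthetic offsets (dy, dx), clustered around two dominant repetition vectors
offsets = np.vstack([rng.normal([40, 0], 2.0, size=(200, 2)),
                     rng.normal([0, -55], 2.0, size=(200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(offsets)

def propose_sources(target_pixels, n_candidates=4):
    """Propose candidate source pixels by sampling offsets from the mixture."""
    proposals = []
    for ty, tx in target_pixels:
        cand, _ = gmm.sample(n_candidates)
        proposals.append([(int(ty + dy), int(tx + dx)) for dy, dx in cand])
    return proposals

print(propose_sources([(100, 120), (101, 121)]))
```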
-
Publication Number: US20230037591A1
Publication Date: 2023-02-09
Application Number: US17383294
Application Date: 2021-07-22
Applicant: Adobe Inc.
Inventor: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map. Additionally, the disclosed system generates a modified digital image in which the inserted three-dimensional object is lit consistently with the digital image.
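The self-occlusion map can be sketched as below, with one simplification of mine: the object is approximated by sphere proxies so that ray intersection stays a few lines, whereas the actual system would test the object's own geometry.

```python
# Sketch: per-point self-occlusion from a fixed set of uniformly sampled rays.
import numpy as np

rng = np.random.default_rng(6)

def fixed_ray_directions(n=64):
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)   # uniform on the sphere

def ray_hits_sphere(origin, direction, center, radius):
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return False
    return -b + np.sqrt(disc) > 1e-4                       # any hit in front of origin

spheres = [(np.array([0.0, 0.0, 0.0]), 1.0),               # proxy object geometry
           (np.array([1.5, 0.0, 0.0]), 0.7)]
visible_points = [np.array([0.0, 0.0, 1.0]), np.array([1.5, 0.0, 0.7])]

dirs = fixed_ray_directions()                              # same rays for every point
occlusion_map = np.array([[any(ray_hits_sphere(p, d, c, r) for c, r in spheres)
                           for d in dirs] for p in visible_points], dtype=float)
print(occlusion_map.mean(axis=1))                          # fraction occluded per point
```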
-
Publication Number: US20230017659A1
Publication Date: 2023-01-19
Application Number: US17365043
Application Date: 2021-07-01
Applicant: Adobe Inc.
Inventor: Theo Thonat, Xin Sun, Tamy Boubekeur, Nathan Carr, Francois Beaune
Abstract: Aspects and features of the present disclosure provide a direct ray tracing operator with a low memory footprint for surfaces enriched with displacement maps. A graphics editing application can be used to manipulate displayed representations of a 3D object that include surfaces with displacement textures. The application creates an independent map of a displaced surface. The application ray-traces bounding volumes on the fly and uses the intersection of a query ray with a bounding volume to produce rendering information for a displaced surface. The rendering information can be used to generate displaced surfaces for various base surfaces without significant re-computation so that updated images can be rendered quickly, in real time or near real time.
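A stripped-down sketch of the bounding step, with axis-aligned boxes standing in for whatever bounding volumes the operator actually builds: the box for a displaced patch is derived on the fly from the base patch extent and the displacement tile's min/max, and the query ray is tested against it before any finer intersection work.

```python
# Sketch: bound a displaced patch on the fly, then slab-test the query ray.
import numpy as np

def ray_aabb(origin, direction, box_min, box_max):
    """Slab test: True if the ray hits the axis-aligned box."""
    inv = 1.0 / np.where(direction == 0.0, 1e-12, direction)
    t0, t1 = (box_min - origin) * inv, (box_max - origin) * inv
    t_near = np.minimum(t0, t1).max()
    t_far = np.maximum(t0, t1).min()
    return t_far >= max(t_near, 0.0)

def displaced_patch_bounds(patch_min, patch_max, disp_tile, normal, scale):
    """Extrude the base patch's box along its normal by the displacement range."""
    lo, hi = float(disp_tile.min()) * scale, float(disp_tile.max()) * scale
    corners = np.array([patch_min + normal * lo, patch_min + normal * hi,
                        patch_max + normal * lo, patch_max + normal * hi])
    return corners.min(axis=0), corners.max(axis=0)

rng = np.random.default_rng(7)
disp_tile = rng.random((16, 16))                      # one tile of the displacement map
b_min, b_max = displaced_patch_bounds(np.array([0.0, 0.0, 0.0]),
                                      np.array([1.0, 1.0, 0.0]),
                                      disp_tile, np.array([0.0, 0.0, 1.0]), 0.2)
ray_o, ray_d = np.array([0.5, 0.5, 2.0]), np.array([0.0, 0.0, -1.0])
print(ray_aabb(ray_o, ray_d, b_min, b_max))           # True: refine this patch
```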
-
Publication Number: US11380023B2
Publication Date: 2022-07-05
Application Number: US16823092
Application Date: 2020-03-18
Applicant: Adobe Inc.
Inventor: Xin Sun, Ruben Villegas, Manuel Lagunas Arto, Jimei Yang, Jianming Zhang
Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in the image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and the predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The shading map is then scaled by the albedo map to produce the relit image, which can be refined to reduce noise.
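The final composition step is simple enough to show directly; the sketch below assumes spherical-harmonics-style illumination coefficients and uses random arrays in place of the decoder outputs.

```python
# Sketch of the composition: shading = transport x light coefficients,
# relit image = albedo * shading.
import numpy as np

rng = np.random.default_rng(8)
h, w, n_coeffs = 128, 96, 9                      # e.g., 3-band spherical harmonics

albedo_map = rng.random((h, w, 3))               # stand-ins for decoder outputs
light_transport = rng.random((h, w, n_coeffs)) * 0.2
target_light = rng.random(n_coeffs)              # target illumination coefficients

shading_map = light_transport @ target_light     # (h, w) shading map
relit = albedo_map * shading_map[..., None]      # scale by albedo per channel
print(relit.shape, float(relit.mean()))
```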
-
Publication Number: US10950038B2
Publication Date: 2021-03-16
Application Number: US16800783
Application Date: 2020-02-25
Applicant: ADOBE INC.
Inventor: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured while a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector. The resulting illumination of the VO matches the second combination of the intensities and surface reflections.
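A small sketch of the weighting-vector idea, with a least-squares fit assumed as the solver (the abstract does not commit to that choice): basis images captured per light source during preprocessing are combined so their weighted superposition matches the runtime frame, and the same weights then light the virtual object.

```python
# Sketch: recover the illumination-weighting vector, then relight the VO.
import numpy as np

rng = np.random.default_rng(9)
n_lights, n_pixels = 3, 5000

env_basis = rng.random((n_lights, n_pixels))      # one basis image per light source
vo_basis = rng.random((n_lights, 300))            # VO rendered under each source alone

true_weights = np.array([0.8, 0.1, 0.5])          # unknown runtime mixture
runtime_frame = true_weights @ env_basis + 0.02 * rng.random(n_pixels)

# fit weights so the superposition of basis images reproduces the runtime frame
weights, *_ = np.linalg.lstsq(env_basis.T, runtime_frame, rcond=None)

vo_shaded = weights @ vo_basis                    # light the VO with the same mixture
print(np.round(weights, 3))                       # close to the true mixture
```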