Dynamic mapping of virtual and physical interactions

    Publication Number: US10957103B2

    Publication Date: 2021-03-23

    Application Number: US15934000

    Application Date: 2018-03-23

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for performing dynamic mapping between a virtual environment and a real-world space. During dynamic mapping, the current virtual scene of the virtual environment that is within view of a user is prioritized over areas of the virtual environment that are out of view. The dynamic mapping between the virtual environment and the real-world space can be utilized to render a virtual scene for a user in real time. As the user interacts with and/or moves within the virtual environment, dynamic mapping can be performed in real time to capture any dynamic changes to the real-world space and/or the virtual environment.
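The in-view prioritization described above can be sketched as follows. This is a minimal illustration, not the patented method: the region representation, field-of-view test, and nearest-first ordering are all assumptions made for the example.

```python
import math

def prioritize_regions(regions, user_pos, user_dir, fov_deg=90.0):
    """Order virtual regions so those inside the user's field of view
    are mapped first, nearest first; out-of-view regions follow.
    `regions` is a list of (name, (x, y)) virtual-space positions;
    all names and structures here are illustrative assumptions."""
    half_fov = math.radians(fov_deg) / 2.0
    facing = math.atan2(user_dir[1], user_dir[0])
    in_view, out_of_view = [], []
    for name, (x, y) in regions:
        dx, dy = x - user_pos[0], y - user_pos[1]
        dist = math.hypot(dx, dy)
        angle = abs(math.atan2(dy, dx) - facing)
        angle = min(angle, 2 * math.pi - angle)  # wrap into [0, pi]
        (in_view if angle <= half_fov else out_of_view).append((dist, name))
    return [n for _, n in sorted(in_view)] + [n for _, n in sorted(out_of_view)]
```

Running the mapping loop over this ordering means the visible scene is always resolved first, so changes in the real-world space affect what the user currently sees before they affect anything else.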

    Beautifying freeform drawings using arc and circle center snapping

    Publication Number: US10762674B2

    Publication Date: 2020-09-01

    Application Number: US16752902

    Application Date: 2020-01-27

    Applicant: Adobe Inc.

    Abstract: Embodiments of the present invention are directed to beautifying freeform input paths in accordance with paths existing in the drawing (i.e., resolved paths). In some embodiments of the present invention, freeform input paths of a curved format can be modified or replaced to more precisely illustrate a path desired by a user. As such, a user can provide a freeform input path that resembles the path the user intends but is not as precise as desired. Based on existing paths in the electronic drawing, one or more path suggestions can be generated to rectify, modify, or replace the input path with a more precise path. In some cases, a user can then select a desired path suggestion, and the selected path then replaces the initially provided freeform input path.
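One common building block for this kind of beautification is fitting a circle to the freeform stroke and then snapping the fitted center to a center already present in the drawing. The sketch below uses a least-squares (Kåsa) circle fit and a simple nearest-center snap with a tolerance; the function names and the tolerance value are assumptions for illustration, not the patent's algorithm.

```python
import math
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: rewrite x^2 + y^2 = 2*cx*x +
    2*cy*y + k and solve the resulting linear system for (cx, cy, k);
    the radius is then sqrt(k + cx^2 + cy^2)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    (cx, cy, k), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = math.sqrt(k + cx ** 2 + cy ** 2)
    return (cx, cy), radius

def snap_center(center, existing_centers, tol=0.5):
    """Replace the fitted center with the nearest center already in
    the drawing when it lies within `tol`; otherwise keep the fit."""
    if not existing_centers:
        return center
    best = min(existing_centers, key=lambda c: math.dist(center, c))
    return best if math.dist(center, best) <= tol else center
```

With the snapped center fixed, the beautified arc or circle can be regenerated through the original stroke's endpoints, which is how concentric arcs end up sharing a single center.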

    Procedural modeling using autoencoder neural networks

    Publication Number: US10552730B2

    Publication Date: 2020-02-04

    Application Number: US14788178

    Application Date: 2015-06-30

    Applicant: ADOBE INC.

    Abstract: An intuitive object-generation experience is provided by employing an autoencoder neural network to reduce the dimensionality of a procedural model. A set of sample objects are generated using the procedural model. In embodiments, the sample objects may be selected according to visual features such that the sample objects are uniformly distributed in visual appearance. Both procedural model parameters and visual features from the sample objects are used to train an autoencoder neural network, which maps a small number of new parameters to the larger number of procedural model parameters of the original procedural model. A user interface may be provided that allows users to generate new objects by adjusting the new parameters of the trained autoencoder neural network, which outputs procedural model parameters. The output procedural model parameters may be provided to the procedural model to generate the new objects.
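The parameter-reduction idea can be illustrated with a minimal stand-in: a linear autoencoder fit in closed form (whose optimum coincides with PCA), rather than the deeper neural network the abstract describes. All sizes, the synthetic "procedural model," and the sample counts below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "procedural model": 8 parameters that in fact vary along
# only 2 underlying factors (all dimensions here are illustrative).
latent_true = rng.uniform(-1, 1, size=(200, 2))
mix = rng.normal(size=(2, 8))
samples = latent_true @ mix            # 200 sampled parameter vectors

# Linear autoencoder fit in closed form: the optimal linear encoder/
# decoder pair is given by the top principal components of the samples.
mean = samples.mean(axis=0)
_, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
encode = lambda p: (p - mean) @ vt[:2].T    # 8 params -> 2 new params
decode = lambda z: z @ vt[:2] + mean        # 2 new params -> 8 params

# A user adjusts the two new parameters in the interface; decoding
# yields a full procedural-parameter vector for the procedural model.
new_object_params = decode(np.array([0.5, -0.25]))
```

In the patented pipeline the decoder would be the trained network's output layer, and the small latent vector would be exposed as the interface sliders the user manipulates.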
