GENERATING ANIMATIONS IN AN AUGMENTED REALITY ENVIRONMENT

    Publication No.: US20210256751A1

    Publication Date: 2021-08-19

    Application No.: US17231180

    Filing Date: 2021-04-15

    Applicant: Adobe Inc.

    IPC Class: G06T13/20 G06T19/00

    Abstract: The present disclosure relates to an AR animation generation system that detects a change in position of a mobile computing system in a real-world environment, determines that a position for a virtual object in an augmented reality (AR) scene is to be changed from a first position in the AR scene to a second position in the AR scene, identifies an animation profile to be used for animating the virtual object, wherein the animation profile is associated with the virtual object, and animates the virtual object in the AR scene using the animation profile. Animating the virtual object in the AR scene includes moving the virtual object in the AR scene from the first position to the second position along a path, wherein the path and a movement of the virtual object along the path are determined based on the animation profile.
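
    For illustration only, here is a minimal Python sketch of the kind of profile-driven move the abstract describes: the virtual object is interpolated from the first position to the second along a path whose timing and shape are taken from an animation profile. The AnimationProfile fields, the easing options, and the parabolic "hop" arc are assumptions made for this sketch, not the patented implementation.

```python
# Illustrative sketch (not Adobe's implementation) of moving a virtual object
# between two AR-scene positions along a path shaped by an animation profile.
from dataclasses import dataclass
import math

@dataclass
class AnimationProfile:
    duration_s: float          # total time to travel the path
    easing: str                # "linear" or "ease_in_out" (assumed options)
    arc_height: float = 0.0    # vertical arc added to the path (e.g., a hop)

def _ease(t: float, easing: str) -> float:
    """Map normalized time t in [0, 1] to eased progress in [0, 1]."""
    if easing == "ease_in_out":
        return 0.5 - 0.5 * math.cos(math.pi * t)
    return t  # linear fallback

def position_at(t: float, start, end, profile: AnimationProfile):
    """Return the object's (x, y, z) at time t seconds after the move begins."""
    u = _ease(min(t / profile.duration_s, 1.0), profile.easing)
    x = start[0] + (end[0] - start[0]) * u
    y = start[1] + (end[1] - start[1]) * u
    z = start[2] + (end[2] - start[2]) * u
    # The profile can also bend the path, here as a parabolic hop in y.
    y += profile.arc_height * 4.0 * u * (1.0 - u)
    return (x, y, z)

if __name__ == "__main__":
    hop = AnimationProfile(duration_s=1.0, easing="ease_in_out", arc_height=0.3)
    for step in range(5):
        t = step * 0.25
        print(t, position_at(t, (0.0, 0.0, 0.0), (1.0, 0.0, -2.0), hop))
```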

    Particle-based spatial audio visualization

    Publication No.: US10791412B2

    Publication Date: 2020-09-29

    Application No.: US16790469

    Filing Date: 2020-02-13

    Applicant: Adobe Inc.

    IPC Class: H04S7/00

    Abstract: Methods and systems are provided for visualizing spatial audio using properties determined for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, they are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
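
    As a rough illustration of how such per-segment properties might be computed, the sketch below analyzes one time segment of first-order ambisonic (B-format W/X/Y/Z) audio. The B-format assumption, the intensity-vector direction estimate, the "focus" heuristic, and the centroid-to-hue mapping are all choices made for this example, not the method claimed in the patent.

```python
# Illustrative per-segment analysis of first-order ambisonic audio (assumed format).
import numpy as np

def segment_properties(w, x, y, z, sample_rate=48000):
    """Return (azimuth_rad, elevation_rad, intensity, focus, hue) for one segment."""
    # Direction: average acoustic intensity vector (W correlated with X/Y/Z).
    ix, iy, iz = np.mean(w * x), np.mean(w * y), np.mean(w * z)
    azimuth = np.arctan2(iy, ix)
    elevation = np.arctan2(iz, np.hypot(ix, iy))

    # Intensity: RMS energy of the omnidirectional channel.
    intensity = float(np.sqrt(np.mean(w ** 2)))

    # Focus: heuristic for how directional the segment is (0 = diffuse, 1 = point-like).
    vec = np.linalg.norm([ix, iy, iz])
    focus = min(float(vec / (np.mean(w ** 2) + 1e-12)), 1.0)

    # Color: map the spectral centroid of W to a hue (low frequencies = red, high = blue).
    spectrum = np.abs(np.fft.rfft(w))
    freqs = np.fft.rfftfreq(len(w), 1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    hue = min(centroid / 8000.0, 1.0) * 0.66
    return azimuth, elevation, intensity, focus, hue

if __name__ == "__main__":
    t = np.linspace(0, 0.1, 4800, endpoint=False)
    w = np.sin(2 * np.pi * 440 * t)          # omni channel
    x, y, z = 0.7 * w, 0.7 * w, 0.0 * w      # source toward +X/+Y, no elevation
    print(segment_properties(w, x, y, z))
```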

    Facilitating synchronization of motion imagery and audio

    Publication No.: US10453494B2

    Publication Date: 2019-10-22

    Application No.: US15403035

    Filing Date: 2017-01-10

    Applicant: Adobe Inc.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating synchronization of audio with motion imagery. In embodiments, an indication to create a relationship between an audio feature associated with audio and an imagery feature associated with motion imagery is received. Thereafter, a relationship is created between the audio feature and the imagery feature in accordance with an instance or a time duration at which to synchronize the audio with the motion imagery. Based on the relationship between the audio feature and the imagery feature, the imagery feature is automatically manipulated in relation to the audio feature at the designated instance or time duration.
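
    The sketch below is a minimal, hypothetical illustration of such a relationship: a FeatureLink maps a normalized audio level (the audio feature) to the scale of a visual layer (the imagery feature) over a designated time duration. The class name, the linear mapping, and the stand-in loudness envelope are assumptions for the example, not the patented method.

```python
# Minimal sketch of linking an audio feature to an imagery feature over a time span.
from dataclasses import dataclass
import math

@dataclass
class FeatureLink:
    start_s: float     # beginning of the synchronized time span
    end_s: float       # end of the synchronized time span
    min_value: float   # imagery-feature value at the quietest moment
    max_value: float   # imagery-feature value at the loudest moment

    def apply(self, t: float, audio_level: float, current: float) -> float:
        """Return the imagery-feature value at time t given a 0..1 audio level."""
        if not (self.start_s <= t <= self.end_s):
            return current  # outside the linked span, leave the feature alone
        return self.min_value + (self.max_value - self.min_value) * audio_level

def audio_level_at(t: float) -> float:
    """Stand-in audio feature: a pulsing loudness envelope in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * 2.0 * t)

if __name__ == "__main__":
    link = FeatureLink(start_s=1.0, end_s=3.0, min_value=1.0, max_value=1.5)
    for frame in range(0, 100, 20):
        t = frame / 25.0                      # 25 fps timeline
        scale = link.apply(t, audio_level_at(t), current=1.0)
        print(f"t={t:.2f}s scale={scale:.2f}")
```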

    PARTICLE-BASED SPATIAL AUDIO VISUALIZATION
    Invention Application

    Publication No.: US20190149941A1

    Publication Date: 2019-05-16

    Application No.: US16218207

    Filing Date: 2018-12-12

    Applicant: Adobe Inc.

    IPC Class: H04S7/00

    Abstract: Methods and systems are provided for visualizing spatial audio using properties determined for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, they are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.

    Generating animations in an augmented reality environment

    Publication No.: US10984574B1

    Publication Date: 2021-04-20

    Application No.: US16692521

    Filing Date: 2019-11-22

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to an AR animation generation system that identifies an animation profile for animating a virtual object displayed in an augmented reality (AR) scene. The AR animation generation system creates a link between the virtual object and a mobile computing system based upon a position of the virtual object within the AR scene and a position of the mobile computing system in a real-world environment. The link enables determining, for each position of the mobile computing system in the real-world environment, a corresponding position for the virtual object in the AR scene. The AR animation generation system animates the virtual object using the mobile computing system by detecting a change in the position of the mobile computing system in the real-world environment from a first position to a second position and using the link to determine a corresponding change in the position of the virtual object in the AR scene from a first position to a second position. The AR animation generation system then updates the AR scene to display the virtual object at the second position in the AR scene.
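
    A minimal sketch of the link the abstract describes, under the assumption that the link is a fixed positional offset captured when it is created: each new device position then yields a corresponding object position by re-applying that offset. The class and method names are illustrative.

```python
# Illustrative device-to-object link: capture an offset once, then re-apply it.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class DeviceObjectLink:
    offset: Vec3  # object position minus device position at link time

    @classmethod
    def create(cls, device_pos: Vec3, object_pos: Vec3) -> "DeviceObjectLink":
        return cls(tuple(o - d for o, d in zip(object_pos, device_pos)))

    def object_position_for(self, device_pos: Vec3) -> Vec3:
        """Corresponding object position for a new device position."""
        return tuple(d + o for d, o in zip(device_pos, self.offset))

if __name__ == "__main__":
    link = DeviceObjectLink.create(device_pos=(0.0, 1.5, 0.0),
                                   object_pos=(0.5, 1.0, -2.0))
    # Device moves 1 m forward (negative z); the linked object follows.
    print(link.object_position_for((0.0, 1.5, -1.0)))  # (0.5, 1.0, -3.0)
```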

    Immersive media content navigation and editing techniques

    Publication No.: US10649638B2

    Publication Date: 2020-05-12

    Application No.: US15889628

    Filing Date: 2018-02-06

    Applicant: Adobe Inc.

    Abstract: Techniques and systems to support immersive media content navigation and editing are described. A two-dimensional equirectangular projection of a spherical video is generated by a computing device and displayed in a navigator portion of a user interface of a content editing application. A visual position indicator, indicative of a position within the spherical video, is displayed over the 2D equirectangular projection of the spherical video. A portion of the spherical video is determined based on the position, and a planar spherical view of the portion of the spherical video is generated by the computing device and displayed in a compositor portion of the user interface. The navigator portion and the compositor portion are linked such that user input to the navigator portion or the compositor portion of the user interface causes corresponding visual changes in both the navigator portion and the compositor portion of the user interface.
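
    As an illustration of the navigator/compositor link, the sketch below maps a position on the 2D equirectangular projection to a yaw/pitch viewport that defines the portion of the spherical video shown as a planar view, and back again. The function names, the normalized coordinates, and the default field of view are assumptions for this example, not the patented technique.

```python
# Illustrative mapping between a navigator position and a compositor viewport.
from dataclasses import dataclass

@dataclass
class Viewport:
    yaw_deg: float    # rotation around the vertical axis
    pitch_deg: float  # rotation up/down from the horizon
    fov_deg: float    # horizontal field of view of the planar spherical view

def navigator_to_viewport(u: float, v: float, fov_deg: float = 90.0) -> Viewport:
    """Map a normalized navigator position (u, v in [0, 1]) to a viewport.

    u = 0 is the left edge of the equirectangular frame (yaw -180 degrees),
    v = 0 is the top edge (pitch +90 degrees).
    """
    yaw = (u - 0.5) * 360.0
    pitch = (0.5 - v) * 180.0
    return Viewport(yaw_deg=yaw, pitch_deg=pitch, fov_deg=fov_deg)

def viewport_to_navigator(vp: Viewport) -> tuple:
    """Inverse mapping, so edits in the compositor can move the position indicator."""
    u = vp.yaw_deg / 360.0 + 0.5
    v = 0.5 - vp.pitch_deg / 180.0
    return (u, v)

if __name__ == "__main__":
    vp = navigator_to_viewport(0.75, 0.5)   # click three-quarters across, at the horizon
    print(vp)                               # Viewport(yaw_deg=90.0, pitch_deg=0.0, fov_deg=90.0)
    print(viewport_to_navigator(vp))        # back to (0.75, 0.5)
```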

    Immersive Media Content Navigation and Editing Techniques

    Publication No.: US20190243530A1

    Publication Date: 2019-08-08

    Application No.: US15889628

    Filing Date: 2018-02-06

    Applicant: Adobe Inc.

    Abstract: Techniques and systems to support immersive media content navigation and editing are described. A two-dimensional equirectangular projection of a spherical video is generated by a computing device and displayed in a navigator portion of a user interface of a content editing application. A visual position indicator, indicative of a position within the spherical video, is displayed over the 2D equirectangular projection of the spherical video. A portion of the spherical video is determined based on the position, and a planar spherical view of the portion of the spherical video is generated by the computing device and displayed in a compositor portion of the user interface. The navigator portion and the compositor portion are linked such that user input to the navigator portion or the compositor portion of the user interface causes corresponding visual changes in both the navigator portion and the compositor portion of the user interface.

    PARTICLE-BASED SPATIAL AUDIO VISUALIZATION
    Invention Application

    Publication No.: US20200186957A1

    Publication Date: 2020-06-11

    Application No.: US16790469

    Filing Date: 2020-02-13

    Applicant: Adobe Inc.

    IPC Class: H04S7/00

    Abstract: Methods and systems are provided for visualizing spatial audio using properties determined for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, they are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
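
    Complementing the analysis sketch shown after the first listing of this abstract, the example below turns per-segment properties (direction, intensity, focus, hue) into a particle cloud for rendering. The particle count, angular spread, and color mapping are choices made for this illustration, not the patented rendering method.

```python
# Illustrative mapping from per-segment audio properties to particle attributes.
import colorsys
import math
import random

def particles_for_segment(azimuth, elevation, intensity, focus, hue,
                          max_particles=200, distance=2.0):
    """Return a list of (x, y, z, rgb) particles for one audio time segment."""
    # Louder segments emit more particles; more focused segments cluster tighter.
    count = max(1, int(max_particles * min(intensity, 1.0)))
    spread = (1.0 - focus) * 0.5          # radians of angular scatter
    rgb = colorsys.hsv_to_rgb(hue, 1.0, 1.0)

    particles = []
    for _ in range(count):
        az = azimuth + random.uniform(-spread, spread)
        el = elevation + random.uniform(-spread, spread)
        # Place the particle on a sphere around the listener.
        x = distance * math.cos(el) * math.cos(az)
        y = distance * math.sin(el)
        z = distance * math.cos(el) * math.sin(az)
        particles.append((x, y, z, rgb))
    return particles

if __name__ == "__main__":
    cloud = particles_for_segment(azimuth=math.pi / 4, elevation=0.1,
                                  intensity=0.8, focus=0.9, hue=0.3)
    print(len(cloud), cloud[0])
```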

    Particle-based spatial audio visualization

    Publication No.: US10575119B2

    Publication Date: 2020-02-25

    Application No.: US16218207

    Filing Date: 2018-12-12

    Applicant: Adobe Inc.

    IPC Class: H04S7/00

    Abstract: Methods and systems are provided for visualizing spatial audio using properties determined for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, they are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.