TRANSFORMING STATIC TWO-DIMENSIONAL IMAGES INTO IMMERSIVE COMPUTER-GENERATED CONTENT

    Publication (Announcement) Number: US20220165024A1

    Publication (Announcement) Date: 2022-05-26

    Application Number: US17103848

    Application Date: 2020-11-24

    Abstract: A method for transforming static two-dimensional images into immersive computer-generated content includes various operations performed by a processing system including at least one processor. In one example, the operations include extracting a plurality of physical features of a media asset from a plurality of two-dimensional images of the media asset, constructing a three-dimensional model of the media asset, based on the plurality of physical features, extracting a plurality of narrative elements associated with the media asset from the plurality of two-dimensional images of the media asset, building a hierarchy of a narrative for the media asset, based on at least a subset of the plurality of narrative elements, and creating an immersive experience based on the three-dimensional model and the hierarchy of the narrative.
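
    The claimed operations form a pipeline from a set of two-dimensional images to an immersive experience. Below is a minimal Python sketch of that flow, assuming hypothetical extractors and data structures: none of the function or class names come from the patent, and each placeholder stands in for unspecified feature-extraction and model-construction techniques.

        from dataclasses import dataclass

        # Every name in this sketch is a hypothetical illustration of the claimed
        # operations; the abstract does not specify data structures or algorithms.

        @dataclass
        class ImmersiveExperience:
            model_3d: dict
            narrative_hierarchy: dict

        def extract_physical_features(images: list) -> list:
            # Placeholder: per-image physical features (e.g., outlines, textures, scale cues).
            return [{"image_index": i, "features": {}} for i, _ in enumerate(images)]

        def construct_3d_model(features: list) -> dict:
            # Placeholder: fuse the per-image physical features into one 3-D model.
            return {"mesh": None, "source_features": features}

        def extract_narrative_elements(images: list) -> list:
            # Placeholder: narrative elements (e.g., characters, settings, events) per image.
            return [{"image_index": i, "elements": []} for i, _ in enumerate(images)]

        def build_narrative_hierarchy(elements: list) -> dict:
            # Placeholder: arrange a subset of the elements into a hierarchy (acts, scenes, beats).
            return {"acts": [], "source_elements": elements}

        def create_immersive_experience(images: list) -> ImmersiveExperience:
            features = extract_physical_features(images)
            elements = extract_narrative_elements(images)
            return ImmersiveExperience(
                model_3d=construct_3d_model(features),
                narrative_hierarchy=build_narrative_hierarchy(elements),
            )

        experience = create_immersive_experience([b"<image 1>", b"<image 2>"])
        print(experience.narrative_hierarchy)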

    Systems and methods for spatial remodeling in extended reality

    Publication (Announcement) Number: US11189106B2

    Publication (Announcement) Date: 2021-11-30

    Application Number: US16859709

    Application Date: 2020-04-27

    Abstract: Aspects of the subject disclosure may include, for example, storing, in a database, a decorating style preference of a user; receiving, from user equipment of the user, one or more images (and/or one or more 2D environment models and/or one or more 3D environment models) depicting an environment in which remodeling is desired; generating, via a machine learning process, a first model to present by the user equipment, the generating the first model being based upon the decorating style preference and the one or more images (and/or the one or more 2D environment models and/or the one or more 3D environment models), the first model comprising a first remodeling proposal for the environment; sending, to the user equipment, the first model, the sending of the first model facilitating display by the user equipment of a first depiction of the environment as proposed by the first remodeling proposal; receiving, from the user equipment, feedback information regarding the first remodeling proposal; generating, via the machine learning process, a second model to present by the user equipment, the generating the second model being based upon the decorating style preference, the one or more images (and/or the one or more 2D environment models and/or the one or more 3D environment models), and the feedback information, the second model comprising a second remodeling proposal for the environment; and sending, to the user equipment, the second model, the sending of the second model facilitating display by the user equipment of a second depiction of the environment as proposed by the second remodeling proposal. Other embodiments are disclosed.
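
    Stripped of the claim language, this is an iterative generate-and-refine loop driven by user feedback rather than any specific model. A minimal Python sketch of that round trip follows, assuming a hypothetical generate_proposal function and RemodelingProposal structure; the example inputs are illustrative only.

        from dataclasses import dataclass
        from typing import Optional

        # Hypothetical names throughout; the abstract claims a flow, not an API.

        @dataclass
        class RemodelingProposal:
            revision: int
            description: str

        def generate_proposal(style: str, images: list,
                              feedback: Optional[str] = None) -> RemodelingProposal:
            # Placeholder for the machine learning process in the abstract: a real system
            # would condition a generative model on the stored style preference, the
            # submitted environment images/models, and any feedback on earlier proposals.
            revision = 1 if feedback is None else 2
            description = f"{style} remodel of the submitted environment"
            if feedback:
                description += f" (revised per feedback: {feedback})"
            return RemodelingProposal(revision=revision, description=description)

        # Round trip as claimed: first proposal, user feedback, second proposal.
        style_preference = "mid-century modern"        # stored decorating style preference
        environment_images = [b"<room photo>"]         # received from user equipment
        first = generate_proposal(style_preference, environment_images)
        user_feedback = "keep the existing flooring"   # received from user equipment
        second = generate_proposal(style_preference, environment_images, user_feedback)
        print(first)
        print(second)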

    METHOD AND SYSTEM FOR PERSONALIZING METAVERSE OBJECT RECOMMENDATIONS OR REVIEWS

    Publication (Announcement) Number: US20230410159A1

    Publication (Announcement) Date: 2023-12-21

    Application Number: US17841045

    Application Date: 2022-06-15

    CPC classification number: G06Q30/0282 G06Q30/0201

    Abstract: Aspects of the subject disclosure may include, for example, obtaining contextual information associated with a user, wherein the user is engaged in an immersive environment using a target user device, and wherein the contextual information comprises user profile data, data regarding a location of the user, data regarding one or more inputs provided by the user, or a combination thereof, receiving data regarding a metaverse object in the immersive environment, determining a relevance of the metaverse object to the user based on the contextual information and the data regarding the metaverse object, responsive to the determining the relevance of the metaverse object to the user, generating a personalized recommendation or review of the metaverse object for the user, and causing the personalized recommendation or review to be provided to the user in the immersive environment for user consumption. Other embodiments are disclosed.
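
    The method amounts to scoring a metaverse object's relevance from the user's context and producing a personalized recommendation only when that relevance is established. The Python sketch below illustrates the decision flow with assumed Context and MetaverseObject structures and a toy scoring rule; the abstract does not define how relevance is actually computed.

        from dataclasses import dataclass
        from typing import Optional

        # Context, MetaverseObject, and the scoring rule are hypothetical illustrations.

        @dataclass
        class Context:
            interests: set          # derived from user profile data
            location: str           # where the user is in the immersive environment
            recent_inputs: list     # inputs the user has provided

        @dataclass
        class MetaverseObject:
            name: str
            tags: set
            location: str

        def relevance(ctx: Context, obj: MetaverseObject) -> int:
            # Toy score: shared interest tags, plus a bonus for co-location.
            score = len(ctx.interests & obj.tags)
            if ctx.location == obj.location:
                score += 1
            return score

        def recommend(ctx: Context, obj: MetaverseObject, threshold: int = 2) -> Optional[str]:
            # Generate a personalized recommendation only when the object is relevant enough.
            if relevance(ctx, obj) >= threshold:
                shared = ", ".join(sorted(ctx.interests & obj.tags)) or "this area"
                return f"You may like {obj.name}; it matches your interest in {shared}."
            return None

        ctx = Context(interests={"sneakers", "music"}, location="plaza",
                      recent_inputs=["browse shoes"])
        obj = MetaverseObject(name="the virtual sneaker kiosk", tags={"sneakers", "retail"},
                              location="plaza")
        print(recommend(ctx, obj))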

    User-driven adaptation of immersive experiences

    Publication (Announcement) Number: US11675419B2

    Publication (Announcement) Date: 2023-06-13

    Application Number: US17103858

    Application Date: 2020-11-24

    Abstract: A method includes obtaining a set of components that, when collectively rendered, presents an immersive experience, extracting a narrative from the set of components, learning a plurality of details of the immersive experience that exhibit variance, based on an analysis of the set of components and an analysis of the narrative, presenting a device of a creator of the immersive experience with an identification of the plurality of details, receiving, from the device of the creator, an input, wherein the input defines a variant for a default segment of one component of the set of components, and wherein the variant presents an altered form of at least one detail of the plurality of details that is presented in the default segment, and storing the set of components, the variant, and information indicating how and when to present the variant to a user device in place of the default segment.
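
    The final storage step pairs each component's default segment with creator-defined variants and the rules for how and when a variant is presented in place of the default. A minimal Python sketch of one such representation follows; Segment, Variant, Component, and the applies_when predicate are names introduced here, as the patent does not prescribe a data structure.

        from dataclasses import dataclass, field
        from typing import Callable, List

        # All names below are hypothetical illustrations of the claimed storage.

        @dataclass
        class Segment:
            content: str

        @dataclass
        class Variant:
            content: str
            applies_when: Callable[[dict], bool]   # the "how and when" presentation rule

        @dataclass
        class Component:
            default_segment: Segment
            variants: List[Variant] = field(default_factory=list)

            def render(self, user_context: dict) -> str:
                # Serve a variant in place of the default segment when its rule matches.
                for variant in self.variants:
                    if variant.applies_when(user_context):
                        return variant.content
                return self.default_segment.content

        scene = Component(default_segment=Segment("daytime city street"))
        scene.variants.append(Variant(content="rain-soaked night street",
                                      applies_when=lambda ctx: ctx.get("prefers_noir", False)))
        print(scene.render({"prefers_noir": True}))   # -> rain-soaked night street
        print(scene.render({}))                       # -> daytime city street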

    EXTERNAL AUDIO ENHANCEMENT VIA SITUATIONAL DETECTION MODELS FOR WEARABLE AUDIO DEVICES

    Publication (Announcement) Number: US20230063988A1

    Publication (Announcement) Date: 2023-03-02

    Application Number: US18045451

    Application Date: 2022-10-10

    Abstract: A processing system including at least one processor may capture data from a sensor comprising a microphone of a wearable device, the data comprising external audio data captured via the microphone, determine first audio data of a first audio source in the external audio data, apply the first audio data to a first situational detection model, and detect a first situation via the first situational detection model. The processing system may then modify, in response to detecting the first situation via the first situational detection model, the external audio data via a change to the first audio data in the external audio data to generate modified audio data, in accordance with at least a first audio adjustment corresponding to the first situational detection model, where the modifying comprises increasing or decreasing a volume of the first audio data, and present the modified audio data via an earphone of the wearable device.
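
    At a high level the claimed flow is: run captured external audio through a situational detection model and, when a situation is detected, raise or lower the volume of the implicated audio data before presenting it through the earphone. The Python sketch below illustrates that flow with an assumed SituationalDetectionModel class and a siren example, standing in for whatever trained detectors and signal processing a real device would use. The same structure extends to multiple concurrent models, each contributing its own adjustment.

        from dataclasses import dataclass
        from typing import Callable, List

        # SituationalDetectionModel, its gain field, and the siren example are
        # assumptions for illustration; a real system would run trained classifiers
        # on audio buffers rather than label lists.

        @dataclass
        class SituationalDetectionModel:
            name: str
            detect: Callable[[dict], bool]   # True when the situation is present in the audio
            gain: float                      # volume adjustment applied when detected

        def enhance(external_audio: dict, models: List[SituationalDetectionModel]) -> dict:
            modified = dict(external_audio)
            for model in models:
                if model.detect(external_audio):
                    # Increase or decrease the volume of the detected audio source.
                    modified["volume"] *= model.gain
            return modified

        # Example situation: an approaching siren is amplified so the wearer notices it.
        siren_model = SituationalDetectionModel(
            name="approaching siren",
            detect=lambda audio: "siren" in audio["labels"],
            gain=2.0,
        )
        captured = {"labels": ["siren", "traffic"], "volume": 0.4}
        print(enhance(captured, [siren_model]))   # volume doubled to 0.8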

    EXTERNAL AUDIO ENHANCEMENT VIA SITUATIONAL DETECTION MODELS FOR WEARABLE AUDIO DEVICES

    Publication (Announcement) Number: US20220167075A1

    Publication (Announcement) Date: 2022-05-26

    Application Number: US17101501

    Application Date: 2020-11-23

    Abstract: A processing system including at least one processor may capture data from a sensor comprising a microphone of a wearable device, the data comprising external audio data captured via the microphone, determine first audio data of a first audio source in the external audio data, apply the first audio data to a first situational detection model, and detect a first situation via the first situational detection model. The processing system may then modify, in response to detecting the first situation via the first situational detection model, the external audio data via a change to the first audio data in the external audio data to generate modified audio data, in accordance with at least a first audio adjustment corresponding to the first situational detection model, where the modifying comprises increasing or decreasing a volume of the first audio data, and present the modified audio data via an earphone of the wearable device.
