Generating Adapted Virtual Content to Spatial Characteristics of a Physical Setting

    Publication Number: US20220012951A1

    Publication Date: 2022-01-13

    Application Number: US17484996

    Application Date: 2021-09-24

    Applicant: Apple Inc.

    Abstract: In some implementations, a method includes: identifying a plurality of subsets associated with a physical environment; determining a set of spatial characteristics for each of the plurality of subsets, wherein a first set of spatial characteristics characterizes dimensions of a first subset and a second set of spatial characteristics characterizes dimensions of a second subset; generating an adapted first extended reality (XR) content portion for the first subset based at least in part on the first set of spatial characteristics; generating an adapted second XR content portion for the second subset based at least in part on the second set of spatial characteristics; and generating one or more navigation options that allow a user to traverse between the first and second subsets based on the first and second sets of spatial characteristics.
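The method steps in the abstract above can be sketched as follows. All names, the fit-to-smallest-dimension scaling rule, and the 1.0 m walkability threshold are hypothetical illustrations, not the claimed implementation:

```python
from dataclasses import dataclass

@dataclass
class SpatialCharacteristics:
    # Hypothetical dimensions (meters) characterizing one subset of the
    # physical environment, e.g. one room.
    width: float
    depth: float
    height: float

def adapt_content_portion(base_size: float, chars: SpatialCharacteristics) -> float:
    """Adapt an XR content portion so it fits within the subset's dimensions."""
    return min(base_size, chars.width, chars.depth, chars.height)

def navigation_options(first: SpatialCharacteristics,
                       second: SpatialCharacteristics) -> list:
    """Generate traversal options based on both sets of spatial characteristics."""
    options = ["walk"]
    # If either subset is too narrow to walk through comfortably (assumed
    # threshold), also offer a teleport-style transition.
    if min(first.width, second.width) < 1.0:
        options.append("teleport")
    return options

room_a = SpatialCharacteristics(4.0, 3.0, 2.5)
room_b = SpatialCharacteristics(0.8, 2.0, 2.5)
print(adapt_content_portion(3.5, room_a))   # 2.5 (shrunk to fit the ceiling)
print(navigation_options(room_a, room_b))   # ['walk', 'teleport']
```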

    Generating Adapted Virtual Content to Spatial Characteristics of a Physical Setting

    Publication Number: US20250069340A1

    Publication Date: 2025-02-27

    Application Number: US18812432

    Application Date: 2024-08-22

    Applicant: Apple Inc.

    Abstract: In some implementations, a method includes: identifying a plurality of subsets associated with a physical environment; determining a set of spatial characteristics for each of the plurality of subsets, wherein a first set of spatial characteristics characterizes dimensions of a first subset and a second set of spatial characteristics characterizes dimensions of a second subset; generating an adapted first extended reality (XR) content portion for the first subset based at least in part on the first set of spatial characteristics; generating an adapted second XR content portion for the second subset based at least in part on the second set of spatial characteristics; and generating one or more navigation options that allow a user to traverse between the first and second subsets based on the first and second sets of spatial characteristics.

    Method and device for eye tracking using event camera data

    Publication Number: US12105280B2

    Publication Date: 2024-10-01

    Application Number: US17961963

    Application Date: 2022-10-07

    Applicant: Apple Inc.

    Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
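The event-camera branch of the abstract can be illustrated with a minimal sketch. The event-message format and the centroid heuristic below are hypothetical stand-ins, not the claimed eye tracking computation:

```python
# Each event message is modeled as (x, y): the location of the light sensor
# that detected an intensity change. As a crude illustration, the centroid of
# recent events stands in for an eye tracking characteristic such as a gaze
# position over the glint region.
def gaze_estimate(events):
    """Determine a gaze estimate from a batch of event messages."""
    if not events:
        return None
    xs = [x for x, _ in events]
    ys = [y for _, y in events]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

events = [(10, 20), (12, 22), (14, 24)]
print(gaze_estimate(events))  # (12.0, 22.0)
```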

    MODIFYING VIRTUAL CONTENT TO INVOKE A TARGET USER STATE

    Publication Number: US20220197373A1

    Publication Date: 2022-06-23

    Application Number: US17690731

    Application Date: 2022-03-09

    Applicant: Apple Inc.

    Abstract: In one implementation, a method includes: while presenting reference CGR content, obtaining a request from a user to invoke a target state for the user; generating, based on a user model and the reference CGR content, modified CGR content to invoke the target state for the user; presenting the modified CGR content; after presenting the modified CGR content, determining a resultant state of the user; in accordance with a determination that the resultant state of the user corresponds to the target state for the user, updating the user model to indicate that the modified CGR content successfully invoked the target state for the user; and in accordance with a determination that the resultant state of the user does not correspond to the target state for the user, updating the user model to indicate that the modified CGR content did not successfully invoke the target state for the user.
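The success/failure bookkeeping described in the abstract can be sketched as a simple user-model update. The dictionary-of-outcomes representation is a hypothetical illustration of the idea, not the claimed user model:

```python
def update_user_model(user_model, content_id, target_state, resultant_state):
    """Record whether modified CGR content invoked the target state for the user.

    Returns True on success (resultant state matches the target state) and
    appends the outcome to the model so future modifications can consult it.
    """
    success = (resultant_state == target_state)
    user_model.setdefault((content_id, target_state), []).append(success)
    return success

model = {}
update_user_model(model, "calming_scene_v2", "relaxed", "relaxed")  # success
update_user_model(model, "calming_scene_v2", "relaxed", "anxious")  # failure
print(model[("calming_scene_v2", "relaxed")])  # [True, False]
```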

    Systems and Methods for Providing Personalized Saliency Models

    Publication Number: US20220092331A1

    Publication Date: 2022-03-24

    Application Number: US17448456

    Application Date: 2021-09-22

    Applicant: Apple Inc.

    Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
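The personalize-then-merge flow in the abstract resembles a federated update. The per-region score dictionaries and blending rates below are hypothetical; the patent does not specify this representation:

```python
def personalize(global_model, reactions, learning_rate=0.5):
    """Form a personalized saliency model by blending observed user reactions
    (per-region strengths in [0, 1]) into a copy of the global model."""
    personal = dict(global_model)
    for region, reaction in reactions.items():
        prior = personal.get(region, 0.0)
        personal[region] = prior + learning_rate * (reaction - prior)
    return personal

def merge_into_global(global_model, personal_model, weight=0.1):
    """Fold a transmitted personal model back into the global model with a
    small weight, so one user's reactions nudge rather than overwrite it."""
    merged = dict(global_model)
    for region, score in personal_model.items():
        prior = merged.get(region, 0.0)
        merged[region] = prior + weight * (score - prior)
    return merged

g = {"faces": 0.8, "text": 0.5}
p = personalize(g, {"text": 1.0})
print(p["text"])  # 0.75
```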

    METHOD AND DEVICE FOR EYE TRACKING USING EVENT CAMERA DATA

    Publication Number: US20220003994A1

    Publication Date: 2022-01-06

    Application Number: US17481272

    Application Date: 2021-09-21

    Applicant: Apple Inc.

    Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.

    WARPING AN INPUT IMAGE BASED ON DEPTH AND OFFSET INFORMATION

    Publication Number: US20230419439A1

    Publication Date: 2023-12-28

    Application Number: US18244513

    Application Date: 2023-09-11

    Applicant: Apple Inc.

    CPC classification number: G06T3/0093 G06T11/00 G06T7/50 G06T2207/20084

    Abstract: Various implementations disclosed herein include a method performed at an electronic device including one or more processors, a non-transitory memory, an image sensor, and a display device. The method includes obtaining, via the image sensor, an input image that includes an object. The method includes obtaining depth information characterizing the object, wherein the depth information characterizes a first distance between the image sensor and a portion of the object. The method includes determining a distance warp map for the input image based on a function of the depth information and a first offset value characterizing an estimated distance between eyes of a user and the display device. The method includes setting an operational parameter for the electronic device based on the distance warp map and generating, by the electronic device set to the operational parameter, a warped image from the input image.
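The relationship between depth, eye-to-display offset, and warp strength can be sketched as below. The specific formula (warp strength falling off with object distance relative to the offset) is an assumed illustration, not the claimed distance warp map:

```python
def distance_warp_map(depth_map, eye_to_display_offset):
    """Compute a per-pixel warp magnitude from object depth and the estimated
    eye-to-display distance. Closer objects (small depth relative to the
    offset) receive a stronger warp; far objects approach zero warp."""
    return [[eye_to_display_offset / (d + eye_to_display_offset) for d in row]
            for row in depth_map]

depth = [[1.0, 3.0],
         [0.5, 9.0]]
print(distance_warp_map(depth, 1.0))  # [[0.5, 0.25], [0.666..., 0.1]]
```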

    Systems and methods for providing personalized saliency models

    Publication Number: US11854242B2

    Publication Date: 2023-12-26

    Application Number: US17448456

    Application Date: 2021-09-22

    Applicant: Apple Inc.

    CPC classification number: G06V10/462 G06F18/21 G06V10/40 G06V30/274

    Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.

    Modifying virtual content to invoke a target user state

    Publication Number: US11703944B2

    Publication Date: 2023-07-18

    Application Number: US17690731

    Application Date: 2022-03-09

    Applicant: Apple Inc.

    CPC classification number: G06F3/011 G06N20/00 G06F2203/011

    Abstract: In one implementation, a method includes: while presenting reference CGR content, obtaining a request from a user to invoke a target state for the user; generating, based on a user model and the reference CGR content, modified CGR content to invoke the target state for the user; presenting the modified CGR content; after presenting the modified CGR content, determining a resultant state of the user; in accordance with a determination that the resultant state of the user corresponds to the target state for the user, updating the user model to indicate that the modified CGR content successfully invoked the target state for the user; and in accordance with a determination that the resultant state of the user does not correspond to the target state for the user, updating the user model to indicate that the modified CGR content did not successfully invoke the target state for the user.

    Method and device for utilizing physical objects and physical usage patterns for presenting virtual content

    Publication Number: US11532137B2

    Publication Date: 2022-12-20

    Application Number: US17231917

    Application Date: 2021-04-15

    Applicant: Apple Inc.

    Abstract: In some implementations, a method includes: determining first usage patterns associated with a physical object within the physical environment; obtaining a first objective for an objective-effectuator (OE) instantiated in a computer-generated reality (CGR) environment, wherein the first objective is associated with a representation of the physical object; obtaining a first directive for the OE that limits actions for performance by the OE to achieve the first objective to the first usage patterns associated with the physical object; generating first actions, for performance by the OE, in order to achieve the first objective as limited by the first directive, wherein the first actions correspond to a first subset of the first usage patterns associated with the physical object; and presenting the OE performing the first actions on the representation of the physical object overlaid on the physical environment.
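The directive that limits an objective-effectuator's actions to observed usage patterns can be sketched as a simple filter. The action names, the set-based pattern representation, and the `max_steps` objective field are hypothetical:

```python
def plan_actions(candidate_actions, usage_patterns, objective):
    """Generate actions for an objective-effectuator, limited by a directive
    to the usage patterns observed for the physical object."""
    allowed = [a for a in candidate_actions if a in usage_patterns]
    # Truncate to the number of steps the (hypothetical) objective allows.
    return allowed[:objective.get("max_steps", len(allowed))]

# Usage patterns observed for, e.g., a laptop on a desk.
patterns = {"open_lid", "press_key", "close_lid"}
candidates = ["throw", "press_key", "open_lid", "close_lid"]
print(plan_actions(candidates, patterns, {"max_steps": 2}))
# ['press_key', 'open_lid'] -- "throw" is filtered out by the directive
```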
