    Audience Engagement
    1.
    Invention Publication
    Status: Published, Under Examination

    Publication No.: US20240338160A1

    Publication Date: 2024-10-10

    Application No.: US18578777

    Filing Date: 2022-08-22

    Applicant: APPLE INC.

    CPC classification number: G06F3/14 G06F3/011 G06T19/006 G06V40/172 G10L17/00

    Abstract: Various implementations disclosed herein include devices, systems, and methods for displaying presentation notes at varying positions within a presenter's field of view. In some implementations, a device includes a display, one or more processors, and a memory. A first portion of a media content item corresponding to a presentation is displayed at a first location in a three-dimensional environment. Audience engagement data corresponding to an engagement level of a member of an audience is received. A second portion of the media content item is displayed at a second location in the three-dimensional environment. The second location is selected based on the audience engagement data.
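
    The location-selection step described above lends itself to a simple illustration. The following is a minimal Swift sketch, not taken from the patent; all type and function names (Vector3, AudienceEngagement, selectLocation) are hypothetical. The idea shown: when the reported engagement level drops, the next portion of the notes is biased toward the audience's direction so the presenter keeps facing the audience.

```swift
import Foundation

// Hypothetical names throughout; a minimal sketch of the selection step in the abstract.

struct Vector3 {
    var x: Double, y: Double, z: Double
}

struct AudienceEngagement {
    /// Normalized engagement level for one audience member, 0.0 (disengaged) to 1.0 (engaged).
    let level: Double
}

/// Picks a location for the next content portion. When engagement is low, the notes are
/// pulled toward the audience's direction so the presenter keeps looking at the audience;
/// when engagement is high, the notes stay near the default off-axis position.
func selectLocation(for engagement: AudienceEngagement,
                    defaultLocation: Vector3,
                    audienceDirection: Vector3) -> Vector3 {
    let pull = 1.0 - max(0.0, min(1.0, engagement.level))   // how strongly to bias toward the audience
    return Vector3(
        x: defaultLocation.x + pull * (audienceDirection.x - defaultLocation.x),
        y: defaultLocation.y + pull * (audienceDirection.y - defaultLocation.y),
        z: defaultLocation.z + pull * (audienceDirection.z - defaultLocation.z)
    )
}

// Example: engagement has dropped to 0.3, so the second portion is placed
// most of the way toward the audience's line of sight.
let nextLocation = selectLocation(
    for: AudienceEngagement(level: 0.3),
    defaultLocation: Vector3(x: 0.8, y: 1.4, z: -1.0),
    audienceDirection: Vector3(x: 0.0, y: 1.5, z: -2.0)
)
print(nextLocation)
```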

    Ambient Augmented Language Tutoring
    2.
    Invention Publication

    Publication No.: US20230290270A1

    Publication Date: 2023-09-14

    Application No.: US18112450

    Filing Date: 2023-02-21

    Applicant: Apple Inc.

    CPC classification number: G09B19/06 G06T19/006 G06F3/011

    Abstract: Devices, systems, and methods are disclosed that facilitate learning a language in an extended reality (XR) environment. This may involve identifying objects or activities in the environment, identifying a context associated with the user or the environment, and providing language teaching content based on the objects, activities, or contexts. In one example, the language teaching content provides individual words, phrases, or sentences corresponding to the objects, activities, or contexts. In another example, the language teaching content requests user interaction (e.g., via quiz questions or educational games) corresponding to the objects, activities, or contexts. Context may be used to determine whether or how to provide the language teaching content. For example, based on a user's current course of language study (e.g., this week's vocabulary list), corresponding objects or activities may be identified in the environment for use in providing the language teaching content.
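
    As a rough illustration of the selection logic the abstract describes (detected objects filtered by the user's current course of study, then rendered as a word or a quiz prompt), here is a small, self-contained Swift sketch. All names (DetectedObject, LessonContext, teachingContent, the tiny translation table) are hypothetical stand-ins, not part of the disclosed system.

```swift
import Foundation

// Hypothetical names throughout; a minimal sketch of the content-selection logic.

struct DetectedObject {
    let label: String        // e.g. "cup", as produced by an object recognizer
}

struct LessonContext {
    let targetLanguage: String
    let vocabularyThisWeek: Set<String>   // this week's vocabulary list, keyed by English label
}

enum TeachingContent {
    case word(String)                     // show the translated word near the object
    case quiz(question: String)           // ask the user to recall it
}

/// Tiny stand-in translation table; a real system would use a dictionary service.
let spanishWords = ["cup": "la taza", "window": "la ventana", "chair": "la silla"]

func teachingContent(for object: DetectedObject,
                     context: LessonContext,
                     quizMode: Bool) -> TeachingContent? {
    // Only teach objects that belong to the user's current course of study.
    guard context.vocabularyThisWeek.contains(object.label),
          let translation = spanishWords[object.label] else { return nil }
    return quizMode
        ? .quiz(question: "What is '\(object.label)' in \(context.targetLanguage)?")
        : .word(translation)
}

let context = LessonContext(targetLanguage: "Spanish",
                            vocabularyThisWeek: ["cup", "chair"])
if case let .word(text)? = teachingContent(for: DetectedObject(label: "cup"),
                                           context: context,
                                           quizMode: false) {
    print(text)   // "la taza"
}
```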

    Proactive Actions Based on Audio and Body Movement
    3.

    Publication No.: US20220291743A1

    Publication Date: 2022-09-15

    Application No.: US17689460

    Filing Date: 2022-03-08

    Applicant: APPLE INC.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine that a user is interested in audio content by determining that a movement (e.g., a user's head bob) has a time-based relationship with detected audio content (e.g., the beat of music playing in the background). Some implementations involve obtaining first sensor data and second sensor data corresponding to a physical environment, the first sensor data corresponding to audio in the physical environment and the second sensor data corresponding to a body movement in the physical environment. A time-based relationship between one or more elements of the audio and one or more aspects of the body movement is identified based on the first sensor data and the second sensor data. An interest in content of the audio is identified based on identifying the time-based relationship. Various actions may be performed proactively based on identifying the interest in the content.
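
    The "time-based relationship" test can be pictured as comparing beat timestamps against movement-peak timestamps. The Swift sketch below, with hypothetical names and thresholds (indicatesInterest, a 0.1 s tolerance, a 70% alignment fraction), flags interest when enough head-movement peaks fall close to detected beats; it is an illustration of the idea, not the patented method.

```swift
import Foundation

// Hypothetical names and thresholds; a minimal sketch of aligning movement peaks with beats.

/// Returns true when a large enough fraction of movement peaks occur within
/// `tolerance` seconds of some detected audio beat.
func indicatesInterest(beatTimes: [TimeInterval],
                       movementPeakTimes: [TimeInterval],
                       tolerance: TimeInterval = 0.1,
                       requiredFraction: Double = 0.7) -> Bool {
    guard !movementPeakTimes.isEmpty, !beatTimes.isEmpty else { return false }
    let aligned = movementPeakTimes.filter { peak in
        beatTimes.contains { abs($0 - peak) <= tolerance }
    }
    return Double(aligned.count) / Double(movementPeakTimes.count) >= requiredFraction
}

// Example: beats at a steady 120 BPM and head bobs that track them closely.
let beats = stride(from: 0.0, through: 5.0, by: 0.5).map { $0 }
let headBobs = [0.52, 1.03, 1.48, 2.02, 2.55, 3.01]
if indicatesInterest(beatTimes: beats, movementPeakTimes: headBobs) {
    print("User appears interested in the audio content")
    // A proactive action (e.g. offering to identify or save the song) could be triggered here.
}
```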

    Method and Device for Dynamic Sensory and Input Modes Based on Contextual State
    4.

    Publication No.: US20240219998A1

    Publication Date: 2024-07-04

    Application No.: US18291979

    Filing Date: 2022-07-13

    Applicant: Apple Inc.

    CPC classification number: G06F3/011 G06T19/20 G06T2200/24 G06T2219/2016

    Abstract: In one implementation, a method is provided for dynamically changing sensory and/or input modes associated with content based on a current contextual state. The method includes: while in a first contextual state, presenting extended reality (XR) content, via the display device, according to a first presentation mode and enabling a first set of input modes to be directed to the XR content; detecting a change from the first contextual state to a second contextual state; and in response to detecting the change from the first contextual state to the second contextual state, presenting, via the display device, the XR content according to a second presentation mode different from the first presentation mode and enabling a second set of input modes to be directed to the XR content that are different from the first set of input modes.
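
    One way to picture the claimed mapping is a lookup from contextual state to a presentation mode plus a set of enabled input modes, re-evaluated whenever the state changes. The Swift sketch below uses invented states and modes (seatedAtDesk, compactBillboard, and so on) purely for illustration.

```swift
import Foundation

// Invented states and modes; a minimal sketch of state-dependent presentation and input modes.

enum ContextualState { case seatedAtDesk, walking, inConversation }
enum PresentationMode { case fullVolumetric, compactBillboard, audioOnly }
enum InputMode: Hashable { case handGesture, gaze, voice, trackpad }

struct ContentConfiguration {
    let presentation: PresentationMode
    let inputs: Set<InputMode>
}

/// Each contextual state selects both how the XR content is presented and which
/// input modes are directed to it.
func configuration(for state: ContextualState) -> ContentConfiguration {
    switch state {
    case .seatedAtDesk:
        return ContentConfiguration(presentation: .fullVolumetric,
                                    inputs: [.handGesture, .gaze, .trackpad])
    case .walking:
        return ContentConfiguration(presentation: .compactBillboard,
                                    inputs: [.gaze, .voice])
    case .inConversation:
        return ContentConfiguration(presentation: .audioOnly,
                                    inputs: [.voice])
    }
}

// Detecting a change from the first contextual state to a second one simply
// re-derives the configuration and re-presents the content accordingly.
var current = configuration(for: .seatedAtDesk)
func contextDidChange(to newState: ContextualState) {
    current = configuration(for: newState)
    print("Now presenting as \(current.presentation) with inputs \(current.inputs)")
}

contextDidChange(to: .walking)
```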

    Content Transformations Based on Reflective Object Recognition
    5.

    Publication No.: US20250005873A1

    Publication Date: 2025-01-02

    Application No.: US18829450

    Filing Date: 2024-09-10

    Applicant: Apple Inc.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. In one example, a process may include obtaining sensor data in a physical environment that includes one or more objects. The process may further include detecting a reflection of a first object of the one or more objects upon a reflective surface of a reflective object based on the sensor data. The process may further include determining a 3D position of the reflective object in the physical environment based on determining a 3D position of the reflection of the first object. The process may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
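
    The positioning step has a simple geometric reading: if the 3D position of a real object and the apparent 3D position of its reflection (behind the reflective surface) are both known, the reflective surface lies on the perpendicular bisector plane of the segment joining them. The Swift sketch below computes that plane; the type and function names are assumptions made for the example.

```swift
// Uses the standard-library SIMD3 type; no platform frameworks required.
// Hypothetical names; a geometric sketch of locating the reflective surface.

struct ReflectivePlane {
    let point: SIMD3<Float>    // a point on the reflective surface
    let normal: SIMD3<Float>   // unit normal of the surface
}

/// The mirror plane is the perpendicular bisector of the segment from the object
/// to the apparent position of its reflection behind the mirror.
func estimateReflectivePlane(objectPosition: SIMD3<Float>,
                             reflectionPosition: SIMD3<Float>) -> ReflectivePlane {
    let midpoint = (objectPosition + reflectionPosition) / 2
    let d = reflectionPosition - objectPosition
    let length = (d.x * d.x + d.y * d.y + d.z * d.z).squareRoot()
    return ReflectivePlane(point: midpoint, normal: d / length)
}

// Example: an object at x = -0.5 m whose reflection appears at x = +0.5 m implies a
// mirror in the plane x = 0. Virtual content could then be anchored at `plane.point`.
let plane = estimateReflectivePlane(objectPosition: SIMD3<Float>(-0.5, 1.0, -1.0),
                                    reflectionPosition: SIMD3<Float>(0.5, 1.0, -1.0))
print(plane.point, plane.normal)
```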

    Method and Device for Visualizing Multi-Modal Inputs
    6.

    Publication No.: US20240248532A1

    Publication Date: 2024-07-25

    Application No.: US18272261

    Filing Date: 2022-01-11

    Applicant: Apple Inc.

    Abstract: In one implementation, a method for visualizing multi-modal inputs includes: displaying a first user interface element within an extended reality (XR) environment; determining a gaze direction based on first input data; in response to determining that the gaze direction is directed to the first user interface element, displaying a focus indicator with a first appearance in association with the first user interface element; detecting a change in at least one of a head pose or a body pose of a user of the computing system; and, in response to detecting the change in pose, modifying the focus indicator from the first appearance to a second appearance different from the first appearance.
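
    A minimal way to illustrate the two-appearance behavior is a small state update driven by the current gaze target and a pose-change flag, as in the Swift sketch below. The names (FocusableElement, InputSnapshot, updateFocusIndicator) are hypothetical, and the logic is only a schematic reading of the abstract.

```swift
import Foundation

// Hypothetical names; a schematic sketch of the gaze-then-pose focus indicator behavior.

enum FocusAppearance { case none, gazeHighlight, poseEngaged }

struct FocusableElement {
    let identifier: String
    var focus: FocusAppearance = .none
}

struct InputSnapshot {
    let gazedElementID: String?   // which element the gaze direction currently hits, if any
    let poseChanged: Bool         // whether a head/body pose change was detected this frame
}

func updateFocusIndicator(for element: inout FocusableElement, with input: InputSnapshot) {
    guard input.gazedElementID == element.identifier else {
        element.focus = .none
        return
    }
    // First appearance while gaze rests on the element; second, different appearance
    // once a change in head or body pose is detected.
    element.focus = input.poseChanged ? .poseEngaged : .gazeHighlight
}

var button = FocusableElement(identifier: "confirm-button")
updateFocusIndicator(for: &button, with: InputSnapshot(gazedElementID: "confirm-button",
                                                       poseChanged: false))
print(button.focus)   // gazeHighlight
updateFocusIndicator(for: &button, with: InputSnapshot(gazedElementID: "confirm-button",
                                                       poseChanged: true))
print(button.focus)   // poseEngaged
```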
