Audience Engagement
    1.
    Invention Publication (Pending - Published)

    Publication No.: US20240338160A1

    Publication Date: 2024-10-10

    Application No.: US18578777

    Filing Date: 2022-08-22

    Applicant: APPLE INC.

    Abstract: Various implementations disclosed herein include devices, systems, and methods for displaying presentation notes at varying positions within a presenter's field of view. In some implementations, a device includes a display, one or more processors, and a memory. A first portion of a media content item corresponding to a presentation is displayed at a first location in a three-dimensional environment. Audience engagement data corresponding to an engagement level of a member of an audience is received. A second portion of the media content item is displayed at a second location in the three-dimensional environment. The second location is selected based on the audience engagement data.
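    The placement logic described above can be illustrated with a short sketch. The Swift below is a hypothetical, simplified reading of the abstract, not code from the application: EngagementSample, NoteAnchor, selectNoteLocation, and the 0.5 threshold are all assumed names and values. It shows one way a second display location could be chosen from per-member engagement data.

```swift
import Foundation

// Hypothetical per-member engagement sample; the abstract only requires
// "audience engagement data corresponding to an engagement level of a member
// of an audience".
struct EngagementSample {
    let memberID: UUID
    let level: Double                // 0.0 (disengaged) ... 1.0 (fully engaged)
    let gazeDirection: SIMD3<Float>  // that member's gaze direction, presenter-relative
}

// Candidate anchors for the next portion of the media content item.
enum NoteAnchor {
    case nearPodium(SIMD3<Float>)      // keep notes at the first location
    case towardAudience(SIMD3<Float>)  // second location, offset toward the audience
}

// Select where to place the second portion of the presentation notes.
// If average engagement drops below the threshold, the notes are shifted
// toward the audience so the presenter naturally faces them while reading.
func selectNoteLocation(current: SIMD3<Float>,
                        samples: [EngagementSample],
                        threshold: Double = 0.5) -> NoteAnchor {
    guard !samples.isEmpty else { return .nearPodium(current) }

    let average = samples.map(\.level).reduce(0, +) / Double(samples.count)
    if average < threshold {
        // Average the audience gaze directions and offset the notes along that direction.
        let sum = samples.map(\.gazeDirection).reduce(SIMD3<Float>(repeating: 0), +)
        let meanDirection = sum / Float(samples.count)
        return .towardAudience(current + meanDirection)
    }
    return .nearPodium(current)
}
```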

    Ambient Augmented Language Tutoring
    2.
    Invention Publication

    Publication No.: US20230290270A1

    Publication Date: 2023-09-14

    Application No.: US18112450

    Filing Date: 2023-02-21

    Applicant: Apple Inc.

    IPC Classes: G09B19/06 G06T19/00 G06F3/01

    Abstract: Devices, systems, and methods are disclosed that facilitate learning a language in an extended reality (XR) environment. This may involve identifying objects or activities in the environment, identifying a context associated with the user or the environment, and providing language teaching content based on the objects, activities, or contexts. In one example, the language teaching content provides individual words, phrases, or sentences corresponding to the objects, activities, or contexts. In another example, the language teaching content requests user interaction (e.g., via quiz questions or educational games) corresponding to the objects, activities, or contexts. Context may be used to determine whether or how to provide the language teaching content. For example, based on a user's current course of language study (e.g., this week's vocabulary list), corresponding objects or activities may be identified in the environment for use in providing the language teaching content.
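    As a rough illustration of how detected objects, learner context, and teaching content could fit together, the Swift sketch below filters detections against the user's current vocabulary list and emits either a word overlay or a quiz item. All types and names (DetectedObject, LearnerContext, TeachingContent, teachingContent(for:context:asQuiz:)) are hypothetical; the application does not specify this structure.

```swift
import Foundation

// Hypothetical output of an object/activity recognition pipeline.
struct DetectedObject {
    let label: String           // e.g. "cup", "window"
    let position: SIMD3<Float>  // where the object sits in the XR environment
}

// Hypothetical learner context, e.g. "this week's vocabulary list".
struct LearnerContext {
    let targetLanguage: String
    let currentVocabulary: Set<String>   // labels the user is currently studying
    let translations: [String: String]   // label -> target-language word
}

// Two of the content forms mentioned in the abstract: a single word anchored
// to an object, or an interactive quiz question.
enum TeachingContent {
    case labelOverlay(word: String, at: SIMD3<Float>)
    case quiz(question: String, answer: String)
}

// Provide teaching content only for objects that match the learner's current
// course of study, using context to decide whether and what to show.
func teachingContent(for detections: [DetectedObject],
                     context: LearnerContext,
                     asQuiz: Bool) -> [TeachingContent] {
    detections.compactMap { (object) -> TeachingContent? in
        guard context.currentVocabulary.contains(object.label),
              let word = context.translations[object.label] else { return nil }
        if asQuiz {
            return .quiz(question: "What is the \(context.targetLanguage) word for '\(object.label)'?",
                         answer: word)
        }
        return .labelOverlay(word: word, at: object.position)
    }
}
```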

    Proactive Actions Based on Audio and Body Movement

    Publication No.: US20220291743A1

    Publication Date: 2022-09-15

    Application No.: US17689460

    Filing Date: 2022-03-08

    Applicant: APPLE INC.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine that a user is interested in audio content by determining that a movement (e.g., a user's head bob) has a time-based relationship with detected audio content (e.g., the beat of music playing in the background). Some implementations involve obtaining first sensor data and second sensor data corresponding to a physical environment, the first sensor data corresponding to audio in the physical environment and the second sensor data corresponding to a body movement in the physical environment. A time-based relationship between one or more elements of the audio and one or more aspects of the body movement is identified based on the first sensor data and the second sensor data. An interest in content of the audio is identified based on identifying the time-based relationship. Various actions may be performed proactively based on identifying the interest in the content.
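    The core of this abstract is a timing comparison between audio events and body-movement events. The Swift sketch below is one simple, hypothetical way to test for such a time-based relationship: it compares the mean inter-beat interval with the mean interval between movement peaks and accepts a match at an integer multiple of the beat period (e.g. nodding every beat or every other beat). The function name, the 0.15 tolerance, and the interval-averaging approach are assumptions, not details from the application.

```swift
import Foundation

// Decide whether body movement is "in time" with the audio.
// `beatTimes` are timestamps (seconds) of detected audio beats; `movementPeakTimes`
// are timestamps of detected movement peaks (e.g. the bottom of each head bob).
func hasTimeBasedRelationship(beatTimes: [TimeInterval],
                              movementPeakTimes: [TimeInterval],
                              tolerance: Double = 0.15) -> Bool {
    // Need at least two events on each side to form intervals.
    guard beatTimes.count >= 2, movementPeakTimes.count >= 2 else { return false }

    func meanInterval(_ times: [TimeInterval]) -> Double {
        let sorted = times.sorted()
        let deltas = (1..<sorted.count).map { sorted[$0] - sorted[$0 - 1] }
        return deltas.reduce(0, +) / Double(deltas.count)
    }

    let beatPeriod = meanInterval(beatTimes)
    let movementPeriod = meanInterval(movementPeakTimes)
    guard beatPeriod > 0 else { return false }

    // Accept the relationship if the movement period is close to an integer
    // multiple of the beat period (nodding every beat, every other beat, ...).
    let ratio = movementPeriod / beatPeriod
    let nearestMultiple = ratio.rounded()
    return nearestMultiple >= 1 && abs(ratio - nearestMultiple) / nearestMultiple < tolerance
}
```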

    Method And Device For Dynamic Sensory And Input Modes Based On Contextual State

    Publication No.: US20240219998A1

    Publication Date: 2024-07-04

    Application No.: US18291979

    Filing Date: 2022-07-13

    Applicant: Apple Inc.

    IPC Classes: G06F3/01 G06T19/20

    Abstract: In one implementation, a method dynamically changes sensory and/or input modes associated with content based on a current contextual state. The method includes: while in a first contextual state, presenting extended reality (XR) content, via the display device, according to a first presentation mode and enabling a first set of input modes to be directed to the XR content; detecting a change from the first contextual state to a second contextual state; and in response to detecting the change from the first contextual state to the second contextual state, presenting, via the display device, the XR content according to a second presentation mode different from the first presentation mode and enabling a second set of input modes to be directed to the XR content that are different from the first set of input modes.
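    The state-to-mode mapping this abstract describes can be sketched as a lookup from a contextual state to a presentation mode plus an enabled input set. The Swift below is a hypothetical illustration only; the specific states (stationary, moving, inConversation), modes, and input sets are invented for the example.

```swift
import Foundation

// Hypothetical contextual states; the abstract only requires detecting a change
// from a first contextual state to a second one.
enum ContextualState { case stationary, moving, inConversation }

enum PresentationMode { case fullyImmersive, headLocked, minimized }

enum InputMode: Hashable { case handGestures, gazeAndDwell, voice, hardwareButtons }

// A presentation mode plus the set of input modes enabled for the XR content.
struct ContentConfiguration {
    let presentation: PresentationMode
    let enabledInputs: Set<InputMode>
}

// Map each contextual state to its presentation mode and input modes, mirroring
// the "first/second presentation mode" and "first/second set of input modes".
func configuration(for state: ContextualState) -> ContentConfiguration {
    switch state {
    case .stationary:
        return ContentConfiguration(presentation: .fullyImmersive,
                                    enabledInputs: [.handGestures, .gazeAndDwell])
    case .moving:
        return ContentConfiguration(presentation: .minimized,
                                    enabledInputs: [.voice, .hardwareButtons])
    case .inConversation:
        return ContentConfiguration(presentation: .headLocked,
                                    enabledInputs: [.voice])
    }
}

// On a detected state change, re-present the content under the new configuration.
// `present` stands in for whatever routine drives the display device.
func handleContextChange(from old: ContextualState,
                         to new: ContextualState,
                         present: (ContentConfiguration) -> Void) {
    guard old != new else { return }
    present(configuration(for: new))
}
```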

    METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS

    Publication No.: US20240248532A1

    Publication Date: 2024-07-25

    Application No.: US18272261

    Filing Date: 2022-01-11

    Applicant: Apple Inc.

    IPC Classes: G06F3/01 G06T19/00 G09G3/00

    Abstract: In one implementation, a method for visualizing multi-modal inputs includes: displaying a first user interface element within an extended reality (XR) environment; determining a gaze direction based on first input data; in response to determining that the gaze direction is directed to the first user interface element, displaying a focus indicator with a first appearance in association with the first user interface element; detecting a change in at least one of a head pose or a body pose of a user of the computing system; and, in response to detecting the change in pose, modifying the focus indicator from the first appearance to a second appearance different from the first appearance.
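    A minimal sketch of the two-step behavior, assuming a simple angular gaze test and a pose-change threshold, is shown below in Swift. The types and helpers (FocusIndicator, FocusAppearance, dotProduct, unitVector) and the 3-degree tolerance are hypothetical stand-ins, not details taken from the application.

```swift
import Foundation

// Hypothetical appearances: a first appearance when gaze reaches the element,
// a second, different appearance after a head/body pose change.
enum FocusAppearance { case hidden, gazeHighlight, poseAdjusted }

struct FocusIndicator {
    var focusedElementID: UUID?
    var appearance: FocusAppearance = .hidden
}

// Small vector helpers using only the Swift standard library's SIMD types.
func dotProduct(_ a: SIMD3<Float>, _ b: SIMD3<Float>) -> Float { (a * b).sum() }
func unitVector(_ v: SIMD3<Float>) -> SIMD3<Float> { v / dotProduct(v, v).squareRoot() }

// Step 1: if the gaze direction points at the element within a small angular
// tolerance, show the focus indicator with its first appearance.
func updateForGaze(_ indicator: inout FocusIndicator,
                   gazeOrigin: SIMD3<Float>,
                   gazeDirection: SIMD3<Float>,
                   elementID: UUID,
                   elementCenter: SIMD3<Float>,
                   toleranceDegrees: Double = 3) {
    let cosAngle = dotProduct(unitVector(gazeDirection),
                              unitVector(elementCenter - gazeOrigin))
    if Double(cosAngle) > cos(toleranceDegrees * .pi / 180) {
        indicator.focusedElementID = elementID
        indicator.appearance = .gazeHighlight
    }
}

// Step 2: on a detected head- or body-pose change while the element is focused,
// modify the indicator to the second appearance.
func updateForPoseChange(_ indicator: inout FocusIndicator,
                         poseDeltaRadians: Float,
                         threshold: Float = 0.05) {
    guard indicator.focusedElementID != nil, abs(poseDeltaRadians) > threshold else { return }
    indicator.appearance = .poseAdjusted
}
```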