Three dimensional user interface effects on a display by using properties of motion
    2.
    Granted Patent
    Three dimensional user interface effects on a display by using properties of motion (In Force)

    Publication No.: US09417763B2

    Publication Date: 2016-08-16

    Application No.: US14571062

    Filing Date: 2014-12-15

    Applicant: Apple Inc.

    Abstract: The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
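The abstract's core idea, inferring a display-aligned frame from inertial cues and using the eye position to offset on-screen content, can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation; the function names and the simple gravity-plus-compass frame construction are assumptions.

```python
import numpy as np

def device_frame(gravity, magnetic_north):
    """Build an orthonormal X/Y/Z frame for the display from two inertial
    cues: the accelerometer's gravity vector and the compass's north vector
    (both expressed in sensor coordinates)."""
    z = -gravity / np.linalg.norm(gravity)   # Z points out of the display
    x = np.cross(magnetic_north, z)          # X orthogonal to Z, in the horizontal plane
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                       # Y completes the right-handed frame
    return np.column_stack([x, y, z])        # columns are the frame axes

def parallax_offset(eye_pos, frame, depth):
    """Shift a UI layer sitting `depth` units behind the screen so it appears
    anchored in 3D space for an eye at eye_pos (device coordinates)."""
    view = frame.T @ eye_pos                 # eye position in the display frame
    # Similar triangles: the layer shifts opposite to the eye's lateral
    # offset, scaled by its depth over the eye's distance from the screen.
    return -depth * view[:2] / view[2]
```

With the frame tracked continuously, recomputing `parallax_offset` per frame as the eyes or device move yields the motion-dependent 3D effect the abstract describes.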

    Three Dimensional User Interface Effects On A Display
    3.
    Patent Application
    Three Dimensional User Interface Effects On A Display (In Force)

    Publication No.: US20150009130A1

    Publication Date: 2015-01-08

    Application No.: US14329777

    Filing Date: 2014-07-11

    Applicant: Apple Inc.

    Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
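Real-time tracking of the instantaneous frame-of-reference can be approximated by integrating gyroscope samples, as a minimal sketch of the idea only; the small-angle Rodrigues update below is a standard technique and an assumption, not the claimed method.

```python
import numpy as np

def integrate_gyro(frame, omega, dt):
    """Advance the device's 3D frame-of-reference by one gyroscope sample:
    omega is angular velocity (rad/s) in device coordinates, dt the sample
    interval. Uses Rodrigues' rotation formula for the incremental rotation."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return frame                          # no measurable rotation
    axis = omega / np.linalg.norm(omega)
    K = np.array([[0, -axis[2], axis[1]],     # cross-product (skew) matrix
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    return frame @ R                          # rotate the frame in place
```

Fusing these incremental updates with absolute cues (gravity, head position from a front-facing camera) would correct the drift that pure integration accumulates.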

    Avatars Reflecting User States
    4.
    Patent Application
    Avatars Reflecting User States (Pending, Published)

    Publication No.: US20140143693A1

    Publication Date: 2014-05-22

    Application No.: US14163697

    Filing Date: 2014-01-24

    Applicant: Apple Inc.

    CPC classification number: G06F3/04845 G06F3/04883 G06Q10/10 G06Q50/01

    Abstract: Methods, systems, and computer-readable media for creating and using customized avatar instances to reflect current user states are disclosed. In various implementations, the user states can be defined using trigger events based on user-entered textual data, emoticons, or states of the device being used. For each user state, a customized avatar instance having a facial expression, body language, accessories, clothing items, and/or a presentation scheme reflective of the user state can be generated. When one or more trigger events indicating occurrence of a particular user state are detected on the device, the avatar presented on the device is updated with the customized avatar instance associated with the particular user state.
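The trigger-event mechanism in the abstract can be pictured as a lookup from detected events (emoticons, keywords, device states) to user states, each carrying a customized avatar instance. The tables and names below are hypothetical, chosen only to make the flow concrete.

```python
# Hypothetical trigger-event table: each recognized event maps to a user state.
TRIGGERS = {
    ":)": "happy",
    ":(": "sad",
    "battery_low": "tired",
}

# One customized avatar instance per user state (expression, accessories, etc.).
AVATAR_INSTANCES = {
    "happy": {"expression": "smile", "accessory": "sunglasses"},
    "sad":   {"expression": "frown", "accessory": None},
    "tired": {"expression": "yawn",  "accessory": "coffee_cup"},
}

def update_avatar(current_avatar, events):
    """Return the customized avatar instance for the most recent recognized
    trigger event, or leave the current avatar unchanged if none match."""
    for event in reversed(events):            # newest event wins
        state = TRIGGERS.get(event)
        if state is not None:
            return AVATAR_INSTANCES[state]
    return current_avatar
```

In a real system the trigger detection would run on the message stream and device sensors, and the avatar instance would be rendered rather than returned as a dict.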

    Three dimensional user interface effects on a display
    6.
    Granted Patent
    Three dimensional user interface effects on a display (In Force)

    Publication No.: US09411413B2

    Publication Date: 2016-08-09

    Application No.: US14329777

    Filing Date: 2014-07-11

    Applicant: Apple Inc.

    Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.

    Seamless Display Migration
    7.
    Patent Application
    Seamless Display Migration (In Force)

    Publication No.: US20130033504A1

    Publication Date: 2013-02-07

    Application No.: US13647973

    Filing Date: 2012-10-09

    Applicant: Apple Inc.

    Abstract: Exemplary embodiments of methods, apparatuses, and systems for seamlessly migrating a user visible display stream sent to a display device from one rendered display stream to another rendered display stream are described. For one embodiment, mirror video display streams are received from both a first graphics processing unit (GPU) and a second GPU, and the video display stream sent to a display device is switched from the video display stream from the first GPU to the video display stream from the second GPU, wherein the switching occurs during a blanking interval for the first GPU that overlaps with a blanking interval for the second GPU.
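The switching condition the abstract describes, committing the source change only while the two GPUs' blanking intervals overlap, can be sketched as a small state machine. This is a toy model with invented names (`DisplayMux`, `vblank`), not the described hardware/driver path.

```python
class DisplayMux:
    """Sketch of seamless GPU-to-GPU display migration: both GPUs render
    mirrored streams, and the mux commits a requested switch only at a
    moment when both report an active blanking interval, so no partial
    frame from either source reaches the panel."""

    def __init__(self):
        self.active_gpu = "gpu0"
        self.in_blank = {"gpu0": False, "gpu1": False}
        self.pending = None

    def request_switch(self, target_gpu):
        """Arm a migration; it takes effect at the next blanking overlap."""
        self.pending = target_gpu

    def vblank(self, gpu, entering):
        """Called on each GPU's vertical-blanking boundary."""
        self.in_blank[gpu] = entering
        # Commit the pending switch only while both blanking intervals overlap.
        if self.pending and all(self.in_blank.values()):
            self.active_gpu = self.pending
            self.pending = None
```

The key design point is that the switch is deferred, not forced: the mux waits for a naturally occurring overlap of the two vblank windows rather than interrupting an active scanout.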

    Avatars reflecting user states
    8.
    Granted Patent

    Publication No.: US09652134B2

    Publication Date: 2017-05-16

    Application No.: US14163697

    Filing Date: 2014-01-24

    Applicant: Apple Inc.

    CPC classification number: G06F3/04845 G06F3/04883 G06Q10/10 G06Q50/01

    Abstract: Methods, systems, and computer-readable media for creating and using customized avatar instances to reflect current user states are disclosed. In various implementations, the user states can be defined using trigger events based on user-entered textual data, emoticons, or states of the device being used. For each user state, a customized avatar instance having a facial expression, body language, accessories, clothing items, and/or a presentation scheme reflective of the user state can be generated. When one or more trigger events indicating occurrence of a particular user state are detected on the device, the avatar presented on the device is updated with the customized avatar instance associated with the particular user state.

    Avatars reflecting user states
    10.
    Granted Patent

    Publication No.: US10042536B2

    Publication Date: 2018-08-07

    Application No.: US15591723

    Filing Date: 2017-05-10

    Applicant: Apple Inc.

    Abstract: Methods, systems, and computer-readable media for creating and using customized avatar instances to reflect current user states are disclosed. In various implementations, the user states can be defined using trigger events based on user-entered textual data, emoticons, or states of the device being used. For each user state, a customized avatar instance having a facial expression, body language, accessories, clothing items, and/or a presentation scheme reflective of the user state can be generated. When one or more trigger events indicating occurrence of a particular user state are detected on the device, the avatar presented on the device is updated with the customized avatar instance associated with the particular user state.
