MAINTAINING MULTIPLE VIEWS ON A SHARED STABLE VIRTUAL SPACE
    1.
    Invention Application
    MAINTAINING MULTIPLE VIEWS ON A SHARED STABLE VIRTUAL SPACE (In Force)

    Publication No.: US20160214011A1

    Publication Date: 2016-07-28

    Application No.: US15089360

    Application Date: 2016-04-01

    Abstract: Methods, apparatus, and computer programs for controlling a view of a virtual scene with a handheld device are presented. In one method, images of a real world scene are captured using a device. The method further includes operations for creating an augmented view for presentation on a display of the device by augmenting the images with virtual reality objects, and for detecting a hand in the images as extending into the real world scene. In addition, the method includes operations for showing the hand on the screen as detected in the images, and for generating interaction data, based on an interaction of the hand with a virtual reality object, when the hand makes virtual contact with the virtual reality object in the augmented view. The augmented view is updated based on the interaction data, which simulates on the screen that the hand is interacting with the virtual reality object.

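    The interaction loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the class names, the bounding-sphere contact test, and the scene coordinates are all assumptions made for the example.

    ```python
    # Illustrative sketch of the abstract's loop: detect virtual contact
    # between a hand position and a virtual object, then emit interaction
    # data that the renderer would use to update the augmented view.
    from dataclasses import dataclass

    @dataclass
    class VirtualObject:
        name: str
        center: tuple        # (x, y, z) position in scene coordinates
        radius: float        # bounding-sphere radius for the contact test

    def virtual_contact(hand_pos, obj):
        """True when the estimated hand position touches the object's bounds."""
        dx, dy, dz = (h - c for h, c in zip(hand_pos, obj.center))
        return (dx * dx + dy * dy + dz * dz) ** 0.5 <= obj.radius

    def update_augmented_view(hand_pos, objects):
        """Generate interaction data for every object the hand contacts."""
        interactions = []
        for obj in objects:
            if virtual_contact(hand_pos, obj):
                interactions.append({"object": obj.name, "hand": hand_pos})
        # A renderer would redraw the scene here, showing the hand
        # acting on each contacted object.
        return interactions

    cube = VirtualObject("cube", center=(0.0, 0.0, 0.5), radius=0.1)
    print(update_augmented_view((0.0, 0.05, 0.45), [cube]))
    ```

    In a real pipeline the hand position would come from per-frame image analysis; here it is passed in directly to keep the sketch self-contained.
    
    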

    Maintaining multiple views on a shared stable virtual space
    2.
    Invention Grant
    Maintaining multiple views on a shared stable virtual space (In Force)

    Publication No.: US09310883B2

    Publication Date: 2016-04-12

    Application No.: US14260216

    Application Date: 2014-04-23

    Abstract: Methods, apparatus, and computer programs for controlling a view of a virtual scene with a handheld device are presented. In one method, images of a real world scene are captured using a device. The method further includes operations for creating an augmented view for presentation on a display of the device by augmenting the images with virtual reality objects, and for detecting a hand in the images as extending into the real world scene. In addition, the method includes operations for showing the hand on the screen as detected in the images, and for generating interaction data, based on an interaction of the hand with a virtual reality object, when the hand makes virtual contact with the virtual reality object in the augmented view. The augmented view is updated based on the interaction data, which simulates on the screen that the hand is interacting with the virtual reality object.


    Evolution of a user interface based on learned idiosyncrasies and collected data of a user
    3.
    Invention Grant
    Evolution of a user interface based on learned idiosyncrasies and collected data of a user (In Force)

    Publication No.: US08954356B2

    Publication Date: 2015-02-10

    Application No.: US14049436

    Application Date: 2013-10-09

    Inventor: George Weising

    CPC classification number: G06N99/005 G06F3/038 G06F9/451 G06F2203/0381

    Abstract: A user interface evolves based on learned idiosyncrasies and collected data of a user. Learned idiosyncrasies and collected data of the user can be stored in a knowledge base. Information from the surrounding environment of the user can be obtained during learning of idiosyncrasies or collection of data. Thought-based statements can be generated based at least in part on the knowledge base and the information from the environment surrounding the user during learning of idiosyncrasies or collection of data. The thought-based statements serve to invoke or respond to subsequent actions of the user. The user interface can be presented so as to allow for interaction with the user based at least in part on the thought-based statements. Furthermore, personality nuances of the user interface can be developed that affect the interaction between the user and the user interface.

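    The flow in the abstract — a knowledge base of learned idiosyncrasies plus environmental observations feeding the generation of "thought-based statements" — can be illustrated with a toy sketch. Every name and rule below is a hypothetical placeholder; the patent does not disclose this implementation.

    ```python
    # Toy sketch (assumptions throughout): a knowledge base stores learned
    # idiosyncrasies and current environment readings, and a thought-based
    # statement is generated from the combination of the two.
    class KnowledgeBase:
        def __init__(self):
            self.idiosyncrasies = {}   # learned user traits, e.g. coffee preference
            self.environment = {}      # latest readings from the user's surroundings

        def learn(self, key, value):
            self.idiosyncrasies[key] = value

        def observe(self, key, value):
            self.environment[key] = value

    def thought_based_statement(kb):
        """Combine a stored idiosyncrasy with current environment data."""
        if (kb.environment.get("time_of_day") == "morning"
                and kb.idiosyncrasies.get("coffee") == "likes espresso"):
            return "Good morning. Shall I find a nearby espresso bar?"
        return "How can I help?"

    kb = KnowledgeBase()
    kb.learn("coffee", "likes espresso")
    kb.observe("time_of_day", "morning")
    print(thought_based_statement(kb))
    ```

    The hard-coded rule stands in for whatever learned model drives statement generation; the point is only the data flow from knowledge base and environment into the interface's output.
    
    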

    Calibration Of Portable Devices In A Shared Virtual Space
    4.
    Invention Application
    Calibration Of Portable Devices In A Shared Virtual Space (In Force)

    Publication No.: US20140002359A1

    Publication Date: 2014-01-02

    Application No.: US14017208

    Application Date: 2013-09-03

    Abstract: Methods, systems, and computer programs for generating an interactive space, viewable through at least first and second handheld devices, are presented. The method includes an operation for taking an image with a camera of the first device. In addition, the method includes an operation for determining a relative position of the second device with reference to the first device, based on image analysis of the captured image to identify a geometry of the second device. Furthermore, the method includes operations for identifying a reference point in three-dimensional (3D) space based on the relative position, and for generating views of an interactive scene in corresponding displays of the first device and the second device. The interactive scene is tied to the reference point and includes virtual objects, and each view shows all or part of the interactive scene as observed from the current location of the corresponding device.

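    A rough sketch of the calibration idea: since the second device has a known physical geometry, its apparent size in the first device's image gives its distance via the standard pinhole-camera relation, and a shared reference point can then be placed between the two devices. All numbers, names, and the midpoint convention are assumptions for illustration, not the patent's method.

    ```python
    # Illustrative calibration sketch: infer the second device's distance
    # from its apparent size, then anchor the shared scene at a reference
    # point between the two devices.
    def estimate_distance(real_width_m, pixel_width, focal_length_px):
        """Pinhole model: distance = focal_length * real_width / pixel_width."""
        return focal_length_px * real_width_m / pixel_width

    def shared_reference_point(first_pos, second_pos):
        """Midpoint between the two devices, used to anchor the scene."""
        return tuple((a + b) / 2 for a, b in zip(first_pos, second_pos))

    # Assume the second device is 0.15 m wide, appears 300 px wide in the
    # image, and the camera's focal length is 1000 px.
    distance = estimate_distance(0.15, 300, 1000)      # 0.5 m ahead
    first = (0.0, 0.0, 0.0)
    second = (0.0, 0.0, distance)
    print(shared_reference_point(first, second))
    ```

    Each device then renders its own view of the interactive scene relative to this common anchor, which is what keeps the two views consistent.
    
    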

    Calibration of portable devices in a shared virtual space
    5.
    Invention Grant
    Calibration of portable devices in a shared virtual space (In Force)

    Publication No.: US09513700B2

    Publication Date: 2016-12-06

    Application No.: US14260208

    Application Date: 2014-04-23

    Abstract: Methods, systems, and computer programs are provided for generating an interactive space. One method includes operations for associating a first device with a reference point in 3D space, and for calculating, by the first device, a position of the first device in the 3D space based on inertial information captured by the first device and utilizing dead reckoning. Further, the method includes operations for capturing images with a camera of the first device, and for identifying locations of one or more static features in the images. The position of the first device is corrected based on the identified locations of the one or more static features, and a view of an interactive scene is presented on a display of the first device, where the interactive scene is tied to the reference point and includes virtual objects.

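    The position pipeline in the abstract — dead reckoning from inertial data, then correcting the drifting estimate with a fix derived from static image features — can be sketched as below. The Euler integration step and the blending weight are illustrative assumptions, not the patented algorithm.

    ```python
    # Sketch of dead reckoning with camera-based correction: integrate
    # inertial data one step, then pull the estimate toward a position fix
    # derived from static features in the captured images.
    def dead_reckon(position, velocity, accel, dt):
        """Advance position and velocity one step (simple Euler integration)."""
        velocity = tuple(v + a * dt for v, a in zip(velocity, accel))
        position = tuple(p + v * dt for p, v in zip(position, velocity))
        return position, velocity

    def correct_with_features(estimate, feature_fix, weight=0.8):
        """Blend the inertial estimate toward the camera-derived fix."""
        return tuple(e + weight * (f - e) for e, f in zip(estimate, feature_fix))

    pos, vel = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
    pos, vel = dead_reckon(pos, vel, accel=(0.0, 0.0, 0.0), dt=0.1)  # x = 0.1
    pos = correct_with_features(pos, feature_fix=(0.12, 0.0, 0.0))
    print(pos)
    ```

    In practice the correction would run whenever static features are re-identified, bounding the drift that pure dead reckoning accumulates between fixes.
    
    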

    MAINTAINING MULTIPLE VIEWS ON A SHARED STABLE VIRTUAL SPACE
    6.
    Invention Application
    MAINTAINING MULTIPLE VIEWS ON A SHARED STABLE VIRTUAL SPACE (In Force)

    Publication No.: US20140235311A1

    Publication Date: 2014-08-21

    Application No.: US14260216

    Application Date: 2014-04-23

    Abstract: Methods, apparatus, and computer programs for controlling a view of a virtual scene with a handheld device are presented. In one method, images of a real world scene are captured using a device. The method further includes operations for creating an augmented view for presentation on a display of the device by augmenting the images with virtual reality objects, and for detecting a hand in the images as extending into the real world scene. In addition, the method includes operations for showing the hand on the screen as detected in the images, and for generating interaction data, based on an interaction of the hand with a virtual reality object, when the hand makes virtual contact with the virtual reality object in the augmented view. The augmented view is updated based on the interaction data, which simulates on the screen that the hand is interacting with the virtual reality object.


    CALIBRATION OF PORTABLE DEVICES IN A SHARED VIRTUAL SPACE
    7.
    Invention Application
    CALIBRATION OF PORTABLE DEVICES IN A SHARED VIRTUAL SPACE (In Force)

    Publication No.: US20140232652A1

    Publication Date: 2014-08-21

    Application No.: US14260208

    Application Date: 2014-04-23

    Abstract: Methods, systems, and computer programs are provided for generating an interactive space. One method includes operations for associating a first device with a reference point in 3D space, and for calculating, by the first device, a position of the first device in the 3D space based on inertial information captured by the first device and utilizing dead reckoning. Further, the method includes operations for capturing images with a camera of the first device, and for identifying locations of one or more static features in the images. The position of the first device is corrected based on the identified locations of the one or more static features, and a view of an interactive scene is presented on a display of the first device, where the interactive scene is tied to the reference point and includes virtual objects.


    Evolution of a user interface based on learned idiosyncrasies and collected data of a user
    8.
    Invention Grant
    Evolution of a user interface based on learned idiosyncrasies and collected data of a user (In Force)

    Publication No.: US08725659B2

    Publication Date: 2014-05-13

    Application No.: US13723943

    Application Date: 2012-12-21

    Inventor: George Weising

    CPC classification number: G06N99/005 G06F3/038 G06F9/451 G06F2203/0381

    Abstract: A user interface evolves based on learned idiosyncrasies and collected data of a user. Learned idiosyncrasies and collected data of the user can be stored in a knowledge base. Information from the surrounding environment of the user can be obtained during learning of idiosyncrasies or collection of data. Thought-based statements can be generated based at least in part on the knowledge base and the information from the environment surrounding the user during learning of idiosyncrasies or collection of data. The thought-based statements serve to invoke or respond to subsequent actions of the user. The user interface can be presented so as to allow for interaction with the user based at least in part on the thought-based statements. Furthermore, personality nuances of the user interface can be developed that affect the interaction between the user and the user interface.


    EVOLUTION OF A USER INTERFACE BASED ON LEARNED IDIOSYNCRASIES AND COLLECTED DATA OF A USER
    9.
    Invention Application
    EVOLUTION OF A USER INTERFACE BASED ON LEARNED IDIOSYNCRASIES AND COLLECTED DATA OF A USER (In Force)

    Publication No.: US20140040168A1

    Publication Date: 2014-02-06

    Application No.: US14049436

    Application Date: 2013-10-09

    Inventor: George Weising

    CPC classification number: G06N99/005 G06F3/038 G06F9/451 G06F2203/0381

    Abstract: A user interface evolves based on learned idiosyncrasies and collected data of a user. Learned idiosyncrasies and collected data of the user can be stored in a knowledge base. Information from the surrounding environment of the user can be obtained during learning of idiosyncrasies or collection of data. Thought-based statements can be generated based at least in part on the knowledge base and the information from the environment surrounding the user during learning of idiosyncrasies or collection of data. The thought-based statements serve to invoke or respond to subsequent actions of the user. The user interface can be presented so as to allow for interaction with the user based at least in part on the thought-based statements. Furthermore, personality nuances of the user interface can be developed that affect the interaction between the user and the user interface.


    Calibration of portable devices in a shared virtual space
    10.
    Invention Grant
    Calibration of portable devices in a shared virtual space (In Force)

    Publication No.: US08717294B2

    Publication Date: 2014-05-06

    Application No.: US14017208

    Application Date: 2013-09-03

    Abstract: Methods, systems, and computer programs for generating an interactive space, viewable through at least first and second handheld devices, are presented. The method includes an operation for taking an image with a camera of the first device. In addition, the method includes an operation for determining a relative position of the second device with reference to the first device, based on image analysis of the captured image to identify a geometry of the second device. Furthermore, the method includes operations for identifying a reference point in three-dimensional (3D) space based on the relative position, and for generating views of an interactive scene in corresponding displays of the first device and the second device. The interactive scene is tied to the reference point and includes virtual objects, and each view shows all or part of the interactive scene as observed from the current location of the corresponding device.

