Platform for Registering and Processing Visual Encodings

    Publication number: US20240135126A1

    Publication date: 2024-04-25

    Application number: US18397918

    Application date: 2023-12-27

    Applicant: Google LLC

    CPC classification number: G06K7/1443 G06K7/1447 G06K19/06103 G06V10/255

    Abstract: The present disclosure relates generally to the processing of machine-readable visual encodings in view of contextual information. One example embodiment of the present disclosure comprises obtaining image data descriptive of a scene that includes a machine-readable visual encoding; processing the image data with a first recognition system configured to recognize the machine-readable visual encoding; processing the image data with a second, different recognition system configured to recognize a surrounding portion of the scene that surrounds the machine-readable visual encoding; identifying a stored reference associated with the machine-readable visual encoding based at least in part on one or more first outputs generated by the first recognition system based on the image data and based at least in part on one or more second outputs generated by the second recognition system based on the image data; and performing one or more actions responsive to identification of the stored reference.
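    The two-recognition-system lookup described in the abstract can be sketched as follows. This is a hedged illustration only: the decoder, scene recognizer, reference store, and the context-overlap scoring rule are all invented stand-ins, not details from the patent.

```python
# Hypothetical sketch: identify a stored reference using both the decoded
# visual encoding (first recognition system) and features of the surrounding
# scene (second recognition system). All names and data are illustrative.

def identify_reference(image, decode_encoding, recognize_scene, references):
    """Combine both recognition outputs to pick one stored reference."""
    encoding_id = decode_encoding(image)     # first recognition system output
    scene_features = recognize_scene(image)  # second recognition system output
    candidates = [r for r in references if r["encoding_id"] == encoding_id]
    # Disambiguate among candidates sharing an encoding via scene context.
    return max(
        candidates,
        key=lambda r: len(scene_features & r["expected_context"]),
        default=None,
    )

# Toy usage: two references share the same encoding payload but differ by
# the context in which the encoding is expected to appear.
refs = [
    {"encoding_id": "qr-123", "expected_context": {"storefront", "menu"}},
    {"encoding_id": "qr-123", "expected_context": {"poster", "bus stop"}},
]
match = identify_reference(
    image=None,
    decode_encoding=lambda img: "qr-123",
    recognize_scene=lambda img: {"menu", "door", "storefront"},
    references=refs,
)
```

    The point of the sketch is that the encoding alone is ambiguous here; only the second system's scene output resolves which stored reference applies.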

    System and method for casting content

    Publication number: US11947859B2

    Publication date: 2024-04-02

    Application number: US17595172

    Application date: 2020-11-16

    Applicant: GOOGLE LLC

    Abstract: A system and method are provided for transferring the execution of content from a user device to an external device for output of the content by the external device. External devices may be detected in a physical space and identified based on a previous connection with the user device, based on a shared network or shared system of connected devices including the user device, based on image information captured by the user device and previously stored anchoring information that identifies the external devices, and the like. An external device may be selected for potential output of the content based on previously stored configuration information associated with the external device including, for example, output capabilities associated with the external device. The identified external device may output the transferred content in response to a user verification input verifying that the content is to be output by the external device.
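    The capability-based selection step in the abstract can be illustrated with a short sketch. The device records, capability names, and ranking rule below are assumptions made for the example, not part of the patent.

```python
# Illustrative sketch: choose an external device for casting based on stored
# configuration information (output capabilities), preferring previously
# connected devices. All fields and the ordering heuristic are invented.

def select_cast_targets(devices, content_type):
    """Return detected devices capable of outputting the given content type,
    best candidate first."""
    capable = [d for d in devices if content_type in d["capabilities"]]
    # Rank: previously connected devices first, then by detection signal.
    capable.sort(
        key=lambda d: (d["previously_connected"], d["signal"]), reverse=True
    )
    return capable

detected = [
    {"name": "tv", "capabilities": {"video", "audio"},
     "previously_connected": True, "signal": 0.9},
    {"name": "speaker", "capabilities": {"audio"},
     "previously_connected": False, "signal": 0.7},
]
targets = select_cast_targets(detected, "video")  # only the TV can show video
```

    In the patent's flow the top-ranked candidate would still await the user verification input before any content is actually output.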

    Gesture-triggered augmented reality

    Publication number: US11592907B2

    Publication date: 2023-02-28

    Application number: US16949206

    Application date: 2020-10-20

    Applicant: Google LLC

    Abstract: A user may routinely wear or hold more than one computing device. One of the computing devices may be a head-mounted computing device configured for augmented reality. The head-mounted computing device may include a camera. While imaging, the camera can consume power and processing resources that drain the battery of the head-mounted computing device. To improve battery life and to enhance the user's privacy, imaging by the camera can be deactivated during periods when the user is not interacting with the head-mounted computing device and activated when the user wishes to interact with it. The activation of the camera can be triggered by gesture data collected by a computing device other than the head-mounted computing device.
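    A minimal sketch of the power-saving behavior in the abstract: the head-mounted device keeps its camera off until gesture data from a second device (for example, a watch's inertial sensor) indicates the user wants to interact. The threshold value and class interface are illustrative assumptions.

```python
# Hypothetical sketch: camera on the head-mounted device stays inactive by
# default and is activated by gesture data from a companion device.
# The IMU magnitude threshold below is an invented illustrative value.

GESTURE_THRESHOLD = 1.5  # assumed IMU magnitude (in g) for a "raise" gesture

class HeadMountedCamera:
    def __init__(self):
        self.active = False  # off by default to save battery / preserve privacy

    def on_gesture_data(self, imu_magnitude):
        """Activate imaging only when the companion device reports a gesture."""
        if imu_magnitude >= GESTURE_THRESHOLD:
            self.active = True

    def on_idle(self):
        """Deactivate imaging when the user stops interacting."""
        self.active = False

cam = HeadMountedCamera()
cam.on_gesture_data(0.3)          # ordinary motion: camera stays off
state_after_noise = cam.active
cam.on_gesture_data(2.0)          # wrist-raise gesture: camera activates
state_after_gesture = cam.active
```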

    DYNAMIC SWITCHING AND MERGING OF HEAD, GESTURE AND TOUCH INPUT IN VIRTUAL REALITY

    Publication number: US20190011979A1

    Publication date: 2019-01-10

    Application number: US16130040

    Application date: 2018-09-13

    Applicant: Google LLC

    Abstract: In a system for dynamic switching and merging of head, gesture and touch input in virtual reality, focus may be set on a first virtual object in response to a first input implementing one of a number of different input modes. The first object may then be manipulated in the virtual world in response to a second input implementing another input mode. In response to a third input, focus may be shifted from the first object to a second object if, for example, a priority value of the third input is higher than a priority value of the first input. If the priority value of the third input is less than that of the first input, focus may remain on the first object. In response to certain trigger inputs, a display of virtual objects may be shifted between a far field display and a near field display to accommodate a particular mode of interaction with the virtual objects.
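    The priority rule in the abstract (a later input takes focus only if its mode outranks the input that set the current focus) can be sketched directly. The particular priority ordering of head, gesture, and touch below is an assumption for illustration; the patent does not fix these values.

```python
# Illustrative sketch of priority-based focus switching among input modes.
# The numeric priorities are invented for the example.

PRIORITY = {"head_gaze": 1, "gesture": 2, "touch": 3}  # assumed ordering

class FocusManager:
    def __init__(self):
        self.focused = None
        self.focus_priority = -1  # priority of the input that set focus

    def request_focus(self, obj, mode):
        """Shift focus only if this input mode outranks the holder's mode."""
        p = PRIORITY[mode]
        if self.focused is None or p > self.focus_priority:
            self.focused = obj
            self.focus_priority = p
        return self.focused

fm = FocusManager()
fm.request_focus("objA", "gesture")    # focus set on the first object
fm.request_focus("objB", "head_gaze")  # lower priority: focus stays on objA
fm.request_focus("objB", "touch")      # higher priority: focus shifts to objB
```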

    IDENTIFYING A POSITION OF A CONTROLLABLE DEVICE USING A WEARABLE DEVICE

    Publication number: US20230360264A1

    Publication date: 2023-11-09

    Application number: US18246464

    Application date: 2020-11-16

    Applicant: Google LLC

    Abstract: According to an aspect, a method of identifying a position of a controllable device includes receiving visual data from an image sensor on a wearable device, generating, by an object recognition module, identification data based on the visual data, and identifying, using the identification data, a first three-dimensional (3D) map from a map database that stores a plurality of 3D maps including the first 3D map and a second 3D map, where the first 3D map is associated with a first controllable device and the second 3D map is associated with a second controllable device. The method includes obtaining a position of the first controllable device in a physical space based on visual positioning data of the first 3D map and rendering a user interface (UI) object on a display in a position that is within a threshold distance of the position of the first controllable device.
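    The map-lookup step in the abstract can be shown with a small sketch: identification data from object recognition selects one of several stored 3D maps, each associated with a controllable device, and that map's positioning data places a UI object near the device. The data structures and labels are assumptions for the example.

```python
# Hypothetical sketch: use object-recognition output to select a 3D map from
# a map database, then read the associated device's position from it.
# Field names ("device_label", "anchor_position") are invented.

def find_map(identification_data, map_database):
    """Identify the 3D map whose device label matches the recognized object."""
    for entry in map_database:
        if entry["device_label"] == identification_data:
            return entry
    return None

map_db = [
    {"device_label": "thermostat", "anchor_position": (1.0, 2.0, 0.5)},
    {"device_label": "smart_light", "anchor_position": (0.2, 2.4, 1.8)},
]

hit = find_map("smart_light", map_db)
# The device position from the map's visual positioning data would anchor a
# UI object rendered within a threshold distance of the device.
device_position = hit["anchor_position"]
```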

    SYSTEMS AND METHODS FOR GENERATING THREE-DIMENSIONAL MAPS OF AN INDOOR SPACE

    Publication number: US20230075389A1

    Publication date: 2023-03-09

    Application number: US17445751

    Application date: 2021-08-24

    Applicant: GOOGLE LLC

    Abstract: Three-dimensional (3D) maps may be generated for different areas based on scans of the areas using sensor(s) of a mobile computing device. During each scan, locations of the mobile computing device can be measured relative to a fixed-position smart device using ultra-wideband (UWB) communication. The 3D maps for the areas may be registered to the fixed position (i.e., anchor position) of the smart device based on the location measurements acquired during the scan so that the 3D maps can be merged into a combined 3D map. The combined (i.e., merged) 3D map may then be used to facilitate location-specific operation of the mobile computing device or other smart device.
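    The registration-and-merge idea can be sketched with a pure-translation example: each scan's points are expressed relative to the mobile device, and the device's UWB-measured offset from the fixed anchor shifts every point into the shared anchor frame so scans of different areas can be concatenated. A real system would also handle rotation and measurement noise; this simplification and all numbers are assumptions.

```python
# Hedged sketch: register per-area scans to a common UWB anchor frame and
# merge them into one combined 3D map. Translation-only; illustrative data.

def register_to_anchor(scan_points, device_offset_from_anchor):
    """Translate device-relative scan points into the anchor's frame."""
    dx, dy, dz = device_offset_from_anchor
    return [(x + dx, y + dy, z + dz) for (x, y, z) in scan_points]

def merge_maps(scans):
    """Combine several anchor-registered scans into a single point list."""
    merged = []
    for points, offset in scans:
        merged.extend(register_to_anchor(points, offset))
    return merged

combined = merge_maps([
    # (device-relative points, UWB-measured device offset from the anchor)
    ([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], (2.0, 0.0, 0.0)),  # scan of area A
    ([(0.0, 0.0, 0.0)], (-1.0, 3.0, 0.0)),                  # scan of area B
])
```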

    Gesture Controls Using Ultra Wide Band

    Publication number: US20230024254A1

    Publication date: 2023-01-26

    Application number: US17385433

    Application date: 2021-07-26

    Applicant: Google LLC

    Abstract: The present disclosure provides for device localization using ultra wide band (UWB) detection and gesture detection using inertial measurement units (IMUs) on one or more wearable devices to control smart devices, such as home assistants, smart lights, smart locks, etc.
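    A toy sketch combining the two signals the abstract names: UWB ranging localizes and selects a nearby smart device, and an IMU-detected gesture from a wearable maps to a command for it. The gesture names, command mapping, and nearest-device rule are all invented assumptions for illustration.

```python
# Hypothetical sketch: route an IMU-detected gesture to the smart device
# selected via UWB ranging. Gesture/command vocabulary is invented.

def nearest_device(uwb_ranges):
    """Pick the smart device with the smallest UWB-measured distance."""
    return min(uwb_ranges, key=uwb_ranges.get)

def dispatch_gesture(gesture, uwb_ranges):
    """Map a wearable's IMU gesture to a command for the selected device."""
    commands = {"flick_up": "turn_on", "flick_down": "turn_off"}
    return nearest_device(uwb_ranges), commands[gesture]

target, command = dispatch_gesture(
    "flick_up", {"smart_light": 1.2, "smart_lock": 4.5}  # UWB ranges in meters
)
```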
