HAND GESTURE RECOGNITION BASED ON DETECTED WRIST MUSCULAR MOVEMENTS

    Publication No.: US20220300082A1

    Publication Date: 2022-09-22

    Application No.: US17249966

    Application Date: 2021-03-19

    Applicant: GOOGLE LLC

    Abstract: Techniques of identifying gestures include detecting and classifying inner-wrist muscle motions at a user's wrist using micron-resolution radar sensors. For example, a user of an AR system may wear a band around their wrist. When the user makes a gesture to manipulate a virtual object in the AR system as seen in a head-mounted display (HMD), muscles and ligaments in the user's wrist make small movements on the order of 1-3 mm. The band contains a small radar device that has a transmitter and a number of receivers (e.g., three) of electromagnetic (EM) radiation on a chip (e.g., a Soli chip). This radiation reflects off the wrist muscles and ligaments and is received by the receivers on the chip in the band. The received reflected signal, or signal samples, are then sent to processing circuitry for classification to identify the wrist movement as a gesture.
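
    The abstract above describes a "samples in, gesture label out" classification step over reflected radar returns. A minimal illustrative sketch of such a step follows; it is not the patented implementation, and the feature extraction, nearest-centroid classifier, and all names are assumptions for illustration only.

        # Minimal sketch (hypothetical, not the patented method): classify a window of
        # radar samples from several receivers into a gesture using simple spectral
        # features and a nearest-centroid classifier.
        import numpy as np

        def extract_features(samples: np.ndarray) -> np.ndarray:
            """samples: (num_receivers, num_samples) array of received radar returns."""
            spectrum = np.abs(np.fft.rfft(samples, axis=1))    # per-receiver magnitude spectrum
            energy = spectrum.sum(axis=1)                      # total reflected energy
            peak_bin = spectrum.argmax(axis=1).astype(float)   # dominant frequency bin
            return np.concatenate([energy, peak_bin])

        class NearestCentroidGestureClassifier:
            def fit(self, windows, labels):
                feats = np.stack([extract_features(w) for w in windows])
                self.classes_ = sorted(set(labels))
                self.centroids_ = np.stack(
                    [feats[np.array(labels) == c].mean(axis=0) for c in self.classes_])
                return self

            def predict(self, window):
                f = extract_features(window)
                distances = np.linalg.norm(self.centroids_ - f, axis=1)
                return self.classes_[int(distances.argmin())]

    In practice the classifier would be a model trained on labeled micron-resolution returns; the nearest-centroid scheme here only illustrates the overall structure of the classification step.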

    Refined Location Estimates Using Ultra-Wideband Communication Links

    Publication No.: US20220295443A1

    Publication Date: 2022-09-15

    Application No.: US17196093

    Application Date: 2021-03-09

    Applicant: Google LLC

    Abstract: This document describes systems and techniques to generate refined location estimates using ultra-wideband (UWB) communication links. Mobile devices, such as smartphones, include location sensors to estimate their location. The accuracy of location sensors is generally about 3 meters (or about 10 feet). More accurate location data would allow mobile devices to provide new and improved functionality. The described systems and techniques determine the distance between nearby mobile devices using UWB communication links. A mobile device can then use the distance between the mobile devices to determine their relative locations. By comparing the relative locations of the mobile devices with their location estimates, the mobile device can generate a refined location estimate with greater accuracy.
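
    As a rough illustration of the refinement idea in the abstract above, the sketch below adjusts two coarse location estimates so that their separation matches a UWB-ranged distance; the function and parameter names are assumptions, not the patented technique.

        # Minimal sketch (hypothetical): nudge two coarse (x, y) location estimates, each
        # accurate to only a few meters, so that their separation equals the distance
        # measured over a UWB link.
        import numpy as np

        def refine_pair(est_a: np.ndarray, est_b: np.ndarray, uwb_distance_m: float):
            delta = est_b - est_a
            separation = np.linalg.norm(delta)
            if separation == 0.0:                  # degenerate case: estimates coincide
                return est_a, est_b
            direction = delta / separation
            correction = (separation - uwb_distance_m) / 2.0
            # Split the correction equally: move each estimate along the line joining them.
            return est_a + correction * direction, est_b - correction * direction

    For example, estimates at (0, 0) and (5, 0) with a UWB-measured distance of 4.2 m are refined to (0.4, 0) and (4.6, 0), whose separation matches the measurement.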

    Automatic Exposure and Gain Control for Face Authentication

    Publication No.: US20220191374A1

    Publication Date: 2022-06-16

    Application No.: US17439762

    Application Date: 2019-09-25

    Applicant: Google LLC

    Abstract: This document describes techniques and systems that enable automatic exposure and gain control for face authentication. The techniques and systems include a user device initializing a gain for a near-infrared camera system using a default gain. The user device ascertains patch-mean statistics of one or more regions-of-interest of a most-recently captured image that was captured by the near-infrared camera system. The user device computes an update in the initialized gain to provide an updated gain that is usable to scale the one or more regions-of-interest toward a target mean-luminance value. The user device dampens the updated gain by using hysteresis. Then, the user device sets the initialized gain for the near-infrared camera system to the dampened updated gain.
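
    The abstract above outlines a concrete control loop: measure patch-mean luminance over regions of interest, scale the gain toward a target mean luminance, and dampen the update with hysteresis. A minimal sketch of that loop follows; the constants and function names are hypothetical, not values from the patent.

        # Minimal sketch (hypothetical constants): gain update for a near-infrared camera
        # that scales region-of-interest luminance toward a target mean, with hysteresis
        # to suppress small oscillating changes.
        import numpy as np

        DEFAULT_GAIN = 1.0
        TARGET_MEAN_LUMA = 110.0     # target mean luminance for the ROIs (0-255 scale)
        HYSTERESIS_BAND = 0.05       # ignore relative gain changes smaller than 5%

        def patch_mean(image: np.ndarray, rois) -> float:
            """Mean luminance over regions of interest given as (y0, y1, x0, x1) tuples."""
            return float(np.mean([image[y0:y1, x0:x1].mean() for y0, y1, x0, x1 in rois]))

        def update_gain(current_gain: float, image: np.ndarray, rois) -> float:
            mean_luma = patch_mean(image, rois)
            proposed = current_gain * (TARGET_MEAN_LUMA / max(mean_luma, 1e-3))
            # Hysteresis: keep the previous gain unless the proposed change is significant.
            if abs(proposed - current_gain) / current_gain < HYSTERESIS_BAND:
                return current_gain
            return proposed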

    Methods, systems, and media for relighting images using predicted deep reflectance fields

    Publication No.: US10997457B2

    Publication Date: 2021-05-04

    Application No.: US16616235

    Application Date: 2019-10-16

    Applicant: Google LLC

    Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample includes (i) a group of one-light-at-a-time (OLAT) images that have each been captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images that have each been captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, wherein each image in the group of OLAT images and each of the spherical color gradient images are an image of a subject, and wherein the lighting direction indicates a relative orientation of a light to the subject; training a convolutional neural network using the group of training samples, wherein training the convolutional neural network comprises: for each training iteration in a series of training iterations and for each training sample in the group of training samples: generating an output predicted image, wherein the output predicted image is a representation of the subject associated with the training sample with lighting from the lighting direction associated with the training sample; identifying a ground-truth OLAT image included in the group of OLAT images for the training sample that corresponds to the lighting direction for the training sample; calculating a loss that indicates a perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating parameters of the convolutional neural network based on the calculated loss; identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and generating a relit image of the subject included in each of the second group of spherical color gradient images with lighting from the second lighting direction using the trained convolutional neural network.
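
    The abstract above amounts to a supervised training loop: predict a relit image from spherical color gradient images and a lighting direction, compare it to the ground-truth OLAT image for that direction, and update the network. The sketch below shows that loop in outline using a toy convolutional model and an L1 loss as a stand-in for the perceptual loss; the architecture and names are assumptions for illustration, not the patented network.

        # Minimal sketch (hypothetical model): predict an OLAT-style relit image from
        # spherical color gradient images plus a lighting direction, and train against
        # the ground-truth OLAT image for that direction.
        import torch
        import torch.nn as nn

        class ToyRelightNet(nn.Module):
            def __init__(self, gradient_channels: int = 6):    # e.g. two RGB gradient images
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(gradient_channels + 3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1))             # predicted relit RGB image

            def forward(self, gradients, light_dir):
                # Broadcast the 3-vector lighting direction to a per-pixel feature map.
                b, _, h, w = gradients.shape
                light = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
                return self.net(torch.cat([gradients, light], dim=1))

        def train_step(model, optimizer, gradients, light_dir, gt_olat):
            optimizer.zero_grad()
            predicted = model(gradients, light_dir)
            loss = nn.functional.l1_loss(predicted, gt_olat)    # stand-in for perceptual loss
            loss.backward()
            optimizer.step()
            return float(loss)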
