-
Publication No.: US10579905B2
Publication Date: 2020-03-03
Application No.: US15925141
Filing Date: 2018-03-19
Applicant: Google LLC
Inventor: Sean Ryan Fanello , Julien Pascal Christophe Valentin , Adarsh Prakash Murthy Kowdle , Christoph Rhemann , Vladimir Tankovich , Philip L. Davidson , Shahram Izadi
IPC: G06K9/62
Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
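The core idea of the abstract, hashing pixel values into a compact binary code and then letting each pixel adopt the best label seen among its nearest neighbors, can be illustrated with a short sketch. This is not the patented implementation; the projection matrix, thresholds, neighbor function, and candidate codes below are hypothetical stand-ins for the trained first and second functions, and the per-pixel loop would run in parallel in practice.

```python
import numpy as np

def binarize_patches(patches, projection, thresholds):
    """Map pixel patches into a binary space with a learned linear projection.
    `projection` and `thresholds` stand in for the trained first function."""
    return (patches @ projection > thresholds).astype(np.uint8)

def hamming(a, b):
    return np.count_nonzero(a != b, axis=-1)

def four_neighbors(y, x, h, w):
    return [(ny, nx) for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= ny < h and 0 <= nx < w]

def assign_labels(codes, candidate_codes, neighbors=four_neighbors, iterations=5):
    """Iteratively assign each pixel the label, drawn only from its nearest
    neighbors' current labels, whose binary code best matches the pixel's code."""
    h, w, _ = codes.shape
    labels = np.random.randint(len(candidate_codes), size=(h, w))
    for _ in range(iterations):
        new_labels = labels.copy()
        for y in range(h):
            for x in range(w):
                # Candidate set: labels currently held by the neighboring pixels.
                cand = {int(labels[ny, nx]) for ny, nx in neighbors(y, x, h, w)}
                new_labels[y, x] = min(
                    cand, key=lambda c: hamming(codes[y, x], candidate_codes[c]))
        labels = new_labels
    return labels
```

Because each pixel only ever scores the labels of its immediate neighbors, the per-iteration cost is independent of the total number of candidate labels, which mirrors the complexity claim in the abstract.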
-
Publication No.: US12046072B2
Publication Date: 2024-07-23
Application No.: US17437395
Filing Date: 2019-10-10
Applicant: Google LLC
Inventor: Zhijun He , Wen Yu Chien , Po-Jen Chang , Xu Han , Adarsh Prakash Murthy Kowdle , Jae Min Purvis , Lu Gao , Gopal Parupudi , Clayton Merrill Kimber
CPC classification number: G06V40/166 , G06V10/10 , G06V40/16 , G06V40/172 , G06V40/179 , H04N5/04 , H04N23/56 , H04N23/66
Abstract: This disclosure describes systems and techniques for synchronizing cameras and tagging images for face authentication. For face authentication by a facial recognition model, a dual infrared camera may generate an image stream by alternating between capturing a “flood image” and a “dot image” and tagging each image with metadata that indicates whether the image is a flood or a dot image. Accurately tagging images can be difficult due to dropped frames and errors in metadata tags. The disclosed systems and techniques provide for the improved synchronization of cameras and tagging of images to promote accurate facial recognition.
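A minimal sketch of the tagging idea, assuming a hardware frame counter is available: keying the flood/dot tag off the counter's parity rather than off the frame's position in the stream keeps labels correct even when frames are dropped. The `Frame` fields and function names are illustrative, not the disclosed system.

```python
from dataclasses import dataclass

FLOOD, DOT = "flood", "dot"

@dataclass
class Frame:
    sequence: int      # hardware frame counter from the IR camera
    timestamp_us: int
    image: object      # placeholder for the raw image buffer

def tag_stream(frames, first_type=FLOOD):
    """Tag each frame as flood or dot from the parity of its sequence number,
    and count dropped frames from gaps in the counter."""
    other = DOT if first_type == FLOOD else FLOOD
    tagged, dropped, prev = [], 0, None
    for frame in frames:
        if prev is not None:
            dropped += max(0, frame.sequence - prev - 1)
        tagged.append((frame, first_type if frame.sequence % 2 == 0 else other))
        prev = frame.sequence
    return tagged, dropped
```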
-
Publication No.: US20230377183A1
Publication Date: 2023-11-23
Application No.: US18224801
Filing Date: 2023-07-21
Applicant: Google LLC
Inventor: Tim Phillip Wantland , Brandon Charles Barbello , Christopher Max Breithaupt , Michael John Schoenberg , Adarsh Prakash Murthy Kowdle , Bryan Woods , Anshuman Kumar
IPC: G06T7/593 , G06T7/174 , H04N13/128 , G06T19/20
CPC classification number: G06T7/593 , G06T7/174 , H04N13/128 , G06T19/20 , H04N2013/0081
Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
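As a rough illustration of combining a depth map with segmentation data for an edit, the sketch below applies a depth-dependent background blur that leaves the segmented subject sharp. The function and its parameters are hypothetical, not the application's actual editing pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_aware_blur(image, depth, subject_mask, focal_depth, strength=8.0):
    """Blur pixels more strongly the farther they are from the focal depth,
    while keeping pixels inside the segmented subject sharp.

    image:        HxWx3 float array in [0, 1]
    depth:        HxW float array (larger = farther)
    subject_mask: HxW float array in [0, 1] from a segmentation model
    """
    blurred = np.stack([gaussian_filter(image[..., c], sigma=strength)
                        for c in range(image.shape[-1])], axis=-1)
    # Blend weight grows with distance from the focal plane, but is zeroed
    # out on the subject so the segmented foreground stays crisp.
    distance = np.abs(depth - focal_depth)
    weight = np.clip(distance / (distance.max() + 1e-6), 0.0, 1.0)
    weight = weight * (1.0 - subject_mask)
    return image * (1.0 - weight[..., None]) + blurred * weight[..., None]
```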
-
Publication No.: US20230350049A1
Publication Date: 2023-11-02
Application No.: US17661401
Filing Date: 2022-04-29
Applicant: Google LLC
Inventor: Anandghan Waghmare , Dongeek Shin , Ivan Poupyrev , Shwetak N. Patel , Shahram Izadi , Adarsh Prakash Murthy Kowdle
IPC: G01S13/72 , G01S7/35 , G06F3/0338 , G06F3/01
CPC classification number: G01S13/723 , G01S7/35 , G06F3/0338 , G06F3/014 , G06F2203/0331
Abstract: A method including transmitting, by a peripheral device communicatively coupled to a wearable device, a frequency-modulated continuous wave (FMCW) signal, receiving, by the peripheral device, a reflected signal based on the FMCW signal, tracking, by the peripheral device, a movement associated with the peripheral device based on the reflected signal, and communicating, from the peripheral device to the wearable device, information corresponding to the movement associated with the peripheral device.
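The range information underlying this kind of tracking comes from the beat frequency of the mixed FMCW signal. The sketch below uses the standard relation R = f_b · c · T / (2B) for a linear chirp; the signal shapes, sampling parameters, and the range-delta motion measure are assumptions, not the device's actual processing.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def range_from_beat(beat_signal, fs, sweep_bandwidth, sweep_time):
    """Estimate target range from the beat (mixed) signal of one FMCW chirp.

    For a linear chirp, a target at range R produces a beat frequency
    f_b = 2 * R * B / (c * T), so R = f_b * c * T / (2 * B)."""
    spectrum = np.abs(np.fft.rfft(beat_signal))
    freqs = np.fft.rfftfreq(len(beat_signal), d=1.0 / fs)
    f_beat = freqs[np.argmax(spectrum)]
    return f_beat * C * sweep_time / (2.0 * sweep_bandwidth)

def track_motion(chirps, fs, sweep_bandwidth, sweep_time):
    """Turn a sequence of chirp beat signals into per-chirp range deltas,
    a crude stand-in for the movement tracking described in the abstract."""
    ranges = [range_from_beat(c, fs, sweep_bandwidth, sweep_time) for c in chirps]
    return np.diff(ranges)
```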
-
Publication No.: US20230274491A1
Publication Date: 2023-08-31
Application No.: US18001659
Filing Date: 2021-09-01
Applicant: GOOGLE LLC
Inventor: Eric Turner , Adarsh Prakash Murthy Kowdle , Bicheng Luo , Juan David Hincapie Ramos
Abstract: A method including receiving (S605) a request for a depth map, generating (S625) a hybrid depth map based on a device depth map (110) and downloaded depth information (105), and responding (S630) to the request for the depth map with the hybrid depth map (415). The device depth map (110) can be depth data captured on a user device (515) using sensors and/or software. The downloaded depth information (105) can be associated with depth data, map data, image data, and/or the like stored on a server (505) remote to the user device.
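A minimal sketch of one way such a hybrid depth map could be formed, assuming holes in the device depth map are marked as NaN or zero and the downloaded depth is already registered to the same view; the blending rule is illustrative, not the claimed method.

```python
import numpy as np

def build_hybrid_depth(device_depth, downloaded_depth, confidence=None):
    """Combine a device depth map with downloaded depth information.

    Holes (NaN or zero) in the sensor depth are filled from the downloaded
    depth; where both are valid, the device measurement is preferred, or the
    two are blended by an optional per-pixel confidence map."""
    device_valid = np.isfinite(device_depth) & (device_depth > 0)
    if confidence is None:
        return np.where(device_valid, device_depth, downloaded_depth)
    conf = np.where(device_valid, np.clip(confidence, 0.0, 1.0), 0.0)
    return conf * np.nan_to_num(device_depth) + (1.0 - conf) * downloaded_depth
```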
-
Publication No.: US20230258798A1
Publication Date: 2023-08-17
Application No.: US17651152
Filing Date: 2022-02-15
Applicant: GOOGLE LLC
Inventor: Dongeek Shin , Adarsh Prakash Murthy Kowdle , Jingying Hu , Andrea Colaco
CPC classification number: G01S15/104 , G06F3/03 , G02B27/017
Abstract: Smart glasses including a first audio device, a second audio device, a frame including a first portion, a second portion, and a third portion, the second portion and the third portion being moveable in relation to the first portion, the second portion including the first audio device and the third portion including the second audio device, and a processor configured to cause the first audio device to generate a signal, receive the signal via the second audio device, estimate a distance based on the received signal, and determine a configuration of the frame.
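One plausible reading of the distance estimation is acoustic time-of-flight between the two temple-mounted audio devices; the sketch below estimates the delay by cross-correlation and maps the resulting distance to a coarse frame state. The threshold and the folded/open classification are assumptions for illustration, not the disclosed configuration logic.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def estimate_distance(emitted, recorded, fs):
    """Estimate speaker-to-microphone distance from the one-way acoustic
    time of flight, taken as the peak lag of the cross-correlation."""
    corr = np.correlate(recorded, emitted, mode="full")
    lag = np.argmax(corr) - (len(emitted) - 1)
    delay_s = max(lag, 0) / fs
    return SPEED_OF_SOUND * delay_s

def classify_frame_state(distance_m, folded_threshold_m=0.05):
    """Map the estimated distance between the two temple-mounted audio
    devices to a coarse frame configuration."""
    return "folded" if distance_m < folded_threshold_m else "open"
```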
-
Publication No.: US20220335638A1
Publication Date: 2022-10-20
Application No.: US17596794
Filing Date: 2021-04-19
Applicant: Google LLC
Inventor: Abhishek Kar , Hossam Isack , Adarsh Prakash Murthy Kowdle , Aveek Purohit , Dmitry Medvedev
Abstract: According to an aspect, a method for depth estimation includes receiving image data from a sensor system, generating, by a neural network, a first depth map based on the image data, where the first depth map has a first scale, obtaining depth estimates associated with the image data, and transforming the first depth map to a second depth map using the depth estimates, where the second depth map has a second scale.
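A common way to realize such a transform is to fit a global scale and shift mapping the network's relative depth onto sparse metric depth estimates. The least-squares sketch below is illustrative and is not necessarily the specific transform claimed here.

```python
import numpy as np

def align_depth(relative_depth, sparse_depth, sparse_mask):
    """Fit a scale and shift that maps a network's relative depth map onto
    sparse metric depth estimates (least squares over valid pixels), then
    apply it to produce a metric-scale depth map.

    relative_depth: HxW float array from the neural network (first scale)
    sparse_depth:   HxW float array of metric depth estimates
    sparse_mask:    HxW bool array marking pixels where estimates exist
    """
    d = relative_depth[sparse_mask]
    z = sparse_depth[sparse_mask]
    A = np.stack([d, np.ones_like(d)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, z, rcond=None)
    return scale * relative_depth + shift
```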
-
Publication No.: US20220191374A1
Publication Date: 2022-06-16
Application No.: US17439762
Filing Date: 2019-09-25
Applicant: Google LLC
Inventor: Adarsh Prakash Murthy Kowdle , Ruben Manuel Velarde , Zhijun He , Xu Han , Kourosh Derakshan , Shahram Izadi
Abstract: This document describes techniques and systems that enable automatic exposure and gain control for face authentication. The techniques and systems include a user device initializing a gain for a near-infrared camera system using a default gain. The user device ascertains patch-mean statistics of one or more regions-of-interest of a most-recently captured image that was captured by the near-infrared camera system. The user device computes an update in the initialized gain to provide an updated gain that is usable to scale the one or more regions-of-interest toward a target mean-luminance value. The user device dampens the updated gain by using hysteresis. Then, the user device sets the initialized gain for the near-infrared camera system to the dampened updated gain.
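The gain update described above, scaling the regions of interest toward a target mean luminance and then damping the change with hysteresis, can be sketched in a few lines. The target value, hysteresis factor, and gain limits below are arbitrary illustrative choices rather than the values used by the system.

```python
def update_gain(current_gain, patch_means, target_mean=128.0,
                hysteresis=0.5, min_gain=1.0, max_gain=16.0):
    """One step of a simple auto-gain loop for a near-infrared camera.

    `patch_means` are the mean luminances of the regions of interest in the
    most recently captured image; the raw update scales them toward
    `target_mean`, and the hysteresis factor damps the change to avoid
    oscillating between frames."""
    observed = sum(patch_means) / len(patch_means)
    raw_gain = current_gain * (target_mean / max(observed, 1e-6))
    damped = current_gain + hysteresis * (raw_gain - current_gain)
    return min(max(damped, min_gain), max_gain)
```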
-
Publication No.: US10997457B2
Publication Date: 2021-05-04
Application No.: US16616235
Filing Date: 2019-10-16
Applicant: Google LLC
Inventor: Christoph Rhemann , Abhimitra Meka , Matthew Whalen , Jessica Lynn Busch , Sofien Bouaziz , Geoffrey Douglas Harvey , Andrea Tagliasacchi , Jonathan Taylor , Paul Debevec , Peter Joseph Denny , Sean Ryan Francesco Fanello , Graham Fyffe , Jason Angelo Dourgarian , Xueming Yu , Adarsh Prakash Murthy Kowdle , Julien Pascal Christophe Valentin , Peter Christopher Lincoln , Rohit Kumar Pandey , Christian Häne , Shahram Izadi
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample includes (i) a group of one-light-at-a-time (OLAT) images that have each been captured when one light of a plurality of lights arranged on a lighting structure has been activated, (ii) a group of spherical color gradient images that have each been captured when the plurality of lights arranged on the lighting structure have been activated to each emit a particular color, and (iii) a lighting direction, wherein each image in the group of OLAT images and each of the spherical color gradient images are an image of a subject, and wherein the lighting direction indicates a relative orientation of a light to the subject; training a convolutional neural network using the group of training samples, wherein training the convolutional neural network comprises: for each training iteration in a series of training iterations and for each training sample in the group of training samples: generating an output predicted image, wherein the output predicted image is a representation of the subject associated with the training sample with lighting from the lighting direction associated with the training sample; identifying a ground-truth OLAT image included in the group of OLAT images for the training sample that corresponds to the lighting direction for the training sample; calculating a loss that indicates a perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating parameters of the convolutional neural network based on the calculated loss; identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and generating a relit image of the subject included in each of the second group of spherical color gradient images with lighting from the second lighting direction using the trained convolutional neural network.
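A toy sketch of the training loop's shape, written in PyTorch with an L1 loss standing in for the perceptual loss described above; the tiny network, channel counts, and lighting-direction encoding are placeholders, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class RelightNet(nn.Module):
    """Toy stand-in for the relighting CNN: maps spherical color gradient
    images plus an encoded lighting direction to a relit image."""
    def __init__(self, in_channels=9):  # e.g. three 3-channel gradient images
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels + 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, gradient_images, light_dir):
        # Broadcast the 3-vector lighting direction to a per-pixel feature map.
        b, _, h, w = gradient_images.shape
        light = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
        return self.net(torch.cat([gradient_images, light], dim=1))

def train_step(model, optimizer, gradient_images, light_dir, gt_olat,
               loss_fn=nn.L1Loss()):
    """One iteration: predict a relit image, compare it with the ground-truth
    OLAT image for the same lighting direction, and update the weights."""
    optimizer.zero_grad()
    pred = model(gradient_images, light_dir)
    loss = loss_fn(pred, gt_olat)
    loss.backward()
    optimizer.step()
    return loss.item()
```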
-
Publication No.: US10554957B2
Publication Date: 2020-02-04
Application No.: US15996880
Filing Date: 2018-06-04
Applicant: Google LLC
Inventor: Julien Pascal Christophe Valentin , Sean Ryan Fanello , Adarsh Prakash Murthy Kowdle , Christoph Rhemann , Vladimir Tankovich , Philip L. Davidson , Shahram Izadi
IPC: G06K9/64 , H04N13/271 , H04N19/597 , H04N13/128 , G06K9/62 , G06T7/593 , H04N13/00
Abstract: A first and second image of a scene are captured. Each of a plurality of pixels in the first image is associated with a disparity value. An image patch associated with each of the plurality of pixels of the first image and the second image is mapped into a binary vector. Thus, values of pixels in an image are mapped to a binary space using a function that preserves characteristics of values of the pixels. The difference between the binary vector associated with each of the plurality of pixels of the first image and its corresponding binary vector in the second image designated by the disparity value associated with each of the plurality of pixels of the first image is determined. Based on the determined difference between binary vectors, correspondence between the plurality of pixels of the first image and the second image is established.
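Scoring candidate disparities by the Hamming distance between binarized patch descriptors can be sketched as follows; the brute-force refinement window and descriptor layout are illustrative, and a real implementation would vectorize the loops.

```python
import numpy as np

def refine_disparity(codes_left, codes_right, init_disparity, search=2):
    """Check and refine candidate disparities by comparing binary patch codes.

    codes_left/right: HxWxB uint8 arrays of binarized patch descriptors.
    init_disparity:   HxW int array of candidate disparities for the left image.
    For each pixel, the candidate within +/- `search` of the initial disparity
    whose right-image code has the smallest Hamming distance to the left-image
    code is kept, establishing the correspondence."""
    h, w, _ = codes_left.shape
    out = init_disparity.copy()
    for y in range(h):
        for x in range(w):
            best_d, best_cost = out[y, x], np.inf
            for d in range(init_disparity[y, x] - search, init_disparity[y, x] + search + 1):
                xr = x - d
                if d < 0 or xr < 0 or xr >= w:
                    continue
                cost = np.count_nonzero(codes_left[y, x] != codes_right[y, xr])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            out[y, x] = best_d
    return out
```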