-
Publication No.: US09207760B1
Publication Date: 2015-12-08
Application No.: US13630563
Filing Date: 2012-09-28
Applicant: Google Inc.
Inventor: Bo Wu , Yong Zhao , Hartmut Neven , Hayes Solos Raffle , Thad Eugene Starner
IPC: G06F15/18 , G06N5/02 , G06N3/02 , G06F3/01 , G06K9/62 , G06K9/00 , G06N99/00 , G02B27/00 , G02B27/01
CPC classification number: G06F3/013 , G02B27/0093 , G02B2027/0187 , G06K9/00281 , G06K9/00288 , G06K9/00617 , G06K9/623 , G06K9/6231 , G06K9/6257 , G06K9/626 , G06K9/6263 , G06N99/005
Abstract: This disclosure involves proximity sensing of eye gestures using a machine-learned model. An illustrative method comprises receiving training data that includes proximity-sensor data. The data is generated by at least one proximity sensor of a head-mountable device (HMD). The data is indicative of light received by the proximity sensor(s). The light is received by the proximity sensor(s) after a reflection of the light from an eye area. The reflection occurs while an eye gesture is being performed at the eye area. The light is generated by at least one light source of the HMD. The method further comprises applying a machine-learning process to the training data to generate at least one classifier for the eye gesture. The method further comprises generating an eye-gesture model that includes the at least one classifier for the eye gesture. The model is applicable to subsequent proximity-sensor data for detection of the eye gesture.
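The training flow in the abstract — record proximity-sensor traces during eye gestures, learn a classifier per gesture, bundle the classifiers into a model applicable to later sensor data — can be sketched as below. The nearest-centroid classifier and the trace layout are illustrative assumptions, not the patented machine-learning process.

```python
# Sketch of the abstract's pipeline: proximity traces labeled by gesture are
# used to fit one classifier per gesture; the resulting model is applied to
# subsequent proximity-sensor data to detect the gesture.

def train_eye_gesture_model(training_data):
    """training_data: dict mapping gesture label -> list of proximity traces."""
    model = {}
    for gesture, traces in training_data.items():
        # One "classifier" per gesture: the centroid of its training traces.
        length = len(traces[0])
        model[gesture] = [sum(t[i] for t in traces) / len(traces)
                          for i in range(length)]
    return model

def detect_eye_gesture(model, trace):
    """Apply the model to subsequent proximity-sensor data."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda g: dist(model[g], trace))

# Toy traces: reflected-light intensity samples during two hypothetical gestures.
data = {
    "wink":  [[0.9, 0.2, 0.9], [0.8, 0.1, 0.9]],
    "blink": [[0.9, 0.5, 0.5], [0.8, 0.6, 0.4]],
}
model = train_eye_gesture_model(data)
print(detect_eye_gesture(model, [0.85, 0.15, 0.9]))  # closest to the wink centroid
```

A real HMD would replace the centroid step with the patent's machine-learning process; the surrounding train/detect split is the part the abstract actually describes.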
-
Publication No.: US20140253430A1
Publication Date: 2014-09-11
Application No.: US13963969
Filing Date: 2013-08-09
Applicant: Google Inc.
Inventor: Richard Carl Gossweiler, III , Yong Zhao
IPC: G06F3/01
CPC classification number: G06F9/451 , G06F3/017 , G06K9/00335
Abstract: Systems and methods for processing spatial gestures are provided. In some aspects, depth data is received from one or more depth cameras. Positions of a plurality of body parts of a person in a field of view of the one or more depth cameras are determined based on the received depth data. A spatial gesture made by the person is determined based on the positions of the plurality of body parts. The spatial gesture is translated into an event. The event is provided, via a two-way socket, to a web client for executing a function in response to the event.
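The chain described in the abstract — depth-derived body-part positions are classified into a spatial gesture, which is translated into an event for a web client — can be sketched as follows. The swipe heuristic and the JSON event schema are illustrative assumptions, and the two-way socket send is stubbed out.

```python
import json

def classify_gesture(joints):
    """joints: dict of body-part name -> (x, y, z) from the depth camera."""
    hand_x = joints["right_hand"][0]
    shoulder_x = joints["right_shoulder"][0]
    # Toy rule: hand well to the right of the shoulder reads as a swipe.
    return "swipe_right" if hand_x - shoulder_x > 0.3 else "idle"

def gesture_to_event(gesture, joints):
    """Translate the gesture into the event payload sent over the socket."""
    return json.dumps({"type": "gesture", "name": gesture,
                       "joints": {k: list(v) for k, v in joints.items()}})

joints = {"right_hand": (0.9, 1.2, 2.0), "right_shoulder": (0.4, 1.4, 2.1)}
event = gesture_to_event(classify_gesture(joints), joints)
print(event)
```

In practice the `event` string would be written to a WebSocket (or similar two-way socket) connected to the web client, which executes a function in response.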
-
Publication No.: US20150172634A1
Publication Date: 2015-06-18
Application No.: US13914727
Filing Date: 2013-06-11
Applicant: Google Inc.
Inventor: Aaron Joseph Wheeler , Christian Plagemann , Hendrik Dahlkamp , Liang-Yu Chi , Yong Zhao , Varun Ganapathi , Alejandro Jose Kauffmann
IPC: H04N13/02
CPC classification number: H04N13/117 , H04N13/239 , H04N13/271 , H04N21/21805 , H04N21/23412 , H04N21/2365 , H04N21/4223 , H04N21/816
Abstract: Systems and techniques are disclosed for visually rendering a requested scene based on a virtual camera perspective request as well as a projection of two or more video streams. The video streams can be captured using two dimensional cameras or three dimensional depth cameras and may capture different perspectives. The projection may be an internal projection that maps out the scene in three dimensions based on the two or more video streams. An object internal or external to the scene may be identified and the scene may be visually rendered based on a property of the object. For example, a scene may be visually rendered based on where a mobile object is located within the scene.
-
Publication No.: US20140258943A1
Publication Date: 2014-09-11
Application No.: US13963975
Filing Date: 2013-08-09
Applicant: Google Inc.
Inventor: Richard Carl Gossweiler, III , Yong Zhao
IPC: G06F3/01
CPC classification number: G06F9/451 , G06F3/017 , G06K9/00335
Abstract: Systems and methods for providing an output responsive to a spatial gesture are provided. In some aspects, an event associated with a spatial gesture or body position information corresponding to the event are received via a two-way socket. A function corresponding to the event is determined, where the function includes modifying data rendered for display at a display device responsive to the spatial gesture. The function is executed.
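The receiving side described here — an event arrives over the two-way socket, the function corresponding to it is determined, and that function modifies the data rendered for display — can be sketched as below. The handler table and event schema are illustrative assumptions.

```python
import json

def handle_event(raw_event, display_state):
    """Determine and execute the function corresponding to a gesture event."""
    event = json.loads(raw_event)
    handlers = {
        "swipe_right": lambda s: {**s, "page": s["page"] + 1},
        "swipe_left":  lambda s: {**s, "page": max(0, s["page"] - 1)},
    }
    # Unknown events leave the rendered state unchanged.
    handler = handlers.get(event["name"], lambda s: s)
    return handler(display_state)

state = {"page": 2}
state = handle_event(json.dumps({"type": "gesture", "name": "swipe_right"}), state)
print(state["page"])  # 3
```

The returned state stands in for "data rendered for display at a display device"; a real client would re-render after each handled event.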
-
Publication No.: US09684374B2
Publication Date: 2017-06-20
Application No.: US14627790
Filing Date: 2015-02-20
Applicant: Google Inc.
Inventor: Thad Eugene Starner , Hayes Solos Raffle , Yong Zhao
CPC classification number: G06F3/013 , G02B27/0093 , G02B27/017 , G02B2027/0138 , G02B2027/014 , G02B2027/0178 , G02B2027/0187 , G06F1/163
Abstract: Example methods and devices are disclosed for generating life-logs with point-of-view images. An example method may involve: receiving image-related data based on electromagnetic radiation reflected from a human eye, generating an eye reflection image based on the image-related data, generating a point-of-view image by filtering the eye reflection image, and storing the point-of-view image. The electromagnetic radiation reflected from a human eye can be captured using one or more video or still cameras associated with a suitably-configured computing device, such as a wearable computing device.
-
Publication No.: US09392248B2
Publication Date: 2016-07-12
Application No.: US13914727
Filing Date: 2013-06-11
Applicant: Google Inc.
Inventor: Aaron Joseph Wheeler , Christian Plagemann , Hendrik Dahlkamp , Liang-Yu Chi , Yong Zhao , Varun Ganapathi , Alejandro Jose Kauffmann
IPC: H04N13/00 , H04N21/218 , H04N21/234 , H04N21/2365 , H04N21/4223 , H04N21/81 , H04N13/02
CPC classification number: H04N13/117 , H04N13/239 , H04N13/271 , H04N21/21805 , H04N21/23412 , H04N21/2365 , H04N21/4223 , H04N21/816
Abstract: Systems and techniques are disclosed for visually rendering a requested scene based on a virtual camera perspective request as well as a projection of two or more video streams. The video streams can be captured using two dimensional cameras or three dimensional depth cameras and may capture different perspectives. The projection may be an internal projection that maps out the scene in three dimensions based on the two or more video streams. An object internal or external to the scene may be identified and the scene may be visually rendered based on a property of the object. For example, a scene may be visually rendered based on where a mobile object is located within the scene.
-
Publication No.: US09265415B1
Publication Date: 2016-02-23
Application No.: US13631333
Filing Date: 2012-09-28
Applicant: Google Inc.
Inventor: Thad Eugene Starner , Bo Wu , Yong Zhao
Abstract: Methods and systems that are described herein may help to dynamically utilize multiple eye-tracking techniques to more accurately determine eye position and/or eye movement. An exemplary system may be configured to: (a) perform at least a first and a second eye-tracking process; (b) determine a reliability indication for at least one of the eye-tracking processes; (c) determine a respective weight for each of the eye-tracking processes based at least in part on the reliability indication; (d) determine a combined eye position based on a weighted combination of eye-position data from the two or more eye-tracking processes, wherein the eye-position data from each eye-tracking process is weighted by the respectively determined weight for the eye-tracking process; and (e) carry out functions based on the combined eye position.
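Steps (c) and (d) of the abstract — derive a weight for each eye-tracking process from its reliability indication, then combine the per-process eye positions by that weighting — can be sketched as follows. The reliability-to-weight mapping (simple normalization) is an illustrative assumption.

```python
def combine_eye_positions(estimates):
    """estimates: list of ((x, y), reliability) pairs, one per tracking process."""
    total = sum(r for _, r in estimates)
    weights = [r / total for _, r in estimates]            # step (c)
    x = sum(w * p[0] for (p, _), w in zip(estimates, weights))
    y = sum(w * p[1] for (p, _), w in zip(estimates, weights))
    return (x, y)                                          # step (d)

# Two hypothetical trackers: one reporting reliability 0.9, the other 0.1.
combined = combine_eye_positions([((10.0, 4.0), 0.9), ((20.0, 8.0), 0.1)])
print(combined)  # dominated by the more reliable tracker
```

When one process degrades (for example, a glint tracker losing its reflections), its reliability and hence its weight drops, and the combined position leans on the remaining processes, which is the dynamic behavior the abstract describes.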
-
Publication No.: US09202280B2
Publication Date: 2015-12-01
Application No.: US14571402
Filing Date: 2014-12-16
Applicant: Google Inc.
Inventor: Bo Wu , Thad Eugene Starner , Hayes Solos Raffle , Yong Zhao , Edward Allen Keyes
IPC: G06T7/00 , G06K9/00 , G09G3/00 , G09G5/00 , G01S17/06 , A61B3/00 , G02B27/01 , G03B29/00 , H04N5/225 , A61B3/113
CPC classification number: G06T7/0044 , A61B3/00 , A61B3/113 , G01S17/06 , G02B27/01 , G02B27/017 , G02B2027/0138 , G02B2027/014 , G02B2027/0178 , G02B2027/0187 , G03B29/00 , G03B2213/025 , G06F3/013 , G06K9/00604 , G06K9/0061 , G06T7/74 , G06T2207/10016 , G06T2207/10152 , G06T2207/30201 , G09G3/003 , G09G5/00 , G09G2354/00 , H04N5/225
Abstract: Methods and systems are described for determining eye position and/or for determining eye movement based on glints. An exemplary computer-implemented method involves: (a) causing a camera that is attached to a head-mounted display (HMD) to record a video of the eye; (b) while the video of the eye is being recorded, causing a plurality of light sources that are attached to the HMD and generally directed towards the eye to switch on and off according to a predetermined pattern, wherein the predetermined pattern is such that at least two of the light sources are switched on at any given time while the video of the eye is being recorded; (c) analyzing the video of the eye to detect controlled glints that correspond to the plurality of light sources; and (d) determining a measure of eye position based on the controlled glints.
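Step (b)'s constraint — a predetermined on/off pattern in which at least two light sources are switched on at any given time during recording — can be satisfied in many ways; one simple sketch cycles through every pair of sources, one pair per video frame. The pair-cycling schedule is an illustrative assumption, not the patent's pattern.

```python
from itertools import combinations

def glint_pattern(num_sources):
    """Return one on/off tuple per frame, each with exactly two sources lit."""
    frames = []
    for pair in combinations(range(num_sources), 2):
        frames.append(tuple(1 if i in pair else 0 for i in range(num_sources)))
    return frames

pattern = glint_pattern(4)
for frame in pattern:
    assert sum(frame) >= 2  # the claim's constraint holds in every frame
print(len(pattern))  # C(4, 2) = 6 distinct frames before the cycle repeats
```

Because each frame lights a known pair, the analysis step (c) can match detected glints in the video against the schedule to tell controlled glints from ambient reflections.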
-
Publication No.: US09158904B1
Publication Date: 2015-10-13
Application No.: US14079941
Filing Date: 2013-11-14
Applicant: Google Inc.
Inventor: Steven James Ross , Henry Will Schneiderman , Michael Christian Nechyba , Yong Zhao
CPC classification number: G06F21/32 , G06K9/00214 , G06K9/00221 , G06K9/00288 , G06K9/00899
Abstract: An example method includes capturing, by an image capture device of a computing device, an image of a face of a user. The method further includes detecting, by the computing device, whether a distance between the computing device and an object represented by at least a portion of the image is less than a threshold distance, and, when the detected distance is less than a threshold distance, denying authentication to the user with respect to accessing one or more functionalities controlled by the computing device, where the authentication is denied independent of performing facial recognition based at least in part on the captured image.
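The gating logic in this abstract — deny authentication whenever the detected object is closer than a threshold, without running facial recognition at all — can be sketched as below. The threshold value and centimeter units are illustrative assumptions.

```python
THRESHOLD_CM = 25.0  # hypothetical minimum distance for a plausible live face

def authenticate(distance_cm, run_facial_recognition):
    """Deny independent of facial recognition when the object is too close."""
    if distance_cm < THRESHOLD_CM:
        return False  # recognition is never invoked on this path
    return run_facial_recognition()

print(authenticate(10.0, lambda: True))  # denied without calling recognition
print(authenticate(40.0, lambda: True))  # falls through to facial recognition
```

Skipping recognition entirely on the near-distance path is the point of the claim: an object held very close to the camera (such as a printed photo) is rejected before it gets a chance to fool the face matcher.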
-
Publication No.: US20150160461A1
Publication Date: 2015-06-11
Application No.: US14627790
Filing Date: 2015-02-20
Applicant: Google Inc.
Inventor: Thad Eugene Starner , Hayes Solos Raffle , Yong Zhao
CPC classification number: G06F3/013 , G02B27/0093 , G02B27/017 , G02B2027/0138 , G02B2027/014 , G02B2027/0178 , G02B2027/0187 , G06F1/163
Abstract: Example methods and devices are disclosed for generating life-logs with point-of-view images. An example method may involve: receiving image-related data based on electromagnetic radiation reflected from a human eye, generating an eye reflection image based on the image-related data, generating a point-of-view image by filtering the eye reflection image, and storing the point-of-view image. The electromagnetic radiation reflected from a human eye can be captured using one or more video or still cameras associated with a suitably-configured computing device, such as a wearable computing device.
-