-
1.
Publication No.: US20180348885A1
Publication Date: 2018-12-06
Application No.: US16055994
Filing Date: 2018-08-06
Applicant: Apple Inc.
Inventor: Feng Tang, Chong Chen, Haitao Guo, Xiaojin Shi, Thorsten Gernoth
CPC classification number: G06F3/017, G06F3/011, G06F3/012, G06F3/0304, G06F3/16, G06F3/165, G06K9/00375, G06K9/00389, G06K9/6269, G06K9/6282, G06K2209/40
Abstract: Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors.
-
2.
Publication No.: US10444854B2
Publication Date: 2019-10-15
Application No.: US16055994
Filing Date: 2018-08-06
Applicant: Apple Inc.
Inventor: Feng Tang, Chong Chen, Haitao Guo, Xiaojin Shi, Thorsten Gernoth
Abstract: Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors.
-
3.
Publication No.: US20190080508A1
Publication Date: 2019-03-14
Application No.: US16032418
Filing Date: 2018-07-11
Applicant: Apple Inc.
Inventor: Garrett Johnson, Chong Chen, Frederic Cao
Abstract: Embodiments of the present disclosure can provide systems, methods, and computer-readable medium for providing virtual lighting adjustments to image data. A user interface for presenting and/or modifying image data may be provided via an electronic device. User input may be received that indicates a selection of a virtual lighting mode. Landmark points corresponding to a set of pixels of the image data may be identified based, at least in part, on depth measurement values of the set of pixels. One or more masks may be generated from the landmark points. One or more virtual lighting adjustments associated with the selected virtual lighting mode may be made to the image data using these masks (or the landmark points and an implied geometry of the landmark points). The adjusted/modified image may be presented to the user via the user interface at the electronic device.
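The mask-based lighting adjustment described in this abstract can be illustrated with a minimal sketch. The function names, the box-shaped mask, and the simple brightness gain below are invented simplifications for illustration, not the patent's actual method:

```python
# Hypothetical sketch: derive a mask from landmark points and apply a
# virtual lighting gain inside it. The bounding-box mask and uniform
# gain are illustrative stand-ins for the patent's landmark geometry.

def landmark_mask(landmarks, width, height):
    """Return a binary mask covering the bounding box of the landmarks."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    x0, x1 = max(0, min(xs)), min(width - 1, max(xs))
    y0, y1 = max(0, min(ys)), min(height - 1, max(ys))
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0
             for x in range(width)] for y in range(height)]

def apply_lighting(image, mask, gain=1.5):
    """Brighten masked pixels, clamping to the 8-bit range."""
    return [[min(255, int(px * gain)) if m else px
             for px, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

image = [[100] * 4 for _ in range(4)]          # 4x4 gray test image
mask = landmark_mask([(1, 1), (2, 2)], 4, 4)   # two landmark points
lit = apply_lighting(image, mask)
```

In practice a mask would follow the implied geometry of the landmark points (e.g., a face contour) rather than a bounding box, and the adjustment would vary per lighting mode.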
-
4.
Publication No.: US10048765B2
Publication Date: 2018-08-14
Application No.: US14865850
Filing Date: 2015-09-25
Applicant: Apple Inc.
Inventor: Feng Tang, Chong Chen, Haitao Guo, Xiaojin Shi, Thorsten Gernoth
Abstract: Varying embodiments of intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image, and elements of the scene, such as walls, furniture, and humans, may be evaluated and monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent or desire as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, for example, expressed through fine hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, for example, optical or non-optical type depth sensors. The depth information may be interpreted in “slices” (three-dimensional regions of space having a relatively small depth) until one or more candidate hand structures are detected. Once detected, each candidate hand structure may be confirmed or rejected based on its own unique physical properties (e.g., shape, size, and continuity to an arm structure). Each confirmed hand structure may be submitted to a depth-aware filtering process before its own unique three-dimensional features are quantified into a high-dimensional feature vector. A two-step classification scheme may be applied to the feature vectors to identify a candidate gesture (step 1), and to reject candidate gestures that do not meet a gesture-specific identification operation (step 2). The identified gesture may be used to initiate some action controlled by a computer system.
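The two-step classification scheme in this abstract can be sketched in miniature. The centroids, thresholds, and tiny three-dimensional "feature vectors" below are invented for illustration; the patent's actual classifiers and high-dimensional features are not disclosed here:

```python
import math

# Hedged sketch of the two-step scheme: step 1 identifies the candidate
# gesture (here, nearest centroid); step 2 applies a gesture-specific
# acceptance test and rejects weak matches. All values are hypothetical.

CENTROIDS = {"open_palm": (1.0, 0.0, 0.0),
             "fist":      (0.0, 1.0, 0.0),
             "point":     (0.0, 0.0, 1.0)}
THRESHOLDS = {"open_palm": 0.5, "fist": 0.5, "point": 0.5}

def _dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(feature_vec):
    # Step 1: identify the candidate gesture (nearest centroid).
    candidate = min(CENTROIDS, key=lambda g: _dist(feature_vec, CENTROIDS[g]))
    # Step 2: gesture-specific identification test; reject if too far.
    if _dist(feature_vec, CENTROIDS[candidate]) > THRESHOLDS[candidate]:
        return None
    return candidate

classify((0.9, 0.1, 0.0))   # near the open_palm centroid: accepted
classify((0.5, 0.5, 0.5))   # equally far from all centroids: rejected
```

The rejection step is what makes the scheme two-step: an ambiguous feature vector still has a nearest class in step 1, but fails the per-gesture test in step 2.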
-
5.
Publication No.: US11561621B2
Publication Date: 2023-01-24
Application No.: US16600830
Filing Date: 2019-10-14
Applicant: Apple Inc.
Inventor: Feng Tang, Chong Chen, Haitao Guo, Xiaojin Shi, Thorsten Gernoth
Abstract: Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors.
-
6.
Publication No.: US10740959B2
Publication Date: 2020-08-11
Application No.: US16032418
Filing Date: 2018-07-11
Applicant: Apple Inc.
Inventor: Garrett Johnson, Chong Chen, Frederic Cao
Abstract: Embodiments of the present disclosure can provide systems, methods, and computer-readable medium for providing virtual lighting adjustments to image data. A user interface for presenting and/or modifying image data may be provided via an electronic device. User input may be received that indicates a selection of a virtual lighting mode. Landmark points corresponding to a set of pixels of the image data may be identified based, at least in part, on depth measurement values of the set of pixels. One or more masks may be generated from the landmark points. One or more virtual lighting adjustments associated with the selected virtual lighting mode may be made to the image data using these masks (or the landmark points and an implied geometry of the landmark points). The adjusted/modified image may be presented to the user via the user interface at the electronic device.
-
7.
Publication No.: US20200042096A1
Publication Date: 2020-02-06
Application No.: US16600830
Filing Date: 2019-10-14
Applicant: Apple Inc.
Inventor: Feng Tang, Chong Chen, Haitao Guo, Xiaojin Shi, Thorsten Gernoth
Abstract: Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors.
-
8.
Publication No.: US20170090584A1
Publication Date: 2017-03-30
Application No.: US14865850
Filing Date: 2015-09-25
Applicant: Apple Inc.
Inventor: Feng Tang, Chong Chen, Haitao Guo, Xiaojin Shi, Thorsten Gernoth
CPC classification number: G06F3/017, G06F3/012, G06F3/0304, G06F3/16, G06K9/00375, G06K9/00389, G06K9/6269, G06K9/6282, G06K2209/40
Abstract: Varying embodiments of intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image, and elements of the scene, such as walls, furniture, and humans, may be evaluated and monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent or desire as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, for example, expressed through fine hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, for example, optical or non-optical type depth sensors. The depth information may be interpreted in “slices” (three-dimensional regions of space having a relatively small depth) until one or more candidate hand structures are detected. Once detected, each candidate hand structure may be confirmed or rejected based on its own unique physical properties (e.g., shape, size, and continuity to an arm structure). Each confirmed hand structure may be submitted to a depth-aware filtering process before its own unique three-dimensional features are quantified into a high-dimensional feature vector. A two-step classification scheme may be applied to the feature vectors to identify a candidate gesture (step 1), and to reject candidate gestures that do not meet a gesture-specific identification operation (step 2). The identified gesture may be used to initiate some action controlled by a computer system.