Abstract:
Embodiments of the present invention provide an adaptive data path for computer-vision applications. Utilizing techniques provided herein, the data path can adapt to the needs of a computer-vision application to provide the needed data. The data path can be adapted by applying one or more filters to image data from one or more sensors. Some embodiments may utilize a computer-vision processing unit comprising a specialized instruction-based, in-line processor capable of interpreting commands from a computer-vision application.
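As a rough illustration of the idea (not the claimed implementation), the following Python sketch models an adaptive data path: a hypothetical computer-vision application reconfigures a chain of filter callables, and each sensor frame is passed through the current chain. All names and the specific filters are assumptions made for illustration.

# Minimal sketch (illustrative assumptions, not the patented data path): the
# application "commands" the data path by installing a new list of filters.
from typing import Callable, Iterable, List

Frame = List[List[float]]          # toy grayscale frame as nested lists
Filter = Callable[[Frame], Frame]  # a filter maps one frame to another

def downsample_2x(frame: Frame) -> Frame:
    """Keep every other row/column -- stand-in for a resolution filter."""
    return [row[::2] for row in frame[::2]]

def threshold(level: float) -> Filter:
    """Binarize the frame -- stand-in for a feature-extraction filter."""
    def apply(frame: Frame) -> Frame:
        return [[1.0 if px >= level else 0.0 for px in row] for row in frame]
    return apply

class AdaptiveDataPath:
    """Holds the current filter chain and lets the application reconfigure it."""
    def __init__(self) -> None:
        self.filters: List[Filter] = []

    def configure(self, filters: Iterable[Filter]) -> None:
        # The application's "command" is simply a new filter chain here.
        self.filters = list(filters)

    def process(self, frame: Frame) -> Frame:
        for f in self.filters:
            frame = f(frame)
        return frame

if __name__ == "__main__":
    path = AdaptiveDataPath()
    frame = [[float((r + c) % 4) for c in range(8)] for r in range(8)]
    path.configure([downsample_2x, threshold(2.0)])  # adapt to the app's needs
    print(path.process(frame))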
Abstract:
Systems, methods, apparatuses, and computer-readable media are provided for use with a system configured to detect gestures. In one embodiment, a method includes detecting a first user gesture meeting a first condition to enter a mode of operation. The method may further include exiting the mode of operation. The method may further include detecting a second user gesture meeting a second condition to reenter the mode of operation based on the detection of the first user gesture, wherein the second condition is less stringent than the first condition.
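One way such conditions might be realized, sketched below under the assumption that a condition's "stringency" is a required gesture hold duration: entering the mode requires a long hold, while re-entering shortly after an exit requires a shorter one. The durations, the re-entry window, and the class name are assumptions, not the claimed method.

# Minimal sketch: a relaxed (less stringent) re-entry condition after a recent exit.
import time
from typing import Optional

class GestureModeController:
    ENTER_HOLD_S = 1.0      # first condition: hold the gesture for 1.0 s
    REENTER_HOLD_S = 0.3    # second, less stringent condition: 0.3 s
    REENTER_WINDOW_S = 5.0  # relaxation applies only briefly after an exit

    def __init__(self) -> None:
        self.in_mode = False
        self.last_exit_time: Optional[float] = None

    def _required_hold(self, now: float) -> float:
        recently_exited = (
            self.last_exit_time is not None
            and now - self.last_exit_time <= self.REENTER_WINDOW_S
        )
        return self.REENTER_HOLD_S if recently_exited else self.ENTER_HOLD_S

    def on_gesture_held(self, hold_duration_s: float, now: Optional[float] = None) -> bool:
        """Report a detected gesture hold; returns True if the mode is (re)entered."""
        now = time.monotonic() if now is None else now
        if not self.in_mode and hold_duration_s >= self._required_hold(now):
            self.in_mode = True
        return self.in_mode

    def exit_mode(self, now: Optional[float] = None) -> None:
        self.in_mode = False
        self.last_exit_time = time.monotonic() if now is None else now

if __name__ == "__main__":
    ctrl = GestureModeController()
    print(ctrl.on_gesture_held(1.2, now=10.0))  # meets the first condition -> True
    ctrl.exit_mode(now=12.0)
    print(ctrl.on_gesture_held(0.4, now=14.0))  # shorter hold suffices soon after exit -> True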
Abstract:
Systems and methods for switching between voice dictation modes using a gesture are provided so that an alternate meaning may be applied to a dictated word. The provided systems and methods time stamp detected gestures and detected words from the voice dictation and compare the time stamp at which a gesture is detected to the time stamp at which a word is detected. When it is determined that a time stamp of a gesture approximately matches a time stamp of a word, the word may be processed to have an alternate meaning, such as a command, punctuation, or action.
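The sketch below illustrates the time-stamp comparison in Python under assumed details: a fixed matching tolerance and a small table of alternate meanings. The event format, tolerance value, and names are hypothetical.

# Minimal sketch: words whose time stamps roughly coincide with a gesture are
# replaced by an alternate meaning (e.g. punctuation or a command).
from dataclasses import dataclass
from typing import List, Optional

MATCH_TOLERANCE_S = 0.25  # assumed meaning of "approximately matches"

@dataclass
class Event:
    timestamp: float  # seconds since capture start
    value: str        # word text, or gesture label

ALTERNATES = {"period": ".", "comma": ",", "question mark": "?"}

def _nearest_gesture(t: float, gestures: List[Event]) -> Optional[Event]:
    candidates = [g for g in gestures if abs(g.timestamp - t) <= MATCH_TOLERANCE_S]
    return min(candidates, key=lambda g: abs(g.timestamp - t), default=None)

def interpret_words(words: List[Event], gestures: List[Event]) -> List[str]:
    """Return dictation output, applying alternate meanings to gesture-marked words."""
    output = []
    for word in words:
        if _nearest_gesture(word.timestamp, gestures) is not None:
            output.append(ALTERNATES.get(word.value, f"<{word.value}>"))
        else:
            output.append(word.value)
    return output

if __name__ == "__main__":
    words = [Event(0.2, "hello"), Event(0.9, "period"), Event(1.5, "world")]
    gestures = [Event(0.95, "pinch")]  # pinch detected near the word "period"
    print(interpret_words(words, gestures))  # -> ['hello', '.', 'world']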
Abstract:
Systems, methods, apparatuses, and computer-readable media are provided for engaging and re-engaging a gesture mode. In one embodiment, a method performed by the computer system detects an initial presence of a user pose, indicates to a user progress toward achieving a predetermined state while continuing to detect the user pose, determines that the detection of the user pose has reached the predetermined state, and responds to the detection of the user pose based on determining that the detection has reached the predetermined state. The computer system may further prompt the user by displaying a representation of the user pose corresponding to an option for a user decision, detect the user decision based at least in part on determining that the detection of the user pose has reached the predetermined state, and respond to the user decision.
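One plausible reading of "progress toward achieving a predetermined state" is a pose held for a target duration. The sketch below assumes exactly that, with hypothetical callback names for the progress indication and the response; none of it is drawn from the claims.

# Minimal sketch: report hold progress each frame; respond once the target is reached.
from typing import Callable, Optional

class PoseEngagementTracker:
    TARGET_HOLD_S = 1.5  # assumed predetermined state: pose held for 1.5 seconds

    def __init__(self, on_progress: Callable[[float], None],
                 on_engaged: Callable[[], None]) -> None:
        self.on_progress = on_progress  # e.g. update a circular progress indicator
        self.on_engaged = on_engaged    # e.g. accept the user's decision
        self._start: Optional[float] = None
        self._done = False

    def update(self, pose_detected: bool, now: float) -> None:
        """Call once per frame with the pose-detection result and a timestamp."""
        if not pose_detected:
            self._start, self._done = None, False  # pose lost: reset progress
            self.on_progress(0.0)
            return
        if self._start is None:
            self._start = now
        progress = min((now - self._start) / self.TARGET_HOLD_S, 1.0)
        self.on_progress(progress)
        if progress >= 1.0 and not self._done:
            self._done = True
            self.on_engaged()

if __name__ == "__main__":
    tracker = PoseEngagementTracker(
        on_progress=lambda p: print(f"progress: {p:.0%}"),
        on_engaged=lambda: print("pose engaged -- respond to user decision"),
    )
    for t in (0.0, 0.5, 1.0, 1.6):  # simulated frame timestamps, pose always present
        tracker.update(pose_detected=True, now=t)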
Abstract:
Systems and methods for performing an action based on a detected gesture are provided. The systems and methods provided herein may detect a direction of an initial touchless gesture and process subsequent touchless gestures based on the direction of the initial touchless gesture. The systems and methods may translate a coordinate system related to a user device and a gesture library based on the detected direction such that subsequent touchless gestures may be processed based on the detected direction. The systems and methods may allow a user to make a touchless gesture over a device to interact with the device independent of the orientation of the device since the direction of the initial gesture can set the coordinate system or context for subsequent gesture detection.
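The coordinate translation can be illustrated with the following sketch, under assumed mechanics: the first swipe's direction becomes the new +x axis, and later displacement vectors are rotated into that frame before being matched against a tiny direction library. All names and the classification scheme are hypothetical.

# Minimal sketch: the initial gesture sets the coordinate frame for later gestures.
import math
from typing import Optional, Tuple

Vec = Tuple[float, float]

class OrientationAwareGestures:
    def __init__(self) -> None:
        self._angle: Optional[float] = None  # rotation set by the initial gesture

    def observe(self, dx: float, dy: float) -> Optional[str]:
        """Feed a detected gesture displacement; returns a gesture label."""
        if self._angle is None:
            # First gesture: record its direction as the new +x axis.
            self._angle = math.atan2(dy, dx)
            return "calibrate"
        # Rotate subsequent gestures by -angle so detection is orientation independent.
        cos_a, sin_a = math.cos(-self._angle), math.sin(-self._angle)
        rx, ry = dx * cos_a - dy * sin_a, dx * sin_a + dy * cos_a
        return self._classify((rx, ry))

    @staticmethod
    def _classify(v: Vec) -> str:
        x, y = v
        if abs(x) >= abs(y):
            return "swipe-right" if x > 0 else "swipe-left"
        return "swipe-down" if y > 0 else "swipe-up"

if __name__ == "__main__":
    g = OrientationAwareGestures()
    g.observe(0.0, 1.0)         # initial swipe sets the frame
    print(g.observe(0.0, 1.0))  # same physical direction now reads as "swipe-right"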