Abstract:
Systems and methods may provide for determining a sound vibration condition of an ambient environment of a wearable device and determining a motion condition of the wearable device. In addition, one or more automated voice operations may be performed based at least in part on the sound vibration condition and the motion condition. In one example, two or more signals corresponding to the sound vibration condition and the motion condition may be combined.
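As a minimal sketch of the signal combination described above, the Python below gates an automated voice operation on both a sound vibration condition and a motion condition. The threshold values, function names, and the AND-style fusion rule are illustrative assumptions, not details taken from the abstract.

```python
# Illustrative sketch: gate a voice operation on combined audio and motion
# evidence. Thresholds and the fusion rule are assumptions for the example.

def sound_condition(audio_rms: float, threshold: float = 0.02) -> bool:
    """Treat the ambient environment as 'voiced' when audio energy is high."""
    return audio_rms >= threshold

def motion_condition(accel_magnitude: float, threshold: float = 1.5) -> bool:
    """Treat the wearable as 'in motion' when acceleration exceeds a threshold."""
    return accel_magnitude >= threshold

def should_run_voice_operation(audio_rms: float, accel_magnitude: float) -> bool:
    # Combine the two signals: both conditions must hold before the
    # automated voice operation (e.g. wake-up listening) is performed.
    return sound_condition(audio_rms) and motion_condition(accel_magnitude)

if __name__ == "__main__":
    print(should_run_voice_operation(audio_rms=0.05, accel_magnitude=2.1))  # True
    print(should_run_voice_operation(audio_rms=0.05, accel_magnitude=0.3))  # False
```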
Abstract:
Technologies are described herein that allow a user to wake up a computing device operating in a low-power state and for the user to be verified by speaking a single wake phrase. Wake phrase recognition is performed by a low-power engine. In some embodiments, the low-power engine may also perform speaker verification. In other embodiments, the mobile device wakes up after a wake phrase is recognized and a component other than the low-power engine performs speaker verification on a portion of the audio input comprising the wake phrase. More than one wake phrase may be associated with a particular user, and separate users may be associated with different wake phrases. Different wake phrases may cause the device to transition from a low-power state to various active states.
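The dispatch logic described above can be sketched as a lookup from a recognized wake phrase to an associated user and target power state, with speaker verification as a gate. The phrase table, user names, and state labels below are invented for illustration; a real low-power engine would run an acoustic model rather than string matching.

```python
# Illustrative sketch of wake-phrase dispatch. Phrases, users, and power-state
# labels are invented placeholders, not details from the disclosure.

WAKE_PHRASES = {
    # phrase            -> (associated user, active state to transition into)
    "hello device":      ("alice", "active_full"),
    "wake up assistant": ("alice", "active_voice_only"),
    "hey computer":      ("bob",   "active_full"),
}

def handle_audio(recognized_phrase: str, verified_speaker: str) -> str:
    """Return the power state to enter, or 'low_power' to stay asleep."""
    entry = WAKE_PHRASES.get(recognized_phrase)
    if entry is None:
        return "low_power"            # no wake phrase recognized
    user, target_state = entry
    if verified_speaker != user:      # speaker verification failed
        return "low_power"
    return target_state               # the phrase selects the active state

if __name__ == "__main__":
    print(handle_audio("hello device", "alice"))  # active_full
    print(handle_audio("hello device", "bob"))    # low_power
```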
Abstract:
Apparatus, computer-readable storage medium, and method associated with speech recognition are described. In embodiments, a mobile phone may include a processor and a speech recognition module coupled with the processor. The speech recognition module may be configured to recognize one or more voice commands and may include first echo cancellation logic and second echo cancellation logic to be selectively employed during recognition of voice commands. Employment of the first and second echo cancellation logic may cause the mobile phone to consume a first and a second amount of energy, respectively, with the second amount of energy being less than the first amount of energy.
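A rough sketch of selecting between the two echo cancellation paths follows. The energy costs, filter lengths, and the battery/playback selection policy are assumptions made for the example, not the disclosed selection criteria.

```python
# Illustrative sketch: choose between two echo-cancellation paths with
# different energy costs. All numbers here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class EchoCanceller:
    name: str
    filter_taps: int       # longer filters cancel more echo but cost more energy
    energy_cost_mw: float

FULL_AEC  = EchoCanceller("full",  filter_taps=512, energy_cost_mw=12.0)
LIGHT_AEC = EchoCanceller("light", filter_taps=64,  energy_cost_mw=3.0)

def select_canceller(battery_fraction: float, playback_active: bool) -> EchoCanceller:
    # Use the cheaper second path when energy is scarce or no audio is
    # playing back (little echo to cancel); otherwise use the full path.
    if battery_fraction < 0.2 or not playback_active:
        return LIGHT_AEC
    return FULL_AEC

if __name__ == "__main__":
    print(select_canceller(0.9, playback_active=True).name)   # full
    print(select_canceller(0.1, playback_active=True).name)   # light
```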
Abstract:
Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
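The fusion described above can be illustrated with a toy sketch: blend the on-body and off-body position estimates, classify a gesture, and resolve an intended action on the virtual object. The fusion weights, gesture labels, and grab radius are assumptions for the example, not claims about the patented method.

```python
# Illustrative sketch of fusing on-body and off-body sensor signals into
# position and gesture data, then resolving an intended action.

import math

def fuse_position(on_body_xyz, off_body_xyz, weight_on_body=0.5):
    """Weighted blend of the two position estimates for the body part."""
    w = weight_on_body
    return tuple(w * a + (1 - w) * b for a, b in zip(on_body_xyz, off_body_xyz))

def classify_gesture(wrist_accel: float) -> str:
    """Toy gesture classifier: fast wrist motion reads as a 'swipe'."""
    return "swipe" if wrist_accel > 2.0 else "grab"

def intended_action(hand_xyz, object_xyz, gesture: str, grab_radius=0.15) -> str:
    distance = math.dist(hand_xyz, object_xyz)
    if gesture == "grab" and distance <= grab_radius:
        return "pick_up_object"       # position and gesture agree on intent
    if gesture == "swipe":
        return "push_object"
    return "no_action"

if __name__ == "__main__":
    hand = fuse_position((0.1, 0.0, 0.3), (0.12, 0.02, 0.28))
    print(intended_action(hand, (0.11, 0.01, 0.29), classify_gesture(0.8)))
```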
Abstract:
Various systems and methods for air gesture-based composition and instruction systems are described herein. A composition system for composing gesture-based performances may receive an indication of an air gesture performed by a user; reference a mapping of air gestures to air gesture notations to identify an air gesture notation corresponding to the air gesture; and store an indication of the air gesture notation in a memory of the computerized composition system. Another system used for instruction may present a plurality of air gesture notations in a musical arrangement; receive an indication of an air gesture performed by a user; reference a mapping of air gestures to air gesture notations to identify an air gesture notation corresponding to the air gesture; and guide the user through the musical arrangement by sequentially highlighting the air gesture notations in the musical arrangement based on the mapping of air gestures to air gesture notations.
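A minimal sketch of both modes follows, assuming a simple dictionary as the mapping of air gestures to notations. The gesture names and notation symbols are invented placeholders.

```python
# Illustrative sketch of the gesture-to-notation lookup (composition mode)
# and the sequential highlighting (instruction mode). Names are placeholders.

GESTURE_TO_NOTATION = {
    "downbeat": "|D|",
    "upstroke": "|U|",
    "sweep":    "|S|",
}

def record_gesture(score: list, gesture: str) -> None:
    """Composition mode: append the notation for a performed air gesture."""
    notation = GESTURE_TO_NOTATION.get(gesture)
    if notation is not None:
        score.append(notation)

def guide(arrangement: list, performed: str, cursor: int) -> int:
    """Instruction mode: advance the highlight when the right gesture is played."""
    expected = arrangement[cursor]
    if GESTURE_TO_NOTATION.get(performed) == expected:
        cursor += 1                   # highlight moves to the next notation
    return cursor

if __name__ == "__main__":
    score = []
    record_gesture(score, "downbeat")
    record_gesture(score, "sweep")
    print(score)                        # ['|D|', '|S|']
    print(guide(score, "downbeat", 0))  # 1
```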
Abstract:
Systems and methods may be used to provide effects corresponding to movement of instrument objects or other objects. A method may include receiving sensor data from an object based on movement of the object, recognizing a gesture from the sensor data, and determining an effect, such as a visualization or audio effect corresponding to the gesture. The method may include causing the effect to be output in response to the determination.
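As a sketch of that pipeline (sensor data to gesture to effect), the example below uses threshold-based gesture recognition and a static effect table; the thresholds and effect names are assumptions for the example.

```python
# Illustrative sketch: map sensor data from an instrument object to a
# gesture, then to a visualization/audio effect. Values are assumptions.

from typing import Optional

def recognize_gesture(gyro_rate: float, accel_peak: float) -> str:
    if accel_peak > 3.0:
        return "strike"
    if gyro_rate > 1.0:
        return "strum"
    return "idle"

EFFECTS = {
    "strike": ("flash_visualization", "drum_hit.wav"),
    "strum":  ("wave_visualization",  "chord.wav"),
}

def output_effect(sensor_sample: dict) -> Optional[tuple]:
    gesture = recognize_gesture(sensor_sample["gyro"], sensor_sample["accel"])
    return EFFECTS.get(gesture)       # None when no effect applies

if __name__ == "__main__":
    print(output_effect({"gyro": 0.2, "accel": 4.5}))
    # ('flash_visualization', 'drum_hit.wav')
```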
Abstract:
A touchpad system is disclosed herein that includes a touchpad sensor formed from a plurality of flexible materials and at least partially integrated into host clothing or furniture such that the touchpad sensor is generally obscured from view. The touchpad sensor may be implemented as a resistive or capacitive touchpad, whereby user input is detected and converted into a proportional electrical signal. The touchpad sensor may include N layers of flexible materials configured to conduct electrical energy and to conform to contours of a host object. At least one layer of the flexible materials may be heat sealed or otherwise adhered to an inner surface of the host object, such as beneath a shirt sleeve. Thus, smart clothing/furniture may appear to be “normal” while also providing convenient user access to a touchpad sensor that can robustly handle a wide variety of user input to control operation of a remote computing device.
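To illustrate the conversion of a touch into a proportional electrical signal, the sketch below scales raw ADC readings from a resistive touchpad into normalized coordinates and encodes them for a remote device. The ADC resolution and the command format are assumptions, not part of the disclosure.

```python
# Illustrative sketch of converting raw resistive-touchpad readings into a
# proportional signal. ADC range and command format are assumptions.

ADC_MAX = 1023            # 10-bit ADC assumed

def to_position(raw_x: int, raw_y: int) -> tuple:
    """Scale raw ADC readings to normalized 0.0-1.0 touch coordinates."""
    return raw_x / ADC_MAX, raw_y / ADC_MAX

def to_command(raw_x: int, raw_y: int) -> str:
    """Encode the proportional signal for transmission to a remote device."""
    x, y = to_position(raw_x, raw_y)
    return f"MOVE {x:.3f} {y:.3f}"

if __name__ == "__main__":
    print(to_command(512, 256))   # MOVE 0.500 0.250
```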
Abstract:
The present disclosure describes a number of embodiments related to devices, systems, and methods for locating an object using ultra-wide band (UWB) radio transceivers embedded in carpet or other flexible material that may be rolled up and moved to various locations. Once in a location, the carpet may be unrolled, and the multiple embedded radio transceivers may receive a signal from a tag attached to the object that sends UWB radio signals. Based on the signals received by the UWB radio transceivers, various processes, including time-difference-of-arrival, time-of-flight, and phase shift, may be used to determine the location or the movement of the object.
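As a sketch of the time-difference-of-arrival process, the example below simulates TDOA measurements from fixed transceiver positions and recovers the tag location with a coarse grid search. The anchor layout, propagation-speed constant, and grid resolution are assumptions chosen to keep the example short; a real implementation would solve the hyperbolic equations directly.

```python
# Illustrative sketch of TDOA location with a coarse grid search.
# Anchor positions and the search area are assumptions for the example.

import math

C = 299_792_458.0   # radio propagation speed, m/s

ANCHORS = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # transceivers in the carpet

def tdoa_measurements(tag_xy):
    """Simulate arrival-time differences relative to the first anchor."""
    times = [math.dist(tag_xy, a) / C for a in ANCHORS]
    return [t - times[0] for t in times]

def locate(tdoas, step=0.05):
    """Grid-search the position whose predicted TDOAs best match the input."""
    best, best_err = None, float("inf")
    for i in range(int(4.0 / step) + 1):
        for j in range(int(3.0 / step) + 1):
            candidate = (i * step, j * step)
            err = sum((p - m) ** 2
                      for p, m in zip(tdoa_measurements(candidate), tdoas))
            if err < best_err:
                best, best_err = candidate, err
    return best

if __name__ == "__main__":
    print(locate(tdoa_measurements((1.5, 2.0))))   # close to (1.5, 2.0)
```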