Abstract:
In some embodiments, techniques for using machine learning to enable visible light pupillometry are provided. In some embodiments, a smartphone may be used to create a visible light video recording of a pupillary light reflex (PLR). A machine learning model may be used to detect a size of a pupil in the video recording over time, and the size over time may be presented to a clinician. In some embodiments, a system that includes a smartphone and a box that holds the smartphone in a predetermined relationship to a subject's face is provided. In some embodiments, a sequential convolutional neural network architecture is used. In some embodiments, a fully convolutional neural network architecture is used.
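The frame-by-frame measurement described above can be sketched in miniature. The sketch below substitutes a simple dark-region threshold for the convolutional network; the threshold value, frame size, and synthetic constriction sequence are illustrative assumptions, not part of the described system.

```python
import numpy as np

def pupil_diameter_px(frame, threshold=50):
    """Estimate a pupil diameter (in pixels) from one grayscale frame.
    Stand-in for the neural network: the pupil is approximated as the
    dark region, so we count pixels below an intensity threshold and
    report the diameter of the equal-area circle."""
    dark_area = np.count_nonzero(frame < threshold)
    return 2.0 * np.sqrt(dark_area / np.pi)

def plr_curve(frames):
    """Pupil size over time: one diameter sample per video frame."""
    return [pupil_diameter_px(f) for f in frames]

def synthetic_frame(radius, size=100):
    """A bright frame with a dark disc of the given radius (a toy pupil)."""
    y, x = np.ogrid[:size, :size]
    frame = np.full((size, size), 200, dtype=np.uint8)
    frame[(x - size // 2) ** 2 + (y - size // 2) ** 2 <= radius ** 2] = 10
    return frame

# A constricting pupil, as in a pupillary light reflex after a light stimulus.
frames = [synthetic_frame(r) for r in (20, 16, 12, 10)]
sizes = plr_curve(frames)
```

In the described system, `pupil_diameter_px` would be replaced by inference with the trained sequential or fully convolutional network; the per-frame loop and the resulting size-over-time curve presented to the clinician are the same shape of output.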
Abstract:
Examples of systems and methods for classifying SpO2 levels using smartphones are described. A wideband light source (e.g., a flash) may be used to illuminate a finger. A wideband imaging sensor (e.g., a camera) may be used to capture images of the illuminated finger. The smartphone may apply per-color channel gain adjustments to the captured images. The adjusted pixel data may be used as the basis of input to a classifier (e.g., a deep learning model). The classifier may be trained on ground truth data, such as from an induced hypoxia study. The classifier may output an SpO2 level of the blood in the finger.
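The per-channel gain step can be illustrated with a minimal sketch. The gain values and the per-channel-mean feature reduction below are illustrative assumptions; the described system would feed richer adjusted pixel data to a trained deep learning classifier.

```python
import numpy as np

def apply_channel_gains(rgb_pixels, gains=(1.0, 1.4, 2.0)):
    """Apply a per-color-channel gain to RGB pixel data, clipping to the
    8-bit range. The gain values here are illustrative placeholders."""
    adjusted = rgb_pixels.astype(np.float64) * np.asarray(gains)
    return np.clip(adjusted, 0.0, 255.0)

def feature_vector(rgb_pixels, gains=(1.0, 1.4, 2.0)):
    """Reduce a gain-adjusted frame to per-channel means, a simple input
    representation that a downstream classifier could consume."""
    return apply_channel_gains(rgb_pixels, gains).reshape(-1, 3).mean(axis=0)

frame = np.full((8, 8, 3), 100, dtype=np.uint8)  # flat gray test frame
feats = feature_vector(frame)  # red unchanged, green and blue boosted
```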
Abstract:
Examples of systems, devices, and methods are described herein that can provide for gesture recognition. Wireless communication signals are received from sources in an environment (e.g., cellular telephones, computers, etc.). Features of the wireless communication signals (e.g., Doppler shifts) are extracted and utilized to identify gestures. The use of wireless communication signals accordingly makes possible gesture recognition in a whole-home environment that identifies gestures performed through walls or other obstacles.
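Extracting a Doppler-shift feature can be sketched as a peak search over the spectrum of a received baseband capture. The sampling rate, the simulated 30 Hz shift, and the single-peak estimator below are illustrative assumptions, not the described system's signal chain.

```python
import numpy as np

def dominant_doppler_shift(baseband, sample_rate):
    """Estimate the dominant Doppler shift (Hz) of a complex baseband
    capture by locating the peak of its magnitude spectrum."""
    spectrum = np.abs(np.fft.fft(baseband))
    freqs = np.fft.fftfreq(len(baseband), d=1.0 / sample_rate)
    return freqs[int(np.argmax(spectrum))]

# Simulate a reflection from a moving hand: a +30 Hz tone sampled at 1 kHz.
fs = 1000
t = np.arange(fs) / fs
reflection = np.exp(2j * np.pi * 30.0 * t)
shift = dominant_doppler_shift(reflection, fs)
```

A gesture-recognition pipeline would track such shift estimates over time and match the resulting motion profile against known gesture templates.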
Abstract:
Systems and methods for sensing environmental changes using electromagnetic interference (EMI) signals are disclosed herein. An EMI monitoring system may be used to monitor an EMI signal of one or more EMI signal sources provided over a power line, e.g., in a home or building. The received EMI energy at the power line may be analyzed to detect variations in the EMI signature indicative of environmental changes occurring in the proximity of the signal sources. Examples include detection of gestures on or near liquid crystal displays using EMI signals generated by internal operation of the liquid crystal displays.
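Detecting variations in an EMI signature can be sketched as comparing a normalized spectral fingerprint of the power-line capture against a baseline. The bin count, distance threshold, and synthetic signals below are illustrative assumptions.

```python
import numpy as np

def emi_signature(samples, n_bins=32):
    """Coarse EMI signature: the binned magnitude spectrum of a
    power-line capture, normalized so signatures are comparable."""
    magnitude = np.abs(np.fft.rfft(samples))
    sig = np.array([b.mean() for b in np.array_split(magnitude, n_bins)])
    return sig / (np.linalg.norm(sig) + 1e-12)

def change_detected(baseline, current, threshold=0.3):
    """Flag an environmental change when the current signature drifts
    from the baseline by more than a (tunable) Euclidean distance."""
    return np.linalg.norm(current - baseline) > threshold

rng = np.random.default_rng(0)
t = np.arange(4096)
# A steady EMI tone plus measurement noise stands in for the signal source.
quiet = np.sin(2 * np.pi * 40 * t / 4096) + 0.05 * rng.standard_normal(4096)
recapture = quiet + 0.05 * rng.standard_normal(4096)  # same environment
# A nearby gesture shifts EMI energy into new frequencies.
disturbed = quiet + 0.8 * np.sin(2 * np.pi * 800 * t / 4096)

baseline = emi_signature(quiet)
```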
Abstract:
By monitoring pressure transients in a liquid within a liquid distribution system using only a single sensor, events such as the opening and closing of valves at specific fixtures are readily detected. The sensor, which can readily be coupled to a faucet bib, transmits an output signal to a computing device. Each such event can be identified by the device by comparing characteristic features of the pressure transient waveform with previously observed characteristic features for events in the system. These characteristic features, which can include the varying pressure, derivative, and real cepstrum of the pressure transient waveform, can be used to select a specific fixture where a valve open or close event has occurred. Flow to each fixture and leaks in the system can also be determined from the pressure transient signal. A second sensor disposed at a point disparate from the first sensor provides further event information.
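The three characteristic features named above (the pressure waveform, its derivative, and its real cepstrum) can be computed from a captured transient as follows; the decaying-oscillation test pulse is an illustrative stand-in for a real water-hammer transient.

```python
import numpy as np

def real_cepstrum(waveform):
    """Real cepstrum: the inverse FFT of the log magnitude spectrum.
    A small epsilon guards against log(0)."""
    log_mag = np.log(np.abs(np.fft.fft(waveform)) + 1e-12)
    return np.fft.ifft(log_mag).real

def transient_features(pressure):
    """The feature set named in the abstract: the pressure waveform
    itself, its first derivative, and its real cepstrum."""
    pressure = np.asarray(pressure, dtype=float)
    return {
        "pressure": pressure,
        "derivative": np.gradient(pressure),
        "cepstrum": real_cepstrum(pressure),
    }

# Toy transient: a decaying oscillation, like a water-hammer pulse
# recorded after a valve snaps shut.
t = np.linspace(0.0, 1.0, 256)
pulse = np.exp(-5.0 * t) * np.sin(2.0 * np.pi * 12.0 * t)
features = transient_features(pulse)
```

Matching a new transient to a fixture would then compare these feature vectors against previously observed ones, e.g., by a nearest-neighbor distance.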
Abstract:
An apparatus including a sensing device configured to be coupled to an electrical outlet is provided. The sensing device can include a data acquisition receiver configured to receive electrical noise via the electrical outlet when the sensing device is coupled to the electrical outlet. The electrical outlet can be electrically coupled to an electrical power infrastructure. One or more electrical devices can be coupled to the electrical power infrastructure and can generate at least a portion of the electrical noise on the electrical power infrastructure. The data acquisition receiver can be configured to convert the electrical noise into one or more first data signals. The apparatus also can include a processing module configured to run on a processor of a computational unit. The sensing device can be in communication with the computational unit. The processing module can be further configured to identify each of two or more operating states of each of the one or more electrical devices at least in part using the one or more first data signals. The two or more operating states of each electrical device of the one or more electrical devices can each be a different user-driven operating state of the electrical device when the electrical device is in an on-power state. Other embodiments are provided.
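Identifying user-driven operating states from power-line noise can be sketched as a nearest-signature lookup. The devices, states, and three-element feature vectors below are hypothetical placeholders for noise signatures that would be learned from captured data.

```python
import numpy as np

# Hypothetical labeled noise signatures: for each (device, operating state),
# a feature vector summarizing the electrical noise it puts on the line.
STATE_SIGNATURES = {
    ("dryer", "tumble"): np.array([0.9, 0.1, 0.0]),
    ("dryer", "heat"):   np.array([0.6, 0.5, 0.2]),
    ("tv", "standby"):   np.array([0.1, 0.1, 0.8]),
    ("tv", "playing"):   np.array([0.2, 0.7, 0.6]),
}

def identify_state(noise_features):
    """Return the (device, state) pair whose stored noise signature is
    closest (Euclidean distance) to the observed feature vector."""
    return min(STATE_SIGNATURES,
               key=lambda k: np.linalg.norm(STATE_SIGNATURES[k] - noise_features))
```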
Abstract:
Examples of automatic valve shutoff systems are described which may include an actuation device including an actuator and a valve attachment portion. The valve attachment portion may be configured for attachment with an existing valve in a fluid or compressible gas supply line. The system may further include a controller coupled to the actuation device, wherein the controller is configured to initiate a valve shutoff process in response to a wireless signal. Wake-up circuitry may be coupled to the controller and configured to monitor the supply line for vibrations and activate the controller in response to the vibrations.
Abstract:
A system for classifying a user touch event by a user interacting with a device as an intended key is provided. For different hand postures (e.g., holding the device with the right hand and entering text with the right thumb), the system provides a touch pattern model indicating how the user interacts using that hand posture. The system receives an indication of a user touch event and identifies the hand posture of the user. The system then determines the intended key based on the user touch event and a touch pattern model for the identified hand posture. A system is also provided for determining the amount of pressure a user is applying to the device based on dampening of vibrations as measured by an inertial sensor. A system is provided that uses motion of the device as measured by an inertial sensor to improve the accuracy of text entry.
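A posture-specific touch pattern model can be sketched as a per-posture touch offset applied before nearest-key selection. The postures, offsets, and key coordinates below are hypothetical; a deployed model would be learned from observed touch data for each hand posture.

```python
import math

# Hypothetical per-posture models: the typical (dx, dy) offset, in pixels,
# of a touch from the center of the key the user intended. Right-thumb
# typing, for example, might tend to land low and to the left.
TOUCH_MODELS = {
    "right_thumb": (-4.0, 6.0),
    "left_thumb": (4.0, 6.0),
    "index_finger": (0.0, 2.0),
}

KEY_CENTERS = {"q": (15, 20), "w": (45, 20), "e": (75, 20)}

def intended_key(touch_xy, posture):
    """Pick the key whose center, shifted by the posture's typical touch
    offset, is nearest to the observed touch point."""
    dx, dy = TOUCH_MODELS[posture]
    def distance(key):
        cx, cy = KEY_CENTERS[key]
        return math.hypot(touch_xy[0] - (cx + dx), touch_xy[1] - (cy + dy))
    return min(KEY_CENTERS, key=distance)
```

For example, a touch at (42, 26) under the hypothetical right-thumb model resolves to "w": after shifting the key center (45, 20) by the (-4, +6) offset, that key is by far the nearest.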