Abstract:
Techniques disclosed herein utilize a vision sensor that integrates a special-purpose camera with dedicated computer vision (CV) computation hardware and a dedicated low-power microprocessor for the purposes of detecting, tracking, recognizing, and/or analyzing subjects, objects, and scenes in the view of the camera. The vision sensor processes the information retrieved from the camera using the included low-power microprocessor and sends “events” (indications that one or more reference occurrences have occurred, possibly with associated data) to the main processor only when needed or as defined and configured by the application. This allows the general-purpose microprocessor (which is typically relatively high-speed and high-power to support a variety of applications) to stay in a low-power state (e.g., a sleep mode) most of the time, becoming active only when events are received from the vision sensor.
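The event-driven flow described above can be illustrated with a short sketch. This is a minimal illustration only, assuming a hypothetical frame source, detection test, and wake-up callback standing in for the dedicated CV hardware and the main-processor interrupt path; it is not the disclosed implementation.

```python
# Minimal sketch (hypothetical frame source, detection test, and wake-up call).
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "face_detected" (a reference occurrence)
    payload: dict  # optional associated data

def detect_reference_occurrence(frame) -> Event | None:
    """Stand-in for the on-sensor, low-power CV processing."""
    if frame.get("face_present"):  # hypothetical detection result
        return Event("face_detected", {"count": frame.get("face_count", 1)})
    return None

def wake_main_processor(event: Event) -> None:
    """Stand-in for interrupting the (normally sleeping) main processor."""
    print(f"main processor woken by event: {event.kind}, data={event.payload}")

def vision_sensor_loop(frames) -> None:
    for frame in frames:                            # camera readout
        event = detect_reference_occurrence(frame)  # processed on the vision sensor
        if event is not None:
            wake_main_processor(event)              # event sent only when needed
        # otherwise the main processor stays in its low-power state

vision_sensor_loop([{"face_present": False},
                    {"face_present": True, "face_count": 1}])
```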
Abstract:
This disclosure provides systems, methods and apparatus related to touch and gesture recognition with an electronic interactive display. The interactive display has a front surface that includes a viewing area, a planar light guide disposed proximate to and behind the front surface, a light source, and at least one photo sensing element coupled with the planar light guide. The planar light guide is configured to receive scattered light, the received scattered light resulting from interaction between light emitted by the light source and an object in optical contact with the front surface. The photo sensing element is configured to detect at least some of the received scattered light and to output, to a processor, image data. The processor is configured to recognize, from the image data, one or both of a contact pressure and a rotational orientation of the object.
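One plausible way a processor could derive these two quantities from the image data is sketched below. The thresholding, the contact-patch area used as a pressure proxy, and the moment-based orientation estimate are assumptions for illustration, not the method defined by the disclosure.

```python
import numpy as np

def analyze_contact(image: np.ndarray, threshold: float = 0.5):
    """Estimate a pressure proxy and orientation from a scattered-light image."""
    mask = image > threshold                 # pixels lit by scattered light
    area = int(mask.sum())                   # larger contact patch ~ harder press
    if area == 0:
        return None
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()            # centroid of the contact patch
    # Second central moments give the principal axis of the patch.
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    orientation = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # radians
    return {"pressure_proxy": area, "orientation_rad": float(orientation)}

# Example: a synthetic elongated contact patch.
img = np.zeros((64, 64))
img[28:36, 20:44] = 1.0
print(analyze_contact(img))
```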
Abstract:
Apparatuses, methods, and systems are presented for reacting to scene-based occurrences. Such an apparatus may comprise dedicated computer vision (CV) computation hardware configured to receive sensor data from a sensor array comprising a plurality of sensor pixels and capable of computing one or more CV features using readings from neighboring sensor pixels of the sensor array. The apparatus may further comprise a first processing unit configured to control operation of the dedicated CV computation hardware. The first processing unit may be further configured to execute one or more application programs and, in conjunction with execution of the one or more application programs, communicate with at least one input/output (I/O) device controller to effectuate an I/O operation in reaction to an event generated based on operations performed on the one or more computed CV features.
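As an illustration of a CV feature computed from readings of neighboring sensor pixels, the sketch below computes a local binary pattern (LBP) code at one pixel location. LBP is used purely as an example; the abstract does not name a specific feature type, and the disclosed computation is performed in dedicated hardware rather than in software like this.

```python
import numpy as np

def lbp_at(pixels: np.ndarray, r: int, c: int) -> int:
    """8-neighbor local binary pattern code for the pixel at (r, c)."""
    center = pixels[r, c]
    neighbors = [
        pixels[r - 1, c - 1], pixels[r - 1, c], pixels[r - 1, c + 1],
        pixels[r, c + 1], pixels[r + 1, c + 1], pixels[r + 1, c],
        pixels[r + 1, c - 1], pixels[r, c - 1],
    ]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:          # compare each neighbor to the center pixel
            code |= 1 << bit
    return code

sensor_readings = np.random.randint(0, 256, size=(8, 8))
print(lbp_at(sensor_readings, 4, 4))  # one feature value per pixel location
```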
Abstract:
This disclosure provides systems, methods and apparatus for touch and gesture recognition using a field sequential color display. The display includes a processor, a lighting system, and an arrangement for spatial light modulation that includes an array of light modulators. Each light modulator is switchable between an open position that permits transmittance of light from the lighting system through a respective aperture and a shut position that blocks light transmission through the respective aperture. The processor switches the light modulators in accordance with a first modulation scheme to render an image and in accordance with a second modulation scheme to selectively pass object-illuminating light through at least one of the respective apertures. A light sensor receives light resulting from interaction of the object-illuminating light with an object and outputs a signal to the processor. The processor recognizes, from the output of the light sensor, a characteristic of the object.
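The two modulation schemes imply a time-multiplexed frame structure, sketched below: display subframes render the image, while interleaved sensing subframes open selected apertures to pass object-illuminating light and sample the light sensor. All names, data structures, and timings here are hypothetical stand-ins, not the disclosed control scheme.

```python
def render_subframe(modulators, image_plane):
    # First modulation scheme: open/shut modulators to render image content.
    for m, target in zip(modulators, image_plane):
        m["open"] = target

def sensing_subframe(modulators, sensing_apertures, read_light_sensor):
    # Second modulation scheme: open only selected apertures to pass
    # object-illuminating light, then sample the light sensor.
    for i, m in enumerate(modulators):
        m["open"] = i in sensing_apertures
    return read_light_sensor()

def frame_loop(modulators, frames, sensing_apertures, read_light_sensor, recognize):
    for image_plane in frames:
        render_subframe(modulators, image_plane)
        sample = sensing_subframe(modulators, sensing_apertures, read_light_sensor)
        recognize(sample)                     # e.g. infer a touch or gesture

# Tiny demonstration with stand-in hardware.
mods = [{"open": False} for _ in range(4)]
frame_loop(
    modulators=mods,
    frames=[[True, False, True, False]],
    sensing_apertures={1},
    read_light_sensor=lambda: 0.42,
    recognize=lambda s: print("light sensor sample:", s),
)
```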
Abstract:
This disclosure provides systems, methods and apparatus for touch and gesture recognition using a field sequential color display. The display includes a processor, a lighting system, and an arrangement for spatial light modulation that includes a plurality of apertures and devices for opening and shutting the apertures. A light directing arrangement includes at least one light turning feature. The lighting system is configured to emit visible light and infrared (IR) light through at least a first opened one of the plurality of apertures. The light turning feature is configured to redirect IR light emitted through the opened aperture into at least one lobe, and to pass visible light emitted by the lighting system through the opened aperture with substantially no redirection.