Abstract:
A “Contact Discriminator” provides various techniques for differentiating between valid and invalid contacts received from any input methodology by one or more touch-sensitive surfaces of a touch-sensitive computing device. Examples of contacts include single, sequential, concurrent, or simultaneous user finger touches (including gesture-type touches), pen or stylus touches or inputs, hover-type inputs, or any combination thereof. The Contact Discriminator then acts on valid contacts (i.e., contacts intended as inputs) while rejecting or ignoring invalid contacts or inputs. Advantageously, the Contact Discriminator is further capable of disabling or ignoring regions of input surfaces, such as tablet touch screens, that are expected to receive unintentional contacts, or intentional contacts not intended as inputs, for device or application control purposes. Examples of contacts not intended as inputs include, but are not limited to, a user's palm resting on a touch screen while the user writes on that screen with a stylus or pen.
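The abstract does not specify the discrimination logic itself, but a minimal Python sketch of one plausible heuristic is shown below: a contact is rejected if its area looks palm-sized, or if it lands near an active stylus where the writing hand is expected to rest. The thresholds, field names, and disc-shaped rejection region are illustrative assumptions, not the patented method.

```python
# Hypothetical contact filter: reject palm-like or near-stylus touches.
from dataclasses import dataclass
from typing import Optional, Tuple
import math

@dataclass
class Contact:
    x: float          # contact position on the touch surface, in mm
    y: float
    area_mm2: float   # estimated contact area

def is_valid_contact(contact: Contact,
                     stylus_active: bool,
                     stylus_pos: Optional[Tuple[float, float]],
                     palm_area_threshold: float = 300.0,
                     palm_reject_radius_mm: float = 80.0) -> bool:
    """Return True if the contact should be treated as an intentional input."""
    # Large-area contacts are characteristic of a resting palm, not a fingertip.
    if contact.area_mm2 > palm_area_threshold:
        return False
    # While a stylus is in use, ignore touches inside the region where the
    # writing hand is expected to rest (modeled here as a disc around the pen tip).
    if stylus_active and stylus_pos is not None:
        dx, dy = contact.x - stylus_pos[0], contact.y - stylus_pos[1]
        if math.hypot(dx, dy) < palm_reject_radius_mm:
            return False
    return True

if __name__ == "__main__":
    palm = Contact(x=105.0, y=62.0, area_mm2=450.0)    # rejected: too large
    finger = Contact(x=20.0, y=150.0, area_mm2=60.0)   # accepted: small, far from pen
    print(is_valid_contact(palm, stylus_active=True, stylus_pos=(100.0, 60.0)))    # False
    print(is_valid_contact(finger, stylus_active=True, stylus_pos=(100.0, 60.0)))  # True
```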
Abstract:
A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically synchronizes projector lighting (or light-control point) on-state temporal compression with on-state temporal shifting during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD-based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc.
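As a rough illustration of the timing tradeoff described above, the Python sketch below computes how much the per-frame on-state must be compressed, where the resulting capture time slot falls, and the corresponding brightness loss. The 60 Hz frame rate and 4 ms slot are illustrative assumptions; the sketch is not the synchronization mechanism itself.

```python
# Plan a per-frame dark interval for camera capture by compressing and shifting
# the projector's on-state within each frame period.
def plan_capture_slot(frame_period_ms: float, capture_slot_ms: float):
    """Return (on_time_ms, slot_start_ms, relative_brightness)."""
    if not 0.0 < capture_slot_ms < frame_period_ms:
        raise ValueError("capture slot must be shorter than the frame period")
    # Temporal compression: the light source is on only for the remainder of the frame.
    on_time_ms = frame_period_ms - capture_slot_ms
    # Temporal shifting: push the on-state to the start of the frame so the
    # dark interval (the capture slot) falls at the end of each frame.
    slot_start_ms = on_time_ms
    # Brightness drops in proportion to the reduced duty cycle -- the tradeoff
    # between capture time and projected-image brightness noted above.
    relative_brightness = on_time_ms / frame_period_ms
    return on_time_ms, slot_start_ms, relative_brightness

if __name__ == "__main__":
    # 60 Hz projection (about 16.7 ms frames) with a 4 ms capture slot per frame.
    on_ms, start_ms, brightness = plan_capture_slot(1000.0 / 60.0, 4.0)
    print(f"project for {on_ms:.2f} ms, capture from {start_ms:.2f} ms, "
          f"brightness ~ {brightness:.0%} of full")
```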
Abstract:
The recognition of user input to a computing device is enhanced. The user input is one of the following: speech; handwriting data input by the user making screen-contacting gestures; a combination of one or more prescribed words spoken by the user and one or more prescribed screen-contacting gestures made by the user; or a combination of one or more prescribed words spoken by the user and one or more prescribed non-screen-contacting gestures made by the user.
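A minimal sketch of how a prescribed spoken word and a prescribed gesture might be fused into a single command is shown below. The command table and recognizer labels ("delete", "strike-through", etc.) are hypothetical placeholders, not the prescribed sets referred to in the abstract.

```python
# Hypothetical multimodal fusion: map a recognized (word, gesture) pair to a command.
from typing import Optional

# Prescribed (word, gesture) pairs mapped to application commands (illustrative).
COMMANDS = {
    ("delete", "strike-through"): "delete-selection",  # screen-contacting gesture
    ("zoom", "pinch-out"): "zoom-in",                  # screen-contacting gesture
    ("select", "point"): "select-target",              # non-screen-contacting gesture
}

def fuse_inputs(spoken_word: Optional[str], gesture: Optional[str]) -> Optional[str]:
    """Return the command for a recognized word+gesture pair, if any."""
    if spoken_word and gesture:
        return COMMANDS.get((spoken_word, gesture))
    return None

if __name__ == "__main__":
    print(fuse_inputs("delete", "strike-through"))  # delete-selection
    print(fuse_inputs("zoom", "tap"))               # None (not a prescribed pair)
```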
Abstract:
A sensor manager provides dynamic input fusion using thermal imaging to identify and segment a region of interest. Thermal overlay is used to focus heterogeneous sensors on regions of interest according to optimal sensor ranges and to reduce ambiguity of objects of interest. In one implementation, a thermal imaging sensor locates a region of interest that includes an object of interest within predetermined wavelengths. Based on the thermal imaging sensor input, the region on which each of a plurality of sensors is focused, and the parameters each sensor employs to capture data from a region of interest, are dynamically adjusted. The thermal imaging sensor input may be used during data pre-processing to dynamically eliminate or reduce unnecessary data and to dynamically focus data processing on sensor input corresponding to a region of interest.
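The Python sketch below illustrates the general idea under stated assumptions: a thermal frame is thresholded to a temperature band to locate a region of interest, and a second, co-registered sensor's frame is then cropped to that region so downstream processing ignores data outside it. The temperature band, array shapes, and pixel-aligned co-registration are assumptions for illustration only.

```python
# Thermal-guided region of interest: threshold a thermal frame, then crop
# another sensor's frame to the resulting bounding box.
import numpy as np

def thermal_roi(thermal_frame: np.ndarray, t_min: float, t_max: float):
    """Return (row0, row1, col0, col1) bounding the pixels within [t_min, t_max]."""
    mask = (thermal_frame >= t_min) & (thermal_frame <= t_max)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max() + 1, cols.min(), cols.max() + 1

def focus_sensor_on_roi(sensor_frame: np.ndarray, roi) -> np.ndarray:
    """Crop a co-registered sensor's frame to the thermal region of interest."""
    r0, r1, c0, c1 = roi
    return sensor_frame[r0:r1, c0:c1]

if __name__ == "__main__":
    thermal = np.full((120, 160), 20.0)        # 20 C background
    thermal[40:60, 70:100] = 34.0              # warm object of interest
    rgb = np.zeros((120, 160, 3))              # co-registered color frame
    roi = thermal_roi(thermal, t_min=30.0, t_max=40.0)
    print(roi)                                  # (40, 60, 70, 100)
    print(focus_sensor_on_roi(rgb, roi).shape)  # (20, 30, 3)
```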
Abstract:
Various embodiments are disclosed that relate to the presentation of video images in a presentation space via a head-mounted display. For example, one disclosed embodiment comprises receiving viewer location data and orientation data from a location and orientation sensing system, and, from the viewer location data and the viewer orientation data, locating a viewer in the presentation space, determining a direction in which the viewer is facing, and determining an orientation of the head-mounted display system. From the determined location, direction, and orientation, a presentation image is determined based upon a portion of, and an orientation of, a volumetric image mapped to the portion of the presentation space that is within the viewer's field of view. The presentation image is then sent to the head-mounted display.
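A simplified Python sketch of the field-of-view test implied above is given below: given the viewer's location and facing direction, it selects the points of a volumetric image (represented here as a point cloud in presentation-space coordinates) that fall within an assumed field of view. The point cloud, field-of-view angle, and conical-FOV model are illustrative assumptions, not the disclosed rendering pipeline.

```python
# Select the portion of a volumetric image visible from the viewer's pose.
import numpy as np

def points_in_view(points: np.ndarray, viewer_pos: np.ndarray,
                   facing: np.ndarray, fov_deg: float = 90.0) -> np.ndarray:
    """Return the subset of 3D points within the viewer's field of view."""
    facing = facing / np.linalg.norm(facing)
    to_points = points - viewer_pos                # vectors from viewer to each point
    dist = np.linalg.norm(to_points, axis=1)
    dist[dist == 0.0] = 1e-9                       # avoid division by zero
    cos_angle = (to_points @ facing) / dist        # cosine of the angle off the view axis
    return points[cos_angle >= np.cos(np.radians(fov_deg / 2.0))]

if __name__ == "__main__":
    cloud = np.array([[2.0, 0.0, 1.5],    # directly ahead
                      [0.0, 2.0, 1.5],    # off to the side
                      [-2.0, 0.0, 1.5]])  # behind the viewer
    visible = points_in_view(cloud, viewer_pos=np.array([0.0, 0.0, 1.5]),
                             facing=np.array([1.0, 0.0, 0.0]), fov_deg=90.0)
    print(visible)  # only the point directly ahead
```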
Abstract:
A node device in a distributed virtual environment captures locational signals projected by another node device into a capture area of the node device and reflected from the capture area to a capture device of the node device. The location of the node device relative to the other node device is determined based on the captured locational signals. The determined location can be based on an angular relationship determined between the node device and the other node device based on the captured locational signals. The determined location can also be based on a relative distance determined between the node device and the other node device based on the captured locational signals. Topology of the capture area can also be detected by the node device, and topologies of multiple capture areas can be combined to define one or more surfaces in a virtual environment.
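One way such an angular relationship and relative distance could be recovered is sketched below in Python, assuming the locational signals are two projected markers with a known physical separation observed by a pinhole camera. The focal length, pixel coordinates, and marker format are hypothetical, not the patented signal scheme.

```python
# Estimate another node's relative distance and bearing from two projected
# locational markers observed in this node's capture area (pinhole-camera model).
import math

def locate_other_node(u_left: float, u_right: float,
                      image_center_u: float, focal_length_px: float,
                      marker_separation_m: float):
    """Return (distance_m, bearing_rad) of the projecting node relative to this one."""
    separation_px = abs(u_right - u_left)
    # Pinhole relation: pixel separation shrinks linearly with distance.
    distance_m = focal_length_px * marker_separation_m / separation_px
    # Angular relationship: bearing of the marker pair's midpoint off the optical axis.
    midpoint_u = (u_left + u_right) / 2.0
    bearing_rad = math.atan2(midpoint_u - image_center_u, focal_length_px)
    return distance_m, bearing_rad

if __name__ == "__main__":
    # Two markers 0.5 m apart appear 100 px apart, centered 60 px right of the
    # image center, on a camera with an 800 px focal length (all hypothetical).
    d, b = locate_other_node(u_left=650.0, u_right=750.0,
                             image_center_u=640.0, focal_length_px=800.0,
                             marker_separation_m=0.5)
    print(f"distance ~ {d:.2f} m, bearing ~ {math.degrees(b):.1f} deg")
```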
Abstract:
A method for constructing a 3D representation of a subject comprises capturing, with a camera, a 2D image of the subject. The method further comprises scanning a modulated illumination beam over the subject to illuminate, one at a time, a plurality of target regions of the subject, and measuring a modulation aspect of light from the illumination beam reflected from each of the target regions. A moving-mirror beam scanner is used to scan the illumination beam, and a photodetector is used to measure the modulation aspect. The method further comprises computing a depth aspect based on the modulation aspect measured for each of the target regions, and associating the depth aspect with a corresponding pixel of the 2D image.
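A minimal sketch of the depth computation alone (the beam scanning and photodetection are hardware steps) is shown below, assuming the measured modulation aspect is the phase shift of an amplitude-modulated beam; each computed depth is then associated with the 2D-image pixel for its target region. The modulation frequency, phase values, and pixel coordinates are illustrative.

```python
# Depth from the phase shift of a reflected amplitude-modulated beam, stored
# per 2D-image pixel for each scanned target region.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth in meters from the measured modulation phase shift (no phase wrapping)."""
    # The round trip covers 2 * depth, and one modulation period spans 2*pi of phase.
    return (C * phase_shift_rad) / (4.0 * math.pi * mod_freq_hz)

if __name__ == "__main__":
    depth_map = {}     # (pixel_x, pixel_y) -> depth in meters
    mod_freq = 20e6    # 20 MHz modulation (assumed)
    # Measured phase shifts for three scanned target regions and their image pixels.
    measurements = [((120, 80), 0.50), ((121, 80), 0.52), ((122, 80), 1.10)]
    for pixel, phase in measurements:
        depth_map[pixel] = depth_from_phase(phase, mod_freq)
    for pixel, depth in depth_map.items():
        print(pixel, f"{depth:.3f} m")
```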