Abstract:
A computing device, such as a desktop, laptop, or tablet computer, a mobile device, or a computing device integrated into another device (e.g., an entertainment device for gaming, a television, an appliance, a kiosk, a vehicle, a tool, etc.), is configured to determine user input commands from the location and/or movement of one or more objects in a space. The object(s) can be imaged using one or more optical sensors, and the resulting position data can be interpreted in any number of ways to determine a command. An interactive volume can be defined and adjusted so that the same movement, performed at different locations within the volume, may result in different corresponding cursor movement or in different interpretations of the input.
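As an illustration of the position-dependent mapping described above (not part of the abstract itself), the following sketch assumes a hypothetical interactive volume in which cursor gain varies linearly with the object's depth, so that an identical hand movement produces different cursor movement at different locations in the volume. All names and parameters here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class InteractiveVolume:
    """Hypothetical interactive volume in sensor coordinates."""
    z_near: float           # depth closest to the display (metres)
    z_far: float            # depth farthest from the display (metres)
    min_gain: float = 0.5   # cursor gain applied at z_far
    max_gain: float = 2.0   # cursor gain applied at z_near

    def gain_at(self, z: float) -> float:
        """Interpolate gain linearly with depth, clamped to the volume."""
        z = min(max(z, self.z_near), self.z_far)
        t = (z - self.z_near) / (self.z_far - self.z_near)
        return self.max_gain + t * (self.min_gain - self.max_gain)

def cursor_delta(volume: InteractiveVolume, dx: float, dy: float, z: float):
    """Scale a hand movement (dx, dy) by the gain at depth z."""
    g = volume.gain_at(z)
    return dx * g, dy * g

# The same 10 mm hand movement maps to different cursor movement
# depending on where in the volume it occurs.
vol = InteractiveVolume(z_near=0.2, z_far=0.8)
print(cursor_delta(vol, 10.0, 0.0, z=0.2))  # (20.0, 0.0) near the display
print(cursor_delta(vol, 10.0, 0.0, z=0.8))  # (5.0, 0.0) far from it
```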
Abstract:
A computing device, such as a desktop, laptop, or tablet computer, a mobile device, or a computing device integrated into another device (e.g., an entertainment device for gaming, a television, an appliance, a kiosk, a vehicle, a tool, etc.), is configured to determine user input commands from the location and/or movement of one or more objects in a space. The object(s) can be imaged using one or more optical sensors, and the resulting position data can be interpreted in any number of ways to determine a command, including two-dimensional and three-dimensional movements with or without touch.
Abstract:
A position detection system includes at least two optical units configured to image a space, a memory, and a processing device interfaced with the memory and the optical units. The processing device is configured to access image data from the first and second optical units and use this data to determine at least one of a current first position and a current second position representing touch points on a display. The processing device can define a polygon having at least four sides based on the current first and current second positions and can access the memory to store and retrieve the polygon. If the processing device can determine only one of the current first position or the current second position from the accessed image data, it can use the previously defined polygon to estimate the position that was not determined from the accessed image data.
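The estimation step described above can be illustrated with a minimal sketch (not from the abstract; the quadrilateral construction and class below are assumptions): when both touch points are resolved, a four-sided polygon spanned by them is stored, and when one point is later lost to occlusion, the stored polygon's diagonal is reused to estimate it.

```python
from typing import Optional, Tuple

Point = Tuple[float, float]

class TwoTouchTracker:
    """Sketch: remember a quadrilateral spanned by the last two confirmed
    touch points and use it to estimate a point the optical units can no
    longer resolve (e.g., one touch occluding the other)."""

    def __init__(self) -> None:
        self.quad: Optional[Tuple[Point, Point, Point, Point]] = None

    def update(self, p1: Optional[Point], p2: Optional[Point]):
        if p1 is not None and p2 is not None:
            # Both points resolved: store the axis-aligned quadrilateral
            # with the touches at opposite corners.
            (x1, y1), (x2, y2) = p1, p2
            self.quad = ((x1, y1), (x2, y1), (x2, y2), (x1, y2))
            return p1, p2
        if self.quad is not None:
            # Only one point resolved: estimate the other by reusing the
            # stored quadrilateral's diagonal offset.
            (ax, ay), _, (cx, cy), _ = self.quad
            dx, dy = cx - ax, cy - ay
            if p1 is not None:
                return p1, (p1[0] + dx, p1[1] + dy)
            if p2 is not None:
                return (p2[0] - dx, p2[1] - dy), p2
        return p1, p2
```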
Abstract:
A method of mapping subsurface fracture geometry below a surface of the ground includes two independently powered systems, namely a plurality of sensors distributed through a hole in the subsurface and a downhole tool that facilitates reception and transmission of signal data from the plurality of sensors. The sensors are distributed into fissures within formations that have been hydraulically fractured. The sensors send signal data to the downhole tool for transmission to a unit on the surface. The signal data permits mapping of the fissures within the fractured formations.
Abstract:
A mounting assembly for an optical touch system can comprise an elongated member defining a top face extending in a plane. A first mounting portion extends perpendicular to a first end of the elongated member, and a second mounting portion extends perpendicular to a second end. The elongated member defines a first side face perpendicular to the top face, and each mounting portion defines a second side face perpendicular to the top face. The mounting assembly is directly mounted to a panel with the top face of the elongated member overlaying at least a portion of the front face of the panel, the first side face overlaying at least a portion of the top edge, the top face of each mounting portion overlaying at least a portion of the front face along the side edges, and each second side face overlaying at least a portion of a respective side edge.
Abstract:
A coordinate detection system can comprise a display screen, a touch surface corresponding to the top of the display screen or to a material positioned above the screen and defining a touch area, at least one camera outside the touch area and configured to capture an image of the space above the touch surface, and a processor executing program code to identify, based on the image captured by the at least one camera, whether an object interferes with light from a light source projected through the touch surface. The processor can be configured to carry out a calibration routine utilizing a single touch point in order to determine a plane corresponding to the touch surface by using mirror images of features adjacent to the touch surface, images of the features, and/or the touch point and a normal to the reflective plane defined by an image of the object and its mirror image.
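The single-touch calibration described above can be sketched as follows (illustrative only; the function and coordinate conventions are assumptions): if the 3D positions of the object tip and its mirror image in the touch surface are known, the surface normal is parallel to the segment joining them, and the segment's midpoint lies on the surface, which fixes the plane.

```python
import numpy as np

def plane_from_reflection(obj_pt: np.ndarray, mirror_pt: np.ndarray):
    """Sketch: recover the touch-surface plane from a single calibration
    touch, assuming 3D positions for the object tip and its mirror image
    (its reflection in the surface) have already been triangulated.

    The surface normal is parallel to the segment joining the object and
    its reflection, and the segment's midpoint lies on the surface."""
    normal = obj_pt - mirror_pt
    normal = normal / np.linalg.norm(normal)
    midpoint = (obj_pt + mirror_pt) / 2.0
    d = -np.dot(normal, midpoint)   # plane: n . x + d = 0
    return normal, d

# A fingertip hovering 5 mm above a z = 0 surface and its mirror image.
n, d = plane_from_reflection(np.array([0.1, 0.2, 0.005]),
                             np.array([0.1, 0.2, -0.005]))
print(n, d)   # -> [0. 0. 1.] -0.0
```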
Abstract:
A touch screen uses light sources at one or more edges of the screen that direct light across the surface of the screen, and at least two cameras having electronic outputs located at the periphery of the screen to receive light from said light sources. A processor receives the outputs of said cameras and employs triangulation techniques to determine the location of an object proximate to said screen. Detecting the presence of an object includes detecting at the cameras the presence or absence of direct light due to the object, using the screen surface as a mirror, and detecting at the cameras the presence or absence of reflected light due to an object. The light sources may be modulated to provide a frequency band in the output of the cameras.
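The triangulation step can be illustrated with a minimal sketch (not from the abstract; the camera positions and bearing conventions below are assumed): each camera reports the bearing at which the object is seen, and the touch point is the intersection of the two bearing rays.

```python
import math

def triangulate(cam1, angle1, cam2, angle2):
    """Sketch of the triangulation step: each camera at a known position
    reports the bearing (radians, in screen coordinates) at which the
    object interrupts the light; the touch point is where the two bearing
    rays intersect. Assumes the bearings are not parallel."""
    # Ray i: p = cam_i + t * (cos(angle_i), sin(angle_i))
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no unique intersection")
    rx, ry = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t = (rx * d2[1] - ry * d2[0]) / denom
    return (cam1[0] + t * d1[0], cam1[1] + t * d1[1])

# Cameras in the top-left and top-right corners of a 1.0 x 0.6 screen,
# both sighting an object at (0.5, 0.3).
print(triangulate((0.0, 0.0), math.atan2(0.3, 0.5),
                  (1.0, 0.0), math.atan2(0.3, -0.5)))  # -> (0.5, 0.3)
```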
Abstract:
An optical touch detection system may rely on triangulating points in a touch area based on the direction of shadows cast by an object interrupting light in the touch area. When two interruptions occur simultaneously, ghost points and true touch points triangulated from the shadows can be distinguished from one another without resorting to additional light detectors. In some embodiments, the distance from a touch point to a single light detector can be determined or estimated based on the change in the length of a shadow detected by the light detector when multiple light sources are used. The true touch points can then be identified by comparing the distance determined from the shadow extension to the distance calculated from the triangulated locations of the candidate points.
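A minimal sketch of the disambiguation step (illustrative only; the pairing representation and function below are assumptions, not the patent's method): given the two candidate pairings produced by triangulation and a shadow-derived distance estimate from the detector to one touch point, the pairing whose geometry best matches the estimate is taken as the true pair.

```python
import math

def pick_true_points(detector, candidates, est_dist):
    """Sketch: 'candidates' holds the two possible touch-point pairings
    produced by triangulation (one true pair, one ghost pair). 'est_dist'
    is the distance from the detector to one touch point, as estimated
    from the shadow-length change under a second light source. Return the
    pairing containing the point whose geometric distance to the detector
    best matches the estimate."""
    def dist(p):
        return math.hypot(p[0] - detector[0], p[1] - detector[1])

    def score(pair):
        return min(abs(dist(p) - est_dist) for p in pair)

    return min(candidates, key=score)

true_pair = pick_true_points(
    detector=(0.0, 0.0),
    candidates=[((0.3, 0.4), (0.6, 0.8)),   # distances 0.5 and 1.0
                ((0.3, 0.8), (0.6, 0.4))],  # distances ~0.854 and ~0.721
    est_dist=0.5,
)
print(true_pair)  # -> ((0.3, 0.4), (0.6, 0.8))
```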