Abstract:
Techniques for providing a virtual touch screen are described. An example of a computing device with a virtual touch screen includes a projector to project a user interface image and a depth camera to detect objects in the vicinity of the user interface image. The computing device also includes a touch service that receives image data from the depth camera and analyzes the image data to generate touch event data. The computing device also includes a User Input (UI) device driver that receives the touch event data from the touch service and reports the touch event data to an operating system of the computing device. The touch service and UI device driver are system level software that is operable prior to a user logging onto the computing device.
Abstract:
Techniques for providing a virtual touch screen are described. An example of a computing device with a virtual touch screen includes a projector to project a user interface image onto a touch surface, a depth camera to generate a depth image representing objects in a vicinity of the user interface image, and a touch mask generator to generate a touch mask from the depth image. The computing device also includes a touch detection module to analyze the touch mask to detect touch events. The touch detection module is configured to identify a finger in the touch mask, identify a centroid region of the finger, compute a distance of the centroid region from the touch surface, and compare the distance to a threshold distance to identify a touch event.
Abstract:
In accordance with some embodiments, a touch input device such as a touch screen, track pad, or touch pad may be operated in mouse mode by touching the screen simultaneously with more than one finger. In one embodiment, three fingers may be utilized. The three fingers in one embodiment may be the thumb together with the index finger and the middle finger. The index finger and the middle finger may then be used to left or right click to enter a virtual mouse command.
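The three-finger mouse mode described above can be sketched as a simple gesture classifier. This is an illustrative assumption, not the patented logic; the function names, the contact representation, and the finger labels are invented for the example.

```python
def classify_gesture(touch_points):
    """Classify the current contacts on the touch input device.

    touch_points: list of (x, y) contact coordinates reported by the device.
    Three simultaneous contacts (e.g. thumb, index, middle) enter mouse mode.
    """
    if len(touch_points) == 3:
        return "mouse_mode"
    return "normal"

def virtual_mouse_click(tapped_fingers):
    """Map a tap while in mouse mode to a virtual mouse command.

    tapped_fingers: set of finger labels that tapped, e.g. {"index"}.
    The index finger issues a left click; the middle finger a right click.
    """
    if "index" in tapped_fingers:
        return "left_click"
    if "middle" in tapped_fingers:
        return "right_click"
    return None
```

In practice a real driver would also track contact identity over time, but the mapping from finger to click command follows the scheme in the abstract.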
Abstract:
Technologies for performing contextually adaptive media streaming are described. In some embodiments, the technologies leverage contextual information to alter the parameters of a content stream that is provided to a client device from a server. In some embodiments, the parameters of the content stream are altered by changing one or more input parameters (e.g., a report of network parameters) that is/are operated on by adaptive logic of a media player on the client device. Alternatively or additionally, in some embodiments the technologies leverage contextual information to alter the manner in which a client device processes content in a received content stream for consumption. Systems, devices, and methods employing the technologies are also described.
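One way to alter the input parameters consumed by a player's adaptive logic is to scale the reported network bandwidth based on context before the adaptive logic sees it. The sketch below is a hypothetical illustration of that idea; the context keys, the scaling factors, and the bitrate ladder are all assumptions, not values from the described technologies.

```python
def adjust_reported_bandwidth(measured_kbps, context):
    """Alter the bandwidth report fed to the player's adaptive logic,
    based on contextual information about the client device."""
    reported = measured_kbps
    if context.get("battery_low"):
        reported *= 0.5      # steer the player toward lower bitrates
    if context.get("metered_connection"):
        reported *= 0.25     # conserve data on metered links
    return int(reported)

def select_bitrate(reported_kbps, ladder=(500, 1500, 3000, 6000)):
    """Simple adaptive logic: pick the highest bitrate rung that the
    (possibly context-adjusted) bandwidth report supports."""
    eligible = [rung for rung in ladder if rung <= reported_kbps]
    return eligible[-1] if eligible else ladder[0]
```

Because the adaptive logic itself is unchanged, the context-driven adjustment happens entirely in the input it operates on, which matches the "changing one or more input parameters" approach in the abstract.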
Abstract:
Techniques for calibrating touch detection devices are described herein. A method for calibrating touch detection may include detecting, via a processor, predefined calibration points from a projected predefined pattern based on visible image sensor data. The method may also include generating, via the processor, a surface depth model by fitting a surface plane to depth sensor data. The method may further include mapping, via the processor, the detected calibration points to surface depth coordinates. The method may also include mapping, via the processor, an infrared image from infrared (IR) sensor data to the surface depth model using preconfigured image correlations.
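The plane-fitting step of the calibration can be sketched with an ordinary least-squares fit of a plane z = ax + by + c to depth samples. This is a minimal illustration under that assumption; the function names and the sample format are not from the described method.

```python
import numpy as np

def fit_surface_plane(points):
    """Fit a surface plane z = a*x + b*y + c to depth sensor samples.

    points: (N, 3) array of (x, y, depth) samples.
    Returns the plane coefficients (a, b, c) via least squares.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

def surface_depth_at(coeffs, x, y):
    """Evaluate the fitted surface depth model at coordinate (x, y)."""
    a, b, c = coeffs
    return a * x + b * y + c
```

Once fitted, the surface depth model gives an expected surface depth at any coordinate, which is what the detected calibration points and the IR image are mapped against in the later steps.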