Abstract:
Examples disclosed herein relate to projecting onto a touch-sensitive surface a projection image having projected regions corresponding to target and non-target touch regions. Examples include a computing system having a touch-sensitive surface, and a camera to capture an image representing an object disposed between the camera and the touch-sensitive surface. The computing system may also include a detection engine to identify, based at least on the object represented in the image, at least one touch region of the touch-sensitive surface, and to generate a projection image including a projected region corresponding to the touch region, and a projector to project the projection image onto the touch-sensitive surface.
Abstract:
An image capture device captures data of an input object that has a first retroreflective pattern and a second, different retroreflective pattern on a surface of the input object. A position of the input object in three dimensions is determined based on the captured data.
Abstract:
An example method is provided for presentation of a digital image of an object. The method comprises aligning a plurality of sensors with a projector unit, receiving, from a sensor of the plurality of sensors, an image of an object on a surface, detecting features of the object, and presenting the image on the surface based on the features of the object. The features include location and dimensions, wherein the dimensions of the image match the dimensions of the object and the location of the image overlaps the location of the object on the surface.
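The matching step above amounts to computing a scale and translation that map the captured image onto the detected object's footprint. The following is a minimal sketch of that geometry; the `Rect` type and `alignment_transform` function are illustrative names, not taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle in surface coordinates (illustrative)."""
    x: float
    y: float
    width: float
    height: float

def alignment_transform(image: Rect, obj: Rect):
    """Return (sx, sy, tx, ty): per-axis scale factors and a translation
    such that the image's dimensions match the object's and the image's
    location coincides with the object's location on the surface."""
    sx = obj.width / image.width
    sy = obj.height / image.height
    tx = obj.x - image.x * sx
    ty = obj.y - image.y * sy
    return sx, sy, tx, ty
```

A presentation pipeline could apply this transform before handing the image to the projector, so the projected copy lands exactly on top of the physical object.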
Abstract:
Examples relate to a turntable peripheral for three dimensional (3D) scanning. In some examples, 3D scan data of a real-world object is obtained while the object is rotated by the turntable peripheral. Positioning commands are sent to the turntable peripheral to rotate the object. The 3D scan data is collected while the turntable peripheral is in an untilted and/or tilted position.
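The capture loop described above can be sketched as follows. This is an assumed control flow, not the patent's interface: the `turntable` and `scanner` objects and their methods (`tilt`, `rotate_to`, `capture`) are hypothetical, and a full rotation is simply split into equal steps at each tilt position.

```python
def scan_with_turntable(turntable, scanner, steps=8, tilt_angles=(0, 15)):
    """Collect 3D scan passes while positioning commands rotate (and
    optionally tilt) the turntable peripheral holding the object."""
    passes = []
    step_angle = 360 / steps
    for tilt in tilt_angles:                      # untilted (0) and tilted poses
        turntable.tilt(tilt)
        for i in range(steps):
            turntable.rotate_to(i * step_angle)   # positioning command
            passes.append(scanner.capture())      # 3D scan data at this pose
    return passes
```

Capturing both untilted and tilted sequences helps the scanner see surfaces (such as the top of the object) that a single tilt angle would miss.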
Abstract:
Examples relate to capturing and processing three dimensional (3D) scan data. In some examples, 3D scan data of a real-world object is obtained while the real-world object is repositioned in a number of orientations, where the 3D scan data includes 3D scan passes that are each associated with one of the orientations. A projector is used to project a visual cue related to a position of the real-world object as the real-world object is repositioned at each of the orientations. The 3D scan passes are stitched to generate a 3D model of the real-world object, where a real-time representation of the 3D model is shown on a display as each of the 3D scan passes is incorporated into the 3D model.
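Under a strong simplifying assumption, stitching can be sketched as follows: each scan pass is a set of 3D points captured at a known orientation angle, and merging a pass means rotating its points back by that angle before adding them to the model. Real stitching involves registration and refinement; this only illustrates the incremental merge described above.

```python
import math

def rotate_y(point, degrees):
    """Rotate a 3D point about the vertical axis by the given angle."""
    x, y, z = point
    a = math.radians(degrees)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def stitch(passes):
    """passes: list of (orientation_degrees, points) pairs.
    Undo each pass's rotation and accumulate one merged point set."""
    model = []
    for angle, points in passes:
        model.extend(rotate_y(p, -angle) for p in points)
        # a real system would refresh the on-screen representation here,
        # showing the model grow as each pass is incorporated
    return model
```

In practice a pairwise registration step (e.g. iterative closest point) would correct for errors in the assumed orientation before merging.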
Abstract:
An example system includes a projector unit, an all-in-one computer attachable to the projector unit, a camera communicatively coupled to the all-in-one computer, and a touch sensitive mat communicatively coupled to the all-in-one computer. The projector unit projects an image on to the touch sensitive mat, and the touch sensitive mat comprises an optical pattern used to track a handheld device interacting with the image.
Abstract:
Examples disclosed herein describe, among other things, a computing system. The computing system may in some examples include a touch-sensitive surface, a display, and at least one camera to capture an image representing an object disposed between the camera and the touch-sensitive surface. The computing system may also include a detection engine to determine, based at least on the image, display coordinates, where the display coordinates may correspond to the object's projection onto the touch-sensitive surface, and the display is not parallel to the touch-sensitive surface. In some examples, the detection engine is also to display an object indicator at the determined display coordinates on the display.
Abstract:
In some examples, a projection capture system comprises a controller, a camera operatively connected to the controller for capturing images of an object on a work surface of a workspace, a projector operatively connected to the controller, and a mirror above the projector to reflect light from the projector onto the work surface. The camera is located higher than the projector, and the controller is to control the camera to capture an image of a real object on the work surface, and control the projector to project the image of the real object onto the work surface.
Abstract:
Examples disclosed herein relate to aligning content displayed from a projector onto a touch sensitive mat. Examples include detecting a border of the mat, wherein the mat includes a surface area of a first spectral reflectance characteristic onto which the projector is to project the content, and the border of a second spectral reflectance characteristic different from the first spectral reflectance characteristic surrounding a perimeter of the surface area. As an example, detecting the border of the mat generally includes differentiating the second spectral reflectance characteristic of the border from the first spectral reflectance characteristic of the surface area. Examples include detecting a border of the content displayed onto the mat, and adjusting projector settings so that the border of the displayed content fits within the detected border of the mat.
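Because the border reflects differently from the projection area, one plausible way to locate it is to threshold a captured intensity image and take the bounding box of the bright (border) pixels. This is a hedged sketch of that idea only; the threshold value and the `find_mat_bounds` function are assumptions for illustration.

```python
def find_mat_bounds(intensity, border_threshold=200):
    """Given a 2D grid of pixel intensities, return the bounding box
    (min_row, min_col, max_row, max_col) of pixels whose reflectance
    exceeds the border's threshold, i.e. the mat's outer extent."""
    coords = [(r, c)
              for r, row in enumerate(intensity)
              for c, v in enumerate(row)
              if v >= border_threshold]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)
```

A calibration routine could then adjust the projector's scale and offset so that the border of the projected content lands just inside these bounds.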
Abstract:
Examples disclosed herein relate to identifying a target touch region of a touch-sensitive surface based on an image. Examples include detecting a touch input at a location of a touch-sensitive surface, capturing an image representing an object disposed between the touch-sensitive surface and the camera that captures the image, identifying at least one target touch region of the touch-sensitive surface based on the image, and rejecting the detected touch input when the location of the detected touch input is not within any of the at least one identified target touch region of the touch-sensitive surface.
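The rejection rule above reduces to a point-in-region test: a touch is processed only if its location falls inside at least one identified target region. A minimal sketch, assuming rectangular regions and a hypothetical `accept_touch` helper:

```python
def accept_touch(location, target_regions):
    """location: (x, y) of the detected touch.
    target_regions: list of (x0, y0, x1, y1) rectangles identified
    from the image. Returns True to process the touch, False to
    reject it as falling outside every target touch region."""
    x, y = location
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for x0, y0, x1, y1 in target_regions)
```

Rejecting such touches helps ignore accidental contact, for example from a palm resting on the surface outside the regions the object defines.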