Abstract:
Examples disclosed herein relate to determining a segmentation boundary based on images representing an object. Examples include capturing an IR image based on IR light reflected by an object disposed between an IR camera and an IR-absorbing surface, capturing a color image representing the object disposed between a color camera and the IR-absorbing surface, and determining a segmentation boundary for the object based on the IR image and the color image.
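A minimal sketch of one way such a boundary could be computed, assuming the IR-absorbing surface appears dark in the IR image while the object reflects IR brightly. The file names, the Otsu threshold, and the contour-based boundary are illustrative assumptions, not the disclosed method:

```python
import cv2
import numpy as np

# Assumed inputs: registered IR and color frames of the same scene.
ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
color = cv2.imread("color_frame.png")

# The IR-absorbing surface stays dark while the object reflects IR light,
# so Otsu thresholding of the IR image yields a coarse object mask.
_, mask = cv2.threshold(ir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

# Color-image edges could be used to refine the boundary where IR contrast
# is weak; here they are only computed as a hint.
edges = cv2.Canny(cv2.cvtColor(color, cv2.COLOR_BGR2GRAY), 50, 150)

# The segmentation boundary is the outline of the largest masked region.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boundary = max(contours, key=cv2.contourArea) if contours else None
```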
Abstract:
An example system includes a camera to identify a projection area based on at least one criterion, and a projector unit attachable to the camera to project an image onto the projection area. A computing unit provides the image to the projector unit and directs the projector unit to project onto the projection area; the computing unit also receives input through the camera and updates the image based on that input.
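Read as code, this is a sense-project-update loop. The sketch below assumes a specific criterion (largest sufficiently bright region in the camera view) and uses an OpenCV window as a stand-in for the projector; neither detail comes from the abstract:

```python
import cv2
import numpy as np

def find_projection_area(frame, min_area=10_000):
    """Assumed criterion: largest sufficiently bright region in view."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area]
    return max(boxes, key=lambda b: b[2] * b[3]) if boxes else None

cam = cv2.VideoCapture(0)
image = np.zeros((480, 640, 3), np.uint8)   # the image the computing unit provides
while True:
    ok, frame = cam.read()
    if not ok:
        break
    area = find_projection_area(frame)
    if area is not None:
        x, y, w, h = area
        cv2.imshow("projector", cv2.resize(image, (w, h)))  # stand-in for projection
    # A real system would derive user input from `frame` and update `image` here.
    if cv2.waitKey(30) == 27:               # Esc exits the loop
        break
cam.release()
```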
Abstract:
A system includes a sensor to capture multiple images of a portion of a first object illuminated by coherent illumination, together with a time of capture for each of the images, and a processor to compare two of the images to identify one or more touch points. Each touch point has a difference in value between the two images that is greater than a threshold. Upon determining that a spatial shape formed by the identified touch points corresponds to a pointing end of a pointing object, the system provides at least one of: i) a touch location of the pointing end relative to the first object, where the touch location is based on the spatial shape formed by the identified touch points, or ii) the time of capture of the second of the two images that produced the spatial shape.
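The comparison step lends itself to a short sketch. The threshold value, the connected-component centroid used as the touch location, and the aspect-ratio test standing in for the "pointing end" shape check are all assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_touch(img_a, img_b, threshold=30):
    """Return the (x, y) centroid of a fingertip-like blob of changed pixels,
    or None. Touching the surface decorrelates the coherent (speckle)
    pattern locally, so strongly changed pixels mark candidate touch points."""
    # Signed arithmetic avoids uint8 wraparound in the difference.
    diff = np.abs(img_b.astype(np.int16) - img_a.astype(np.int16))
    labels, n = ndimage.label(diff > threshold)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h, w = ys.ptp() + 1, xs.ptp() + 1
        # Assumed stand-in for the spatial-shape test: compact and roughly round.
        if len(xs) > 20 and 0.5 <= h / w <= 2.0:
            return float(xs.mean()), float(ys.mean())
    return None
```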
Abstract:
An example system to accurately position an instrument includes an instrument with an interest point, an optical sensing system having a camera and a sensor coupled to a surface of the instrument, a motion detector, and a processor. The optical sensing system collects a set of optically-sensed positional data points, and the motion detector collects a set of motion-sensed positional data points. The processor applies a correction function to the set of optically-sensed positional data points and the set of motion-sensed positional data points to provide a set of corrected positional data points.
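The abstract does not name the correction function, so the sketch below substitutes a simple complementary blend in which slow-but-accurate optical samples cancel the drift of fast-but-drifting motion samples; the blend factor and the assumption of time-aligned, equal-length series are both illustrative:

```python
import numpy as np

def correct(optical_pts, motion_pts, alpha=0.9):
    """Blend two time-aligned series of positional samples (shape: [T, 3]).
    A low-pass estimate of the motion sensor's drift relative to the optical
    sensor is subtracted from each motion sample."""
    optical = np.asarray(optical_pts, dtype=float)
    motion = np.asarray(motion_pts, dtype=float)
    corrected = np.empty_like(motion)
    bias = np.zeros(motion.shape[1])
    for k in range(len(motion)):
        bias = alpha * bias + (1 - alpha) * (motion[k] - optical[k])  # drift estimate
        corrected[k] = motion[k] - bias                               # drift removed
    return corrected
```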
Abstract:
A system includes an image capturing device, a user input device, and a processor coupled to both. The processor executes instructions to capture a data image with the image capturing device, where the data image includes a signal from the user input device. The instructions further cause the processor to deactivate the signal from the user input device and, after deactivating it, capture an ambient image. The processor then subtracts the ambient image from the data image and determines a position of the user input device in three-dimensional space from the result of the subtraction.
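The subtraction step is easy to sketch; the mapping into three-dimensional space needs calibration data the abstract does not give, so this sketch stops at the pixel location of the device's signal (grayscale frames are assumed):

```python
import numpy as np

def locate_signal(data_img, ambient_img):
    """Subtract the ambient frame (signal off) from the data frame (signal on)
    and return the pixel coordinates of the residual peak, i.e. the user
    input device's emitter. Signed arithmetic avoids uint8 underflow."""
    diff = data_img.astype(np.int16) - ambient_img.astype(np.int16)
    diff = np.clip(diff, 0, None)          # keep only signal brighter than ambient
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    return x, y
```

With two calibrated cameras, the same peak found in both views could be triangulated into the three-dimensional position the abstract describes.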
Abstract:
A projection system includes a video projector to project an image having an image region onto a surface with an associated border area, and a processing system that includes a graphics processing unit. The graphics processing unit evaluates the border area and the projected image region, and transforms the projected image region into an aligned projected image region that coincides with the border area.
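One common way to realize such a transform is a planar homography mapping the four corners of the projected image region onto the four corners of the border area. The corner coordinates below are made-up examples, and the homography itself is an assumption about how the alignment is computed:

```python
import cv2
import numpy as np

# Where the projected image region currently lands (e.g., as observed by a
# camera) versus the surface's border area, as four corner correspondences.
projected = np.float32([[12, 8], [628, 15], [634, 470], [5, 462]])
border = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])

H = cv2.getPerspectiveTransform(projected, border)   # 3x3 alignment homography
frame = cv2.imread("projector_frame.png")            # assumed input image
aligned = cv2.warpPerspective(frame, H, (640, 480))  # now coincides with the border
```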
Abstract:
An example method is provided for presenting a digital image of an object. The method comprises aligning a plurality of sensors with a projector unit; receiving, from a sensor of the plurality of sensors, an image of an object on a surface; detecting features of the object, including its location and dimensions; and presenting the image on the surface based on those features, such that the dimensions of the image match the dimensions of the object and the location of the image overlaps the location of the object on the surface.
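A sketch of the presentation step, assuming the detected features arrive as a bounding box already expressed in projector coordinates; the box values and the projector resolution are made up:

```python
import cv2
import numpy as np

img = cv2.imread("object_image.png")    # image of the object from the sensor
x, y, w, h = 120, 80, 300, 200          # detected location and dimensions (example)

# Scale the image to the object's dimensions and place it at the object's
# location in the projector framebuffer, so the projection overlaps the object.
frame = np.zeros((768, 1024, 3), dtype=np.uint8)    # assumed projector resolution
frame[y:y + h, x:x + w] = cv2.resize(img, (w, h))
```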
Abstract:
Examples relate to capturing and processing three dimensional (3D) scan data. In some examples, 3D scan data of a real-world object is obtained while the real-world object is repositioned in a number of orientations, where the 3D scan data includes 3D scan passes that are each associated with one of the orientations. A projector is used to project a visual cue related to a position of the real-world object as the real-world object is repositioned at each of the orientations. The 3D scan passes are stitched to generate a 3D model of the real-world object, where a real-time representation of the 3D model is shown on a display as each of the 3D scan passes is incorporated into the 3D model.
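The abstract does not specify the stitching algorithm; a common stand-in is iterative closest point (ICP), sketched here with nearest-neighbor correspondences and a Kabsch rigid fit. Real scanners add outlier rejection and loop closure:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One rigid-alignment step: match each source point to its nearest
    model point, then solve for the best rotation R and translation t."""
    idx = cKDTree(dst).query(src)[1]
    d = dst[idx]
    sc, dc = src.mean(0), d.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (d - dc))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, dc - R @ sc

def stitch(model, scan_pass, iters=20):
    """Align one 3D scan pass (N x 3 points) to the growing model, then merge."""
    for _ in range(iters):
        R, t = icp_step(scan_pass, model)
        scan_pass = scan_pass @ R.T + t
    return np.vstack([model, scan_pass])
```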
Abstract:
Examples disclosed herein describe, among other things, a computing system. The computing system may in some examples include a touch-sensitive surface, a display that is not parallel to the touch-sensitive surface, and at least one camera to capture an image representing an object disposed between the camera and the touch-sensitive surface. The computing system may also include a detection engine to determine, based at least on the image, display coordinates corresponding to the object's projection onto the touch-sensitive surface. In some examples, the detection engine is also to display an object indicator at the determined display coordinates on the display.
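Because the display is not parallel to the touch-sensitive surface, mapping a position on the surface to display coordinates is a projective transform. The sketch below assumes the transform is a calibrated homography; the matrix values are placeholders:

```python
import numpy as np

# Homography from touch-surface coordinates (e.g., mm) to display pixels.
# A real system would obtain H from camera/surface/display calibration.
H = np.array([[3.2, 0.0, 10.0],
              [0.0, 3.2,  6.0],
              [0.0, 0.0,  1.0]])

def to_display(surface_xy):
    """Project an object's surface-plane position into display coordinates."""
    x, y = surface_xy
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# The object indicator would then be drawn at, e.g.:
print(to_display((100.0, 50.0)))
```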