Abstract:
Examples disclosed herein relate to determining a segmentation boundary based on images representing an object. Examples include capturing an IR image based on IR light reflected by an object disposed between an IR camera and an IR-absorbing surface, capturing a color image representing the object disposed between a color camera and the IR-absorbing surface, and determining a segmentation boundary for the object based on the images.
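A minimal sketch of how such a boundary might be derived, assuming registered, OpenCV-style single-channel IR and BGR color images; the threshold values and the edge-based fusion are illustrative assumptions, not details taken from the abstract.

import cv2
import numpy as np

def segmentation_boundary(ir_image, color_image, ir_thresh=40):
    # The IR-absorbing surface reflects little IR light, so the object
    # appears bright against a dark background in the IR image.
    _, ir_mask = cv2.threshold(ir_image, ir_thresh, 255, cv2.THRESH_BINARY)

    # Use color edges to refine the mask where IR contrast is weak.
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    mask = cv2.bitwise_or(ir_mask, edges)

    # Take the segmentation boundary as the largest outer contour
    # of the fused mask.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None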
Abstract:
Examples relate to three-dimensional (3D) scan tuning. In some examples, preliminary scan data is obtained while a real-world object is continuously rotated in view of a 3D scanning device, where the 3D scanning device performs a prescan to collect the preliminary scan data. The preliminary scan data is then used to determine physical characteristics of the real-world object, and a camera operating mode of the 3D scanning device is modified based on the physical characteristics. At this stage, 3D scan data for generating a 3D model of the real-world object is obtained, where the 3D scanning device scans the real-world object according to the camera operating mode.
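One way to picture the second step is a mapping from prescan characteristics to a camera operating mode. In this sketch the characteristics (size, reflectivity), the mode fields, and the thresholds are all hypothetical placeholders, not values from the abstract.

from dataclasses import dataclass

@dataclass
class Characteristics:
    max_dimension_mm: float   # rough object size from the prescan
    mean_reflectivity: float  # 0.0 (dark) .. 1.0 (shiny)

def choose_camera_mode(c: Characteristics) -> dict:
    # Start from a default operating mode and adjust it to the object.
    mode = {"exposure_us": 8000, "gain": 1.0, "resolution": "high"}
    if c.mean_reflectivity > 0.8:      # shiny objects: shorten exposure
        mode["exposure_us"] = 2000
    if c.max_dimension_mm > 300:       # large objects: favor field of view
        mode["resolution"] = "medium"
    return mode

# e.g. a large, shiny object gets a short exposure at medium resolution:
print(choose_camera_mode(Characteristics(400.0, 0.9)))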
Abstract:
A system includes a sensor to capture multiple images of a portion of a first object illuminated by coherent illumination, and to record a time of capture of each of the images; and a processor to compare two images of the multiple images to identify one or more touch points. Each touch point has a difference in value between the two images that is greater than a threshold. Upon determining a spatial shape formed by the identified touch points that corresponds to a pointing end of a pointing object, the system provides at least one of: i) a touch location of the pointing end relative to the first object, where the touch location is based on the spatial shape formed by the identified touch points, or ii) the time of capture of a second image of the two images that produced the spatial shape.
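A hedged sketch of the comparison step: pixels whose value changes by more than a threshold between the two images become candidate touch points, and their spatial extent is checked against the expected size of a fingertip-like pointing end. The thresholds and the span-based shape test are illustrative assumptions.

import numpy as np

def detect_touch(img_a, img_b, t_b, diff_thresh=25,
                 min_points=20, max_span_px=40):
    # Touch points: pixels that changed by more than the threshold.
    diff = np.abs(img_b.astype(int) - img_a.astype(int))
    ys, xs = np.nonzero(diff > diff_thresh)
    if len(xs) < min_points:
        return None                   # too few changed points for a touch
    span = max(np.ptp(xs), np.ptp(ys))
    if span > max_span_px:
        return None                   # too spread out for a pointing end
    # Report the touch location (centroid of the touch points) and the
    # capture time of the second image that produced the shape.
    return (xs.mean(), ys.mean()), t_b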
Abstract:
An example method is described in which files are received by a computer system. A first user interface is displayed on a first display of the computer system. The first user interface includes multiple user interface elements representing the files. In response to detecting a first user gesture selecting a selected user interface element from the multiple user interface elements via the first display, a second user interface is generated and displayed on a second display of the computer system. The second user interface includes a detailed representation of a file represented by the selected user interface element. In response to detecting a second user gesture interacting with the selected user interface element via the first display, the first user interface on the first display is updated to display the interaction with the selected user interface element.
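A schematic sketch of the two-display dispatch; the display and gesture objects here are hypothetical stand-ins, not a real UI toolkit.

class DualDisplayBrowser:
    def __init__(self, first_display, second_display, files):
        self.first = first_display      # grid of file elements
        self.second = second_display    # detailed representation
        self.files = files

    def on_gesture(self, gesture):
        if gesture.kind == "select":
            # First gesture: show a detailed view of the chosen file
            # on the second display.
            chosen = self.files[gesture.element_index]
            self.second.show_detail(chosen)
        elif gesture.kind == "interact":
            # Second gesture: reflect the interaction (e.g. a drag)
            # on the first display's element.
            self.first.update_element(gesture.element_index, gesture.delta)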
Abstract:
An example processor-implemented method for generating corners of a display area is provided. The method comprises detecting a dominant line for each side of the display area, each dominant line being used to identify corners, detecting sub-line segments on each side of the display area, determining a distance between the corners identified by the dominant lines and the sub-line segments on each side, and generating the corners of the display area based on the distance.
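A minimal sketch of the geometry: candidate corners come from intersecting the dominant lines of adjacent sides, and the distance to nearby sub-line segment endpoints decides whether a candidate corner is kept or adjusted. Lines are represented as (a, b, c) coefficients of ax + by + c = 0, and the acceptance distance is an illustrative value.

import numpy as np

def line_intersection(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-9:
        return None                     # parallel sides: no corner
    x = (b1 * c2 - b2 * c1) / d
    y = (a2 * c1 - a1 * c2) / d
    return np.array([x, y])

def generate_corner(corner, subline_endpoints, max_dist=5.0):
    # Distance from the dominant-line corner to the nearest sub-line
    # segment endpoint decides whether the corner is kept or snapped.
    pts = np.asarray(subline_endpoints, dtype=float)
    dists = np.linalg.norm(pts - corner, axis=1)
    nearest = pts[np.argmin(dists)]
    return corner if dists.min() <= max_dist else nearest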
Abstract:
A plurality of homography operators define respective mappings between pairs of coordinate spaces, wherein the coordinate spaces include a coordinate space of a first visual sensor, a virtual coordinate space, and a coordinate space of a second visual sensor. Calibration between the first and second visual sensors is provided using the plurality of homography operators.
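Since homographies act on homogeneous coordinates, mapping from the first sensor to the second through the virtual space is simply a matrix product of the two operators. A small sketch, with placeholder 3x3 matrices standing in for calibrated values:

import numpy as np

def compose(h_1v, h_v2):
    # h_1v: first sensor -> virtual space; h_v2: virtual space ->
    # second sensor. Their product maps directly between the sensors.
    return h_v2 @ h_1v

def apply_homography(h, point_xy):
    x, y, w = h @ np.array([point_xy[0], point_xy[1], 1.0])
    return np.array([x / w, y / w])   # back to inhomogeneous coordinates

# Example: map a point seen by the first sensor into the second
# sensor's coordinate space via the virtual space.
h_1v = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 5.0], [0.0, 0.0, 1.0]])
h_v2 = np.array([[0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 1.0]])
print(apply_homography(compose(h_1v, h_v2), (100.0, 50.0)))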
Abstract:
Examples disclosed herein relate to projecting onto a touch-sensitive surface a projection image having projected regions corresponding to target and non-target touch regions. Examples include a computing system having a touch-sensitive surface, and a camera to capture an image representing an object disposed between the camera and the touch-sensitive surface. The computing system may also include a detection engine to identify, based at least on the object represented in the image, at least one touch region of the touch-sensitive surface, and to generate a projection image including a projected region corresponding to the touch region, and a projector to project the projection image onto the touch-sensitive surface.
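A hedged sketch of the projection-image step: given a mask of where the object sits over the touch-sensitive surface, build an image that renders the identified touch region differently from the rest. The colors and the dilation-based region growing are illustrative choices, not details from the abstract.

import cv2
import numpy as np

def build_projection_image(object_mask, width, height):
    # Grow the object's footprint slightly to form the target touch region.
    kernel = np.ones((15, 15), np.uint8)
    target = cv2.dilate(object_mask, kernel)

    image = np.zeros((height, width, 3), np.uint8)
    image[:] = (40, 40, 40)            # non-target region: dim backdrop
    image[target > 0] = (0, 200, 0)    # target touch region: highlighted
    return image                        # handed to the projector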
Abstract:
Examples relate to improving unintended touch rejection. The examples disclosed herein enable recognizing a touch on a touch-sensitive surface, capturing a set of data related to the touch, wherein the set of data comprises a set of spatial features relating to a shape of the touch over a set of time intervals, and determining whether the recognized touch was intended based on a comparison of a first shape of the touch at a first time interval of the set of time intervals and a second shape of the touch at a second time interval of the set of time intervals.
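An illustrative sketch of the comparison: summarize the touch shape at two time intervals (here just the area and aspect ratio of the contact patch) and treat a large, fast change as evidence of an unintended touch, such as a resting palm spreading out. The features and the change threshold are assumptions for illustration.

import numpy as np

def shape_features(touch_mask):
    ys, xs = np.nonzero(touch_mask)
    if len(xs) == 0:
        return np.array([0.0, 1.0])    # no contact in this interval
    area = float(len(xs))
    aspect = (np.ptp(xs) + 1) / (np.ptp(ys) + 1)
    return np.array([area, aspect])

def touch_intended(mask_t1, mask_t2, max_change=0.5):
    f1, f2 = shape_features(mask_t1), shape_features(mask_t2)
    # Relative change of each spatial feature between the two intervals.
    change = np.abs(f2 - f1) / np.maximum(f1, 1e-6)
    return bool(np.all(change < max_change))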
Abstract:
Examples of a system, method, and machine-readable non-transitory storage medium including instructions executable by a processor are disclosed herein. An example of the machine-readable non-transitory storage medium includes instructions executable by a processor to allow selection of a capture mode of a sensor module to record still images and/or a video mode of the sensor module to record video, retrieve default calibrated sensor module settings from a persistent memory, allow creation of at least one user-defined sensor module setting that differs from one of the default calibrated sensor module settings, and utilize the at least one user-defined sensor module setting along with the remaining default calibrated sensor module settings with the selected sensor module mode.
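A minimal sketch of the settings flow: calibrated defaults loaded from persistent storage are overlaid with any user-defined settings, and the merged result is used with the selected mode. The setting names here are hypothetical.

DEFAULTS = {"exposure_us": 10000, "white_balance": "auto", "iso": 100}

def effective_settings(user_overrides):
    # User-defined settings win; untouched keys keep calibrated defaults.
    settings = dict(DEFAULTS)
    settings.update(user_overrides)
    return settings

# e.g. the user overrides only the ISO; the other defaults are retained:
print(effective_settings({"iso": 400}))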