Abstract:
According to one example for segmenting image data, image data comprising color pixel data, IR data, and depth data is received from a sensor. The image data is segmented into a first list of objects based on at least one computed feature of the image data. At least one object type is determined for at least one object in the first list of objects. The segmentation of the first list of objects is refined into a second list of objects based on the at least one object type. In an example, the second list of objects is output.
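The two-stage flow described above (initial segmentation into a first list of objects, then type-aware refinement into a second list) can be sketched as follows. This is a minimal toy illustration, not the patented method: the connected-component segmentation over a depth threshold, the size-based type rule, and all names (`Obj`, `compute_objects`, `classify`, `refine`) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    pixels: list          # (row, col) pixels belonging to the object
    obj_type: str = "unknown"

def compute_objects(depth, threshold=0.5):
    """Initial segmentation: group foreground pixels (depth below a
    threshold, a stand-in for 'at least one computed feature') into
    4-connected components -- the 'first list of objects'."""
    rows, cols = len(depth), len(depth[0])
    seen, objects = set(), []
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] < threshold and (r, c) not in seen:
                stack, comp = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and depth[ny][nx] < threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                objects.append(Obj(comp))
    return objects

def classify(obj):
    # Toy object-type rule: tiny blobs are 'noise', larger ones 'document'.
    return "noise" if len(obj.pixels) < 3 else "document"

def refine(objects):
    """Refinement based on object type: label each object, then drop
    noise -- yielding the 'second list of objects' to output."""
    for o in objects:
        o.obj_type = classify(o)
    return [o for o in objects if o.obj_type != "noise"]
```

A real implementation would fuse the color, IR, and depth channels when computing features; the sketch uses depth alone to keep the two-pass structure visible.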
Abstract:
Examples disclosed herein relate to identifying a target touch region of a touch-sensitive surface based on an image. Examples include a touch input detected at a location of a touch-sensitive surface, an image representing an object disposed between a camera that captures the image and the touch-sensitive surface, identifying a target touch region of a touch-sensitive surface based on an image, and rejecting the detected touch input when the location of the detected touch input is not within any of the at least one identified target touch region of the touch-sensitive surface.
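The rejection rule above (accept a touch only if its location falls within at least one identified target touch region) reduces to a containment test. The sketch below is an assumption-laden illustration: it represents each target region as an axis-aligned rectangle `(x0, y0, x1, y1)`, which the source does not specify.

```python
def accept_touch(location, target_regions):
    """Return True if the detected touch location lies inside at least
    one identified target touch region; otherwise the touch is rejected.
    Regions are assumed to be axis-aligned rectangles (x0, y0, x1, y1)."""
    x, y = location
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0, x1, y1) in target_regions)
```

In the described system, the regions themselves would be derived from the camera image of the object resting between the camera and the surface.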
Abstract:
According to an example, a method for object segmentation may include receiving a digital image, performing initial segmentation on the digital image to generate a segmented digital image, and receiving refinement instructions to refine the initial segmentation. The method may further include inferring an intention of a user to correct a foreground area or a background area of the initial segmentation based on the received refinement instructions, learning a behavior of the user to further infer the intention of the user to correct the foreground area or the background area, and refining, by a processor, the initial segmentation based on the inferred intention.
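One simple way to infer the correction intent described above is a majority vote: if most of a refinement stroke lands on pixels currently labeled foreground, the user likely wants them moved to background, and vice versa. The sketch below covers only this intent-inference rule (not the learned user behavior), and all names and the voting heuristic are illustrative assumptions.

```python
def infer_intent(stroke_pixels, foreground_mask):
    """Infer whether a refinement stroke is meant to correct the
    foreground or the background of the initial segmentation."""
    fg_hits = sum(1 for p in stroke_pixels if p in foreground_mask)
    # Majority of the stroke on foreground -> user is carving it away.
    return "to_background" if fg_hits > len(stroke_pixels) / 2 else "to_foreground"

def apply_refinement(stroke_pixels, foreground_mask):
    """Refine the segmentation according to the inferred intention."""
    intent = infer_intent(stroke_pixels, foreground_mask)
    if intent == "to_background":
        foreground_mask -= set(stroke_pixels)
    else:
        foreground_mask |= set(stroke_pixels)
    return intent, foreground_mask
```

The behavior-learning step in the abstract would, for example, bias this vote using the user's past corrections rather than a fixed 50% threshold.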
Abstract:
Examples disclosed herein relate to aligning content displayed from a projector onto a touch sensitive mat. Examples include detecting a border of the mat, wherein the mat includes a surface area of a first spectral reflectance characteristic onto which the projector is to project the content, and the border of a second spectral reflectance characteristic, different from the first spectral reflectance characteristic, surrounding a perimeter of the surface area. As an example, detecting the border of the mat generally includes differentiating the second spectral reflectance characteristic of the border from the first spectral reflectance characteristic of the surface area. Examples include detecting a border of the content displayed onto the mat, and adjusting projector settings so that the border of the content displayed onto the mat fits within the detected border of the mat.
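The border detection and fitting steps above can be sketched in one dimension: find the pixels whose reflectance matches the border rather than the surface, then scale the projected content to fit inside the surface span. This is a minimal sketch under assumed names and a single-row intensity model, not the disclosed implementation.

```python
def detect_border_span(intensity_row, border_level, tol=0.1):
    """Differentiate border pixels from surface pixels along one image
    row by their spectral reflectance (intensity); return the first and
    last border pixel indices, or None if no border is found."""
    hits = [i for i, v in enumerate(intensity_row)
            if abs(v - border_level) <= tol]
    return (hits[0], hits[-1]) if hits else None

def fit_content(content_width, surface_left, surface_right):
    """Compute a scale factor (a stand-in for 'adjusting projector
    settings') so the content width fits within the surface span."""
    surface_width = surface_right - surface_left
    return min(1.0, surface_width / content_width)
```

A full system would detect the border in both axes and adjust offset as well as scale; the sketch keeps only the reflectance-differentiation idea.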
Abstract:
Examples disclosed herein relate to detecting misalignment of a touch sensitive mat. Examples include detecting corners of the touch sensitive mat, determining a set of reference corners, performing a comparison of the detected corners of the mat with the set of reference corners, and determining a level of misalignment based on the comparison.
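The corner comparison above can be reduced to a per-corner distance check. The sketch below is an assumed formulation, not the disclosed one: it pairs corners by index and reports the largest Euclidean offset as the level of misalignment.

```python
import math

def misalignment_level(detected_corners, reference_corners):
    """Compare each detected mat corner with its reference corner and
    report the largest Euclidean offset as the level of misalignment."""
    return max(math.dist(d, r)
               for d, r in zip(detected_corners, reference_corners))
```

A threshold on this value could then decide whether to prompt the user to realign the mat or to recalibrate automatically.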
Abstract:
A method and system for recognizing a three-dimensional object on a base are disclosed. A three-dimensional image of the object is received as a three-dimensional point cloud having depth data and color data. The base is removed from the three-dimensional point cloud to generate a two-dimensional image representing the object. The two-dimensional image is segmented to determine object boundaries of a detected object. Color data from the object is applied to refine the segmentation and match the detected object to reference object data.
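The base-removal step above amounts to filtering out points lying on the base plane and projecting what remains to 2-D. The sketch below assumes points are `(x, y, z, color)` tuples and that the base is a horizontal plane at a known height; both are illustrative assumptions.

```python
def remove_base(points, base_height, tol=0.01):
    """Drop point-cloud points lying on the base plane (z within tol of
    base_height), then project the remaining object points to a 2-D
    (x, y, color) representation of the object."""
    return [(x, y, color) for (x, y, z, color) in points
            if z > base_height + tol]
```

The resulting 2-D points would then feed the segmentation and color-based matching stages described in the abstract.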
Abstract:
A method includes receiving data representing an image captured of an object disposed on a surface in the presence of illumination by a flash light. The method includes processing the data to identify an object type associated with the object and further processing the data based at least in part on the identified object type.
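The type-dependent processing described above is essentially a classify-then-dispatch pattern. The sketch below is a toy illustration under invented assumptions: pixels are scalar brightness values, the type rule is that flat documents reflect the flash strongly, and the per-type processing is a simple contrast boost.

```python
def identify_type(mean_brightness):
    """Toy object-type rule: documents lying flat on the surface reflect
    the flash strongly, so a bright image suggests a document."""
    return "document" if mean_brightness > 0.6 else "object3d"

def process(pixels):
    """Identify the object type from the flash-lit image data, then
    dispatch type-specific further processing."""
    mean = sum(pixels) / len(pixels)
    kind = identify_type(mean)
    if kind == "document":
        # Example document path: boost contrast for text, clamped to 1.0.
        return kind, [min(1.0, p * 1.2) for p in pixels]
    # Example 3-D object path: leave the captured data untouched.
    return kind, pixels
```

The real method would of course use richer image features than mean brightness; the sketch only shows the identify-then-branch structure.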