Abstract:
Examples disclosed herein relate to determining a segmentation boundary based on images representing an object. Examples include capturing an IR image based on IR light reflected by an object disposed between an IR camera and an IR-absorbing surface, capturing a color image representing the object disposed between a color camera and the IR-absorbing surface, and determining a segmentation boundary for the object based on the IR image and the color image.
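A minimal sketch of how two such images might be combined into a foreground mask. The thresholds, the use of a median surface color, and the union of the two cues are assumptions for illustration; the abstract does not specify the segmentation method.

```python
import numpy as np

def foreground_mask(ir_image, color_image, ir_threshold=40, color_threshold=30):
    """Estimate a foreground mask for an object above an IR-absorbing surface.

    The IR-absorbing surface appears dark in the IR image while the object
    reflects IR light and appears bright; the color image provides a second
    cue. Thresholds are illustrative, not from the source.
    """
    ir_mask = ir_image > ir_threshold  # object reflects IR light
    # deviation of each pixel from the (assumed) dominant surface color
    surface_color = np.median(color_image.reshape(-1, 3), axis=0)
    color_dev = np.abs(color_image.astype(int) - surface_color).sum(axis=2)
    color_mask = color_dev > color_threshold
    # a pixel is foreground if either cue flags it
    return ir_mask | color_mask
```

A segmentation boundary could then be extracted as the contour of this mask.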
Abstract:
A method performed by a display-camera system includes displaying first content and second content that occludes a portion of the first content on a display during a first time period, displaying the second content and third content on the display during a second time period that is non-overlapping with the first time period, the third content to minimize crosstalk from the first content, and capturing fourth content with a camera through the display during the second time period.
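The time-multiplexing described above can be sketched as a simple two-period schedule. The `Frame` structure and content names are hypothetical; the abstract only specifies which content is shown in each period and when the camera captures.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Frame:
    displayed: Tuple[str, ...]     # content shown on the display
    captured: Optional[str] = None # content captured through the display

def schedule(first, second, third, capture):
    """Two non-overlapping periods: period 1 shows first and second
    (second occludes part of first); period 2 shows second and third
    while the camera captures through the display, with third chosen
    to minimize crosstalk from first."""
    period1 = Frame(displayed=(first, second))
    period2 = Frame(displayed=(second, third), captured=capture)
    return [period1, period2]
```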
Abstract:
Examples disclosed herein relate to identifying a target touch region of a touch-sensitive surface based on an image. Examples include detecting a touch input at a location of a touch-sensitive surface, capturing an image representing an object disposed between the camera that captures the image and the touch-sensitive surface, identifying at least one target touch region of the touch-sensitive surface based on the image, and rejecting the detected touch input when its location is not within any identified target touch region of the touch-sensitive surface.
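The rejection test above reduces to a point-in-region check. Representing regions as axis-aligned rectangles is an assumption for illustration; the abstract does not specify a region shape.

```python
def reject_touch(touch_xy, target_regions):
    """Return True if the touch should be rejected, i.e. its location
    falls outside every identified target touch region.

    target_regions: iterable of (x0, y0, x1, y1) rectangles — an assumed
    representation, not from the source.
    """
    x, y = touch_xy
    inside_any = any(x0 <= x <= x1 and y0 <= y <= y1
                     for (x0, y0, x1, y1) in target_regions)
    return not inside_any
```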
Abstract:
A method performed by a processing system includes determining a device identifier corresponding to a device from a series of captured images that include a light signal emitted by the device.
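One way such a light signal could carry an identifier is simple on-off keying, one bit per captured frame. The encoding, region of interest, and threshold are all assumptions for illustration; the abstract does not describe how the identifier is modulated.

```python
def decode_device_id(frames, roi, on_threshold=128):
    """Decode a device identifier from a series of captured images in
    which the device blinks a light: one bit per frame, light on = 1.

    frames: sequence of 2D grayscale images (nested lists or arrays)
    roi: (x, y) pixel location of the device's light in each frame
    """
    x, y = roi
    bits = 0
    for frame in frames:
        bit = 1 if frame[y][x] >= on_threshold else 0
        bits = (bits << 1) | bit  # most significant bit first
    return bits
```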
Abstract:
According to an example, 3D modeling motion parameters may be simultaneously determined for video frames according to different first and second motion estimation techniques. In response to detecting a failure of the first motion estimation technique, the 3D modeling motion parameters determined according to the second motion estimation technique may be used to re-determine the 3D modeling motion parameters according to the first motion estimation technique.
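The fallback logic above can be sketched as follows. The callable signatures `(frame, prev_params) -> (params, ok)` are hypothetical, and the two techniques run sequentially here rather than simultaneously, purely to keep the sketch short.

```python
def track_motion(frames, primary, secondary):
    """Estimate motion parameters per frame with a primary technique;
    when it fails, seed it from the secondary technique's parameters
    and re-determine.

    primary, secondary: callables (frame, prev_params) -> (params, ok),
    an assumed interface for illustration.
    """
    params = None
    history = []
    for frame in frames:
        new_params, ok = primary(frame, params)
        if not ok:
            # fall back: use the secondary technique's estimate
            # to re-run the primary technique
            seed, _ = secondary(frame, params)
            new_params, ok = primary(frame, seed)
        params = new_params
        history.append(params)
    return history
```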