Abstract:
Disclosed are embodiments of systems and methods to generate background and foreground images for a document, which enables high-quality, high-ratio document compression. In embodiments, high-accuracy layer processing enables text enhancement, paper-color removal, and many other advanced image analysis and processing operations. Embodiments of the systems support several operation modes, and their many parameters, such as layer compression ratios, image segmentation, and modularized image processing, may be adjusted to generate optimal compressed files for different purposes.
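The background/foreground layer split described above can be sketched minimally. The function below is illustrative only, not the claimed embodiment: the name `split_layers`, the global-median background estimate, and the fixed threshold are all assumptions made for this sketch.

```python
import numpy as np

def split_layers(img, thresh=40):
    """Split a grayscale document image into background and foreground layers.

    Hypothetical sketch: the background (paper color) is estimated as the
    global median intensity; pixels deviating from it by more than `thresh`
    are treated as foreground (ink/text).
    """
    bg_value = np.median(img)                      # estimated paper color
    fg_mask = np.abs(img.astype(int) - bg_value) > thresh
    background = np.full_like(img, int(bg_value))  # flat paper-color layer
    foreground = np.where(fg_mask, img, 0)         # ink pixels only
    return background, foreground, fg_mask
```

Once separated, the two layers can be compressed at different ratios (e.g. the near-uniform background aggressively, the foreground more conservatively), which is what makes the high compression ratios mentioned above attainable.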
Abstract:
Disclosed are embodiments of systems and methods to generate a composite image from a captured image, such as an image of a whiteboard, chalkboard, paper, card, poster, sign, or the like. Systems and methods are disclosed for generating a foreground image layer and a mask layer, which enables high-quality, high-ratio document compression. In embodiments, a foreground image layer and mask layer may be generated by identifying non-background pixels in the captured image.
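A minimal sketch of identifying non-background pixels and forming a clean composite follows. The function `clean_composite` and its fixed darkness threshold are hypothetical illustrations, not the disclosed method; real captures would need an adaptive background estimate.

```python
import numpy as np

def clean_composite(img, bg_thresh=180, clean=255):
    """Form a composite from a captured whiteboard-style grayscale image.

    Illustrative sketch: pixels darker than `bg_thresh` are taken as
    non-background (ink); everything else is replaced by a uniform
    clean background.
    """
    fg_mask = img < bg_thresh                 # non-background (ink) pixels
    foreground = np.where(fg_mask, img, 0)    # foreground image layer
    composite = np.where(fg_mask, img, clean) # ink over a clean page
    return composite, foreground, fg_mask.astype(np.uint8)
```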
Abstract:
Systems, apparatuses, and methods are described for performing fast segmentation of an image. In embodiments, an image may be segmented by generating a background mask, generating an edge mask, dilating the edge mask, and refining the dilated edge mask by applying refinement operations that remove edge pixels from it. In embodiments, the refined edge mask may be used to generate a foreground mask.
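The four steps above (background mask, edge mask, dilation, refinement) can be sketched as follows. This is an assumed illustration: a gradient-magnitude edge detector stands in for whatever edge operator an embodiment uses, the dilation uses a simple 4-neighbour structuring element, and the refinement shown removes only edge pixels that fall on the background mask.

```python
import numpy as np

def dilate(mask, iterations=1):
    # binary dilation with a cross-shaped (4-neighbour) structuring
    # element, implemented as shifted ORs
    out = mask.copy()
    for _ in range(iterations):
        m = out.copy()
        out[1:, :] |= m[:-1, :]
        out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]
        out[:, :-1] |= m[:, 1:]
    return out

def segment(img, bg_thresh=200, edge_thresh=30):
    """Sketch of the fast-segmentation pipeline on a grayscale image."""
    bg_mask = img >= bg_thresh                  # bright pixels -> background mask
    gy, gx = np.gradient(img.astype(float))
    edge_mask = np.hypot(gx, gy) > edge_thresh  # gradient-magnitude edge mask
    edge_mask = dilate(edge_mask)               # widen thin edges
    edge_mask &= ~bg_mask                       # refinement: drop edges on background
    fg_mask = edge_mask                         # foreground from refined edges
    return bg_mask, fg_mask
```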
Abstract:
Disclosed are embodiments of systems and methods to stitch two or more images together into a composite image. By finding matching point pairs for a pair of images, a homography transform may be obtained for the pair of images. The homography transform may be used to generate a composite image of the image pair. In an embodiment, the process of identifying a homography transform may be iterated. In an embodiment, when forming the composite image, the transformed foreground regions may be selected such that there is no intersection of foreground pixel regions. In an embodiment, foreground pixel regions on the border of an image may be removed. The resulting composite image is a larger image generated from the regions selected from the input images. In embodiments, the process may be repeated for sets of images with more than two images.
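How matching point pairs yield a homography can be sketched with the direct linear transform (DLT) on four exact correspondences. This is a minimal illustration under stated assumptions: a practical pipeline like the one described would typically use many pairs with robust estimation (e.g. RANSAC), and the function names here are invented for the sketch.

```python
import numpy as np

def homography_from_pairs(src, dst):
    """Estimate a 3x3 homography H mapping src points to dst points
    (each a list of >= 4 (x, y) pairs) via the direct linear transform:
    stack two rows of A per correspondence and take the null vector
    of A as the entries of H."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null vector = smallest singular vector
    return H / H[2, 2]              # normalize so H[2,2] == 1

def apply_h(H, pt):
    # apply H to a 2D point in homogeneous coordinates
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With the transform in hand, one image can be warped into the other's frame and the non-intersecting foreground regions selected to form the larger composite, as described above.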
Abstract:
Disclosed are embodiments of systems and methods for embedding and/or extracting data from images. In embodiments, an image may be segmented into regions, and characters or other image groups within a segmented region may be determined to be embedding sites. A data vector may be embedded into a set of corresponding ordered embedding sites by representing each data element as different intensity values assigned to the pixels within one portion of an embedding site relative to the pixels in another portion of the embedding site. In embodiments, embedded data may be extracted from an image by extracting and decoding a set of bit values from a set of identified and ordered embedding sites.
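The relative-intensity scheme described above can be sketched for a single embedding site. The two-halves split, the intensity offset `delta`, and the function names are assumptions for this illustration; an embodiment could partition a site and order the sites in many other ways.

```python
import numpy as np

def embed_bit(site, bit, delta=8):
    """Embed one bit in an embedding site (a 2D uint8 patch) by lowering
    the intensity of one half of the site relative to the other."""
    out = site.astype(int)
    h = site.shape[0] // 2
    if bit:
        out[:h] -= delta   # top half darker than bottom -> bit 1
    else:
        out[h:] -= delta   # bottom half darker -> bit 0
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_bit(site):
    # decode by comparing the mean intensities of the two halves
    h = site.shape[0] // 2
    return int(site[:h].mean() < site[h:].mean())
```

Repeating this over an ordered set of sites embeds a data vector; extraction walks the same ordered sites and decodes one bit from each.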