Abstract:
A computer-implemented method comprises receiving a data stream that includes a series of code words encoding a respective series of pixel data according to a first entropy coding lookup table, and processing the data stream to determine whether a first code word and a consecutive second code word together match a code word entry in a second entropy coding lookup table. If there is a match, the method includes decoding the first code word and the second code word using the second entropy coding lookup table. If there is not a match, the method includes decoding the first code word using the first entropy coding lookup table.
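
A minimal Python sketch of this two-table decoding loop, assuming dict-based stand-ins for the lookup tables (table1 keyed by single code words, table2 keyed by consecutive code-word pairs); the actual table formats and code-word encodings are not specified by the abstract:

    def decode_stream(code_words, table1, table2):
        """Decode code words, using table2 for matching pairs, else table1."""
        pixels = []
        i = 0
        while i < len(code_words):
            pair = (code_words[i], code_words[i + 1]) if i + 1 < len(code_words) else None
            if pair is not None and pair in table2:
                # The first and consecutive second code word match an entry
                # in the second lookup table: decode both together.
                pixels.extend(table2[pair])
                i += 2
            else:
                # No match: decode the first code word with the first table.
                pixels.append(table1[code_words[i]])
                i += 1
        return pixels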
Abstract:
An example embodiment may involve obtaining an a×b pixel macro-cell from an input image. Pixels in the a×b pixel macro-cell may have respective pixel values and may be associated with respective tags. It may be determined whether at least e of the respective tags indicate that their associated pixels represent edges in the input image. Based on this determination, either a first encoding or a second encoding of the a×b pixel macro-cell may be selected. The first encoding may weight pixels that represent edges in the input image more heavily than pixels that do not, and the second encoding might not consider whether pixels represent edges. The selected encoding may be performed and written to a computer-readable output medium.
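
As a rough illustration, the selection between the two encodings might look like the following Python sketch; the two placeholder encodings (an edge-only mean versus a plain mean) are assumptions for illustration, not the patented schemes:

    import numpy as np

    def encode_macro_cell(pixels, tags, e):
        """Encode an a-by-b macro-cell, choosing the encoding by edge-tag count.

        pixels: a-by-b array of pixel values; tags: a-by-b boolean array where
        True marks a pixel tagged as an edge; e: the edge-count threshold.
        """
        if np.count_nonzero(tags) >= e:
            # First encoding: weight edge pixels more heavily (here, represent
            # the cell by the mean of its edge pixels only).
            return float(pixels[tags].mean())
        # Second encoding: ignore edge status (plain mean of all pixels).
        return float(pixels.mean())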
Abstract:
A device such as a color printer includes a main memory, a cache memory, and a convolutional neural network configured to convert pixels from a first color space to a second color space. The convolutional neural network is organized into execution-separable layers and loaded one or more layers at a time (depending on cache size) from the main memory to the cache memory, whereby the pixels are processed through each of the layers in the cache memory, and layers that have completed processing are evicted to make room for caching the next layer(s) of the network.
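
A toy Python sketch of the layer-at-a-time scheduling, modeling the layers as plain callables and the cache as a small container; the actual hardware cache management is abstracted away:

    class LayerCache:
        """Toy cache that holds at most `capacity` layers at once."""

        def __init__(self, capacity=1):
            self.capacity = capacity
            self.resident = []

        def load(self, layer):
            # Evict the oldest completed layer(s) when the cache is full.
            while len(self.resident) >= self.capacity:
                self.resident.pop(0)
            self.resident.append(layer)

    def convert_pixels(layers, pixels, cache):
        """Run pixels through execution-separable layers, one cache-load at a time."""
        activations = pixels
        for layer in layers:
            cache.load(layer)  # copy the layer from main memory to the cache
            activations = layer(activations)
        return activations

With capacity=1 this reduces to strict one-layer-at-a-time execution; a larger capacity lets several layers reside in the cache together.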
Abstract:
A system for super-sampling digital images detects artifacts in an SRGAN super-sampled image, determines blocks of the image that contribute to the artifacts, and, if the artifact-contributing blocks exceed a threshold, discards the SRGAN-generated output image in favor of a super-sampled image generated by an alternate mechanism, such as a nearest-neighbor algorithm.
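
A condensed Python sketch of the fallback logic, assuming a hypothetical count_artifact_blocks detector and an integer upscaling factor; nearest-neighbor upsampling is shown via pixel replication:

    import numpy as np

    def choose_output(input_image, sr_image, count_artifact_blocks, threshold, scale):
        """Return the SRGAN output unless too many blocks contribute to artifacts."""
        if count_artifact_blocks(sr_image) > threshold:
            # Discard the SRGAN output; fall back to nearest-neighbor upscaling.
            return np.repeat(np.repeat(input_image, scale, axis=0), scale, axis=1)
        return sr_image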
Abstract:
Systems and methods upscale an input image by a final upscaling factor. The systems and methods employ a first module implementing a super resolution neural network with feature extraction layers and multiple sets of upscaling layers sharing the feature extraction layers. The multiple sets of upscaling layers upscale the input image according to different respective upscaling factors to produce respective first module outputs. The systems and methods select the first module output with the respective upscaling factor closest to the final upscaling factor. If the respective upscaling factor for the selected first module output is equal to the final upscaling factor, the systems and methods output the selected first module output. Otherwise, the systems and methods provide the selected first module output to a second module that upscales the selected first module output to produce a second module output corresponding to the input image upscaled by the final upscaling factor.
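
The selection logic might be sketched as follows in Python, with sr_branches as a hypothetical dict mapping each supported upscaling factor to its set of upscaling layers (all sharing the feature extraction layers) and second_module as a callable handling the residual factor:

    def upscale(image, final_factor, sr_branches, second_module):
        """Two-stage upscaling: pick the closest branch, then finish if needed."""
        # Select the branch whose upscaling factor is closest to the final factor.
        best = min(sr_branches, key=lambda f: abs(f - final_factor))
        out = sr_branches[best](image)
        if best == final_factor:
            return out
        # Otherwise, the second module upscales by the remaining factor.
        return second_module(out, final_factor / best)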
Abstract:
An example embodiment may involve obtaining (i) an a×b attribute macro-cell, and (ii) a×b pixel macro-cells for each of a luminance plane, a first color plane, and a second color plane of an input image. The a×b attribute macro-cell and each a×b pixel macro-cell may contain 4 non-overlapping m×n cells. The example embodiment may also involve determining 4 attribute-plane output values that represent the 4 non-overlapping m×n attribute cells, 1 to 4 luminance-plane output values that represent the a×b pixel macro-cell of the luminance plane, a first color-plane output value to represent the a×b pixel macro-cell of the first color plane, and a second color-plane output value to represent the a×b pixel macro-cell of the second color plane. The example embodiment may further involve writing an interleaved representation of the output values to a computer-readable output medium.
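
One way to picture the per-plane summarization and interleaving in Python, with simple means standing in for the actual encodings and the cell geometry left abstract:

    import numpy as np

    def encode_interleaved(attr_cells, y_cell, cb_cell, cr_cell, n_luma=4):
        """Emit one interleaved record for an a-by-b macro-cell.

        attr_cells: the 4 non-overlapping m-by-n attribute cells; y_cell,
        cb_cell, cr_cell: the pixel macro-cells of the luminance and two
        color planes; n_luma: number of luminance output values (1 to 4).
        The mean summarizers are placeholders for the actual encodings.
        """
        attr_out = [float(c.mean()) for c in attr_cells]              # 4 values
        luma_out = [float(q.mean())
                    for q in np.array_split(y_cell.ravel(), n_luma)]  # 1-4 values
        color_out = [float(cb_cell.mean()), float(cr_cell.mean())]    # 1 + 1 values
        # Interleaved representation: attribute, luminance, then color values.
        return attr_out + luma_out + color_out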
Abstract:
A digital image processor includes a region proposal network configured to transform digital image inputs into region proposals and bounding box refinement logic configured to transform the region proposals by determining a first set of the region proposals exhibiting dense spacing, determining a second set of the region proposals exhibiting sparse spacing, executing a first transformation to merge at least some of the region proposals exhibiting dense spacing to generate refined region proposals, executing a second transformation to join at least some of the region proposals exhibiting sparse spacing to generate additional ones of the refined region proposals, and applying an expansion transformation to the refined region proposals.
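
A greedy Python sketch of the refinement, where both the merge and join transformations are modeled as a bounding-box union distinguished only by gap thresholds (dense_gap, sparse_gap), followed by the expansion transformation; the abstract does not specify the real spacing metric or pairing rules, so those are assumptions:

    def gap(a, b):
        """Smallest axis-aligned gap between (x0, y0, x1, y1) boxes; 0 if overlapping."""
        dx = max(b[0] - a[2], a[0] - b[2], 0)
        dy = max(b[1] - a[3], a[1] - b[3], 0)
        return max(dx, dy)

    def union(a, b):
        """Bounding box covering both a and b."""
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

    def refine(proposals, dense_gap, sparse_gap, expand):
        """Merge dense proposals, join sparse ones, then expand each result."""
        refined, used = [], set()
        for i, box in enumerate(proposals):
            if i in used:
                continue
            for j in range(i + 1, len(proposals)):
                if j in used:
                    continue
                g = gap(box, proposals[j])
                if g <= dense_gap:
                    # First transformation: merge densely spaced proposals.
                    box = union(box, proposals[j])
                    used.add(j)
                elif g <= sparse_gap:
                    # Second transformation: join sparsely spaced proposals
                    # (modeled here the same way as a merge).
                    box = union(box, proposals[j])
                    used.add(j)
            # Expansion transformation: grow the refined box on all sides.
            refined.append((box[0] - expand, box[1] - expand,
                            box[2] + expand, box[3] + expand))
        return refined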
Abstract:
Systems and methods for processing images receive an input image. The systems and methods provide the input image to a first module to increase a resolution of the input image to produce an upscaled image. The systems and methods detect white pixels in the input image. The systems and methods generate a mask associated with the input image. The mask includes mask bits that are set to mark the white pixels in the input image. The systems and methods upscale the mask to produce an upscaled mask matching a resolution of the upscaled image. The systems and methods identify target pixels of the upscaled image that correspond to the set mask bits in the upscaled mask. The systems and methods modify the upscaled image to produce an output image by replacing the target pixels of the upscaled image with replacement pixels having greater whiteness. The systems and methods output the output image.
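
A compact Python sketch of the mask path, assuming grayscale arrays, an integer scale factor, and illustrative whiteness values; upscale_image stands in for the first module:

    import numpy as np

    def whiten(input_img, upscale_image, scale, white_threshold=250, white_value=255):
        """Upscale an image while restoring the whiteness of originally white pixels."""
        upscaled = upscale_image(input_img)             # first module output
        mask = input_img >= white_threshold             # mark white input pixels
        # Upscale the mask by pixel replication to match the upscaled resolution.
        up_mask = np.repeat(np.repeat(mask, scale, axis=0), scale, axis=1)
        output = upscaled.copy()
        output[up_mask] = white_value                   # replace target pixels
        return output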