Abstract:
There is disclosed an image processing apparatus which applies an adjusting process to an image that includes a pixel to be processed. The image processing apparatus extracts an image area of a predetermined size that includes the pixel to be processed. The apparatus calculates a variation associated with the pixel to be processed from the signal values of the pixels included in the image area, and likewise calculates a variation time count in the image area from those signal values. The apparatus calculates adjusting levels Fz1, Fz2, and Fe from the variation time count and the variation using a definition unit which defines the correspondence among the variation time count, the variation, and the adjusting levels, and applies an adjusting process to a signal value of the pixel to be processed using the calculated adjusting levels. Note that the definition unit defines the correspondence so that the adjusting levels Fz1, Fz2, and Fe change progressively in accordance with different variation time counts or different variations.
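The flow can be summarized in a short Python sketch. The names, the use of the signal-value range as the variation, the sign-change count as the variation time count, and the final blending formula are illustrative assumptions; the abstract fixes only the overall structure.

```python
import numpy as np

def adjusting_process(image, y, x, size, definition):
    """Adjust the pixel at (y, x) using a size-by-size image area.
    `definition` stands in for the definition unit and returns the
    adjusting levels (Fz1, Fz2, Fe)."""
    h = size // 2
    area = image[y - h:y + h + 1, x - h:x + h + 1].astype(int)

    # Variation: taken here as the range of signal values in the area.
    variation = int(area.max() - area.min())

    # Variation time count: here, the number of sign changes of the
    # horizontal signal-value differences (one plausible reading).
    signs = np.sign(np.diff(area, axis=1))
    time_count = int(np.count_nonzero(signs[:, 1:] * signs[:, :-1] < 0))

    fz1, fz2, fe = definition(time_count, variation)

    # Illustrative adjustment: blend the pixel toward the area mean and
    # add an emphasis term scaled by Fe.
    v, m = float(image[y, x]), float(area.mean())
    return m + fz1 * fz2 * (v - m) + fe * (v - m)
```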
Abstract:
One dither mask having the highest spatial frequency is selected from a plurality of dither masks. Next, a granularity is obtained with reference to a table, based on the selected dither mask and an ejection amount level per area. Moreover, a difference in granularity between adjacent areas is calculated for all of the areas. The maximum value of the obtained differences in granularity is then compared with a determination threshold. When the maximum difference in granularity is greater than or equal to the threshold, it is determined whether or not a dither mask having a spatial frequency lower than that of the selected dither mask is stored in a memory. When there are dither masks having lower spatial frequencies, a dither mask having a spatial frequency lower by one level than that of the selected dither mask is selected.
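A plausible rendering of this selection loop in Python, with hypothetical names for the table and inputs:

```python
def select_dither_mask(masks, granularity_table, area_levels, threshold):
    """`masks` is ordered from highest to lowest spatial frequency;
    `granularity_table[mask][level]` gives the granularity of one area
    for its ejection amount level; `area_levels` lists the levels of
    the areas in adjacent order."""
    for mask in masks:  # start from the highest spatial frequency
        # Granularity of every area, looked up from the table.
        g = [granularity_table[mask][level] for level in area_levels]

        # Maximum difference in granularity between adjacent areas.
        max_diff = max((abs(a - b) for a, b in zip(g, g[1:])), default=0.0)

        # Keep this mask if the maximum difference is below the threshold;
        # otherwise try the mask whose spatial frequency is lower by one level.
        if max_diff < threshold:
            return mask
    return masks[-1]  # every mask failed; fall back to the lowest frequency
```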
Abstract:
Nozzles in a print head are arrayed at a density of 600 dpi. Moreover, a dither matrix has a size of 16 pixels × 16 pixels at 600 dpi, so its width in the nozzle array direction is WD = 16 pixels. The dither matrix is repeatedly used. Meanwhile, each rectangle represents an HS processing unit of width WHS = 3 pixels. As a consequence, the following least-common-multiple relationship is established in the nozzle array direction: 3 × WD = 16 × WHS = 48 pixels. In this case, the cycle of interference unevenness can be prolonged to the least common multiple of WD and WHS, that is, 48 pixels (3WD). In this manner, the size of the dither matrix is not an integral multiple of the HS processing unit width, so that the cycle of interference unevenness can be prolonged beyond the size of the dither matrix. Thus, the interference unevenness is hardly recognizable.
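The arithmetic is just a least-common-multiple check, verified in a few lines of Python:

```python
from math import gcd

WD = 16   # dither matrix width in pixels (600 dpi)
WHS = 3   # HS processing unit width in pixels

lcm = WD * WHS // gcd(WD, WHS)
print(lcm)  # 48 = 3 * WD = 16 * WHS, the prolonged unevenness cycle
```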
Abstract:
When an input image is shifted by 640 pixels from a test pattern with reference to the position of a nozzle, the remainder is obtained by dividing 640 pixels by the number of pixels of the dither matrix in the x direction. For example, when the size of the dither matrix in the x direction is 256 pixels, the remainder is 640 mod 256 = 128, and the dither matrix is shifted by 128 pixels in the direction reverse to the x direction. In this manner, the phase of the dither matrix at the time of quantization during test pattern printing matches the phase of the dither matrix at the time of quantization during input image printing. Consequently, the unevenness of the dither matrix at a position N becomes the same in both the test pattern and the input image, and the HS correction to density unevenness caused by the unevenness of the dither matrix becomes suitable for the input image.
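The remainder computation amounts to a single modulo, sketched here with hypothetical parameter names:

```python
def dither_phase_shift(image_shift_px, dither_width_px):
    """Shift, applied in the direction reverse to x, that realigns the
    dither matrix phase with the one used for the test pattern."""
    return image_shift_px % dither_width_px

print(dither_phase_shift(640, 256))  # 128, matching the example above
```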
Abstract:
Achieving the settings desired by a user may be difficult when the user specifies an area (photometry area) for setting exposure on a combined image acquired through wide dynamic range (WDR) imaging. When a plurality of images is combined and output as a combined image, an imaging apparatus therefore acquires, together with the combined image, information indicating a plurality of areas in the combined image that have input and output characteristics different from each other, allows the user to set a detection area based on the acquired information, acquires an exposure parameter based on that setting, and executes the imaging operation.
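A rough Python sketch of this sequence follows; the `camera` methods, the `AreaInfo` structure, and the callback are hypothetical stand-ins for the apparatus's units.

```python
from dataclasses import dataclass

@dataclass
class AreaInfo:
    """One area of the combined image with its own input/output
    characteristic (hypothetical structure)."""
    region: tuple          # (x, y, width, height)
    characteristic: str    # identifier of the tone mapping applied there

def wdr_capture(camera, choose_detection_area):
    # Acquire the combined (WDR) image together with the area information.
    combined_image, areas = camera.capture_combined_with_areas()

    # Let the user set the detection (photometry) area from that information.
    detection_area = choose_detection_area(combined_image, areas)

    # Acquire an exposure parameter based on the setting, then image again.
    exposure = camera.meter(detection_area)
    return camera.capture(exposure)
```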
Abstract:
An image capturing apparatus can capture a visible light image and an infrared light image of the same object. The image capturing apparatus comprises a detection unit configured to detect a predetermined object in the visible light image, an extraction unit configured to extract, from the infrared light image, feature information of a specific portion of the object detected by the detection unit, and an estimation unit configured to estimate unique information of the predetermined object using the feature information extracted by the extraction unit.
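A minimal Python sketch of this pipeline, with the three units passed in as hypothetical callables:

```python
def estimate_unique_information(visible_image, infrared_image,
                                detect, extract, estimate):
    # Detect the predetermined object in the visible light image.
    detected_object = detect(visible_image)

    # Extract feature information of a specific portion of that object
    # from the corresponding region of the infrared light image.
    features = extract(infrared_image, detected_object)

    # Estimate the object's unique information from those features.
    return estimate(features)
```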
Abstract:
In an image processing apparatus for encoding image data, and in a method of controlling the same, it is determined whether an attribute of each of a plurality of areas in the image data corresponds to an edge in an image based on that image data, and one of a plurality of sub-sampling processes is selected according to the determination for each of the plurality of areas. Note that the plurality of sub-sampling processes sub-sample the color difference components of an area by different processes. Each of the plurality of areas is then sub-sampled by the sub-sampling process selected for it, and the image data is encoded.
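A small Python sketch of the per-area selection; the specific 4:4:4 / 4:2:0 pairing is an assumption, since the abstract only states that the processes sub-sample the color difference components differently:

```python
import numpy as np

def subsample_chroma(cb, cr, mode):
    """Sub-sample the color-difference planes of one area (even-sized)."""
    if mode == "4:2:0":
        # Average each 2x2 block of chroma samples.
        cb = cb.reshape(cb.shape[0] // 2, 2, cb.shape[1] // 2, 2).mean(axis=(1, 3))
        cr = cr.reshape(cr.shape[0] // 2, 2, cr.shape[1] // 2, 2).mean(axis=(1, 3))
    return cb, cr  # "4:4:4" leaves the planes untouched

def encode_area(y, cb, cr, is_edge):
    """Select the sub-sampling process from the edge determination."""
    mode = "4:4:4" if is_edge else "4:2:0"
    return (y, *subsample_chroma(cb, cr, mode), mode)
```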
Abstract:
An image processing device includes first and second image processing modules, each including an image processing unit, and a connection module that is connected to the first and second image processing modules and moves image data from one image processing module to the other. At least one of the image processing modules includes a weighted average processing unit that calculates, based on a weighting coefficient included in an attribute value, a weighted average of a pixel value of the input image data and an image-processed pixel value, and an output unit which outputs at least one of the image-processed pixel value and the weighted-averaged pixel value.
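The weighted average itself reduces to a convex blend; a minimal Python sketch, assuming a hypothetical `attribute["weight"]` field for the coefficient:

```python
def weighted_average_output(input_value, processed_value, attribute):
    """Blend the input pixel value with the image-processed pixel value
    using the weighting coefficient carried in the attribute value
    (assumed here to lie in [0, 1])."""
    w = attribute["weight"]   # hypothetical key for the coefficient
    return w * processed_value + (1.0 - w) * input_value

# The output unit may then forward the processed value, the weighted-
# averaged value, or both to the next module via the connection module.
```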