Abstract:
An image processing device and method, where the device includes a data continuity detector configured to detect data continuity of first image data made up of a plurality of pixels acquired by light signals of the real world being cast upon a plurality of detecting elements each having spatio-temporal integration effects, a real world estimating unit configured to detect real world features possessed by a first function representing the real world light signals, and an image generator configured to predict and generate second image data of higher quality than the first image data, based on the real world features detected by the real world estimating unit.
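As a rough illustration of the three-stage structure described above (continuity detection, real-world estimation, image generation), the following Python sketch wires hypothetical components into a pipeline. The class names, the gradient-based continuity proxy, and the naive upsampling placeholder are assumptions for illustration only, not the patented implementation.

```python
import numpy as np

class DataContinuityDetector:
    """Detects a per-pixel direction of data continuity (gradient-based stand-in)."""
    def detect(self, image: np.ndarray) -> np.ndarray:
        gy, gx = np.gradient(image.astype(float))
        return np.arctan2(gy, gx)  # continuity direction per pixel (assumed proxy)

class RealWorldEstimator:
    """Estimates features of the real-world signal from the image and its continuity."""
    def estimate(self, image: np.ndarray, continuity: np.ndarray) -> np.ndarray:
        return np.stack([image, continuity], axis=-1)  # placeholder feature map

class ImageGenerator:
    """Predicts second image data of higher quality (naive 2x upsampling placeholder)."""
    def generate(self, features: np.ndarray, scale: int = 2) -> np.ndarray:
        return np.kron(features[..., 0], np.ones((scale, scale)))

def process(first_image_data: np.ndarray) -> np.ndarray:
    continuity = DataContinuityDetector().detect(first_image_data)
    features = RealWorldEstimator().estimate(first_image_data, continuity)
    return ImageGenerator().generate(features)  # second image data
```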
Abstract:
An image processing device and method, where the image processing device includes an angle detector configured to detect the angle, with respect to a reference axis, of data continuity in image data made up of a plurality of pixels acquired by light signals of the real world being cast upon a plurality of detecting elements each having spatio-temporal integration effects, and a real world estimating unit configured to estimate the light signals by estimating the continuity of the real world light signals that has been lost, based on the angle detected by the angle detector.
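One way the angle of data continuity relative to a reference axis could be estimated is from image gradients, for example via a structure tensor. The patent does not specify this method, so the sketch below is only an assumed illustration, with the horizontal axis taken as the reference axis.

```python
import numpy as np

def detect_continuity_angle(image: np.ndarray) -> float:
    """Estimate the angle (degrees) of data continuity relative to the horizontal axis."""
    gy, gx = np.gradient(image.astype(float))
    # Structure-tensor components; the dominant gradient direction is perpendicular
    # to the direction of continuity (assumed model).
    gxx, gyy, gxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    gradient_angle = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
    continuity_angle = gradient_angle + np.pi / 2.0
    return float(np.degrees(continuity_angle) % 180.0)

angle = detect_continuity_angle(np.eye(8))  # toy input: a diagonal line
```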
Abstract:
An image processing device, method, and program capable of obtaining processing results that are more accurate and more precise with regard to events in the real world, by taking into consideration the real world from which the data was acquired. The image processing device includes a data continuity detector and a real world estimating unit.
Abstract:
An apparatus and method for separating a background image and an object image. The apparatus comprises an area specifying unit for specifying areas of the image. The area specifying unit specifies, in a first period of time, an uncovered background area in which foreground object components and background object components are mixed. The area specifying unit specifies, in a second period of time, a foreground area consisting of only the foreground object components and a background area consisting of only the background object components. The area specifying unit specifies, in a third period of time, a covered background area in which the foreground object components and the background object components are mixed. A mixture-ratio calculator detects a mixture ratio indicating the ratio between the foreground object components and the background object components. A separator separates the pixel data into the foreground object components and the background object components based on the mixture ratio.
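Assuming the conventional mixing model for covered/uncovered background areas, in which a mixed pixel is the background component weighted by the mixture ratio plus the foreground component, the separation step could look like the following sketch. The model and variable names are assumptions, not taken verbatim from the patent.

```python
import numpy as np

def separate(mixed: np.ndarray, background: np.ndarray, mixture_ratio: np.ndarray):
    """Split mixed pixel data into foreground and background object components,
    assuming mixed = mixture_ratio * background + foreground."""
    background_component = mixture_ratio * background
    foreground_component = mixed - background_component
    return foreground_component, background_component

mixed = np.array([120.0, 200.0])          # pixels in a mixed (covered/uncovered) area
background = np.array([100.0, 100.0])     # corresponding background pixel data
mixture_ratio = np.array([0.3, 0.6])      # detected mixture ratio per pixel
fg, bg = separate(mixed, background, mixture_ratio)
```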
Abstract:
The present invention relates to an image processing device enabling detection of a mixture ratio indicating the state of mixture between multiple objects. A contour region information generating unit 421 and a normal equation generating unit 422 extract contour region pixel data within a frame of interest, positioned in a contour region having approximately the same mixture ratio; extract corresponding pixel data from a frame different from the frame of interest; extract background pixel data corresponding to the contour region pixel data or the corresponding pixel data; and generate an equation in which the mixture ratio is an unknown, based on region specifying information specifying a mixed region and a non-mixed region made up of a foreground region and a background region. A least square approximation unit 423 detects the mixture ratio by solving the equation. The present invention can be applied to signal processing devices for processing image signals.
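The least-squares step can be illustrated under the common assumption that, within a contour region of approximately the same mixture ratio, each mixed pixel satisfies C = alpha * B + f with the summed foreground component f roughly constant. The exact normal equations in the patent may differ; this is only a sketch of the idea.

```python
import numpy as np

def estimate_mixture_ratio(mixed: np.ndarray, background: np.ndarray) -> float:
    """Solve [B 1] @ [alpha, f]^T ~= C for the mixture ratio alpha by least squares."""
    A = np.column_stack([background, np.ones_like(background)])
    solution, *_ = np.linalg.lstsq(A, mixed, rcond=None)
    alpha, foreground_sum = solution
    return float(alpha)

mixed = np.array([130.0, 150.0, 170.0])       # contour-region pixel data (frame of interest)
background = np.array([100.0, 150.0, 200.0])  # corresponding background pixel data
alpha = estimate_mixture_ratio(mixed, background)  # 0.4 here, with foreground sum 90
```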
Abstract:
A motion-vector-setting section (31) sets a motion vector in units of pixels in a target image. Based on the motion vector, a target-pixel-setting section (35) sets a target pixel for each image of plural images to be processed. A motion-blur-amount-setting section (33) sets a motion blur amount in units of pixels based on the motion vector and the exposure-time ratio set in units of images in the exposure-time-ratio-setting section (32). A processing-region-setting section (36) sets processing regions corresponding to the target pixel for each of the plural images based on the motion blur amount. A processing-coefficient-setting section (37) sets processing coefficients based on the motion blur amount. A pixel-value-generating section (38) generates motion-blur-removed pixel values corresponding to the target pixel by linear combination of the pixel values of pixels in the processing regions and the processing coefficients, so that they can be output from an integration section (39) as one pixel value. By making significant use of information in the time direction, motion-blur-removing processing can be performed accurately.
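The pixel-value-generating step reduces to a dot product between the pixel values in a processing region and the processing coefficients. The region size and the coefficient values below are placeholders for illustration; in the described device the actual coefficients come from the processing-coefficient-setting section.

```python
import numpy as np

def generate_pixel_value(processing_region: np.ndarray, coefficients: np.ndarray) -> float:
    """Motion-blur-removed value as a linear combination of region pixels and coefficients."""
    assert processing_region.shape == coefficients.shape
    return float(np.dot(processing_region.ravel(), coefficients.ravel()))

region = np.array([110.0, 118.0, 126.0, 134.0, 142.0])  # pixels along the motion direction
coeffs = np.array([-0.1, -0.2, 1.6, -0.2, -0.1])        # illustrative blur-removal coefficients
value = generate_pixel_value(region, coeffs)
```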
Abstract:
A shooting-information-detecting section (31) detects shooting information from an image pick-up section (10). A motion-detecting section (33) detects the motion direction of the image over the overall screen based on the motion direction of the image pick-up section contained in the shooting information. A processing-region-setting section (36) sets a processing region, corresponding to a target pixel in a predicted target image, in at least one of the predicted target image and a peripheral image thereof. A processing-coefficient-setting section (37) sets a motion-blur-removing-processing coefficient corresponding to the motion direction detected by the motion-detecting section (33). A pixel-value-generating section (38) generates a pixel value corresponding to the target pixel based on the pixel value of a pixel in the processing region set by the processing-region-setting section (36) and the processing coefficient set by the processing-coefficient-setting section (37). Motion-blur-removing processing can be performed accurately.
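Selecting a motion-blur-removing-processing coefficient by the detected motion direction could be sketched as a table lookup over quantized directions. The assumption that the image moves opposite to the camera, the 45-degree quantization, and the coefficient values are all illustrative, not taken from the patent.

```python
import numpy as np

# Illustrative coefficient sets keyed by quantized motion direction (degrees).
COEFFICIENT_TABLE = {
    0:   np.array([-0.10, 1.20, -0.10]),
    45:  np.array([-0.15, 1.30, -0.15]),
    90:  np.array([-0.10, 1.20, -0.10]),
    135: np.array([-0.15, 1.30, -0.15]),
}

def select_coefficients(camera_motion: np.ndarray) -> np.ndarray:
    """Derive the overall image motion direction from camera motion and pick coefficients."""
    image_motion = -camera_motion  # image moves opposite to the camera (assumption)
    angle = np.degrees(np.arctan2(image_motion[1], image_motion[0])) % 180.0
    quantized = int(round(angle / 45.0) * 45) % 180
    return COEFFICIENT_TABLE[quantized]

coeffs = select_coefficients(np.array([3.0, -3.0]))  # selects the 135-degree coefficient set
```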
Abstract:
A motion-setting section (61) sets a motion amount and a motion direction for obtaining processing coefficients. A student-image-generating section (62) generates student images obtained by adding motion blur to a teacher image, not only based on the set motion amount and motion direction but also with at least one of the motion amount and the motion direction changed in a specific ratio, as well as student images obtained by adding no motion blur to the teacher image. A prediction-tap-extracting section (64) extracts, in order to extract a main term that mainly contains components of the target pixel, at least the pixel values of pixels in the student images whose spatial positions roughly agree with the spatial position of the target pixel in the teacher image. A processing-coefficient-generating section (65) generates processing coefficients for predicting the target pixels in the teacher images from the pixel values of the extracted pixels, based on the relationship between the extracted pixels and the target pixels in the teacher images. Processing coefficients suitable for motion blur removal that is robust against shifts of the motion vector can thus be generated through learning.
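The learning stage can be sketched as: blur a teacher image to obtain a student image, extract prediction taps around each position, and solve for coefficients that map the taps to the blur-free teacher pixel by least squares. The horizontal box-blur model, tap layout, and sizes below are assumptions for illustration only.

```python
import numpy as np

def add_horizontal_motion_blur(image: np.ndarray, amount: int) -> np.ndarray:
    """Simulate motion blur as a horizontal box average over `amount` pixels (assumed model)."""
    kernel = np.ones(amount) / amount
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, image)

def learn_coefficients(teacher: np.ndarray, amount: int = 5, taps: int = 9) -> np.ndarray:
    """Fit processing coefficients that predict teacher pixels from student-image taps."""
    student = add_horizontal_motion_blur(teacher, amount)
    half = taps // 2
    rows, cols = teacher.shape
    tap_rows, targets = [], []
    for y in range(rows):
        for x in range(half, cols - half):
            tap_rows.append(student[y, x - half:x + half + 1])  # taps centered on the target
            targets.append(teacher[y, x])                        # blur-free teacher pixel
    coefficients, *_ = np.linalg.lstsq(np.asarray(tap_rows), np.asarray(targets), rcond=None)
    return coefficients

teacher = np.random.default_rng(0).random((32, 64)) * 255.0
coefficients = learn_coefficients(teacher)  # coefficients for this blur amount and direction
```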
Abstract:
A target-pixel-setting section (31) sets a target pixel in a target image to be predicted. A motion-direction-detecting section (32) detects a motion direction corresponding to the target pixel. A pixel-value-extracting section (36) extracts, from peripheral images corresponding to the target image, in order to extract a main term that mainly contains components of the target pixel of a moving object subject to motion blur in the peripheral images, at least the pixel values of pixels in the peripheral images whose spatial positions roughly agree with the spatial position of the target pixel. A processing-coefficient-setting section (37a) sets a specific motion-blur-removing-processing coefficient. A pixel-value-generating section (38a) newly generates pixel values for processing from the pixel values extracted by the pixel-value-extracting section (36) in accordance with the motion direction, and generates a pixel value corresponding to the target pixel based on the pixel values for processing and the specific motion-blur-removing-processing coefficient. Motion-blur-removing processing that is robust against shifts of the motion vector can thus be performed.
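Generating the "pixel values for processing" can be pictured as resampling the extracted main-term pixel values along the detected motion direction and then applying the specific coefficient set. The nearest-neighbour resampling and the coefficient values are assumptions used only for illustration.

```python
import numpy as np

SPECIFIC_COEFFS = np.array([-0.1, -0.2, 1.6, -0.2, -0.1])  # placeholder coefficient set

def resample_along_direction(image, center, direction_deg, taps=5):
    """Sample `taps` pixel values along the motion direction (nearest-neighbour, assumed)."""
    cy, cx = center
    dy, dx = np.sin(np.radians(direction_deg)), np.cos(np.radians(direction_deg))
    offsets = np.arange(taps) - taps // 2
    ys = np.clip(np.rint(cy + offsets * dy).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.rint(cx + offsets * dx).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

def generate_target_pixel(peripheral_image, target, direction_deg) -> float:
    """Combine directionally resampled pixel values with the specific processing coefficients."""
    values_for_processing = resample_along_direction(peripheral_image, target, direction_deg)
    return float(values_for_processing @ SPECIFIC_COEFFS)

value = generate_target_pixel(np.random.default_rng(1).random((20, 20)), (10, 10), 45.0)
```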
Abstract:
An image processing device and method, where the image processing device includes a continuity region detector configured to detect a region having data continuity within image data made up of a plurality of pixels acquired by light signals of the real world being cast upon a plurality of detecting elements each having spatio-temporal integration effects, and a real world estimating unit configured to estimate the light signals by estimating the continuity of the real world light signals that has been lost.