Abstract:
A user interface unit sets a parameter based on operations of a user and outputs it to a parameter correction unit. The parameter correction unit extracts the feature quantity of pixels in an observation region that corresponds to the value of the parameter and, based on the maximum value of that feature quantity, corrects the parameter specified by the user to obtain a correction parameter. A blurring correction unit corrects input image data based on the correction parameter and generates image data of an image in which the blurring has been corrected. The present invention can be applied, for example, to a digital camera.
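A minimal sketch of the idea, assuming for illustration that the feature quantity is a local difference magnitude in the observation region and that the correction simply rescales the user-specified parameter by the region's maximum feature value; the function correct_parameter and the normalization by 255 are hypothetical, not taken from the abstract:

```python
import numpy as np

def correct_parameter(image, user_param, region):
    """Hypothetical parameter correction: rescale the user-specified
    blur parameter by the maximum feature quantity (here, a local
    difference magnitude) found in the observation region."""
    y0, y1, x0, x1 = region
    patch = image[y0:y1, x0:x1].astype(float)
    # Feature quantity: largest absolute vertical/horizontal pixel difference.
    gy = np.abs(np.diff(patch, axis=0)).max() if patch.shape[0] > 1 else 0.0
    gx = np.abs(np.diff(patch, axis=1)).max() if patch.shape[1] > 1 else 0.0
    max_feature = max(gx, gy)
    # Illustrative rule: strong detail keeps the parameter, flat regions reduce it.
    return user_param * (max_feature / 255.0)

image = np.random.randint(0, 256, (64, 64))
sigma = correct_parameter(image, user_param=2.0, region=(16, 32, 16, 32))
print(sigma)
```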
Abstract:
An apparatus for use in converting an SD signal into an HD signal. The pixel data sets of a tap corresponding to an objective position in the HD signal are selectively extracted from the SD signal. A class CL to which the pixel data set of the objective position belongs is then obtained using the pixel data sets of the tap. A coefficient production circuit produces coefficient data sets Wi for each class based on coefficient seed data sets for each class and the values of picture quality adjusting parameters h and v obtained by user operation. A tap selection circuit selectively extracts the data sets xi of the tap corresponding to the objective position in the HD signal from the SD signal, and then a calculation circuit produces the pixel data sets of the objective position in the HD signal according to an estimation equation using the data sets xi and the coefficient data sets Wi corresponding to the class CL read out of a memory.
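A rough sketch of the class-adaptive estimation described above, assuming a 9-tap layout and a small polynomial basis in h and v for producing coefficients from the seed data; the basis, tap count, and random stand-in data are illustrative assumptions, and only the estimation equation y = sum_i Wi * xi follows the abstract:

```python
import numpy as np

def produce_coefficients(seed, h, v):
    """Hypothetical coefficient production: combine coefficient seed data
    with picture-quality parameters h and v through a small polynomial.
    The polynomial basis is an assumption; the abstract only states that
    coefficients Wi are produced from the seeds and (h, v)."""
    basis = np.array([1.0, h, v, h * v, h * h, v * v])
    return seed @ basis                 # seed: (num_taps, num_basis) -> one Wi per tap

def estimate_hd_pixel(taps, coeffs):
    """Estimation equation: y = sum_i Wi * xi."""
    return float(np.dot(coeffs, taps))

num_taps = 9
seed = np.random.rand(num_taps, 6) * 0.1      # stand-in coefficient seed data for one class
taps = np.random.randint(0, 256, num_taps)    # SD pixel data xi around the objective position
Wi = produce_coefficients(seed, h=0.5, v=0.3)
print(estimate_hd_pixel(taps, Wi))
```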
Abstract:
A shooting-information-detecting section detects shooting information from an image pick-up section. A motion-detecting section detects the motion direction of an image on the overall screen based on the motion direction of the image pick-up section contained in the shooting information. A processing-region-setting section sets a processing region in at least one of a predicted target image and a peripheral image thereof, corresponding to a target pixel in the predicted target image. A processing-coefficient-setting section sets a motion-blur-removing processing coefficient that corresponds to the motion direction detected by the motion-detecting section. A pixel-value-generating section generates a pixel value for the target pixel based on the pixel values of the pixels in the processing region set by the processing-region-setting section and the processing coefficient set by the processing-coefficient-setting section. Motion-blur-removing processing can thus be performed accurately.
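A hedged sketch of the per-pixel product-sum, assuming the processing coefficients are looked up by motion direction and applied to taps sampled along that direction; the kernel values and tap layout are invented for illustration and are not the patent's coefficients:

```python
import numpy as np

def remove_motion_blur_pixel(frame, y, x, direction, coeff_table):
    """Hypothetical motion-blur removal for one target pixel: pick the
    processing coefficients for the detected motion direction and take a
    product-sum over pixels sampled along that direction."""
    coeffs = coeff_table[direction]                    # coefficients for this motion direction
    dy, dx = {"horizontal": (0, 1), "vertical": (1, 0)}[direction]
    half = len(coeffs) // 2
    taps = []
    for k in range(-half, half + 1):
        yy = np.clip(y + k * dy, 0, frame.shape[0] - 1)
        xx = np.clip(x + k * dx, 0, frame.shape[1] - 1)
        taps.append(frame[yy, xx])
    return float(np.dot(coeffs, taps))

coeff_table = {
    "horizontal": np.array([-0.1, -0.2, 1.6, -0.2, -0.1]),  # stand-in deblurring kernel
    "vertical":   np.array([-0.1, -0.2, 1.6, -0.2, -0.1]),
}
frame = np.random.randint(0, 256, (32, 32)).astype(float)
print(remove_motion_blur_pixel(frame, 16, 16, "horizontal", coeff_table))
```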
Abstract:
An image processing device and method, where the device includes a data continuity detector configured to detect data continuity of image data made up of a plurality of pixels acquired when light signals of the real world are cast upon a plurality of detecting elements, each having spatio-temporal integration effects, and a real world estimating unit configured to generate a gradient of the pixel values of the plurality of pixels corresponding to a position in a one-dimensional direction of the spatio-temporal directions, with respect to pixels of interest within the image data.
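A minimal illustration of the gradient generation for a pixel of interest, assuming a centred-difference estimator along one chosen spatial direction; the abstract does not specify the estimator, so pixel_gradient_1d is hypothetical:

```python
import numpy as np

def pixel_gradient_1d(image, y, x, axis):
    """Hypothetical gradient of pixel values at a pixel of interest along
    one spatial direction (axis 0 = vertical, axis 1 = horizontal),
    using a centred difference of the neighbouring pixels."""
    if axis == 1:
        left  = image[y, max(x - 1, 0)]
        right = image[y, min(x + 1, image.shape[1] - 1)]
    else:
        left  = image[max(y - 1, 0), x]
        right = image[min(y + 1, image.shape[0] - 1), x]
    return (float(right) - float(left)) / 2.0

image = np.arange(64, dtype=float).reshape(8, 8)
print(pixel_gradient_1d(image, 4, 4, axis=1))
```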
Abstract:
The present invention relates to an apparatus for processing an image signal etc. that is well suited to removing coding noise from, for example, an image signal. Based on five consecutive frames of an image signal Va, a memory portion 121 outputs, as pixel data xi of predictive taps, plural items of pixel data located in space-directional and time-directional peripheries of a target position in an image signal Vb. In this case, the frames before and after the current frame are subjected to motion compensation using a motion vector. A class classification portion 124 obtains a class code CL indicating the class to which the pixel data of the target position in the image signal Vb belongs, using the pixel data xi and motion vectors BWV(0), BWV(−1), FWV(0), and FWV(+1). A calculating circuit 126 obtains pixel data y of the target position in the image signal Vb based on an estimation equation using the pixel data xi and coefficient data Wi corresponding to the class code CL.
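A sketch of the tap gathering and estimation step, assuming a 3x3 block per frame over five frames and simple translation by per-frame motion vectors; the tap pattern, the way the backward/forward vectors are applied, and the stand-in coefficient data are assumptions, while the final step follows the estimation equation y = sum_i Wi * xi:

```python
import numpy as np

def gather_prediction_taps(frames, t, y, x, motion_vectors):
    """Hypothetical tap gathering over five consecutive frames: the frames
    before and after the current one are shifted by their motion vectors
    before a 3x3 block around the target position is collected."""
    taps = []
    for offset in (-2, -1, 0, 1, 2):
        dy, dx = motion_vectors.get(offset, (0, 0))     # (0, 0) for the current frame
        frame = frames[t + offset]
        yy = np.clip(y + dy, 1, frame.shape[0] - 2)
        xx = np.clip(x + dx, 1, frame.shape[1] - 2)
        taps.extend(frame[yy - 1:yy + 2, xx - 1:xx + 2].ravel())
    return np.array(taps, dtype=float)

def estimate_pixel(taps, coeffs):
    """Estimation equation: y = sum_i Wi * xi with the class-selected coefficients."""
    return float(np.dot(coeffs, taps))

frames = [np.random.randint(0, 256, (32, 32)) for _ in range(5)]
mv = {-2: (0, -1), -1: (0, -1), 1: (0, 1), 2: (0, 1)}   # stand-in motion vectors per frame offset
xi = gather_prediction_taps(frames, 2, 16, 16, mv)
Wi = np.random.rand(xi.size) / xi.size                  # stand-in coefficient data for class CL
print(estimate_pixel(xi, Wi))
```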
Abstract:
An image processing device for processing images containing background images and moving objects. A region specifying unit specifies a mixed region made up of a mixture of foreground object components and background object components, and a non-mixed region made up of only one of the foreground object components and the background object components, and outputs region information corresponding to the specifying results. A foreground/background separation unit separates the input image into foreground component images and background component images, corresponding to the region information. A separated image processing unit processes the foreground component images and background component images individually, corresponding to the results of separation.
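A toy per-pixel separation, assuming the mixed-region pixel is modelled as a blend of foreground and background components with a mixture ratio alpha supplied from elsewhere; the blend model and the helper separate_pixel are illustrative, not the patent's separation method:

```python
def separate_pixel(pixel, region_label, alpha, background_estimate):
    """Hypothetical per-pixel separation: in the mixed region the observed
    value is treated as a blend of foreground and background components
    with mixture ratio alpha; non-mixed regions pass through unchanged."""
    if region_label == "foreground":
        return pixel, 0.0
    if region_label == "background":
        return 0.0, pixel
    # Mixed region: remove the background contribution to recover the foreground component.
    foreground = max(pixel - alpha * background_estimate, 0.0)
    background = pixel - foreground
    return foreground, background

print(separate_pixel(140.0, "mixed", alpha=0.4, background_estimate=100.0))
```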
Abstract:
An image processing apparatus detects a noise area in image data generated by decoding data encoded by a frequency transform method and a lossy compression method. The image processing apparatus includes a motion detection unit for detecting motion in an area having at least one pixel in the image data, a deviation detection unit for detecting the deviation of the image motion in that area, and a noise detection unit for detecting the noise area in accordance with the deviation of the image motion.
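A rough sketch of this detection chain, assuming block-wise motion is measured as the mean frame difference and a noise block is one whose motion deviates strongly from the image-wide mean; the block size and threshold are arbitrary illustrative choices:

```python
import numpy as np

def detect_noise_area(prev_frame, cur_frame, block=8, deviation_threshold=12.0):
    """Hypothetical noise-area detection: estimate per-block motion as the
    mean frame difference, then flag blocks whose motion deviates strongly
    from the mean motion over the whole image."""
    h, w = cur_frame.shape
    motion = np.zeros((h // block, w // block))
    for by in range(motion.shape[0]):
        for bx in range(motion.shape[1]):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            motion[by, bx] = np.abs(cur_frame[sl].astype(float) - prev_frame[sl]).mean()
    deviation = np.abs(motion - motion.mean())          # deviation of the image motion
    return deviation > deviation_threshold              # True marks a detected noise block

prev = np.random.randint(0, 256, (64, 64))
cur = prev.copy()
cur[:8, :8] = np.random.randint(0, 256, (8, 8))         # simulate a block of coding noise
print(detect_noise_area(prev, cur).sum(), "noisy blocks")
```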
Abstract:
An image signal processing apparatus and method perform appropriate correction of image blurring in accordance with an image characteristic. In the apparatus, a user interface designates a blurring parameter, and a control signal generating unit generates a control signal corresponding to the designated parameter. An image characteristic detecting unit determines directions in which pixels in an input image have flat levels and directions in which pixels in the input image have levels corresponding to edges. Based on the determination, an address calculating unit reads coefficients from a coefficient ROM and supplies the coefficients to a product-sum calculating unit. The product-sum calculating unit generates a blurring-eliminated image by performing product-sum calculation using the coefficients. A post-processing unit produces an output image based on the input image and the result of product-sum calculation.
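A hedged sketch of the coefficient addressing and product-sum, assuming a 3x3 patch, a flat/edge decision per direction, and a small stand-in table in place of the coefficient ROM; the thresholds and kernel values are invented for illustration:

```python
import numpy as np

def classify_direction(patch, flat_threshold=8.0):
    """Hypothetical image-characteristic detection on a 3x3 patch: decide
    for the horizontal and vertical directions whether the pixel levels
    are flat or edge-like."""
    horiz = abs(float(patch[1, 2]) - float(patch[1, 0]))
    vert = abs(float(patch[2, 1]) - float(patch[0, 1]))
    return ("edge" if horiz > flat_threshold else "flat",
            "edge" if vert > flat_threshold else "flat")

def deblur_pixel(patch, blur_param, coefficient_rom):
    """Product-sum calculation with coefficients addressed by the blur
    parameter and the detected characteristic."""
    coeffs = coefficient_rom[(blur_param,) + classify_direction(patch)]
    return float(np.sum(coeffs * patch))

# Build a stand-in coefficient ROM: one 3x3 kernel per (parameter, h-char, v-char).
rom = {}
for h in ("flat", "edge"):
    for v in ("flat", "edge"):
        kernel = np.full((3, 3), -0.05)
        kernel[1, 1] = 1.4
        rom[(1, h, v)] = kernel

patch = np.random.randint(0, 256, (3, 3)).astype(float)
print(deblur_pixel(patch, blur_param=1, coefficient_rom=rom))
```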
Abstract:
When an algorithm bay is connected to a signal processing apparatus according to a first connection mode, a selector of the algorithm bay selects and sets a first function provided by a first function provider as the signal processing function of the signal processing apparatus. According to a second connection mode, the selector selects and sets a second function provided by a second function provider as the signal processing function. Also, a first information provider of an algorithm bay may supply a signal indicating first information to a signal processor of a signal processing apparatus via a wired interface of the algorithm bay, a wired connection, and a wired interface of the signal processing apparatus. Similarly, a second information provider of the algorithm bay supplies a signal indicating second information for changing the signal processing function of the signal processor to the signal processor.
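A very small model of the selector behaviour, assuming the exposed signal processing function depends only on the detected connection mode; the AlgorithmBay class and the two stand-in functions are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlgorithmBay:
    """Hypothetical algorithm bay holding two provided functions; the
    selector picks one of them based on the connection mode."""
    first_function: Callable[[bytes], bytes]
    second_function: Callable[[bytes], bytes]

    def select(self, connection_mode: int) -> Callable[[bytes], bytes]:
        """Set the signal processing function according to the connection mode."""
        return self.first_function if connection_mode == 1 else self.second_function

bay = AlgorithmBay(first_function=lambda s: s.upper(),   # stand-ins for the two
                   second_function=lambda s: s[::-1])    # provided signal functions
process = bay.select(connection_mode=2)
print(process(b"signal"))
```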
Abstract:
A coefficient learning apparatus includes a regression coefficient calculation unit configured to obtain a tap from an image of a first signal; a regression prediction value calculation unit configured to perform a regression prediction computation; a discrimination information assigning unit configured to assign discrimination information to the pixel of interest; a discrimination coefficient calculation unit configured to obtain a tap from the image of the first signal; a discrimination prediction value calculation unit configured to perform a discrimination prediction computation; and a classification unit configured to classify each pixel of the image of the first signal into either a first discrimination class or a second discrimination class. The regression coefficient calculation unit calculates the regression coefficient using only the pixels classified into the first discrimination class, and further calculates the regression coefficient using only the pixels classified into the second discrimination class.
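A sketch of the class-wise regression learning, assuming least-squares fitting of one coefficient vector per discrimination class over the pixels assigned to that class; the tap size, the stand-in discrimination rule, and the synthetic data are assumptions:

```python
import numpy as np

def learn_regression_coefficients(taps, targets, class_labels):
    """Hypothetical two-class regression learning: fit one set of
    regression coefficients by least squares for each discrimination
    class, using only the pixels classified into that class."""
    coefficients = {}
    for cls in (0, 1):                       # first / second discrimination class
        mask = class_labels == cls
        X = taps[mask]                       # tap vectors from the first-signal image
        y = targets[mask]                    # corresponding target pixel values
        coefficients[cls], *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefficients

rng = np.random.default_rng(0)
taps = rng.random((200, 9))                  # stand-in tap data per pixel of interest
targets = taps @ rng.random(9)               # stand-in teacher pixel values
labels = (taps[:, 4] > 0.5).astype(int)      # stand-in discrimination result per pixel
print(learn_regression_coefficients(taps, targets, labels)[0].shape)
```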