Abstract:
An image signal generated by a CCD image sensor is processed by the block-generating section 28 provided in an image-signal processing section 25, whereby a class tap and a prediction tap are extracted. The class tap is output to an ADRC process section 29, and the prediction tap is output to an adaptation process section 31. The ADRC process section 29 performs an ADRC process on the input image signal, generating characteristic data. A classification process section 30 generates a class code corresponding to the characteristic data thus generated and supplies it to the adaptation process section 31. The adaptation process section 31 reads, from a coefficient memory 32, the set of prediction coefficients which corresponds to the class code. The set of prediction coefficients is applied to the prediction tap, thereby generating all color signals, i.e., R, G and B signals, at the positions of the pixels to be processed.
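The classification-adaptive pipeline described above (ADRC class code → coefficient lookup → linear prediction) can be sketched as follows. The tap size, the 1-bit requantization, and the coefficient-memory contents are illustrative assumptions, not the patent's exact parameters; a real system would use coefficients trained per class and per output color.

```python
import numpy as np

def adrc_class_code(class_tap, bits=1):
    # 1-bit ADRC: requantize each tap value relative to the tap's own
    # dynamic range, then pack the resulting bits into one class code.
    lo, hi = int(class_tap.min()), int(class_tap.max())
    dr = max(hi - lo, 1)                      # local dynamic range
    levels = ((class_tap - lo) * ((1 << bits) - 1)) // dr
    code = 0
    for q in levels.ravel():
        code = (code << bits) | int(q)
    return code

def adapt(prediction_tap, coefficient_memory, class_code):
    # Read the per-class coefficient set and form one linear prediction;
    # R, G and B would each use their own coefficient set.
    w = coefficient_memory[class_code]
    return float(np.dot(w, prediction_tap.ravel()))

tap = np.array([10, 200, 30, 180])        # hypothetical 4-pixel tap
code = adrc_class_code(tap)               # packs to class code 4 here
memory = {code: np.full(4, 0.25)}         # stand-in "trained" coefficients
predicted = adapt(tap, memory, code)      # 0.25 * sum(tap) = 105.0
```

Using the same pixels for the class tap and the prediction tap keeps the sketch short; in practice the two taps are usually different neighborhoods around the pixel to be processed.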
Abstract:
An image processing device, method, recording medium, and program where the device includes an image data continuity detector configured to detect continuity of image data made up of a plurality of pixels acquired by real world light signals being cast upon a plurality of detecting elements, and a real world estimating unit configured to estimate real world light signals by approximating image data with discontinuous functions.
Abstract:
An image processing device and method, where the device includes a data continuity detector configured to detect data continuity of image data made up of a plurality of pixels acquired by light signals of a real world being cast upon a plurality of detecting elements each having spatio-temporal integration effects, and a real world estimating unit configured to generate a gradient of pixel values of the plurality of pixels corresponding to a position in one dimensional direction of spatio-temporal directions as to pixels of interest within the image data.
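As a minimal illustration of generating a pixel-value gradient along a single spatio-temporal direction, a forward finite difference can stand in for the estimating unit's gradient generation; the axis convention and the difference scheme are assumptions for illustration.

```python
import numpy as np

def gradient_one_direction(image, axis):
    # Forward finite difference of pixel values along one
    # spatio-temporal direction (rows, columns, or frames).
    return np.diff(np.asarray(image, dtype=float), axis=axis)

row = np.array([[1.0, 3.0, 6.0]])
g = gradient_one_direction(row, axis=1)   # [[2.0, 3.0]]
```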
Abstract:
An image processing device for processing images containing background images and moving objects. A region specifying unit specifies a mixed region made up of a mixture of a foreground object component and a background object component, and a non-mixed region made up of either a foreground object component or a background object component, and outputs region information corresponding to the specifying results. A foreground/background separation unit separates the input image into foreground component images and background component images, corresponding to the region information. A separated image processing unit processes the foreground component images and background component images individually, corresponding to the results of separation.
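The separation step can be sketched with a simplified per-pixel model: in mixed pixels the observed value is taken as observed = alpha × background + foreground_component, so the foreground component is recovered by subtraction. The region label encoding, the function names, and the assumption that a background estimate and per-pixel mixture ratio are already available are all illustrative, not the patent's formulation.

```python
import numpy as np

# Hypothetical region labels: 0 = background, 1 = mixed, 2 = foreground.
def separate(frame, region, alpha, background):
    # Split a frame into foreground-component and background-component
    # images according to per-pixel region information.
    fg = np.zeros_like(frame, dtype=float)
    bg = np.zeros_like(frame, dtype=float)
    fg[region == 2] = frame[region == 2]            # pure foreground pixels
    bg[region == 0] = frame[region == 0]            # pure background pixels
    mixed = region == 1
    bg[mixed] = alpha[mixed] * background[mixed]    # attenuated background part
    fg[mixed] = frame[mixed] - bg[mixed]            # remainder is foreground
    return fg, bg

frame = np.array([100.0, 70.0, 40.0])
region = np.array([0, 1, 2])
alpha = np.array([1.0, 0.5, 0.0])
background = np.array([100.0, 100.0, 100.0])
fg, bg = separate(frame, region, alpha, background)  # fg=[0,20,40], bg=[100,50,0]
```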
Abstract:
An apparatus and method which take into consideration the real world where data was acquired, enabling processing results that are more accurate and more precise as to phenomena in the real world. A data continuity detecting unit detects the continuity of data of second signals having second dimensions that are fewer than the first dimensions of first signals, which are real world signals projected such that a part of the continuity of the real world signals is lost; the continuity to be detected corresponds to the lost continuity of the real world signals. An actual world estimating unit estimates a real world image by estimating the continuity of the real world image that has been lost, based on the continuity of the data detected by the data continuity detecting unit.
Abstract:
The present invention enables the high speed processing of a foreground component image and a background component image associated with images received on a network platform. A client computer outputs information specifying image data to a separation server. The separation server then obtains the specified image data from a storage server and outputs it to a motion detecting server to perform motion detection processing. Thereafter, the image data, a motion vector, and positional information are output to an area specifying server. The area specifying server generates area information corresponding to the image data and outputs the area information to a mixture ratio calculating server in addition to the image data, the motion vector, and the positional information. The mixture ratio calculating server then calculates a mixture ratio on the basis of the image data, the motion vector, the positional information, and the area information, whereby a foreground/background image separation server separates the foreground and background of the input image on the basis of this information.
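The mixture-ratio step at the heart of this pipeline can be illustrated with the simplified per-pixel model observed = alpha × background + (1 − alpha) × foreground. The actual patent estimates the ratio from motion across the mixed region; the closed-form solution below is an assumption chosen only to show what the mixture ratio calculating server computes.

```python
import numpy as np

def mixture_ratio(observed, background, foreground, eps=1e-6):
    # Solve observed = alpha*background + (1 - alpha)*foreground
    # for alpha per pixel, guarding against a vanishing denominator.
    denom = background - foreground
    safe = np.where(np.abs(denom) < eps, eps, denom)
    return np.clip((observed - foreground) / safe, 0.0, 1.0)

alpha = mixture_ratio(np.array([75.0]), np.array([100.0]), np.array([50.0]))
# a pixel halfway between foreground and background gives alpha = 0.5
```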