Abstract:
Radiometric calibration of an image capture device (e.g., a digital camera) using a single image is described. The single image may be a color image or a grayscale image. The calibration identifies and analyzes edge pixels of the image that correspond to an edge between two colors or grayscale levels of a scene. Intensity distributions of intensities measured from the single image are then analyzed. An inverse response function for the image capture device is determined based on the intensity distributions. For a color image, the radiometric calibration involves calculating an inverse response function that maps measured blended colors of edge pixels and the associated measured component colors into linear distributions. For a grayscale image, the radiometric calibration involves deriving an inverse response function that maps non-uniform histograms of measured intensities into uniform distributions of calibrated intensities.
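The grayscale branch of this calibration is closely related to histogram equalization: the normalized cumulative histogram of the measured intensities is, by construction, the mapping that turns their non-uniform distribution into a uniform one. The following sketch (Python/NumPy) illustrates that idea; the function name is hypothetical, and applying it to a whole image rather than only to the measured edge-pixel intensities is a simplifying assumption, not the method described above.

```python
import numpy as np

def inverse_response_from_histogram(gray_image, levels=256):
    """Illustrative sketch: estimate an inverse-response curve by mapping the
    histogram of measured intensities to a uniform distribution, i.e. by using
    the normalized cumulative histogram as the mapping (histogram equalization)."""
    hist, _ = np.histogram(gray_image.ravel(), bins=levels, range=(0, levels))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]              # normalize so calibrated intensities lie in [0, 1]
    return cdf                  # cdf[m] is the calibrated value for measured level m

# usage (uint8 image): calibrated = inverse_response_from_histogram(img)[img]
```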
Abstract:
A radiometric calibration system finds an inverse response function of a camera from a single digital image of a scene in which the actual colors of the scene are not known a priori. The system analyzes pixels of the image that correspond to an “edge” between two colors of the scene. These “edge” pixels represent a blended color formed from these two “component” colors, as measured by the camera. The system determines an inverse response function at least in part by: (a) finding suitable edge pixels; and (b) determining a function that maps the measured blended colors of edge pixels and their measured component colors into linear distributions. Reference data that includes predetermined inverse response functions of known cameras can be used in determining an inverse response function via Bayesian estimation.
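One concrete reading of the "linear distributions" criterion is geometric: after the correct inverse response is applied per channel, each edge pixel's blended color should lie on the line segment joining its two component colors in RGB space. The sketch below (Python/NumPy) scores candidate inverse responses by that collinearity error; the function names, the single-parameter gamma family, and the brute-force search are illustrative stand-ins for the Bayesian estimation mentioned above, not the system's actual procedure.

```python
import numpy as np

def collinearity_error(g, blends, comp1, comp2):
    """How far do inverse-mapped blended colors fall from the line joining their
    inverse-mapped component colors?  g is a candidate inverse response applied
    per channel; blends, comp1, comp2 are (N, 3) arrays of measured colors in [0, 1]."""
    M, C1, C2 = g(blends), g(comp1), g(comp2)
    d = C2 - C1
    t = np.sum((M - C1) * d, axis=1) / np.maximum(np.sum(d * d, axis=1), 1e-12)
    proj = C1 + t[:, None] * d                  # closest point on the line C1-C2
    return float(np.mean(np.linalg.norm(M - proj, axis=1)))

def fit_gamma(blends, comp1, comp2, gammas=np.linspace(0.3, 3.0, 55)):
    """Toy search over a gamma-parameterized family of candidate inverse responses."""
    errs = [collinearity_error(lambda x, k=k: np.power(x, k), blends, comp1, comp2)
            for k in gammas]
    return float(gammas[int(np.argmin(errs))])
```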
Abstract:
A method for modeling a time-variant appearance of a material is described. A sample analysis of a material sample is performed, wherein the sample analysis orders surface points of the material sample with respect to weathering from data captured at a single instant in time. An appearance synthesis using the sample analysis is performed, wherein the appearance synthesis generates a time-variant sequence of frames for weathering an object.
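As a heavily simplified illustration of the ordering step, one could project per-point appearance descriptors onto a single dominant axis and sort along it. The sketch below assumes NumPy, a hypothetical per-point feature matrix, and principal-component projection as a stand-in for the sample analysis described above; it is not the patented procedure.

```python
import numpy as np

def order_by_weathering(features):
    """Illustrative sketch: order surface points by apparent weathering degree
    from data captured at a single instant.  'features' is an (N, D) array of
    per-point appearance descriptors (e.g., mean reflectance).  Points are
    sorted by their projection onto the first principal axis; which end of the
    axis is "most weathered" is ambiguous and needs additional cues to resolve."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    score = X @ vt[0]
    return np.argsort(score)    # point indices ordered along the weathering axis
```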
Abstract:
Techniques are provided for modeling any one or more of mesostructure shadowing, masking, interreflection, and silhouettes on a surface, as well as subsurface scattering within a non-homogeneous volume. Such techniques include, at least, acquiring material parameters for a material sample, determining irradiance distribution values for the material sample, and synthesizing the material sample onto a mesh of an object. The synthesized object may then be rendered by any of several rendering techniques.
Abstract:
A “mesostructure renderer” uses pre-computed multi-dimensional “generalized displacement maps” (GDM) to provide real-time rendering of general non-height-field mesostructures on both open and closed surfaces of arbitrary geometry. In general, the GDM represents the distance to solid mesostructure along any ray cast from any point within a volumetric sample. Given the pre-computed GDM, the mesostructure renderer then computes mesostructure visibility jointly in object space and texture space, thereby enabling both control of texture distortion and efficient computation of texture coordinates and shadowing. Further, in one embodiment, the mesostructure renderer uses the GDM to render mesostructures with either local or global illumination as a per-pixel process using conventional computer graphics hardware to accelerate the real-time rendering of the mesostructures. Further acceleration of mesostructure rendering is achieved in another embodiment by automatically reducing the number of triangles in the rendering pipeline according to a user-specified threshold for acceptable texture distortion.
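To make the GDM concept concrete, the sketch below tabulates, for a binary mesostructure volume, the distance from every voxel to the first solid voxel along each of a small set of sampled ray directions. The voxel parameterization, brute-force ray marching, and function name are assumptions chosen for illustration (Python/NumPy); the renderer's actual pre-computation and table compression are not reproduced here.

```python
import numpy as np

def build_gdm(solid, directions, max_steps=64, step=1.0):
    """Illustrative sketch of a generalized-displacement-map table.
    solid:      (D, H, W) bool array, True where the mesostructure is solid.
    directions: list of 3-vectors; one distance slice is stored per direction.
    Returns gdm[k, z, y, x] = distance along directions[k] from voxel (z, y, x)
    to the first solid voxel, or max_steps*step if nothing is hit in range."""
    D, H, W = solid.shape
    zi, yi, xi = np.meshgrid(np.arange(D), np.arange(H), np.arange(W), indexing="ij")
    starts = np.stack([zi, yi, xi], axis=-1).astype(np.float64)     # (D, H, W, 3)
    bounds = np.array([D, H, W])
    gdm = np.full((len(directions), D, H, W), max_steps * step)
    for k, d in enumerate(directions):
        d = np.asarray(d, dtype=np.float64)
        d /= np.linalg.norm(d)
        hit = np.zeros(solid.shape, dtype=bool)
        for s in range(1, max_steps + 1):
            p = np.rint(starts + s * step * d).astype(int)          # marched positions
            inside = ((p >= 0) & (p < bounds)).all(axis=-1)
            q = np.clip(p, 0, bounds - 1)
            newly = inside & solid[q[..., 0], q[..., 1], q[..., 2]] & ~hit
            gdm[k][newly] = s * step                                # record first hit
            hit |= newly
    return gdm
```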