Abstract:
Removal of the effects of dust or other impurities on image data is described. In one example, a model of artifact formation from sensor dust is determined. From the model of artifact formation, contextual information in the image and a color consistency constraint may be applied to the dust-affected regions to remove the dust artifacts. Artifacts may also be removed from multiple images captured with the same or different cameras or camera settings.
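As a rough illustration of the ideas above (not the patented method itself), the sketch below corrects a dust artifact modeled as multiplicative attenuation, using a ring of surrounding pixels as the contextual information and a per-channel agreement test as the color consistency constraint. The attenuation model, the ring width, and the threshold are all assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def remove_dust(image, dust_mask):
    """Undo a dust artifact modeled as multiplicative attenuation.
    image: float (H, W, 3) in [0, 1]; dust_mask: bool (H, W).
    The attenuation model, context ring, and consistency threshold
    are illustrative assumptions, not the patented method."""
    # Contextual information: clean pixels just outside the dust region.
    ring = binary_dilation(dust_mask, iterations=3) & ~dust_mask
    alphas = []
    for c in range(3):
        inside = image[..., c][dust_mask].mean()
        outside = image[..., c][ring].mean()
        alphas.append(inside / max(outside, 1e-6))
    # Color consistency constraint: sensor dust dims all channels about
    # equally, so the per-channel attenuation estimates should agree.
    if np.std(alphas) > 0.1:              # threshold is an assumption
        return image.copy()               # likely a scene feature, not dust
    alpha = float(np.mean(alphas))
    corrected = image.copy()
    corrected[dust_mask] = np.clip(image[dust_mask] / alpha, 0.0, 1.0)
    return corrected
```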
Abstract:
A system and process for determining the vignetting function of an image and using the function to correct for the vignetting is presented. The image can be any arbitrary image and no other images are required. The system and process are designed to handle both textured and untextured segments in order to maximize the use of available information. To extract vignetting information from an image, segmentation techniques are employed that locate image segments with reliable data for vignetting estimation. Within each image segment, the system and process capitalize on frequency characteristics and physical properties of vignetting to distinguish it from other sources of intensity variation. The vignetting data acquired from the segments are weighted according to a reliability measure to promote robustness in estimation.
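A minimal sketch of the single-image estimation step, assuming an even-polynomial radial falloff model and a per-pixel reliability weight already derived from the segmentation; the model form and the weighting scheme are illustrative, not the patented procedure:

```python
import numpy as np

def fit_vignetting(image_gray, weights):
    """Fit a radial vignetting falloff V(r) = 1 + a r^2 + b r^4 + c r^6
    by weighted least squares, then divide it out.
    image_gray, weights: float (H, W); weights encode segment reliability.
    The even-polynomial model and the use of the center pixel as the
    unvignetted reference level are illustrative simplifications."""
    h, w = image_gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    r /= r.max()                           # normalized radius in [0, 1]
    # Design matrix for the radial falloff relative to the image center.
    A = np.stack([r**2, r**4, r**6], axis=-1).reshape(-1, 3)
    center_level = image_gray[int(cy), int(cx)]
    b = (image_gray / max(center_level, 1e-6) - 1.0).ravel()
    sw = np.sqrt(weights.ravel())          # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    V = 1.0 + (A @ coef).reshape(h, w)
    return image_gray / np.clip(V, 1e-3, None), coef
```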
Abstract:
A method, device and system are provided for global illumination of a scene. For example, global illumination may be provided in a rendered 3-dimensional image that may contain objects and/or light sources. Radiance functions or visibility functions may further be represented by scaling of spherical harmonics functions in the spherical harmonics domain. For example, scaling of the spherical harmonics coefficients corresponding to a spherical function may be performed using a spherical harmonics scaling transformation matrix derived from an angular scaling function.
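The following sketch shows the general shape of the pipeline: a spherical function is projected onto low-order spherical harmonics by Monte Carlo integration, after which scaling reduces to multiplying the coefficient vector by a precomputed transformation matrix. Constructing that matrix from the angular scaling function is the substance of the described method and is represented here only by a placeholder.

```python
import numpy as np

# Real SH basis for bands 0 and 1 (standard normalization constants).
def sh_basis(dirs):
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0, c1 = 0.282095, 0.488603            # Y_0^0 and Y_1^{-1,0,1}
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=-1)

def project_to_sh(f, n_samples=4096, seed=0):
    """Monte Carlo projection of a spherical function f onto SH:
    c_i = integral of f * Y_i over the sphere (uniform sampling)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform on the sphere
    Y = sh_basis(v)
    return 4.0 * np.pi * (f(v)[:, None] * Y).mean(axis=0)

# A radiance or visibility function is then 'scaled' by applying a
# precomputed matrix to its coefficient vector: c_scaled = M @ c.
coeffs = project_to_sh(lambda v: np.maximum(v[:, 2], 0.0))  # clamped cosine
M = np.eye(4)       # placeholder; the real M comes from the scaling function
scaled = M @ coeffs
```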
Abstract:
A shell radiance texture function (SRTF) is defined to record an outgoing radiance from a base volume of an object to be rendered. Using the SRTF, radiance values are precomputed and stored for the base volume. The object is rendered using the precomputed radiance values.
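As a hedged sketch of the precompute-then-lookup pattern (the indexing scheme, resolutions, and radiance function below are assumptions, not the SRTF construction itself):

```python
import numpy as np

class ShellRadianceTexture:
    """Toy precompute/lookup in the spirit of an SRTF: store outgoing
    radiance from a base volume indexed by (texel, view direction).
    Grid resolutions and the radiance function are assumptions."""
    def __init__(self, radiance_fn, n_texels=64, n_views=32):
        # Precompute: sample outgoing radiance over texel x view bins.
        self.n_texels, self.n_views = n_texels, n_views
        self.table = np.array([[radiance_fn(t / n_texels, v / n_views)
                                for v in range(n_views)]
                               for t in range(n_texels)])

    def lookup(self, texel_u, view_u):
        # Render-time lookup: return the nearest precomputed sample.
        t = min(int(texel_u * self.n_texels), self.n_texels - 1)
        v = min(int(view_u * self.n_views), self.n_views - 1)
        return self.table[t, v]
```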
Abstract:
The present surface detail rendering technique provides an efficient way to apply a mesostructure to the macrostructure of an object while minimizing the amount of memory required for pre-computed data. A leap texture is pre-computed for a mesostructure by classifying each voxel in the mesostructure geometry and assigning a value in the leap texture based upon the classification. The value in the leap texture represents a distance to jump along a ray cast in any view direction when a model is decorated with the mesostructure geometry.
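A minimal sketch of marching a ray with such a texture, assuming a cubic volume in unit coordinates where a stored value of zero marks solid mesostructure and a nonzero value gives a safe jump distance in voxels; this layout is an assumption:

```python
import numpy as np

def march_with_leaps(leap_tex, origin, direction, max_steps=64):
    """March a ray through a 3-D leap texture: at each sample, jump
    ahead by the stored safe distance until a solid voxel (leap 0)
    is reached. leap_tex: (res, res, res) array of jump distances."""
    res = leap_tex.shape[0]
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        if np.any(p < 0.0) or np.any(p >= 1.0):
            return None                    # left the volume: no hit
        idx = tuple((p * res).astype(int))
        leap = leap_tex[idx]
        if leap == 0:
            return p                       # reached mesostructure surface
        p = p + d * (leap / res)           # safe jump, in volume units
    return None
```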
Abstract:
A “mesostructure renderer” uses pre-computed multi-dimensional “generalized displacement maps” (GDM) to provide real-time rendering of general non-height-field mesostructures on both open and closed surfaces of arbitrary geometry. In general, the GDM represents the distance to solid mesostructure along any ray cast from any point within a volumetric sample. Given the pre-computed GDM, the mesostructure renderer then computes mesostructure visibility jointly in object space and texture space, thereby enabling both control of texture distortion and efficient computation of texture coordinates and shadowing. Further, in one embodiment, the mesostructure renderer uses the GDM to render mesostructures with either local or global illumination as a per-pixel process using conventional computer graphics hardware to accelerate the real-time rendering of the mesostructures. Further acceleration of mesostructure rendering is achieved in another embodiment by automatically reducing the number of triangles in the rendering pipeline according to a user-specified threshold for acceptable texture distortion.
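A hedged sketch of the query side only, assuming the GDM is stored as distances indexed by voxel position and a discrete set of sampled ray directions; the nearest-direction binning is an assumption, and the object-space/texture-space machinery is omitted:

```python
import numpy as np

def gdm_intersect(gdm, dir_bins, origin, direction):
    """Query a precomputed generalized displacement map.
    gdm: (res, res, res, n_bins) distances to solid mesostructure,
    np.inf where a ray misses; dir_bins: (n_bins, 3) unit directions.
    Returns the intersection point, or None on a miss."""
    res = gdm.shape[0]
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    b = int(np.argmax(dir_bins @ d))       # nearest sampled ray direction
    idx = tuple(np.clip((np.asarray(origin) * res).astype(int), 0, res - 1))
    dist = gdm[idx + (b,)]
    return None if not np.isfinite(dist) else np.asarray(origin) + dist * d
```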
Abstract:
Pre-computed shadow fields are described. In one aspect, shadow fields for multiple entities are pre-computed. The shadow fields are pre-computed independent of scene configuration. The multiple entities include at least one occluding object and at least one light source. A pre-computed shadow field for a light source indicates radiance from the light source. A pre-computed shadow field for an occluding object indicates occlusion of radiance from the at least one light source.
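A simplified sketch of combining pre-computed fields at a receiver point; here each field is a callable returning a spherical-harmonics coefficient vector, and component-wise products stand in for the SH triple products used in practice, which is a deliberate simplification:

```python
import numpy as np

def shade_point(source_fields, occluder_fields, point):
    """Combine pre-computed shadow fields at a receiver point.
    source_fields: callables point -> SH radiance coefficients;
    occluder_fields: callables point -> SH occlusion coefficients
    (1 = unoccluded). The 16-coefficient size and the component-wise
    attenuation are illustrative assumptions."""
    radiance = np.zeros(16)
    for src in source_fields:
        L = src(point)                     # sample the source radiance field
        for occ in occluder_fields:
            L = L * occ(point)             # attenuate by each occlusion field
        radiance += L
    return radiance
```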
Abstract:
A novel method for synchronizing the lips of a sketched face to an input voice. The approach of the lip synchronization system and method is to re-use training video as much as possible when the input voice is similar to the training voice sequences. Initially, face sequences are clustered from video segments; then, using sub-sequence Hidden Markov Models, a correlation between speech signals and face shape sequences is built. This re-use of video decreases the discontinuity between two consecutive output faces and yields accurate and realistic synthesized animations. The lip synchronization system and method can synthesize faces from input audio in real time without noticeable delay. Since acoustic feature data calculated from the audio directly drives the system, without considering its phonemic representation, the method can adapt to any kind of voice, language or sound.
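As a rough stand-in for the sub-sequence Hidden Markov Models (nearest-neighbor matching over fixed-length acoustic sub-sequences; the window length and feature layout are assumptions):

```python
import numpy as np

def synthesize_faces(audio_feats, train_feats, train_faces, win=5):
    """Map acoustic feature frames to face-shape frames by matching
    fixed-length sub-sequences against training data.
    audio_feats, train_feats: (T, F) per-frame acoustic features;
    train_faces: face shapes aligned with train_feats. Nearest-neighbor
    matching stands in for the sub-sequence HMMs described above."""
    out = []
    n = len(train_feats) - win
    for t in range(len(audio_feats) - win):
        q = audio_feats[t:t + win].ravel()
        # Find the training sub-sequence with the closest acoustics.
        dists = [np.linalg.norm(q - train_feats[s:s + win].ravel())
                 for s in range(n)]
        s = int(np.argmin(dists))
        out.append(train_faces[s + win // 2])  # re-use the matched frame
    return out
```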