Single-image vignetting correction
    1.
    Invention Application
    Single-image vignetting correction (Granted)

    Publication Number: US20070146506A1

    Publication Date: 2007-06-28

    Application Number: US11384063

    Filing Date: 2006-03-17

    IPC Classification: H04N5/217

    Abstract: A system and process for determining the vignetting function of an image and using the function to correct for the vignetting is presented. The image can be any arbitrary image and no other images are required. The system and process is designed to handle both textured and untextured segments in order to maximize the use of available information. To extract vignetting information from an image, segmentation techniques are employed that locate image segments with reliable data for vignetting estimation. Within each image segment, the system and process capitalizes on frequency characteristics and physical properties of vignetting to distinguish it from other sources of intensity variation. The vignetting data acquired from segments are weighted according to a reliability measure to promote robustness in estimation.
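
    The following is a minimal Python sketch of the correction idea only: it assumes vignetting can be modeled as an even polynomial in the distance from the image center and simply divides the fitted falloff out. The segmentation, frequency analysis, and reliability weighting described above are not shown, and the model, function names, and synthetic data are illustrative assumptions, not taken from the patent.

```python
import numpy as np


def fit_radial_falloff(image, degree=6):
    """Fit an even polynomial in radius to the brightness falloff.

    A crude stand-in for a vignetting function: intensity is assumed to vary
    only with distance from the image center, one of the physical properties
    of vignetting mentioned in the abstract.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    r = r / r.max()                              # normalize radius to [0, 1]
    powers = np.arange(0, degree + 1, 2)         # even powers only
    A = np.vstack([r.ravel() ** p for p in powers]).T
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    falloff = (A @ coeffs).reshape(h, w)
    return falloff / falloff.max()               # normalized so the brightest point is 1.0


def correct_vignetting(image):
    """Divide the estimated falloff out of the image."""
    v = fit_radial_falloff(image)
    return image / np.clip(v, 1e-3, None)


if __name__ == "__main__":
    # Synthetic test: a flat scene darkened by a smooth radial falloff.
    h, w = 120, 160
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    vignetted = 0.8 * (1.0 - 0.5 * r ** 2)
    corrected = correct_vignetting(vignetted)
    print("residual variation:", float(corrected.max() - corrected.min()))
```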


    System and process for optimal texture map reconstruction from multiple views
    2.
    Invention Application
    System and process for optimal texture map reconstruction from multiple views (Expired)

    Publication Number: US20050093877A1

    Publication Date: 2005-05-05

    Application Number: US10983193

    Filing Date: 2004-11-05

    IPC Classification: G09G5/00

    CPC Classification: G06T11/001

    Abstract: A system and process for reconstructing optimal texture maps from multiple views of a scene is described. In essence, this reconstruction is based on the optimal synthesis of textures from multiple sources. This is generally accomplished using basic image processing theory to derive the correct weights for blending the multiple views. Namely, the steps of reconstructing, warping, prefiltering, and resampling are followed in order to warp reference textures to a desired location, and to compute spatially-variant weights for optimal blending. These weights take into consideration the anisotropy in the texture projection and changes in sampling frequency due to foreshortening. The weights are combined and the computation of the optimal texture is treated as a restoration problem, which involves solving a linear system of equations. This approach can be incorporated in a variety of applications, such as texturing of 3D models, analysis by synthesis methods, super-resolution techniques, and view-dependent texture mapping.
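
    As a rough sketch of the blending step only, the Python below assumes the reference textures have already been warped into a common texture-map parameterization and that per-pixel weights (for example, reflecting foreshortening) are supplied; the restoration step that solves a linear system is omitted, and all names and data here are illustrative.

```python
import numpy as np


def blend_warped_textures(warped, weights, eps=1e-8):
    """Blend textures already warped to a common texture-map parameterization.

    warped  : (n_views, H, W, 3) array of warped reference textures
    weights : (n_views, H, W) spatially-variant weights, e.g. larger where a
              view sees the surface more frontally (less foreshortening)

    Returns the per-texel weighted average; texels with no support stay 0.
    """
    w = weights[..., None]                          # broadcast over color channels
    total = w.sum(axis=0)
    return (w * warped).sum(axis=0) / np.clip(total, eps, None)


if __name__ == "__main__":
    # Two hypothetical 4x4 views of the same surface patch with different weights.
    rng = np.random.default_rng(0)
    warped = rng.random((2, 4, 4, 3))
    weights = np.stack([np.ones((4, 4)), 0.5 * np.ones((4, 4))])
    texel_map = blend_warped_textures(warped, weights)
    print(texel_map.shape)                          # (4, 4, 3)
```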


    Symmetric stereo model for handling occlusion
    3.
    Invention Application
    Symmetric stereo model for handling occlusion (Expired)

    Publication Number: US20070122028A1

    Publication Date: 2007-05-31

    Application Number: US11289907

    Filing Date: 2005-11-30

    IPC Classification: G06K9/00

    CPC Classification: G06K9/32

    Abstract: The present symmetric stereo matching technique provides a method for iteratively estimating a minimum energy for occlusion and disparity using belief propagation. The minimum energy is based on an energy minimization framework in which a visibility constraint is embedded. By embedding the visibility constraint, the present symmetric stereo matching technique treats both images equally, instead of treating one as a reference image. The visibility constraint ensures that occlusion in one view and the disparity in another view are consistent.
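
    The sketch below illustrates only the visibility constraint that couples the two views (occlusion in one view must be consistent with disparity in the other), implemented as a simple cross-checking pass over integer disparity maps; the belief-propagation optimization itself is not shown, and the one-pixel tolerance is an arbitrary choice.

```python
import numpy as np


def infer_occlusion(disp_left, disp_right, tol=1):
    """Cross-check two integer disparity maps.

    A left-view pixel is marked occluded when the right-view pixel it maps to
    reports a disparity that disagrees by more than `tol`, or when it maps
    outside the right image. This is the consistency that the visibility
    constraint enforces between occlusion in one view and disparity in the other.
    """
    h, w = disp_left.shape
    occluded = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - disp_left[y, x]             # matching column in the right view
            if xr < 0 or xr >= w or abs(disp_right[y, xr] - disp_left[y, x]) > tol:
                occluded[y, x] = True
    return occluded


if __name__ == "__main__":
    disp_left = np.full((2, 8), 2)
    disp_right = np.full((2, 8), 2)
    disp_right[:, 3] = 5                         # an inconsistent column
    print(infer_occlusion(disp_left, disp_right).astype(int))
```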


    Color segmentation-based stereo 3D reconstruction system and process
    5.
    Invention Application
    Color segmentation-based stereo 3D reconstruction system and process (Granted)

    Publication Number: US20050286757A1

    Publication Date: 2005-12-29

    Application Number: US10879327

    Filing Date: 2004-06-28

    IPC Classification: G06K9/00 G06T7/00

    CPC Classification: G06K9/20 G06K2209/40 G06T7/55

    Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, which is based on a color segmentation-based approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per-pixel depth map if the reconstruction application calls for it.
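
    A minimal sketch of the DSD initialization described above, assuming a color segmentation (integer labels) and a per-pixel matching-cost volume are already available; the exponential scoring and the toy data are illustrative, and the segment refinement and disparity-smoothing stages are omitted.

```python
import numpy as np


def initial_dsd(segment_labels, cost_volume):
    """Initial disparity space distribution (DSD) for each segment.

    segment_labels : (H, W) integer segment id per pixel
    cost_volume    : (H, W, D) matching cost per pixel and disparity hypothesis

    Costs are averaged over each segment's pixels (all pixels in a segment are
    assumed to share one disparity) and converted to a distribution over the
    D disparity hypotheses.
    """
    n_segments = int(segment_labels.max()) + 1
    dsd = np.zeros((n_segments, cost_volume.shape[2]))
    for s in range(n_segments):
        mean_cost = cost_volume[segment_labels == s].mean(axis=0)  # (D,)
        score = np.exp(-mean_cost)                                 # lower cost -> higher score
        dsd[s] = score / score.sum()
    return dsd


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = np.repeat(np.arange(3), 20).reshape(6, 10)   # three toy segments
    costs = rng.random((6, 10, 5))                        # five disparity hypotheses
    print(initial_dsd(labels, costs).round(2))
```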


    Self-calibration for a catadioptric camera

    Publication Number: US20050099502A1

    Publication Date: 2005-05-12

    Application Number: US11015828

    Filing Date: 2004-12-15

    Applicant: Sing Kang

    Inventor: Sing Kang

    CPC Classification: H04N5/2628 G06T5/006 G06T7/80

    Abstract: A method and a system for self-calibrating a wide field-of-view camera (such as a catadioptric camera) using a sequence of omni-directional images of a scene obtained from the camera. The present invention uses the consistency of pairwise features tracked across at least a portion of the image collection and uses these tracked features to determine unknown calibration parameters based on the characteristics of catadioptric imaging. More specifically, the self-calibration method of the present invention generates a sequence of omni-directional images representing a scene and tracks features across the image sequence. An objective function is defined in terms of the tracked features and an error metric (an image-based error metric in a preferred embodiment). The catadioptric imaging characteristics are defined by calibration parameters, and determination of optimal calibration parameters is accomplished by minimizing the objective function using an optimizing technique.
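
    The sketch below shows only the outer structure of such a self-calibration: an objective defined over tracked features is minimized over a parameter vector with a generic optimizer. The two-parameter "calibration" and the toy image-based error used here are placeholders, not the patent's catadioptric imaging model.

```python
import numpy as np
from scipy.optimize import minimize


def image_error(params, tracks):
    """Toy image-based error for tracked features.

    `params` is a 2-vector (a hypothetical image-center offset); the penalty
    says each tracked feature should keep a constant radius from that center
    across the sequence. A real catadioptric model would use more parameters
    and a proper projection model.
    """
    center = np.asarray(params)
    residuals = []
    for track in tracks:                         # track: (n_frames, 2) positions
        radii = np.linalg.norm(track - center, axis=1)
        residuals.append(radii - radii.mean())
    return float(np.sum(np.concatenate(residuals) ** 2))


def self_calibrate(tracks, initial_params):
    """Minimize the feature-consistency objective over the calibration parameters."""
    result = minimize(image_error, initial_params, args=(tracks,), method="Nelder-Mead")
    return result.x


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_center = np.array([5.0, -3.0])
    tracks = []
    for _ in range(10):                          # synthetic circular feature tracks
        angles = rng.uniform(0.0, 2.0 * np.pi, size=8)
        radius = rng.uniform(50.0, 100.0)
        tracks.append(true_center + radius * np.column_stack([np.cos(angles), np.sin(angles)]))
    print(self_calibrate(tracks, np.zeros(2)))   # should move toward (5, -3)
```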

    Object matting using flash and no-flash images
    7.
    Invention Application
    Object matting using flash and no-flash images (Granted)

    Publication Number: US20070263119A1

    Publication Date: 2007-11-15

    Application Number: US11434567

    Filing Date: 2006-05-15

    IPC Classification: H04N5/222

    CPC Classification: H04N5/275 H04N5/2354

    Abstract: Foreground object matting uses flash/no-flash image pairs to obtain a flash-only image. A trimap is obtained from the flash-only image. A joint Bayesian algorithm uses the flash-only image, the trimap, and either the image of the scene taken without the flash or the image of the scene taken with the flash to generate a high-quality matte that can be used to extract the foreground from the background.
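
    A small sketch of the two preprocessing steps named above: forming a flash-only image as the (clipped) difference of the pair, and deriving a crude trimap from it by thresholding. The thresholds and data are arbitrary illustrations, and the joint Bayesian matting step is not shown.

```python
import numpy as np


def flash_only(flash_img, noflash_img):
    """Clipped difference of the flash and no-flash exposures: ambient light
    cancels, leaving roughly the flash contribution, which falls off quickly
    with distance and therefore highlights the foreground object."""
    return np.clip(flash_img.astype(float) - noflash_img.astype(float), 0.0, None)


def trimap_from_flash_only(fo, lo=10.0, hi=60.0):
    """Crude trimap: bright flash-only pixels become foreground, dark pixels
    become background, and anything in between stays unknown for the matting
    stage to resolve. The thresholds are illustrative only."""
    gray = fo.mean(axis=-1) if fo.ndim == 3 else fo
    trimap = np.full(gray.shape, 128, dtype=np.uint8)   # 128 = unknown
    trimap[gray >= hi] = 255                            # foreground
    trimap[gray <= lo] = 0                              # background
    return trimap


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    noflash = rng.integers(0, 100, size=(4, 4, 3)).astype(float)
    flash = noflash.copy()
    flash[1:3, 1:3] += 90.0            # the flash mainly brightens the nearby object
    print(trimap_from_flash_only(flash_only(flash, noflash)))
```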


    Stereoscopic image display
    8.
    Invention Application

    Publication Number: US20060038880A1

    Publication Date: 2006-02-23

    Application Number: US10922769

    Filing Date: 2004-08-19

    IPC Classification: H04N13/04 H04N15/00

    Abstract: Stereoscopic image display is described. In an embodiment, a location of the eye pupils of a viewer is determined and tracked. An image is displayed within a first focus for viewing with the left eye of the viewer, and the image is displayed within a second focus for viewing with the right eye of the viewer. A positional change of the eye pupils is tracked and a sequential image that corresponds to the positional change of the eye pupils is generated for stereoscopic viewing. In another embodiment, an image is displayed for stereoscopic viewing and a head position of a viewer relative to a center of the displayed image is determined. A positional change of the viewer's head is tracked, and a sequential image that corresponds to the positional change of the viewer's head is generated for stereoscopic viewing.
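
    A schematic sketch of the tracking-to-rendering loop implied above: tracked pupil positions are mapped to the virtual camera positions used to render the next stereo pair. The coordinate convention, fixed viewing depth, and pass-through mapping are assumptions for illustration; real display-to-eye calibration and the renderer itself are omitted.

```python
from dataclasses import dataclass


@dataclass
class StereoViews:
    left_camera: tuple    # virtual camera position used to render the left-eye image
    right_camera: tuple   # virtual camera position used to render the right-eye image


def views_for_pupils(left_pupil, right_pupil, viewing_depth=0.6):
    """Map tracked pupil positions (display coordinates, meters) to the virtual
    cameras for the next rendered stereo pair. The pass-through mapping and the
    fixed viewing depth are placeholders for a calibrated display-to-eye model."""
    return StereoViews(left_camera=(left_pupil[0], left_pupil[1], viewing_depth),
                       right_camera=(right_pupil[0], right_pupil[1], viewing_depth))


if __name__ == "__main__":
    # Simulated tracker output over two frames: the viewer shifts 2 cm to the
    # right, so the next image pair is generated for the new eye positions.
    frames = [((-0.032, 0.0), (0.032, 0.0)),
              ((-0.012, 0.0), (0.052, 0.0))]
    for left, right in frames:
        print(views_for_pupils(left, right))
```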

    System and process for compressing and decompressing multiple, layered, video streams of a scene captured from different viewpoints forming a grid using spatial and temporal encoding
    9.
    Invention Application
    System and process for compressing and decompressing multiple, layered, video streams of a scene captured from different viewpoints forming a grid using spatial and temporal encoding (Granted)

    Publication Number: US20060031915A1

    Publication Date: 2006-02-09

    Application Number: US11097533

    Filing Date: 2005-03-31

    Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints that form a grid of viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
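
    The sketch below encodes only the mode-selection logic stated in the abstract (keyframe main layers: inter-frame coding; non-keyframe main layers: spatial prediction; boundary layers: intra-frame coding); the actual codecs, the 2x2 grid, and the choice of camera 0 as the keyframe view are placeholders.

```python
from enum import Enum


class Mode(Enum):
    INTER_FRAME = "temporal prediction (keyframe main layers)"
    SPATIAL = "spatial prediction from a keyframe in the same set (non-keyframe main layers)"
    INTRA_FRAME = "coded independently (boundary layers)"


def choose_mode(layer, is_keyframe):
    """Select the compression mode for one layer of one frame, following the
    scheme in the abstract: boundary layers are intra-coded, keyframe main
    layers use inter-frame coding, and other main layers use spatial prediction."""
    if layer == "boundary":
        return Mode.INTRA_FRAME
    return Mode.INTER_FRAME if is_keyframe else Mode.SPATIAL


if __name__ == "__main__":
    keyframes = {0}                    # e.g. camera 0 designated the keyframe view
    for camera in range(4):            # a contemporaneous set from a 2x2 camera grid
        for layer in ("main", "boundary"):
            print(camera, layer, "->", choose_mode(layer, camera in keyframes).value)
```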


    Facial image processing
    10.
    Invention Application
    Facial image processing (Granted)

    Publication Number: US20060015308A1

    Publication Date: 2006-01-19

    Application Number: US11218164

    Filing Date: 2005-09-01

    IPC Classification: G06F17/10

    Abstract: In the described embodiment, methods and systems for processing facial image data for use in animation are described. In one embodiment, a system is provided that illuminates a face with illumination that is sufficient to enable the simultaneous capture of both structure data, e.g., a range or depth map, and reflectance properties, e.g., the diffuse reflectance of a subject's face. This captured information can then be used for various facial animation operations, including expression recognition and expression transformation.
