Bayesian demosaicing using a two-color image
    41.
    Invention Grant
    Bayesian demosaicing using a two-color image (In Force)

    Publication No.: US07706609B2

    Publication Date: 2010-04-27

    Application No.: US11343581

    Filing Date: 2006-01-30

    CPC classification number: H04N9/646 H04N1/58 H04N9/045

    Abstract: A Bayesian two-color image demosaicer and method for processing a digital color image to demosaic the image in such a way as to reduce image artifacts. The method and system are an improvement on and an enhancement to previous demosaicing techniques. A preliminary demosaicing pass is performed on the image to assign each pixel a fully specified RGB triple color value. The final color value of each pixel in the processed image is restricted to be a linear combination of two colors. The fully specified RGB triple color values of the pixels in an image are used to find two clusters representing the two favored colors. The contribution of each of these two favored colors to the final color value is then determined. The method and system can also process multiple images to improve the demosaicing results. When using multiple images, sampling can be performed at a finer resolution, known as super-resolution.
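    The two-color constraint described in the abstract can be sketched as follows: cluster a pixel's neighborhood of preliminary RGB values into two favored colors, then project the pixel onto the segment between them so the final value is a linear combination of the two. This is an illustrative sketch only (a tiny 2-means stands in for the patent's Bayesian clustering; function names are the author's own):

    ```python
    import numpy as np

    def two_favored_colors(rgb_samples, iters=10):
        """Cluster a neighborhood of preliminary RGB triples into two clusters
        with a minimal 2-means; the cluster means are the two favored colors."""
        c1 = rgb_samples[0].astype(float)
        c2 = rgb_samples[-1].astype(float)
        for _ in range(iters):
            d1 = np.linalg.norm(rgb_samples - c1, axis=1)
            d2 = np.linalg.norm(rgb_samples - c2, axis=1)
            mask = d1 <= d2
            if mask.any():
                c1 = rgb_samples[mask].mean(axis=0)
            if (~mask).any():
                c2 = rgb_samples[~mask].mean(axis=0)
        return c1, c2

    def two_color_value(pixel, c1, c2):
        """Restrict the final value to a*c1 + (1-a)*c2 by projecting the
        preliminary pixel value onto the segment between the favored colors."""
        d = c1 - c2
        denom = float(d @ d)
        if denom == 0.0:
            return c1
        a = np.clip((pixel - c2) @ d / denom, 0.0, 1.0)
        return a * c1 + (1.0 - a) * c2
    ```

    The clamp on the combination weight is what suppresses the out-of-gamut color bleeding that causes demosaicing artifacts along edges.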


    Automatic removal of purple fringing from images
    42.
    Invention Grant
    Automatic removal of purple fringing from images (In Force)

    Publication No.: US07577292B2

    Publication Date: 2009-08-18

    Application No.: US11322736

    Filing Date: 2005-12-30

    Applicant: Sing Bing Kang

    Inventor: Sing Bing Kang

    CPC classification number: H04N1/58 H04N1/62

    Abstract: An automatic purple-fringing removal system and method for automatically eliminating purple-fringed regions from high-resolution images. The technique is based on the observations that purple-fringed regions are often adjacent to near-saturated regions, and that purple-fringed regions are regions in which the blue and red color intensities are substantially greater than the green color intensity. The system and method implement these two observations by automatically detecting a purple-fringed region in an image and then automatically correcting it. Automatic detection is achieved by finding near-saturated regions and candidate regions, and then defining a purple-fringed region as a candidate region adjacent to a near-saturated region. Automatic correction of a purple-fringed region is performed by replacing color pixels in the region with at least some fully monochrome pixels, using a feathering process, a monochrome averaging process, or by setting the red and blue intensity values from the green intensity value.
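    The detection and the simplest of the three correction options (setting red and blue from green) can be sketched directly from the two observations. The thresholds and one-pixel adjacency test here are illustrative assumptions; feathering and monochrome averaging are omitted:

    ```python
    import numpy as np

    def remove_purple_fringing(img, sat_thresh=230, margin=25):
        """img: H x W x 3 uint8 RGB. Candidate pixels have blue and red
        substantially above green; a candidate counts as purple fringing
        when it touches a near-saturated pixel. Detected pixels are made
        monochrome by copying the green value into red and blue."""
        img = img.astype(int)
        near_sat = (img >= sat_thresh).all(axis=2)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        candidate = (b - g > margin) & (r - g > margin)
        # "adjacent to a near-saturated region": dilate near_sat by one pixel
        pad = np.pad(near_sat, 1)
        adj = np.zeros_like(near_sat)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                adj |= pad[1 + dy:pad.shape[0] - 1 + dy,
                           1 + dx:pad.shape[1] - 1 + dx]
        fringe = candidate & adj
        out = img.copy()
        out[fringe, 0] = out[fringe, 1]
        out[fringe, 2] = out[fringe, 1]
        return out.astype(np.uint8)
    ```

    A purple pixel far from any blown-out highlight is left untouched, which is what distinguishes fringing removal from naive desaturation of all purple content.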


    Simultaneous optical flow estimation and image segmentation
    43.
    Invention Grant
    Simultaneous optical flow estimation and image segmentation (In Force)

    Publication No.: US07522749B2

    Publication Date: 2009-04-21

    Application No.: US11193273

    Filing Date: 2005-07-30

    CPC classification number: G06K9/34 G06K9/38 G06K9/4652 G06K9/6219

    Abstract: A technique for simultaneously estimating the optical flow between images of a scene and a segmentation of those images is presented. First, an initial segmentation of the images is established, along with an initial optical flow estimate for each segment of each image with respect to its neighboring image or images. A refined optical flow estimate is then computed for each segment of each image from that image's initial segmentation and the initial optical flow of its segments. Next, the segmentation of each image is refined using the last-computed optical flow estimates for its segments. This process can continue iteratively, further refining the optical flow estimates using the last-computed segmentations and then further refining the segmentations using the last-computed optical flow estimates, until a prescribed number of iterations has been completed.
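    The alternating refinement has a simple control structure, sketched below. All four callables are hypothetical stand-ins for the patent's actual initialization and refinement steps; only the iteration order (flow first, then segmentation, for a prescribed number of rounds) comes from the abstract:

    ```python
    def estimate_flow_and_segmentation(images, init_segmentation, init_flow,
                                       refine_flow, refine_segmentation,
                                       iterations=3):
        """Skeleton of the alternating refinement: each refine_* callable
        receives the image list, the index of the image being refined, and
        the last-computed segmentations and per-segment flow estimates."""
        segs = [init_segmentation(im) for im in images]
        flows = [init_flow(im, seg) for im, seg in zip(images, segs)]
        for _ in range(iterations):
            # refine each image's flow using the last-computed segmentations
            flows = [refine_flow(images, i, segs, flows)
                     for i in range(len(images))]
            # refine each image's segmentation using the last-computed flows
            segs = [refine_segmentation(images, i, segs, flows)
                    for i in range(len(images))]
        return flows, segs
    ```

    The point of the structure is that each quantity is always refined against the other's most recent estimate, never a stale one.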


    Facial image processing
    45.
    Invention Grant
    Facial image processing (In Force)

    Publication No.: US07433807B2

    Publication Date: 2008-10-07

    Application No.: US11218164

    Filing Date: 2005-09-01

    CPC classification number: G06K9/4661 G06K9/00268 G06T7/521 G06T15/506

    Abstract: Methods and systems for processing facial image data for use in animation are described. In one embodiment, a system is provided that illuminates a face with illumination sufficient to enable the simultaneous capture of both structure data (e.g., a range or depth map) and reflectance properties (e.g., the diffuse reflectance of a subject's face). The captured information can then be used for various facial animation operations, including expression recognition and expression transformation.


    Color segmentation-based stereo 3D reconstruction system and process
    46.
    Invention Grant
    Color segmentation-based stereo 3D reconstruction system and process (In Force)

    Publication No.: US07324687B2

    Publication Date: 2008-01-29

    Application No.: US10879327

    Filing Date: 2004-06-28

    CPC classification number: G06K9/20 G06K2209/40 G06T7/55

    Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, based on color segmentation, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, under the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into the other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per-pixel depth map if the reconstruction application calls for it.
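    The initial DSD step can be sketched concretely: for each segment, score every candidate disparity by matching all of the segment's pixels at once (since they are assumed to share one disparity), then normalize the scores into a distribution. This is an illustrative sketch assuming grayscale rectified images, 1-D horizontal disparities, and a simple absolute-difference cost, none of which the abstract specifies:

    ```python
    import numpy as np

    def initial_dsd(left, right, seg_labels, disparities, sigma=10.0):
        """For each segment of the left image, assume every pixel shares one
        disparity d, average per-pixel matching costs against the right
        image, and normalize into a distribution over the candidates."""
        h, w = left.shape
        dsd = {}
        for s in np.unique(seg_labels):
            ys, xs = np.nonzero(seg_labels == s)
            scores = []
            for d in disparities:
                xs_r = np.clip(xs - d, 0, w - 1)
                cost = np.abs(left[ys, xs].astype(float) -
                              right[ys, xs_r].astype(float)).mean()
                scores.append(np.exp(-cost / sigma))
            scores = np.array(scores)
            dsd[int(s)] = scores / scores.sum()
        return dsd
    ```

    Pooling the cost over a whole segment is what makes the estimate robust in textureless interiors, where per-pixel matching is ambiguous.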


    Real-time rendering system and process for interactive viewpoint video
    47.
    Invention Grant
    Real-time rendering system and process for interactive viewpoint video (In Force)

    Publication No.: US07221366B2

    Publication Date: 2007-05-22

    Application No.: US10910088

    Filing Date: 2004-08-03

    CPC classification number: G06T15/205

    Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.


    System and process for generating high dynamic range images from multiple exposures of a moving scene
    48.
    Invention Grant
    System and process for generating high dynamic range images from multiple exposures of a moving scene (In Force)

    Publication No.: US07142723B2

    Publication Date: 2006-11-28

    Application No.: US10623033

    Filing Date: 2003-07-18

    CPC classification number: G06T5/50 G06T7/269

    Abstract: A system and process for generating a high dynamic range (HDR) image from a bracketed image sequence, even in the presence of scene or camera motion, is presented. This is accomplished by first selecting one of the images as a reference image. Then, each non-reference image is registered with another of the images (the reference image included) whose exposure is both closer to the reference exposure than that of the image under consideration and, among the other images, closest to the exposure of the image under consideration; each registration generates a flow field. The flow fields generated for the non-reference images not registered directly with the reference image are concatenated so that each is registered with the reference image. Each non-reference image is then warped using its associated flow field. Finally, the reference image and the warped images are combined to create a radiance map representing the HDR image.
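    The final combination step can be sketched as a weighted average of per-exposure radiance estimates. This sketch assumes the registration/warping has already been done, a linear camera response, and a simple "hat" weight; the patent does not prescribe these particular choices:

    ```python
    import numpy as np

    def radiance_map(images, exposure_times):
        """Combine registered (already warped) 8-bit exposures into a
        radiance map: each pixel's radiance estimates (value / exposure
        time) are averaged with a hat weight that trusts mid-range pixels
        and down-weights under- and over-exposed ones."""
        num = np.zeros(images[0].shape, float)
        den = np.zeros(images[0].shape, float)
        for img, t in zip(images, exposure_times):
            z = img.astype(float)
            w = 1.0 - np.abs(z / 255.0 - 0.5) * 2.0   # peaks at mid-gray
            num += w * z / t
            den += w
        return num / np.maximum(den, 1e-6)
    ```

    Because each estimate is divided by its exposure time before averaging, well-registered exposures of a static pixel agree on a single radiance value regardless of how long each shot was exposed.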


    System and process for optimal texture map reconstruction from multiple views
    50.
    Invention Application
    System and process for optimal texture map reconstruction from multiple views (In Force)

    Publication No.: US20050285872A1

    Publication Date: 2005-12-29

    Application No.: US11192639

    Filing Date: 2005-07-28

    CPC classification number: G06T15/04

    Abstract: A system and process for reconstructing optimal texture maps from multiple views of a scene is described. In essence, this reconstruction is based on the optimal synthesis of textures from multiple sources. This is generally accomplished using basic image processing theory to derive the correct weights for blending the multiple views. Namely, the steps of reconstructing, warping, prefiltering, and resampling are followed in order to warp reference textures to a desired location, and to compute spatially-variant weights for optimal blending. These weights take into consideration the anisotropy in the texture projection and changes in sampling frequency due to foreshortening. The weights are combined and the computation of the optimal texture is treated as a restoration problem, which involves solving a linear system of equations. This approach can be incorporated in a variety of applications, such as texturing of 3D models, analysis by synthesis methods, super-resolution techniques, and view-dependent texture mapping.
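    The core blending step, given reference textures already warped into the target parameterization and per-pixel weights, reduces to a normalized spatially variant weighted average. This sketch covers only that step for grayscale textures; the patent's full method derives the weights from anisotropy and foreshortening and solves a linear restoration system, both omitted here:

    ```python
    import numpy as np

    def blend_textures(warped_textures, weights):
        """Blend reference textures (already warped to the target texture
        domain) using per-pixel weights; weights are normalized per pixel,
        and pixels with zero total weight remain zero."""
        num = np.zeros_like(warped_textures[0], dtype=float)
        den = np.zeros_like(weights[0], dtype=float)
        for tex, w in zip(warped_textures, weights):
            num += w * tex
            den += w
        return num / np.where(den > 0.0, den, 1.0)
    ```

    Making the weights vary per pixel, rather than per view, is what lets a single output texel favor whichever view samples that surface patch least obliquely.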

