System and process for generating high dynamic range video
    21.
    Invention Grant
    System and process for generating high dynamic range video (In force)

    Publication No.: US07239757B2

    Publication Date: 2007-07-03

    Application No.: US11338910

    Filing Date: 2006-01-23

    IPC Class: G06K9/40

    Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and a longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
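
    Below is a minimal Python sketch of the radiance-map step described in the abstract: two registered frames from the alternating short/long exposure sequence are merged into per-pixel radiance, with a hat-shaped trust weight standing in for the patent's trustworthy-pixel selection. The function name, the linear camera response, and the weighting scheme are illustrative assumptions, not taken from the patent.

        import numpy as np

        def merge_radiance(short_frame, long_frame, t_short, t_long):
            """Merge two registered exposures (float arrays in [0, 1]) into a radiance map.

            Pixels are weighted by a hat function so that values near 0 or 1
            (noisy or saturated, hence less trustworthy) contribute little.
            A linear camera response is assumed; a real system would first
            apply the inverse response curve.
            """
            def weight(p):
                # Hat weighting: peaks at mid-gray, falls to zero at the extremes.
                return 1.0 - np.abs(2.0 * p - 1.0)

            w_s, w_l = weight(short_frame), weight(long_frame)
            # Per-exposure radiance estimates: pixel value divided by exposure time.
            r_s = short_frame / t_short
            r_l = long_frame / t_long
            denom = w_s + w_l
            # Weighted average; fall back to the long exposure where both weights vanish.
            return np.where(denom > 0,
                            (w_s * r_s + w_l * r_l) / np.maximum(denom, 1e-6),
                            r_l)

        # Example: frames captured at 1/250 s and 1/30 s.
        # radiance = merge_radiance(short_frame, long_frame, 1.0 / 250, 1.0 / 30)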


    Multi-pass image resampling
    22.
    Invention Grant
    Multi-pass image resampling (In force)

    Publication No.: US08121434B2

    Publication Date: 2012-02-21

    Application No.: US12138454

    Filing Date: 2008-06-13

    IPC Class: G06K9/32 G06K9/40 G09G5/00

    CPC Class: G06T3/4007 G06T3/608

    Abstract: Multi-pass image resampling technique embodiments are presented that employ a series of one-dimensional filtering, resampling, and shearing stages to achieve good efficiency while maintaining high visual fidelity. In one embodiment, high-quality (multi-tap) image filtering is used inside each one-dimensional resampling stage. Because each stage only uses one-dimensional filtering, the overall computation is very efficient and amenable to graphics processing unit (GPU) implementation using pixel shaders. This embodiment also upsamples the image, before each shearing step, in the direction orthogonal to the shear to prevent aliasing, and then downsamples the image to its final size with high-quality low-pass filtering. This ensures that none of the stages causes excessive blurring or aliasing.
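
    As an illustration of what a single one-dimensional, multi-tap resampling stage looks like, here is a Python sketch that resizes the rows of an image with a 4-tap Catmull-Rom filter; applying it again with the first two axes swapped resizes the columns. The kernel choice and function names are assumptions for illustration, not the patent's specific filter.

        import numpy as np

        def catmull_rom(x):
            """Catmull-Rom cubic kernel: a common 4-tap interpolation filter."""
            x = abs(x)
            if x < 1.0:
                return 1.5 * x**3 - 2.5 * x**2 + 1.0
            if x < 2.0:
                return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
            return 0.0

        def resample_rows(image, new_width):
            """One-dimensional resampling stage: resize each row with a 4-tap filter.

            Only horizontal neighbours are touched, so the stage is a bank of
            independent 1-D convolutions, which is what makes it cheap and
            GPU-friendly.
            """
            h, w = image.shape[:2]
            scale = w / float(new_width)
            out = np.zeros((h, new_width) + image.shape[2:], dtype=np.float64)
            for j in range(new_width):
                # Continuous source coordinate of output column j (pixel centers).
                src = (j + 0.5) * scale - 0.5
                base = int(np.floor(src))
                taps, weights = [], []
                for k in range(base - 1, base + 3):          # 4 taps around src
                    taps.append(min(max(k, 0), w - 1))        # clamp at the border
                    weights.append(catmull_rom(src - k))
                weights = np.asarray(weights)
                weights /= weights.sum()                      # normalize the filter
                out[:, j] = sum(wt * image[:, t] for wt, t in zip(weights, taps))
            return out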


    MULTI-PASS IMAGE RESAMPLING
    23.
    Invention Application
    MULTI-PASS IMAGE RESAMPLING (In force)

    Publication No.: US20090310888A1

    Publication Date: 2009-12-17

    Application No.: US12138454

    Filing Date: 2008-06-13

    IPC Class: G06K9/32

    CPC Class: G06T3/4007 G06T3/608

    Abstract: Multi-pass image resampling technique embodiments are presented that employ a series of one-dimensional filtering, resampling, and shearing stages to achieve good efficiency while maintaining high visual fidelity. In one embodiment, high-quality (multi-tap) image filtering is used inside each one-dimensional resampling stage. Because each stage only uses one-dimensional filtering, the overall computation is very efficient and amenable to graphics processing unit (GPU) implementation using pixel shaders. This embodiment also upsamples the image, before each shearing step, in the direction orthogonal to the shear to prevent aliasing, and then downsamples the image to its final size with high-quality low-pass filtering. This ensures that none of the stages causes excessive blurring or aliasing.
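
    As a complementary illustration of why a pipeline of shear stages is attractive, here is the classic three-shear factorization of a 2D rotation (often attributed to Paeth): every factor moves pixels along only one axis, so each pass can be carried out with one-dimensional filtering. This is a well-known related decomposition, not necessarily the factorization used in this application.

        import numpy as np

        def rotation_as_three_shears(theta):
            """Factor a 2-D rotation into shear_x * shear_y * shear_x.

            Each factor displaces pixels along a single axis, so each pass can
            be implemented with the kind of 1-D filtering described above.
            """
            a = -np.tan(theta / 2.0)
            b = np.sin(theta)
            shear_x = np.array([[1.0, a], [0.0, 1.0]])
            shear_y = np.array([[1.0, 0.0], [b, 1.0]])
            return shear_x, shear_y, shear_x

        theta = np.deg2rad(30.0)
        sx1, sy, sx2 = rotation_as_three_shears(theta)
        rotation = np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])
        # The product of the three shears reproduces the rotation matrix.
        assert np.allclose(sx1 @ sy @ sx2, rotation)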


    System and process for compressing and decompressing multiple, layered, video streams of a scene captured from different viewpoints forming a grid using spatial and temporal encoding
    24.
    Invention Application
    System and process for compressing and decompressing multiple, layered, video streams of a scene captured from different viewpoints forming a grid using spatial and temporal encoding (In force)

    Publication No.: US20060031915A1

    Publication Date: 2006-02-09

    Application No.: US11097533

    Filing Date: 2005-03-31

    Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints that form a grid of viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
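
    A minimal sketch of how coding modes could be assigned across the camera grid for one contemporaneous frame set, following the scheme in the abstract: keyframe main layers get temporal (inter-frame) coding, non-keyframe main layers are spatially predicted from the nearest keyframe camera, and boundary layers are always intra-coded. The grid layout, the every-other-camera keyframe rule, and the function name are illustrative assumptions.

        # Assign a coding mode to every layer of one contemporaneous frame set.
        # Cameras sit on a 2-D grid; every other camera in each direction is
        # (arbitrarily, for illustration) designated a keyframe camera.

        def assign_coding_modes(grid_rows, grid_cols, keyframe_stride=2):
            keyframes = {(r, c)
                         for r in range(0, grid_rows, keyframe_stride)
                         for c in range(0, grid_cols, keyframe_stride)}
            modes = {}
            for r in range(grid_rows):
                for c in range(grid_cols):
                    if (r, c) in keyframes:
                        # Keyframe main layers are coded against the previous
                        # frame in time (inter-frame / temporal coding).
                        modes[(r, c, "main")] = ("temporal", None)
                    else:
                        # Non-keyframe main layers are spatially predicted from
                        # the nearest keyframe camera in the same frame set.
                        ref = min(keyframes, key=lambda k: abs(k[0] - r) + abs(k[1] - c))
                        modes[(r, c, "main")] = ("spatial_prediction", ref)
                    # Boundary layers are compressed on their own (intra-frame coding).
                    modes[(r, c, "boundary")] = ("intra", None)
            return modes

        # Example: a 2 x 4 grid of cameras.
        for key, value in sorted(assign_coding_modes(2, 4).items()):
            print(key, value)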


    Interactive viewpoint video system and process
    26.
    Invention Application
    Interactive viewpoint video system and process (In force)

    Publication No.: US20050285875A1

    Publication Date: 2005-12-29

    Application No.: US10880774

    Filing Date: 2004-06-28

    IPC Class: G06T15/20 G06T19/00 G09G5/00

    Abstract: A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
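
    The abstract is largely architectural, but the "change the viewpoint at will" idea can be illustrated with a small sketch that picks the capture cameras nearest a requested viewpoint and computes blend weights for them. This is only an illustration of view-dependent weighting; the patent renders from a layered 3D representation rather than by simple image blending, and all names here are assumptions.

        import numpy as np

        def nearest_view_blend_weights(camera_positions, novel_viewpoint, k=2):
            """Pick the k capture cameras nearest a requested viewpoint and return
            normalized blend weights (closer cameras weigh more)."""
            cams = np.asarray(camera_positions, dtype=float)
            view = np.asarray(novel_viewpoint, dtype=float)
            dist = np.linalg.norm(cams - view, axis=1)
            nearest = np.argsort(dist)[:k]
            inv = 1.0 / np.maximum(dist[nearest], 1e-9)
            weights = inv / inv.sum()
            return list(zip(nearest.tolist(), weights.tolist()))

        # Eight cameras along a line, user viewpoint between cameras 2 and 3.
        cameras = [(x, 0.0, 0.0) for x in range(8)]
        print(nearest_view_blend_weights(cameras, (2.3, 0.0, 0.0)))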


    System and process for generating high dynamic range images from multiple exposures of a moving scene
    27.
    Invention Application
    System and process for generating high dynamic range images from multiple exposures of a moving scene (In force)

    Publication No.: US20050013501A1

    Publication Date: 2005-01-20

    Application No.: US10623033

    Filing Date: 2003-07-18

    CPC Class: G06T5/50 G06T7/269

    Abstract: A system and process for generating a high dynamic range (HDR) image from a bracketed image sequence, even in the presence of scene or camera motion, is presented. This is accomplished by first selecting one of the images as a reference image. Then, each non-reference image is registered with another one of the images (possibly the reference image itself) whose exposure is both closer to that of the reference image than the exposure of the image under consideration and, among the other images, closest to the exposure of the image under consideration; this registration generates a flow field. The flow fields generated for the non-reference images not already registered with the reference image are concatenated to register each of them with the reference image. Each non-reference image is then warped using its associated flow field. The reference image and the warped images are combined to create a radiance map representing the HDR image.
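
    A minimal sketch of the registration-chaining step: two dense flow fields computed between neighboring exposures are composed so that an image two steps away in the exposure chain can be warped directly into the reference frame. Flows are assumed to be backward flows (reference coordinates to source coordinates), SciPy's map_coordinates does the sampling, and the function names are illustrative.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def compose_flows(flow_ab, flow_bc):
            """Concatenate two backward flow fields.

            flow_ab[y, x] is the (dy, dx) offset that takes a pixel in frame A to
            its match in frame B; flow_bc does the same from B to C.  The result
            takes A directly to C, which is how flows computed between
            neighbouring exposures are chained back to the reference image.
            """
            h, w = flow_ab.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
            # Where each A pixel lands in B.
            yb = ys + flow_ab[..., 0]
            xb = xs + flow_ab[..., 1]
            # Sample the B->C flow at those (non-integer) positions.
            dyc = map_coordinates(flow_bc[..., 0], [yb, xb], order=1, mode='nearest')
            dxc = map_coordinates(flow_bc[..., 1], [yb, xb], order=1, mode='nearest')
            return np.stack([flow_ab[..., 0] + dyc, flow_ab[..., 1] + dxc], axis=-1)

        def warp(image, flow):
            """Backward-warp a single-channel image into the flow's reference frame."""
            h, w = flow.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
            return map_coordinates(image, [ys + flow[..., 0], xs + flow[..., 1]],
                                   order=1, mode='nearest')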


    Object instance recognition using feature symbol triplets
    28.
    Invention Application
    Object instance recognition using feature symbol triplets (In force)

    Publication No.: US20070179921A1

    Publication Date: 2007-08-02

    Application No.: US11342218

    Filing Date: 2006-01-27

    IPC Class: G06F15/18

    CPC Class: G06K9/6211

    Abstract: A feature symbol triplets object instance recognizer and method for recognizing specific objects in a query image. Generally, the recognizer and method find repeatable features in the image and match the repeatable features between a query image and a set of training images. More specifically, the recognizer and method find features in the query image and then group all possible combinations of three features into feature triplets. Small regions or “patches” are defined in the query image, and an affine transformation is applied to the patches to identify any similarity between patches in the query image and the training images. The affine transformation is computed using the positions of neighboring features in each feature triplet. Next, all similar patches are found, and pairs of images are then aligned to determine whether the patches agree on the position of the object. If they do, the object is deemed found and identified.
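
    A minimal sketch of the geometric step the abstract relies on: three matched feature positions (a triplet in the query image and the corresponding triplet in a training image) determine a unique affine transformation, which can then be used to map patches between the two images. The exact-solve formulation and the names are illustrative assumptions.

        import numpy as np

        def affine_from_triplet(src_pts, dst_pts):
            """Compute the 2x3 affine transform that maps three source feature
            positions exactly onto three destination positions.

            Each feature triplet gives three point correspondences, the minimum
            needed to pin down an affine map (6 unknowns, 6 equations).
            """
            src = np.asarray(src_pts, dtype=float)   # shape (3, 2)
            dst = np.asarray(dst_pts, dtype=float)   # shape (3, 2)
            A = np.hstack([src, np.ones((3, 1))])    # (3, 3): [x, y, 1] rows
            # Solve A @ M.T = dst for the 2x3 matrix M, one output axis at a time.
            return np.linalg.solve(A, dst).T         # (2, 3)

        def apply_affine(M, pts):
            pts = np.asarray(pts, dtype=float)
            return pts @ M[:, :2].T + M[:, 2]

        # A triplet and the same triplet rotated, scaled and shifted.
        src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
        dst = [(5.0, 5.0), (5.0, 7.0), (3.0, 5.0)]
        M = affine_from_triplet(src, dst)
        assert np.allclose(apply_affine(M, src), dst)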


    Color segmentation-based stereo 3D reconstruction system and process
    29.
    Invention Application
    Color segmentation-based stereo 3D reconstruction system and process (In force)

    Publication No.: US20050286757A1

    Publication Date: 2005-12-29

    Application No.: US10879327

    Filing Date: 2004-06-28

    IPC Class: G06K9/00 G06T7/00

    CPC Class: G06K9/20 G06K2209/40 G06T7/55

    Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, based on a color segmentation approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per-pixel depth map if the reconstruction application calls for it.
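
    A minimal sketch of the DSD initialization described in the abstract: because every pixel in a segment is assumed to share one disparity, a matching cost is aggregated over the whole segment for each candidate disparity and the costs are converted into a distribution. The sum-of-absolute-differences cost and the exponential normalization are illustrative assumptions, not the patent's exact formulation.

        import numpy as np

        def initial_dsd(left, right, segment_mask, disparities):
            """Initial disparity space distribution (DSD) for one segment.

            left, right  : single-channel rectified images (float arrays)
            segment_mask : boolean mask of the segment's pixels in the left image
            disparities  : iterable of candidate integer disparities

            All pixels in the segment are assumed to share a single disparity, so
            one aggregated matching cost is computed per candidate and the costs
            are turned into a normalized distribution (lower cost -> higher mass).
            """
            ys, xs = np.nonzero(segment_mask)
            costs = []
            for d in disparities:
                xr = np.clip(xs - d, 0, right.shape[1] - 1)
                # Mean absolute difference over the segment at this disparity.
                costs.append(np.mean(np.abs(left[ys, xs] - right[ys, xr])))
            costs = np.asarray(costs)
            # Convert costs to a distribution; the scale factor is an arbitrary choice.
            scores = np.exp(-costs / (costs.mean() + 1e-9))
            return scores / scores.sum()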


    System and process for generating a two-layer, 3D representation of a scene

    Publication No.: US20060114253A1

    Publication Date: 2006-06-01

    Application No.: US11334591

    Filing Date: 2006-01-17

    IPC Class: G06T15/40

    CPC Class: G06T15/205

    Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed-sized areas surrounding depth discontinuities found in the image using its disparity map.
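
    A minimal sketch of how the depth discontinuity areas that define the two layers might be located and used: large jumps in the disparity map are detected and dilated into a strip of prescribed width, pixels inside the strip go to the boundary layer, and the rest go to the main layer. The jump threshold, strip width, and binary alpha are illustrative simplifications; the patent additionally fills the main layer with background colors under the strip and computes fractional alpha with a matting step.

        import numpy as np
        from scipy.ndimage import binary_dilation

        def depth_discontinuity_mask(disparity, jump_threshold=4.0, strip_radius=2):
            """Boolean mask of the prescribed-size areas around depth discontinuities.

            A pixel is a discontinuity if its disparity differs from a horizontal or
            vertical neighbour by more than jump_threshold; the mask is then grown
            by strip_radius pixels so the boundary layer covers a small strip on
            either side of the jump.
            """
            d = np.asarray(disparity, dtype=float)
            jump = np.zeros(d.shape, dtype=bool)
            jump[:, 1:] |= np.abs(d[:, 1:] - d[:, :-1]) > jump_threshold
            jump[1:, :] |= np.abs(d[1:, :] - d[:-1, :]) > jump_threshold
            return binary_dilation(jump, iterations=strip_radius)

        def split_two_layers(image, disparity, mask):
            """Split an image into main and boundary layers using the mask above."""
            boundary = {"color": np.where(mask[..., None], image, 0),
                        "disparity": np.where(mask, disparity, 0),
                        # Placeholder alpha: 1 inside the strip; the patent computes
                        # fractional alpha with a matting step instead.
                        "alpha": mask.astype(float)}
            main = {"color": np.where(mask[..., None], 0, image),
                    "disparity": np.where(mask, 0, disparity)}
            return main, boundary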