5. Compressing and decompressing multiple, layered, video streams employing multi-directional spatial encoding
    Granted patent (in force)

    Publication No.: US08774274B2

    Publication Date: 2014-07-08

    Application No.: US13348262

    Filing Date: 2012-01-11

    IPC Class: H04N7/12

    Abstract: A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams, where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remainder being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single-direction technique, where just one reference frame, and so one chain, is used to predict each non-keyframe, or it can be a multi-directional technique, where two or more reference frames, and so chains, are used to predict each non-keyframe.

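    The multi-directional prediction idea in the abstract can be pictured with a small sketch: each block of a non-keyframe is predicted from whichever of two neighboring-viewpoint reference frames matches it best, and only the residual plus the chosen reference are kept. The block size, search range, and left/right reference naming below are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of multi-directional spatial prediction (illustrative only).
import numpy as np

BLOCK = 8          # assumed block size in pixels
SEARCH = 4         # assumed horizontal disparity search range

def predict_block(target, ref, y, x):
    """Return the best horizontal shift and SAD for one block in one reference."""
    h, w = target.shape
    block = target[y:y+BLOCK, x:x+BLOCK]
    best = (0, np.inf)
    for dx in range(-SEARCH, SEARCH + 1):
        xs = x + dx
        if xs < 0 or xs + BLOCK > w:
            continue
        cand = ref[y:y+BLOCK, xs:xs+BLOCK]
        sad = np.abs(block.astype(int) - cand.astype(int)).sum()
        if sad < best[1]:
            best = (dx, sad)
    return best

def compress_non_keyframe(frame, ref_left, ref_right):
    """Encode a non-keyframe against two reference frames (multi-directional)."""
    h, w = frame.shape
    tokens = []                       # (y, x, reference index, shift, residual block)
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            dl, sad_l = predict_block(frame, ref_left, y, x)
            dr, sad_r = predict_block(frame, ref_right, y, x)
            ref, dx, idx = (ref_left, dl, 0) if sad_l <= sad_r else (ref_right, dr, 1)
            pred = ref[y:y+BLOCK, x+dx:x+dx+BLOCK]
            residual = frame[y:y+BLOCK, x:x+BLOCK].astype(int) - pred.astype(int)
            tokens.append((y, x, idx, dx, residual))
    return tokens
```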

6. COMPRESSING AND DECOMPRESSING MULTIPLE, LAYERED, VIDEO STREAMS EMPLOYING MULTI-DIRECTIONAL SPATIAL ENCODING
    Patent application (in force)

    Publication No.: US20120114037A1

    Publication Date: 2012-05-10

    Application No.: US13348262

    Filing Date: 2012-01-11

    IPC Class: H04N11/02

    Abstract: A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams, where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remainder being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single-direction technique, where just one reference frame, and so one chain, is used to predict each non-keyframe, or it can be a multi-directional technique, where two or more reference frames, and so chains, are used to predict each non-keyframe.

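    The chaining idea in this abstract can be pictured as each non-keyframe viewpoint taking its reference from the neighboring viewpoint one step closer to a keyframe, so that chains of references radiate out from each keyframe. The view indexing and keyframe placement below are assumptions for illustration only, not the patented procedure.

```python
# Minimal sketch of building single-reference prediction chains across viewpoints.
def build_prediction_chains(num_views, keyframe_views):
    """Map each non-keyframe viewpoint to the neighbor it is predicted from."""
    keyframes = sorted(keyframe_views)
    reference_of = {}
    for v in range(num_views):
        if v in keyframes:
            continue                              # keyframes are coded independently
        nearest = min(keyframes, key=lambda k: abs(k - v))
        # Step one view toward the nearest keyframe: that neighbor is this
        # frame's reference, forming a chain rooted at the keyframe.
        reference_of[v] = v - 1 if nearest < v else v + 1
    return reference_of

# Example: 8 viewpoints with keyframes at views 0 and 4.
print(build_prediction_chains(8, [0, 4]))
# -> {1: 0, 2: 1, 3: 4, 5: 4, 6: 5, 7: 6}
```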

7. System and process for generating a two-layer, 3D representation of a scene
    Granted patent (in force)

    Publication No.: US07015926B2

    Publication Date: 2006-03-21

    Application No.: US10879235

    Filing Date: 2004-06-28

    IPC Class: G09G5/02

    CPC Class: G06T15/205

    Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and its pixel disparity map is presented. The two-layer representation includes a main layer having pixels exhibiting the background colors and background disparities associated with correspondingly located pixels of depth-discontinuity areas in the image, as well as pixels exhibiting the colors and disparities associated with correspondingly located pixels of the image not found in those depth-discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting the foreground colors, foreground disparities, and alpha values associated with the correspondingly located pixels of the depth-discontinuity areas. The depth-discontinuity areas correspond to prescribed-size areas surrounding depth discontinuities found in the image using its disparity map.

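    The two-layer construction can be sketched roughly as follows, under simplifying assumptions: depth discontinuities are detected as large disparity jumps, grown into a band of prescribed size, and the image is split into a boundary layer (carrying alpha) inside the band and a main layer outside it. The jump threshold, band size, and hard 0/1 alpha are assumptions of this sketch; the actual layers carry per-pixel foreground/background colors, disparities, and alpha values as described in the abstract.

```python
# Minimal sketch of a two-layer split around depth discontinuities (illustrative only).
import numpy as np

def two_layer_split(image, disparity, jump_thresh=4.0, border=3):
    """Return (main_layer, boundary_layer, alpha) with the same spatial shape as image."""
    # 1. Mark pixels where the disparity jumps sharply to a neighbor.
    gy, gx = np.gradient(disparity.astype(float))
    discontinuity = np.hypot(gy, gx) > jump_thresh

    # 2. Grow the marked pixels into a band of prescribed size around each discontinuity.
    band = discontinuity.copy()
    for _ in range(border):
        padded = np.pad(band, 1)
        band = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                padded[1:-1, :-2] | padded[1:-1, 2:] | band)

    # 3. Boundary layer keeps colors (plus alpha) inside the band;
    #    the main layer keeps everything outside it.
    alpha = band.astype(float)
    boundary_layer = image * alpha[..., None]
    main_layer = image * (1.0 - alpha)[..., None]
    return main_layer, boundary_layer, alpha
```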

8. Stereo movie editing
    Granted patent (in force)

    Publication No.: US08330802B2

    Publication Date: 2012-12-11

    Application No.: US12331419

    Filing Date: 2008-12-09

    IPC Class: H04N13/02

    CPC Class: H04N13/10

    Abstract: The stereo movie editing technique described herein combines knowledge of both multi-view stereo algorithms and human depth perception. The technique creates a digital editor specifically for stereographic cinema. It employs an interface that allows intuitive manipulation of the different parameters in a stereo movie setup, such as camera locations and screen position. Using the technique, it is possible to reduce or enhance well-known stereo movie effects such as cardboarding and miniaturization. The technique also provides new editing capabilities, such as directing the user's attention and making transitions between scenes easier.

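    Effects such as cardboarding and miniaturization come down to how on-screen disparities map to perceived depth for a given viewer geometry. The short example below uses the standard viewing-geometry relation Z = eV / (e - d), with eye separation e, viewing distance V, and screen disparity d; it is background for the abstract, not a formula taken from the patent, and the numbers are assumed defaults.

```python
# Worked example: perceived depth from on-screen disparity (standard geometry, not the patent's method).
def perceived_depth(d_m, eye_sep_m=0.065, view_dist_m=3.0):
    """Perceived distance (meters) of a point shown with on-screen disparity d_m."""
    if d_m >= eye_sep_m:
        return float("inf")        # disparity at or beyond eye separation: at or behind infinity
    return eye_sep_m * view_dist_m / (eye_sep_m - d_m)

# Halving all disparities (one way a smaller camera baseline shows up on screen)
# compresses the perceived depth range, which is what produces cardboarding.
for d in (0.00, 0.01, 0.02, 0.03):
    print(f"d={d:.2f} m  full: {perceived_depth(d):5.2f} m   halved: {perceived_depth(d/2):5.2f} m")
```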

10. VIEWER-CENTRIC USER INTERFACE FOR STEREOSCOPIC CINEMA
    Patent application (in force)

    Publication No.: US20100318914A1

    Publication Date: 2010-12-16

    Application No.: US12485179

    Filing Date: 2009-06-16

    IPC Class: G06F3/048

    Abstract: Described is a user interface that displays a representation of a stereo scene and includes interactive mechanisms for changing parameter values that determine the perceived appearance of that scene. The scene is modeled as if viewed from above, including a representation of the viewer's eyes, a representation of the viewing screen, and an indication simulating what each of the viewer's eyes perceives on the viewing screen. Variable parameters may include a vergence parameter, a dolly parameter, a field-of-view parameter, an interocular parameter, and a proscenium-arch parameter.

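    The abstract names five adjustable parameters. A minimal sketch of how such a parameter set might be grouped and fed into the top-down (viewed-from-above) scene model the interface displays is given below; every field name, unit, and default value is an assumption for illustration and does not come from the patent.

```python
# Illustrative grouping of the five parameters named in the abstract (assumed names/units).
from dataclasses import dataclass

@dataclass
class StereoViewParams:
    vergence_m: float = 0.0        # shift of the convergence plane (assumed)
    dolly_m: float = 0.0           # move the rig toward/away from the scene (assumed)
    fov_deg: float = 50.0          # horizontal field of view (assumed)
    interocular_m: float = 0.065   # eye/camera separation (assumed)
    proscenium_m: float = 0.0      # proscenium-arch masking at the frame edges (assumed)

def eye_positions(params: StereoViewParams):
    """Left/right eye x-positions in the top-down view, screen centered at x = 0."""
    half = params.interocular_m / 2.0
    return (-half, +half)

ui_state = StereoViewParams(dolly_m=0.5, interocular_m=0.06)
print(eye_positions(ui_state))    # (-0.03, 0.03)
```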