-
Publication No.: US08330802B2
Publication Date: 2012-12-11
Application No.: US12331419
Filing Date: 2008-12-09
Applicants: Sanjeev J. Koppal, Sing Bing Kang, Charles Lawrence Zitnick, III, Michael F. Cohen, Bryan Kent Ressler
Inventors: Sanjeev J. Koppal, Sing Bing Kang, Charles Lawrence Zitnick, III, Michael F. Cohen, Bryan Kent Ressler
IPC Class: H04N13/02
CPC Class: H04N13/10
Abstract: The stereo movie editing technique described herein combines knowledge of both multi-view stereo algorithms and human depth perception. The technique creates a digital editor specifically for stereographic cinema. The technique employs an interface that allows intuitive manipulation of the different parameters in a stereo movie setup, such as camera locations and screen position. Using the technique, it is possible to reduce or enhance well-known stereo movie effects such as cardboarding and miniaturization. The technique also provides new editing techniques, such as directing the user's attention and easier transitions between scenes.
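The cardboarding and miniaturization effects named above fall out of basic stereo viewing geometry: the depth at which a viewer fuses a point depends on its on-screen disparity relative to the eye separation. A minimal sketch of that relationship, with parameter names and default values that are illustrative rather than taken from the patent:

```python
def perceived_depth(disparity, eye_sep=0.065, screen_dist=2.0):
    """Depth (m) at which a point with the given on-screen disparity (m) is fused.

    Positive (uncrossed) disparity places the point behind the screen;
    zero disparity places it exactly on the screen plane.
    """
    if disparity >= eye_sep:
        raise ValueError("disparity >= eye separation: rays diverge")
    return eye_sep * screen_dist / (eye_sep - disparity)

# A point with zero disparity is perceived on the screen plane itself.
print(perceived_depth(0.0))  # 2.0
```

As disparity approaches the eye separation, the perceived depth diverges, which is one reason stereo editors clamp the maximum uncrossed disparity.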
-
Publication No.: US20100142801A1
Publication Date: 2010-06-10
Application No.: US12331419
Filing Date: 2008-12-09
Applicants: Sanjeev Jagannath Koppal, Sing Bing Kang, Charles Lawrence Zitnick, III, Michael F. Cohen, Bryan Kent Ressler
Inventors: Sanjeev Jagannath Koppal, Sing Bing Kang, Charles Lawrence Zitnick, III, Michael F. Cohen, Bryan Kent Ressler
IPC Class: G06K9/00
CPC Class: H04N13/10
Abstract: The stereo movie editing technique described herein combines knowledge of both multi-view stereo algorithms and human depth perception. The technique creates a digital editor specifically for stereographic cinema. The technique employs an interface that allows intuitive manipulation of the different parameters in a stereo movie setup, such as camera locations and screen position. Using the technique, it is possible to reduce or enhance well-known stereo movie effects such as cardboarding and miniaturization. The technique also provides new editing techniques, such as directing the user's attention and easier transitions between scenes.
-
Publication No.: US20100318914A1
Publication Date: 2010-12-16
Application No.: US12485179
Filing Date: 2009-06-16
Applicants: Charles Lawrence Zitnick, III, Bryan K. Ressler, Sing Bing Kang, Michael F. Cohen, Jagannatha Koppal
Inventors: Charles Lawrence Zitnick, III, Bryan K. Ressler, Sing Bing Kang, Michael F. Cohen, Jagannatha Koppal
IPC Class: G06F3/048
CPC Class: G11B27/034, G11B27/34, H04N21/4854, H04N21/6377, H04N21/658
Abstract: Described is a user interface that displays a representation of a stereo scene and includes interactive mechanisms for changing parameter values that determine the perceived appearance of that scene. The scene is modeled as if viewed from above, including a representation of a viewer's eyes, a representation of a viewing screen, and an indication simulating what each of the viewer's eyes perceives on the viewing screen. Variable parameters may include a vergence parameter, a dolly parameter, a field-of-view parameter, an interocular parameter, and a proscenium arch parameter.
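Of the parameters listed, vergence is commonly realized as a uniform horizontal shift applied to the two views, which slides the entire scene toward or behind the screen plane. A hedged sketch of that adjustment in terms of per-point screen disparities (function and variable names are hypothetical, not from the patent):

```python
def apply_vergence(disparities, shift):
    """Offset every screen disparity by a constant vergence shift.

    Decreasing disparity pulls the scene toward the viewer; increasing it
    pushes the scene behind the screen plane.
    """
    return [d + shift for d in disparities]

scene = [-0.01, 0.0, 0.02]                  # disparities in metres
print(apply_vergence(scene, 0.01))          # whole scene pushed 1 cm back
```

The other parameters act analogously on the top-down model: dolly translates the cameras, interocular scales the baseline, and the proscenium arch masks edge regions visible to only one eye.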
-
Publication No.: US09275680B2
Publication Date: 2016-03-01
Application No.: US12485179
Filing Date: 2009-06-16
Applicants: Charles Lawrence Zitnick, III, Bryan K. Ressler, Sing Bing Kang, Michael F. Cohen, Jagannatha Koppal
Inventors: Charles Lawrence Zitnick, III, Bryan K. Ressler, Sing Bing Kang, Michael F. Cohen, Jagannatha Koppal
IPC Class: G06F3/048, G11B27/034, G11B27/34, H04N21/485, H04N21/6377, H04N21/658
CPC Class: G11B27/034, G11B27/34, H04N21/4854, H04N21/6377, H04N21/658
Abstract: Described is a user interface that displays a representation of a stereo scene and includes interactive mechanisms for changing parameter values that determine the perceived appearance of that scene. The scene is modeled as if viewed from above, including a representation of a viewer's eyes, a representation of a viewing screen, and an indication simulating what each of the viewer's eyes perceives on the viewing screen. Variable parameters may include a vergence parameter, a dolly parameter, a field-of-view parameter, an interocular parameter, and a proscenium arch parameter.
-
Publication No.: US08750645B2
Publication Date: 2014-06-10
Application No.: US12634699
Filing Date: 2009-12-10
CPC Class: H04N5/23232, G06T3/4038, H04N5/23267, H04N5/262
Abstract: A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence.
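The per-pixel weighted compositing described above can be sketched directly: each output pixel is the weight-normalized sum of the corresponding pixels across the sequence. A minimal sketch using plain lists of rows; the weighting scheme itself (e.g. favoring sharp or well-exposed pixels, or pixels from the reference image) is left abstract:

```python
def composite(images, weights):
    """Per-pixel weighted average of a sequence of equally sized images.

    `images` and `weights` are parallel lists of 2-D grids (lists of rows);
    each output pixel is the weight-normalized sum across the sequence.
    """
    rows, cols = len(images[0]), len(images[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total = sum(w[r][c] for w in weights)
            out[r][c] = sum(im[r][c] * w[r][c]
                            for im, w in zip(images, weights)) / total
    return out

imgs = [[[10.0, 20.0]], [[30.0, 40.0]]]   # two 1x2 images
wts  = [[[1.0, 3.0]],  [[1.0, 1.0]]]      # independent per-pixel weights
print(composite(imgs, wts))               # [[20.0, 25.0]]
```

Because the weights are assigned independently per pixel, each output pixel can favor a different image in the sequence, which is what distinguishes this from a global blend.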
-
Publication No.: US20110142370A1
Publication Date: 2011-06-16
Application No.: US12634699
Filing Date: 2009-12-10
CPC Class: H04N5/23232, G06T3/4038, H04N5/23267, H04N5/262
Abstract: A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence.
-
Publication No.: US07657060B2
Publication Date: 2010-02-02
Application No.: US10814851
Filing Date: 2004-03-31
Applicants: Michael F. Cohen, Ying-Qing Xu, Heung-Yeung Shum, Jue Wang
Inventors: Michael F. Cohen, Ying-Qing Xu, Heung-Yeung Shum, Jue Wang
IPC Class: G06K9/00
CPC Class: G11B27/034, G06K9/00711, G06T15/02, H04N5/262
Abstract: The techniques and mechanisms described herein are directed to a system for stylizing video, such as interactively transforming video into a cartoon-like style. Briefly stated, the techniques include determining a set of volumetric objects within a video, each volumetric object being a segment; mean shift video segmentation may be used for this step. With that segmentation information, the technique further includes indicating, on a limited number of keyframes of the video, how segments should be merged into semantic regions. Finally, a contiguous volume is created by interpolating between keyframes with a mean-shift-constrained interpolation technique to propagate the semantic regions between keyframes.
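Mean shift, the segmentation engine the abstract names, iteratively moves a query point to the mean of the samples inside a kernel window until it settles on a local density mode. The patent applies it over spatio-temporal video volumes; the minimal 1-D sketch below (flat kernel, illustrative data) only shows the mode-seeking step itself:

```python
def mean_shift(x, points, bandwidth=1.0, iters=50):
    """Move x toward a local mode of `points` using a flat kernel."""
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if new_x == x:          # converged on a mode
            break
        x = new_x
    return x

data = [1.0, 1.2, 1.4, 8.0, 8.2]        # two clusters of samples
print(mean_shift(0.5, data))            # converges near the 1.2 cluster
```

Points that converge to the same mode are grouped into one segment; in video, the same iteration runs over pixel positions, times, and colors jointly.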
-
Publication No.: US07457477B2
Publication Date: 2008-11-25
Application No.: US10885259
Filing Date: 2004-07-06
CPC Class: G06T5/50
Abstract: A system and method for improving digital flash photographs. The present invention is a technique that significantly improves low-light imaging by giving the end user all the advantages of flash photography without producing the jarring flash look. The invention uses an image pair, one taken with flash and the other without, to remove noise from the ambient image, sharpen the ambient image using detail from the flash image, correct color, and remove red-eye.
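The sharpening step described above is often formulated as detail transfer: the ambient image supplies color and lighting, while the ratio of the flash image to a smoothed copy of itself supplies high-frequency detail. A hedged 1-D sketch, with a simple box blur standing in for the edge-preserving filter a real implementation would use:

```python
def box_blur(signal, radius=1):
    """Simple 1-D box blur, a stand-in for an edge-preserving filter."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def detail_transfer(ambient, flash, eps=0.01):
    """Multiply the ambient signal by the flash detail layer F / blur(F)."""
    base = box_blur(flash)
    return [a * (f + eps) / (b + eps)
            for a, f, b in zip(ambient, flash, base)]

ambient = [0.2, 0.2, 0.2, 0.2]          # dim, low-detail exposure
flash   = [0.5, 0.9, 0.5, 0.5]          # sharp highlight the ambient shot missed
print(detail_transfer(ambient, flash))
```

The small `eps` keeps the ratio stable where the flash image is near black; the same ratio construction leaves the ambient image's overall lighting untouched.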
-
Publication No.: US07450758B2
Publication Date: 2008-11-11
Application No.: US11942606
Filing Date: 2007-11-19
Applicants: Michael F. Cohen, Ying-Qing Xu, Heung-Yeung Shum, Jue Wang
Inventors: Michael F. Cohen, Ying-Qing Xu, Heung-Yeung Shum, Jue Wang
IPC Class: G06K9/00
CPC Class: G11B27/034, G06K9/00711, G06T15/02, H04N5/262
Abstract: The techniques and mechanisms described herein are directed to a system for stylizing video, such as interactively transforming video into a cartoon-like style. Briefly stated, the techniques include determining a set of volumetric objects within a video, each volumetric object being a segment; mean shift video segmentation may be used for this step. With that segmentation information, the technique further includes indicating, on a limited number of keyframes of the video, how segments should be merged into semantic regions. Finally, a contiguous volume is created by interpolating between keyframes with a mean-shift-constrained interpolation technique to propagate the semantic regions between keyframes.
-
Publication No.: US07149329B2
Publication Date: 2006-12-12
Application No.: US10968553
Filing Date: 2004-10-19
IPC Class: G06K9/00
CPC Class: G06T13/40, G06K9/00201, G06K9/00248, G06K9/00268, G06K9/00281, G06T7/251, G06T7/55, G06T7/579, G06T7/74, G06T9/001, G06T15/205, G06T17/00, G06T17/10, G06T17/20, G06T2200/08, G06T2207/10012, G06T2207/10016, G06T2207/10021, G06T2207/30201, H04N19/162, H04N19/503
Abstract: Described herein is a technique for creating a 3D face model using images obtained from an inexpensive camera associated with a general-purpose computer. Two still images of the user are captured, along with two video sequences. The user is asked to identify five facial features, which are used to calculate a mask and to perform fitting operations. Based on a comparison of the still images, deformation vectors are applied to a neutral face model to create the 3D model. The video sequences are used to create a texture map; this process references the previously obtained 3D model to determine the poses of the sequential video images.
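The deformation step in the abstract amounts to displacing each vertex of the neutral face mesh by a weighted sum of deformation vectors recovered from the image comparison. A minimal sketch; the basis vectors, weights, and names below are illustrative, not taken from the patent:

```python
def deform(neutral, deformations, weights):
    """Apply weighted deformation vectors to a neutral face mesh.

    `neutral` is a list of (x, y, z) vertices; each entry of `deformations`
    is a per-vertex list of displacement vectors, scaled by its weight.
    """
    out = []
    for i, (x, y, z) in enumerate(neutral):
        for d, w in zip(deformations, weights):
            dx, dy, dz = d[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]       # two-vertex toy mesh
wider_jaw = [(-0.1, 0.0, 0.0), (0.1, 0.0, 0.0)]    # hypothetical basis vector
print(deform(neutral, [wider_jaw], [0.5]))
```

Fitting then reduces to solving for the weights that best explain the feature positions observed in the two still images.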