STEREOSCOPIC 3D LIQUID CRYSTAL DISPLAY APPARATUS WITH BLACK DATA INSERTION
    61.
    Invention Application
    STEREOSCOPIC 3D LIQUID CRYSTAL DISPLAY APPARATUS WITH BLACK DATA INSERTION (Pending - Published)

    Publication Number: WO2008144110A1

    Publication Date: 2008-11-27

    Application Number: PCT/US2008/058636

    Application Date: 2008-03-28

    Abstract: A display apparatus includes a liquid crystal display panel having a frame response time of less than 5 milliseconds, drive electronics configured to drive the liquid crystal display panel to black between images that are provided to the liquid crystal display panel at a rate of at least 90 images per second, and a backlight positioned to provide light to the liquid crystal display panel. The backlight includes a right eye solid state light source and a left eye solid state light source, and can be modulated between the right eye solid state light source and the left eye solid state light source at a rate of at least 90 Hertz.
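
    A minimal timing sketch of the drive scheme summarized above, assuming a 120 Hz image rate (above the stated 90 images-per-second minimum) and hypothetical names (frame_schedule, PANEL_RESPONSE_MS); it only illustrates how black insertion can be interleaved with left/right backlight modulation, not the patented drive electronics.

```python
# Hypothetical sketch: interleave image frames, black data insertion, and
# left/right backlight pulses for a stereoscopic LCD.

IMAGE_RATE_HZ = 120          # >= 90 images per second, per the abstract
FRAME_PERIOD_MS = 1000.0 / IMAGE_RATE_HZ
PANEL_RESPONSE_MS = 5.0      # panel settles within its 5 ms frame response time

def frame_schedule(num_frames):
    """Yield (time_ms, image_eye, backlight_eye, panel_state) events."""
    t = 0.0
    for i in range(num_frames):
        eye = "left" if i % 2 == 0 else "right"
        # Show the eye image with that eye's backlight source lit, then drive
        # the panel to black (backlight off) before the opposite-eye image.
        yield (t, eye, eye, "image")
        yield (t + FRAME_PERIOD_MS - PANEL_RESPONSE_MS, eye, "off", "black")
        t += FRAME_PERIOD_MS

for event in frame_schedule(4):
    print(event)
```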

    DISPLAY SYSTEM
    62.
    Invention Application
    DISPLAY SYSTEM (Pending - Published)

    Publication Number: WO2008056753A1

    Publication Date: 2008-05-15

    Application Number: PCT/JP2007/071743

    Application Date: 2007-11-08

    Abstract: [Problem] To eliminate the flicker that arises when, under intermittent illumination such as fluorescent lighting, a normal image selected from two or more kinds of images is shown to a user wearing optical shutters. [Solution] In a display system (10) comprising a display panel (11A) capable of sequentially and repeatedly displaying two or more kinds of images and an optical shutter (13) that opens in step with the display period of a specific image on the display panel (11A), when operating under periodically flickering intermittent illumination (15), the display period of the specific image on the display panel (11A) is set to an integer multiple of the flicker period of the intermittent illumination (15).
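
    A small worked example of the synchronization rule in the abstract, assuming a 100 Hz flicker from 50 Hz fluorescent mains and a hypothetical helper name (synchronized_display_period); it shows how a desired display period can be snapped to an integer multiple of the illumination's flicker period.

```python
# Hypothetical sketch: snap the display period of the shutter-selected image
# to an integer multiple of the intermittent illumination's flicker period.

def synchronized_display_period(desired_period_ms, flicker_period_ms):
    """Return the nearest period that is an integer multiple (>= 1) of the
    illumination flicker period, so the shutter opens at a stable light phase."""
    multiple = max(1, round(desired_period_ms / flicker_period_ms))
    return multiple * flicker_period_ms

flicker_ms = 1000.0 / 100.0   # 100 Hz flicker from 50 Hz fluorescent lighting
desired_ms = 1000.0 / 60.0    # panel nominally repeats the image every ~16.7 ms
print(synchronized_display_period(desired_ms, flicker_ms))  # -> 20.0 ms (2 x 10 ms)
```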

    STEREOSCOPIC VIDEO SEQUENCES CODING SYSTEM AND METHOD
    64.
    Invention Application
    STEREOSCOPIC VIDEO SEQUENCES CODING SYSTEM AND METHOD (Pending - Published)

    Publication Number: WO2003088682A1

    Publication Date: 2003-10-23

    Application Number: PCT/CA2003/000524

    Application Date: 2003-04-09

    Abstract: A method for decoding a compressed image stream, the image stream having a plurality of frames, each frame consisting of a merged image including pixels from a left image and pixels from a right image. The method involves the steps of receiving each merged image; changing a clock domain from the original input signal to an internal domain; for each merged image, placing at least two adjacent pixels into an input buffer and interpolating an intermediate pixel, for forming a reconstructed left frame and a reconstructed right frame according to provenance of the adjacent pixels; and reconstructing a stereoscopic image stream from the left and right image frames. The invention also teaches a system for decoding a compressed image stream.
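
    A minimal reconstruction sketch under one possible merging scheme (columns interleaved left/right, even image width), using NumPy for brevity; the claimed method is more general, and the clock-domain change is omitted here.

```python
import numpy as np

def reconstruct_stereo(merged):
    """Split a column-interleaved merged frame (even columns = left image,
    odd columns = right image) and interpolate the missing columns from
    the two adjacent pixels of the same provenance."""
    m = merged.astype(np.float32)
    left = np.zeros_like(m)
    right = np.zeros_like(m)
    left[:, ::2] = m[:, ::2]          # left-image pixels keep their columns
    right[:, 1::2] = m[:, 1::2]       # right-image pixels keep their columns
    # Interpolate intermediate pixels as the average of same-eye neighbors.
    left[:, 1:-1:2] = (left[:, :-2:2] + left[:, 2::2]) / 2
    right[:, 2::2] = (right[:, 1:-1:2] + right[:, 3::2]) / 2
    # Edge columns have only one same-eye neighbor: replicate it.
    left[:, -1] = left[:, -2]
    right[:, 0] = right[:, 1]
    return left, right
```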

    METHOD OF AND SCALING UNIT FOR SCALING A THREE-DIMENSIONAL MODEL AND DISPLAY APPARATUS
    65.
    Invention Application
    METHOD OF AND SCALING UNIT FOR SCALING A THREE-DIMENSIONAL MODEL AND DISPLAY APPARATUS (Pending - Published)

    Publication Number: WO2003058556A1

    Publication Date: 2003-07-17

    Application Number: PCT/IB2002/005369

    Application Date: 2002-12-09

    CPC classification number: G06T15/20 G06T3/40 H04N13/128 H04N13/144

    Abstract: A method of scaling a three-dimensional model (100) into a scaled three-dimensional model (108) in a dimension related to depth, based on properties of human visual perception. The method distinguishes between relevant parts of the information represented by the three-dimensional model, to which human visual perception is sensitive, and irrelevant parts of that information, to which human visual perception is insensitive. Relevant properties of human visual perception include, for example, sensitivity to a discontinuity in a signal representing depth and sensitivity to a difference in luminance values between neighboring pixels of a two-dimensional view of the three-dimensional model.
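
    An illustrative sketch of perception-aware depth scaling in the spirit of the abstract, assuming a per-pixel depth map and a luminance image as inputs; the weighting choices (base_scale, edge_boost) are hypothetical, not the claimed method.

```python
import numpy as np

def scale_depth(depth, luminance, base_scale=0.5, edge_boost=0.5):
    """Scale a depth map non-uniformly: keep more depth variation where human
    vision is sensitive (depth discontinuities, luminance edges) and compress
    it elsewhere. base_scale/edge_boost are illustrative parameters only."""
    # Sensitivity cues: magnitude of local depth and luminance gradients.
    d_edges = np.hypot(*np.gradient(depth.astype(np.float32)))
    l_edges = np.hypot(*np.gradient(luminance.astype(np.float32)))
    relevance = np.clip(d_edges / (d_edges.max() + 1e-6)
                        + l_edges / (l_edges.max() + 1e-6), 0.0, 1.0)
    # Per-pixel scale: base_scale in flat regions, up to 1.0 at strong edges.
    scale = base_scale + edge_boost * relevance
    mean = depth.mean()
    return mean + (depth - mean) * scale
```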

    SINGLE DEPTH TRACKED ACCOMMODATION-VERGENCE SOLUTIONS
    68.
    Invention Application
    SINGLE DEPTH TRACKED ACCOMMODATION-VERGENCE SOLUTIONS (Pending - Published)

    Publication Number: WO2018027015A1

    Publication Date: 2018-02-08

    Application Number: PCT/US2017/045267

    Application Date: 2017-08-03

    Abstract: While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of a left eye of a viewer and a right vergence angle of a right eye of the viewer are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image for the viewer is rendered on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
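
    A geometric sketch of how a virtual object depth could be derived from the two tracked vergence angles, assuming each angle is measured inward from the eye's straight-ahead direction and a known interpupillary distance; the parameter names and the formula are illustrative assumptions, not taken from the application.

```python
import math

def virtual_object_depth(left_vergence_deg, right_vergence_deg, ipd_mm=63.0):
    """Intersect the gaze rays of two eyes separated by ipd_mm, each rotated
    inward by its vergence angle, and return the distance (mm) from the
    inter-ocular baseline to the intersection point."""
    a = math.radians(left_vergence_deg)
    b = math.radians(right_vergence_deg)
    # Depth of the intersection point: d = ipd / (tan(a) + tan(b)).
    return ipd_mm / (math.tan(a) + math.tan(b))

# Example: both eyes converge 1.8 degrees inward -> object roughly 1 m away.
print(virtual_object_depth(1.8, 1.8))  # ~1002 mm
```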

    LIVE ACTION VOLUMETRIC VIDEO COMPRESSION / DECOMPRESSION AND PLAYBACK
    69.
    Invention Application
    LIVE ACTION VOLUMETRIC VIDEO COMPRESSION / DECOMPRESSION AND PLAYBACK (Pending - Published)

    Publication Number: WO2017189490A1

    Publication Date: 2017-11-02

    Application Number: PCT/US2017/029261

    Application Date: 2017-04-25

    Applicant: HYPEVR

    Abstract: A method for compressing geometric data and video is disclosed. The method includes receiving video and associated geometric data for a physical location, generating a background video from the video, and generating background geometric data for the geometric data outside of a predetermined distance from a capture point for the video as a skybox sphere at a non-parallax distance. The method further includes generating a geometric shape for a first detected object within the predetermined distance from the capture point from the geometric data, generating shape textures for the geometric shape from the video, and encoding the background video and shape textures as compressed video along with the geometric shape and the background geometric data as encoded volumetric video.
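
    A minimal sketch of the near/far split described above, assuming point samples relative to the capture point and a hypothetical parallax_distance_m threshold; geometry beyond that distance contributes only to the skybox background.

```python
import math

def split_geometry(samples, parallax_distance_m=20.0):
    """Partition point samples (x, y, z in metres, relative to the capture
    point) into near geometry, which keeps full 3D shape, and far samples,
    which are reduced to directions on a skybox sphere, since beyond the
    parallax distance head motion produces no visible parallax."""
    near, skybox_dirs = [], []
    for x, y, z in samples:
        dist = math.sqrt(x * x + y * y + z * z)
        if dist <= parallax_distance_m:
            near.append((x, y, z))
        else:
            skybox_dirs.append((x / dist, y / dist, z / dist))  # direction only
    return near, skybox_dirs
```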

    CALCULATION OF TEMPORALLY COHERENT DISPARITY FROM SEQUENCE OF VIDEO FRAMES
    70.
    Invention Application
    CALCULATION OF TEMPORALLY COHERENT DISPARITY FROM SEQUENCE OF VIDEO FRAMES (Pending - Published)

    Publication Number: WO2017143572A1

    Publication Date: 2017-08-31

    Application Number: PCT/CN2016/074584

    Application Date: 2016-02-25

    Inventor: WU, Yi; JIANG, Yong

    CPC classification number: H04N13/128 H04N13/144 H04N2013/0085

    Abstract: Techniques are provided for calculating temporally coherent disparity values for pixels in a sequence of image frames. An example method may include calculating initial spatial disparity costs between a pixel of a first image frame from a reference camera and pixels from an image frame from a secondary camera. The method may also include estimating a motion vector for the pixel of the first reference camera image frame to a corresponding pixel from a second reference camera image frame. The method may further include calculating a confidence value for the estimated motion vector based on a measure of similarity between the colors of the pixels of the first and second image frames from the reference camera. The method may further include calculating temporally coherent disparity costs based on the initial spatial disparity costs weighted by the confidence value and selecting a disparity value based on those costs.
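
    A compact sketch of the cost-blending idea in the abstract, assuming per-pixel spatial matching costs for each candidate disparity, motion-compensated costs from the previous frame, and a per-pixel confidence from color similarity; the array names and the blending form are illustrative assumptions, not the claimed equations.

```python
import numpy as np

def temporally_coherent_disparity(spatial_cost, prev_cost_warped, confidence):
    """Blend current spatial matching costs with motion-compensated costs from
    the previous frame, weighted per pixel by the motion-vector confidence,
    then pick the disparity with the minimum blended cost.

    spatial_cost, prev_cost_warped: arrays of shape (H, W, num_disparities)
    confidence: array of shape (H, W) in [0, 1], derived from color similarity
    of the motion-linked pixels in the two reference-camera frames."""
    w = confidence[..., None]                    # broadcast over disparities
    blended = (1.0 - w) * spatial_cost + w * prev_cost_warped
    return np.argmin(blended, axis=-1)           # winner-take-all disparity map
```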
