INTERPOLATING AND RENDERING SUB-PHASES OF A 4D DATASET
    Invention application · Status: pending, published

    Publication No.: US20110050692A1

    Publication Date: 2011-03-03

    Application No.: US12552261

    Filing Date: 2009-09-01

    IPC Class: G06T17/00

    CPC Class: G06T15/08 G06T3/0093

    Abstract: A technique for rendering a deformable volume includes acquiring 3D images of a deformable volume including an object during phases of a deformation motion. The 3D images include voxels, a portion of which move from original coordinate locations during a primary phase to deformed coordinate locations during each subsequent phase of the deformation motion. Deformation matrices, each based upon one of the 3D images during a different one of the phases, are generated. Each deformation matrix includes transformation vectors describing how to return the voxels from their deformed coordinate locations to their original coordinate locations of the primary phase. A sub-phase 3D image of the deformable volume between consecutive phases is generated by interpolating between the transformation vectors of the consecutive phases associated with a given coordinate location within the deformable volume and retrieving voxel data from a primary 3D image at voxel locations referenced by the interpolated transformation vectors.

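The sub-phase interpolation described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, array layout, and the use of linear interpolation with nearest-neighbour sampling are all assumptions made for brevity (a real renderer would typically use trilinear sampling).

```python
import numpy as np

def render_subphase(primary_volume, deform_a, deform_b, t):
    """Sketch of sub-phase generation: interpolate two deformation
    vector fields and sample the primary-phase volume.

    primary_volume : 3D array of voxel intensities at the primary phase.
    deform_a, deform_b : (D, H, W, 3) arrays of transformation vectors
        for two consecutive phases; each vector maps a deformed
        coordinate back to its original primary-phase coordinate.
    t : interpolation fraction in [0, 1] between the two phases.
    """
    # Linearly interpolate the transformation vectors of the two phases.
    deform_t = (1.0 - t) * deform_a + t * deform_b

    # For every voxel, follow the interpolated vector back into the
    # primary-phase volume and fetch the intensity there
    # (nearest-neighbour sampling for brevity).
    grid = np.stack(np.meshgrid(
        *[np.arange(n) for n in primary_volume.shape], indexing="ij"),
        axis=-1)
    src = np.rint(grid + deform_t).astype(int)
    for axis, n in enumerate(primary_volume.shape):
        src[..., axis] = np.clip(src[..., axis], 0, n - 1)
    return primary_volume[src[..., 0], src[..., 1], src[..., 2]]
```

With zero deformation in both phases the function simply returns the primary volume; a non-zero vector field pulls each output voxel from the referenced primary-phase location.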

    Direct volume rendering of 4D deformable volume images

    Publication No.: US20060072821A1

    Publication Date: 2006-04-06

    Application No.: US11095223

    Filing Date: 2005-03-31

    Applicant: Bai Wang

    Inventor: Bai Wang

    IPC Class: G06K9/34

    CPC Class: G06T7/00 G06T15/08

    Abstract: A method and system are presented for direct volume rendering of a deformable volume dataset that represents an object in motion. An original 3D image dataset of the object is acquired at an initial stage of the motion, and segmented. Deformed 3D image datasets of the object are acquired at subsequent stages of the motion. Deformation matrices are computed between the segmented original 3D image dataset and the deformed 3D image datasets. A plurality of deformed mesh patches are generated based on the deformation matrices. 2D textures are generated by dynamically sampling the segmented 3D image dataset, and applied to the deformed mesh patches. The textured deformed mesh patches are evaluated, blended, and composited to generate one or more volume-rendered images of the object in motion.
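The mesh-deformation step from this abstract can be illustrated with a short sketch. The helper name and the choice of a single 4×4 homogeneous matrix per patch are assumptions for illustration; the patent's deformation matrices may take a different form.

```python
import numpy as np

def deform_mesh_patch(vertices, deformation):
    """Hypothetical helper: apply a 4x4 homogeneous deformation matrix
    to one mesh patch.

    vertices : (N, 3) array of mesh-patch vertex positions.
    deformation : (4, 4) homogeneous transform computed between the
        original and a deformed 3D image dataset.
    """
    # Promote to homogeneous coordinates, transform each vertex,
    # then project back to 3D.
    homo = np.hstack([vertices, np.ones((vertices.shape[0], 1))])
    out = homo @ deformation.T
    return out[:, :3] / out[:, 3:4]
```

Each deformed patch would then be textured by sampling the segmented primary dataset, and the textured patches blended and composited into the final rendering, as the abstract describes.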