Determining a four-dimensional CT image based on three-dimensional CT data and four-dimensional model data
    1.
    Granted patent
    Determining a four-dimensional CT image based on three-dimensional CT data and four-dimensional model data (in force)

    Publication No.: US09367926B2

    Publication date: 2016-06-14

    Application No.: US14437789

    Filing date: 2012-10-26

    Applicant: Brainlab AG

    IPC classes: G06K9/00 G06T7/20 G06T13/20

    Abstract: The invention relates to a data processing method of determining a change of an image of an anatomical body part of a patient's body, the method being executed by a computer and comprising the following steps: a) acquiring static medical image data comprising static medical image information describing the anatomical body part in a first anatomical spatial state of an anatomical vital spatial change of the anatomical body part; b) acquiring patient model data comprising patient model information describing a model body part corresponding to the anatomical body part, wherein the patient model information describes the model body part in a plurality of model spatial states of a model vital spatial change corresponding to the anatomical vital spatial change; c) determining spatial state mapping data comprising spatial state mapping information describing at least one of a first mapping from the model body part in a first one of the plurality of model spatial states to the model body part in a second, different one of the plurality of model spatial states, the first model spatial state corresponding to the first anatomical spatial state, and a second mapping from the model body part in the first model spatial state to the anatomical body part in the first anatomical spatial state; d) determining, based on the static medical image data and the spatial state mapping data, transformed medical image data comprising transformed medical image information describing the anatomical body part in a second anatomical spatial state of the anatomical vital spatial change, the second anatomical spatial state corresponding to the second model spatial state.

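    The core of the claimed method is applying a model-derived spatial-state mapping to a single static CT volume so that images of other vital states (e.g. breathing phases) can be synthesized. Below is a minimal, hypothetical Python/NumPy sketch of that idea, not Brainlab's implementation; the names warp_static_ct and model_displacement, and the use of a dense backward-warping displacement field, are assumptions made only for illustration.

    # Hypothetical sketch (not the patented implementation): warp a static 3D CT
    # volume into a second vital spatial state using a model-derived displacement
    # field. All names here are assumptions.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_static_ct(static_ct: np.ndarray, model_displacement: np.ndarray) -> np.ndarray:
        """Backward-warp static_ct (Z, Y, X) with a displacement field of shape
        (3, Z, Y, X) that points from each output voxel back into the static image."""
        grid = np.indices(static_ct.shape).astype(np.float64)    # identity voxel coordinates
        sample_coords = grid + model_displacement                # where to sample the static image
        return map_coordinates(static_ct, sample_coords, order=1, mode="nearest")

    # Toy usage: a four-dimensional CT is then the static image warped once per model state.
    static_ct = np.random.rand(32, 32, 32)
    displacements = [np.zeros((3, 32, 32, 32)) for _ in range(4)]  # one field per breathing phase
    four_d_ct = np.stack([warp_static_ct(static_ct, d) for d in displacements])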

    Determining a Four-Dimensional CT Image Based on Three-Dimensional CT Data and Four-Dimensional Model Data
    2.
    Published patent application
    Determining a Four-Dimensional CT Image Based on Three-Dimensional CT Data and Four-Dimensional Model Data (in force)

    Publication No.: US20150302608A1

    Publication date: 2015-10-22

    Application No.: US14437789

    Filing date: 2012-10-26

    Applicant: Brainlab AG

    IPC classes: G06T7/20 G06T13/20

    Abstract: The invention relates to a data processing method of determining a change of an image of an anatomical body part of a patient's body, the method being executed by a computer and comprising the following steps: a) acquiring static medical image data comprising static medical image information describing the anatomical body part in a first anatomical spatial state of an anatomical vital spatial change of the anatomical body part; b) acquiring patient model data comprising patient model information describing a model body part corresponding to the anatomical body part, wherein the patient model information describes the model body part in a plurality of model spatial states of a model vital spatial change corresponding to the anatomical vital spatial change; c) determining spatial state mapping data comprising spatial state mapping information describing at least one of a first mapping from the model body part in a first one of the plurality of model spatial states to the model body part in a second, different one of the plurality of model spatial states, the first model spatial state corresponding to the first anatomical spatial state, and a second mapping from the model body part in the first model spatial state to the anatomical body part in the first anatomical spatial state; d) determining, based on the static medical image data and the spatial state mapping data, transformed medical image data comprising transformed medical image information describing the anatomical body part in a second anatomical spatial state of the anatomical vital spatial change, the second anatomical spatial state corresponding to the second model spatial state.


    Method for detecting human body motion in frames of a video sequence
    3.
    Granted patent
    Method for detecting human body motion in frames of a video sequence (expired)

    Publication No.: US5930379A

    Publication date: 1999-07-27

    Application No.: US876603

    Filing date: 1997-06-16

    IPC classes: G06K9/00 G06T7/20 G06T15/70

    Abstract: In a computerized method, a moving object is detected in a sequence of frames of a video of a scene. Each of the frames includes a plurality of pixels representing measured light intensity values at specific locations in the scene. The pixels are organized in a regularized pattern in a memory. The object is modeled as a branched kinematic chain composed of links connected at joints. The frames are iteratively segmented by assigning groups of pixels having like pixel motion to individual links, while estimating motion parameters for the groups of pixels assigned to the individual links until the segmented pixels and their motion parameters converge and can be identified with the moving object as modeled by the kinematic chain.

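    The iterative segmentation described above alternates between assigning pixels with like motion to individual links and re-estimating each link's motion parameters until convergence. The toy NumPy sketch below captures that alternation under simplifications that are not in the patent: dense optical flow is assumed to be precomputed, and each link is reduced to a single translational motion rather than a jointed kinematic chain.

    # Toy stand-in for the patent's iterative segmentation: a k-means-style loop
    # over per-pixel flow vectors. Assumes flow is a precomputed (H, W, 2) array.
    import numpy as np

    def segment_by_motion(flow: np.ndarray, num_links: int, iters: int = 20):
        vecs = flow.reshape(-1, 2).astype(np.float64)
        rng = np.random.default_rng(0)
        motions = vecs[rng.choice(len(vecs), num_links, replace=False)]  # initial link motions
        for _ in range(iters):
            dists = np.linalg.norm(vecs[:, None, :] - motions[None, :, :], axis=2)
            labels = dists.argmin(axis=1)                        # assign pixels to links
            for k in range(num_links):
                if np.any(labels == k):
                    motions[k] = vecs[labels == k].mean(axis=0)  # re-estimate link motion
        return labels.reshape(flow.shape[:2]), motions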

    Method and apparatus for three-dimensional, textured models from plural video images
    4.
    Granted patent
    Method and apparatus for three-dimensional, textured models from plural video images (expired)

    Publication No.: US5511153A

    Publication date: 1996-04-23

    Application No.: US183142

    Filing date: 1994-01-18

    IPC classes: G06T7/20 G06F15/00

    Abstract: A method and apparatus for generating three-dimensional, textured computer models from a series of video images of an object is disclosed. The invention operates by tracking a selected group of object features through a series of image frames and, based on changes in their relative positions, estimating parameters specifying camera focal length, translation and rotation, and the positions of the tracked features in the camera reference frame. After segmentation of the images into two-dimensional bounded regions that each correspond to a discrete surface component of the actual object, the texture contained in the various video frames is applied to these regions to produce a final three-dimensional model that is both geometrically and photometrically specified.

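    As a rough illustration of recovering camera motion and sparse 3D structure from tracked feature points, the NumPy sketch below uses the classic orthographic Tomasi-Kanade factorization rather than the perspective model (focal length, translation, rotation) the patent describes; the tracks array and all names are assumptions.

    # Illustrative sketch only: orthographic factorization of feature tracks into
    # per-frame camera motion and 3D feature positions (up to an affine ambiguity).
    # tracks is an assumed (num_frames, num_points, 2) array of image coordinates.
    import numpy as np

    def factorize_tracks(tracks: np.ndarray):
        centered = tracks - tracks.mean(axis=1, keepdims=True)     # remove per-frame centroid
        W = np.concatenate([centered[..., 0], centered[..., 1]])   # (2F, P) measurement matrix
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        motion = U[:, :3] * np.sqrt(s[:3])                         # (2F, 3) camera rows per frame
        shape3d = np.sqrt(s[:3])[:, None] * Vt[:3]                 # (3, P) feature positions
        return motion, shape3d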

    Visual target tracking
    7.
    Granted patent
    Visual target tracking (in force)

    Publication No.: US09039528B2

    Publication date: 2015-05-26

    Application No.: US13309306

    Filing date: 2011-12-01

    Applicant: Ryan M. Geiss

    Inventor: Ryan M. Geiss

    IPC classes: A63F13/00 G06T7/20

    Abstract: A method of tracking a target includes receiving an observed depth image of the target from a source and obtaining a posed model of the target. The model is rasterized into a synthesized depth image, and the pose of the model is adjusted based, at least in part, on differences between the observed depth image and the synthesized depth image.

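    The abstract describes a loop that rasterizes a posed model into a synthesized depth image and adjusts the pose from its difference with the observed depth image. The toy NumPy sketch below is not the patented implementation: the model is reduced to a point cloud splatted orthographically into a depth buffer, and only the z-translation of the pose is adjusted from the mean depth difference.

    # Toy sketch of rasterize-compare-adjust. All names and the orthographic
    # splatting are assumptions made for illustration.
    import numpy as np

    def rasterize(points: np.ndarray, pose_t: np.ndarray, shape) -> np.ndarray:
        """Splat translated model points (N, 3) into a depth buffer, keeping the
        nearest depth per pixel; x maps to columns, y to rows, z is depth."""
        depth = np.full(shape, np.inf)
        p = points + pose_t
        cols = np.clip(p[:, 0].astype(int), 0, shape[1] - 1)
        rows = np.clip(p[:, 1].astype(int), 0, shape[0] - 1)
        np.minimum.at(depth, (rows, cols), p[:, 2])
        return depth

    def refine_translation(observed, points, pose_t, steps: int = 10, lr: float = 0.5):
        """Nudge the z-translation toward the observed depth where both images have data."""
        for _ in range(steps):
            synth = rasterize(points, pose_t, observed.shape)
            valid = np.isfinite(synth) & np.isfinite(observed)
            if not valid.any():
                break
            pose_t = pose_t + np.array([0.0, 0.0, lr * (observed[valid] - synth[valid]).mean()])
        return pose_t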

    Systems and methods for tracking objects
    8.
    Granted patent
    Systems and methods for tracking objects (in force)

    Publication No.: US08971575B2

    Publication date: 2015-03-03

    Application No.: US13684451

    Filing date: 2012-11-23

    Applicant: Cyberlink Corp.

    IPC classes: G06K9/00 G06T7/20

    Abstract: Various embodiments are disclosed for performing object tracking. One embodiment is a system for tracking an object in a plurality of frames, comprising a probability map generator configured to generate a probability map by estimating probability values of pixels in the frame, wherein the probability of each pixel corresponds to a likelihood of the pixel being located within the object. The system further comprises a contour model generator configured to identify a contour model of the object based on a temporal prediction method, a contour weighting map generator configured to derive a contour weighting map based on thickness characteristics of the contour model, a tracking refinement module configured to refine the probability map according to weight values specified in the contour weighting map, and an object tracker configured to track a location of the object within the plurality of frames based on the refined probability map.

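    Of the modules listed in the abstract, the refinement step is the simplest to sketch: scale the per-pixel object probabilities by the contour weighting map and read off a tracked location. The NumPy sketch below is a hypothetical stand-in (a weighted centroid as the location estimate), not Cyberlink's system; prob_map and contour_weights are placeholder names.

    # Hypothetical refinement step: weight the probability map by the contour
    # weighting map, then report the weighted centroid as the object location.
    import numpy as np

    def refine_and_locate(prob_map: np.ndarray, contour_weights: np.ndarray):
        refined = prob_map * contour_weights          # down-weight unreliable contour regions
        total = refined.sum()
        if total == 0:
            return refined, None
        rows, cols = np.indices(refined.shape)
        center = (float((rows * refined).sum() / total),
                  float((cols * refined).sum() / total))
        return refined, center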

    Apparatus and method for tracking facial motion through a sequence of images
    9.
    Granted patent
    Apparatus and method for tracking facial motion through a sequence of images (expired)

    Publication No.: US5802220A

    Publication date: 1998-09-01

    Application No.: US574176

    Filing date: 1995-12-15

    IPC classes: G06K9/00 G06T7/20 G06F9/36

    Abstract: A system tracks human head and facial features over time by analyzing a sequence of images. The system provides descriptions of motion of both head and facial features between two image frames. These descriptions of motion are further analyzed by the system to recognize facial movement and expression. The system analyzes motion between two images using parameterized models of image motion. Initially, a first image in a sequence of images is segmented into a face region and a plurality of facial feature regions. A planar model is used to recover motion parameters that estimate motion between the segmented face region in the first image and a second image in the sequence of images. The second image is warped or shifted back towards the first image using the estimated motion parameters of the planar model, in order to model the facial features relative to the first image. An affine model and an affine model with curvature are used to recover motion parameters that estimate the image motion between the segmented facial feature regions and the warped second image. The recovered motion parameters of the facial feature regions represent the relative motions of the facial features between the first image and the warped image. The face region in the second image is tracked using the recovered motion parameters of the face region. The facial feature regions in the second image are tracked using both the recovered motion parameters for the face region and the motion parameters for the facial feature regions. The parameters describing the motion of the face and facial features are filtered to derive mid-level predicates that define facial gestures occurring between the two images. These mid-level predicates are evaluated over time to determine facial expression and gestures occurring in the image sequence.

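    The parameterized models of image motion mentioned in the abstract can be illustrated with a six-parameter affine flow estimated from linearized brightness constancy. The NumPy sketch below performs a single least-squares step on whole grayscale frames; the patent additionally uses a planar model, an affine model with curvature, per-region segmentation, and iterative warping, none of which are shown here.

    # Illustrative single-step affine motion estimate between two grayscale frames:
    # u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y, solved from Ix*u + Iy*v + It ~ 0.
    import numpy as np

    def estimate_affine_motion(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
        Iy, Ix = np.gradient(img1.astype(np.float64))                # spatial gradients
        It = img2.astype(np.float64) - img1.astype(np.float64)       # temporal difference
        ys, xs = np.indices(img1.shape)
        A = np.stack([Ix, Ix * xs, Ix * ys, Iy, Iy * xs, Iy * ys], axis=-1).reshape(-1, 6)
        b = -It.ravel()
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params                                                # (a0, a1, a2, a3, a4, a5)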