Real-time self collision and obstacle avoidance
    1.
    Granted Patent
    Real-time self collision and obstacle avoidance (In Force)

    Publication No.: US08170287B2

    Publication Date: 2012-05-01

    Application No.: US12257664

    Filing Date: 2008-10-24

    IPC Class: G06K9/00

    CPC Class: G11C11/5621 G11C16/349

    Abstract: A system, method, and computer program product for avoiding collision of a body segment with unconnected structures in an articulated system are described. A virtual surface is constructed surrounding the actual surface of the body segment. Distances between the body segment and unconnected structures are monitored. In response to an unconnected structure penetrating the virtual surface, a redirected joint motion that prevents the structure from penetrating deeper into the virtual surface is determined. The body segment is redirected based on the redirected joint motion to avoid colliding with the unconnected structure.

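The redirection described in the abstract can be sketched as a velocity projection: when an obstacle is inside the virtual surface, the component of the commanded motion that would carry it deeper is removed. This is a simplified illustration (sphere-shaped segment, all names assumed), not the patented joint-space method:

```python
import math

def avoid_collision(segment, obstacle, actual_radius, margin, velocity):
    # Virtual surface = actual surface inflated by `margin` (assumed sphere).
    diff = [s - o for s, o in zip(segment, obstacle)]
    dist = math.sqrt(sum(d * d for d in diff))
    if dist >= actual_radius + margin:
        return list(velocity)          # obstacle outside the virtual surface
    normal = [d / dist for d in diff]  # unit vector from obstacle to segment
    approach = sum(v * n for v, n in zip(velocity, normal))
    if approach < 0:                   # motion would deepen the penetration
        # Remove the penetrating component so the obstacle goes no deeper.
        return [v - approach * n for v, n in zip(velocity, normal)]
    return list(velocity)
```

The patent works in joint space (a redirected joint motion); this sketch applies the equivalent idea at the task level for brevity.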

    Target orientation estimation using depth sensing
    2.
    Granted Patent
    Target orientation estimation using depth sensing (In Force)

    Publication No.: US08031906B2

    Publication Date: 2011-10-04

    Application No.: US12572619

    Filing Date: 2009-10-02

    IPC Class: G06K9/00

    Abstract: A system for estimating the orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected, with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.

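The classifier step described above can be sketched as a nearest-neighbour lookup against the stored training set. This is a minimal illustration, not the patented classifier; the function name, the flat-list image representation, and the sum-of-squared-differences metric are all assumptions:

```python
def estimate_orientation(frame, training_set):
    # training_set: orientation label -> stored depth image (flat list of values).
    # Return the label whose stored image is closest to `frame` by
    # sum-of-squared-differences.
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda label: ssd(frame, training_set[label]))
```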

    Controlled human pose estimation from depth image streams
    3.
    Patent Application
    Controlled human pose estimation from depth image streams (In Force)

    Publication No.: US20090175540A1

    Publication Date: 2009-07-09

    Application No.: US12317369

    Filing Date: 2008-12-19

    IPC Class: G06K9/46

    Abstract: A system, method, and computer program product for estimating upper-body human pose are described. According to one aspect, a plurality of anatomical features are detected in a depth image of the human actor. The method detects a head-neck-torso (H-N-T) template in the depth image and detects the features in the depth image based on the H-N-T template. A pose of a human model is estimated based on the detected features and the kinematic constraints of the human model.

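The final step, combining detected features with kinematic constraints, can be illustrated by the simplest such constraint: clamping each estimated joint angle to the model's limits. A hypothetical sketch; the names, the angle representation, and the clamping rule are assumptions, not the patented method:

```python
def constrain_pose(estimated_angles, joint_limits):
    # Project an estimated pose back into the model's feasible range by
    # clamping each joint angle to its (lo, hi) limits.
    return {joint: max(lo, min(hi, estimated_angles[joint]))
            for joint, (lo, hi) in joint_limits.items()}
```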

    Target orientation estimation using depth sensing
    4.
    Patent Application
    Target orientation estimation using depth sensing (In Force)

    Publication No.: US20050058337A1

    Publication Date: 2005-03-17

    Application No.: US10868707

    Filing Date: 2004-06-14

    Abstract: A system for estimating the orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected, with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.


    Human pose estimation and tracking using label assignment
    5.
    Granted Patent
    Human pose estimation and tracking using label assignment (In Force)

    Publication No.: US08351646B2

    Publication Date: 2013-01-08

    Application No.: US11869435

    Filing Date: 2007-10-09

    IPC Class: G06K9/00

    Abstract: A method and apparatus for estimating poses of a subject by grouping data points generated from a depth image into groups representing labeled parts of the subject, and then fitting a model representing the subject to the data points using that grouping. The data points are grouped into segments based on their proximity, and constraint conditions are then used to assign the segments to the labeled parts. The model is fitted to the data points using the assignment of the data points to the labeled parts.

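The proximity-based grouping described in the abstract can be sketched as single-linkage grouping: two data points fall in the same segment when a chain of points, each within a distance threshold of the next, connects them. A minimal illustration with an assumed name and threshold parameter, not the patented procedure:

```python
def group_points(points, max_gap):
    # Group data points into segments by proximity: a point joins a segment
    # if it lies within `max_gap` of any point already in that segment.
    def close(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) <= max_gap ** 2
    segments = []
    unassigned = list(points)
    while unassigned:
        segment = [unassigned.pop()]
        changed = True
        while changed:  # grow the segment until no nearby point remains
            changed = False
            for p in list(unassigned):
                if any(close(p, q) for q in segment):
                    segment.append(p)
                    unassigned.remove(p)
                    changed = True
        segments.append(segment)
    return segments
```

The patent then assigns these segments to labeled body parts using constraint conditions; that step is omitted here.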

    TARGET ORIENTATION ESTIMATION USING DEPTH SENSING
    6.
    Patent Application
    TARGET ORIENTATION ESTIMATION USING DEPTH SENSING (In Force)

    Publication No.: US20100034427A1

    Publication Date: 2010-02-11

    Application No.: US12572619

    Filing Date: 2009-10-02

    IPC Class: G06K9/00

    Abstract: A system for estimating the orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected, with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.


    Pose estimation based on critical point analysis
    7.
    Granted Patent
    Pose estimation based on critical point analysis (In Force)

    Publication No.: US07317836B2

    Publication Date: 2008-01-08

    Application No.: US11378573

    Filing Date: 2006-03-17

    Abstract: Methods and systems for estimating a pose of a subject. The subject can be a human, an animal, a robot, or the like. A camera receives depth information associated with the subject; a pose estimation module determines a pose or action of the subject from images; and an interaction module outputs a response to the perceived pose or action. The pose estimation module separates portions of the image containing the subject into classified and unclassified portions. The portions can be segmented using k-means clustering. The classified portions can be known objects, such as a head and a torso, that are tracked across the images. The unclassified portions are swept across the x and y axes to identify local minima and local maxima. The critical points are derived from these local minima and maxima. Potential joint sections are identified by connecting various critical points, and the joint sections having sufficient probability of corresponding to an object on the subject are selected.

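The sweep for local minima and maxima can be illustrated on a one-dimensional profile. This sketch assumes the unclassified region has been reduced to a per-scanline extent profile; the name and representation are assumptions, not the patented analysis:

```python
def critical_points(profile):
    # Sweep a 1-D profile (e.g. subject extent per scanline) and return the
    # indices of local minima and local maxima; these interior extrema serve
    # as candidate critical points for forming joint sections.
    minima, maxima = [], []
    for i in range(1, len(profile) - 1):
        if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]:
            minima.append(i)
        elif profile[i] > profile[i - 1] and profile[i] > profile[i + 1]:
            maxima.append(i)
    return minima, maxima
```

The patent performs this sweep along both the x and y axes and then connects critical points into candidate joint sections; this sketch shows only the one-axis extrema search.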

    Target orientation estimation using depth sensing
    10.
    Granted Patent
    Target orientation estimation using depth sensing (In Force)

    Publication No.: US07620202B2

    Publication Date: 2009-11-17

    Application No.: US10868707

    Filing Date: 2004-06-14

    IPC Class: G06K9/00 H04N5/225

    Abstract: A system for estimating the orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected, with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.
