Video processing
    1.
    Granted patent
    Video processing (In force)

    Publication number: US08824801B2

    Publication date: 2014-09-02

    Application number: US12122129

    Filing date: 2008-05-16

    Abstract: A method and apparatus for processing video is disclosed. In an embodiment, image features of an object within a frame of video footage are identified and the movement of each of these features is tracked throughout the video footage to determine its trajectory (track). The tracks are analyzed and their maximum separation is determined and used to derive a texture map, which is in turn interpolated to provide an unwrap mosaic for the object. The process may be iterated to provide an improved mosaic. Effects or artwork can be overlaid on this mosaic, and the edited mosaic can be warped via the mapping and combined with layers of the original footage. The effect or artwork may move with the object's surface.
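    The track analysis described above can be sketched minimally: given per-frame 2D positions of tracked features, find the largest separation each feature pair attains over the footage, a quantity the abstract uses to size the texture map. This is an illustrative sketch only; the function name and data layout are assumptions, not the patented pipeline.

    ```python
    import numpy as np

    def max_track_separation(tracks):
        """Given feature tracks shaped (frames, features, 2), return an
        (N, N) matrix of the largest 2D separation each feature pair
        reaches across all frames."""
        tracks = np.asarray(tracks, dtype=float)        # (F, N, 2)
        diffs = tracks[:, :, None, :] - tracks[:, None, :, :]  # (F, N, N, 2)
        dists = np.linalg.norm(diffs, axis=-1)          # (F, N, N)
        return dists.max(axis=0)                        # max over frames

    # two features drifting apart and back over three frames
    tracks = [
        [[0.0, 0.0], [1.0, 0.0]],
        [[0.0, 0.0], [2.0, 0.0]],
        [[0.0, 0.0], [1.5, 0.0]],
    ]
    sep = max_track_separation(tracks)
    print(sep[0, 1])  # 2.0 -- the widest separation observed
    ```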


    Video Processing
    2.
    Patent application
    Video Processing (In force)

    Publication number: US20090285544A1

    Publication date: 2009-11-19

    Application number: US12122129

    Filing date: 2008-05-16

    IPC classification: H04N5/93

    Abstract: A method and apparatus for processing video is disclosed. In an embodiment, image features of an object within a frame of video footage are identified and the movement of each of these features is tracked throughout the video footage to determine its trajectory (track). The tracks are analyzed and their maximum separation is determined and used to derive a texture map, which is in turn interpolated to provide an unwrap mosaic for the object. The process may be iterated to provide an improved mosaic. Effects or artwork can be overlaid on this mosaic, and the edited mosaic can be warped via the mapping and combined with layers of the original footage. The effect or artwork may move with the object's surface.


    Predicting joint positions
    3.
    Granted patent
    Predicting joint positions (In force)

    Publication number: US08571263B2

    Publication date: 2013-10-29

    Application number: US13050858

    Filing date: 2011-03-17

    IPC classification: G06K9/00

    Abstract: Predicting joint positions is described, for example, to find joint positions of humans or animals (or parts thereof) in an image to control a computer game or for other applications. In an embodiment image elements of a depth image make joint position votes so that for example, an image element depicting part of a torso may vote for a position of a neck joint, a left knee joint and a right knee joint. A random decision forest may be trained to enable image elements to vote for the positions of one or more joints and the training process may use training images of bodies with specified joint positions. In an example a joint position vote is expressed as a vector representing a distance and a direction of a joint position from an image element making the vote. The random decision forest may be trained using a mixture of objectives.
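    The voting scheme above can be sketched: each image element casts a relative offset vote toward a joint, and the joint estimate aggregates the absolute positions those votes imply. The full method aggregates with mean-shift on forest-weighted votes; the weighted mean below is a deliberate simplification, and all names are illustrative.

    ```python
    import numpy as np

    def aggregate_joint_votes(pixel_coords, offset_votes, weights=None):
        """Each depth-image element at pixel_coords[i] votes for a joint
        at pixel_coords[i] + offset_votes[i]; return the weighted mean
        of the voted positions as the joint estimate."""
        pixel_coords = np.asarray(pixel_coords, dtype=float)
        offset_votes = np.asarray(offset_votes, dtype=float)
        positions = pixel_coords + offset_votes   # absolute voted positions
        if weights is None:
            weights = np.ones(len(positions))
        w = np.asarray(weights, dtype=float)
        return (positions * w[:, None]).sum(axis=0) / w.sum()

    # three torso pixels voting for a neck joint
    pixels  = [[10, 20], [12, 20], [14, 20]]
    offsets = [[ 2,  0], [ 0,  0], [-2,  0]]
    print(aggregate_joint_votes(pixels, offsets))  # [12. 20.]
    ```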


    Moving object segmentation using depth images
    4.
    Granted patent
    Moving object segmentation using depth images (In force)

    Publication number: US08401225B2

    Publication date: 2013-03-19

    Application number: US13017626

    Filing date: 2011-01-31

    IPC classification: G06K9/00

    Abstract: Moving object segmentation using depth images is described. In an example, a moving object is segmented from the background of a depth image of a scene received from a mobile depth camera. A previous depth image of the scene is retrieved, and compared to the current depth image using an iterative closest point algorithm. The iterative closest point algorithm includes a determination of a set of points that correspond between the current depth image and the previous depth image. During the determination of the set of points, one or more outlying points are detected that do not correspond between the two depth images, and the image elements at these outlying points are labeled as belonging to the moving object. In examples, the iterative closest point algorithm is executed as part of an algorithm for tracking the mobile depth camera, and hence the segmentation does not add substantial additional computational complexity.
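    The outlier-labeling step in the correspondence search can be sketched as follows: during nearest-neighbour matching between two point sets, points whose residual exceeds a threshold fail to correspond and are labeled as the moving object. This is a minimal brute-force sketch under assumed names and a made-up threshold, not the patented ICP pipeline.

    ```python
    import numpy as np

    def label_moving_points(prev_pts, curr_pts, thresh=0.1):
        """Label points in the current depth image whose nearest
        neighbour in the previous image is farther than `thresh`:
        they have no correspondence and belong to the moving object."""
        prev_pts = np.asarray(prev_pts, dtype=float)
        curr_pts = np.asarray(curr_pts, dtype=float)
        # brute-force nearest-neighbour distances (adequate for a sketch)
        d = np.linalg.norm(curr_pts[:, None, :] - prev_pts[None, :, :], axis=-1)
        nearest = d.min(axis=1)
        return nearest > thresh   # True = outlier = moving object

    prev = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
    curr = [[0.0, 0.0], [1.0, 0.05], [2.5, 0.0]]  # third point has moved
    print(label_moving_points(prev, curr))  # [False False  True]
    ```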


    Generating computer models of 3D objects
    5.
    Granted patent
    Generating computer models of 3D objects (In force)

    Publication number: US09053571B2

    Publication date: 2015-06-09

    Application number: US13154288

    Filing date: 2011-06-06

    Abstract: Generating computer models of 3D objects is described. In one example, depth images of an object captured by a substantially static depth camera are used to generate the model, which is stored in a memory device in a three-dimensional volume. Portions of the depth image determined to relate to the background are removed to leave a foreground depth image. The position and orientation of the object in the foreground depth image is tracked by comparison to a preceding depth image, and the foreground depth image is integrated into the volume by using the position and orientation to determine where to add data derived from the foreground depth image into the volume. In examples, the object is hand-rotated by a user before the depth camera. Hands that occlude the object are integrated out of the model as they do not move in sync with the object due to re-gripping.
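    The background-removal step above can be sketched with a simple depth gate: keep only pixels within a working range in front of the static camera and zero the rest, leaving the foreground depth image. The range bounds and names here are illustrative assumptions; the patent's actual background determination may be considerably richer.

    ```python
    import numpy as np

    def extract_foreground(depth, near=0.4, far=0.9):
        """Zero out background pixels of a depth image (metres),
        keeping only depths inside [near, far] in front of the camera."""
        depth = np.asarray(depth, dtype=float)
        return np.where((depth >= near) & (depth <= far), depth, 0.0)

    frame = [[0.5, 2.0],
             [0.6, 3.0]]
    print(extract_foreground(frame))
    # the far wall at 2.0 m and 3.0 m is zeroed; the object depths survive
    ```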


    Three-dimensional environment reconstruction
    8.
    Granted patent
    Three-dimensional environment reconstruction (In force)

    Publication number: US08587583B2

    Publication date: 2013-11-19

    Application number: US13017690

    Filing date: 2011-01-31

    IPC classification: G06T17/05 G06T19/00

    CPC classification: G06T17/00 G06T2200/08

    Abstract: Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value.
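    The per-thread update loop above resembles a truncated-signed-distance (TSDF) integration: a thread walks one column of voxels, one per plane, compares each voxel's distance from the camera against the depth measured at its projected pixel, and folds the truncated difference into a weighted running average. The sketch below serializes one such column in Python; the truncation scheme, names, and values are illustrative assumptions, not the patented implementation.

    ```python
    import numpy as np

    def update_voxel_column(tsdf, weights, voxel_depths, measured_depth,
                            trunc=0.1):
        """Fold one depth measurement into a column of voxels:
        tsdf[z] holds the stored value, weights[z] the observation count,
        voxel_depths[z] the voxel's distance from the camera along the ray."""
        for z, voxel_depth in enumerate(voxel_depths):
            sdf = measured_depth - voxel_depth     # +ve in front of surface
            if sdf < -trunc:
                continue                           # far behind surface: skip
            value = min(1.0, sdf / trunc)          # truncate to [-1, 1]
            w = weights[z]
            tsdf[z] = (tsdf[z] * w + value) / (w + 1)  # running average
            weights[z] = w + 1
        return tsdf, weights

    tsdf = np.zeros(4)
    weights = np.zeros(4)
    voxel_depths = [0.2, 0.4, 0.6, 0.8]  # voxel distances along one ray
    tsdf, weights = update_voxel_column(tsdf, weights, voxel_depths,
                                        measured_depth=0.55)
    print(np.round(tsdf, 2))  # front voxels 1.0, the crossing -0.5, last skipped
    ```

    In the patent's GPU formulation each column runs in its own thread, so the whole plane of columns updates in parallel rather than in a Python loop.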


    Three-Dimensional Environment Reconstruction
    10.
    Patent application
    Three-Dimensional Environment Reconstruction (In force)

    Publication number: US20120194516A1

    Publication date: 2012-08-02

    Application number: US13017690

    Filing date: 2011-01-31

    IPC classification: G06T17/00

    CPC classification: G06T17/00 G06T2200/08

    Abstract: Three-dimensional environment reconstruction is described. In an example, a 3D model of a real-world environment is generated in a 3D volume made up of voxels stored on a memory device. The model is built from data describing a camera location and orientation, and a depth image with pixels indicating a distance from the camera to a point in the environment. A separate execution thread is assigned to each voxel in a plane of the volume. Each thread uses the camera location and orientation to determine a corresponding depth image location for its associated voxel, determines a factor relating to the distance between the associated voxel and the point in the environment at the corresponding location, and updates a stored value at the associated voxel using the factor. Each thread iterates through an equivalent voxel in the remaining planes of the volume, repeating the process to update the stored value.
