Online calibration of cameras
    21.
    Invention Grant

    Publication Number: US09866820B1

    Publication Date: 2018-01-09

    Application Number: US14321519

    Application Date: 2014-07-01

    CPC classification number: H04N13/246 G06K9/00281 H04N13/239

    Abstract: An electronic device can have two or more pairs of cameras capable of performing three-dimensional imaging. To provide accurate disparity information, these cameras should be sufficiently calibrated. Automatic calibration can be performed by periodically capturing images with a pair of front-facing cameras and locating matching facial or other feature points in corresponding images captured by those cameras. Correspondences can be detected between feature points; the corresponding feature points can be normalized, and outlier feature points rejected. A transformation matrix can be determined using at least a portion of the remaining feature points and used to determine rotation and translation parameters that correct for misalignment between the cameras. The calibration parameters can be refined or otherwise adjusted, and can be used or stored for use in correcting images subsequently captured by those cameras.
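    The outlier-rejection step described above can be illustrated with a minimal sketch. Assuming correspondences are matched point pairs from the two cameras, one simple (though not patent-specified) rule is to discard pairs whose horizontal disparity deviates too far from the median; `reject_outliers` and its `k` parameter are hypothetical names:

```python
import statistics

def reject_outliers(correspondences, k=3.0):
    """Keep matched point pairs whose disparity is within k MADs of the median.

    correspondences: list of ((xl, yl), (xr, yr)) matched feature points.
    Hypothetical helper illustrating outlier rejection; the patent does
    not specify this exact statistic.
    """
    disparities = [xl - xr for (xl, _), (xr, _) in correspondences]
    med = statistics.median(disparities)
    # Median absolute deviation; fall back to a tiny value when all
    # inlier disparities are identical, so exact matches survive.
    mad = statistics.median(abs(d - med) for d in disparities) or 1e-9
    return [c for c, d in zip(correspondences, disparities)
            if abs(d - med) <= k * mad]
```

    In a real pipeline the surviving pairs would feed a transformation-matrix estimate (e.g. via RANSAC over an essential matrix), from which rotation and translation corrections are recovered.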

    Preview streaming of video data
    25.
    Invention Grant (In Force)

    Publication Number: US09578279B1

    Publication Date: 2017-02-21

    Application Number: US14974800

    Application Date: 2015-12-18

    Abstract: A system and method for generating preview data from video data and using the preview data to select portions of the video data or determine an order with which to upload the video data. The system may sample video data to generate sampled video data and may identify portions of the sampled video data having complexity metrics exceeding a threshold. The system may upload a first portion of the video data corresponding to the identified portions while omitting a second portion of the video data. The system may determine an order with which to upload portions of the video data based on a complexity of the video data. Therefore, portions of the video data that may require additional processing after being uploaded may be prioritized and uploaded first. As a result, a latency between the video data being uploaded and a video summarization being received is reduced.

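    The selection-and-ordering logic the abstract describes can be sketched as follows; `upload_order`, the `(segment_id, complexity)` pair format, and the threshold are illustrative assumptions, not details from the patent:

```python
def upload_order(segments, threshold):
    """Select and order video segments for upload by complexity.

    segments: list of (segment_id, complexity) pairs. Segments whose
    complexity metric exceeds the threshold are kept (the rest are
    omitted), and the kept segments are uploaded highest-complexity
    first, so portions needing the most server-side processing arrive
    earliest.
    """
    selected = [s for s in segments if s[1] > threshold]
    return sorted(selected, key=lambda s: s[1], reverse=True)
```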

    Object identification through stereo association
    26.
    Invention Grant (In Force)

    Publication Number: US09298974B1

    Publication Date: 2016-03-29

    Application Number: US14307493

    Application Date: 2014-06-18

    Abstract: Various embodiments enable a primary user to be identified and tracked using stereo association and multiple tracking algorithms. For example, a face detection algorithm can be run on each image captured by a respective camera independently. Stereo association can be performed to match faces between cameras. If the faces are matched and a primary user is determined, a face pair is created and used as the first data point in memory for initializing object tracking. Further, features of a user's face can be extracted and the change in position of these features between images can determine what tracking method will be used for that particular frame.

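    The stereo-association step can be sketched under the assumption of rectified, horizontally aligned cameras, where matching faces lie on nearly the same image row; `associate_faces` and `max_dy` are hypothetical names, and the greedy nearest-row rule here is one simple choice among many:

```python
def associate_faces(left_faces, right_faces, max_dy=10):
    """Greedily pair face detections from two horizontally aligned cameras.

    left_faces/right_faces: lists of (x, y) face centers from each camera.
    For rectified stereo, corresponding faces should share (nearly) the
    same image row, so each left detection is paired with the closest
    unused right detection within max_dy pixels vertically.
    Returns (left_index, right_index) pairs.
    """
    pairs, used = [], set()
    for i, (_, ly) in enumerate(left_faces):
        best, best_dy = None, max_dy + 1
        for j, (_, ry) in enumerate(right_faces):
            dy = abs(ly - ry)
            if j not in used and dy < best_dy:
                best, best_dy = j, dy
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```

    A matched pair like this would serve as the first data point for initializing object tracking, as the abstract describes.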

    Generation of synthetic image data using three-dimensional models

    Publication Number: US10909349B1

    Publication Date: 2021-02-02

    Application Number: US16450499

    Application Date: 2019-06-24

    Abstract: Techniques are generally described for object detection in image data. First image data comprising a three-dimensional model representing an object may be received. First background image data comprising a first plurality of pixel values may be received. A first feature vector representing the three-dimensional model may be generated. A second feature vector representing the first plurality of pixel values of the first background image data may be generated. A first machine learning model may generate a transformed representation of the three-dimensional model using the first feature vector. First foreground image data comprising a two-dimensional representation of the transformed representation of the three-dimensional model may be generated. A frame of composite image data may be generated by combining the first foreground image data with the first background image data.
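    The final compositing step can be sketched in a minimal form, assuming the rendered foreground comes with a binary mask marking its pixels; `composite` is a hypothetical helper operating on 2-D lists of scalar values rather than real RGB frames:

```python
def composite(foreground, background, mask):
    """Combine rendered foreground pixels with a background frame.

    foreground/background: 2-D lists of pixel values of equal shape;
    mask: 2-D list of 0/1 flags marking which pixels come from the
    foreground. Real pipelines would blend multi-channel images and
    often soft (alpha) masks; this keeps only the core idea.
    """
    return [[f if m else b
             for f, b, m in zip(frow, brow, mrow)]
            for frow, brow, mrow in zip(foreground, background, mask)]
```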

    Frame selection of video data
    29.
    Invention Grant

    Publication Number: US10482925B1

    Publication Date: 2019-11-19

    Application Number: US15783584

    Application Date: 2017-10-13

    Abstract: A system and method for selecting portions of video data from preview video data is provided. The system may extract image features from the preview video data and discard video frames associated with poor image quality based on the image features. The system may determine similarity scores between individual video frames and corresponding transition costs, and may identify transition points in the preview video data based on the similarity scores and/or transition costs. The system may select portions of the video data for further processing based on the transition points and the image features. By selecting portions of the video data, the system may reduce the bandwidth consumption, processing burden, and/or latency associated with uploading the video data or performing further processing.
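    The transition-point detection can be sketched as a simple threshold test on consecutive-frame similarity scores; `find_transitions` and the threshold value are assumptions, and the per-transition costs mentioned in the abstract are omitted for brevity:

```python
def find_transitions(similarities, threshold=0.5):
    """Return frame indices where a scene transition likely occurs.

    similarities[i] is a similarity score between frame i and frame
    i + 1; a score below the threshold suggests a cut, making i + 1 a
    candidate boundary for selecting portions of the video.
    """
    return [i + 1 for i, s in enumerate(similarities) if s < threshold]
```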

    Self-validating structured light depth sensor system

    Publication Number: US10282857B1

    Publication Date: 2019-05-07

    Application Number: US15634772

    Application Date: 2017-06-27

    Abstract: Devices and techniques are described for validation of depth data. A first pattern and second pattern may be projected. A first image of the first pattern and a second image of the second pattern may be captured. A first code word may be determined for a first pixel address based on a first value of the first pixel address in the first pattern and a second value of the first pixel address in the second pattern. A third pattern may be projected. A second code word may be determined for the first pixel address based on a third value of the first pixel address in the third pattern and the second value of the first pixel address in the second pattern. A confidence value of the first pixel address may be assigned based on the first code word and the second code word corresponding to the same projector column.
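    The code-word comparison can be sketched as follows, assuming plain binary column patterns (real systems often use Gray codes or more elaborate pattern sets); `decode_codeword` and `validate_pixel` are hypothetical names illustrating the self-validation idea:

```python
def decode_codeword(bits):
    """Decode a binary structured-light bit sequence into a column index.

    bits: per-pattern 0/1 observations at one pixel address, most
    significant bit first.
    """
    col = 0
    for b in bits:
        col = (col << 1) | b
    return col

def validate_pixel(code_a, code_b):
    """Assign a confidence value from two independently decoded code words.

    code_a and code_b are projector-column indices decoded from two
    overlapping pattern sets at the same pixel address; agreement on
    the same projector column yields high confidence.
    """
    return 1.0 if code_a == code_b else 0.0
```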
