Robust scale estimation in real-time monocular SFM for autonomous driving
    44.
    Granted patent (in force)

    Publication number: US09189689B2

    Publication date: 2015-11-17

    Application number: US14451280

    Application date: 2014-08-04

    CPC classification number: G06K9/00791 G06K9/46 G06K2009/4666

    Abstract: A method for performing three-dimensional (3D) localization that requires only a single camera, including: capturing images from only one camera; generating a cue combination from sparse features, dense stereo, and object bounding boxes; correcting for scale in monocular structure from motion (SFM) by using the cue combination to estimate a ground plane; and performing localization by combining SFM, the ground plane, and object bounding boxes to produce a 3D object localization.

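    The scale-correction step can be illustrated with a short sketch. The Python code below uses hypothetical function names and an assumed, known camera height above the road; it shows the general idea of fusing per-cue ground-plane estimates and rescaling the up-to-scale SFM translation so that the estimated plane height matches the calibrated camera height. It is a minimal sketch of the technique named in the abstract, not the patented algorithm.

```python
import numpy as np

# Assumption: the true camera height above the road is known from the vehicle
# rig calibration; the ground plane estimated from the cue combination is
# expressed as (unit normal n, distance d) in the camera frame.
CAMERA_HEIGHT_M = 1.5  # hypothetical rig-specific constant

def fuse_ground_plane_cues(planes, weights):
    """Combine per-cue ground-plane estimates (n, d) with scalar confidences."""
    n = np.zeros(3)
    d = 0.0
    for (ni, di), wi in zip(planes, weights):
        n += wi * np.asarray(ni, dtype=float)
        d += wi * di
    n /= np.linalg.norm(n)
    return n, d / sum(weights)

def rescale_translation(t_sfm, plane_distance):
    """Rescale the up-to-scale SFM translation so that the estimated
    ground-plane distance matches the known camera height."""
    scale = CAMERA_HEIGHT_M / plane_distance
    return scale * np.asarray(t_sfm, dtype=float)

# Illustrative example: three cues (sparse features, dense stereo, object
# bounding boxes) each vote for a plane; the confidences are made up.
planes = [((0.0, 1.0, 0.0), 1.42), ((0.0, 0.98, 0.2), 1.55), ((0.0, 1.0, 0.05), 1.48)]
n, d = fuse_ground_plane_cues(planes, weights=[0.5, 0.3, 0.2])
t_metric = rescale_translation([0.1, 0.0, 1.0], d)
```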

    Shape from Motion for Unknown, Arbitrary Lighting and Reflectance
    45.
    Patent application (in force)

    Publication number: US20140132727A1

    Publication date: 2014-05-15

    Application number: US14073794

    Application date: 2013-11-06

    CPC classification number: G06T7/0055 G06T7/514 G06T7/55

    Abstract: Systems and methods are disclosed for determining three-dimensional (3D) shape by capturing, with a camera, a plurality of images of an object in differential motion; deriving a general relation that relates spatial and temporal image derivatives to BRDF derivatives; exploiting rank deficiency to eliminate BRDF terms and recover depth or normals for directional lighting; and using a depth-normal-BRDF relation to recover depth or normals for unknown, arbitrary lighting.

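    As a rough illustration of the rank-deficiency step, the Python sketch below stacks linear equations that mix depth unknowns with unknown BRDF-derivative terms, then multiplies by a left null space of the BRDF coefficient block to cancel those terms before solving for depth in a least-squares sense. The matrix names and the synthetic toy data are assumptions for illustration; this is a generic sketch, not the derivation claimed in the application.

```python
import numpy as np

# Each frame contributes one linear equation A_depth @ z + B_brdf @ beta = b,
# where z holds the depth-related unknowns and beta the unknown BRDF terms.
# Because B_brdf is rank deficient, rows N with N @ B_brdf = 0 eliminate beta.

def eliminate_brdf_and_solve(A_depth, B_brdf, b):
    """Eliminate the BRDF unknowns via a left null space and solve for depth."""
    _, s, vt = np.linalg.svd(B_brdf.T)
    rank = int(np.sum(s > 1e-10))
    N = vt[rank:]                     # rows spanning the left null space of B_brdf
    A_reduced = N @ A_depth          # BRDF terms are cancelled here
    b_reduced = N @ b
    z, *_ = np.linalg.lstsq(A_reduced, b_reduced, rcond=None)
    return z

# Toy example: 5 synthetic derivative equations, 2 depth unknowns, and 2 BRDF
# unknowns whose coefficient block is rank 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
B = rng.standard_normal((5, 1)) @ rng.standard_normal((1, 2))
z_true = np.array([1.0, -0.5])
beta_true = np.array([0.3, 0.2])
b = A @ z_true + B @ beta_true
print(eliminate_brdf_and_solve(A, B, b))  # recovers z_true up to numerics
```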

    VIEW SYNTHESIS FOR SELF-DRIVING
    47.
    Patent application

    Publication number: US20250118009A1

    Publication date: 2025-04-10

    Application number: US18903348

    Application date: 2024-10-01

    Abstract: A computer-implemented method for synthesizing an image includes capturing data from a scene and fusing grid-based representations of the scene from different encodings to inherit beneficial properties of the different encodings. The encodings include a Lidar encoding and a high-definition map encoding. Rays are rendered from the fused grid-based representations. A density and a color are determined for points along the rays. Volume rendering is employed for the rays with the density and color. An image is synthesized from the volume-rendered rays with the density and the color.
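    A minimal sketch of the rendering pipeline described above follows. It fuses features sampled from two grid-based encodings at each ray sample, decodes a density and a color, and alpha-composites along the ray. The grid shapes, the toy decoder, the concatenation-based fusion, and the simple cell lookup are assumptions for illustration, not details from the application.

```python
import numpy as np

def grid_lookup(grid, p):
    """Simple cell lookup standing in for trilinear interpolation
    (grid: D,H,W,C feature volume; p in [0,1]^3)."""
    idx = np.clip((p * (np.array(grid.shape[:3]) - 1)).astype(int), 0, None)
    return grid[idx[0], idx[1], idx[2]]

def render_ray(origin, direction, lidar_grid, hdmap_grid, decoder, n_samples=32):
    """Fuse the two encodings per sample, decode (density, color), and composite."""
    ts = np.linspace(0.05, 1.0, n_samples)
    densities, colors = [], []
    for t in ts:
        p = origin + t * direction
        feat = np.concatenate([grid_lookup(lidar_grid, p),
                               grid_lookup(hdmap_grid, p)])  # fused representation
        sigma, rgb = decoder(feat)
        densities.append(sigma)
        colors.append(rgb)
    # Standard volume rendering: alpha compositing along the ray.
    deltas = np.diff(ts, prepend=0.0)
    alphas = 1.0 - np.exp(-np.array(densities) * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * transmittance
    return (weights[:, None] * np.array(colors)).sum(axis=0)

# Toy decoder and random feature grids, for illustration only.
decoder = lambda f: (float(np.abs(f).mean()), np.tanh(f[:3]))
lidar_grid = np.random.rand(8, 8, 8, 4)
hdmap_grid = np.random.rand(8, 8, 8, 4)
pixel_rgb = render_ray(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]),
                       lidar_grid, hdmap_grid, decoder)
```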

    AUTOMATIC MULTI-MODALITY SENSOR CALIBRATION WITH NEAR-INFRARED IMAGES

    Publication number: US20250117029A1

    Publication date: 2025-04-10

    Application number: US18905280

    Application date: 2024-10-03

    Abstract: Systems and methods for automatic multi-modality sensor calibration with near-infrared (NIR) images. Image keypoints from collected images and NIR keypoints from NIR images can be detected. A deep-learning-based neural network that learns relation graphs between the image keypoints and the NIR keypoints can match the image keypoints and the NIR keypoints. Three-dimensional (3D) points from 3D point cloud data can be filtered based on corresponding 3D points from the NIR keypoints (NIR-to-3D points) to obtain filtered NIR-to-3D points. An extrinsic calibration can be optimized based on a reprojection error computed from the filtered NIR-to-3D points to obtain an optimized extrinsic calibration for an autonomous entity control system. An entity can be controlled by employing the optimized extrinsic calibration for the autonomous entity control system.
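    The extrinsic-calibration step can be illustrated with a standard reprojection-error refinement. In the Python sketch below, the camera intrinsics, the matched 2D/3D points, and the axis-angle parameterization of the rotation are assumptions for illustration; the optimizer simply minimizes the reprojection residuals of the matched points over a rotation and translation, as a generic stand-in for the optimization named in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_residuals(params, pts3d, pts2d, K_intr):
    """Residuals between projected 3D points and their matched 2D keypoints."""
    R, t = rodrigues(params[:3]), params[3:]
    cam = (R @ pts3d.T).T + t                 # 3D points in the camera frame
    proj = (K_intr @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]         # perspective division
    return (proj - pts2d).ravel()

def optimize_extrinsics(pts3d, pts2d, K_intr, init=np.zeros(6)):
    result = least_squares(reprojection_residuals, init, args=(pts3d, pts2d, K_intr))
    return rodrigues(result.x[:3]), result.x[3:]

# Toy data: hypothetical intrinsics and four matched points in front of the camera.
K_intr = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts3d = np.array([[0.1, 0.2, 5.0], [-0.3, 0.1, 6.0], [0.4, -0.2, 4.0], [0.0, 0.0, 7.0]])
pts2d = (K_intr @ pts3d.T).T
pts2d = pts2d[:, :2] / pts2d[:, 2:3]
R_opt, t_opt = optimize_extrinsics(pts3d, pts2d, K_intr)
```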

    GENERATING ADVERSARIAL DRIVING SCENARIOS FOR AUTONOMOUS VEHICLES

    Publication number: US20250115278A1

    Publication date: 2025-04-10

    Application number: US18905695

    Application date: 2024-10-03

    Abstract: Systems and methods for generating adversarial driving scenarios for autonomous vehicles. An artificial intelligence model can compute an adversarial loss function by minimizing the distance between predicted adversarially perturbed trajectories and corresponding generated neighbor future trajectories from input data. A traffic violation loss function can be computed based on observed adversarial agents adhering to driving rules from the input data. A comfort loss function can be computed based on the predicted driving characteristics of adversarial vehicles relevant to the comfort of hypothetical passengers from the input data. A planner module can be trained for autonomous vehicles based on a combined loss function of the adversarial loss function, the traffic violation loss function, and the comfort loss function to generate adversarial driving scenarios. An autonomous vehicle can be controlled based on trajectories generated in the adversarial driving scenarios.
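    A minimal sketch of the combined objective is shown below. The individual loss definitions, the speed limit, the comfort threshold, and the weights are illustrative assumptions rather than the formulation claimed in the application.

```python
import numpy as np

def adversarial_loss(pred_adv_traj, neighbor_future_traj):
    """Mean distance between the predicted perturbed trajectory and the generated
    neighbor future trajectory; minimizing it drives the adversary toward conflict."""
    return float(np.mean(np.linalg.norm(pred_adv_traj - neighbor_future_traj, axis=-1)))

def traffic_violation_loss(speeds, speed_limit):
    """Penalize the adversarial agent for breaking a driving rule (here: speeding)."""
    return float(np.mean(np.maximum(speeds - speed_limit, 0.0)))

def comfort_loss(accelerations, max_comfortable_accel=2.0):
    """Penalize accelerations that hypothetical passengers would find uncomfortable."""
    return float(np.mean(np.maximum(np.abs(accelerations) - max_comfortable_accel, 0.0)))

def combined_loss(pred_adv_traj, neighbor_future_traj, speeds, accels,
                  w_adv=1.0, w_rule=0.5, w_comfort=0.1):
    """Weighted sum of the three terms used to shape adversarial scenarios."""
    return (w_adv * adversarial_loss(pred_adv_traj, neighbor_future_traj)
            + w_rule * traffic_violation_loss(speeds, speed_limit=13.9)  # ~50 km/h
            + w_comfort * comfort_loss(accels))
```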

    Semantic image capture fault detection

    Publication number: US12205356B2

    Publication date: 2025-01-21

    Application number: US18188766

    Application date: 2023-03-23

    Abstract: Methods and systems for detecting faults include capturing an image of a scene using a camera. The image is embedded using a segmentation model that includes an image branch, having an image embedding layer that embeds images into a joint latent space, and a text branch, having a text embedding layer that embeds text into the joint latent space. Semantic information is generated for a region of the image corresponding to a predetermined static object using the embedded image. A fault of the camera is identified based on a discrepancy between that semantic information and the known semantic information of the predetermined static object. The fault of the camera is corrected.
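    The fault check can be illustrated with a generic joint-embedding comparison. In the Python sketch below, the image region covering a known static object is embedded by an assumed image branch and compared against the text embedding of that object's label; a low similarity flags a possible camera fault. The embedding stand-ins, the example label, and the threshold are hypothetical and do not describe the patented segmentation model.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def detect_camera_fault(embed_image_region, embed_text, region_pixels,
                        expected_label, threshold=0.25):
    """Compare the embedded image region against the text embedding of the object
    known to occupy that region (e.g. a fixed traffic sign); low similarity
    suggests the camera output no longer matches the scene."""
    region_vec = embed_image_region(region_pixels)  # image branch -> joint space
    label_vec = embed_text(expected_label)          # text branch  -> joint space
    similarity = cosine_similarity(region_vec, label_vec)
    return similarity < threshold, similarity

# Toy stand-ins for the two embedding branches (random projections only).
rng = np.random.default_rng(1)
proj = rng.standard_normal((16, 16))
embed_image_region = lambda pixels: proj @ np.resize(np.asarray(pixels, float).ravel(), 16)
embed_text = lambda label: proj @ np.resize(
    np.frombuffer(label.encode(), dtype=np.uint8).astype(float), 16)
is_faulty, score = detect_camera_fault(embed_image_region, embed_text,
                                        region_pixels=np.ones((4, 4)),
                                        expected_label="stop sign")
```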
