3D SURFACE STRUCTURE ESTIMATION USING NEURAL NETWORKS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20230139772A1

    Publication Date: 2023-05-04

    Application Number: US17452749

    Application Date: 2021-10-28

    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using a simulated environment. For example, a simulation may be run to simulate a virtual world or environment, render frames of virtual sensor data (e.g., images), and generate corresponding depth maps and segmentation masks (identifying a component of the simulated environment such as a road). To generate input training data, 3D structure estimation may be performed on a rendered frame to generate a representation of a 3D surface structure of the road. To generate corresponding ground truth training data, a corresponding depth map and segmentation mask may be used to generate a dense representation of the 3D surface structure.
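    The sketch below is a minimal illustration, not the implementation described in this application, of how dense ground-truth road-surface points could be produced from a simulator's depth map and segmentation mask by unprojecting road-labeled pixels through a pinhole camera model. The intrinsics, image size, and road class id are assumptions made for the example.

```python
# Minimal sketch: unproject road-labeled pixels of a rendered depth map into
# 3D camera space to form a dense ground-truth surface representation.
# Intrinsics and the road class id below are hypothetical placeholders.
import numpy as np

def dense_road_surface(depth_map, seg_mask, intrinsics, road_class_id=1):
    """Return (N, 3) 3D points for every pixel the simulator labeled as road."""
    h, w = depth_map.shape
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]

    # Pixel grid for the whole rendered frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Keep only pixels labeled as road by the segmentation mask.
    road = seg_mask == road_class_id
    z = depth_map[road]
    x = (u[road] - cx) * z / fx
    y = (v[road] - cy) * z / fy

    # Dense representation of the road's 3D surface structure.
    return np.stack([x, y, z], axis=-1)

# Synthetic stand-in for a rendered simulator frame.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
depth = np.full((720, 1280), 20.0)
mask = np.zeros((720, 1280), dtype=np.uint8)
mask[360:, :] = 1  # lower half of the frame labeled as road
print(dense_road_surface(depth, mask, K).shape)
```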

    USING NEURAL NETWORKS FOR 3D SURFACE STRUCTURE ESTIMATION BASED ON REAL-WORLD DATA FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20230135234A1

    Publication Date: 2023-05-04

    Application Number: US17452752

    Application Date: 2021-10-28

    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated from real-world data. For example, one or more vehicles may collect image data and LiDAR data while navigating through a real-world environment. To generate input training data, 3D surface structure estimation may be performed on captured image data to generate a sparse representation of a 3D surface structure of interest (e.g., a 3D road surface). To generate corresponding ground truth training data, captured LiDAR data may be smoothed, subject to outlier removal, subject to triangulation to fill in missing values, accumulated from multiple LiDAR sensors, aligned with corresponding frames of image data, and/or annotated to identify 3D points on the 3D surface of interest, and the identified 3D points may be projected to generate a dense representation of the 3D surface structure.
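    As a hedged illustration of one step of such a pipeline, and not the filed method itself, the sketch below bins accumulated LiDAR road points into a top-down grid, uses a per-cell median as simple smoothing and outlier rejection, and leaves empty cells as gaps that a later triangulation or interpolation pass would fill. Cell size and ranges are illustrative assumptions.

```python
# Minimal sketch: accumulate annotated LiDAR road points into a dense
# ground-truth height grid with median-based outlier suppression.
import numpy as np
from collections import defaultdict

def lidar_to_height_grid(points, cell_size=0.5,
                         x_range=(0.0, 50.0), y_range=(-10.0, 10.0)):
    """points: (N, 3) LiDAR returns already annotated as road-surface points."""
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)

    ix = ((points[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell_size).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)

    # Accumulate heights per cell, possibly from several sensors / sweeps.
    cells = defaultdict(list)
    for i, j, z in zip(ix[valid], iy[valid], points[valid, 2]):
        cells[(i, j)].append(z)

    # Median per cell smooths noise and resists outliers; NaN marks gaps to be
    # filled later (e.g., by triangulation / interpolation).
    grid = np.full((nx, ny), np.nan)
    for (i, j), zs in cells.items():
        grid[i, j] = np.median(zs)
    return grid

# Toy usage: a flat road at z = -1.8 m with a few spurious returns.
pts = np.column_stack([np.random.uniform(0, 50, 5000),
                       np.random.uniform(-10, 10, 5000),
                       np.full(5000, -1.8)])
pts[:10, 2] = 3.0  # outliers
print(np.nanmean(lidar_to_height_grid(pts)))
```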

    END-TO-END EVALUATION OF PERCEPTION SYSTEMS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20220340149A1

    Publication Date: 2022-10-27

    Application Number: US17726407

    Application Date: 2022-04-21

    Abstract: In various examples, an end-to-end perception evaluation system for autonomous and semi-autonomous machine applications may be implemented to evaluate how the accuracy or precision of outputs of machine learning models—such as deep neural networks (DNNs)—impact downstream performance of the machine when relied upon. For example, decisions computed by the system using ground truth output types may be compared to decisions computed by the system using the perception outputs. As a result, discrepancies in downstream decision making of the system between the ground truth information and the perception information may be evaluated to either aid in updating or retraining of the machine learning model or aid in generating more accurate or precise ground truth information.
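    The snippet below is a hedged, simplified illustration of the comparison idea, not the claimed system: a stub planner computes a brake decision from either ground-truth obstacle distances or DNN-predicted distances, and frames where the two decisions diverge are flagged for review. The brake rule and the 20 m threshold are assumptions made for the example.

```python
# Minimal sketch: compare downstream decisions computed from ground-truth
# perception outputs against decisions computed from DNN perception outputs.
def brake_decision(obstacle_distances_m, stopping_distance_m=20.0):
    """Stand-in for a downstream planner: brake if any obstacle is too close."""
    return any(d < stopping_distance_m for d in obstacle_distances_m)

def evaluate_end_to_end(gt_frames, perception_frames):
    """Return indices of frames where the planner's decision differs."""
    discrepancies = []
    for idx, (gt, pred) in enumerate(zip(gt_frames, perception_frames)):
        if brake_decision(gt) != brake_decision(pred):
            discrepancies.append(idx)
    return discrepancies

# Toy data: per-frame obstacle distances from ground truth vs. the DNN.
gt = [[35.0], [18.0], [50.0, 12.0]]
pred = [[34.0], [26.0], [48.0, 11.5]]   # frame 1: a near obstacle is missed
print(evaluate_end_to_end(gt, pred))     # -> [1], a candidate for retraining review
```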

    SURFACE PROFILE ESTIMATION AND BUMP DETECTION FOR AUTONOMOUS MACHINE APPLICATIONS

    Publication Number: US20210183093A1

    Publication Date: 2021-06-17

    Application Number: US17103680

    Application Date: 2020-11-24

    Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
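    A minimal sketch of the sliding-window idea follows, under simplifying assumptions and not as the patented implementation: a filtered road point cloud is reduced to a 1D longitudinal height profile, and windows whose height rises above the local baseline by more than a threshold are flagged as bumps. Window size and threshold are illustrative values.

```python
# Minimal sketch: 1D height profile from a filtered road point cloud using a
# sliding longitudinal window, followed by threshold-based bump detection.
import numpy as np

def height_profile(points, window_m=0.5, max_x=50.0):
    """points: (N, 3) cloud already restricted to drivable free-space and
    clipped to a height band around the driving surface."""
    edges = np.arange(0.0, max_x + window_m, window_m)
    profile = np.full(len(edges) - 1, np.nan)
    for k in range(len(edges) - 1):
        in_window = (points[:, 0] >= edges[k]) & (points[:, 0] < edges[k + 1])
        if np.any(in_window):
            profile[k] = np.median(points[in_window, 2])
    return edges[:-1], profile

def detect_bumps(profile, threshold_m=0.05):
    """Flag windows whose height exceeds the profile's baseline by a threshold."""
    baseline = np.nanmedian(profile)
    return np.flatnonzero(profile - baseline > threshold_m)

# Toy cloud: a flat road with an 8 cm speed bump around x = 20 m.
x = np.random.uniform(0, 50, 20000)
z = np.where(np.abs(x - 20.0) < 1.0, 0.08, 0.0)
pts = np.column_stack([x, np.random.uniform(-2, 2, x.size), z])
xs, prof = height_profile(pts)
print(xs[detect_bumps(prof)])  # longitudinal positions of detected bump windows
```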

    End-to-end evaluation of perception systems for autonomous systems and applications

    Publication Number: US12269488B2

    Publication Date: 2025-04-08

    Application Number: US17726407

    Application Date: 2022-04-21

    Abstract: In various examples, an end-to-end perception evaluation system for autonomous and semi-autonomous machine applications may be implemented to evaluate how the accuracy or precision of outputs of machine learning models—such as deep neural networks (DNNs)—impact downstream performance of the machine when relied upon. For example, decisions computed by the system using ground truth output types may be compared to decisions computed by the system using the perception outputs. As a result, discrepancies in downstream decision making of the system between the ground truth information and the perception information may be evaluated to either aid in updating or retraining of the machine learning model or aid in generating more accurate or precise ground truth information.

    MOTION-BASED OBJECT DETECTION FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20250029264A1

    Publication Date: 2025-01-23

    Application Number: US18905939

    Application Date: 2024-10-03

    Abstract: In various examples, an ego-machine may analyze sensor data to identify and track features in the sensor data. Geometry of the tracked features may be used to analyze motion flow to determine whether the motion flow violates one or more geometrical constraints. As such, tracked features may be identified as dynamic features when the motion flow corresponding to the tracked features violates one or more of the geometrical constraints that hold for static features. Tracked features that are determined to be dynamic features may be clustered together according to their location and feature track. Once features have been clustered together, the system may calculate a detection bounding shape for the clustered features. The bounding shape information may then be used by the ego-machine for path planning, control decisions, obstacle avoidance, and/or other operations.
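    One example of such a static-scene geometrical constraint is the epipolar constraint induced by the ego-motion. The sketch below is a hedged illustration, not the disclosed algorithm: matched feature tracks whose epipolar residual exceeds a threshold are flagged as dynamic. The fundamental matrix is assumed known from ego-motion, and the threshold and toy coordinates are illustrative.

```python
# Minimal sketch: flag tracked features as dynamic when their motion between
# frames violates the epipolar constraint of the ego-motion.
import numpy as np

def epipolar_residuals(pts_prev, pts_curr, F):
    """Symmetric epipolar distance for matched feature tracks."""
    ones = np.ones((len(pts_prev), 1))
    x1 = np.hstack([pts_prev, ones])          # homogeneous coords, frame t-1
    x2 = np.hstack([pts_curr, ones])          # homogeneous coords, frame t
    l2 = x1 @ F.T                             # epipolar lines in frame t
    l1 = x2 @ F                               # epipolar lines in frame t-1
    num = np.abs(np.sum(x2 * l2, axis=1))     # |x2^T F x1| per match
    d2 = num / np.hypot(l2[:, 0], l2[:, 1])
    d1 = num / np.hypot(l1[:, 0], l1[:, 1])
    return 0.5 * (d1 + d2)

def dynamic_feature_mask(pts_prev, pts_curr, F, threshold=1.0):
    """True where the motion flow violates the static-scene constraint."""
    return epipolar_residuals(pts_prev, pts_curr, F) > threshold

# Toy check in normalized image coordinates: pure forward ego-motion gives
# F = [t]_x with t = (0, 0, 1). The first match follows the radial flow of a
# static point; the second does not and is flagged as dynamic.
F = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
prev_pts = np.array([[0.10, 0.20], [0.10, 0.20]])
curr_pts = np.array([[0.12, 0.24], [0.15, 0.18]])
print(dynamic_feature_mask(prev_pts, curr_pts, F, threshold=0.01))  # [False  True]
# Downstream, features flagged dynamic would be clustered by location and track
# history, and a bounding shape computed per cluster for planning and avoidance.
```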

    CAMERA CALIBRATION FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

    Publication Number: US20250022175A1

    Publication Date: 2025-01-16

    Application Number: US18349779

    Application Date: 2023-07-10

    Abstract: In various examples, sensor calibration for autonomous or semi-autonomous systems and applications is described herein. Systems and methods are disclosed that calibrate image sensors, such as cameras, using images captured by the image sensors at different time instances. For instance, a first image sensor may generate first image data representing at least two images and a second image sensor may generate second image data representing at least one image. One or more feature points may then be tracked between the images represented by the first image data and the image represented by the second image data. Additionally, the feature point(s), timestamps associated with the images, poses associated with image sensors (e.g., poses of a vehicle), and/or other information may be used to determine one or more values of one or more parameters that calibrate the first image sensor with the second image sensor.
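    The sketch below is a heavily simplified illustration, not the described calibration method: assuming the matched features are distant enough that translation between capture times can be ignored, the relative rotation between the two cameras is estimated by compensating the ego-rotation between timestamps (taken from the vehicle pose stream) and aligning the matched bearing vectors with a Kabsch/SVD fit. All names and values are assumptions made for the example.

```python
# Minimal sketch: estimate the camera-to-camera rotation from feature points
# tracked between images captured by two sensors at different time instances.
import numpy as np

def estimate_relative_rotation(dirs_cam1, dirs_cam2, R_ego):
    """dirs_cam1 / dirs_cam2: (N, 3) unit bearing vectors of matched features.
    R_ego: vehicle rotation between the two capture timestamps."""
    # Compensate ego-motion so both bearing sets refer to the same vehicle pose.
    a = dirs_cam1 @ R_ego.T
    b = dirs_cam2
    # Kabsch alignment: rotation R minimizing || b_i - R a_i ||^2 over all matches.
    u, _, vt = np.linalg.svd(b.T @ a)
    d = np.sign(np.linalg.det(u @ vt))
    return u @ np.diag([1.0, 1.0, d]) @ vt

# Toy usage: recover a known 5-degree yaw offset between the two cameras.
yaw = np.deg2rad(5.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
d1 = np.random.randn(100, 3)
d1 /= np.linalg.norm(d1, axis=1, keepdims=True)
R_est = estimate_relative_rotation(d1, d1 @ R_true.T, np.eye(3))
print(np.allclose(R_est, R_true, atol=1e-6))
```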
