-
Publication No.: US20210342608A1
Publication Date: 2021-11-04
Application No.: US17377053
Filing Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
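The following is a minimal, hypothetical sketch of the chained multi-view idea described in the abstract: a first stage segments classes in a perspective (range-image) view, its features are transformed into a top-down view, and a second stage segments classes and regresses instance geometry there. Layer sizes, channel counts, and the placeholder view transform are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerspectiveStage(nn.Module):
    """First stage: per-pixel class segmentation in the perspective (range-image) view."""
    def __init__(self, in_channels=5, num_classes=4, feat_channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )
        self.class_head = nn.Conv2d(feat_channels, num_classes, 1)

    def forward(self, range_image):
        feats = self.encoder(range_image)
        return feats, self.class_head(feats)   # features are reused by the next stage

class TopDownStage(nn.Module):
    """Second stage: class segmentation and instance-geometry regression in top-down view."""
    def __init__(self, in_channels=32, num_classes=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.class_head = nn.Conv2d(64, num_classes, 1)
        # e.g., per-pixel (dx, dy, width, length, sin, cos) box parameters
        self.instance_head = nn.Conv2d(64, 6, 1)

    def forward(self, bev_feats):
        x = self.trunk(bev_feats)
        return self.class_head(x), self.instance_head(x)

def perspective_to_bev(feats, bev_size=(128, 128)):
    # Placeholder for the view transform: a real system would scatter per-point
    # features into a bird's-eye-view grid using the LiDAR geometry.
    return F.adaptive_avg_pool2d(feats, bev_size)

if __name__ == "__main__":
    stage1, stage2 = PerspectiveStage(), TopDownStage()
    range_image = torch.randn(1, 5, 64, 2048)          # stand-in LiDAR range image
    feats, persp_classes = stage1(range_image)
    bev_classes, bev_boxes = stage2(perspective_to_bev(feats))
    print(persp_classes.shape, bev_classes.shape, bev_boxes.shape)
```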
-
Publication No.: US20240410981A1
Publication Date: 2024-12-12
Application No.: US18810728
Filing Date: 2024-08-21
Applicant: NVIDIA CORPORATION
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/81 , G06N3/045 , G06T19/00 , G06V10/10 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US20240111025A1
Publication Date: 2024-04-04
Application No.: US18531103
Filing Date: 2023-12-06
Applicant: NVIDIA Corporation
Inventor: Tilman Wekel , Sangmin Oh , David Nister , Joachim Pehserl , Neda Cvijetic , Ibrahim Eden
IPC: G01S7/48 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58
CPC classification number: G01S7/4802 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58 , G01S7/28
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
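As a rough illustration of propagating image-domain annotations into the LiDAR domain, the sketch below projects LiDAR points into a camera segmentation image and copies the class of the pixel each point lands on. The pinhole model, the intrinsics/extrinsics, and the label layout are made-up placeholders, not the training pipeline of the disclosure.

```python
import numpy as np

def project_points_to_image(points_xyz, extrinsic, intrinsic):
    """Project LiDAR points (N, 3) into pixel coordinates with a pinhole camera model."""
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    cam = (extrinsic @ pts_h.T).T                          # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0.1                             # keep points ahead of the camera
    uv = (intrinsic @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                            # perspective divide
    return uv, in_front

def propagate_labels(points_xyz, image_seg, extrinsic, intrinsic):
    """Assign each LiDAR point the class of the segmentation pixel it projects onto."""
    h, w = image_seg.shape
    uv, valid = project_points_to_image(points_xyz, extrinsic, intrinsic)
    labels = np.full(points_xyz.shape[0], -1, dtype=np.int32)   # -1 = unlabeled
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[ok] = image_seg[v[ok], u[ok]]
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.uniform(-20.0, 20.0, size=(1000, 3))
    image_seg = rng.integers(0, 4, size=(480, 640))        # stand-in camera annotation
    extrinsic = np.eye(4)[:3]                              # assume aligned sensor frames
    intrinsic = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    print(np.bincount(propagate_labels(points, image_seg, extrinsic, intrinsic) + 1))
```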
-
Publication No.: US11788861B2
Publication Date: 2023-10-17
Application No.: US17008074
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: David Nister , Ruchi Bhargava , Vaibhav Thukral , Michael Grabner , Ibrahim Eden , Jeffrey Liu
CPC classification number: G01C21/3841 , G01C21/1652 , G01C21/3811 , G01C21/3867 , G01C21/3878 , G01C21/3896 , G06N3/02
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
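A minimal sketch of the final fusion step, under assumptions: each sensor modality yields an individual localization result (pose plus uncertainty), and the results are combined. Inverse-variance weighting is shown here as one illustrative fusion choice, not necessarily the method used in the disclosure.

```python
import numpy as np

def fuse_localization(results):
    """results: list of (pose_xy_yaw, covariance) pairs, one per sensor modality."""
    info_sum = np.zeros((3, 3))
    weighted = np.zeros(3)
    for pose, cov in results:
        info = np.linalg.inv(cov)            # information (inverse-covariance) matrix
        info_sum += info
        weighted += info @ pose
    fused_cov = np.linalg.inv(info_sum)
    return fused_cov @ weighted, fused_cov   # fused pose and its covariance

if __name__ == "__main__":
    camera_fix = (np.array([10.2, 4.9, 0.02]), np.diag([0.5, 0.5, 0.010]))
    lidar_fix  = (np.array([10.0, 5.1, 0.01]), np.diag([0.1, 0.1, 0.005]))
    radar_fix  = (np.array([10.4, 5.0, 0.00]), np.diag([1.0, 1.0, 0.020]))
    pose, cov = fuse_localization([camera_fix, lidar_fix, radar_fix])
    print("fused pose (x, y, yaw):", pose)
```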
-
Publication No.: US11301697B2
Publication Date: 2022-04-12
Application No.: US16938473
Filing Date: 2020-07-24
Applicant: Nvidia Corporation
Inventor: Ishwar Kulkarni , Ibrahim Eden , Michael Kroepfl , David Nister
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping 3D point cloud data points into a 2D depth map, fetching a group of the mapped 3D point cloud data points that are within a bounded window of the 2D depth map; and generating geometric space parameters based on the group of the mapped 3D point cloud data points. The generated geometric space parameters may be used for object motion, obstacle detection, freespace detection, and/or landmark detection for an area surrounding a vehicle.
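The sketch below illustrates the flow named in the abstract: map 3D points into a 2D depth map, fetch the points that fall inside a bounded window of that map, and compute geometric parameters from them. The spherical projection resolution and the least-squares plane fit are illustrative assumptions, not the patented technique.

```python
import numpy as np

def to_depth_map(points, h=64, w=1024):
    """Spherical projection of an (N, 3) point cloud into an h x w depth map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                            # [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))
    u = ((yaw + np.pi) / (2.0 * np.pi) * (w - 1)).astype(int)
    v = ((pitch - pitch.min()) / (np.ptp(pitch) + 1e-6) * (h - 1)).astype(int)
    depth = np.zeros((h, w))
    index = -np.ones((h, w), dtype=int)      # which point landed in each cell (-1 = empty)
    depth[v, u] = r
    index[v, u] = np.arange(points.shape[0])
    return depth, index

def fit_plane_in_window(points, index, v0, u0, size=8):
    """Fit z = a*x + b*y + c to the points inside a bounded window of the depth map."""
    ids = index[v0:v0 + size, u0:u0 + size].ravel()
    pts = points[ids[ids >= 0]]
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs                            # example "geometric space parameters"

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform(-10.0, 10.0, size=(200_000, 3))
    pts[:, 2] = 0.05 * pts[:, 0] - 1.5       # synthetic tilted ground plane
    depth, index = to_depth_map(pts)
    print("recovered plane (a, b, c):", fit_plane_in_window(pts, index, v0=54, u0=100))
```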
-
Publication No.: US20210150230A1
Publication Date: 2021-05-20
Application No.: US16915346
Filing Date: 2020-06-29
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US20210063198A1
Publication Date: 2021-03-04
Application No.: US17008074
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: David Nister , Ruchi Bhargava , Vaibhav Thukral , Michael Grabner , Ibrahim Eden , Jeffrey Liu
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
-
Publication No.: US10776983B2
Publication Date: 2020-09-15
Application No.: US16051263
Filing Date: 2018-07-31
Applicant: Nvidia Corporation
Inventor: Ishwar Kulkarni , Ibrahim Eden , Michael Kroepfl , David Nister
IPC: G01S17/58 , G01S17/89 , G01S17/931 , G01K9/00 , G06T11/00 , G06T15/04 , G06T7/20 , G06T7/30 , G06T7/521 , G06K9/00
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. The processing improvements also include representing the point cloud data by a two dimensional depth map or a texture map and using the depth map or texture map to provide object motion, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
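As one illustrative use of such a 2D depth-map representation, the sketch below approximates per-azimuth freespace by taking the nearest valid return in each column of the map. The thresholds and map size are arbitrary assumptions, not the patented processing.

```python
import numpy as np

def freespace_from_depth_map(depth, min_valid=0.5):
    """For each azimuth column of a depth map, return the range to the nearest valid return."""
    masked = np.where(depth > min_valid, depth, np.inf)   # ignore empty or too-close cells
    return masked.min(axis=0)                             # nearest obstacle per column

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    depth = rng.uniform(2.0, 60.0, size=(64, 1024))       # stand-in 64 x 1024 depth map
    depth[:, 500:520] = 0.0                               # a sector with no returns
    free_range = freespace_from_depth_map(depth)
    print("nearest obstacle at azimuth column 0:", free_range[0])
    print("empty sector reports:", free_range[510])       # inf -> no obstacle observed
```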
-
Publication No.: US20250045952A1
Publication Date: 2025-02-06
Application No.: US18363265
Filing Date: 2023-08-01
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ruchita Bhargava , Ibrahim Eden , Amala Sanjay Deshmukh , Ryan Oldja , Ke Chen , Sai Krishnan Chandrasekar , Minwoo Park
IPC: G06T7/73
Abstract: In various examples, systems and methods are disclosed relating to real-time multiview map generation using neural networks. A system can receive sensors images of an environment, such as images from one or more camera, RADAR, LIDAR, and/or ultrasound sensors. The system can process the sensor images using one or more neural networks, such as neural networks implementing attention structures, to detect features in the environment such as lane lines, lane dividers, wait lines, or boundaries. The system can represent the features in various views, including top-down/bird's eye view representations. The system can provide the representations for operations including map generation, map updating, perception, and object detection.
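A rough sketch of the general attention-based multiview idea, under assumptions: a learned grid of bird's-eye-view queries cross-attends to flattened camera features, and a small head predicts per-cell map classes (e.g., lane line, divider, wait line, boundary, background). The module names, dimensions, and single attention layer are illustrative, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class BEVMapHead(nn.Module):
    def __init__(self, feat_dim=64, bev_h=50, bev_w=50, num_classes=5):
        super().__init__()
        self.bev_queries = nn.Parameter(torch.randn(bev_h * bev_w, feat_dim))
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.bev_h, self.bev_w = bev_h, bev_w

    def forward(self, image_feats):
        # image_feats: (batch, num_tokens, feat_dim) flattened multi-camera features
        b = image_feats.shape[0]
        queries = self.bev_queries.unsqueeze(0).expand(b, -1, -1)
        bev, _ = self.cross_attn(queries, image_feats, image_feats)   # BEV queries attend to images
        logits = self.classifier(bev)                                 # (b, bev_h*bev_w, num_classes)
        return logits.view(b, self.bev_h, self.bev_w, -1)

if __name__ == "__main__":
    head = BEVMapHead()
    cams = torch.randn(1, 6 * 200, 64)     # e.g., 6 cameras x 200 feature tokens each
    print(head(cams).shape)                # torch.Size([1, 50, 50, 5])
```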
-
Publication No.: US12051206B2
Publication Date: 2024-07-30
Application No.: US16938706
Filing Date: 2020-07-24
Applicant: NVIDIA Corporation
Inventor: Ke Chen , Nikolai Smolyanskiy , Alexey Kamenev , Ryan Oldja , Tilman Wekel , David Nister , Joachim Pehserl , Ibrahim Eden , Sangmin Oh , Ruchi Bhargava
IPC: G06T7/00 , G05D1/00 , G06F18/00 , G06F18/22 , G06F18/23 , G06T5/50 , G06T7/10 , G06T7/11 , G06V10/82 , G06V20/56 , G06V20/58 , G06V10/44
CPC classification number: G06T7/11 , G05D1/0088 , G06F18/22 , G06F18/23 , G06T5/50 , G06T7/10 , G06V10/82 , G06V20/56 , G06V20/58 , G06T2207/10028 , G06T2207/20084 , G06T2207/30252 , G06V10/454
Abstract: A deep neural network(s) (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
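A hedged sketch of a single-pass, multi-head panoptic network in the spirit of the abstract: one shared trunk with separate heads for class confidence, instance regression, instance clustering, and depth. Channel counts and layer choices are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class PanopticNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=8, trunk_channels=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, trunk_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(trunk_channels, trunk_channels, 3, padding=1), nn.ReLU(),
        )
        self.class_head = nn.Conv2d(trunk_channels, num_classes, 1)   # per-class confidence map
        self.instance_regression = nn.Conv2d(trunk_channels, 2, 1)    # e.g., offsets to instance centers
        self.instance_clustering = nn.Conv2d(trunk_channels, 1, 1)    # per-instance membership confidence
        self.depth_head = nn.Conv2d(trunk_channels, 1, 1)             # per-pixel range values

    def forward(self, x):
        feats = self.trunk(x)                                         # single shared pass
        return {
            "class_confidence": self.class_head(feats),
            "instance_offsets": self.instance_regression(feats),
            "instance_confidence": self.instance_clustering(feats),
            "depth": self.depth_head(feats),
        }

if __name__ == "__main__":
    net = PanopticNet()
    stacked_input = torch.randn(1, 3, 256, 512)   # stitched / stacked sensor image
    outputs = net(stacked_input)
    print({k: tuple(v.shape) for k, v in outputs.items()})
```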
-