-
1.
Publication Number: US20240410981A1
Publication Date: 2024-12-12
Application Number: US18810728
Application Date: 2024-08-21
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/81 , G06N3/045 , G06T19/00 , G06V10/10 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
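To make the staged multi-view design concrete, below is a minimal sketch of such a chained two-stage network in PyTorch. The channel counts, layer sizes, and the project_to_bev placeholder are illustrative assumptions, not the patented implementation.

    import torch
    import torch.nn as nn

    class SegmentationStage(nn.Module):
        """Small fully convolutional head predicting per-cell class scores."""
        def __init__(self, in_channels, num_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, num_classes, 1),
            )

        def forward(self, x):
            return self.net(x)

    class MultiViewPerceptionDNN(nn.Module):
        """First stage segments in a perspective view; second stage segments
        and regresses instance geometry in a top-down view."""
        def __init__(self, num_classes=4):
            super().__init__()
            self.perspective_stage = SegmentationStage(5, num_classes)
            self.bev_stage = SegmentationStage(num_classes, num_classes)
            # Regresses, e.g., box center offset, size, and orientation per cell.
            self.geometry_head = nn.Conv2d(num_classes, 5, 1)

        def project_to_bev(self, perspective_scores):
            # Placeholder for the view transform: a real system would splat
            # per-point class scores into a top-down grid using 3D coordinates.
            return perspective_scores

        def forward(self, range_image):
            persp = self.perspective_stage(range_image)   # perspective-view classes
            bev_in = self.project_to_bev(persp)           # re-project to top-down
            bev_cls = self.bev_stage(bev_in)              # top-down classes
            bev_geom = self.geometry_head(bev_cls)        # instance geometry
            return persp, bev_cls, bev_geom

    # Example input: a 5-channel LiDAR range image (x, y, z, intensity, range).
    model = MultiViewPerceptionDNN()
    outputs = model(torch.randn(1, 5, 64, 2048))

The class-score and geometry maps would then be post-processed into the 2D and/or 3D bounding boxes and class labels that the abstract describes.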
-
2.
Publication Number: US20240111025A1
Publication Date: 2024-04-04
Application Number: US18531103
Application Date: 2023-12-06
Applicant: NVIDIA Corporation
Inventor: Tilman Wekel , Sangmin Oh , David Nister , Joachim Pehserl , Neda Cvijetic , Ibrahim Eden
IPC: G01S7/48 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58
CPC classification number: G01S7/4802 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58 , G01S7/28
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
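A minimal sketch of the label-propagation step described above, assuming known camera intrinsics K and a camera-from-LiDAR extrinsic transform; the function and argument names are illustrative, not from the patent.

    import numpy as np

    def propagate_image_labels_to_lidar(points_lidar, label_mask, K, T_cam_from_lidar):
        """points_lidar: (N, 3) points; label_mask: (H, W) per-pixel class ids.
        Returns a per-point class id, or -1 where no image label applies."""
        N = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((N, 1))])    # homogeneous coords
        pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]       # into the camera frame
        z = pts_cam[:, 2]
        in_front = z > 1e-6                                   # only points ahead of camera
        safe_z = np.where(in_front, z, 1.0)                   # avoid division by zero
        u = (K[0, 0] * pts_cam[:, 0] / safe_z + K[0, 2]).astype(int)
        v = (K[1, 1] * pts_cam[:, 1] / safe_z + K[1, 2]).astype(int)
        H, W = label_mask.shape
        valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        point_labels = np.full(N, -1, dtype=np.int64)
        point_labels[valid] = label_mask[v[valid], u[valid]]  # transfer pixel labels
        return point_labels

The labeled points could then be rasterized back into the 2D LiDAR range image to serve as ground truth masks for training.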
-
3.
Publication Number: US20210156963A1
Publication Date: 2021-05-27
Application Number: US16836618
Application Date: 2020-03-31
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
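The threshold-filtering step lends itself to a short sketch. Here labels are simplified to axis-aligned top-down rectangles, and the default threshold is an arbitrary assumption.

    import numpy as np

    def filter_lidar_labels_by_radar_support(boxes, radar_points, min_detections=3):
        """boxes: (M, 4) rows of (x_min, y_min, x_max, y_max) in the top-down plane;
        radar_points: (N, 2) top-down RADAR detections.
        Keeps only labels containing at least `min_detections` RADAR returns."""
        kept = []
        for x0, y0, x1, y1 in boxes:
            inside = ((radar_points[:, 0] >= x0) & (radar_points[:, 0] <= x1) &
                      (radar_points[:, 1] >= y0) & (radar_points[:, 1] <= y1))
            if inside.sum() >= min_detections:   # omit weakly supported labels
                kept.append((x0, y0, x1, y1))
        return np.asarray(kept)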
-
4.
Publication Number: US20210150230A1
Publication Date: 2021-05-20
Application Number: US16915346
Application Date: 2020-06-29
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
5.
Publication Number: US20240029447A1
Publication Date: 2024-01-25
Application Number: US18482183
Application Date: 2023-10-06
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , G01S17/931 , B60W60/0016 , B60W60/0027 , B60W60/0011 , G01S17/89 , G05D1/0088 , G06T19/006 , G06V20/58 , G06N3/045 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
6.
Publication Number: US11532168B2
Publication Date: 2022-12-20
Application Number: US16915346
Application Date: 2020-06-29
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
7.
Publication Number: US20240362935A1
Publication Date: 2024-10-31
Application Number: US18305185
Application Date: 2023-04-21
Applicant: NVIDIA Corporation
Inventor: Anton Mitrokhin , Roman Parys , Alexey Solovey , Tilman Wekel
Abstract: In various examples, generating maps using first sensor data and then annotating second sensor data using the maps for autonomous systems and applications is described herein. Systems and methods are disclosed that automatically propagate annotations associated with the first sensor data generated using a first type of sensor, such as a LiDAR sensor, to the second sensor data generated using a second type of sensor, such as an image sensor(s). To propagate the annotations, the first type of sensor data may be used to generate a map, where the map represents the locations of static objects as well as the locations of dynamic objects at various instances in time. The map and annotations associated with the first sensor data may then be used to annotate the second sensor data and/or determine additional information associated with the objects represented by the second sensor data.
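As an illustration of that propagation step, the sketch below looks up each mapped object's position at the timestamp closest to an image and projects it into that image. The map structure, timestamp matching, and pinhole projection are simplifying assumptions, not the patented method.

    import numpy as np

    def annotate_image_from_map(map_tracks, t_image, K, T_cam_from_world):
        """map_tracks: {object_id: {timestamp: (x, y, z) world position}}.
        Returns {object_id: (u, v)} image annotations for visible objects."""
        annotations = {}
        for obj_id, track in map_tracks.items():
            t_nearest = min(track, key=lambda t: abs(t - t_image))  # closest map sample
            p_world = np.append(np.asarray(track[t_nearest], dtype=float), 1.0)
            p_cam = T_cam_from_world @ p_world
            if p_cam[2] <= 0:                                       # behind the camera
                continue
            uv = K @ p_cam[:3]
            annotations[obj_id] = (uv[0] / uv[2], uv[1] / uv[2])    # perspective divide
        return annotations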
-
8.
Publication Number: US12050285B2
Publication Date: 2024-07-30
Application Number: US17976581
Application Date: 2022-10-28
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
CPC classification number: G01S7/417 , G01S13/865 , G01S13/89 , G06N3/04 , G06N3/08
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
-
9.
Publication Number: US11906660B2
Publication Date: 2024-02-20
Application Number: US17005788
Application Date: 2020-08-28
Applicant: NVIDIA Corporation
Inventor: Tilman Wekel , Sangmin Oh , David Nister , Joachim Pehserl , Neda Cvijetic , Ibrahim Eden
IPC: G01S7/00 , G01S7/48 , G01S17/894 , G01S7/481 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58 , G01S7/28
CPC classification number: G01S7/4802 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58 , G01S7/28
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
10.
Publication Number: US11531088B2
Publication Date: 2022-12-20
Application Number: US16836618
Application Date: 2020-03-31
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.