-
Publication No.: US20210156960A1
Publication Date: 2021-05-27
Application No.: US16836583
Filing Date: 2020-03-31
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
IPC: G01S7/41 , G06N3/08 , G06T7/73 , G06T7/246 , G01S13/931
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
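The preprocessing the abstract describes (accumulating ego-motion-compensated RADAR detections and orthographically projecting them top-down before the network sees them) can be sketched as follows. This is a minimal illustration in NumPy, assuming simple x/y metric coordinates with the ego vehicle at the grid center; the patent's actual accumulation window, projection, and input channels are not specified here.

```python
import numpy as np

def project_to_bev(points_xyz, grid_size=256, cell_m=0.5):
    """Orthographically project ego-compensated RADAR detections
    (N x 3 array of x, y, z in meters, ego at the grid center) into a
    top-down occupancy grid suitable as neural-network input."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size // 2
    # Convert metric coordinates to integer cell indices.
    cols = (points_xyz[:, 0] / cell_m + half).astype(int)
    rows = (points_xyz[:, 1] / cell_m + half).astype(int)
    # Keep only detections that fall inside the grid.
    keep = (cols >= 0) & (cols < grid_size) & (rows >= 0) & (rows < grid_size)
    # Accumulate detection counts per cell (handles repeated indices).
    np.add.at(grid, (rows[keep], cols[keep]), 1.0)
    return grid

pts = np.array([[1.0, 2.0, 0.3], [1.1, 2.1, 0.2], [-60.0, 0.0, 0.0]])
bev = project_to_bev(pts, grid_size=64, cell_m=0.5)
```

The two nearby detections land in the same cell and accumulate, while the far-away point falls outside the 64-cell grid and is dropped.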
-
Publication No.: US10705525B2
Publication Date: 2020-07-07
Application No.: US15939116
Filing Date: 2018-03-28
Applicant: NVIDIA Corporation
IPC: G05D1/00 , G05D1/02 , B62D6/00 , G06N3/08 , G06N7/00 , B62D15/02 , G06K9/00 , G06K9/62 , G05D1/10 , G06N3/04
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
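The abstract's final step (controlling the vehicle's location from the two DNN outputs, heading relative to the path and lateral position on it) could look like the proportional law below. The gains and saturation limit are hypothetical; the patent describes using both quantities for control but does not fix a control law.

```python
def steering_command(heading_err_rad, lateral_offset_m,
                     k_heading=1.0, k_lateral=0.5, max_steer=0.6):
    """Combine the DNN's two outputs into one steering angle (radians).
    Positive inputs mean the vehicle points/sits right of the path, so
    the command steers left (negative). Gains are illustrative."""
    raw = -(k_heading * heading_err_rad + k_lateral * lateral_offset_m)
    # Saturate to the physical steering range.
    return max(-max_steer, min(max_steer, raw))

cmd = steering_command(0.1, 0.4)  # pointed and displaced right of the path
```

On the path with zero heading error the command is zero; large combined errors saturate at the steering limit.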
-
Publication No.: US20180292825A1
Publication Date: 2018-10-11
Application No.: US15939116
Filing Date: 2018-03-28
Applicant: NVIDIA Corporation
CPC classification number: G05D1/0088 , B62D6/001 , B62D15/025 , G05D1/0221 , G05D1/024 , G05D1/0242 , G05D1/0246 , G05D1/0255 , G05D1/0257 , G05D1/0268 , G05D1/102 , G05D2201/0213 , G06K9/00 , G06K9/00791 , G06K9/00986 , G06K9/6273 , G06N3/04 , G06N3/08 , G06N3/084 , G06N7/005
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
-
Publication No.: US12164059B2
Publication Date: 2024-12-10
Application No.: US17377064
Filing Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/00 , G06N3/045 , G06T19/00 , G06V10/10 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
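The hand-off between the two stages (class labels produced in the perspective view, then reasoned about in the top-down view) can be illustrated with toy geometry: a one-row image whose columns are rays at known azimuths, each carrying a stage-one class label and a depth, reprojected into a bird's-eye grid. All names and the pinhole-style geometry are illustrative; the patent's view transform and network stages are not reproduced here.

```python
import numpy as np

def lift_labels_to_bev(classes, depths, fov=np.pi / 2, grid=32, cell_m=1.0):
    """Transfer per-column class labels from a perspective view into a
    top-down grid, where a second stage could refine them. `classes` and
    `depths` are 1-D arrays indexed by image column."""
    w = classes.shape[0]
    # Each image column corresponds to a ray at some azimuth angle.
    angles = (np.arange(w) / (w - 1) - 0.5) * fov
    x = depths * np.sin(angles)          # lateral offset (meters)
    y = depths * np.cos(angles)          # forward distance (meters)
    bev = np.zeros((grid, grid), dtype=int)
    cols = (x / cell_m + grid // 2).astype(int)
    rows = (y / cell_m).astype(int)
    ok = (cols >= 0) & (cols < grid) & (rows >= 0) & (rows < grid)
    bev[rows[ok], cols[ok]] = classes[ok]  # paint class ids at projected cells
    return bev

classes = np.array([0, 1, 1, 2, 0])              # stage-one per-column classes
depths = np.array([5.0, 5.0, 5.0, 8.0, 5.0])     # per-column depths (meters)
bev = lift_labels_to_bev(classes, depths)
```

Each labeled column lands at the top-down cell its depth and azimuth imply, giving the second stage a sparse class map to densify.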
-
Publication No.: US12080078B2
Publication Date: 2024-09-03
Application No.: US17895940
Filing Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US12072443B2
Publication Date: 2024-08-27
Application No.: US17377053
Filing Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/00 , G06N3/045 , G06T19/00 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58 , G06V10/10
CPC classification number: G01S7/4802 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V10/25 , G06V10/26 , G06V10/454 , G06V10/764 , G06V10/774 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/58 , G06V20/584 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261 , G06V10/16
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US20240273919A1
Publication Date: 2024-08-15
Application No.: US18647415
Filing Date: 2024-04-26
Applicant: NVIDIA CORPORATION
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US11960026B2
Publication Date: 2024-04-16
Application No.: US17976581
Filing Date: 2022-10-28
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
CPC classification number: G01S7/417 , G01S13/865 , G01S13/89 , G06N3/04 , G06N3/08
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
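The filtering step the abstract describes (propagating LIDAR-derived labels to the RADAR data and omitting labels that contain too few RADAR detections) can be sketched directly. Boxes here are axis-aligned top-down rectangles and the threshold value is illustrative; the patent does not fix either.

```python
import numpy as np

def filter_propagated_labels(boxes, radar_xy, min_hits=2):
    """Keep only LIDAR-derived ground-truth boxes supported by at least
    `min_hits` RADAR detections. `boxes` is a list of (xmin, ymin,
    xmax, ymax); `radar_xy` is an N x 2 array of detection positions."""
    kept = []
    for box in boxes:
        xmin, ymin, xmax, ymax = box
        inside = ((radar_xy[:, 0] >= xmin) & (radar_xy[:, 0] <= xmax) &
                  (radar_xy[:, 1] >= ymin) & (radar_xy[:, 1] <= ymax))
        if inside.sum() >= min_hits:   # drop sparsely supported labels
            kept.append(box)
    return kept

boxes = [(0, 0, 2, 2), (10, 10, 12, 12)]
radar = np.array([[1.0, 1.0], [1.5, 0.5], [11.0, 11.0]])
labels = filter_propagated_labels(boxes, radar, min_hits=2)
```

The first box encloses two RADAR detections and survives; the second encloses only one and is omitted from the ground truth.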
-
Publication No.: US11915493B2
Publication Date: 2024-02-27
Application No.: US17895940
Filing Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US20230169321A1
Publication Date: 2023-06-01
Application No.: US18160694
Filing Date: 2023-01-27
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Alexey Kamenev , Stan Birchfield
IPC: G06N3/063 , G06T7/593 , G06N3/084 , G06N3/088 , G06T1/20 , G01S17/86 , G01S17/89 , G06F18/22 , G06N3/045 , G06N3/048
CPC classification number: G06N3/063 , G06T7/593 , G06N3/084 , G06N3/088 , G06T1/20 , G01S17/86 , G01S17/89 , G06F18/22 , G06N3/045 , G06N3/048 , G06T2207/10052 , G06T2207/20084 , G06T2207/10012
Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real-time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
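The "machine learned argmax" in the abstract builds on the standard differentiable soft-argmax used in stereo matching: softmax the negated matching costs over candidate disparities, then take the probability-weighted mean disparity. The sketch below shows only that differentiable baseline; the patent's version adds trainable convolutional layers for context, which are omitted here.

```python
import numpy as np

def soft_argmax(cost, temperature=1.0):
    """Differentiable argmax over a per-pixel disparity cost vector (D,).
    Lower cost means a better match; returns the softmax-weighted mean
    disparity index rather than a hard, non-differentiable argmin."""
    logits = -np.asarray(cost, dtype=float) / temperature
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    # Expected disparity under the matching distribution.
    return float(np.dot(p, np.arange(len(cost))))

# Lowest cost at disparity 3, so the soft estimate lands near 3.
d = soft_argmax([9.0, 9.0, 9.0, 0.0, 9.0], temperature=0.5)
```

Because the estimate is a weighted average, it remains differentiable end to end and can resolve sub-pixel disparities when neighboring costs are close.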