-
Publication No.: US20210063198A1
Publication Date: 2021-03-04
Application No.: US17008074
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: David Nister , Ruchi Bhargava , Vaibhav Thukral , Michael Grabner , Ibrahim Eden , Jeffrey Liu
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
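The final fusion step described in this abstract can be illustrated with inverse-variance weighting of the per-modality results. This is a minimal sketch, not the patented method: the 1-D pose representation, the variance values, and the name `fuse_localization` are assumptions made for illustration.

```python
def fuse_localization(results):
    """Fuse independent per-sensor-modality localization results.

    results: list of (pose, variance) pairs, one per modality
    (e.g., camera, lidar, radar), each obtained by comparing
    real-time data against map data of the same modality.
    Returns the inverse-variance-weighted pose and its variance.
    """
    weights = [1.0 / var for _, var in results]
    total = sum(weights)
    fused_pose = sum(w * pose for (pose, _), w in zip(results, weights)) / total
    return fused_pose, 1.0 / total

# Example: longitudinal offsets (meters) estimated by three modalities.
estimates = [(10.2, 0.25), (10.0, 0.04), (10.5, 1.0)]
pose, var = fuse_localization(estimates)
```

Note that the fused variance is smaller than that of the best single modality, which is the usual motivation for fusing several modalities rather than trusting one.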
-
Publication No.: US12001958B2
Publication Date: 2024-06-04
Application No.: US16824199
Filing Date: 2020-03-19
Applicant: NVIDIA Corporation
Inventor: Alexey Kamenev , Nikolai Smolyanskiy , Ishwar Kulkarni , Ollin Boer Bohan , Fangkai Yang , Alperen Degirmenci , Ruchi Bhargava , Urs Muller , David Nister , Rotem Aviv
Abstract: In various examples, past location information corresponding to actors in an environment and map information may be applied to a deep neural network (DNN)—such as a recurrent neural network (RNN)—trained to compute information corresponding to future trajectories of the actors. The output of the DNN may include, for each future time slice the DNN is trained to predict, a confidence map representing a confidence for each pixel that an actor is present and a vector field representing locations of actors in confidence maps for prior time slices. The vector fields may thus be used to track an object through confidence maps for each future time slice to generate a predicted future trajectory for each actor. The predicted future trajectories, in addition to tracked past trajectories, may be used to generate full trajectories for the actors that may aid an ego-vehicle in navigating the environment.
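The decoding step this abstract describes — following the vector fields backward through the per-slice confidence maps to link an actor's detections into one trajectory — can be sketched with toy data. The grid values, offsets, and function names here are illustrative assumptions, not the DNN's actual output format.

```python
def find_peak(conf_map):
    # pixel with the highest confidence in one future time slice
    return max(conf_map, key=conf_map.get)

def trace_back(start, vec_fields):
    """Follow per-slice vector fields from a peak in the last future
    time slice back to the current frame. vec_fields[t] maps a pixel
    in slice t to the (row, col) offset of the same actor in slice
    t-1 (slice -1 being the current frame). Returns the pixel chain
    in chronological order (earliest -> latest)."""
    path = [start]
    cur = start
    for field in reversed(vec_fields):
        dr, dc = field[cur]
        cur = (cur[0] + dr, cur[1] + dc)
        path.append(cur)
    return list(reversed(path))

# Toy example: one actor drifting one column right per future slice.
conf_maps = [{(5, 6): 0.9}, {(5, 7): 0.8}, {(5, 8): 0.7}]
vec_fields = [{(5, 6): (0, -1)}, {(5, 7): (0, -1)}, {(5, 8): (0, -1)}]
trajectory = trace_back(find_peak(conf_maps[-1]), vec_fields)
```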
-
Publication No.: US20240029447A1
Publication Date: 2024-01-25
Application No.: US18482183
Filing Date: 2023-10-06
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , G01S17/931 , B60W60/0016 , B60W60/0027 , B60W60/0011 , G01S17/89 , G05D1/0088 , G06T19/006 , G06V20/58 , G06N3/045 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
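The multi-view chaining idea — a first stage segmenting in perspective view, a second stage operating in a top-down view — can be sketched by projecting first-view segmentation results (with depth) into a bird's-eye-view grid and grouping cells into 2D boxes. Everything below is a toy illustration under assumed names and coordinates, not the patented DNN.

```python
def project_to_bev(pixels, cell=1.0):
    """Scatter perspective-view segmentation into a top-down grid.

    pixels: (lateral_m, depth_m, class_label) triples, e.g. from a
    first-stage class-segmentation DNN combined with per-pixel depth.
    Returns a dict mapping (x_cell, z_cell) -> class label."""
    bev = {}
    for x, d, label in pixels:
        bev[(int(x // cell), int(d // cell))] = label
    return bev

def bev_boxes(bev):
    """Group top-down cells by class into axis-aligned 2D boxes
    (min_x, min_z, max_x, max_z) in grid coordinates."""
    by_label = {}
    for (cx, cz), label in bev.items():
        by_label.setdefault(label, []).append((cx, cz))
    return {label: (min(x for x, _ in cells), min(z for _, z in cells),
                    max(x for x, _ in cells), max(z for _, z in cells))
            for label, cells in by_label.items()}

# Toy first-stage output: a car ahead-right, a truck farther ahead-left.
pixels = [(2.0, 10.0, "car"), (3.5, 12.0, "car"), (-4.0, 20.0, "truck")]
boxes = bev_boxes(project_to_bev(pixels))
```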
-
Publication No.: US20230366698A1
Publication Date: 2023-11-16
Application No.: US18352578
Filing Date: 2023-07-14
Applicant: NVIDIA Corporation
Inventor: David Nister , Ruchi Bhargava , Vaibhav Thukral , Michael Grabner , Ibrahim Eden , Jeffrey Liu
CPC classification number: G01C21/3841 , G01C21/3896 , G01C21/3878 , G01C21/1652 , G01C21/3867 , G06N3/02 , G01C21/3811
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
-
Publication No.: US11713978B2
Publication Date: 2023-08-01
Application No.: US17008100
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: Amir Akbarzadeh , David Nister , Ruchi Bhargava , Birgit Henke , Ivana Stojanovic , Yu Sheng
CPC classification number: G01C21/3841 , G01C21/1652 , G01C21/3811 , G01C21/3867 , G01C21/3878 , G01C21/3896 , G06N3/02
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
-
Publication No.: US11532168B2
Publication Date: 2022-12-20
Application No.: US16915346
Filing Date: 2020-06-29
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US20220138568A1
Publication Date: 2022-05-05
Application No.: US17453055
Filing Date: 2021-11-01
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Alexey Kamenev , Lirui Wang , David Nister , Ollin Boer Bohan , Ishwar Kulkarni , Fangkai Yang , Julia Ng , Alperen Degirmenci , Ruchi Bhargava , Rotem Aviv
Abstract: In various examples, reinforcement learning is used to train at least one machine learning model (MLM) to control a vehicle by leveraging a deep neural network (DNN) trained on real-world data by using imitation learning to predict movements of one or more actors to define a world model. The DNN may be trained from real-world data to predict attributes of actors, such as locations and/or movements, from input attributes. The predictions may define states of the environment in a simulator, and one or more attributes of one or more actors input into the DNN may be modified or controlled by the simulator to simulate conditions that may otherwise be unfeasible. The MLM(s) may leverage predictions made by the DNN to predict one or more actions for the vehicle.
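The closed-loop structure this abstract describes — a learned world model advancing the other actors while the policy under training advances the ego vehicle — can be sketched in a 1-D toy world. The constant-velocity stand-in for the DNN, the reward shape, and all names are assumptions for illustration only.

```python
def world_model(history):
    """Stand-in for the imitation-learned DNN world model: predicts
    each actor's next 1-D position by constant-velocity extrapolation
    from the last two observed states."""
    prev, cur = history[-2], history[-1]
    return [c + (c - p) for p, c in zip(prev, cur)]

def rollout(policy, ego, actors_history, steps):
    """Closed-loop rollout: the world model advances the actors, the
    policy (the MLM being trained) advances the ego vehicle, and the
    reward penalizes coming within 1 m of any actor."""
    reward = 0.0
    for _ in range(steps):
        actors = world_model(actors_history)
        actors_history.append(actors)
        ego += policy(ego, actors)
        reward += -1.0 if any(abs(ego - a) < 1.0 for a in actors) else 1.0
    return ego, reward

# One actor ahead, moving at +1 m per step; the policy keeps a safe gap.
history = [[4.0], [5.0]]
keep_gap = lambda ego, actors: 1.0 if min(actors) - ego > 2.0 else 0.0
ego, reward = rollout(keep_gap, 0.0, history, steps=3)
```

In a real setup the simulator could also perturb the actor attributes fed to the world model, which is what lets training cover conditions that would be unsafe or unfeasible to collect on the road.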
-
Publication No.: US12164059B2
Publication Date: 2024-12-10
Application No.: US17377064
Filing Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/00 , G06N3/045 , G06T19/00 , G06V10/10 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US12080078B2
Publication Date: 2024-09-03
Application No.: US17895940
Filing Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US12072443B2
Publication Date: 2024-08-27
Application No.: US17377053
Filing Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/00 , G06N3/045 , G06T19/00 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58 , G06V10/10
CPC classification number: G01S7/4802 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V10/25 , G06V10/26 , G06V10/454 , G06V10/764 , G06V10/774 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/58 , G06V20/584 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261 , G06V10/16
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-