-
Publication No.: US20240116538A1
Publication Date: 2024-04-11
Application No.: US18545856
Filing Date: 2023-12-19
Applicant: NVIDIA Corporation
Inventor: Zhenyi Zhang , Yizhou Wang , David Nister , Neda Cvijetic
IPC: B60W60/00 , B60W30/095 , B60W30/18 , B60W40/105
CPC classification number: B60W60/0011 , B60W30/0956 , B60W30/18163 , B60W40/105 , B60W2420/403 , B60W2420/408 , B60W2552/53
Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the plurality of longitudinal speed profile candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from the longitudinal speed profile candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile.
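Below is a minimal, hypothetical Python sketch of the kind of candidate-and-gap selection the abstract describes: a handful of constant-acceleration longitudinal speed profiles are rolled out onto the target lane, each is scored against every gap under simple safety, comfort, and speed-tracking criteria, and the lowest-cost (gap, profile) pair is chosen. The data structures, weights, and scoring terms are assumptions for illustration, not the claimed method.

```python
# Hypothetical sketch of longitudinal speed profile candidate scoring for a
# lane change; all names, weights, and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Gap:
    rear_s: float    # longitudinal position of the rear bounding vehicle (m)
    front_s: float   # longitudinal position of the front bounding vehicle (m)
    speed: float     # speed of the bounding traffic (m/s)

def rollout(s0, v0, accel, horizon=5.0, dt=0.5):
    """Project a constant-acceleration speed profile onto the target lane."""
    states, s, v, t = [], s0, v0, 0.0
    while t < horizon:
        v = max(0.0, v + accel * dt)
        s += v * dt
        t += dt
        states.append((s, v))
    return states

def score(profile, gap, desired_speed, accel):
    """Lower is better: penalize leaving the gap, harsh accel, speed error."""
    end_s, end_v = profile[-1]
    margin = min(end_s - gap.rear_s, gap.front_s - end_s)
    safety = 0.0 if margin > 5.0 else (5.0 - max(margin, 0.0)) * 10.0
    comfort = abs(accel) * 2.0
    speed_err = abs(end_v - desired_speed)
    return safety + comfort + speed_err

def select(ego_s, ego_v, gaps, desired_speed, accels=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Evaluate every (gap, profile) pair and keep the lowest-cost one."""
    best = None
    for gap in gaps:
        for a in accels:
            profile = rollout(ego_s, ego_v, a)
            c = score(profile, gap, desired_speed, a)
            if best is None or c < best[0]:
                best = (c, gap, a, profile)
    return best  # (cost, target gap, chosen acceleration, speed profile)

if __name__ == "__main__":
    gaps = [Gap(rear_s=10.0, front_s=45.0, speed=14.0),
            Gap(rear_s=55.0, front_s=90.0, speed=15.0)]
    cost, gap, accel, _ = select(ego_s=0.0, ego_v=13.0, gaps=gaps, desired_speed=15.0)
    print(f"target gap {gap}, accel {accel:+.1f} m/s^2, cost {cost:.2f}")
```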
-
Publication No.: US11928822B2
Publication Date: 2024-03-12
Application No.: US17864026
Filing Date: 2022-07-13
Applicant: NVIDIA Corporation
Inventor: Trung Pham , Berta Rodriguez Hervas , Minwoo Park , David Nister , Neda Cvijetic
IPC: G06T7/11 , G05B13/02 , G06F18/21 , G06F18/24 , G06N3/04 , G06N3/08 , G06T5/00 , G06T11/20 , G06V10/26 , G06V10/34 , G06V10/44 , G06V10/82 , G06V20/56 , G06V30/19 , G06V30/262
CPC classification number: G06T7/11 , G05B13/027 , G06F18/21 , G06F18/24 , G06N3/04 , G06N3/08 , G06T3/4046 , G06T5/002 , G06T11/20 , G06V10/267 , G06V10/34 , G06V10/454 , G06V10/82 , G06V20/56 , G06V30/19173 , G06V30/274 , G06T2207/20081 , G06T2207/20084 , G06T2207/30252 , G06T2210/12
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
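As a rough illustration only, the post-processing step the abstract mentions can be imagined as thresholding a signed distance function and splitting its interior into connected components. The sketch below assumes a "negative inside the contention area" convention and uses SciPy's connected-component labeling; none of the names or thresholds come from the patent.

```python
# Hypothetical decode of a signed distance function (SDF) channel into
# instance segmentation masks; convention and thresholds are assumptions.
import numpy as np
from scipy import ndimage

def sdf_to_instances(sdf, inside_threshold=0.0):
    """Threshold the SDF and split the interior into connected instances."""
    interior = sdf < inside_threshold          # boolean mask of contention areas
    labels, num = ndimage.label(interior)      # connected-component instances
    return [(labels == i) for i in range(1, num + 1)]

if __name__ == "__main__":
    # Toy SDF with two separate "contention areas" (negative blobs).
    yy, xx = np.mgrid[0:64, 0:64]
    blob_a = np.hypot(yy - 16, xx - 16) - 8.0
    blob_b = np.hypot(yy - 48, xx - 44) - 6.0
    sdf = np.minimum(blob_a, blob_b)
    masks = sdf_to_instances(sdf)
    print(f"{len(masks)} contention-area instances, "
          f"sizes = {[int(m.sum()) for m in masks]}")
```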
-
Publication No.: US20230357076A1
Publication Date: 2023-11-09
Application No.: US18311172
Filing Date: 2023-05-02
Applicant: NVIDIA Corporation
Inventor: Michael Kroepfl , Amir Akbarzadeh , Ruchi Bhargava , Vaibhav Thukral , Neda Cvijetic , Vadim Cugunovs , David Nister , Birgit Henke , Ibrahim Eden , Youding Zhu , Michael Grabner , Ivana Stojanovic , Yu Sheng , Jeffrey Liu , Enliang Zheng , Jordan Marr , Andrew Carley
IPC: C03C17/36
CPC classification number: C03C17/3607 , C03C17/3639 , C03C17/3644 , C03C17/366 , C03C17/3626 , C03C17/3668 , C03C17/3642 , C03C17/3681 , C03C2217/70 , C03C2217/216 , C03C2217/228 , C03C2217/24 , C03C2217/256 , C03C2217/281 , C03C2217/22 , C03C2217/23 , C03C2218/156
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
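A toy example of the final fusion step, under the assumption (not stated in the abstract) that each sensor modality yields a pose estimate with a per-axis variance: the per-modality localization results are combined by inverse-variance weighting. The modalities, poses, and variances below are made up for illustration.

```python
# Hypothetical fusion of per-modality localization results into one pose
# via inverse-variance (precision) weighting; not the disclosed method.
import numpy as np

def fuse_localization(results):
    """results: list of (pose (x, y, yaw) ndarray(3,), variance ndarray(3,))."""
    precisions = np.array([1.0 / var for _, var in results])   # (N, 3)
    poses = np.array([pose for pose, _ in results])            # (N, 3)
    weights = precisions / precisions.sum(axis=0)              # per-axis weights
    fused_pose = (weights * poses).sum(axis=0)
    fused_var = 1.0 / precisions.sum(axis=0)
    return fused_pose, fused_var

if __name__ == "__main__":
    # Small yaw values, so naive averaging of the angle is acceptable here.
    camera = (np.array([10.2, 4.9, 0.10]), np.array([0.25, 0.25, 0.010]))
    lidar  = (np.array([10.0, 5.1, 0.12]), np.array([0.04, 0.04, 0.004]))
    radar  = (np.array([10.4, 5.0, 0.08]), np.array([0.50, 0.50, 0.020]))
    pose, var = fuse_localization([camera, lidar, radar])
    print("fused pose (x, y, yaw):", np.round(pose, 3))
```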
-
Publication No.: US20230004164A1
Publication Date: 2023-01-05
Application No.: US17940664
Filing Date: 2022-09-08
Applicant: NVIDIA Corporation
Inventor: Davide Marco Onofrio , Hae-Jong Seo , David Nister , Minwoo Park , Neda Cvijetic
Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
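One way to picture the ensemble idea, purely as an assumption-laden sketch: sample each path perception input at common longitudinal stations, fuse them with a median, and report the fraction of samples that fall within a tolerance of the fused path as an agreement (reliability) metric. The tolerance and input names are hypothetical.

```python
# Hypothetical path perception ensemble: median fusion plus an agreement
# score; inputs, tolerance, and stations are invented for illustration.
import numpy as np

def ensemble_paths(paths, agree_tol_m=0.3):
    """paths: list of (N,) arrays of lateral offsets sampled at common
    longitudinal stations. Returns (fused_path, agreement in [0, 1])."""
    stacked = np.vstack(paths)                     # (num_inputs, N)
    fused = np.median(stacked, axis=0)             # robust fusion
    spread = np.abs(stacked - fused)               # per-input deviation
    agreement = float((spread < agree_tol_m).mean())
    return fused, agreement

if __name__ == "__main__":
    stations = np.linspace(0, 50, 26)
    dnn_path   = 0.02 * stations                   # camera DNN lane center
    map_path   = 0.02 * stations + 0.05            # HD-map prior
    trail_path = 0.02 * stations - 0.10            # object-trail estimate
    fused, agreement = ensemble_paths([dnn_path, map_path, trail_path])
    print(f"agreement = {agreement:.2f}; fused offset at 50 m = {fused[-1]:.2f} m")
```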
-
Publication No.: US20210197858A1
Publication Date: 2021-07-01
Application No.: US17130667
Filing Date: 2020-12-22
Applicant: NVIDIA Corporation
Inventor: Zhenyi Zhang , Yizhou Wang , David Nister , Neda Cvijetic
IPC: B60W60/00 , B60W30/18 , B60W30/095 , B60W40/105
Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the plurality of longitudinal speed profile candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from the longitudinal speed profile candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile.
-
Publication No.: US20210063200A1
Publication Date: 2021-03-04
Application No.: US17007873
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: Michael Kroepfl , Amir Akbarzadeh , Ruchi Bhargava , Vaibhav Thukral , Neda Cvijetic , Vadim Cugunovs , David Nister , Birgit Henke , Ibrahim Eden , Youding Zhu , Michael Grabner , Ivana Stojanovic , Yu Sheng , Jeffrey Liu , Enliang Zheng , Jordan Marr , Andrew Carley
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
-
Publication No.: US12051332B2
Publication Date: 2024-07-30
Application No.: US17940664
Filing Date: 2022-09-08
Applicant: NVIDIA Corporation
Inventor: Davide Marco Onofrio , Hae-Jong Seo , David Nister , Minwoo Park , Neda Cvijetic
CPC classification number: G08G1/167 , G05D1/0088 , G05D1/0214 , G05D1/0219 , G05D1/0223 , G06F18/23 , G06N3/08 , G06V20/588
Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
-
Publication No.: US20240111025A1
Publication Date: 2024-04-04
Application No.: US18531103
Filing Date: 2023-12-06
Applicant: NVIDIA Corporation
Inventor: Tilman Wekel , Sangmin Oh , David Nister , Joachim Pehserl , Neda Cvijetic , Ibrahim Eden
IPC: G01S7/48 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58
CPC classification number: G01S7/4802 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58 , G01S7/28
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
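The camera-to-LiDAR propagation can be sketched, under textbook pinhole-camera assumptions rather than the disclosed pipeline, as projecting LiDAR points into the annotated image and sampling the 2D label mask at the projected pixel; points that fall outside the image or behind the camera stay unlabeled. The intrinsics, extrinsics, and toy mask below are invented.

```python
# Hypothetical propagation of image-space labels to LiDAR points via a
# pinhole projection; geometry and values are assumptions for illustration.
import numpy as np

def propagate_labels(points_xyz, label_mask, K, T_cam_from_lidar):
    """points_xyz: (N, 3) LiDAR points; label_mask: (H, W) integer labels;
    K: (3, 3) camera intrinsics; T_cam_from_lidar: (4, 4) extrinsics.
    Returns (N,) per-point labels; -1 where no image label is available."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])           # (N, 4)
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]                 # camera frame
    labels = np.full(n, -1, dtype=np.int32)
    in_front = cam[:, 2] > 0.1                                 # drop points behind camera
    uvw = (K @ cam[in_front].T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    h, w = label_mask.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_mask[v[valid], u[valid]]
    return labels

if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    T = np.eye(4)                                              # camera == LiDAR frame (toy)
    mask = np.zeros((480, 640), dtype=np.int32)
    mask[200:280, 300:340] = 1                                 # a "vehicle" annotation
    pts = np.array([[0.0, 0.0, 10.0], [20.0, 0.0, 10.0]])     # second point projects off-image
    print(propagate_labels(pts, mask, K, T))                   # -> [ 1 -1]
```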
-
Publication No.: US20230166733A1
Publication Date: 2023-06-01
Application No.: US18162576
Filing Date: 2023-01-31
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
IPC: B60W30/18 , G06N3/08 , G08G1/01 , B60W30/095 , B60W60/00 , B60W30/09 , G06V20/56 , G06V10/25 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/70
CPC classification number: B60W30/18154 , G06N3/08 , G08G1/0125 , B60W30/095 , B60W60/0011 , B60W30/09 , G06V20/588 , G06V10/25 , G06V10/764 , G06V10/803 , G06V10/82 , G06V20/70 , G06V20/56
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
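A hypothetical decode of the coverage-map-plus-regression outputs the abstract lists: cells whose coverage exceeds a threshold contribute their regressed bounding box, and the boxes are averaged with coverage weights to produce a single detection. This single-intersection sketch omits clustering of multiple detections and is not the patented post-processing.

```python
# Hypothetical decode of an intersection coverage map and per-cell box
# regressions into one detection; thresholds and shapes are assumptions.
import numpy as np

def decode_intersection(coverage, boxes, cov_thresh=0.5):
    """coverage: (H, W) confidence that a cell belongs to an intersection;
    boxes: (H, W, 4) per-cell (x1, y1, x2, y2) regressions in image space.
    Returns (fused box, mean confidence), or None if nothing is detected."""
    mask = coverage > cov_thresh
    if not mask.any():
        return None
    w = coverage[mask]                                  # weights of contributing cells
    b = boxes[mask]                                     # their box hypotheses, (M, 4)
    fused = (w[:, None] * b).sum(axis=0) / w.sum()      # coverage-weighted average
    return fused, float(w.mean())

if __name__ == "__main__":
    H, W = 60, 96
    coverage = np.zeros((H, W)); coverage[20:30, 40:60] = 0.9
    boxes = np.zeros((H, W, 4)); boxes[20:30, 40:60] = [310.0, 150.0, 480.0, 230.0]
    print(decode_intersection(coverage, boxes))
```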
-
Publication No.: US11648945B2
Publication Date: 2023-05-16
Application No.: US16814351
Filing Date: 2020-03-10
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
CPC classification number: G06V20/588 , B60W30/09 , B60W30/095 , B60W60/0011 , G06N3/08 , G06V10/751 , G08G1/0125
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
-