-
Publication Number: US20230004164A1
Publication Date: 2023-01-05
Application Number: US17940664
Filing Date: 2022-09-08
Applicant: NVIDIA Corporation
Inventor: Davide Marco Onofrio , Hae-Jong Seo , David Nister , Minwoo Park , Neda Cvijetic
Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path through it. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real time or near real time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether, and by how much, the individual path perception signals agree or disagree. By implementing this approach, in which individual path perception inputs fail in nearly independent ways, a system failure becomes statistically less likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high-curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
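The agreement analysis described in the abstract can be illustrated with a minimal sketch. All names (`agreement_score`, `fuse_paths`) and the 0.5 m tolerance are hypothetical, not taken from the patent; each path hypothesis is modeled as a list of lateral offsets (meters) sampled at shared longitudinal stations.

```python
# Minimal sketch of metricizing agreement among redundant path
# perception inputs (hypothetical, not the patented implementation).

def pairwise_disagreement(path_a, path_b):
    """Mean absolute lateral gap between two path hypotheses."""
    return sum(abs(a - b) for a, b in zip(path_a, path_b)) / len(path_a)

def agreement_score(paths, tolerance=0.5):
    """Fraction of input pairs whose mean gap stays within tolerance.
    1.0 means all sources agree; low values flag unreliable perception."""
    pairs = [(i, j) for i in range(len(paths)) for j in range(i + 1, len(paths))]
    agree = sum(1 for i, j in pairs
                if pairwise_disagreement(paths[i], paths[j]) <= tolerance)
    return agree / len(pairs)

def fuse_paths(paths):
    """Point-wise median fusion, robust to one outlying source."""
    return [sorted(col)[len(col) // 2] for col in zip(*paths)]
```

A low `agreement_score` could be used as the reliability signal the abstract mentions, while the median fusion discards a single disagreeing source.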
-
Publication Number: US20220237925A1
Publication Date: 2022-07-28
Application Number: US17718721
Filing Date: 2022-04-12
Applicant: Nvidia Corporation
Inventor: Ishwar Kulkarni , Ibrahim Eden , Michael Kroepfl , David Nister
Abstract: LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include: mapping one or more point cloud data points into a depth map, the one or more point cloud data points being generated using one or more sensors; determining one or more mapped point cloud data points within a bounded area of the depth map; and detecting, using one or more processing units and for an environment surrounding a machine corresponding to the one or more sensors, a location of one or more entities based on the one or more mapped point cloud data points.
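The depth-map mapping and bounded-area lookup can be sketched as follows. The spherical projection, the tiny 8×4 image size, and the function names are illustrative assumptions, not the patented method.

```python
# Sketch: project 3D points into a small range image, then query a
# bounded area for the nearest mapped entity (hypothetical layout).
import math

def points_to_depth_map(points, width=8, height=4,
                        fov_h=math.pi / 2, fov_v=math.pi / 4):
    """Project 3D points (x forward, y left, z up) into a range image.
    Each cell keeps the nearest range; 0.0 marks empty cells."""
    depth = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        az = math.atan2(y, x)   # horizontal angle
        el = math.asin(z / r)   # vertical angle
        u = int((az / fov_h + 0.5) * (width - 1))
        v = int((0.5 - el / fov_v) * (height - 1))
        if 0 <= u < width and 0 <= v < height:
            if depth[v][u] == 0.0 or r < depth[v][u]:
                depth[v][u] = r
    return depth

def nearest_in_box(depth, u0, v0, u1, v1):
    """Closest mapped range inside a bounded area of the depth map."""
    ranges = [depth[v][u] for v in range(v0, v1) for u in range(u0, u1)
              if depth[v][u] > 0.0]
    return min(ranges) if ranges else None
```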
-
Publication Number: US20210233307A1
Publication Date: 2021-07-29
Application Number: US17228460
Filing Date: 2021-04-12
Applicant: NVIDIA Corporation
Inventor: Philippe Bouttefroy , David Nister , Ibrahim Eden
Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
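The reduction of a rectified observation to a 1D lookup can be sketched in a few lines. The binary-mask representation and the function name are assumptions for illustration only.

```python
# Sketch: after rectification aligns vertical landmarks with image
# columns, a 2D detection mask collapses to a 1D per-column lookup
# with no loss of the information needed to locate the landmark.

def to_1d_lookup(detection_mask):
    """Collapse a rectified 2D detection mask (rows of 0/1) into a 1D
    column lookup: a column is 1 if any pixel in it detects a
    vertical landmark."""
    return [1 if any(row[u] for row in detection_mask) else 0
            for u in range(len(detection_mask[0]))]
```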
-
Publication Number: US20210197858A1
Publication Date: 2021-07-01
Application Number: US17130667
Filing Date: 2020-12-22
Applicant: NVIDIA Corporation
Inventor: Zhenyi Zhang , Yizhou Wang , David Nister , Neda Cvijetic
IPC: B60W60/00 , B60W30/18 , B60W30/095 , B60W40/105
Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the plurality of longitudinal speed profile candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from the longitudinal speed profile candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile.
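The candidate scoring and selection loop can be sketched as below. The two-term cost (comfort plus gap fit), the profile parameterization, and every name are assumptions standing in for the patent's unspecified criteria sets.

```python
# Sketch: score longitudinal speed profile candidates for a target
# gap and pick the lowest-cost one (hypothetical cost terms).

def score_profile(profile, gap_length, criteria_weights=(1.0, 2.0)):
    """Cost of one candidate; lower is better. Penalizes harsh
    acceleration (comfort) and mismatch between distance traveled
    and the gap to be reached."""
    w_comfort, w_fit = criteria_weights
    distance = profile["v0"] * profile["t"] + 0.5 * profile["a"] * profile["t"] ** 2
    return w_comfort * abs(profile["a"]) + w_fit * abs(distance - gap_length)

def select_profile(candidates, gap_length):
    """Evaluate every candidate and return the lowest-cost one."""
    return min(candidates, key=lambda p: score_profile(p, gap_length))
```

In a fuller system each candidate would be re-evaluated per gap, so the selection jointly yields the target gap and the profile, as the abstract describes.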
-
Publication Number: US10991155B2
Publication Date: 2021-04-27
Application Number: US16385921
Filing Date: 2019-04-16
Applicant: NVIDIA Corporation
Inventor: Philippe Bouttefroy , David Nister , Ibrahim Eden
Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
-
Publication Number: US20210063200A1
Publication Date: 2021-03-04
Application Number: US17007873
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: Michael Kroepfl , Amir Akbarzadeh , Ruchi Bhargava , Vaibhav Thukral , Neda Cvijetic , Vadim Cugunovs , David Nister , Birgit Henke , Ibrahim Eden , Youding Zhu , Michael Grabner , Ivana Stojanovic , Yu Sheng , Jeffrey Liu , Enliang Zheng , Jordan Marr , Andrew Carley
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
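The final fusion step, combining per-modality localization results, can be sketched as a confidence-weighted mean. The pose representation, the weighting scheme, and the names are illustrative assumptions; the patent does not specify this exact fusion rule.

```python
# Sketch: fuse per-sensor-modality localization results into one
# final pose via confidence-weighted averaging (hypothetical rule).

def fuse_localization(results):
    """Fuse localization results into one pose.
    `results` maps modality name -> ((x, y, heading), confidence)."""
    total = sum(conf for _, conf in results.values())
    fused = [0.0, 0.0, 0.0]
    for pose, conf in results.values():
        for i in range(3):
            fused[i] += conf * pose[i] / total
    return tuple(fused)
```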
-
Publication Number: US20210026355A1
Publication Date: 2021-01-28
Application Number: US16938706
Filing Date: 2020-07-24
Applicant: NVIDIA Corporation
Inventor: Ke Chen , Nikolai Smolyanskiy , Alexey Kamenev , Ryan Oldja , Tilman Wekel , David Nister , Joachim Pehserl , Ibrahim Eden , Sangmin Oh , Ruchi Bhargava
Abstract: One or more deep neural networks (DNNs) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
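A toy decode of the class and instance heads might look like the sketch below. Representing the regression head as per-pixel center offsets, and every name here, are assumptions; the patent's actual decoding is not specified at this level.

```python
# Sketch: decode per-pixel class labels from class confidence maps
# and instance ids from regressed center offsets (hypothetical).

def decode_panoptic(class_conf, centers, offsets):
    """For each pixel: class label = argmax over per-class confidence
    maps; instance id = nearest object center after applying the
    regressed (dy, dx) offset."""
    h, w = len(offsets), len(offsets[0])
    labels, instances = [], []
    for y in range(h):
        lrow, irow = [], []
        for x in range(w):
            confs = [cmap[y][x] for cmap in class_conf]
            lrow.append(confs.index(max(confs)))
            dy, dx = offsets[y][x]
            cy, cx = y + dy, x + dx
            dists = [(cy - py) ** 2 + (cx - px) ** 2 for py, px in centers]
            irow.append(dists.index(min(dists)))
        labels.append(lrow)
        instances.append(irow)
    return labels, instances
```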
-
Publication Number: US20200293064A1
Publication Date: 2020-09-17
Application Number: US16514404
Filing Date: 2019-07-17
Applicant: NVIDIA Corporation
Inventor: Yue Wu , Pekka Janis , Xin Tong , Cheng-Chieh Yang , Minwoo Park , David Nister
Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
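Given a predicted world-space velocity and a current gap, the TTC quantity the abstract mentions reduces to a short formula. The clamping horizon and the function name are illustrative assumptions.

```python
# Sketch: time-to-collision from a predicted closing speed
# (hypothetical helper, not the patented computation).

def time_to_collision(distance, closing_speed, horizon=10.0):
    """TTC in seconds from the current gap (m) and closing speed
    (m/s, positive when the gap shrinks). Returns the horizon when
    no collision course exists or TTC exceeds it."""
    if closing_speed <= 0.0:
        return horizon
    return min(distance / closing_speed, horizon)
```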
-
Publication Number: US20200218979A1
Publication Date: 2020-07-09
Application Number: US16813306
Filing Date: 2020-03-09
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
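The final sampling step, taking depth values from the DNN's output resolution up to its input resolution, can be sketched with a nearest-neighbor lookup. The patent's sampling algorithm is not specified here, so this particular scheme and the function name are assumptions.

```python
# Sketch: sample a predicted depth map (output resolution) up to the
# input resolution with nearest-neighbor lookup (hypothetical).

def sample_to_input_resolution(depth_map, in_h, in_w):
    """Return an in_h x in_w depth grid sampled from depth_map."""
    out_h, out_w = len(depth_map), len(depth_map[0])
    return [[depth_map[min(out_h - 1, y * out_h // in_h)]
                      [min(out_w - 1, x * out_w // in_w)]
             for x in range(in_w)]
            for y in range(in_h)]
```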
-
Publication Number: US20190258251A1
Publication Date: 2019-08-22
Application Number: US16186473
Filing Date: 2018-11-09
Applicant: NVIDIA Corporation
Inventor: Michael Alan DITTY , Gary HICOK , Jonathan SWEEDLER , Clement FARABET , Mohammed Abdulla YOUSUF , Tai-Yuen CHAN , Ram GANAPATHI , Ashok SRINIVASAN , Michael Rod TRUOG , Karl GREB , John George MATHIESON , David Nister , Kevin Flory , Daniel Perrin , Dan Hettena
Abstract: Autonomous driving is one of the world's most challenging computational problems. Very large amounts of data from cameras, RADARs, LIDARs, and HD maps must be processed to generate commands that control the car safely and comfortably in real time. This challenging task requires a dedicated supercomputer that is energy-efficient and low-power, complex high-performance software, and breakthroughs in deep learning AI algorithms. To meet this task, the present technology provides advanced systems and methods that facilitate autonomous driving functionality, including a platform for autonomous driving Levels 3, 4, and/or 5. In preferred embodiments, the technology provides an end-to-end platform with a flexible architecture, including an architecture for autonomous vehicles that leverages computer vision and known ADAS techniques, providing diversity and redundancy, and meeting functional safety standards. The technology provides a faster, more reliable, safer, energy-efficient, and space-efficient System-on-a-Chip, which may be integrated into a flexible, expandable platform that enables a wide range of autonomous vehicles, including cars, taxis, trucks, and buses, as well as watercraft and aircraft.
-