-
Publication Number: US11915493B2
Publication Date: 2024-02-27
Application Number: US17895940
Filing Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
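As a rough illustration of the chained, multi-view idea described in this abstract, the sketch below wires a perspective-view segmentation stage to a top-down stage that also regresses instance geometry. It is a minimal PyTorch sketch under assumed channel counts, class counts, and a simplified perspective-to-top-down scatter; it is not the patented architecture.

```python
# Minimal sketch (not the patented implementation): a two-stage, multi-view
# perception pipeline. Channel counts, class counts, and the projection are
# illustrative assumptions.
import torch
import torch.nn as nn

class PerspectiveStage(nn.Module):
    """Stage 1: per-pixel class segmentation in the perspective (range-image) view."""
    def __init__(self, in_ch=5, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )
    def forward(self, x):
        return self.net(x)  # (B, num_classes, H, W) class logits

class TopDownStage(nn.Module):
    """Stage 2: class segmentation plus instance-geometry regression in the top-down view."""
    def __init__(self, in_ch, num_classes=4, geom_ch=6):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Conv2d(32, num_classes, 1)   # per-cell class logits
        self.geom_head = nn.Conv2d(32, geom_ch, 1)      # e.g., center offset, extent, yaw
    def forward(self, bev):
        f = self.trunk(bev)
        return self.cls_head(f), self.geom_head(f)

def perspective_to_topdown(feats, points_xy, bev_shape=(200, 200), cell=0.5):
    """Scatter per-pixel perspective features into a top-down grid (simplified projection)."""
    B, C, H, W = feats.shape
    bev = feats.new_zeros(B, C, *bev_shape)
    xs = (points_xy[..., 0] / cell).long().clamp(0, bev_shape[1] - 1)
    ys = (points_xy[..., 1] / cell).long().clamp(0, bev_shape[0] - 1)
    flat = feats.flatten(2)   # (B, C, H*W)
    for b in range(B):        # simple scatter; a real system would use an optimized op
        bev[b, :, ys[b], xs[b]] = flat[b]
    return bev

# Toy end-to-end pass: 5-channel range image plus per-pixel (x, y) ground coordinates.
range_img = torch.rand(1, 5, 64, 512)
points_xy = torch.rand(1, 64 * 512, 2) * 100.0
stage1 = PerspectiveStage()
stage2 = TopDownStage(in_ch=4)
persp_logits = stage1(range_img)                        # class logits in perspective view
bev_feats = perspective_to_topdown(persp_logits, points_xy)
bev_cls, bev_geom = stage2(bev_feats)                   # top-down class + geometry maps
print(bev_cls.shape, bev_geom.shape)
```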
-
Publication Number: US11874663B2
Publication Date: 2024-01-16
Application Number: US17896825
Filing Date: 2022-08-26
Applicant: NVIDIA Corporation
Inventor: Gary Hicok , Michael Cox , Miguel Sainz , Martin Hempel , Ratin Kumar , Timo Roman , Gordon Grigor , David Nister , Justin Ebert , Chin-Hsien Shih , Tony Tam , Ruchi Bhargava
CPC classification number: G05D1/0088 , G05B13/027 , G05D1/0055 , G05D1/0242 , G05D1/0246 , G05D1/0257 , G06Q10/02 , G06Q50/30 , G05D2201/0213
Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality, and can vary the route dynamically on demand and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, a desktop computer, or a kiosk. Each shuttle preferably includes an in-vehicle controller, preferably an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real-time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
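For a sense of how simulation-driven dispatch against operator-specified parameters could look, here is a toy sketch that scores candidate shuttle-to-request assignments by simulating pickup waits along a one-dimensional virtual rail. The Shuttle and RideRequest fields, the brute-force search, and the wait-time constraint are illustrative assumptions rather than the patented AI Dispatcher.

```python
# Illustrative sketch only: a toy dispatcher that scores candidate
# shuttle-to-request assignments by simulating estimated pickup times.
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Shuttle:
    id: str
    position: float          # position along a 1-D virtual rail, in km
    speed_kmh: float = 30.0

@dataclass
class RideRequest:
    id: str
    pickup: float            # station position along the route, in km

def simulate_wait_minutes(shuttle: Shuttle, request: RideRequest) -> float:
    """Estimate passenger wait time if this shuttle serves this request."""
    distance = abs(shuttle.position - request.pickup)
    return 60.0 * distance / shuttle.speed_kmh

def dispatch(shuttles, requests, max_wait_minutes=15.0):
    """Brute-force search over assignments; return the one minimizing total wait."""
    best, best_cost = None, float("inf")
    for order in permutations(shuttles, len(requests)):
        waits = [simulate_wait_minutes(s, r) for s, r in zip(order, requests)]
        if any(w > max_wait_minutes for w in waits):    # operator-specified constraint
            continue
        cost = sum(waits)
        if cost < best_cost:
            best, best_cost = list(zip(order, requests)), cost
    return best

shuttles = [Shuttle("s1", 0.0), Shuttle("s2", 4.0)]
requests = [RideRequest("r1", 3.5), RideRequest("r2", 0.5)]
print(dispatch(shuttles, requests))
```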
-
Publication Number: US11860628B2
Publication Date: 2024-01-02
Application Number: US17352777
Filing Date: 2021-06-21
Applicant: NVIDIA Corporation
Inventor: David Nister , Yizhou Wang , Jaikrishna Soundararajan , Sachit Kadle
IPC: G05D1/00 , G06T1/20 , G05D1/02 , B60W50/06 , B60W60/00 , B60W30/09 , B60W30/06 , G06T7/70 , G06V20/58
CPC classification number: G05D1/0088 , B60W50/06 , B60W60/0015 , G05D1/0214 , G06T1/20 , G06T7/70 , G06V20/58 , B60W30/06 , B60W30/09 , G06T2207/30241 , G06T2207/30261
Abstract: To determine a path through a pose configuration space, trajectories of poses may be evaluated in parallel based at least on translating the trajectories along at least one axis of the pose configuration space (e.g., an orientation axis). A trajectory may include at least a portion of a turn having a fixed turn radius. Turns or turn portions that share the same turn radius and initial orientation are translated copies of one another with different starting points, so they can be shifted along the orientation axis and processed in parallel. Trajectories may be evaluated based at least on processing the variables used to evaluate reachability as bit vectors, with threads effectively performing large vector operations in synchronization. A parallel reduction pattern may be used to account for dependencies that may exist between sections of a trajectory for evaluating reachability, allowing for the sections to be processed in parallel.
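The bit-vector idea can be illustrated with a small sketch: each bit position stands for one starting orientation, so a single bitwise AND evaluates a trajectory section for all orientation-shifted copies of a turn at once, and a prefix-AND over sections stands in for the parallel reduction that handles dependencies between sections. The grid encoding, turn discretization, and serial scan below are assumptions for illustration, not the patented algorithm.

```python
# Each bit k of a mask corresponds to the trajectory copy that starts at
# orientation k; an integer AND therefore evaluates all copies at once.
NUM_ORIENTATIONS = 64

def free_mask_for_step(occupied_cells, cells_per_orientation):
    """Bit k is set if the cell visited at this step by the trajectory that
    starts at orientation k is free."""
    mask = 0
    for k, cell in enumerate(cells_per_orientation):
        if cell not in occupied_cells:
            mask |= 1 << k
    return mask

def prefix_and(masks):
    """Inclusive prefix-AND over trajectory sections (serial stand-in for the
    parallel reduction): result[t] keeps bit k only if every section up to t
    is free for orientation k."""
    out, acc = [], (1 << NUM_ORIENTATIONS) - 1
    for m in masks:
        acc &= m
        out.append(acc)
    return out

# Toy example: 3 sections of a fixed-radius turn; each section lists, per
# starting orientation, which grid cell the pose lands in (ints for brevity).
occupied = {5, 17}
step_cells = [
    [(k + 0) % 32 for k in range(NUM_ORIENTATIONS)],   # section 0
    [(k + 1) % 32 for k in range(NUM_ORIENTATIONS)],   # section 1 (shifted copy)
    [(k + 2) % 32 for k in range(NUM_ORIENTATIONS)],   # section 2
]
masks = [free_mask_for_step(occupied, cells) for cells in step_cells]
reach = prefix_and(masks)
reachable = [k for k in range(NUM_ORIENTATIONS) if (reach[-1] >> k) & 1]
print(len(reachable), "of", NUM_ORIENTATIONS, "orientations stay reachable")
```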
-
Publication Number: US11854401B2
Publication Date: 2023-12-26
Application Number: US18067176
Filing Date: 2022-12-16
Applicant: NVIDIA Corporation
Inventor: Yue Wu , Pekka Janis , Xin Tong , Cheng-Chieh Yang , Minwoo Park , David Nister
IPC: G08G1/16 , G06V10/82 , G06V20/58 , G06V20/10 , G06F18/214 , G05D1/00 , G05D1/02 , G06N3/04 , G06T7/20
CPC classification number: G08G1/166 , G05D1/0088 , G05D1/0289 , G06F18/214 , G06N3/0418 , G06T7/20 , G06V10/82 , G06V20/10 , G06V20/58 , G05D2201/0213
Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
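The cross-sensor ground-truth generation can be sketched as follows: a RADAR track matched to an image detection supplies range and range rate, from which time-to-collision and world-space velocity labels can be derived for training the sequential DNN that later runs on image data alone. The data-class fields and the matching step are assumptions, not the patented pipeline.

```python
# Hedged sketch of deriving TTC and velocity labels from a RADAR track matched
# to an image detection. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float           # longitudinal distance to the object
    range_rate_mps: float    # negative when closing

@dataclass
class ImageDetection:
    box: tuple               # (x1, y1, x2, y2) in image space
    track_id: int

def ttc_label(track: RadarTrack, eps=1e-3):
    """Time-to-collision label: range divided by closing speed; None if not closing."""
    closing = -track.range_rate_mps
    return track.range_m / closing if closing > eps else None

def velocity_label(track: RadarTrack, ego_speed_mps: float):
    """Object's world-space longitudinal velocity = ego speed + relative range rate."""
    return ego_speed_mps + track.range_rate_mps

det = ImageDetection(box=(410, 220, 520, 330), track_id=7)
radar = RadarTrack(range_m=24.0, range_rate_mps=-6.0)
print("TTC label (s):", ttc_label(radar))                   # 4.0
print("Velocity label (m/s):", velocity_label(radar, 15.0)) # 9.0
```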
-
Publication Number: US20230357076A1
Publication Date: 2023-11-09
Application Number: US18311172
Filing Date: 2023-05-02
Applicant: NVIDIA Corporation
Inventor: Michael Kroepfl , Amir Akbarzadeh , Ruchi Bhargava , Vaibhav Thukral , Neda Cvijetic , Vadim Cugunovs , David Nister , Birgit Henke , Ibrahim Eden , Youding Zhu , Michael Grabner , Ivana Stojanovic , Yu Sheng , Jeffrey Liu , Enliang Zheng , Jordan Marr , Andrew Carley
IPC: C03C17/36
CPC classification number: C03C17/3607 , C03C17/3639 , C03C17/3644 , C03C17/366 , C03C17/3626 , C03C17/3668 , C03C17/3642 , C03C17/3681 , C03C2217/70 , C03C2217/216 , C03C2217/228 , C03C2217/24 , C03C2217/256 , C03C2217/281 , C03C2217/22 , C03C2217/23 , C03C2218/156
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
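One plausible way to fuse per-modality localization results, shown purely as a sketch, is inverse-covariance weighting of the individual pose estimates. The 2-D pose, the covariance values, and the information-form fusion are assumptions; the abstract does not specify the fusion scheme.

```python
# Minimal sketch (assumptions throughout): each sensor modality yields an
# independent 2-D pose estimate with a covariance; information-form fusion
# combines them into one localization result.
import numpy as np

def fuse_localizations(estimates):
    """estimates: list of (mean(2,), covariance(2,2)) pairs, one per sensor modality."""
    info = np.zeros((2, 2))
    info_mean = np.zeros(2)
    for mean, cov in estimates:
        w = np.linalg.inv(cov)        # information matrix for this modality
        info += w
        info_mean += w @ mean
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_mean, fused_cov

camera = (np.array([10.2, 4.9]), np.diag([0.5, 0.5]))   # camera-to-map match
lidar  = (np.array([10.0, 5.1]), np.diag([0.1, 0.1]))   # LiDAR-to-map match
radar  = (np.array([10.4, 4.7]), np.diag([1.0, 1.0]))   # RADAR-to-map match
pose, cov = fuse_localizations([camera, lidar, radar])
print("fused pose:", pose)
```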
-
Publication Number: US20230334317A1
Publication Date: 2023-10-19
Application Number: US18337854
Filing Date: 2023-06-20
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
IPC: G06N3/08 , B60W30/14 , B60W60/00 , G06V20/56 , G06F18/214 , G06V10/762
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06V20/56 , G06F18/2155 , G06V10/763
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment for which depth is predicted, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN.
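A hedged sketch of the multiple-loss idea: a depth-regression loss split into per-region terms, one for obstacle pixels and one for the free-space boundary region, each with its own weight. The masks, weights, and L1 form are assumptions rather than the patented training objective.

```python
# Illustrative sketch, not the patented training code: a region-weighted depth loss.
import torch
import torch.nn.functional as F

def region_weighted_depth_loss(pred, target, obstacle_mask, boundary_mask,
                               w_obstacle=1.0, w_boundary=2.0):
    """pred/target: (B,1,H,W) depth maps; masks: (B,1,H,W) booleans selecting each region."""
    def masked_l1(mask):
        if mask.any():
            return F.l1_loss(pred[mask], target[mask])
        return pred.sum() * 0.0   # keeps the graph connected when a region is empty
    return (w_obstacle * masked_l1(obstacle_mask)
            + w_boundary * masked_l1(boundary_mask))

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
obstacle_mask = torch.rand(2, 1, 64, 64) > 0.7   # assumed obstacle pixels
boundary_mask = torch.rand(2, 1, 64, 64) > 0.9   # assumed free-space-boundary pixels
loss = region_weighted_depth_loss(pred, target, obstacle_mask, boundary_mask)
loss.backward()
print(float(loss))
```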
-
Publication Number: US20230213945A1
Publication Date: 2023-07-06
Application Number: US17565837
Filing Date: 2021-12-30
Applicant: NVIDIA Corporation
Inventor: Neeraj Sajjan , Mehmet K. Kocamaz , Junghyun Kwon , Sangmin Oh , Minwoo Park , David Nister
CPC classification number: G05D1/0248 , G05D1/0257 , G06N3/08 , G05D1/0221 , G05D1/0219 , G05D1/0088 , G05D1/0251 , G05D1/0255 , G05D2201/0213
Abstract: In various examples, one or more output channels of a deep neural network (DNN) may be used to determine assignments of obstacles to paths. To increase the accuracy of the DNN, the input to the DNN may include an input image, one or more representations of path locations, and/or one or more representations of obstacle locations. The system may thus repurpose previously computed information—e.g., obstacle locations, path locations, etc.—from other operations of the system, and use them to generate more detailed inputs for the DNN to increase accuracy of the obstacle to path assignments. Once the output channels are computed using the DNN, computed bounding shapes for the objects may be compared to the outputs to determine the path assignments for each object.
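The final comparison step might look like the following sketch, in which each object's bounding box is scored against per-path output channels (one spatial map per candidate path) and assigned to the channel with the highest mean activation inside the box. The channel semantics and the score threshold are assumptions.

```python
# Hedged sketch: assign objects to paths by comparing bounding boxes against
# per-path DNN output channels. Thresholds and channel meaning are assumed.
import numpy as np

def assign_objects_to_paths(path_channels, boxes, min_score=0.2):
    """path_channels: (P, H, W) maps in [0,1]; boxes: list of (x1, y1, x2, y2) in pixels."""
    assignments = []
    for (x1, y1, x2, y2) in boxes:
        region = path_channels[:, y1:y2, x1:x2]               # (P, h, w) crop per path
        scores = region.reshape(region.shape[0], -1).mean(axis=1)
        best = int(np.argmax(scores))
        assignments.append(best if scores[best] >= min_score else None)
    return assignments

H, W = 120, 160
channels = np.zeros((2, H, W))
channels[0, :, :80] = 0.9     # path 0 covers the left half of the image
channels[1, :, 80:] = 0.9     # path 1 covers the right half
boxes = [(10, 40, 50, 90), (100, 30, 140, 80)]
print(assign_objects_to_paths(channels, boxes))   # [0, 1]
```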
-
Publication Number: US20230204383A1
Publication Date: 2023-06-29
Application Number: US18175713
Filing Date: 2023-02-28
Applicant: NVIDIA Corporation
Inventor: Amir Akbarzadeh , David Nister , Ruchi Bhargava , Birgit Henke , Ivana Stojanovic , Yu Sheng
CPC classification number: G01C21/3841 , G01C21/1652 , G01C21/3811 , G01C21/3867 , G01C21/3878 , G01C21/3896 , G06N3/02
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams – or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data – corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data – and ultimately a fused high definition (HD) map – that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
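Complementing the localization sketch above, the map-creation side can be illustrated by grouping landmark observations from multiple drives (one mapstream per drive) and averaging them into a fused HD-map entry. The landmark identifiers and the simple averaging are assumptions about how multi-drive fusion could be structured.

```python
# Illustrative sketch only: fuse landmark observations collected across
# multiple drives into a single HD-map entry.
from collections import defaultdict

def fuse_mapstreams(mapstreams):
    """mapstreams: list of drives; each drive is a list of
    (landmark_id, x, y) observations in a shared map frame."""
    buckets = defaultdict(list)
    for drive in mapstreams:
        for landmark_id, x, y in drive:
            buckets[landmark_id].append((x, y))
    fused = {}
    for landmark_id, pts in buckets.items():
        n = len(pts)
        fused[landmark_id] = (sum(p[0] for p in pts) / n,
                              sum(p[1] for p in pts) / n,
                              n)                    # (x, y, observation count)
    return fused

drive_a = [("lane_42", 100.1, 5.0), ("sign_7", 120.4, 2.1)]
drive_b = [("lane_42", 99.9, 5.2), ("sign_7", 120.6, 1.9)]
print(fuse_mapstreams([drive_a, drive_b]))
```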
-
Publication Number: US11675359B2
Publication Date: 2023-06-13
Application Number: US16433994
Filing Date: 2019-06-06
Applicant: NVIDIA Corporation
Inventor: Regan Blythe Towal , Maroof Mohammed Farooq , Vijay Chintalapudi , Carolina Parada , David Nister
CPC classification number: G05D1/0221 , G06N3/04 , G06T7/60 , G06V10/764 , G06V10/82 , G06V20/56 , G06V20/588
Abstract: In various examples, a deep learning solution for path detection is implemented to generate a more abstract definition of a drivable path without reliance on explicit lane-markings—by using a detection-based approach. Using approaches of the present disclosure, the identification of drivable paths may be possible in environments where conventional approaches are unreliable, or fail—such as where lane markings do not exist or are occluded. The deep learning solution may generate outputs that represent geometries for one or more drivable paths in an environment and confidence values corresponding to the path types or classes to which the geometries correspond. These outputs may be directly usable by an autonomous vehicle—such as an autonomous driving software stack—with minimal post-processing.
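A small sketch of how the described outputs might be consumed downstream: each candidate drivable path carries a polyline geometry and per-class confidence values, and paths whose best class clears a threshold are kept. The class names, output layout, and threshold are assumptions.

```python
# Hedged sketch of decoding drivable-path outputs; labels and layout are assumed.
import numpy as np

PATH_CLASSES = ["ego_path", "left_adjacent", "right_adjacent"]  # assumed labels

def decode_paths(geometries, class_confidences, threshold=0.5):
    """geometries: (N, K, 2) polyline points per candidate path;
    class_confidences: (N, C) scores over PATH_CLASSES."""
    paths = []
    for geom, conf in zip(geometries, class_confidences):
        best = int(np.argmax(conf))
        if conf[best] >= threshold:
            paths.append({"class": PATH_CLASSES[best],
                          "confidence": float(conf[best]),
                          "polyline": geom.tolist()})
    return paths

geometries = np.array([[[0.0, 0.0], [0.1, 5.0], [0.3, 10.0]],
                       [[-3.5, 0.0], [-3.4, 5.0], [-3.2, 10.0]]])
confidences = np.array([[0.92, 0.05, 0.03],
                        [0.30, 0.65, 0.05]])
print(decode_paths(geometries, confidences))
```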
-
Publication Number: US11644834B2
Publication Date: 2023-05-09
Application Number: US16186473
Filing Date: 2018-11-09
Applicant: NVIDIA Corporation
Inventor: Michael Alan Ditty , Gary Hicok , Jonathan Sweedler , Clement Farabet , Mohammed Abdulla Yousuf , Tai-Yuen Chan , Ram Ganapathi , Ashok Srinivasan , Michael Rod Truog , Karl Greb , John George Mathieson , David Nister , Kevin Flory , Daniel Perrin , Dan Hettena
CPC classification number: G05D1/0088 , G05D1/0248 , G05D1/0274 , G06F15/7807 , G06N3/063 , G06V20/58 , G06V20/588 , G05D2201/0213 , G06N3/0454
Abstract: Autonomous driving is one of the world's most challenging computational problems. Very large amounts of data from cameras, RADARs, LIDARs, and HD-Maps must be processed to generate commands to control the car safely and comfortably in real time. This challenging task requires a dedicated supercomputer that is energy-efficient and low-power, complex high-performance software, and breakthroughs in deep learning AI algorithms. To meet this task, the present technology provides advanced systems and methods that facilitate autonomous driving functionality, including a platform for autonomous driving Levels 3, 4, and/or 5. In preferred embodiments, the technology provides an end-to-end platform with a flexible architecture, including an architecture for autonomous vehicles that leverages computer vision and known ADAS techniques, providing diversity and redundancy, and meeting functional safety standards. The technology provides for a faster, more reliable, safer, energy-efficient and space-efficient System-on-a-Chip, which may be integrated into a flexible, expandable platform that enables a wide range of autonomous vehicles, including cars, taxis, trucks, and buses, as well as watercraft and aircraft.