-
151.
Publication No.: US20240053749A1
Publication Date: 2024-02-15
Application No.: US18494374
Filing Date: 2023-10-25
Applicant: NVIDIA Corporation
Inventor: David Nister , Yizhou Wang , Jaikrishna Soundararajan , Sachit Kadle
CPC classification number: G05D1/0088 , G06T7/70 , G06T1/20 , G05D1/0214 , B60W50/06 , B60W60/0015 , G06V20/58 , B60W30/09
Abstract: To determine a path through a pose configuration space, trajectories of poses may be evaluated in parallel based at least on translating the trajectories along at least one axis of the pose configuration space (e.g., an orientation axis). A trajectory may include at least a portion of a turn having a fixed turn radius. Turns or turn portions that have the same turn radius and initial orientation can be shifted along the orientation axis and processed in parallel, as they are translated copies of one another with different starting points. Trajectories may be evaluated based at least on processing the variables used to evaluate reachability as bit vectors, with threads effectively performing large vector operations in synchronization. A parallel reduction pattern may be used to account for dependencies that may exist between sections of a trajectory when evaluating reachability, allowing the sections to be processed in parallel.
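A minimal sketch of the bit-vector idea the abstract describes, with hypothetical names: reachability over discretized orientation bins is packed into one integer bitmask, so shifting a turn along the orientation axis is a plain bit shift, and each trajectory section reduces to an associative (shift, mask) pair that a parallel reduction could combine. The bin count, masks, and function names are assumptions of this sketch, not the patented method.

```python
NUM_BINS = 16
FULL = (1 << NUM_BINS) - 1  # all orientation bins set

def apply_section(reach, shift, free_mask):
    """Propagate a reachability bitmask through one trajectory section."""
    return ((reach << shift) & FULL) & free_mask

def compose(a, b):
    """Combine two sections into one equivalent section (associative)."""
    s1, m1 = a
    s2, m2 = b
    return (s1 + s2, ((m1 << s2) & FULL) & m2)

def reduce_sections(sections):
    """Sequential stand-in for the parallel reduction over sections."""
    total = (0, FULL)  # identity section
    for sec in sections:
        total = compose(total, sec)
    return total
```

Because `compose` is associative, the sections could equally be combined pairwise in a tree, which is what makes a parallel reduction over trajectory sections possible.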
-
152.
Publication No.: US11897471B2
Publication Date: 2024-02-13
Application No.: US18162576
Filing Date: 2023-01-31
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
IPC: B60W30/18 , G06N3/08 , G08G1/01 , B60W30/095 , B60W60/00 , B60W30/09 , G06V20/56 , G06V10/25 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/70 , G06V10/75
CPC classification number: B60W30/18154 , B60W30/09 , B60W30/095 , B60W60/0011 , G06N3/08 , G06V10/25 , G06V10/751 , G06V10/764 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/588 , G06V20/70 , G08G1/0125
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
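A hedged sketch of one post-processing step the abstract mentions: pixels of an intersection coverage map above a confidence threshold vote for their predicted distance, and the votes are averaged with coverage as the weight. The 1D pixel layout, threshold, and names are illustrative assumptions, not the patented decoding.

```python
def decode_distance(coverage, distances, threshold=0.5):
    """Coverage-weighted mean of per-pixel distance predictions.

    coverage:  per-pixel confidence that the pixel belongs to an intersection
    distances: per-pixel regressed distance to the intersection
    """
    num = 0.0
    den = 0.0
    for c, d in zip(coverage, distances):
        if c >= threshold:
            num += c * d
            den += c
    return num / den if den > 0.0 else None  # None: no confident pixels
```

For example, coverage `[0.1, 0.9, 0.6]` with distances `[5.0, 20.0, 22.0]` keeps the last two pixels and yields their coverage-weighted mean.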
-
153.
Publication No.: US11885907B2
Publication Date: 2024-01-30
Application No.: US16836583
Filing Date: 2020-03-31
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
IPC: G01S7/295 , G06T7/246 , G06T7/73 , G01S7/41 , G01S13/931 , G06N3/08 , G06V10/764 , G06V10/82 , G06V20/58 , G06V20/64
CPC classification number: G01S7/2955 , G01S7/414 , G01S7/417 , G01S13/931 , G06N3/08 , G06T7/246 , G06T7/73 , G06V10/764 , G06V10/82 , G06V20/58 , G06V20/64 , G06T2207/10044 , G06T2207/20084 , G06T2207/30261
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
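An illustrative version of the decode-and-cluster step above: threshold the class-confidence map, then group adjacent cells with a BFS connected-components pass; each cluster becomes one detected instance whose bounding box is the cells' extent. The grid layout, threshold, and 4-connectivity are assumptions for the sketch.

```python
from collections import deque

def cluster_instances(conf, threshold=0.5):
    """Cluster a 2D confidence map into per-instance bounding boxes."""
    h, w = len(conf), len(conf[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if conf[y][x] < threshold or seen[y][x]:
                continue
            # BFS over 4-connected cells above the threshold
            q = deque([(y, x)])
            seen[y][x] = True
            ys, xs = [], []
            while q:
                cy, cx = q.popleft()
                ys.append(cy)
                xs.append(cx)
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and conf[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        q.append((ny, nx))
            boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```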
-
154.
Publication No.: US11884294B2
Publication Date: 2024-01-30
Application No.: US17130667
Filing Date: 2020-12-22
Applicant: NVIDIA Corporation
Inventor: Zhenyi Zhang , Yizhou Wang , David Nister , Neda Cvijetic
IPC: B60W60/00 , B60W30/18 , B60W30/095 , B60W40/105
CPC classification number: B60W60/0011 , B60W30/0956 , B60W30/18163 , B60W40/105 , B60W2420/42 , B60W2420/52 , B60W2552/53
Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the plurality of longitudinal speed profile candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from the longitudinal speed profile candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile.
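A hedged sketch of the selection step: each (gap, speed-profile) candidate is scored against a set of weighted criteria and the best-scoring candidate is chosen. The candidate fields, criteria, and weights here are invented for illustration only.

```python
def select_profile(candidates, criteria):
    """Pick the candidate with the highest weighted score.

    candidates: list of dicts describing (gap, speed profile) pairs
    criteria:   list of (weight, scoring_fn) pairs, each fn maps a
                candidate to a score in [0, 1]
    """
    best, best_score = None, float("-inf")
    for cand in candidates:
        score = sum(w * fn(cand) for w, fn in criteria)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

Usage might weight progress toward the gap more heavily than ride comfort, as below.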
-
155.
Publication No.: US20230152801A1
Publication Date: 2023-05-18
Application No.: US18151012
Filing Date: 2023-01-06
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Xiaolin Lin , Hae-Jong Seo , David Nister , Neda Cvijetic
IPC: G05D1/00 , G06N3/04 , G06V20/56 , G06F18/214 , G06F18/23 , G06F18/2411 , G06V10/764 , G06V10/776 , G06V10/82 , G06V10/44 , G06V10/48 , G06V10/94
CPC classification number: G05D1/0077 , G05D1/0088 , G06F18/23 , G06F18/2155 , G06F18/2411 , G06N3/0418 , G06V10/48 , G06V10/82 , G06V10/457 , G06V10/764 , G06V10/776 , G06V10/955 , G06V20/588 , G05D2201/0213
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment (e.g., for updating a world model) in a variety of autonomous machine applications.
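One speculative way to decode the per-pixel distance regressions above, reduced to a single row of pixels: each pixel casts a vote at its own position plus its predicted signed distance to the line, and the position with the most votes is taken as the line location. The 1D layout and voting scheme are assumptions of this sketch.

```python
def vote_line_position(distances):
    """Each pixel votes at (its index + predicted signed distance);
    return the position that collects the most votes."""
    votes = {}
    for pixel, d in enumerate(distances):
        target = pixel + d
        votes[target] = votes.get(target, 0) + 1
    return max(votes, key=votes.get)
```

With consistent predictions, every pixel votes for the same location, so the decoded line position is robust even if a few pixels disagree.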
-
156.
Publication No.: US20230130814A1
Publication Date: 2023-04-27
Application No.: US17512495
Filing Date: 2021-10-27
Applicant: NVIDIA Corporation
Inventor: David Nister , Minwoo Park , Miguel Sainz Serra , Vaibhav Thukral , Berta Rodriguez Hervas
Abstract: In examples, autonomous vehicles are enabled to negotiate yield scenarios in a safe and predictable manner. In response to detecting a yield scenario, a wait element data structure is generated that encodes the geometries of an ego path and of a contender path that includes at least one contention point with the ego path, as well as a state of contention associated with the at least one contention point. The geometry of the yield scenario context may also be encoded, such as the inside ground of an intersection, entry or exit lines, etc. The wait element data structure is passed to a yield planner of the autonomous vehicle. The yield planner determines a yielding behavior for the autonomous vehicle based at least on the wait element data structure. A control system of the autonomous vehicle may operate the autonomous vehicle in accordance with the yielding behavior, such that the autonomous vehicle safely negotiates the yield scenario.
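One way the wait element described above could be laid out, sketched as a plain data structure; the field names, the contention states, and the trivial yield rule are assumptions, not the actual encoding.

```python
from dataclasses import dataclass
from enum import Enum

class ContentionState(Enum):
    EGO_YIELDS = "ego_yields"
    CONTENDER_YIELDS = "contender_yields"
    UNRESOLVED = "unresolved"

@dataclass
class ContentionPoint:
    ego_arclength: float        # distance along the ego path to the point
    contender_arclength: float  # distance along the contender path
    state: ContentionState

@dataclass
class WaitElement:
    ego_path: list              # polyline of (x, y) points
    contender_path: list        # polyline of (x, y) points
    contention_points: list     # list of ContentionPoint
    entry_line: tuple = None    # optional yield-scenario context geometry
    exit_line: tuple = None

def ego_must_yield(element):
    """Trivial yield planner: yield if any contention point is unresolved
    or explicitly assigned to the ego vehicle."""
    return any(cp.state in (ContentionState.EGO_YIELDS,
                            ContentionState.UNRESOLVED)
               for cp in element.contention_points)
```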
-
157.
Publication No.: US11604470B2
Publication Date: 2023-03-14
Application No.: US17356337
Filing Date: 2021-06-23
Applicant: NVIDIA Corporation
Inventor: David Nister , Hon-Leung Lee , Julia Ng , Yizhou Wang
IPC: G05D1/02 , B60W30/09 , G05D1/08 , B60W30/095
Abstract: In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory and at least one object-occupied trajectory may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the vehicle-occupied trajectory to the object-occupied trajectory. Based on the intersection, the vehicle may then execute a first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements a second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure.
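A hedged sketch of the trajectory comparison: each trajectory is sampled as a sequence of time-stamped discs (center, radius) approximating the occupied volume, and two trajectories intersect if discs at the same time step overlap. The disc approximation and synchronized sampling are simplifying assumptions of this sketch.

```python
import math

def trajectories_intersect(traj_a, traj_b):
    """traj_*: list of (x, y, radius) discs, one per synchronized time step.
    Returns True if the occupied volumes overlap at any time step."""
    for (ax, ay, ar), (bx, by, br) in zip(traj_a, traj_b):
        if math.hypot(ax - bx, ay - by) <= ar + br:
            return True
    return False
```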
-
158.
Publication No.: US20230049567A1
Publication Date: 2023-02-16
Application No.: US17976581
Filing Date: 2022-10-28
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
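An illustrative version of the label filter described above: each LIDAR-derived label box is kept only if it contains at least a threshold number of RADAR detections. Axis-aligned 2D boxes and the parameter names are simplifications for the sketch.

```python
def filter_labels(labels, radar_points, min_detections=2):
    """Keep labels containing at least min_detections RADAR points.

    labels:       list of (xmin, ymin, xmax, ymax) boxes from LIDAR labels
    radar_points: list of (x, y) RADAR detections
    """
    kept = []
    for (x0, y0, x1, y1) in labels:
        count = sum(1 for (px, py) in radar_points
                    if x0 <= px <= x1 and y0 <= py <= y1)
        if count >= min_detections:
            kept.append((x0, y0, x1, y1))
    return kept
```

The surviving boxes would then serve as ground truth for training on the RADAR input.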
-
159.
Publication No.: US20230012645A1
Publication Date: 2023-01-19
Application No.: US17952881
Filing Date: 2022-09-26
Applicant: NVIDIA Corporation
Inventor: Hae-Jong Seo , Abhishek Bajpayee , David Nister , Minwoo Park , Neda Cvijetic
Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as associated blindness classifications and/or blindness attributes associated therewith. In addition, the DNN may predict a usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
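A minimal sketch of the filtering the abstract describes: per-frame blindness and usability predictions are combined to drop sensor-data instances that would compromise downstream operations. The thresholds and field names are assumptions for illustration.

```python
def usable_frames(frames, max_blind_fraction=0.3, min_usability=0.5):
    """Keep frames whose predicted blind area is small enough and whose
    predicted usability is high enough for downstream operations.

    frames: list of dicts with 'blind_fraction' and 'usability' predictions
    """
    return [f for f in frames
            if f["blind_fraction"] <= max_blind_fraction
            and f["usability"] >= min_usability]
```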
-
160.
Publication No.: US20220415059A1
Publication Date: 2022-12-29
Application No.: US17895940
Filing Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
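A schematic of the chained multi-view pipeline above: a first stage labels the perspective view, the labels are projected into a top-down representation, and a second stage extracts instances there. The stages are stubs that only carry data through, to show the structure of the chaining rather than any real network.

```python
def stage1_perspective_segmentation(pixels):
    """Stub for the first stage: mark pixels above a threshold as 'object'
    in the perspective view."""
    return [1 if p > 0.5 else 0 for p in pixels]

def project_to_top_down(seg, depth):
    """Stub projection: drop each segmented pixel into its depth bin of a
    top-down grid (bin -> support count)."""
    grid = {}
    for label, d in zip(seg, depth):
        if label:
            grid[d] = grid.get(d, 0) + 1
    return grid

def stage2_top_down_instances(grid, min_support=2):
    """Stub for the second stage: each depth bin with enough support
    becomes one detected instance."""
    return sorted(d for d, n in grid.items() if n >= min_support)
```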
-