-
Publication Number: US12026955B2
Publication Date: 2024-07-02
Application Number: US17489346
Filing Date: 2021-09-29
Applicant: NVIDIA Corporation
Inventor: Mehmet Kocamaz, Neeraj Sajjan, Sangmin Oh, David Nister, Junghyun Kwon, Minwoo Park
CPC classification number: G06V20/58, G06N3/08, G06V10/255, G06V10/95, G06V20/588, G06V20/64
Abstract: In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object to lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
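To make the described post-processing concrete, the following is a minimal Python sketch of one way to turn a lane-identifier segmentation mask into object-to-lane assignments. It is illustrative only and not the patent's implementation; the mask encoding, function names, and majority-vote rule are assumptions.

```python
# Hypothetical post-processing sketch: assign each detected object to the lane
# whose mask pixels overlap its bounding box the most. Assumed encoding:
# lane_mask holds 0 for "no lane" and 1..N for lane identifiers.
import numpy as np

def assign_objects_to_lanes(lane_mask: np.ndarray,
                            boxes: list[tuple[int, int, int, int]]) -> list[int]:
    """lane_mask: (H, W) int array; boxes: (x0, y0, x1, y1) pixel boxes.
    Returns the dominant lane ID for each box, or 0 if no lane pixels overlap."""
    assignments = []
    for x0, y0, x1, y1 in boxes:
        region = lane_mask[y0:y1, x0:x1]
        lane_ids, counts = np.unique(region[region > 0], return_counts=True)
        assignments.append(int(lane_ids[np.argmax(counts)]) if lane_ids.size else 0)
    return assignments

if __name__ == "__main__":
    mask = np.zeros((8, 8), dtype=int)
    mask[:, :4] = 1  # lane 1 covers the left half of the toy image
    mask[:, 4:] = 2  # lane 2 covers the right half
    print(assign_objects_to_lanes(mask, [(0, 0, 3, 8), (5, 0, 8, 8)]))  # -> [1, 2]
```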
-
Publication Number: US20240169549A1
Publication Date: 2024-05-23
Application Number: US18424219
Filing Date: 2024-01-26
Applicant: NVIDIA Corporation
Inventor: Dongwoo Lee, Junghyun Kwon, Sangmin Oh, Wenchao Zheng, Hae-Jong Seo, David Nister, Berta Rodriguez Hervas
CPC classification number: G06T7/13, G06T7/40, G06T17/30, G06V10/454, G06V10/751, G06V10/772, G06V10/82, G06V20/586, G06T2207/10021, G06T2207/20084, G06T2207/30264
Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
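As an illustration of the geometry described above, the sketch below decodes a skewed polygon from anchor-corner displacements, selects an entrance from per-corner confidences, and applies a minimum-aggregate-distance test for positive training samples. It is a simplified stand-in under stated assumptions, not the patented method; all names are hypothetical.

```python
# Illustrative decoding sketch (not the patent's implementation): recover the
# skewed polygon from anchor-corner displacements, pick the entrance from
# per-corner confidences, and apply an aggregate-distance matching test.
import numpy as np

def decode_polygon(anchor_corners: np.ndarray, displacements: np.ndarray) -> np.ndarray:
    """anchor_corners, displacements: (4, 2) arrays of (x, y) values."""
    return anchor_corners + displacements

def entrance_corners(polygon: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """Return the two corners with the highest predicted entrance confidence."""
    top2 = np.sort(np.argsort(confidences)[-2:])
    return polygon[top2]

def is_positive_sample(pred_corners: np.ndarray, gt_corners: np.ndarray,
                       threshold: float) -> bool:
    """Assumed matching rule: minimum aggregate corner-to-corner distance over
    the four cyclic orderings of the predicted polygon, compared to a threshold."""
    aggregate = [
        np.sum(np.linalg.norm(np.roll(pred_corners, shift, axis=0) - gt_corners, axis=1))
        for shift in range(4)
    ]
    return min(aggregate) < threshold
```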
-
Publication Number: US11966228B2
Publication Date: 2024-04-23
Application Number: US18083159
Filing Date: 2022-12-16
Applicant: NVIDIA Corporation
Inventor: David Nister, Hon-Leung Lee, Julia Ng, Yizhou Wang
IPC: G05D1/00, B60W30/09, B60W30/095
CPC classification number: G05D1/0214, B60W30/09, B60W30/095, G05D1/0221, G05D1/0223, G05D1/0231, G05D1/0242, G05D1/0255, G05D1/0257, G05D1/027, G05D1/0278, G05D1/0289, G05D1/0891, B60W2520/06, B60W2520/10, B60W2520/14, B60W2520/16, B60W2520/18, B60W2554/00
Abstract: In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory and at least one object-occupied trajectory may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the vehicle-occupied trajectory to the object-occupied trajectory. Based on the intersection, the vehicle may then execute a first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements a second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure.
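A minimal sketch of the intersection test, assuming each occupied trajectory has been discretized into a set of claimed grid cells per time step; the representation and names are assumptions rather than the claimed method.

```python
# Assumed representation: each trajectory is a list of sets of occupied grid
# cells, one set per time step. An intersection exists if the vehicle and the
# object claim a common cell at any common time step.
def trajectories_intersect(vehicle_traj: list[set[tuple[int, int]]],
                           object_traj: list[set[tuple[int, int]]]) -> bool:
    """Return True if the vehicle's and object's claimed cell sets overlap
    at any common time step."""
    for vehicle_cells, object_cells in zip(vehicle_traj, object_traj):
        if vehicle_cells & object_cells:
            return True
    return False

if __name__ == "__main__":
    vehicle = [{(0, 0), (0, 1)}, {(1, 1), (1, 2)}]
    other   = [{(3, 3)},         {(1, 2), (2, 2)}]
    print(trajectories_intersect(vehicle, other))  # True: both claim (1, 2) at t=1
```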
-
Publication Number: US20240111025A1
Publication Date: 2024-04-04
Application Number: US18531103
Filing Date: 2023-12-06
Applicant: NVIDIA Corporation
Inventor: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
IPC: G01S7/48, G01S7/481, G01S17/894, G01S17/931, G06V10/764, G06V10/80, G06V10/82, G06V20/58
CPC classification number: G01S7/4802, G01S7/481, G01S17/894, G01S17/931, G06V10/764, G06V10/80, G06V10/82, G06V20/58, G01S7/28
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
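The cross-domain annotation idea can be sketched as projecting labeled image pixels onto LiDAR range-image cells. The pinhole projection below is a common approach and is offered only as an assumption; the patent's cross-injection procedure is not reproduced here, and all names are illustrative.

```python
# Hedged sketch of label propagation from the image domain to a LiDAR range
# image: project each LiDAR point into the camera and copy the image label at
# that pixel into the point's range-image cell.
import numpy as np

def propagate_labels(points_lidar: np.ndarray,      # (N, 3) xyz in the LiDAR frame
                     range_indices: np.ndarray,     # (N, 2) (row, col) of each point in the range image
                     T_cam_from_lidar: np.ndarray,  # (4, 4) LiDAR-to-camera transform
                     K: np.ndarray,                 # (3, 3) camera intrinsics
                     image_labels: np.ndarray,      # (H, W) per-pixel class IDs from image annotations
                     range_shape: tuple[int, int]) -> np.ndarray:
    """Return a label image in the LiDAR range view; -1 marks unlabeled cells."""
    labels = np.full(range_shape, -1, dtype=int)
    homogeneous = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ homogeneous.T)[:3].T
    front = pts_cam[:, 2] > 1e-6                    # keep points in front of the camera
    uv = (K @ pts_cam[front].T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image_labels.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = range_indices[front][inside]
    labels[idx[:, 0], idx[:, 1]] = image_labels[v[inside], u[inside]]
    return labels
```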
-
Publication Number: US20230418299A1
Publication Date: 2023-12-28
Application Number: US18464621
Filing Date: 2023-09-11
Applicant: NVIDIA CORPORATION
Inventor: David Nister, Anton Vorontsov
CPC classification number: G05D1/0214, B60R1/00, G05D1/0231, B60W30/08, G06V20/58, G06V20/584, B60R2300/30, G05D1/0242, G05D1/0255, G05D1/0257
Abstract: In various examples, sensor data representative of a field of view of at least one sensor of a vehicle in an environment is received from the at least one sensor. Based at least in part on the sensor data, parameters of an object located in the environment are determined. Trajectories of the object are modeled toward target positions based at least in part on the parameters of the object. From the trajectories, safe time intervals (and/or safe arrival times) over which the vehicle occupying the plurality of target positions would not result in a collision with the object are computed. Based at least in part on the safe time intervals (and/or safe arrival times) and a position of the vehicle in the environment, a trajectory for the vehicle may be generated and/or analyzed.
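A simplified sketch of the safe-arrival-time idea follows, where a plain distance threshold stands in for the full occupancy comparison; the discretization, names, and clearance test are assumptions rather than the claimed computation.

```python
# Simplified sketch: given a modeled object trajectory sampled at discrete
# times, keep the timestamps at which the ego vehicle could occupy a target
# position with at least `clearance` distance to the object.
import numpy as np

def safe_arrival_times(target_xy: np.ndarray,    # (2,) target position
                       object_traj: np.ndarray,  # (T, 2) object positions per step
                       times: np.ndarray,        # (T,) timestamps
                       clearance: float) -> np.ndarray:
    """Return the timestamps at which the target position is at least
    `clearance` away from the modeled object position."""
    distances = np.linalg.norm(object_traj - target_xy, axis=1)
    return times[distances >= clearance]

if __name__ == "__main__":
    t = np.linspace(0.0, 3.0, 7)
    obj = np.stack([t * 2.0, np.zeros_like(t)], axis=1)  # object moving along +x
    print(safe_arrival_times(np.array([3.0, 0.0]), obj, t, clearance=1.5))
```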
-
Publication Number: US11788861B2
Publication Date: 2023-10-17
Application Number: US17008074
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
CPC classification number: G01C21/3841, G01C21/1652, G01C21/3811, G01C21/3867, G01C21/3878, G01C21/3896, G06N3/02
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
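One way to picture the final fusion step is a confidence-weighted combination of per-modality localization results, as in the sketch below; the weighting scheme and names are assumptions, and the patent's fusion is not specified here.

```python
# Hedged illustration: fuse per-modality (x, y, heading) localization results
# into a single estimate using normalized confidence weights.
import numpy as np

def fuse_localization(poses: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """poses: (M, 3) per-modality (x, y, heading) results, heading in radians.
    confidences: (M,) non-negative weights, e.g. per-modality matching scores."""
    w = confidences / confidences.sum()
    xy = (poses[:, :2] * w[:, None]).sum(axis=0)
    # Average headings on the unit circle to avoid wrap-around artifacts.
    heading = np.arctan2((np.sin(poses[:, 2]) * w).sum(),
                         (np.cos(poses[:, 2]) * w).sum())
    return np.array([xy[0], xy[1], heading])

if __name__ == "__main__":
    camera, lidar, radar = [10.0, 5.0, 0.02], [10.2, 5.1, -0.01], [9.8, 4.9, 0.05]
    print(fuse_localization(np.array([camera, lidar, radar]), np.array([0.5, 0.3, 0.2])))
```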
-
Publication Number: US11769052B2
Publication Date: 2023-09-26
Application Number: US17449310
Filing Date: 2021-09-29
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
IPC: G06K9/00, G06N3/08, B60W30/14, B60W60/00, G06V20/56, G06F18/214, G06V10/762
CPC classification number: G06N3/08, B60W30/14, B60W60/0011, G06F18/2155, G06V10/763, G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
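The resolution-bridging sampling mentioned at the end can be illustrated with a simple lookup from an output-resolution depth map at input-resolution pixel locations; nearest-neighbor sampling and the names below are assumptions, not the patent's sampling algorithm.

```python
# Minimal sketch: look up depth values for pixel locations given at the
# network's input resolution from a depth map predicted at a smaller output
# resolution, using nearest-neighbor indexing.
import numpy as np

def sample_depths(depth_map: np.ndarray,          # (h_out, w_out) predicted depths
                  pixels_in: np.ndarray,          # (N, 2) (x, y) at input resolution
                  input_shape: tuple[int, int]) -> np.ndarray:
    """Return one depth value per input-resolution pixel location."""
    h_in, w_in = input_shape
    h_out, w_out = depth_map.shape
    scale = np.array([w_out / w_in, h_out / h_in])
    xy_out = np.clip((pixels_in * scale).astype(int), 0, [w_out - 1, h_out - 1])
    return depth_map[xy_out[:, 1], xy_out[:, 0]]

if __name__ == "__main__":
    depth = np.arange(12, dtype=float).reshape(3, 4)    # toy 3x4 output-resolution map
    pts = np.array([[0, 0], [7, 5]])                     # pixels at an 8x6 input resolution
    print(sample_depths(depth, pts, input_shape=(6, 8)))  # -> [0., 11.]
```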
-
Publication Number: US20230166733A1
Publication Date: 2023-06-01
Application Number: US18162576
Filing Date: 2023-01-31
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi, Berta Rodriguez Hervas, Hang Dou, Igor Tryndin, David Nister, Minwoo Park, Neda Cvijetic, Junghyun Kwon, Trung Pham
IPC: B60W30/18, G06N3/08, G08G1/01, B60W30/095, B60W60/00, B60W30/09, G06V20/56, G06V10/25, G06V10/764, G06V10/80, G06V10/82, G06V20/70
CPC classification number: B60W30/18154, G06N3/08, G08G1/0125, B60W30/095, B60W60/0011, B60W30/09, G06V20/588, G06V10/25, G06V10/764, G06V10/803, G06V10/82, G06V20/70, G06V20/56
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
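As a rough illustration of the decoding step, the sketch below fuses a per-pixel intersection coverage map with per-pixel bounding-box regressions into a single box by coverage-weighted averaging; the tensor layout, threshold, and averaging rule are assumptions.

```python
# Hedged decoding sketch: gate per-pixel box regressions with the coverage map
# and average them, weighted by coverage, into one intersection bounding box.
import numpy as np

def decode_intersection_box(coverage: np.ndarray,   # (H, W) values in [0, 1]
                            boxes: np.ndarray,      # (H, W, 4) per-pixel (x0, y0, x1, y1)
                            threshold: float = 0.5):
    """Return a single (x0, y0, x1, y1) box, or None if nothing is detected."""
    mask = coverage > threshold
    if not mask.any():
        return None
    weights = coverage[mask]
    return tuple((boxes[mask] * weights[:, None]).sum(axis=0) / weights.sum())
```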
-
Publication Number: US11648945B2
Publication Date: 2023-05-16
Application Number: US16814351
Filing Date: 2020-03-10
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi, Berta Rodriguez Hervas, Hang Dou, Igor Tryndin, David Nister, Minwoo Park, Neda Cvijetic, Junghyun Kwon, Trung Pham
CPC classification number: G06V20/588, B60W30/09, B60W30/095, B60W60/0011, G06N3/08, G06V10/751, G08G1/0125
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
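For this related publication, a complementary sketch covers the distance outputs instead: a distance-to-intersection estimate is read out of a regressed distance map, gated by its distance coverage map. The layout and threshold are assumptions, not the patented decoding.

```python
# Complementary sketch to the box decoding above: coverage-weighted mean of a
# regressed distance map yields a single distance-to-intersection estimate.
import numpy as np

def decode_intersection_distance(distance_map: np.ndarray,      # (H, W) regressed distances (m)
                                 distance_coverage: np.ndarray,  # (H, W) values in [0, 1]
                                 threshold: float = 0.5):
    """Return the coverage-weighted mean distance, or None if no coverage."""
    mask = distance_coverage > threshold
    if not mask.any():
        return None
    weights = distance_coverage[mask]
    return float((distance_map[mask] * weights).sum() / weights.sum())
```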
-
Publication Number: US20230124848A1
Publication Date: 2023-04-20
Application Number: US18083159
Filing Date: 2022-12-16
Applicant: NVIDIA Corporation
Inventor: David Nister, Hon-Leung Lee, Julia Ng, Yizhou Wang
IPC: G05D1/02, B60W30/09, G05D1/08, B60W30/095
Abstract: In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory and at least one object-occupied trajectory may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the vehicle-occupied trajectory to the object-occupied trajectory. Based on the intersection, the vehicle may then execute a first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements a second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure.
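Complementing the intersection check sketched earlier in this list, the snippet below shows the final selection step: once an intersection is found, the candidate procedure with the lower estimated collision likelihood is chosen. The likelihood estimator and names are stand-in assumptions.

```python
# Stand-in sketch: pick whichever candidate procedure is estimated to have the
# lower collision likelihood when the object executes its own safety procedure.
from typing import Callable

def select_procedure(first_procedure: str,
                     alternative_procedure: str,
                     collision_likelihood: Callable[[str], float]) -> str:
    """Return the candidate procedure with the lower estimated collision likelihood."""
    if collision_likelihood(alternative_procedure) < collision_likelihood(first_procedure):
        return alternative_procedure
    return first_procedure

if __name__ == "__main__":
    estimates = {"brake_in_lane": 0.3, "brake_and_steer_right": 0.1}
    print(select_procedure("brake_in_lane", "brake_and_steer_right", estimates.__getitem__))
```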