-
Publication No.: US20240127454A1
Publication Date: 2024-04-18
Application No.: US18391276
Filing Date: 2023-12-20
Applicant: NVIDIA Corporation
Inventor: Trung Pham , Berta Rodriguez Hervas , Minwoo Park , David Nister , Neda Cvijetic
IPC: G06T7/11 , G05B13/02 , G06F18/21 , G06F18/24 , G06N3/04 , G06N3/08 , G06T3/4046 , G06T5/70 , G06T11/20 , G06V10/26 , G06V10/34 , G06V10/44 , G06V10/82 , G06V20/56 , G06V30/19 , G06V30/262
CPC classification number: G06T7/11 , G05B13/027 , G06F18/21 , G06F18/24 , G06N3/04 , G06N3/08 , G06T3/4046 , G06T5/70 , G06T11/20 , G06V10/267 , G06V10/34 , G06V10/454 , G06V10/82 , G06V20/56 , G06V30/19173 , G06V30/274 , G06T2207/20081 , G06T2207/20084 , G06T2207/30252 , G06T2210/12
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
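As a rough illustration of the decoding step, the sketch below assumes the DNN emits a signed-distance map per class with negative values inside a contention area, thresholds it at zero, and labels connected components as instances; the sign convention and the `decode_sdf_to_instances` helper are assumptions for illustration, not the patent's actual post-processing.

```python
import numpy as np
from scipy import ndimage

def decode_sdf_to_instances(sdf: np.ndarray, boundary_eps: float = 0.0):
    """Decode an (H, W) signed distance map into an instance mask.

    Assumes negative values lie inside a contention area; pixels at or
    below `boundary_eps` are treated as foreground.
    """
    foreground = sdf <= boundary_eps              # inside-or-on-boundary pixels
    instances, count = ndimage.label(foreground)  # connected components as instances
    return instances, count

# Toy example: two separated contention areas in a 6x6 grid.
sdf = np.ones((6, 6))
sdf[1:3, 1:3] = -1.0
sdf[4:6, 4:6] = -0.5
mask, n = decode_sdf_to_instances(sdf)
print(n)  # -> 2
```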
-
Publication No.: US20240101118A1
Publication Date: 2024-03-28
Application No.: US18537527
Filing Date: 2023-12-12
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
IPC: B60W30/18 , B60W30/09 , B60W30/095 , B60W60/00 , G06N3/08 , G06V10/25 , G06V10/75 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/70 , G08G1/01
CPC classification number: B60W30/18154 , B60W30/09 , B60W30/095 , B60W60/0011 , G06N3/08 , G06V10/25 , G06V10/751 , G06V10/764 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/588 , G06V20/70 , G08G1/0125
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
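The decode/post-process step might look roughly like the sketch below, which assumes an (H, W) coverage (confidence) map paired with per-pixel box regressions and recovers one detection as a coverage-weighted average of the boxes above a threshold; a real decoder would also cluster pixels into separate detections, and all names here are illustrative.

```python
import numpy as np

def decode_boxes(coverage: np.ndarray, boxes: np.ndarray, threshold: float = 0.5):
    """coverage: (H, W) confidence map; boxes: (H, W, 4) per-pixel [x1, y1, x2, y2].

    Returns the coverage-weighted average box, or None if no pixel clears
    the threshold. Handles a single detection for simplicity.
    """
    mask = coverage > threshold
    if not mask.any():
        return None
    w = coverage[mask]  # confidences of contributing pixels
    return (boxes[mask] * w[:, None]).sum(axis=0) / w.sum()

# Toy example: a 2x2 patch of pixels votes for the same box.
cov = np.zeros((4, 4))
cov[1:3, 1:3] = 0.9
bx = np.tile(np.array([10.0, 20.0, 50.0, 60.0]), (4, 4, 1))
print(decode_boxes(cov, bx))  # -> [10. 20. 50. 60.]
```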
-
Publication No.: US11927502B2
Publication Date: 2024-03-12
Application No.: US16860824
Filing Date: 2020-04-28
Applicant: NVIDIA Corporation
Inventor: Jesse Hong , Urs Muller , Bernhard Firner , Zongyi Yang , Joyjit Daw , David Nister , Roberto Giuseppe Luca Valenti , Rotem Aviv
IPC: G01M17/007 , B60W30/08 , B60W30/12 , B60W30/14 , B60W50/00 , B60W50/04 , B60W60/00 , G06V10/774 , G06V20/56 , G07C5/08 , G06F11/36
CPC classification number: G01M17/007 , B60W30/08 , B60W30/12 , B60W30/143 , B60W50/04 , B60W50/045 , B60W60/0011 , G06V10/774 , G06V20/56 , G07C5/08 , B60W2050/0028 , G06F11/3684 , G06F11/3696
Abstract: In various examples, sensor data recorded in the real world may be leveraged to generate transformed, additional sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real world.
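As a rough sketch of that flow, the code below fakes the sensor-data transform with a simple gap rescaling and scores one pass/fail criterion against the transformed frames; `apply_profile`, `passes_test`, and the `gap_m` field are hypothetical names, not the patent's interfaces.

```python
def apply_profile(frames, speed_scale):
    """Illustrative stand-in for the sensor-data transforms: rescale the
    recorded gap to the lead vehicle as if the ego approached faster."""
    return [{**f, "gap_m": f["gap_m"] / speed_scale} for f in frames]

def passes_test(frames, brake_fn, min_safe_gap_m=2.0):
    """Pass if the system under test (`brake_fn`) commands braking before
    the gap shrinks below `min_safe_gap_m` in the transformed scenario."""
    for f in frames:
        if f["gap_m"] < min_safe_gap_m:
            return False  # got too close before braking: fail
        if brake_fn(f):
            return True   # braked in time: pass
    return False

# Toy run: a mock AEB that brakes once the gap drops below 10 m.
recorded = [{"gap_m": g} for g in (60.0, 40.0, 18.0, 8.0)]
test_set = apply_profile(recorded, speed_scale=2.0)
print(passes_test(test_set, lambda f: f["gap_m"] < 10.0))  # -> True
```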
-
Publication No.: US11921502B2
Publication Date: 2024-03-05
Application No.: US18151012
Filing Date: 2023-01-06
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Xiaolin Lin , Hae-Jong Seo , David Nister , Neda Cvijetic
IPC: G05D1/00 , G05D1/02 , G06F18/214 , G06F18/23 , G06F18/2411 , G06N3/04 , G06N3/08 , G06V10/44 , G06V10/48 , G06V10/75 , G06V10/764 , G06V10/766 , G06V10/776 , G06V10/82 , G06V10/94 , G06V20/56
CPC classification number: G05D1/0077 , G05D1/0088 , G06F18/2155 , G06F18/23 , G06F18/2411 , G06N3/0418 , G06V10/457 , G06V10/48 , G06V10/751 , G06V10/764 , G06V10/776 , G06V10/82 , G06V10/955 , G06V20/588 , G05D2201/0213
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
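One plausible decoding of such predictions is sketched below: assuming each pixel regresses a distance and an angle to its nearest line pixel, every pixel casts a vote at the location it points to, and peaks in the vote map trace the lines. The voting scheme is an illustrative stand-in for the patent's decoder; clustering the vote peaks (the embedding step in the abstract) would then separate votes belonging to different lines.

```python
import numpy as np

def vote_line_pixels(dist: np.ndarray, angle: np.ndarray, max_dist: float = 20.0):
    """dist, angle: (H, W) per-pixel distance (pixels) and direction (radians)
    to the nearest line pixel. Pixels farther than `max_dist` are ignored."""
    h, w = dist.shape
    votes = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(dist <= max_dist)
    ty = np.round(ys + dist[ys, xs] * np.sin(angle[ys, xs])).astype(int)
    tx = np.round(xs + dist[ys, xs] * np.cos(angle[ys, xs])).astype(int)
    ok = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)
    np.add.at(votes, (ty[ok], tx[ok]), 1)  # accumulate votes at pointed-to pixels
    return votes
```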
-
Publication No.: US20240029447A1
Publication Date: 2024-01-25
Application No.: US18482183
Filing Date: 2023-10-06
Applicant: NVIDIA Corporation
Inventor: Nikolai SMOLYANSKIY , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , G01S17/931 , B60W60/0016 , B60W60/0027 , B60W60/0011 , G01S17/89 , G05D1/0088 , G06T19/006 , G06V20/58 , G06N3/045 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
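The stage chaining can be pictured as in the sketch below, where `stage1`, `project_to_bev`, and `stage2` are hypothetical stand-ins for the constituent DNNs and the perspective-to-top-down transform.

```python
import numpy as np

def multi_view_pipeline(perspective_img, stage1, project_to_bev, stage2):
    """Chain the two stages: segment classes in the perspective view, project
    the result into a top-down (bird's-eye) view, then segment classes and
    regress instance geometry there."""
    seg_persp = stage1(perspective_img)  # (H, W, C) class scores, first view
    seg_bev = project_to_bev(seg_persp)  # reproject into the top-down view
    return stage2(seg_bev)               # top-down classes + instance geometry

# Toy stand-ins, just to show the data flow.
out = multi_view_pipeline(
    np.zeros((4, 4, 3)),
    stage1=lambda img: img,
    project_to_bev=lambda seg: seg[::-1],
    stage2=lambda bev: {"classes": bev, "instances": None},
)
```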
-
Publication No.: US20230366698A1
Publication Date: 2023-11-16
Application No.: US18352578
Filing Date: 2023-07-14
Applicant: NVIDIA Corporation
Inventor: David Nister , Ruchi Bhargava , Vaibhav Thukral , Michael Grabner , Ibrahim Eden , Jeffrey Liu
CPC classification number: G01C21/3841 , G01C21/3896 , G01C21/3878 , G01C21/1652 , G01C21/3867 , G06N3/02 , G01C21/3811
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
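A minimal sketch of the final fusion step, assuming each modality yields a pose estimate with a covariance and the per-modality results are combined by inverse-covariance weighting; this fusion rule is a plausible stand-in, not the method the patent specifies.

```python
import numpy as np

def fuse_localization(poses, covs):
    """Fuse per-modality pose estimates (camera, RADAR, LiDAR vs. their map
    layers) by inverse-covariance weighting; returns the fused pose and its
    covariance."""
    infos = [np.linalg.inv(c) for c in covs]  # information matrices
    info = sum(infos)
    mean = np.linalg.solve(info, sum(i @ p for i, p in zip(infos, poses)))
    return mean, np.linalg.inv(info)

# Two modalities disagree on a 2D position; the more certain one dominates.
p_cam, c_cam = np.array([10.0, 5.0]), np.diag([0.25, 0.25])
p_lidar, c_lidar = np.array([10.4, 5.2]), np.diag([1.0, 1.0])
pose, cov = fuse_localization([p_cam, p_lidar], [c_cam, c_lidar])
print(pose)  # -> [10.08  5.04], closer to the camera estimate
```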
-
Publication No.: US11803192B2
Publication Date: 2023-10-31
Application No.: US17900622
Filing Date: 2022-08-31
Applicant: NVIDIA Corporation
Inventor: Michael Grabner , Jeremy Furtek , David Nister
CPC classification number: G05D1/0253 , G05D1/0251 , G06T7/246 , G06T7/285 , G06T7/35 , G06T2207/10021 , G06T2207/10028 , G06T2207/30252
Abstract: Systems and methods for performing visual odometry more rapidly. Pairs of representations from sensor data (such as images from one or more cameras) are selected, and features common to both representations of the pair are identified. Portions of bundle adjustment matrices that correspond to the pair are updated using the common features. These updates are maintained in register memory until all portions of the matrices that correspond to the pair are updated. By selecting only common features of one particular pair of representations, updated matrix values may be kept in registers. Accordingly, matrix updates for each common feature may be collectively saved with a single write of the registers to other memory. In this manner, fewer write operations are performed from register memory to other memory, reducing the time required to update bundle adjustment matrices and speeding the bundle adjustment process.
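In Python terms, the pattern might be sketched as below, with a local array standing in for register memory and the final block assignment standing in for the single write to other memory; the matrix layout and names are assumptions.

```python
import numpy as np

def accumulate_pair_block(feat_a, feat_b, H, rows, cols):
    """Accumulate every common-feature update for one view pair's block of the
    bundle adjustment matrix H in a local buffer, then commit it once."""
    block = np.zeros((rows.stop - rows.start, cols.stop - cols.start))
    for ja, jb in zip(feat_a, feat_b):  # one Jacobian row pair per common feature
        block += np.outer(ja, jb)       # update never leaves the local buffer
    H[rows, cols] += block              # single write back to "other memory"

# Toy usage: 3 common features, 2 parameters per view of the pair.
H = np.zeros((4, 4))
Ja, Jb = np.random.randn(3, 2), np.random.randn(3, 2)
accumulate_pair_block(Ja, Jb, H, slice(0, 2), slice(2, 4))
```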
-
Publication No.: US11789449B2
Publication Date: 2023-10-17
Application No.: US16269921
Filing Date: 2019-02-07
Applicant: NVIDIA Corporation
Inventor: David Nister , Anton Vorontsov
CPC classification number: G05D1/0214 , B60R1/00 , B60W30/08 , G05D1/0231 , G06V20/58 , G06V20/584 , B60R2300/30 , G05D1/0242 , G05D1/0255 , G05D1/0257
Abstract: In various examples, sensor data representative of a field of view of at least one sensor of a vehicle in an environment is received from the at least one sensor. Based at least in part on the sensor data, parameters of an object located in the environment are determined. Trajectories of the object are modeled toward target positions based at least in part on the parameters of the object. From the trajectories, safe time intervals (and/or safe arrival times) are computed over which the vehicle could occupy each of the target positions without colliding with the object. Based at least in part on the safe time intervals (and/or safe arrival times) and a position of the vehicle in the environment, a trajectory for the vehicle may be generated and/or analyzed.
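For a single target position, the safe intervals can be sketched as the complement of the object's predicted occupancy intervals over a planning horizon; the interval representation below is an assumption for illustration.

```python
def safe_intervals(occupied, horizon):
    """Given (t_in, t_out) intervals when the object occupies a target
    position, return the gaps within [0, horizon] during which the ego
    vehicle could occupy that position without collision."""
    free, t = [], 0.0
    for t_in, t_out in sorted(occupied):
        if t_in > t:
            free.append((t, t_in))  # gap before the object arrives
        t = max(t, t_out)
    if t < horizon:
        free.append((t, horizon))   # gap after the object's last visit
    return free

print(safe_intervals([(1.0, 2.5), (4.0, 5.0)], horizon=8.0))
# -> [(0.0, 1.0), (2.5, 4.0), (5.0, 8.0)]
```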
-
Publication No.: US11713978B2
Publication Date: 2023-08-01
Application No.: US17008100
Filing Date: 2020-08-31
Applicant: NVIDIA Corporation
Inventor: Amir Akbarzadeh , David Nister , Ruchi Bhargava , Birgit Henke , Ivana Stojanovic , Yu Sheng
CPC classification number: G01C21/3841 , G01C21/1652 , G01C21/3811 , G01C21/3867 , G01C21/3878 , G01C21/3896 , G06N3/02
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
-
Publication No.: US11704890B2
Publication Date: 2023-07-18
Application No.: US17522624
Filing Date: 2021-11-09
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
CPC classification number: G06V10/25 , G06T7/536 , G06V10/454 , G06V10/70 , G06V10/82 , G06V20/58 , G06T2207/20084 , G06T2207/30261
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
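The sampling step can be read as resampling the DNN's output-resolution depth map at the input resolution; the bilinear scheme below is one plausible interpretation, not necessarily the patent's sampling algorithm.

```python
import numpy as np

def sample_depth(depth_out: np.ndarray, in_shape):
    """Bilinearly sample an (h_o, w_o) depth map at (h_i, w_i) resolution."""
    h_o, w_o = depth_out.shape
    h_i, w_i = in_shape
    ys = np.linspace(0.0, h_o - 1, h_i)  # sample rows in output-map coordinates
    xs = np.linspace(0.0, w_o - 1, w_i)  # sample columns likewise
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h_o - 1), np.minimum(x0 + 1, w_o - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = depth_out[np.ix_(y0, x0)] * (1 - wx) + depth_out[np.ix_(y0, x1)] * wx
    bot = depth_out[np.ix_(y1, x0)] * (1 - wx) + depth_out[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

d = np.arange(4.0).reshape(2, 2)       # 2x2 output-resolution depth map
print(sample_depth(d, (4, 4)).shape)   # -> (4, 4)
```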