-
Publication No.: US20220122001A1
Publication Date: 2022-04-21
Application No.: US17219350
Filing Date: 2021-03-31
Applicant: Nvidia Corporation
Inventor: Tae Eun Choe , Aman Kishore , Junghyun Kwon , Minwoo Park , Pengfei Hao , Akshita Mittel
Abstract: Approaches presented herein provide for the generation of synthetic data to fortify a dataset for use in training a network via imitation learning. In at least one embodiment, a system is evaluated to identify failure cases, such as those corresponding to false positive and false negative detections. Additional synthetic data imitating these failure cases can then be generated and utilized to provide a more abundant dataset. A network or model can then be trained, or retrained, with the original training data and the additional synthetic data. In one or more embodiments, these steps may be repeated until the evaluation metric converges, with additional synthetic training data being generated corresponding to the failure cases at each training pass.
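The evaluate-synthesize-retrain loop described in this abstract can be sketched as follows. This is a minimal illustration, not NVIDIA's implementation; `train_fn`, `eval_fn`, and `synthesize_fn` are hypothetical callables standing in for the training, evaluation, and synthetic-data-generation stages.

```python
def train_until_convergence(train_fn, eval_fn, synthesize_fn, data,
                            tol=1e-3, max_rounds=10):
    """Retrain with synthetic data imitating failure cases until the
    evaluation metric changes by less than `tol` between passes.

    eval_fn(model) is assumed to return a dict with a scalar "metric"
    and a list of "failures" (e.g. false positive/negative cases).
    """
    model = train_fn(data)
    report = eval_fn(model)
    for _ in range(max_rounds):
        # Fortify the dataset with synthetic data imitating failure cases.
        data = data + synthesize_fn(report["failures"])
        model = train_fn(data)
        new_report = eval_fn(model)
        if abs(new_report["metric"] - report["metric"]) < tol:
            break  # metric converged; stop retraining
        report = new_report
    return model, data
```

In practice the convergence test would use a held-out validation metric rather than the training-set evaluation shown here.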
-
Publication No.: US20200218979A1
Publication Date: 2020-07-09
Application No.: US16813306
Filing Date: 2020-03-09
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
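Two pieces of this abstract lend themselves to a short sketch: a per-region loss (one loss term per portion of the environment) and the sampling of a predicted depth map from the DNN's output resolution to the input resolution. The code below is an illustrative stand-in, not the patented method; the region encoding and nearest-neighbor sampling are assumptions for clarity.

```python
import numpy as np

def region_losses(pred, target, region_mask):
    """Mean absolute depth error per region id in `region_mask`
    (e.g. 0 = free-space boundary, 1 = objects/obstacles), so each
    region can carry its own loss term during training."""
    losses = {}
    for rid in np.unique(region_mask):
        sel = region_mask == rid
        losses[int(rid)] = float(np.abs(pred[sel] - target[sel]).mean())
    return losses

def sample_depth(depth_map, out_hw):
    """Nearest-neighbor sampling from the DNN's output resolution to
    the requested (input) resolution, standing in for the sampling
    algorithm mentioned in the abstract."""
    h, w = depth_map.shape
    oh, ow = out_hw
    rows = np.arange(oh) * h // oh
    cols = np.arange(ow) * w // ow
    return depth_map[np.ix_(rows, cols)]
```

A real system would likely use a learned or bilinear resampling; nearest-neighbor keeps the sketch self-contained.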
-
Publication No.: US12272152B2
Publication Date: 2025-04-08
Application No.: US17551986
Filing Date: 2021-12-15
Applicant: NVIDIA Corporation
Inventor: Mehmet K. Kocamaz , Ke Xu , Sangmin Oh , Junghyun Kwon
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate object tracking paths for the vehicle to facilitate navigational controls in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as feature descriptor maps including feature descriptor vectors corresponding to objects included in a sensor's field of view. The outputs may be decoded and/or otherwise post-processed to reconstruct object tracking and to determine proposed or potential paths for navigating the vehicle.
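Descriptor-based tracking of the kind described here reduces, at its core, to associating objects across frames by the similarity of their feature descriptor vectors. The sketch below uses greedy cosine-similarity matching; the function names and the 0.5 threshold are illustrative assumptions, not details from the patent.

```python
import numpy as np

def associate(prev_desc, curr_desc, min_sim=0.5):
    """Greedily match previous-frame to current-frame objects by cosine
    similarity of their descriptor vectors; returns {prev_idx: curr_idx}."""
    def unit(v):
        return v / np.linalg.norm(v)
    sims = np.array([[float(unit(p) @ unit(c)) for c in curr_desc]
                     for p in prev_desc])
    matches, used = {}, set()
    # Consider candidate pairs from most to least similar.
    for p, c in sorted(np.ndindex(*sims.shape), key=lambda pc: -sims[pc]):
        if p not in matches and c not in used and sims[p, c] >= min_sim:
            matches[p] = c
            used.add(c)
    return matches
```

A production tracker would typically solve the assignment optimally (e.g. Hungarian algorithm) and gate matches with motion cues; greedy matching keeps the example short.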
-
Publication No.: US20240320986A1
Publication Date: 2024-09-26
Application No.: US18734354
Filing Date: 2024-06-05
Applicant: NVIDIA Corporation
Inventor: Mehmet Kocamaz , Neeraj Sajjan , Sangmin Oh , David Nister , Junghyun Kwon , Minwoo Park
CPC classification number: G06V20/58 , G06N3/08 , G06V10/255 , G06V10/95 , G06V20/588 , G06V20/64
Abstract: In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object-to-lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
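The post-processing step described above can be sketched with a toy encoding. Here a mask value is assumed to pack the object class and lane identifier as `value = class * 10 + lane`; that encoding is an invention of this sketch, chosen only to make the decode step concrete.

```python
import numpy as np

def decode_assignments(mask):
    """Decode a segmentation mask whose values combine object class and
    lane id (value = class * 10 + lane) into object-to-lane assignments,
    mapping each (class, lane) pair to its pixel count. Value 0 is
    treated as background and skipped."""
    out = {}
    for v in np.unique(mask):
        if v == 0:
            continue
        cls, lane = divmod(int(v), 10)
        out[(cls, lane)] = int((mask == v).sum())
    return out
```

Downstream logic could then, for instance, flag any (class, lane) pair whose pixel count exceeds a size threshold as a confirmed in-lane object.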
-
Publication No.: US20240101118A1
Publication Date: 2024-03-28
Application No.: US18537527
Filing Date: 2023-12-12
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
IPC: B60W30/18 , B60W30/09 , B60W30/095 , B60W60/00 , G06N3/08 , G06V10/25 , G06V10/75 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/70 , G08G1/01
CPC classification number: B60W30/18154 , B60W30/09 , B60W30/095 , B60W60/0011 , G06N3/08 , G06V10/25 , G06V10/751 , G06V10/764 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/588 , G06V20/70 , G08G1/0125
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
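One plausible way to decode the outputs this abstract lists (coverage maps, box coordinates, distances) is a coverage-weighted vote over confident cells. The sketch below is an assumption about the decode step, not the patented procedure; the 0.5 threshold and array layout are illustrative.

```python
import numpy as np

def decode_intersection(coverage, boxes, distances, thresh=0.5):
    """Decode one intersection from DNN outputs: cells of the coverage
    map above `thresh` vote for the regressed bounding-box coordinates
    (H x W x 4 array) and distance (H x W array), weighted by coverage.
    Returns (box, distance) or None if no cell is confident."""
    sel = coverage > thresh
    if not sel.any():
        return None
    w = coverage[sel]
    box = (boxes[sel] * w[:, None]).sum(0) / w.sum()
    dist = float((distances[sel] * w).sum() / w.sum())
    return box, dist
```

Averaging over many confident cells makes the decoded location robust to noise in any single cell, which is the usual motivation for pairing regression outputs with a coverage map.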
-
Publication No.: US20200293796A1
Publication Date: 2020-09-17
Application No.: US16814351
Filing Date: 2020-03-10
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
-
Publication No.: US11961243B2
Publication Date: 2024-04-16
Application No.: US17187228
Filing Date: 2021-02-26
Applicant: NVIDIA Corporation
Inventor: Dong Zhang , Sangmin Oh , Junghyun Kwon , Baris Evrim Demiroz , Tae Eun Choe , Minwoo Park , Chethan Ningaraju , Hao Tsui , Eric Viscito , Jagadeesh Sankaran , Yongqing Liang
IPC: G06T7/00 , B60W60/00 , G06F18/214 , G06N3/08 , G06T7/246 , G06V10/25 , G06V10/75 , G06V20/58 , G06V20/56
CPC classification number: G06T7/246 , B60W60/001 , G06F18/2148 , G06N3/08 , G06V10/25 , G06V10/751 , G06V20/58 , G06V20/56
Abstract: A geometric approach may be used to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring.
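The core of the geometric approach above, stripped of the alignment and scoring details, is: difference the aligned frames into a disparity image, keep pixels above a threshold, and fit a bounding shape to them. The sketch below assumes the frames are already aligned and uses an axis-aligned rectangle; the threshold is illustrative.

```python
import numpy as np

def bounding_shape(frame_a, frame_b, thresh=10):
    """Return (top, left, bottom, right) around pixels whose absolute
    difference between two aligned frames exceeds `thresh` (a crude
    disparity image), or None if no pixel clears the threshold."""
    disparity = np.abs(frame_a.astype(int) - frame_b.astype(int))
    ys, xs = np.nonzero(disparity > thresh)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

The patented approach additionally combines above-threshold pixel subsets and scores them against per-pixel disparity values before generating the bounding shape; this sketch collapses those steps into a single global rectangle.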
-
Publication No.: US20230282005A1
Publication Date: 2023-09-07
Application No.: US18309878
Filing Date: 2023-05-01
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Junghyun Kwon , Mehmet K. Kocamaz , Hae-Jong Seo , Berta Rodriguez Hervas , Tae Eun Choe
CPC classification number: G06V20/588 , B60W60/00272 , G06T7/292 , G06V20/58 , B60W2554/4029 , B60W2554/4044 , B60W2556/35 , G06T2207/20081 , G06T2207/20084
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
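The fusion network in this abstract learns end to end to suppress duplicate instances of the same object seen by overlapping sensors. A hand-written analogue of that behavior, useful for intuition only (it is not the patented network), is IoU-based duplicate suppression across the per-sensor detections:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(detections, min_iou=0.5):
    """Keep one box per object across sensors: drop any detection whose
    IoU with an already-kept box exceeds `min_iou`, i.e. a duplicate of
    the same object seen in an overlap region."""
    kept = []
    for box in detections:
        if all(iou(box, k) < min_iou for k in kept):
            kept.append(box)
    return kept
```

The advantage of the learned fusion network over this rule is that it can resolve duplicates that a fixed IoU threshold would miss, such as partially occluded objects whose boxes barely overlap across sensors.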
-
Publication No.: US11688181B2
Publication Date: 2023-06-27
Application No.: US17353231
Filing Date: 2021-06-21
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Junghyun Kwon , Mehmet K. Kocamaz , Hae-Jong Seo , Berta Rodriguez Hervas , Tae Eun Choe
CPC classification number: G06V20/588 , B60W60/00272 , G06T7/292 , G06V20/58 , B60W2554/4029 , B60W2554/4044 , B60W2556/35 , G06T2207/20081 , G06T2207/20084
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
-
Publication No.: US20230099494A1
Publication Date: 2023-03-30
Application No.: US17489346
Filing Date: 2021-09-29
Applicant: NVIDIA Corporation
Inventor: Mehmet Kocamaz , Neeraj Sajjan , Sangmin Oh , David Nister , Junghyun Kwon , Minwoo Park
Abstract: In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object-to-lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
-