-
Publication No.: US11195331B2
Publication Date: 2021-12-07
Application No.: US16820164
Filing Date: 2020-03-16
Applicant: NVIDIA Corporation
Inventor: Dongwoo Lee , Junghyun Kwon , Sangmin Oh , Wenchao Zheng , Hae-Jong Seo , David Nister , Berta Rodriguez Hervas
Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
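The minimum-aggregate-distance rule described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the function names, the threshold parameter, and the choice to minimize over cyclic orderings of the ground-truth corners are all assumptions.

```python
import math

def aggregate_distance(pred, gt):
    """Sum of Euclidean distances between paired corner points."""
    return sum(math.dist(p, g) for p, g in zip(pred, gt))

def min_aggregate_distance(pred_corners, gt_corners):
    """Minimum aggregate distance over cyclic orderings of the
    ground-truth corners, so the score does not depend on which
    corner happens to be listed first."""
    n = len(gt_corners)
    rotations = (gt_corners[i:] + gt_corners[:i] for i in range(n))
    return min(aggregate_distance(pred_corners, r) for r in rotations)

def is_positive_sample(pred_corners, gt_corners, threshold):
    """Treat an anchor as a positive training sample when its
    predicted polygon lies within `threshold` aggregate distance
    of the ground-truth parking-spot corners."""
    return min_aggregate_distance(pred_corners, gt_corners) <= threshold
```

A single scalar distance like this makes the positive/negative assignment a one-line comparison, which is presumably the simplification the abstract refers to.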
-
22.
Publication No.: US11170299B2
Publication Date: 2021-11-09
Application No.: US16813306
Filing Date: 2020-03-09
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
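The sampling step mentioned at the end of the abstract above, mapping query points at the DNN's input resolution onto a depth map predicted at a smaller output resolution, could look roughly like the nearest-cell lookup below. This is a minimal sketch under assumed conventions (row-major maps, integer pixel coordinates); the patent does not specify the sampling algorithm and the function name is hypothetical.

```python
def sample_depth_map(depth_map, out_h, out_w, in_h, in_w, points):
    """Look up depth values for pixel coordinates given at the DNN
    input resolution, using a depth map predicted at the (typically
    smaller) output resolution.

    depth_map: 2-D list of shape (out_h, out_w)
    points: iterable of (x, y) coordinates at input resolution
    """
    scale_y, scale_x = out_h / in_h, out_w / in_w
    depths = []
    for x, y in points:
        row = min(int(y * scale_y), out_h - 1)  # clamp to map bounds
        col = min(int(x * scale_x), out_w - 1)
        depths.append(depth_map[row][col])
    return depths
```

Bilinear interpolation between the four surrounding cells would be a natural refinement of the nearest-cell lookup shown here.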
-
23.
Publication No.: US12073325B2
Publication Date: 2024-08-27
Application No.: US18337854
Filing Date: 2023-06-20
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
IPC: G06K9/00 , B60W30/14 , B60W60/00 , G06F18/214 , G06N3/08 , G06V10/762 , G06V20/56
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN.
-
Publication No.: US12026955B2
Publication Date: 2024-07-02
Application No.: US17489346
Filing Date: 2021-09-29
Applicant: NVIDIA Corporation
Inventor: Mehmet Kocamaz , Neeraj Sajjan , Sangmin Oh , David Nister , Junghyun Kwon , Minwoo Park
CPC classification number: G06V20/58 , G06N3/08 , G06V10/255 , G06V10/95 , G06V20/588 , G06V20/64
Abstract: In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object to lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
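One simple way to realize the "combination of object classification and lane identifiers" in a single segmentation mask, as the abstract above describes, is to encode each mask value as `class_id * num_lanes + lane_id` and invert that encoding in post-processing. The encoding scheme and function name below are assumptions for illustration, not the patent's actual output format.

```python
def decode_object_lane_mask(mask, num_lanes):
    """Split combined segmentation values back into (class_id,
    lane_id) pairs and collect, per object class, the set of lanes
    in which that class was detected.

    mask: 2-D list of integers; negative values mark background.
    """
    assignments = {}
    for row in mask:
        for value in row:
            if value < 0:  # background / ignore label
                continue
            class_id, lane_id = divmod(value, num_lanes)
            assignments.setdefault(class_id, set()).add(lane_id)
    return assignments
```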
-
Publication No.: US20240169549A1
Publication Date: 2024-05-23
Application No.: US18424219
Filing Date: 2024-01-26
Applicant: NVIDIA Corporation
Inventor: Dongwoo Lee , Junghyun Kwon , Sangmin Oh , Wenchao Zheng , Hae-Jong Seo , David Nister , Berta Rodriguez Hervas
CPC classification number: G06T7/13 , G06T7/40 , G06T17/30 , G06V10/454 , G06V10/751 , G06V10/772 , G06V10/82 , G06V20/586 , G06T2207/10021 , G06T2207/20084 , G06T2207/30264
Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
-
26.
Publication No.: US20240020953A1
Publication Date: 2024-01-18
Application No.: US18353453
Filing Date: 2023-07-17
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Trung Pham , Junghyun Kwon , Sayed Mehdi Sajjadi Mohammadabadi , Bor-Jeng Chen , Xin Liu , Bala Siva Sashank Jujjavarapu , Mehran Maghoumi
CPC classification number: G06V10/7715 , G06V20/56 , G06V10/82
Abstract: In various examples, feature values corresponding to a plurality of views are transformed into feature values of a shared orientation or perspective to generate a feature map—such as a Bird's-Eye-View (BEV), top-down, orthogonally projected, and/or other shared perspective feature map type. Feature values corresponding to a region of a view may be transformed into feature values using a neural network. The feature values may be assigned to bins of a grid and values assigned to at least one same bin may be combined to generate one or more feature values for the feature map. To assign the transformed features to the bins, one or more portions of a view may be projected into one or more bins using polynomial curves. Radial and/or angular bins may be used to represent the environment for the feature map.
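The radial/angular binning mentioned at the end of the abstract above can be sketched as a polar accumulation grid: each transformed feature value lands in a (radial, angular) bin, and values sharing a bin are combined, here by averaging. This is an illustrative sketch only; the bin layout, the averaging rule, and the function name are assumptions, and the patent also describes polynomial-curve projection that is not shown here.

```python
import math

def polar_bev_map(features, num_radial, num_angular, max_range):
    """Accumulate feature values into a radial/angular (polar) BEV
    grid centered on the ego-machine, averaging values that fall
    into the same bin.

    features: iterable of ((x, y), value) in ego coordinates
    returns: dict mapping (radial_bin, angular_bin) -> mean value
    """
    sums, counts = {}, {}
    for (x, y), value in features:
        r = math.hypot(x, y)
        if r >= max_range:           # drop features beyond the grid
            continue
        theta = math.atan2(y, x) % (2 * math.pi)
        r_bin = int(r / max_range * num_radial)
        a_bin = int(theta / (2 * math.pi) * num_angular)
        key = (r_bin, a_bin)
        sums[key] = sums.get(key, 0.0) + value
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

Polar bins give finer spatial resolution near the ego-machine and coarser resolution at range, which matches how perception accuracy typically degrades with distance.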
-
27.
Publication No.: US11769052B2
Publication Date: 2023-09-26
Application No.: US17449310
Filing Date: 2021-09-29
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
IPC: G06K9/00 , G06N3/08 , B60W30/14 , B60W60/00 , G06V20/56 , G06F18/214 , G06V10/762
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
-
28.
Publication No.: US20230186640A1
Publication Date: 2023-06-15
Application No.: US17551986
Filing Date: 2021-12-15
Applicant: NVIDIA Corporation
Inventor: Mehmet K. Kocamaz , Ke Xu , Sangmin Oh , Junghyun Kwon
CPC classification number: G06V20/58 , G06K9/6232 , G06V10/82 , G06V10/46 , G06V10/225 , G06T7/246 , B60W60/001 , G06N3/08 , G06T2207/30252 , G06T2207/20084 , G06T2207/20081 , B60W2420/42
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate object tracking paths for the vehicle to facilitate navigational controls in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as feature descriptor maps including feature descriptor vectors corresponding to objects included in a sensor(s) field of view. The outputs may be decoded and/or otherwise post-processed to reconstruct object tracking and to determine proposed or potential paths for navigating the vehicle.
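A common way to "reconstruct object tracking" from per-object feature descriptor vectors, as the abstract above describes, is to associate detections across frames by descriptor similarity. The greedy cosine-similarity matcher below is a generic sketch of that idea, not the patent's decoding procedure; the threshold and function names are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_tracks(prev_descriptors, curr_descriptors, min_sim=0.5):
    """Greedily associate each current detection with the most
    similar unmatched previous track, if the similarity clears
    the threshold.  Returns {current_index: previous_index}."""
    matches, used = {}, set()
    for j, curr in enumerate(curr_descriptors):
        best_i, best_sim = None, min_sim
        for i, prev in enumerate(prev_descriptors):
            if i in used:
                continue
            sim = cosine_similarity(prev, curr)
            if sim > best_sim:
                best_i, best_sim = i, sim
        if best_i is not None:
            matches[j] = best_i
            used.add(best_i)
    return matches
```

Production trackers usually replace the greedy loop with globally optimal assignment (e.g., the Hungarian algorithm) and fuse descriptor similarity with motion cues.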
-
Publication No.: US20230166733A1
Publication Date: 2023-06-01
Application No.: US18162576
Filing Date: 2023-01-31
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
IPC: B60W30/18 , G06N3/08 , G08G1/01 , B60W30/095 , B60W60/00 , B60W30/09 , G06V20/56 , G06V10/25 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/70
CPC classification number: B60W30/18154 , G06N3/08 , G08G1/0125 , B60W30/095 , B60W60/0011 , B60W30/09 , G06V20/588 , G06V10/25 , G06V10/764 , G06V10/803 , G06V10/82 , G06V20/70 , G06V20/56
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
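Decoding a final box from "bounding box coordinates ... and coverage maps", as the abstract above describes, is often done by averaging the per-cell box regressions weighted by their coverage scores. The sketch below illustrates that generic pattern under assumed tensor layouts; it is not the patent's specific post-processing, and the threshold and function name are assumptions.

```python
def decode_intersection(coverage, boxes, threshold=0.5):
    """Fuse per-cell box regressions into one intersection box.

    coverage: 2-D list of coverage scores in [0, 1]
    boxes: 2-D list (same shape) of [x1, y1, x2, y2] regressions
    Cells below `threshold` are ignored; the rest contribute a
    coverage-weighted average.  Returns None if no cell qualifies.
    """
    total_weight = 0.0
    acc = [0.0, 0.0, 0.0, 0.0]
    for cov_row, box_row in zip(coverage, boxes):
        for weight, box in zip(cov_row, box_row):
            if weight < threshold:
                continue
            total_weight += weight
            for k in range(4):
                acc[k] += weight * box[k]
    if total_weight == 0.0:
        return None
    return [v / total_weight for v in acc]
```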
-
Publication No.: US11648945B2
Publication Date: 2023-05-16
Application No.: US16814351
Filing Date: 2020-03-10
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
CPC classification number: G06V20/588 , B60W30/09 , B60W30/095 , B60W60/0011 , G06N3/08 , G06V10/751 , G08G1/0125
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
-