-
Publication Number: US20200210726A1
Publication Date: 2020-07-02
Application Number: US16728595
Filing Date: 2019-12-27
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Sashank Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
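To make the post-processing step concrete, here is a minimal Python sketch of what a safety-bounds operation could look like; the flat-ground geometric prior, the intrinsics defaults, and the 25% tolerance are illustrative assumptions, not the patented method.

```python
import numpy as np

def flat_ground_distance(v_bottom, f_y, c_y, cam_height):
    # Hypothetical geometric prior: the distance implied by the bottom edge
    # of a 2D detection under a flat-ground assumption.
    # v_bottom: pixel row of the box bottom; f_y, c_y: camera intrinsics;
    # cam_height: camera height above the road in metres.
    denom = max(v_bottom - c_y, 1e-6)  # rows below the principal point
    return f_y * cam_height / denom

def apply_safety_bounds(pred_dist, v_bottom, f_y=1000.0, c_y=360.0,
                        cam_height=1.5, tolerance=0.25):
    # Clamp the DNN's predicted distance to within +/- tolerance of the
    # geometric estimate, so an outlier prediction cannot leave the
    # safety-permissible range.
    anchor = flat_ground_distance(v_bottom, f_y, c_y, cam_height)
    return float(np.clip(pred_dist, anchor * (1 - tolerance),
                         anchor * (1 + tolerance)))

print(apply_safety_bounds(pred_dist=80.0, v_bottom=400.0))  # 46.875: pulled back into bounds
```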
-
Publication Number: US20250123605A1
Publication Date: 2025-04-17
Application Number: US18989849
Filing Date: 2024-12-20
Applicant: NVIDIA Corporation
Inventor: Hans Jonas Nilsson , Michael Cox , Sangmin Oh , Joachim Pehserl , Aidin Ehsanibenafati
Abstract: In various examples, systems and methods are disclosed that perform sensor fusion using rule-based and learned processing methods to take advantage of the accuracy of learned approaches and the decomposition benefits of rule-based approaches for satisfying higher levels of safety requirements. For example, in-parallel and/or in-serial combinations of early rule-based sensor fusion, late rule-based sensor fusion, early learned sensor fusion, or late learned sensor fusion may be used to achieve various safety goals associated with various required safety levels at a high level of accuracy and precision. In embodiments, learned sensor fusion may be used to make more conservative decisions than the rule-based sensor fusion (as determined using, e.g., the severity (S), exposure (E), and controllability (C) (SEC) rating associated with a current safety goal), but the rule-based sensor fusion may be relied upon where the learned sensor fusion decision may be less conservative than the corresponding rule-based sensor fusion.
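A minimal sketch of how the conservative arbitration between the two fusion paths might be expressed; the FusionDecision representation (a requested deceleration), the SEC threshold, and all names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class FusionDecision:
    # Hypothetical decision representation: requested deceleration in m/s^2,
    # where a larger value is the more conservative (safer) choice.
    decel: float
    source: str

def arbitrate(rule_based: FusionDecision, learned: FusionDecision,
              sec_rating: int, sec_threshold: int = 2) -> FusionDecision:
    # For safety goals with a high SEC rating, accept the learned decision
    # only when it is at least as conservative as the rule-based one;
    # otherwise fall back to the decomposable rule-based path.
    if sec_rating >= sec_threshold and learned.decel < rule_based.decel:
        return rule_based
    return learned

decision = arbitrate(FusionDecision(4.0, "rule"),
                     FusionDecision(2.5, "learned"), sec_rating=3)
print(decision.source)  # "rule": the learned output was less conservative
```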
-
Publication Number: US12235353B2
Publication Date: 2025-02-25
Application Number: US17454389
Filing Date: 2021-11-10
Applicant: NVIDIA Corporation
Inventor: Gang Pan , Joachim Pehserl , Dong Zhang , Baris Evrim Demiroz , Samuel Rupp Ogden , Tae Eun Choe , Sangmin Oh
IPC: G01S13/931 , G01S13/86 , G01S17/86 , G01S17/931
Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use first sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data using one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
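A rough sketch of the particle mechanics described above, assuming a uniform spawn inside the bounding shape and a Gaussian reweighting against a second depth measurement; every name and constant here is illustrative rather than taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def spawn_particles(bbox, n=500):
    # Spawn candidate locations uniformly inside a detection bounding shape
    # (here an axis-aligned box in ego coordinates), each with an initial
    # existence confidence.
    x0, y0, x1, y1 = bbox
    xy = rng.uniform([x0, y0], [x1, y1], size=(n, 2))
    conf = np.full(n, 0.5)
    return xy, conf

def update_with_depth(xy, conf, depth_of, measured_depth, gate=1.0):
    # Reweight particles by agreement between the depth implied by each
    # particle and a depth measurement from a second sensor, then drop
    # particles that fall outside the gate.
    residual = np.abs(depth_of(xy) - measured_depth)
    conf = conf * np.exp(-0.5 * (residual / gate) ** 2)
    keep = residual < 3 * gate
    return xy[keep], conf[keep] / conf[keep].max()

xy, conf = spawn_particles((10.0, -1.0, 14.0, 1.0))
xy, conf = update_with_depth(xy, conf, depth_of=lambda p: p[:, 0],
                             measured_depth=12.0)
```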
-
Publication Number: US12093824B2
Publication Date: 2024-09-17
Application Number: US18343291
Filing Date: 2023-06-28
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Sashank Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
IPC: G06K9/00 , B60W30/14 , B60W60/00 , G06F18/214 , G06N3/08 , G06V10/762 , G06V20/56
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
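For the camera-adaptation idea, one simple pinhole-model heuristic is sketched below; it is an assumption about how focal-length differences could be compensated, not necessarily the adaptation algorithm claimed in the patent.

```python
def adapt_depth_for_camera(pred_depth, f_trained, f_deployed):
    # Under a pinhole model an object at depth Z imaged with focal length f
    # subtends roughly f/Z pixels, so a network trained at f_trained reads a
    # f_deployed-lens image as if depths were scaled by f_trained/f_deployed.
    # Rescaling the prediction undoes that mismatch.
    return pred_depth * f_deployed / f_trained

# A wider-FOV (shorter focal length) deployment camera makes objects look
# smaller, so the raw prediction overestimates depth and is scaled down:
print(adapt_depth_for_camera(20.0, f_trained=1000.0, f_deployed=500.0))  # 10.0
```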
-
Publication Number: US12073325B2
Publication Date: 2024-08-27
Application Number: US18337854
Filing Date: 2023-06-20
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
IPC: G06K9/00 , B60W30/14 , B60W60/00 , G06F18/214 , G06N3/08 , G06V10/762 , G06V20/56
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN.
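A compact sketch of a multi-term loss of the kind described, with one term per scene region selected by a mask; the region names, the L1 error, and the weights are illustrative assumptions.

```python
import numpy as np

def region_weighted_loss(pred, target, region_masks, weights):
    # One loss term per portion of the scene -- e.g. objects, obstacles, and
    # the free-space boundary -- each evaluated only where its mask is set.
    total = 0.0
    for name, mask in region_masks.items():
        if mask.any():
            err = np.abs(pred[mask] - target[mask])  # L1 depth error in the region
            total += weights[name] * err.mean()
    return total

pred = np.random.rand(4, 4) * 50
target = np.random.rand(4, 4) * 50
masks = {"objects": np.eye(4, dtype=bool),
         "free_space_boundary": ~np.eye(4, dtype=bool)}
loss = region_weighted_loss(pred, target, masks,
                            {"objects": 2.0, "free_space_boundary": 1.0})
```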
-
Publication Number: US12026955B2
Publication Date: 2024-07-02
Application Number: US17489346
Filing Date: 2021-09-29
Applicant: NVIDIA Corporation
Inventor: Mehmet Kocamaz , Neeraj Sajjan , Sangmin Oh , David Nister , Junghyun Kwon , Minwoo Park
CPC classification number: G06V20/58 , G06N3/08 , G06V10/255 , G06V10/95 , G06V20/588 , G06V20/64
Abstract: In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object to lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
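One plausible encoding of a combined object-class/lane-identifier output mask, and how it might be post-processed into object-to-lane assignments; the class_id + N_CLASSES * lane_id packing is an assumption made for this sketch, not the patent's encoding.

```python
import numpy as np

N_CLASSES = 4  # illustrative: background, car, truck, pedestrian

def decode_object_lane_mask(mask):
    # Assume each pixel value packs class_id + N_CLASSES * lane_id, so one
    # mask carries both the object class and the lane it is assigned to.
    class_ids = mask % N_CLASSES
    lane_ids = mask // N_CLASSES
    assignments = {}
    for cls in range(1, N_CLASSES):  # skip background
        for lane in np.unique(lane_ids[class_ids == cls]):
            assignments.setdefault(int(lane), []).append(cls)
    return assignments  # lane_id -> object classes present in that lane

mask = np.array([[0, 1, 1],
                 [0, 5, 5],
                 [0, 0, 2]])  # car (1) in lanes 0 and 1, truck (2) in lane 0
print(decode_object_lane_mask(mask))  # {0: [1, 2], 1: [1]}
```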
-
Publication Number: US20240176017A1
Publication Date: 2024-05-30
Application Number: US18060376
Filing Date: 2022-11-30
Applicant: NVIDIA Corporation
Inventor: David Weikersdorfer , Qian Lin , Aman Jhunjhunwala , Emilie Lucie Eloïse Wirbel , Sangmin Oh , Minwoo Park , Gyeong Woo Cheon , Arthur Henry Rajala , Bor-Jeng Chen
IPC: G01S15/931 , G01S15/86
CPC classification number: G01S15/931 , G01S15/86 , G01S2015/938
Abstract: In various examples, techniques for sensor-fusion based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective. The machine may use these outputs to perform one or more operations.
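A small sketch of how detected object locations might be rasterised into the bird's-eye-view occupancy and height representations mentioned above; the grid size, cell size, and layout are illustrative choices.

```python
import numpy as np

def detections_to_bev(points, heights, grid_m=20.0, cell_m=0.25):
    # Rasterise detected object locations (ego-frame x/y in metres) into
    # bird's-eye-view occupancy and height grids centred on the ego-machine.
    n = int(grid_m / cell_m)
    occupancy = np.zeros((n, n), dtype=np.float32)
    height = np.zeros((n, n), dtype=np.float32)
    for (x, y), h in zip(points, heights):
        col = int((x + grid_m / 2) / cell_m)
        row = int((y + grid_m / 2) / cell_m)
        if 0 <= row < n and 0 <= col < n:
            occupancy[row, col] = 1.0
            height[row, col] = max(height[row, col], h)
    return occupancy, height

occ, hgt = detections_to_bev([(1.0, 0.5), (-3.2, 2.0)], heights=[0.4, 1.1])
```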
-
Publication Number: US20240169549A1
Publication Date: 2024-05-23
Application Number: US18424219
Filing Date: 2024-01-26
Applicant: NVIDIA Corporation
Inventor: Dongwoo Lee , Junghyun Kwon , Sangmin Oh , Wenchao Zheng , Hae-Jong Seo , David Nister , Berta Rodriguez Hervas
CPC classification number: G06T7/13 , G06T7/40 , G06T17/30 , G06V10/454 , G06V10/751 , G06V10/772 , G06V10/82 , G06V20/586 , G06T2207/10021 , G06T2207/20084 , G06T2207/30264
Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking space. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking space. A minimum aggregate distance between corner points of a skewed polygon predicted using the neural network(s) and ground truth corner points of a parking space may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
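One way the minimum-aggregate-distance criterion could be realised, minimising the summed corner distance over cyclic orderings of the ground-truth quadrilateral; the threshold and function names are assumptions for this sketch.

```python
import numpy as np

def min_aggregate_corner_distance(pred_corners, gt_corners):
    # Sum of corner-to-corner distances, minimised over the four cyclic
    # orderings of the ground-truth quadrilateral, so the score does not
    # depend on which corner the annotator labelled first.
    pred = np.asarray(pred_corners, dtype=float)
    gt = np.asarray(gt_corners, dtype=float)
    best = np.inf
    for shift in range(4):
        rolled = np.roll(gt, shift, axis=0)
        best = min(best, np.linalg.norm(pred - rolled, axis=1).sum())
    return best

def is_positive_sample(anchor_corners, gt_corners, threshold=2.0):
    # Treat the anchor box as a positive training sample only when it is
    # close enough, in aggregate, to the annotated parking space.
    return min_aggregate_corner_distance(anchor_corners, gt_corners) < threshold

print(is_positive_sample([(0, 0), (1, 0), (1, 1), (0, 1)],
                         [(0.1, 0), (1, 0.1), (1, 1), (0, 0.9)]))  # True
```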
-
Publication Number: US20240111025A1
Publication Date: 2024-04-04
Application Number: US18531103
Filing Date: 2023-12-06
Applicant: NVIDIA Corporation
Inventor: Tilman Wekel , Sangmin Oh , David Nister , Joachim Pehserl , Neda Cvijetic , Ibrahim Eden
IPC: G01S7/48 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58
CPC classification number: G01S7/4802 , G01S7/481 , G01S17/894 , G01S17/931 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/58 , G01S7/28
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
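A much-simplified sketch of propagating image-domain annotations to the LiDAR domain by projecting points into the camera; it assumes camera and LiDAR share an origin (no extrinsic transform) and uses illustrative intrinsics, so it is a stand-in for the cross-injection idea rather than the patented pipeline.

```python
import numpy as np

def propagate_image_labels(points_xyz, K, image_labels):
    # Give each LiDAR point the class annotated at the pixel it projects to,
    # producing LiDAR-domain ground truth from image-domain annotations.
    # Convention assumed here: x forward, y left, z up; K is the 3x3
    # camera intrinsic matrix.
    h, w = image_labels.shape
    labels = np.zeros(len(points_xyz), dtype=image_labels.dtype)
    for i, (x, y, z) in enumerate(points_xyz):
        if x <= 0:
            continue  # point is behind the camera
        u = int(K[0, 0] * (-y) / x + K[0, 2])
        v = int(K[1, 1] * (-z) / x + K[1, 2])
        if 0 <= u < w and 0 <= v < h:
            labels[i] = image_labels[v, u]
    return labels

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
image_labels = np.zeros((480, 640), dtype=np.int32)
image_labels[200:280, 280:360] = 1  # an annotated object in the image
pts = np.array([[10.0, 0.0, 0.0], [10.0, 5.0, 0.0]])
print(propagate_image_labels(pts, K, image_labels))  # [1 0]
```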
-
Publication Number: US20230360231A1
Publication Date: 2023-11-09
Application Number: US17955814
Filing Date: 2022-09-29
Applicant: NVIDIA Corporation
Inventor: Mehmet K. Kocamaz , Daniel Per Olof Svensson , Hang Dou , Sangmin Oh , Minwoo Park , Kexuan Zou
IPC: G06T7/246
CPC classification number: G06T7/246 , G06T2207/30252
Abstract: In various examples, techniques for multi-dimensional tracking of objects using two-dimensional (2D) sensor data are described. Systems and methods may use first image data to determine a first 2D detected location and a first three-dimensional (3D) detected location of an object. The systems and methods may then determine a 2D estimated location using the first 2D detected location and a 3D estimated location using the first 3D detected location. The systems and methods may use second image data to determine a second 2D detected location and a second 3D detected location of a detected object, and may then determine that the object corresponds to the detected object using the 2D estimated location, the 3D estimated location, the second 2D detected location, and the second 3D detected location. The systems and methods may then generate, modify, delete, or otherwise update an object track that includes 2D state information and 3D state information.
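A toy version of the 2D-plus-3D association step: a track/detection pair is scored with distances in both image space and ego space and then gated; the weights, pixel scaling, and gate value are illustrative.

```python
import numpy as np

def associate(track, detection, w2d=1.0, w3d=1.0, gate=5.0):
    # Combine a 2D distance (pixels, scaled to be commensurate with metres)
    # and a 3D distance (metres) between the track's estimated locations and
    # the new detection; a cost above the gate means "not the same object".
    d2d = np.linalg.norm(np.subtract(track["est_2d"], detection["det_2d"])) / 100.0
    d3d = np.linalg.norm(np.subtract(track["est_3d"], detection["det_3d"]))
    cost = w2d * d2d + w3d * d3d
    return cost if cost < gate else None

track = {"est_2d": (410, 305), "est_3d": (21.0, 1.9, 0.0)}
det = {"det_2d": (420, 300), "det_3d": (20.0, 2.0, 0.0)}
print(associate(track, det))  # small cost -> same object, update the track
```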
-