-
Publication Number: US12272152B2
Publication Date: 2025-04-08
Application Number: US17551986
Filing Date: 2021-12-15
Applicant: NVIDIA Corporation
Inventor: Mehmet K. Kocamaz , Ke Xu , Sangmin Oh , Junghyun Kwon
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate object tracking paths for the vehicle to facilitate navigational controls in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as feature descriptor maps including feature descriptor vectors corresponding to objects included in a field of view of the sensor(s). The outputs may be decoded and/or otherwise post-processed to reconstruct object tracking and to determine proposed or potential paths for navigating the vehicle.
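As a minimal illustrative sketch of the descriptor-based association idea this abstract describes (all function names, the greedy matching strategy, and the similarity threshold are hypothetical, not taken from the patent), tracked objects can be linked to new detections by comparing their feature descriptor vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def associate_detections(track_descriptors, detection_descriptors, threshold=0.5):
    """Greedily match each existing track to the most similar new detection.

    Returns a dict mapping track index -> detection index, or None when no
    unused detection clears the similarity threshold.
    """
    assignments = {}
    used = set()
    for t_idx, t_desc in enumerate(track_descriptors):
        best_idx, best_sim = None, threshold
        for d_idx, d_desc in enumerate(detection_descriptors):
            if d_idx in used:
                continue
            sim = cosine_similarity(t_desc, d_desc)
            if sim > best_sim:
                best_idx, best_sim = d_idx, sim
        assignments[t_idx] = best_idx
        if best_idx is not None:
            used.add(best_idx)
    return assignments
```

A production tracker would typically replace the greedy loop with a globally optimal assignment (e.g. Hungarian matching), but the descriptor-comparison step is the same.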
-
Publication Number: US20240320986A1
Publication Date: 2024-09-26
Application Number: US18734354
Filing Date: 2024-06-05
Applicant: NVIDIA Corporation
Inventor: Mehmet Kocamaz , Neeraj Sajjan , Sangmin Oh , David Nister , Junghyun Kwon , Minwoo Park
CPC classification number: G06V20/58 , G06N3/08 , G06V10/255 , G06V10/95 , G06V20/588 , G06V20/64
Abstract: In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object to lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
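The object-to-lane assignment step described above can be sketched as a majority vote over the lane IDs in the segmentation mask under each detected object's bounding box. This is an illustrative reconstruction, not the patent's actual decoder; the function name, mask encoding (0 = no lane, positive integers = lane IDs), and box convention are assumptions:

```python
from collections import Counter

def assign_objects_to_lanes(lane_mask, object_boxes):
    """Assign each detected object to the lane whose ID dominates the
    pixels under its bounding box.

    lane_mask: 2D list of ints, 0 = no lane, >0 = lane identifier.
    object_boxes: list of (x0, y0, x1, y1) pixel boxes, end-exclusive.
    Returns a list of lane IDs (None when the box covers no lane pixels).
    """
    assignments = []
    for x0, y0, x1, y1 in object_boxes:
        counts = Counter()
        for y in range(y0, y1):
            for x in range(x0, x1):
                lane_id = lane_mask[y][x]
                if lane_id > 0:
                    counts[lane_id] += 1
        assignments.append(counts.most_common(1)[0][0] if counts else None)
    return assignments
```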
-
Publication Number: US20240232616A9
Publication Date: 2024-07-11
Application Number: US18343291
Filing Date: 2023-06-28
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Sashank Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
IPC: G06N3/08 , B60W30/14 , B60W60/00 , G06F18/214 , G06V10/762 , G06V20/56
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
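The post-processing safety bounds operation mentioned at the end of this abstract amounts to constraining each predicted distance to a permissible interval before it reaches downstream consumers. A minimal sketch, assuming hypothetical bound values (the patent does not specify concrete limits):

```python
def apply_safety_bounds(distance_m, min_m=0.5, max_m=200.0):
    """Clamp a DNN-predicted distance (in meters) into a safety-permissible
    range so that downstream planning never consumes an implausible value."""
    return max(min_m, min(max_m, distance_m))
```

In deployment the bounds would likely vary per object class and sensor configuration rather than being fixed constants.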
-
Publication Number: US20240176018A1
Publication Date: 2024-05-30
Application Number: US18060444
Filing Date: 2022-11-30
Applicant: NVIDIA Corporation
Inventor: David Weikersdorfer , Qian Lin , Aman Jhunjhunwala , Emilie Lucie Eloïse Wirbel , Sangmin Oh , Minwoo Park , Gyeong Woo Cheon , Arthur Henry Rajala , Bor-Jeng Chen
IPC: G01S15/931 , G01S15/86
CPC classification number: G01S15/931 , G01S15/86 , G01S2015/938
Abstract: In various examples, techniques for sensor-fusion based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a birds-eye-view perspective. The machine may use these outputs to perform one or more operations.
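The bird's-eye-view occupancy and height maps described here can be sketched by rasterizing detected points into a metric grid. This is a hand-rolled illustration of the output representation only, not the patent's neural-network pipeline; grid size, cell resolution, and the max-height aggregation rule are all assumptions:

```python
def build_bev_maps(points, grid_size=10, cell_m=0.5):
    """Rasterize (x, y, z) points in meters into bird's-eye-view occupancy
    and height maps. Occupancy is 0/1 per cell; height keeps the tallest
    point observed in each cell."""
    occupancy = [[0] * grid_size for _ in range(grid_size)]
    height = [[0.0] * grid_size for _ in range(grid_size)]
    for x, y, z in points:
        col = int(x / cell_m)
        row = int(y / cell_m)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            occupancy[row][col] = 1
            height[row][col] = max(height[row][col], z)
    return occupancy, height
```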
-
Publication Number: US20230360255A1
Publication Date: 2023-11-09
Application Number: US17955822
Filing Date: 2022-09-29
Applicant: NVIDIA Corporation
Inventor: Mehmet K. Kocamaz , Daniel Per Olof Svensson , Hang Dou , Sangmin Oh , Minwoo Park , Kexuan Zou
CPC classification number: G06T7/73 , G06T7/20 , G06V2201/07 , G06T2207/30241 , G06T2207/30252
Abstract: In various examples, techniques for multi-dimensional tracking of objects using two-dimensional (2D) sensor data are described. Systems and methods may use first image data to determine a first 2D detected location and a first three-dimensional (3D) detected location of an object. The systems and methods may then determine a 2D estimated location using the first 2D detected location and a 3D estimated location using the first 3D detected location. The systems and methods may use second image data to determine a second 2D detected location and a second 3D detected location of a detected object, and may then determine that the object corresponds to the detected object using the 2D estimated location, the 3D estimated location, the second 2D detected location, and the second 3D detected location. The systems and methods may then generate, modify, delete, or otherwise update an object track that includes 2D state information and 3D state information.
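The correspondence test described above can be sketched as joint gating on both the 2D (pixel) and 3D (metric) estimated locations: a new detection is accepted for a track only if it is close in both spaces. The gate distances and function name below are hypothetical, not from the patent:

```python
def match_track(track_2d, track_3d, det_2d, det_3d, max_2d=50.0, max_3d=2.0):
    """Return True when a new detection corresponds to an existing track,
    gating on both the 2D pixel distance and the 3D metric distance
    between the track's estimated locations and the detected locations."""
    d2 = ((track_2d[0] - det_2d[0]) ** 2 + (track_2d[1] - det_2d[1]) ** 2) ** 0.5
    d3 = ((track_3d[0] - det_3d[0]) ** 2
          + (track_3d[1] - det_3d[1]) ** 2
          + (track_3d[2] - det_3d[2]) ** 2) ** 0.5
    return d2 <= max_2d and d3 <= max_3d
```

Requiring agreement in both spaces helps reject 2D-only coincidences (e.g. occlusions) and 3D-only noise.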
-
Publication Number: US11704890B2
Publication Date: 2023-07-18
Application Number: US17522624
Filing Date: 2021-11-09
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
CPC classification number: G06V10/25 , G06T7/536 , G06V10/454 , G06V10/70 , G06V10/82 , G06V20/58 , G06T2207/20084 , G06T2207/30261
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
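The sampling step at the end of this abstract, reading depth values at the DNN's input resolution out of a depth map produced at a smaller output resolution, can be sketched as nearest-neighbor sampling. This is an illustrative stand-in for whatever sampling algorithm the patent actually claims:

```python
def upsample_nearest(depth_map, in_w, in_h):
    """Nearest-neighbor sample a depth map (2D list, output resolution)
    up to the DNN's input resolution in_w x in_h."""
    out_h, out_w = len(depth_map), len(depth_map[0])
    result = []
    for y in range(in_h):
        src_y = min(out_h - 1, y * out_h // in_h)
        row = []
        for x in range(in_w):
            src_x = min(out_w - 1, x * out_w // in_w)
            row.append(depth_map[src_y][src_x])
        result.append(row)
    return result
```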
-
Publication Number: US20230142299A1
Publication Date: 2023-05-11
Application Number: US17454389
Filing Date: 2021-11-10
Applicant: NVIDIA Corporation
Inventor: Gang Pan , Joachim Pehserl , Dong Zhang , Baris Evrim Demiroz , Samuel Rupp Ogden , Tae Eun Choe , Sangmin Oh
IPC: G01S13/931 , G01S13/86 , G01S17/931 , G01S17/86
CPC classification number: G01S13/931 , G01S13/865 , G01S17/931 , G01S17/86 , G01S13/867 , G01S2013/932 , G01S2013/9318
Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
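The particle-filtering step described above, using a second sensor's depth data to discard inconsistent particles and sharpen the remaining confidences, can be sketched as follows. The tuple representation, tolerance value, and renormalization are assumptions for illustration, not the patent's formulation:

```python
def filter_particles(particles, depth_measurement, tolerance=0.5):
    """Keep only particles whose hypothesized depth agrees with a second
    sensor's depth measurement, then renormalize the confidences.

    particles: list of (depth_m, confidence) tuples.
    """
    kept = [(d, c) for d, c in particles if abs(d - depth_measurement) <= tolerance]
    total = sum(c for _, c in kept)
    if total == 0:
        return []
    return [(d, c / total) for d, c in kept]
```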
-
Publication Number: US11308338B2
Publication Date: 2022-04-19
Application Number: US16728595
Filing Date: 2019-12-27
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Sashank Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
-
Publication Number: US20240403640A1
Publication Date: 2024-12-05
Application Number: US18799229
Filing Date: 2024-08-09
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Sashank Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
IPC: G06N3/08 , B60W30/14 , B60W60/00 , G06F18/214 , G06V10/762 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
-
Publication Number: US12050285B2
Publication Date: 2024-07-30
Application Number: US17976581
Filing Date: 2022-10-28
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
CPC classification number: G01S7/417 , G01S13/865 , G01S13/89 , G06N3/04 , G06N3/08
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
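The label-filtering rule described here, dropping LIDAR ground-truth labels that contain fewer than a threshold number of RADAR detections, can be sketched on bird's-eye-view boxes. The box/point representation and threshold are illustrative assumptions:

```python
def filter_lidar_labels(labels, radar_points, min_detections=2):
    """Keep only LIDAR ground-truth boxes that contain at least
    min_detections RADAR points.

    labels: list of (x0, y0, x1, y1) axis-aligned BEV boxes.
    radar_points: list of (x, y) RADAR detections in the same frame.
    """
    kept = []
    for x0, y0, x1, y1 in labels:
        count = sum(1 for x, y in radar_points if x0 <= x <= x1 and y0 <= y <= y1)
        if count >= min_detections:
            kept.append((x0, y0, x1, y1))
    return kept
```

Labels that survive this filter would then serve as ground truth for training the RADAR detection network.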
-