-
1.
Publication No.: US20240061075A1
Publication Date: 2024-02-22
Application No.: US18493452
Filing Date: 2023-10-24
Applicant: NVIDIA Corporation
Inventor: Alexander POPOV , Nikolai SMOLYANSKIY , Ryan OLDJA , Shane Murray , Tilman WEKEL , David NISTER , Joachim PEHSERL , Ruchi BHARGAVA , Sangmin OH
IPC: G01S7/295 , G06T7/246 , G06T7/73 , G01S7/41 , G01S13/931 , G06N3/08 , G06V10/764 , G06V10/82 , G06V20/58 , G06V20/64
CPC classification number: G01S7/2955 , G06T7/246 , G06T7/73 , G01S7/414 , G01S7/417 , G01S13/931 , G06N3/08 , G06V10/764 , G06V10/82 , G06V20/58 , G06V20/64 , G06T2207/10044 , G06T2207/20084 , G06T2207/30261
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs, such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
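The preprocessing steps named in the abstract (ego-motion compensation of accumulated detections, followed by orthographic projection into a top-down grid) can be sketched as follows. This is a minimal illustration, not the patented method: the function names, the 4x4 homogeneous pose convention, and the grid parameters are all assumptions made for the example.

```python
import numpy as np

def ego_motion_compensate(points, rel_pose):
    """Transform past-frame RADAR points (N, 3) into the current ego frame.

    rel_pose is assumed to be a 4x4 homogeneous transform from the past
    frame to the current one (an illustrative convention, not the patent's).
    """
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ rel_pose.T)[:, :3]

def orthographic_projection(points, grid_size=64, extent=50.0):
    """Project 3D points onto a bird's-eye-view occupancy grid.

    Maps x, y coordinates in [-extent, extent) meters onto a
    grid_size x grid_size grid; cells containing a detection are set to 1.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    cells = ((points[:, :2] + extent) / (2 * extent) * grid_size).astype(int)
    valid = (cells >= 0).all(axis=1) & (cells < grid_size).all(axis=1)
    grid[cells[valid, 1], cells[valid, 0]] = 1.0
    return grid
```

A grid like this (possibly with extra channels for Doppler, RCS, or height) would then be the input tensor to the network's common trunk.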
-
2.
Publication No.: US20240096102A1
Publication Date: 2024-03-21
Application No.: US18366298
Filing Date: 2023-08-07
Applicant: NVIDIA Corporation
Inventor: Alexander POPOV , David NISTER , Nikolai SMOLYANSKIY , PATRIK GEBHARDT , Ke CHEN , Ryan OLDJA , Hee Seok LEE , Shane MURRAY , Ruchi BHARGAVA , Tilman WEKEL , Sangmin OH
IPC: G06V20/56 , G01S13/89 , G01S17/89 , G06V10/774
CPC classification number: G06V20/56 , G01S13/89 , G01S17/89 , G06V10/774
Abstract: Systems and methods are disclosed that relate to freespace detection using machine learning models. First data that may include object labels may be obtained from a first sensor, and freespace may be identified using the first data and the object labels. The first data may be annotated to include freespace labels that correspond to freespace within an operational environment. Freespace annotated data may be generated by combining the freespace labels with second data obtained from a second sensor, with the freespace annotated data corresponding to a viewable area in the operational environment. The viewable area may be determined by tracing one or more rays from the second sensor within the field of view of the second sensor relative to the first data. The freespace annotated data may be input into a machine learning model to train the machine learning model to detect freespace using the second data.
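The viewable-area step (tracing rays from the second sensor and stopping where an object blocks the ray) can be illustrated with a simple 2D grid version. This is a sketch under assumptions: the grid representation, ray count, and marching scheme are chosen for the example, not taken from the disclosure.

```python
import numpy as np

def trace_freespace(occupancy, origin, num_rays=360, max_range=None):
    """Mark grid cells as viewable freespace by marching rays from the
    sensor origin until an occupied cell blocks each ray (2D sketch).

    occupancy: (H, W) boolean grid of labeled objects from the first sensor.
    origin: (row, col) cell of the second sensor.
    """
    h, w = occupancy.shape
    free = np.zeros_like(occupancy, dtype=bool)
    max_range = max_range if max_range is not None else max(h, w)
    for k in range(num_rays):
        theta = 2 * np.pi * k / num_rays
        dr, dc = np.sin(theta), np.cos(theta)
        r, c = float(origin[0]), float(origin[1])
        for _ in range(int(max_range)):
            ri, ci = int(round(r)), int(round(c))
            if not (0 <= ri < h and 0 <= ci < w):
                break  # ray left the grid
            if occupancy[ri, ci]:
                break  # ray blocked; cells beyond are not viewable
            free[ri, ci] = True
            r += dr
            c += dc
    return free
```

Cells in the "shadow" behind an object are never marked free, which is what restricts the transferred freespace labels to the second sensor's viewable area.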
-
3.
Publication No.: US20250014186A1
Publication Date: 2025-01-09
Application No.: US18397921
Filing Date: 2023-12-27
Applicant: NVIDIA Corporation
Inventor: Ke CHEN , Nikolai SMOLYANSKIY , Alexey KAMENEV , Ryan OLDJA , Tilman WEKEL , David NISTER , Joachim PEHSERL , Ibrahim EDEN , Sangmin OH , Ruchi BHARGAVA
IPC: G06T7/11 , G05D1/81 , G06F18/22 , G06F18/23 , G06T5/50 , G06T7/10 , G06V10/44 , G06V10/82 , G06V20/56 , G06V20/58
Abstract: A deep neural network(s) (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
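The decoding step (turning the class confidence head and instance regression head outputs into per-pixel class and instance labels) can be sketched as below. This is a minimal, hypothetical decoder: the center-voting scheme, `fg_class` parameter, and merge radius are assumptions for illustration, not the claimed method.

```python
import numpy as np

def decode_panoptic(class_conf, inst_offsets, fg_class=1, merge_radius=2.0):
    """Decode multi-head DNN outputs into class and instance label maps.

    class_conf: (C, H, W) per-class confidence scores.
    inst_offsets: (2, H, W) regressed (dy, dx) offsets from each pixel
    toward its instance center.
    """
    class_map = class_conf.argmax(axis=0)            # pixel-level class labels
    inst_map = np.zeros(class_map.shape, dtype=int)  # 0 = no instance
    ys, xs = np.nonzero(class_map == fg_class)
    centers = []
    next_id = 1
    for y, x in zip(ys, xs):
        # each foreground pixel votes for a center; nearby votes merge
        cy, cx = y + inst_offsets[0, y, x], x + inst_offsets[1, y, x]
        for cid, (py, px) in enumerate(centers, start=1):
            if (cy - py) ** 2 + (cx - px) ** 2 <= merge_radius ** 2:
                inst_map[y, x] = cid
                break
        else:
            centers.append((cy, cx))
            inst_map[y, x] = next_id
            next_id += 1
    return class_map, inst_map
```

Bounding shapes and range values could then be derived per instance ID, e.g. by taking the extent of each instance's pixels and averaging the depth head's predictions over them.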
-