-
Publication No.: US12235353B2
Publication Date: 2025-02-25
Application No.: US17454389
Filing Date: 2021-11-10
Applicant: NVIDIA Corporation
Inventor: Gang Pan , Joachim Pehserl , Dong Zhang , Baris Evrim Demiroz , Samuel Rupp Ogden , Tae Eun Choe , Sangmin Oh
IPC: G01S13/931 , G01S13/86 , G01S17/86 , G01S17/931
Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
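The abstract does not disclose an implementation, but the particle step it describes (sample candidate locations inside a detection bounding shape, attach a confidence to each, then filter particles that disagree with a second sensor's depth reading) can be sketched roughly as follows. All function names, the uniform sampling, and the tolerance-based filter are illustrative assumptions, not the patented method:

```python
import random

def generate_particles(bbox, n=100, seed=0):
    """Sample candidate hazard locations inside a detection bounding box.

    bbox: (x_min, y_min, x_max, y_max). Each particle carries a confidence
    that an object exists at its location (here drawn arbitrarily).
    """
    rng = random.Random(seed)
    x0, y0, x1, y1 = bbox
    return [
        {"x": rng.uniform(x0, x1), "y": rng.uniform(y0, y1),
         "conf": rng.uniform(0.3, 1.0)}
        for _ in range(n)
    ]

def filter_by_depth(particles, depth_lookup, expected_depth, tol=1.0):
    """Keep only particles whose reading from a second (depth) sensor
    agrees with the expected depth within a tolerance."""
    return [
        p for p in particles
        if abs(depth_lookup(p["x"], p["y"]) - expected_depth) <= tol
    ]
```

A constant `depth_lookup` that matches `expected_depth` keeps every particle; one that disagrees beyond `tol` discards them all, mimicking how a disagreeing second modality prunes the particle set.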
-
Publication No.: US20240317263A1
Publication Date: 2024-09-26
Application No.: US18680378
Filing Date: 2024-05-31
Applicant: NVIDIA CORPORATION
Inventor: Ahyun SEO , Tae Eun Choe , Minwoo Park , Jung Seock Joo
CPC classification number: B60W60/0015 , G06N3/04
Abstract: Systems and methods are disclosed relating to viewpoint adapted perception for autonomous machines and applications. A 3D perception network may be adapted to handle unavailable target rig data by training one or more layers of the 3D perception network as part of a training network using real source rig data and simulated source and target rig data. Feature statistics extracted from the real source data may be used to transform the features extracted from the simulated data during training. The paths for real and simulated data through the resulting network may be alternately trained on real and simulated data to update shared weights for the different paths. As such, one or more of the paths through the training network(s) may be designated as the 3D perception network, and target rig data may be applied to the 3D perception network to perform one or more perception tasks.
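One reading of the abstract's "feature statistics extracted from the real source data may be used to transform the features extracted from the simulated data" is a per-channel statistic transfer: shift and scale simulated features to the mean and standard deviation of real features. This minimal sketch assumes that reading; the function name and the scalar-list representation are illustrative, not from the patent:

```python
def match_feature_statistics(sim_feats, real_feats, eps=1e-6):
    """Shift/scale simulated features so their mean and standard deviation
    match those of real features (one possible reading of the abstract's
    feature-statistics transform)."""
    def mean(xs):
        return sum(xs) / len(xs)

    def std(xs, m):
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    m_sim, m_real = mean(sim_feats), mean(real_feats)
    s_sim, s_real = std(sim_feats, m_sim), std(real_feats, m_real)
    # Normalize to zero mean / unit std, then re-project onto real statistics.
    return [(x - m_sim) / (s_sim + eps) * s_real + m_real for x in sim_feats]
```

After the transform, the simulated features carry the real data's first- and second-order statistics, which is the property a downstream shared-weight path would rely on.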
-
Publication No.: US20210264175A1
Publication Date: 2021-08-26
Application No.: US17187228
Filing Date: 2021-02-26
Applicant: NVIDIA Corporation
Inventor: Dong Zhang , Sangmin Oh , Junghyun Kwon , Baris Evrim Demiroz , Tae Eun Choe , Minwoo Park , Chethan Ningaraju , Hao Tsui , Eric Viscito , Jagadeesh Sankaran , Yongqing Liang
Abstract: Systems and methods are disclosed that use a geometric approach to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in the two frames. The first frame may be aligned with the second frame, and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a disparity image value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring that corresponds to the object.
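The core geometric steps in the abstract (compare aligned frames pixel by pixel, threshold the resulting disparity image, and fit a bounding shape to the surviving pixels) can be sketched as below. Frames are plain 2D lists of intensities; the frame alignment, scoring, and subset combination described in the abstract are omitted, so this is an illustrative reduction, not the claimed method:

```python
def disparity_image(frame_a, frame_b):
    """Per-pixel absolute difference between two already-aligned frames,
    each a list of rows of scalar intensities."""
    return [
        [abs(a - b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def bounding_shape(disparity, threshold):
    """Axis-aligned box (x0, y0, x1, y1) over pixels whose disparity
    exceeds the threshold; None if no pixel qualifies."""
    hits = [(x, y)
            for y, row in enumerate(disparity)
            for x, v in enumerate(row)
            if v > threshold]
    if not hits:
        return None
    xs = [x for x, _ in hits]
    ys = [y for _, y in hits]
    return (min(xs), min(ys), max(xs), max(ys))
```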
-
Publication No.: US11801861B2
Publication Date: 2023-10-31
Application No.: US17150954
Filing Date: 2021-01-15
Applicant: NVIDIA Corporation
Inventor: Tae Eun Choe , Pengfei Hao , Xiaolin Lin , Minwoo Park
CPC classification number: B60W60/001 , B60W50/06 , G06N3/04 , G06N3/08 , B60W2420/42 , B60W2554/4029 , B60W2554/80
Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
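The augmentation idea in the abstract (composite a simulated object into a real-world image and derive the training label from the paste location) can be sketched as follows, assuming images are 2D lists of pixel values. The function name, the hard paste with no blending, and the `"road_debris"` class label are illustrative assumptions:

```python
def augment_with_object(image, patch, top, left):
    """Paste a simulated object patch into a copy of a real image and
    return the augmented image plus the bounding-box label implied by
    the paste location. Box format: (x0, y0, x1, y1), exclusive max."""
    out = [row[:] for row in image]  # leave the original image untouched
    for dy, patch_row in enumerate(patch):
        for dx, value in enumerate(patch_row):
            out[top + dy][left + dx] = value
    label = {
        "bbox": (left, top, left + len(patch[0]), top + len(patch)),
        "class": "road_debris",  # assumed label for illustration
    }
    return out, label
```

Because the label comes directly from the paste coordinates, the augmented pair is self-annotating, which is what makes simulated-object augmentation cheap for training detectors.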
-
Publication No.: US20230294727A1
Publication Date: 2023-09-21
Application No.: US17695621
Filing Date: 2022-03-15
Applicant: NVIDIA Corporation
Inventor: Sangmin Oh , Baris Evrim Demiroz , Gang Pan , Dong Zhang , Joachim Pehserl , Samuel Rupp Ogden , Tae Eun Choe
CPC classification number: B60W60/001 , G06K9/6288 , G06F9/5072 , B60W2555/20 , B60W2420/42 , B60W2420/52
Abstract: In various examples, a hazard detection system plots hazard indicators from multiple detection sensors to grid cells of an occupancy grid corresponding to a driving environment. For example, as the ego-machine travels along a roadway, one or more sensors of the ego-machine may capture sensor data representing the driving environment. A system of the ego-machine may then analyze the sensor data to determine the existence and/or location of the one or more hazards within an occupancy grid—and thus within the environment. When a hazard is detected using a respective sensor, the system may plot an indicator of the hazard to one or more grid cells that correspond to the detected location of the hazard. Based, at least in part, on a fused or combined confidence of the hazard indicators for each grid cell, the system may predict whether the corresponding grid cell is occupied by a hazard.
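The grid mechanics the abstract describes (map each sensor's hazard indicator to a grid cell, then fuse the per-cell confidences to decide occupancy) can be sketched as below. The abstract does not state a fusion rule; noisy-OR combination and the 0.5 occupancy threshold are assumptions for illustration:

```python
from collections import defaultdict

class HazardGrid:
    """Occupancy grid that collects per-sensor hazard indicators per cell
    and fuses them into a single occupancy confidence."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # (cx, cy) -> [confidence, ...]

    def plot(self, x, y, confidence):
        """Record a hazard indicator at world position (x, y)."""
        cell = (int(x // self.cell_size), int(y // self.cell_size))
        self.cells[cell].append(confidence)

    def fused_confidence(self, cell):
        """Noisy-OR fusion: P(occupied) = 1 - prod(1 - c_i).
        One possible combination rule, not the patented one."""
        p_clear = 1.0
        for c in self.cells.get(cell, []):
            p_clear *= (1.0 - c)
        return 1.0 - p_clear

    def occupied(self, cell, threshold=0.5):
        return self.fused_confidence(cell) >= threshold
```

Under noisy-OR, two independent 0.5-confidence indicators in the same cell fuse to 0.75, so agreement between sensors raises the occupancy estimate above either sensor alone.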
-
Publication No.: US20210406560A1
Publication Date: 2021-12-30
Application No.: US17353231
Filing Date: 2021-06-21
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Junghyun Kwon , Mehmet K. Kocamaz , Hae-Jong Seo , Berta Rodriguez Hervas , Tae Eun Choe
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
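The deduplication behavior the fusion network is trained for (a single object seen in overlapping fields of view should appear once in the fused output) is classically approximated by IoU-based greedy suppression. The sketch below is that classical stand-in, not the learned DNN fusion the abstract describes; box format and field names are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fuse_detections(detections, iou_thresh=0.5):
    """Greedy merge of overlapping detections from different sensors,
    keeping the highest-confidence instance of each object."""
    kept = []
    for det in sorted(detections, key=lambda d: -d["conf"]):
        if all(iou(det["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(det)
    return kept
```

A learned fusion network replaces this hand-tuned threshold with associations learned from the boundary and overlap regions of the source sensors, which is the advantage the abstract claims.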
-