-
Publication No.: US20240176017A1
Publication Date: 2024-05-30
Application No.: US18060376
Filing Date: 2022-11-30
Applicant: NVIDIA Corporation
Inventor: David Weikersdorfer , Qian Lin , Aman Jhunjhunwala , Emilie Lucie Eloïse Wirbel , Sangmin Oh , Minwoo Park , Gyeong Woo Cheon , Arthur Henry Rajala , Bor-Jeng Chen
IPC: G01S15/931 , G01S15/86
CPC classification number: G01S15/931 , G01S15/86 , G01S2015/938
Abstract: In various examples, techniques for sensor-fusion-based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective. The machine may use these outputs to perform one or more operations.
-
Publication No.: US20240176018A1
Publication Date: 2024-05-30
Application No.: US18060444
Filing Date: 2022-11-30
Applicant: NVIDIA Corporation
Inventor: David Weikersdorfer , Qian Lin , Aman Jhunjhunwala , Emilie Lucie Eloïse Wirbel , Sangmin Oh , Minwoo Park , Gyeong Woo Cheon , Arthur Henry Rajala , Bor-Jeng Chen
IPC: G01S15/931 , G01S15/86
CPC classification number: G01S15/931 , G01S15/86 , G01S2015/938
Abstract: In various examples, techniques for sensor-fusion-based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective. The machine may use these outputs to perform one or more operations.
-
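The abstracts above describe a pipeline in which ultrasonic echoes are projected into a bird's-eye-view (BEV) representation and a neural network produces occupancy and height maps from it. The following is a minimal sketch of that data flow, not the patented implementation: the sensor layout, grid size, and the toy (untrained, per-cell) network head are all assumptions made purely for illustration.

```python
import numpy as np

GRID = 32       # BEV grid is GRID x GRID cells, vehicle at the center
CELL_M = 0.25   # each cell covers 0.25 m x 0.25 m

def ranges_to_bev(sensor_xy, sensor_yaw, ranges):
    """Scatter each ultrasonic echo into a BEV echo-evidence grid."""
    bev = np.zeros((GRID, GRID), dtype=np.float32)
    for (sx, sy), yaw, r in zip(sensor_xy, sensor_yaw, ranges):
        # echo location in the vehicle frame
        ex, ey = sx + r * np.cos(yaw), sy + r * np.sin(yaw)
        # vehicle frame -> grid indices
        i = int(GRID / 2 + ey / CELL_M)
        j = int(GRID / 2 + ex / CELL_M)
        if 0 <= i < GRID and 0 <= j < GRID:
            bev[i, j] += 1.0
    return bev

def toy_head(bev, w_occ, w_h):
    """Toy per-cell 'network head': occupancy probability + height estimate.

    A real system would use trained convolutional networks here; this
    stands in only to show the occupancy/height output structure.
    """
    occ = 1.0 / (1.0 + np.exp(-(w_occ * bev - 1.0)))  # sigmoid over a logit
    height = np.maximum(w_h * bev, 0.0)               # ReLU, in meters
    return occ, height

# four hypothetical ultrasonic sensors on the front bumper
sensor_xy = [(-0.6, 1.0), (-0.2, 1.0), (0.2, 1.0), (0.6, 1.0)]
sensor_yaw = [np.pi / 2] * 4    # all facing forward (+y)
ranges = [2.0, 1.5, 1.5, 2.0]   # measured echo distances in meters

bev = ranges_to_bev(sensor_xy, sensor_yaw, ranges)
occ, height = toy_head(bev, w_occ=4.0, w_h=0.5)
print("cells with echo evidence:", int((bev > 0).sum()))
```

The machine-level "one or more operations" mentioned in the abstracts (e.g., braking or path planning) would then consume `occ` and `height` downstream; they are omitted here.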