1.
Publication No.: US20240176018A1
Publication Date: 2024-05-30
Application No.: US18060444
Filing Date: 2022-11-30
Applicant: NVIDIA Corporation
Inventor: David Weikersdorfer , Qian Lin , Aman Jhunjhunwala , Emilie Lucie Eloïse Wirbel , Sangmin Oh , Minwoo Park , Gyeong Woo Cheon , Arthur Henry Rajala , Bor-Jeng Chen
IPC: G01S15/931 , G01S15/86
CPC classification number: G01S15/931 , G01S15/86 , G01S2015/938
Abstract: In various examples, techniques for sensor-fusion based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective. The machine may use these outputs to perform one or more operations.
2.
Publication No.: US20240176017A1
Publication Date: 2024-05-30
Application No.: US18060376
Filing Date: 2022-11-30
Applicant: NVIDIA Corporation
Inventor: David Weikersdorfer , Qian Lin , Aman Jhunjhunwala , Emilie Lucie Eloïse Wirbel , Sangmin Oh , Minwoo Park , Gyeong Woo Cheon , Arthur Henry Rajala , Bor-Jeng Chen
IPC: G01S15/931 , G01S15/86
CPC classification number: G01S15/931 , G01S15/86 , G01S2015/938
Abstract: In various examples, techniques for sensor-fusion based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective. The machine may use these outputs to perform one or more operations.
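The abstracts above describe rasterizing detected object locations into a bird's-eye-view occupancy map of the environment around the machine. A minimal sketch of that rasterization step, with assumed grid resolution, detection format, and function name (the patents feed such representations through trained neural networks rather than using a fixed rule):

```python
import numpy as np

def bev_occupancy(detections, grid_size=20, cell_m=0.25):
    """Rasterize (x, y) object detections, given in meters relative to
    the machine, into a square BEV occupancy grid centered on it."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size * cell_m / 2.0  # grid spans [-half, +half] meters
    for x, y in detections:
        col = int((x + half) / cell_m)
        row = int((y + half) / cell_m)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1.0  # mark the cell as occupied
    return grid

# Example: two ultrasonic returns, ahead-left of and behind the machine.
grid = bev_occupancy([(1.0, 1.0), (0.0, -2.0)])
```

In the described systems, a grid like this (or the raw sensor data) would be the input to neural networks trained to output refined occupancy and/or height maps.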
3.
Publication No.: US20240020953A1
Publication Date: 2024-01-18
Application No.: US18353453
Filing Date: 2023-07-17
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Trung Pham , Junghyun Kwon , Sayed Mehdi Sajjadi Mohammadabadi , Bor-Jeng Chen , Xin Liu , Bala Siva Sashank Jujjavarapu , Mehran Maghoumi
CPC classification number: G06V10/7715 , G06V20/56 , G06V10/82
Abstract: In various examples, feature values corresponding to a plurality of views are transformed into feature values of a shared orientation or perspective to generate a feature map—such as a Bird's-Eye-View (BEV), top-down, orthogonally projected, and/or other shared perspective feature map type. Feature values corresponding to a region of a view may be transformed into feature values using a neural network. The feature values may be assigned to bins of a grid and values assigned to at least one same bin may be combined to generate one or more feature values for the feature map. To assign the transformed features to the bins, one or more portions of a view may be projected into one or more bins using polynomial curves. Radial and/or angular bins may be used to represent the environment for the feature map.
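The abstract above describes assigning transformed feature values to radial and/or angular bins of a shared-perspective grid and combining values that land in the same bin. An illustrative sketch of that pooling step using a per-bin mean; the bin counts, ranges, and function name are assumptions, since the abstract specifies only that features are assigned to bins and combined:

```python
import numpy as np

def pool_polar_bins(points, feats, n_radial=4, n_angular=8, max_r=10.0):
    """points: (N, 2) xy positions; feats: (N,) feature values.
    Returns an (n_radial, n_angular) map of per-bin mean features."""
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0])  # in [-pi, pi]
    # Index each point into a radial bin and an angular bin.
    r_idx = np.clip((r / max_r * n_radial).astype(int), 0, n_radial - 1)
    a_idx = ((theta + np.pi) / (2 * np.pi) * n_angular).astype(int) % n_angular
    fmap = np.zeros((n_radial, n_angular))
    counts = np.zeros((n_radial, n_angular))
    np.add.at(fmap, (r_idx, a_idx), feats)   # sum features per bin
    np.add.at(counts, (r_idx, a_idx), 1)     # count points per bin
    # Combine values in the same bin by averaging; empty bins stay 0.
    return np.divide(fmap, counts, out=fmap, where=counts > 0)

# Example: two points share a bin ahead of the vehicle; one lies to the side.
out = pool_polar_bins(np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 5.0]]),
                      np.array([1.0, 3.0, 8.0]))
```

Averaging is just one way to combine co-binned values; a learned combination (e.g., inside the neural network the abstract mentions) would serve the same role.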