-
Publication Number: US20230169348A1
Publication Date: 2023-06-01
Application Number: US18160662
Application Date: 2023-01-27
Applicant: Martin Ivanov GERDZHEV , Ehsan TAGHAVI , Ryan RAZANI , Bingbing LIU
Inventor: Martin Ivanov GERDZHEV , Ehsan TAGHAVI , Ryan RAZANI , Bingbing LIU
IPC: G06N3/084
CPC classification number: G06N3/084
Abstract: Method and system for computing a total variation loss for use in backpropagation during training of a neural network that individually classifies data points, comprising: predicting, using a neural network, a respective label for each data point in a set of input data points; determining a variation indicator that indicates a variance between: (i) smoothness of the predicted labels among neighboring data points and (ii) smoothness of the ground truth labels among the same neighboring data points; and computing the total variation loss based on the variation indicator.
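A minimal sketch of how a loss of this kind might be computed, assuming per-point softmax predictions, integer ground-truth labels, and a precomputed neighbor index; the tensor names, the choice of k nearest neighbors, and the L1 discrepancy are illustrative assumptions rather than the claimed method:

```python
# Hypothetical sketch of a total-variation-style loss comparing prediction
# smoothness with ground-truth smoothness over the same point neighborhoods.
# Shapes, names, and the L1 discrepancy are illustrative assumptions.
import torch
import torch.nn.functional as F

def total_variation_loss(logits, labels, neighbor_idx):
    """
    logits:       (N, C) per-point class scores from the network
    labels:       (N,)   integer ground-truth class per point
    neighbor_idx: (N, K) indices of K neighboring points for each point
    """
    probs = F.softmax(logits, dim=-1)                      # (N, C)
    onehot = F.one_hot(labels, logits.shape[-1]).float()   # (N, C)

    # Smoothness of predictions: difference between each point and its neighbors.
    pred_var = (probs.unsqueeze(1) - probs[neighbor_idx]).abs().sum(-1)   # (N, K)
    # Smoothness of the ground truth over the same neighborhoods.
    gt_var = (onehot.unsqueeze(1) - onehot[neighbor_idx]).abs().sum(-1)   # (N, K)

    # Variation indicator: how much the two smoothness patterns disagree.
    return (pred_var - gt_var).abs().mean()
```

In training, such a term could be added to a standard per-point classification loss (e.g., cross-entropy) and backpropagated together with it.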
-
12.
Publication Number: US11527084B2
Publication Date: 2022-12-13
Application Number: US16926096
Application Date: 2020-07-10
Applicant: Ehsan Taghavi , Amirhosein Nabatchian , Bingbing Liu
Inventor: Ehsan Taghavi , Amirhosein Nabatchian , Bingbing Liu
Abstract: A system and method for generating a bounding box for an object in proximity to a vehicle are disclosed. The method includes: receiving a three-dimensional (3D) point cloud representative of an environment; receiving a two-dimensional (2D) image of the environment; processing the 3D point cloud to identify an object cluster of 3D data points for a 3D object in the 3D point cloud; processing the 2D image to detect a 2D object in the 2D image and generate information regarding the 2D object from the 2D image; and when the 3D object and the 2D object correspond to the same object in the environment: generating a bird's eye view (BEV) bounding box for the object based on the object cluster of 3D data points and the information from the 2D image.
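A minimal sketch of the final bounding-box step, assuming the object cluster is an (N, 3) array of LIDAR points and that the matched 2D detection contributes a class label; the axis-aligned min/max box and all names are illustrative assumptions, not the patented fusion procedure:

```python
# Hypothetical sketch: derive an axis-aligned bird's-eye-view (BEV) box from a
# cluster of 3D points, attaching the class label obtained from the 2D detector.
import numpy as np

def bev_box_from_cluster(cluster_xyz, image_class_label):
    """
    cluster_xyz:       (N, 3) array of 3D points belonging to one object
    image_class_label: class predicted for the matching 2D detection
    """
    xy = cluster_xyz[:, :2]                    # drop height for the BEV plane
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    center = (mins + maxs) / 2.0
    size = maxs - mins                         # extent in the BEV plane
    return {"center": center, "size": size, "label": image_class_label}

# Example use with a synthetic cluster:
cluster = np.random.rand(200, 3) * [4.0, 2.0, 1.5] + [10.0, -3.0, 0.0]
box = bev_box_from_cluster(cluster, "car")
```

In practice an oriented box (e.g., fitted via PCA or L-shape fitting over the BEV points) and class-specific size priors informed by the 2D detection would typically be preferred over a plain axis-aligned extent.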
-
13.
Publication Number: US20220300681A1
Publication Date: 2022-09-22
Application Number: US17203718
Application Date: 2021-03-16
Applicant: Yuan REN , Ehsan TAGHAVI , Bingbing LIU
Inventor: Yuan REN , Ehsan TAGHAVI , Bingbing LIU
Abstract: Devices, systems, methods, and media are described for point cloud data augmentation using model injection, for the purpose of training machine learning models to perform point cloud segmentation and object detection. A library of surface models is generated from point cloud object instances in LIDAR-generated point cloud frames. The surface models can be used to inject new object instances into target point cloud frames at an arbitrary location within the target frame to generate new, augmented point cloud data. The augmented point cloud data may then be used as training data to improve the accuracy of a machine learned model trained using a machine learning algorithm to perform a segmentation and/or object detection task.
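A minimal sketch of the injection step, assuming an object instance stored as points centered at the origin and a target frame with per-point semantic labels; the function signature, the yaw-only rotation, and the label handling are illustrative assumptions:

```python
# Hypothetical sketch: inject a stored object instance into a target LIDAR frame
# at an arbitrary BEV location and heading, then extend the per-point labels.
import numpy as np

def inject_object(frame_xyz, frame_labels, object_xyz, object_class,
                  target_xy, yaw):
    """
    frame_xyz:    (M, 3) points of the target frame
    frame_labels: (M,)   per-point semantic labels of the target frame
    object_xyz:   (N, 3) points of a library object instance, centered at origin
    object_class: integer label assigned to the injected points
    target_xy:    (2,)   desired BEV position of the object center
    yaw:          heading angle in radians about the vertical axis
    """
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    placed = object_xyz @ rot.T                # rotate the instance
    placed[:, :2] += target_xy                 # translate it to the target spot

    aug_xyz = np.concatenate([frame_xyz, placed], axis=0)
    aug_labels = np.concatenate(
        [frame_labels, np.full(len(placed), object_class)], axis=0)
    return aug_xyz, aug_labels
```

A full augmentation pipeline along these lines would additionally check the chosen location for collisions with existing objects and account for occlusion and LIDAR ray geometry, which this sketch omits.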
-