-
Publication No.: US20250035752A1
Publication Date: 2025-01-30
Application No.: US18794534
Application Date: 2024-08-05
Applicant: Aurora Operations, Inc.
Inventor: Lei Wang , Sen Lin , Andrew Steil Michaels
IPC: G01S7/481 , H01S5/02253
Abstract: A light detection and ranging (LIDAR) device includes a first wafer layer, a laser assembly disposed on the first wafer layer, a capping layer, a second wafer layer, and a photonic integrated circuit (PIC). The capping layer is coupled to the first wafer layer and configured to seal the laser assembly. The second wafer layer is at least partially coupled to the first wafer layer. The PIC is formed on the second wafer layer. The second wafer layer includes an exit feature configured to outcouple laser light from the laser assembly.
-
Publication No.: US12210344B2
Publication Date: 2025-01-28
Application No.: US18513119
Application Date: 2023-11-17
Applicant: Aurora Operations, Inc.
Inventor: Raquel Urtasun , Mengye Ren , Andrei Pokrovsky , Bin Yang
IPC: G05D1/00 , G01S17/86 , G01S17/89 , G01S17/931
Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
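The abstract discloses no implementation, but the core idea, convolving only over relevant portions of sparse imagery rather than the entire frame, can be sketched in plain NumPy. All names below are hypothetical, and a simple nonzero-content test stands in for whatever relevance criterion the system actually uses:

```python
import numpy as np

def find_relevant_patches(image, patch=4, threshold=0.0):
    """Return top-left corners of patches containing any nonzero data.

    All-zero (sparse) patches are skipped entirely, so the downstream
    convolution never runs over them.
    """
    corners = []
    h, w = image.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if np.abs(image[y:y + patch, x:x + patch]).sum() > threshold:
                corners.append((y, x))
    return corners

def convolve_patches(image, kernel, corners, patch=4):
    """Apply a 'valid' 2D convolution only inside the relevant patches."""
    k = kernel.shape[0]
    outputs = {}
    for (y, x) in corners:
        region = image[y:y + patch, x:x + patch]
        out = np.zeros((patch - k + 1, patch - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (region[i:i + k, j:j + k] * kernel).sum()
        outputs[(y, x)] = out
    return outputs

# A mostly-empty "sparse" image with data in only one corner.
img = np.zeros((8, 8))
img[0:4, 0:4] = 1.0
corners = find_relevant_patches(img)
results = convolve_patches(img, np.ones((3, 3)), corners)
print(corners)  # only the single non-empty patch survives
```

Of the four 4x4 patches, three are all-zero and are never convolved; in a real sparse LIDAR or imagery frame the savings scale with how empty the frame is.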
-
Publication No.: US12202512B1
Publication Date: 2025-01-21
Application No.: US18628336
Application Date: 2024-04-05
Applicant: Aurora Operations, Inc.
Inventor: Davis Edward King
Abstract: An example method includes (a) obtaining an object detection from a perception system that describes an object in an environment of the autonomous vehicle; (b) obtaining, from a reference dataset, a label that describes a reference position of the object in the environment; (c) determining a plurality of component divergence values respectively for a plurality of divergence metrics, wherein a respective divergence value characterizes a respective difference between the object detection and the label; (d) providing the plurality of component divergence values to a machine-learned model to generate a score that indicates an aggregate divergence between the object detection and the label, wherein the machine-learned model includes a plurality of learned parameters defining an influence of the plurality of component divergence values on the score; (e) evaluating a quality of a match between the object detection and the label based on the score.
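Steps (c) and (d) describe computing several component divergences and feeding them to a learned model that weights their influence on an aggregate score. A minimal sketch, assuming a logistic model and three illustrative metrics (the patent names neither the metrics nor the model class):

```python
import math

def component_divergences(det, label):
    """Hypothetical component metrics comparing a detection to a label:
    center distance, size gap, and heading gap."""
    center_dist = math.hypot(det["x"] - label["x"], det["y"] - label["y"])
    size_gap = abs(det["size"] - label["size"])
    heading_gap = abs(det["heading"] - label["heading"])
    return [center_dist, size_gap, heading_gap]

def match_score(divergences, weights, bias):
    """Learned parameters (weights, bias) define each component's
    influence on the aggregate divergence score in [0, 1];
    higher means the detection and label diverge more."""
    z = bias + sum(w * d for w, d in zip(weights, divergences))
    return 1.0 / (1.0 + math.exp(-z))

det = {"x": 1.0, "y": 2.0, "size": 4.0, "heading": 0.1}
label = {"x": 1.1, "y": 2.0, "size": 4.2, "heading": 0.0}
d = component_divergences(det, label)
score = match_score(d, weights=[2.0, 1.0, 1.0], bias=-2.0)
good_match = score < 0.5  # step (e): evaluate match quality via the score
print(round(score, 3), good_match)
```

In the method as claimed, the weights would be learned from data rather than hand-set as they are here.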
-
Publication No.: USD1056734S1
Publication Date: 2025-01-07
Application No.: US29870313
Application Date: 2023-01-20
Applicant: Aurora Operations, Inc.
Designer: Woonghee Han , John Paxton , Albert Shane
-
Publication No.: US20240427022A1
Publication Date: 2024-12-26
Application No.: US18672986
Application Date: 2024-05-23
Applicant: Aurora Operations, Inc.
Inventor: Raquel Urtasun , Min Bai , Shenlong Wang
IPC: G01S17/931 , G06T7/10 , G06T7/70 , G06T17/00 , G06T17/10 , G06V10/26 , G06V10/80 , G06V20/56 , G06V20/58
Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation. Finally, the method can include identifying one or more travel way features based at least in part on the enhanced three-dimensional segmentation.
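The fusion step, attaching 2D segmentation classes to 3D points, is commonly done by projecting each point into the image and reading off the pixel's class. A rough sketch under that assumption (simple pinhole model with made-up intrinsics; the patent does not specify the fusion mechanism):

```python
import numpy as np

def fuse_segmentation(points_3d, seg_2d, fx, fy, cx, cy):
    """Assign each 3D point the class of the 2D segmentation pixel it
    projects to under a pinhole camera model.

    Points behind the camera or projecting outside the image get -1.
    """
    labels = np.full(len(points_3d), -1, dtype=int)
    h, w = seg_2d.shape
    for i, (x, y, z) in enumerate(points_3d):
        if z <= 0:
            continue  # behind the camera
        u = int(fx * x / z + cx)  # image column
        v = int(fy * y / z + cy)  # image row
        if 0 <= u < w and 0 <= v < h:
            labels[i] = seg_2d[v, u]
    return labels

# 4x4 "segmentation": left half class 0 (road), right half class 1 (lane line).
seg = np.array([[0, 0, 1, 1]] * 4)
pts = [(-1.0, 0.0, 5.0),   # projects into the left half  -> class 0
       ( 1.0, 0.0, 5.0),   # projects into the right half -> class 1
       ( 0.0, 0.0, -2.0)]  # behind the camera            -> -1
labels = fuse_segmentation(pts, seg, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(labels)
```

The labeled points would then be the three-dimensional segmentation stored in the classification database.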
-
Publication No.: USD1054884S1
Publication Date: 2024-12-24
Application No.: US29870317
Application Date: 2023-01-20
Applicant: Aurora Operations, Inc.
Designer: Woonghee Han , John Paxton , Albert Shane
-
Publication No.: US12169256B2
Publication Date: 2024-12-17
Application No.: US17321045
Application Date: 2021-05-14
Applicant: Aurora Operations, Inc.
Inventor: Jean-Sebastien Valois , David McAllister Bradley , Adam Charles Watson , Peter Anthony Melick , Andrew Gilbert Miller
IPC: G01S7/00 , G01S7/497 , G01S17/08 , G01S17/931 , G01S7/40
Abstract: A vehicle sensor calibration system can detect an SDV on a turntable surrounded by a plurality of fiducial targets, and rotate the turntable using a control mechanism to provide the sensor system of the SDV with a sensor view of the plurality of fiducial targets. The vehicle sensor calibration system can receive, over a communication link with the SDV, a data log corresponding to the sensor view from the sensor system of the SDV recorded as the SDV rotates on the turntable. Thereafter, the vehicle sensor calibration system can analyze the sensor data to determine a set of calibration parameters to calibrate the sensor system of the SDV.
-
Publication No.: US20240411311A1
Publication Date: 2024-12-12
Application No.: US18812632
Application Date: 2024-08-22
Applicant: Aurora Operations, Inc.
Inventor: Scott K. Boehmke
IPC: G05D1/00 , G01C21/34 , G01S7/481 , G01S7/497 , G01S17/931
Abstract: A planar-beam, light detection and ranging (PLADAR) system can include a laser scanner that emits a planar-beam, and a detector array that detects reflected light from the planar beam.
-
Publication No.: US20240369977A1
Publication Date: 2024-11-07
Application No.: US18656210
Application Date: 2024-05-06
Applicant: Aurora Operations, Inc.
Inventor: Abhishek Mohta , Fang-Chieh Chou , Carlos Vallespi-Gonzalez , Brian C. Becker , Nemanja Djuric
Abstract: Systems and methods are disclosed for detecting and predicting the motion of objects within the surrounding environment of a system such as an autonomous vehicle. For example, an autonomous vehicle can obtain sensor data from a plurality of sensors comprising at least two different sensor modalities (e.g., RADAR, LIDAR, camera), which is fused together to create a fused sensor sample. The fused sensor sample can then be provided as input to a machine learning model (e.g., a machine learning model for object detection and/or motion prediction). The machine learning model can have been trained by independently applying sensor dropout to the at least two different sensor modalities. Outputs received from the machine learning model in response to receipt of the fused sensor samples are characterized by improved generalization performance over multiple sensor modalities, thus yielding improved performance in detecting objects and predicting their future locations, as well as improved navigation performance.
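The training trick, independently dropping whole sensor modalities so the fused model cannot over-rely on any one of them, can be sketched in a few lines. Everything here (the dict layout, the drop probability, zeroing as the drop mechanism) is an illustrative assumption; the abstract gives no training details:

```python
import random

def apply_sensor_dropout(sample, drop_prob=0.3, rng=None):
    """Independently zero out each sensor modality with probability
    drop_prob before fusion, forcing the downstream model to learn
    from whichever modalities remain."""
    rng = rng or random.Random()
    dropped = {}
    for modality, data in sample.items():
        if rng.random() < drop_prob:
            dropped[modality] = [0.0] * len(data)  # modality dropped
        else:
            dropped[modality] = data               # modality kept
    return dropped

sample = {"lidar": [0.5, 0.7], "radar": [0.2, 0.9], "camera": [0.1, 0.3]}
out = apply_sensor_dropout(sample, drop_prob=0.5, rng=random.Random(0))
print(out)
```

Because the draws are independent per modality, the model sees many combinations of available sensors during training, which is what drives the claimed generalization across modalities.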
-
Publication No.: US20240367688A1
Publication Date: 2024-11-07
Application No.: US18658674
Application Date: 2024-05-08
Applicant: Aurora Operations, Inc.
Inventor: Alexander Yuhao Cui , Abbas Sadat , Sergio Casas , Renjie Liao , Raquel Urtasun
Abstract: Systems and methods are disclosed for motion forecasting and planning for autonomous vehicles. For example, a plurality of future traffic scenarios are determined by modeling a joint distribution of actor trajectories for a plurality of actors, as opposed to an approach that models actors individually. As another example, a diversity objective is evaluated that rewards sampling of the future traffic scenarios that require distinct reactions from the autonomous vehicle. An estimated probability for the plurality of future traffic scenarios can be determined and used to generate a contingency plan for motion of the autonomous vehicle. The contingency plan can include at least one initial short-term trajectory intended for immediate action of the AV and a plurality of subsequent long-term trajectories associated with the plurality of future traffic scenarios.
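The contingency-planning idea, one shared short-term trajectory that hedges across sampled future scenarios, each of which keeps its own long-term continuation, reduces to a small expected-cost selection once scenarios and probabilities are in hand. A toy sketch with hand-made costs (the real system would derive both from the learned joint distribution over actor trajectories):

```python
def best_contingency_plan(costs, scenario_probs):
    """Pick the single immediate (short-term) trajectory with the
    lowest probability-weighted cost across all sampled scenarios.
    Each scenario retains its own long-term branch (here just its
    column index, as a stand-in for a full trajectory)."""
    best_idx, best_cost = None, float("inf")
    for i, row in enumerate(costs):
        expected = sum(p * c for p, c in zip(scenario_probs, row))
        if expected < best_cost:
            best_idx, best_cost = i, expected
    long_term = list(range(len(scenario_probs)))  # one branch per scenario
    return best_idx, best_cost, long_term

# Rows = candidate short-term trajectories, columns = sampled scenarios.
costs = [
    [1.0, 9.0],   # great if scenario 0 happens, bad in scenario 1
    [4.0, 4.0],   # acceptable in both scenarios
]
probs = [0.5, 0.5]  # estimated scenario probabilities
idx, cost, branches = best_contingency_plan(costs, probs)
print(idx, cost)
```

With equally likely scenarios, the hedging trajectory (expected cost 4.0) beats the specialist (expected cost 5.0), which is the point of planning against distinct-reaction scenarios rather than a single forecast.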
-