Semiconductor Laser and Optical Amplifier Photonic Package

    Publication Number: US20250035752A1

    Publication Date: 2025-01-30

    Application Number: US18794534

    Application Date: 2024-08-05

    Abstract: A light detection and ranging (LIDAR) device includes a first wafer layer, a laser assembly disposed on the first wafer layer, a capping layer, a second wafer layer, and a photonic integrated circuit (PIC). The capping layer is coupled to the first wafer layer and configured to seal the laser assembly. The second wafer layer is at least partially coupled to the first wafer layer. The PIC is formed on the second wafer layer. The second wafer layer includes an exit feature configured to outcouple laser light from the laser assembly.

    Sparse convolutional neural networks

    Publication Number: US12210344B2

    Publication Date: 2025-01-28

    Application Number: US18513119

    Application Date: 2023-11-17

    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
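
    As a rough illustration of the idea in this abstract (skipping convolutions over empty or irrelevant regions), the sketch below runs a small CNN only on image patches that actually contain data. The patch size, occupancy threshold, and TinyConvNet are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch: run a CNN only on patches of a sparse image that contain
# data, skipping the rest. Patch size, threshold, and the network are assumed.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
        )

    def forward(self, x):
        return self.net(x)

def relevant_patches(image, patch=32, min_occupancy=0.05):
    """Yield (row, col, patch_tensor) for patches with enough nonzero pixels."""
    h, w = image.shape
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tile = image[r:r + patch, c:c + patch]
            if (tile != 0).float().mean() >= min_occupancy:
                yield r, c, tile.unsqueeze(0).unsqueeze(0)  # shape (1, 1, H, W)

model = TinyConvNet().eval()
sparse_image = torch.zeros(256, 256)
sparse_image[40:60, 80:100] = torch.rand(20, 20)  # a single dense region

with torch.no_grad():
    predictions = [(r, c, model(tile)) for r, c, tile in relevant_patches(sparse_image)]
print(f"ran the CNN on {len(predictions)} of {(256 // 32) ** 2} patches")
```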

    Perception validation for autonomous vehicles

    Publication Number: US12202512B1

    Publication Date: 2025-01-21

    Application Number: US18628336

    Application Date: 2024-04-05

    Abstract: An example method includes (a) obtaining an object detection from a perception system that describes an object in an environment of an autonomous vehicle; (b) obtaining, from a reference dataset, a label that describes a reference position of the object in the environment; (c) determining a plurality of component divergence values respectively for a plurality of divergence metrics, wherein a respective divergence value characterizes a respective difference between the object detection and the label; (d) providing the plurality of component divergence values to a machine-learned model to generate a score that indicates an aggregate divergence between the object detection and the label, wherein the machine-learned model includes a plurality of learned parameters defining an influence of the plurality of component divergence values on the score; and (e) evaluating a quality of a match between the object detection and the label based on the score.
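
    A minimal sketch of the scoring step, under assumptions: a few hand-picked divergence metrics (center error, heading error, size error) are combined by a logistic model whose weights stand in for the learned parameters described in the abstract. The metrics, weight values, and 0.5 decision threshold are illustrative, not taken from the patent.

```python
# Hedged sketch: combine several per-detection divergence metrics into one
# aggregate match score via a weighted logistic model (a stand-in for the
# machine-learned model). All metrics, weights, and the threshold are assumed.
import math
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    length: float
    width: float
    heading: float

def component_divergences(detection: Box, label: Box) -> list[float]:
    center_err = math.hypot(detection.x - label.x, detection.y - label.y)
    heading_err = abs((detection.heading - label.heading + math.pi) % (2 * math.pi) - math.pi)
    size_err = abs(detection.length * detection.width - label.length * label.width)
    return [center_err, heading_err, size_err]

def aggregate_score(divergences: list[float], weights: list[float], bias: float) -> float:
    # Learned parameters (weights, bias) define each metric's influence on the score.
    z = bias + sum(w * d for w, d in zip(weights, divergences))
    return 1.0 / (1.0 + math.exp(-z))

det = Box(10.2, 5.1, 4.6, 1.9, 0.05)
ref = Box(10.0, 5.0, 4.5, 1.8, 0.00)
score = aggregate_score(component_divergences(det, ref), weights=[-1.5, -2.0, -0.5], bias=3.0)
print("match" if score > 0.5 else "no match", round(score, 3))
```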

    System and Method for Identifying Travel Way Features for Autonomous Vehicle Motion Control

    Publication Number: US20240427022A1

    Publication Date: 2024-12-26

    Application Number: US18672986

    Application Date: 2024-05-23

    Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation. Finally, the method can include identifying one or more travel way features based at least in part on the enhanced three-dimensional segmentation.
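
    The fusion step can be pictured with a simple camera-projection sketch: each 3D point is projected into the image through an assumed pinhole model and inherits the class of the pixel it lands on. The intrinsics and class ids are illustrative, and the classification-database and enhancing-model stages are omitted; this is not the disclosed method.

```python
# Minimal sketch: fuse a 2D semantic segmentation with 3D points by projecting
# each point into the image (assumed pinhole camera) and copying the pixel class.
import numpy as np

def fuse_segmentation(points_xyz: np.ndarray, seg_mask: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Return an (N, 4) array of [x, y, z, class_id] for points visible in the image."""
    in_front = points_xyz[:, 2] > 0.1                 # keep points ahead of the camera
    pts = points_xyz[in_front]
    uvw = (K @ pts.T).T                               # homogeneous pixel coordinates
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)       # divide by depth
    h, w = seg_mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    classes = seg_mask[uv[valid, 1], uv[valid, 0]]
    return np.column_stack([pts[valid], classes])

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
seg_mask = np.zeros((480, 640), dtype=np.int32)
seg_mask[300:480, :] = 1                              # class 1: e.g., drivable surface
points = np.array([[0.5, 1.0, 10.0], [-2.0, 0.2, 8.0], [1.0, -1.0, -5.0]])
print(fuse_segmentation(points, seg_mask, K))         # [x, y, z, class] per visible point
```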

    Systems and Methods for Sensor Data Processing and Object Detection and Motion Prediction for Robotic Platforms

    Publication Number: US20240369977A1

    Publication Date: 2024-11-07

    Application Number: US18656210

    Application Date: 2024-05-06

    Abstract: Systems and methods are disclosed for detecting and predicting the motion of objects within the surrounding environment of a system such as an autonomous vehicle. For example, an autonomous vehicle can obtain sensor data from a plurality of sensors comprising at least two different sensor modalities (e.g., RADAR, LIDAR, camera) and fuse the data together to create a fused sensor sample. The fused sensor sample can then be provided as input to a machine learning model (e.g., a machine learning model for object detection and/or motion prediction). The machine learning model can have been trained by independently applying sensor dropout to the at least two different sensor modalities. Outputs received from the machine learning model in response to receipt of the fused sensor samples are characterized by improved generalization performance over multiple sensor modalities, thus yielding improved performance in detecting objects and predicting their future locations, as well as improved navigation performance.
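
    A hedged sketch of the sensor-dropout training mentioned above: before fusion, each modality's feature vector is independently zeroed with some probability, so the fused model cannot over-rely on any single sensor. The feature dimensions, drop probability, and linear head are assumptions, not the disclosed architecture.

```python
# Sketch: independent per-modality "sensor dropout" applied before feature fusion.
import torch
import torch.nn as nn

class FusedDetector(nn.Module):
    def __init__(self, lidar_dim=64, radar_dim=32, camera_dim=128, drop_p=0.3):
        super().__init__()
        self.drop_p = drop_p
        self.head = nn.Linear(lidar_dim + radar_dim + camera_dim, 10)  # assumed output size

    def forward(self, lidar_feat, radar_feat, camera_feat):
        feats = [lidar_feat, radar_feat, camera_feat]
        if self.training:
            # Zero out each modality independently with probability drop_p.
            feats = [
                f * (torch.rand(f.shape[0], 1, device=f.device) > self.drop_p).float()
                for f in feats
            ]
        fused = torch.cat(feats, dim=-1)  # the "fused sensor sample"
        return self.head(fused)

model = FusedDetector().train()
out = model(torch.randn(4, 64), torch.randn(4, 32), torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 10])
```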

    Systems and Methods for Motion Forecasting and Planning for Autonomous Vehicles

    Publication Number: US20240367688A1

    Publication Date: 2024-11-07

    Application Number: US18658674

    Application Date: 2024-05-08

    Abstract: Systems and methods are disclosed for motion forecasting and planning for autonomous vehicles. For example, a plurality of future traffic scenarios are determined by modeling a joint distribution of actor trajectories for a plurality of actors, as opposed to an approach that models actors individually. As another example, a diversity objective is evaluated that rewards sampling of the future traffic scenarios that require distinct reactions from the autonomous vehicle. An estimated probability for the plurality of future traffic scenarios can be determined and used to generate a contingency plan for motion of the autonomous vehicle. The contingency plan can include at least one initial short-term trajectory intended for immediate action of the autonomous vehicle and a plurality of subsequent long-term trajectories associated with the plurality of future traffic scenarios.
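
    The contingency-planning idea can be sketched with assumed scenario probabilities and cost tables (the joint scenario sampling and diversity objective are not modeled here): pick the short-term trajectory whose expected cost over the sampled scenarios is lowest, and keep one long-term continuation per scenario.

```python
# Sketch: a contingency plan = one short-term trajectory chosen under uncertainty
# plus one long-term continuation per sampled scenario. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.5, 0.3, 0.2])                     # estimated scenario probabilities
cost = rng.uniform(0.0, 10.0, size=(4, 5, 3))     # cost[i, k, j]: short-term i,
                                                  # long-term continuation k, scenario j

# Score each short-term choice by the best continuation available in each
# scenario, weighted by the scenario's estimated probability.
best_continuation_cost = cost.min(axis=1)         # shape (4, 3)
expected_cost = best_continuation_cost @ p        # shape (4,)
i = int(np.argmin(expected_cost))

contingency_plan = {
    "short_term": i,
    "long_term_per_scenario": cost[i].argmin(axis=0).tolist(),  # one per scenario
}
print(contingency_plan, expected_cost.round(2))
```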
