STEERING CONTROL FOR VEHICLES
    Invention Application

    Publication Number: US20210053616A1

    Publication Date: 2021-02-25

    Application Number: US16949642

    Filing Date: 2020-11-09

    Applicant: Zoox, Inc.

    Abstract: Model-based control of dynamical systems typically requires accurate domain-specific knowledge and specifications of system components. Generally, steering actuator dynamics can be difficult to model due to, for example, an integrated power steering control module, proprietary black-box controls, etc. Further, it is difficult to capture the complex interplay of non-linear interactions, such as power steering, tire forces, etc., with sufficient accuracy. To overcome these limitations, a recurrent neural network can be employed to model the steering dynamics of an autonomous vehicle. The resulting model can be used to generate feedforward steering commands for embedded control. Such a neural network model can be generated automatically with less domain-specific knowledge, can predict steering dynamics more accurately, and can perform comparably to a high-fidelity first-principles model when used for controlling the steering system of a self-driving vehicle.
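    A minimal sketch of the idea in this abstract: a small recurrent cell that maps a sequence of commanded steering angles to predicted actuator response. All names, dimensions, and weights below are illustrative placeholders, not the patented model or a trained network.

```python
import numpy as np

class SteeringRNN:
    """Minimal Elman-style recurrent cell sketching how a learned model
    could map commanded steering angles to predicted actuator response.
    Weights here are random placeholders, not a trained model."""

    def __init__(self, hidden_size=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (hidden_size, 1))           # command -> hidden
        self.W_h = rng.normal(0, 0.1, (hidden_size, hidden_size))  # recurrent weights
        self.W_out = rng.normal(0, 0.1, (1, hidden_size))          # hidden -> angle
        self.h = np.zeros((hidden_size, 1))                        # hidden state

    def step(self, command):
        """One timestep: update hidden state, predict achieved angle."""
        x = np.array([[command]])
        self.h = np.tanh(self.W_in @ x + self.W_h @ self.h)
        return (self.W_out @ self.h).item()

model = SteeringRNN()
# Roll the model over a sequence of steering commands (radians); a trained
# version of such a model could supply feedforward commands to a controller.
predictions = [model.step(c) for c in [0.0, 0.1, 0.2, 0.2, 0.1]]
```

    Because the hidden state carries history across timesteps, a cell like this can represent actuator lag and other dynamics that are hard to specify from first principles.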

    PREDICTION ON TOP-DOWN SCENES BASED ON ACTION DATA

    Publication Number: US20210004611A1

    Publication Date: 2021-01-07

    Application Number: US16504147

    Filing Date: 2019-07-05

    Applicant: Zoox, Inc.

    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
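    A toy sketch of the multi-channel top-down encoding this abstract describes: one channel for object occupancy, one for velocity, and one for the ego vehicle's action data (target lane). The channel layout, grid size, and normalization constant are illustrative assumptions.

```python
import numpy as np

# Illustrative 3-channel, 64x64 top-down grid of the environment.
H, W = 64, 64
CH_OCCUPANCY, CH_VELOCITY, CH_TARGET_LANE = 0, 1, 2
image = np.zeros((3, H, W), dtype=np.float32)

def encode_object(img, row, col, extent, speed):
    """Mark an object's footprint and normalized speed on the grid."""
    r0, r1 = row - extent, row + extent + 1
    c0, c1 = col - extent, col + extent + 1
    img[CH_OCCUPANCY, r0:r1, c0:c1] = 1.0
    img[CH_VELOCITY, r0:r1, c0:c1] = speed / 30.0  # normalize by ~30 m/s

def encode_action(img, lane_cols):
    """Mark the ego vehicle's target lane as an action-data channel."""
    img[CH_TARGET_LANE, :, lane_cols[0]:lane_cols[1]] = 1.0

# A vehicle-sized object moving at 15 m/s, and a target-lane action.
encode_object(image, row=20, col=32, extent=2, speed=15.0)
encode_action(image, lane_cols=(30, 34))
```

    A prediction network would consume a stack of such images over time; conditioning on the action channel is what lets the output probabilities reflect the ego vehicle's intended behavior.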

    Graph neural networks with vectorized object representations in autonomous vehicle systems

    Publication Number: US12233901B2

    Publication Date: 2025-02-25

    Application Number: US17187170

    Filing Date: 2021-02-26

    Applicant: Zoox, Inc.

    Abstract: Techniques are discussed herein for generating and using graph neural networks (GNNs) including vectorized representations of map elements and entities within the environment of an autonomous vehicle. Various techniques may include vectorizing map data into representations of map elements, and object data representing entities in the environment of the autonomous vehicle. In some examples, the autonomous vehicle may generate and/or use a GNN representing the environment, including nodes stored as vectorized representations of map elements and entities, and edge features including the relative position and relative yaw between the objects. Machine-learning inference operations may be executed on the GNN, and the node and edge data may be extracted and decoded to predict future states of the entities in the environment.
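    A minimal one-round message-passing sketch over a fully connected graph whose edge features are relative position and relative yaw, as the abstract describes. The node feature layout, weight shapes, and random weights are illustrative assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Node features: [x, y, yaw, speed] for 3 entities/map elements.
nodes = np.array([[0.0, 0.0, 0.0, 5.0],
                  [10.0, 2.0, 0.1, 4.0],
                  [3.0, -4.0, 1.5, 0.0]])

def edge_features(i, j):
    """Relative position and relative yaw from node i to node j."""
    dx, dy = nodes[j, 0] - nodes[i, 0], nodes[j, 1] - nodes[i, 1]
    dyaw = nodes[j, 2] - nodes[i, 2]
    return np.array([dx, dy, dyaw])

W_msg = rng.normal(0, 0.1, (4, 7))  # maps [node_j || edge_ij] -> message
W_upd = rng.normal(0, 0.1, (4, 8))  # maps [node_i || agg_message] -> update

def message_pass(nodes):
    """One round: aggregate neighbor messages, update every node."""
    n = len(nodes)
    out = np.empty_like(nodes)
    for i in range(n):
        msgs = [np.tanh(W_msg @ np.concatenate([nodes[j], edge_features(i, j)]))
                for j in range(n) if j != i]
        agg = np.mean(msgs, axis=0)
        out[i] = np.tanh(W_upd @ np.concatenate([nodes[i], agg]))
    return out

updated = message_pass(nodes)
```

    Decoding the updated node embeddings (here left as raw vectors) is the step that would yield predicted future states for each entity.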

    Prediction on top-down scenes based on action data

    Publication Number: US11631200B2

    Publication Date: 2023-04-18

    Application Number: US17325562

    Filing Date: 2021-05-20

    Applicant: Zoox, Inc.

    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.

    TOP-DOWN SCENE GENERATION
    Invention Application

    Publication Number: US20220319057A1

    Publication Date: 2022-10-06

    Application Number: US17218010

    Filing Date: 2021-03-30

    Applicant: Zoox, Inc.

    Abstract: Techniques for top-down scene generation are discussed. A generator component may receive multi-dimensional input data associated with an environment. The generator component may generate, based at least in part on the multi-dimensional input data, a generated top-down scene. A discriminator component receives the generated top-down scene and a real top-down scene. The discriminator component generates binary classification data indicating whether an individual input scene is classified as generated or as real. The binary classification data is provided as a loss to both the generator component and the discriminator component.
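    A toy sketch of the generator/discriminator loop the abstract outlines: the discriminator scores scenes as generated or real, and that binary classification signal supplies the loss for both components. The linear stand-in networks, scene size, and loss form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

W_g = rng.normal(0, 0.1, (16, 64))  # generator weights: latent -> flat 8x8 scene
W_d = rng.normal(0, 0.1, (64, 1))   # discriminator weights: scene -> logit

def generator(latent):
    """Stand-in generator: map latent noise to a flat 8x8 'scene'."""
    return np.tanh(latent @ W_g)

def discriminator(scene):
    """Stand-in discriminator: probability that a scene is real."""
    return 1.0 / (1.0 + np.exp(-(scene @ W_d)))

latent = rng.normal(size=(4, 16))           # batch of 4 latent codes
fake_scenes = generator(latent)
real_scenes = rng.uniform(-1, 1, (4, 64))   # stand-in for logged real scenes

p_fake = discriminator(fake_scenes)
p_real = discriminator(real_scenes)

# Binary cross-entropy: the discriminator wants real -> 1 and fake -> 0,
# while the generator is trained to push p_fake toward 1.
d_loss = -np.mean(np.log(p_real + 1e-8) + np.log(1 - p_fake + 1e-8))
g_loss = -np.mean(np.log(p_fake + 1e-8))
```

    In a full system these losses would drive gradient updates to each network in alternation; here they only illustrate how one classification signal serves both components.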

    PREDICTION ON TOP-DOWN SCENES BASED ON ACTION DATA

    Publication Number: US20210271901A1

    Publication Date: 2021-09-02

    Application Number: US17325562

    Filing Date: 2021-05-20

    Applicant: Zoox, Inc.

    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
