-
Publication No.: US11708093B2
Publication Date: 2023-07-25
Application No.: US16870355
Filing Date: 2020-05-08
Applicant: Zoox, Inc.
Inventor: Kenneth Michael Siebert , Gowtham Garimella , Benjamin Isaac Mattinson , Samir Parikh , Kai Zhenyu Wang
CPC classification number: B60W60/0025 , G01C21/3407 , G01C21/3453 , G01C21/3691 , G06N20/00 , G08G1/20 , B60W2556/45
Abstract: Techniques to predict object behavior in an environment are discussed herein. For example, such techniques may include determining a trajectory of an object, determining an intent of the trajectory, and sending the trajectory and the intent to a vehicle computing system to control an autonomous vehicle. The vehicle computing system may implement a machine learned model to process data such as sensor data and map data. The machine learned model can associate different intentions of an object in an environment with different trajectories. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on the objects' intentions and trajectories.
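As a rough illustration of the idea in this abstract (hypothetical intent labels, waypoints, and probabilities; not Zoox's implementation), a prediction component might pair each candidate intent with a trajectory and a probability, and hand the ranked pairs to the vehicle computing system:

```python
def rank_intent_trajectories(candidates):
    """Return candidates sorted by descending probability, so a vehicle
    computing system can consume the most likely (intent, trajectory) first.
    candidates: list of (intent, trajectory, probability) tuples, where a
    trajectory is a list of (x, y) waypoints."""
    return sorted(candidates, key=lambda c: c[2], reverse=True)

# Illustrative candidates for a pedestrian near a crosswalk.
candidates = [
    ("cross_street", [(0.0, 0.0), (1.0, 2.0)], 0.6),
    ("stay_on_sidewalk", [(0.0, 0.0), (1.0, 0.0)], 0.3),
    ("reverse", [(0.0, 0.0), (-1.0, 0.0)], 0.1),
]
best_intent, best_trajectory, best_prob = rank_intent_trajectories(candidates)[0]
```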
-
Publication No.: US20210053616A1
Publication Date: 2021-02-25
Application No.: US16949642
Filing Date: 2020-11-09
Applicant: Zoox, Inc.
Inventor: Joseph Funke , Gowtham Garimella , Marin Kobilarov , Chuang Wang
Abstract: Model-based control of dynamical systems typically requires accurate domain-specific knowledge and specifications of system components. Generally, steering actuator dynamics can be difficult to model due to, for example, an integrated power steering control module, proprietary black-box controls, etc. Further, it is difficult to capture the complex interplay of non-linear interactions, such as power steering, tire forces, etc., with sufficient accuracy. To overcome this limitation, a recurrent neural network can be employed to model the steering dynamics of an autonomous vehicle. The resulting model can be used to generate feedforward steering commands for embedded control. Such a neural network model can be automatically generated with less domain-specific knowledge, predicts steering dynamics more accurately, and performs comparably to a high-fidelity first-principles model when used for controlling the steering system of a self-driving vehicle.
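A loose sketch of the recurrent idea (toy hand-picked weights, a single scalar hidden state; not the learned model from the application): a recurrent cell carries the actuator state forward and blends in each commanded steering angle, so the predicted angle lags the command the way a real power-steering actuator does.

```python
import math

def rnn_steering_step(hidden, command, w_h=0.8, w_c=0.2):
    """One step of a minimal recurrent cell: the new hidden state blends the
    previous state with the commanded steering angle, squashed with tanh.
    The weights are illustrative, not learned."""
    return math.tanh(w_h * hidden + w_c * command)

def predict_steering(commands, hidden=0.0):
    """Unroll the cell over a sequence of steering commands and return the
    predicted actuator angle after each step."""
    angles = []
    for command in commands:
        hidden = rnn_steering_step(hidden, command)
        angles.append(hidden)
    return angles

# Holding a 0.5 rad command: the predicted angle rises toward it over time.
angles = predict_steering([0.5, 0.5, 0.5])
```

In a trained model, `w_h` and `w_c` (and a richer hidden state) would be fit to logged command/response data rather than chosen by hand.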
-
Publication No.: US20210004611A1
Publication Date: 2021-01-07
Application No.: US16504147
Filing Date: 2019-07-05
Applicant: Zoox, Inc.
Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
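A toy sketch of the multi-channel top-down encoding described here (grid size, channel assignments, and units are all illustrative assumptions, not the patented rasterization): each channel of the image stores one kind of environmental data at the cells objects occupy.

```python
def rasterize_top_down(objects, size=8, channels=2):
    """Build a toy multi-channel top-down grid. Channel 0 marks occupancy;
    channel 1 stores the object's speed at its cell. Real systems would add
    channels for lane positions, crosswalks, action data, etc.
    objects: list of (col, row, speed) tuples."""
    image = [[[0.0] * size for _ in range(size)] for _ in range(channels)]
    for col, row, speed in objects:
        image[0][row][col] = 1.0      # occupancy channel
        image[1][row][col] = speed    # velocity channel
    return image

# Two objects: one moving at 1.5 m/s, one stationary.
img = rasterize_top_down([(2, 3, 1.5), (5, 5, 0.0)])
```

A sequence of such images over time would then be stacked and fed to the prediction system.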
-
Publication No.: US12233901B2
Publication Date: 2025-02-25
Application No.: US17187170
Filing Date: 2021-02-26
Applicant: Zoox, Inc.
Inventor: Gowtham Garimella , Andres Guillermo Morales Morales
Abstract: Techniques are discussed herein for generating and using graph neural networks (GNNs) including vectorized representations of map elements and entities within the environment of an autonomous vehicle. Various techniques may include vectorizing map data into representations of map elements, and object data representing entities in the environment of the autonomous vehicle. In some examples, the autonomous vehicle may generate and/or use a GNN representing the environment, including nodes stored as vectorized representations of map elements and entities, and edge features including the relative position and relative yaw between the objects. Machine-learning inference operations may be executed on the GNN, and the node and edge data may be extracted and decoded to predict future states of the entities in the environment.
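A minimal sketch of one message-passing round over such a graph (2-D toy features, mean aggregation, yaw carried but unused; the actual GNN architecture in the application is not specified here): node features represent map elements and entities, and each edge carries the relative pose between them.

```python
def message_passing_step(nodes, edges):
    """One mean-aggregation message-passing round.
    nodes: node id -> [fx, fy] feature vector.
    edges: (src, dst) -> (dx, dy, dyaw), the relative position and yaw of
    src as seen from dst. Each destination averages its incoming messages
    (neighbor feature shifted by the relative offset)."""
    incoming = {nid: [] for nid in nodes}
    for (src, dst), (dx, dy, _dyaw) in edges.items():
        fx, fy = nodes[src]
        incoming[dst].append((fx + dx, fy + dy))
    updated = {}
    for nid, msgs in incoming.items():
        if not msgs:
            updated[nid] = nodes[nid]  # no neighbors: feature unchanged
        else:
            updated[nid] = [sum(m[0] for m in msgs) / len(msgs),
                            sum(m[1] for m in msgs) / len(msgs)]
    return updated

# Tiny graph: a car node receives messages from a pedestrian and a lane.
nodes = {"car": [1.0, 0.0], "ped": [0.0, 1.0], "lane": [0.0, 0.0]}
edges = {("ped", "car"): (2.0, 0.0, 0.0), ("lane", "car"): (0.0, 2.0, 0.0)}
new_nodes = message_passing_step(nodes, edges)
```

Decoding the updated node features would then yield predicted future states for each entity.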
-
Publication No.: US12060082B1
Publication Date: 2024-08-13
Application No.: US17121041
Filing Date: 2020-12-14
Applicant: Zoox, Inc.
Inventor: Gowtham Garimella , Jefferson Bradfield Packer , Aaron Huang
IPC: B60W60/00 , B60W30/18 , G06F18/214 , G06F18/2415 , G06N20/00 , B60W50/00
CPC classification number: B60W60/0027 , B60W30/18159 , B60W30/18163 , G06F18/214 , G06F18/2415 , G06N20/00 , B60W2050/0075 , B60W2554/80 , B60W2556/45
Abstract: Techniques are discussed for determining interaction probabilities associated with regions of an environment around a vehicle. An interaction probability of a region may indicate a likelihood that an object positioned at the region will interact with the vehicle. A top-down multi-channel image representing a top-down view of the environment and objects therein may be generated and input to a machine learned (ML) model. The ML model may output a probability map, a portion of the probability map comprising a region and an interaction probability associated with the region that indicates a likelihood that objects positioned at the region will interact with the vehicle. A priority for resource assignment or analysis may be determined based on the interaction probability for an object positioned in the region. Control of the vehicle may be performed based at least in part on the priority for resource assignment or analysis.
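A small sketch of the prioritization step (a hand-written 2x2 probability map and made-up object positions; the ML model that would produce the map is out of scope here): objects are ranked by the interaction probability of the cell they occupy, so scarce compute goes to the likeliest interactions first.

```python
def prioritize_objects(prob_map, objects):
    """Rank object ids by the interaction probability of the grid cell each
    object occupies, highest first.
    prob_map: 2-D list indexed [row][col] of interaction probabilities.
    objects: object id -> (col, row) grid position."""
    return sorted(objects,
                  key=lambda oid: prob_map[objects[oid][1]][objects[oid][0]],
                  reverse=True)

# Toy probability map output and three tracked objects.
prob_map = [[0.1, 0.9],
            [0.4, 0.2]]
objects = {"a": (0, 0), "b": (1, 0), "c": (0, 1)}
order = prioritize_objects(prob_map, objects)
```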
-
Publication No.: US11810225B2
Publication Date: 2023-11-07
Application No.: US17218010
Filing Date: 2021-03-30
Applicant: Zoox, Inc.
Inventor: Gerrit Bagschik , Andrew Scott Crego , Gowtham Garimella , Michael Haggblade , Andraz Kavalar , Kai Zhenyu Wang
Abstract: Techniques for top-down scene generation are discussed. A generator component may receive multi-dimensional input data associated with an environment. The generator component may generate, based at least in part on the multi-dimensional input data, a generated top-down scene. A discriminator component receives the generated top-down scene and a real top-down scene. The discriminator component generates binary classification data indicating whether an individual scene in the scene data is classified as generated or classified as real. The binary classification data is provided as a loss to the generator component and the discriminator component.
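The generator/discriminator loop described here is the standard adversarial setup; a scalar toy version (a 1-D "scene" and a logistic discriminator, standing in for networks over top-down scene rasters) shows how the binary classification becomes a loss for both components:

```python
import math

def discriminator(scene, w=1.0, b=0.0):
    """Logistic score: probability the 1-D 'scene' is real. A toy stand-in
    for a discriminator network over generated vs. real top-down scenes."""
    return 1.0 / (1.0 + math.exp(-(w * scene + b)))

def bce_loss(p, is_real):
    """Binary cross-entropy on the discriminator's classification."""
    return -math.log(p) if is_real else -math.log(1.0 - p)

real_scene, generated_scene = 2.0, -2.0

# Discriminator objective: call real scenes real and generated scenes fake.
d_loss = (bce_loss(discriminator(real_scene), True)
          + bce_loss(discriminator(generated_scene), False))

# Generator objective: fool the discriminator into calling its scene real.
g_loss = bce_loss(discriminator(generated_scene), True)
```

Here the discriminator separates the two scenes easily, so its loss is small and the generator's loss is large; gradient steps on `g_loss` would push generated scenes toward ones the discriminator classifies as real.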
-
Publication No.: US20230159060A1
Publication Date: 2023-05-25
Application No.: US17535418
Filing Date: 2021-11-24
Applicant: Zoox, Inc.
Inventor: Gowtham Garimella , Marin Kobilarov , Andres Guillermo Morales Morales , Ethan Miller Pronovost , Kai Zhenyu Wang , Xiaosi Zeng
CPC classification number: B60W60/0027 , G06N3/02 , B60W2556/40 , B60W2554/4041 , B60W2554/4045 , B60W2555/60 , B60W2554/4042 , B60W2554/4043 , B60W2554/4046 , B60W2554/402
Abstract: Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a distribution of predicted positions for the object in the future that meet a criterion, allowing for more efficient sampling. A predicted position of the object in the future may be determined by sampling from the distribution.
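The final step, sampling a predicted position from a filtered distribution, can be sketched with a toy discrete distribution (the mode positions, weights, and threshold are illustrative assumptions; the decoded distribution in the application could take another form):

```python
import random

def sample_predicted_position(modes, threshold=0.1, seed=0):
    """Drop low-weight modes, renormalize implicitly, and sample one
    predicted future position. Thresholding stands in for keeping only
    predictions that 'meet a criterion' before sampling.
    modes: list of ((x, y), weight) pairs."""
    kept = [(pos, w) for pos, w in modes if w >= threshold]
    total = sum(w for _, w in kept)
    rng = random.Random(seed)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for pos, w in kept:
        acc += w
        if r <= acc:
            return pos
    return kept[-1][0]

# Three candidate futures; the 0.05 mode falls below the criterion.
modes = [((1.0, 2.0), 0.7), ((3.0, 0.0), 0.25), ((9.0, 9.0), 0.05)]
pos = sample_predicted_position(modes)
```

Restricting sampling to the surviving modes is what makes the sampling step more efficient: no draws are wasted on positions that fail the criterion.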
-
Publication No.: US11631200B2
Publication Date: 2023-04-18
Application No.: US17325562
Filing Date: 2021-05-20
Applicant: Zoox, Inc.
IPC: G06T11/00 , G08G1/04 , G08G1/01 , G06T11/60 , G08G1/052 , G08G1/056 , G05B13/02 , G06V20/56 , G06F18/214 , G06V10/82
Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
-
Publication No.: US20220319057A1
Publication Date: 2022-10-06
Application No.: US17218010
Filing Date: 2021-03-30
Applicant: Zoox, Inc.
Inventor: Gerrit Bagschik , Andrew Scott Crego , Gowtham Garimella , Michael Haggblade , Andraz Kavalar , Kai Zhenyu Wang
Abstract: Techniques for top-down scene generation are discussed. A generator component may receive multi-dimensional input data associated with an environment. The generator component may generate, based at least in part on the multi-dimensional input data, a generated top-down scene. A discriminator component receives the generated top-down scene and a real top-down scene. The discriminator component generates binary classification data indicating whether an individual scene in the scene data is classified as generated or classified as real. The binary classification data is provided as a loss to the generator component and the discriminator component.
-
Publication No.: US20210271901A1
Publication Date: 2021-09-02
Application No.: US17325562
Filing Date: 2021-05-20
Applicant: Zoox, Inc.
Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.