-
Publication No.: US20240359709A1
Publication Date: 2024-10-31
Application No.: US18628703
Filing Date: 2024-04-06
Applicant: Aptiv Technologies AG
Inventor: Pascal HOEVEL , Maximilian SCHAEFER , Kun ZHAO
CPC classification number: B60W60/0027 , G06V20/58
Abstract: A method is provided for predicting trajectories of a plurality of road users. For each road user, a set of characteristics detected by a perception system of a vehicle is determined, wherein the set of characteristics includes specific characteristics associated with a predefined class of road users. The set of characteristics is transformed into a set of input data for a prediction algorithm via a processing unit of the vehicle, wherein each set of input data comprises the same predefined number of data elements. At least one respective trajectory for each of the road users is determined by applying the prediction algorithm to the input data.
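The abstract's key constraint is that every road-user class, whatever its raw characteristics, must yield the same predefined number of data elements. A minimal sketch of one way to satisfy that, with zero-padding; the field names, class list, and vector length are illustrative assumptions, not taken from the patent:

```python
# Hypothetical mapping of class-specific road-user characteristics to
# fixed-length input vectors ("same predefined number of data elements").

FEATURE_LEN = 8  # predefined number of data elements (assumed)

# Per-class characteristic orderings (illustrative only).
CLASS_FIELDS = {
    "vehicle":    ["x", "y", "vx", "vy", "yaw", "length", "width"],
    "pedestrian": ["x", "y", "vx", "vy"],
    "cyclist":    ["x", "y", "vx", "vy", "yaw"],
}

def to_input_features(cls, characteristics):
    """Pack one road user's characteristics into a fixed-length vector,
    zero-padding so every class yields FEATURE_LEN elements."""
    fields = CLASS_FIELDS[cls]
    vec = [float(characteristics.get(f, 0.0)) for f in fields]
    vec += [0.0] * (FEATURE_LEN - len(vec))  # pad to uniform length
    return vec

features = to_input_features("pedestrian", {"x": 1.0, "y": 2.0, "vx": 0.5, "vy": 0.0})
```

A uniform vector length lets a single prediction network batch vehicles, pedestrians, and cyclists together without per-class input heads.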
-
Publication No.: US20240351615A1
Publication Date: 2024-10-24
Application No.: US18752907
Filing Date: 2024-06-25
Applicant: TIANJIN UNIVERSITY
CPC classification number: B60W60/0027 , B60W50/0097 , B60W60/001 , B60W2050/0031 , B60W2050/0052 , B60W2552/10 , B60W2554/4041 , B60W2554/4046 , B60W2720/106
Abstract: Disclosed is a method for cooperative decision-making on lane-changing behavior of an autonomous vehicle based on a Bayesian game. On one hand, intelligent networked road perception and big-data analysis are utilized to infer statistical characteristics of the driving styles of a side vehicle under different time periods and traffic flow states, serving as prior predictions of the side vehicle's driving styles. On the other hand, the dynamic interaction behaviors during lane-changing processes of a specified vehicle and the side vehicle are continuously observed, and posterior corrections are made to the side vehicle's driving styles. When the specified vehicle generates a lane-changing willingness, probabilities of driving styles are iteratively predicted using Bayesian game principles. Comprehensive consideration of style and willingness probabilities yields a lane-changing probability of the specified vehicle. Once the lane-changing probability exceeds a threshold, a lane-changing activation instruction is issued.
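The prior-then-posterior step described above is a standard Bayesian update over a discrete set of driving styles. A minimal sketch under assumed numbers; the style labels, priors, and likelihoods are invented for illustration and are not specified in the patent:

```python
# Illustrative Bayesian update of a side vehicle's driving-style belief.
# Prior: from time-of-day / traffic-flow statistics (assumed values).
# Likelihood: probability of the observed interaction behavior under
# each style (also assumed).

def bayes_update(prior, likelihood):
    """posterior[s] is proportional to prior[s] * likelihood[s],
    normalized over all styles s."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

prior = {"aggressive": 0.3, "conservative": 0.7}
# Likelihood of an observed gap-closing maneuver under each style.
likelihood = {"aggressive": 0.8, "conservative": 0.2}

posterior = bayes_update(prior, likelihood)
# The observation shifts belief toward the aggressive style.
```

Iterating this update as new interaction behavior arrives is what turns the statistical prior into a continuously corrected posterior.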
-
Publication No.: US12122428B2
Publication Date: 2024-10-22
Application No.: US17078561
Filing Date: 2020-10-23
Applicant: FORD GLOBAL TECHNOLOGIES, LLC
Inventor: Basel Alghanem , Arsenii Saranin , G. Peter K. Carr
IPC: B60W60/00 , B60W30/095 , G06T7/187
CPC classification number: B60W60/0027 , B60W30/095 , G06T7/187 , B60W2420/408 , B60W2540/229
Abstract: Systems and methods for object detection. Object detection may be used to control autonomous vehicle(s). For example, the methods comprise: obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of the autonomous vehicle; and using, by the computing device, the LiDAR dataset and image(s) to detect an object that is in proximity to the autonomous vehicle. The object is detected by: computing a distribution of object detections that each point of the LiDAR dataset is likely to be in; creating a plurality of segments of LiDAR data points using the distribution of object detections; merging the plurality of segments of LiDAR data points to generate merged segments; and detecting the object in a point cloud defined by the LiDAR dataset based on the merged segments. The object detection may be used by the computing device to facilitate at least one autonomous driving operation.
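The merge step in the pipeline above can be pictured as collapsing nearby point segments into one candidate object. A rough sketch of that step only; the centroid-distance criterion and threshold are assumptions for illustration, not the patent's actual merging rule:

```python
# Sketch of merging LiDAR point segments: segments whose 2D centroids
# lie within an assumed gap threshold are greedily combined.
import math

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def merge_segments(segments, max_gap=1.5):
    """Greedily merge segments whose centroids are within max_gap meters."""
    merged = []
    for seg in segments:
        for m in merged:
            if math.dist(centroid(m), centroid(seg)) < max_gap:
                m.extend(seg)  # fold this segment into an existing one
                break
        else:
            merged.append(list(seg))
    return merged

segments = [[(0.0, 0.0), (0.2, 0.1)], [(0.5, 0.0)], [(10.0, 10.0)]]
merged = merge_segments(segments)  # the two nearby segments collapse into one
```

A real implementation would merge in 3D and use the per-point detection distribution rather than raw distance, but the collapse-nearby-fragments intuition is the same.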
-
Publication No.: US20240326873A1
Publication Date: 2024-10-03
Application No.: US18190178
Filing Date: 2023-03-27
Inventor: Rohit Gupta , Amr Abdelraouf , Kyungtae Han
CPC classification number: B60W60/0027 , B60W50/0097 , B60W50/14 , G06F16/2255 , B60W30/18163 , B60W2050/146 , B60W2540/221 , B60W2540/225 , B60W2554/4041 , B60W2554/80
Abstract: An example operation includes one or more of receiving position data of an ego vehicle and of one or more surrounding vehicles of the ego vehicle via sensors of the ego vehicle while the ego vehicle and the surrounding vehicles are travelling along a road, predicting, via a machine learning model, a trajectory of the ego vehicle and trajectories of the one or more surrounding vehicles based on the position data, generating a routing instruction for the ego vehicle based on the predicted trajectory of the ego vehicle and the predicted trajectories of the one or more surrounding vehicles, and transmitting the routing instruction to a display associated with the ego vehicle.
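The last step in the abstract, turning predicted trajectories into a routing instruction, can be sketched as a per-timestep conflict check. The gap threshold and instruction strings are hypothetical; the patent does not specify how the instruction is derived:

```python
# Hypothetical routing step: given predicted trajectories as lists of
# (x, y) points per timestep, emit an instruction for the ego vehicle
# when any surrounding vehicle's prediction comes too close.
import math

def routing_instruction(ego_traj, other_trajs, min_gap=2.0):
    """Compare per-timestep distances; warn if a predicted conflict exists."""
    for other in other_trajs:
        for (ex, ey), (ox, oy) in zip(ego_traj, other):
            if math.hypot(ex - ox, ey - oy) < min_gap:
                return "CHANGE_LANE"
    return "KEEP_LANE"

ego = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
cut_in = [(3.0, 1.0), (2.0, 0.5), (2.0, 0.0)]  # merges into ego's path
instruction = routing_instruction(ego, [cut_in])
```

The instruction string would then be rendered on the display associated with the ego vehicle, per the abstract's final step.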
-
Publication No.: US12103561B2
Publication Date: 2024-10-01
Application No.: US17966037
Filing Date: 2022-10-14
Applicant: Zoox, Inc.
Inventor: Pengfei Duan , James William Vaisey Philbin , Cooper Stokes Sloan , Sarah Tariq , Feng Tian , Chuang Wang , Kai Zhenyu Wang , Yi Xu
CPC classification number: B60W60/0025 , B60W60/0027 , G01C21/32 , G01S17/86 , G05D1/0088 , G05D1/0214 , G05D1/0223 , G05D1/0274 , B60W2420/403 , B60W2420/408 , B60W2552/05 , B60W2552/53 , B60W2555/60
Abstract: Techniques relating to monitoring map consistency are described. In an example, a monitoring component associated with a vehicle can receive sensor data associated with an environment in which the vehicle is positioned. The monitoring component can generate, based at least in part on the sensor data, an estimated map of the environment, wherein the estimated map is encoded with policy information for driving within the environment. The monitoring component can then compare first information associated with a stored map of the environment with second information associated with the estimated map to determine whether the estimated map and the stored map are consistent. Component(s) associated with the vehicle can then control the vehicle based at least in part on results of the comparing.
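The comparison step can be pictured as checking the stored map's policy attributes against the estimated map, element by element. A minimal sketch; the lane-keyed layout, attribute names, and tolerance are illustrative assumptions:

```python
# Hypothetical consistency check between a stored map and a map
# estimated from live sensor data, compared lane by lane.

def maps_consistent(stored, estimated, speed_tol=2.0):
    """Return False if any lane disagrees on driving-policy information."""
    for lane_id, s in stored.items():
        e = estimated.get(lane_id)
        if e is None:
            return False  # lane missing from the estimated map
        if s["direction"] != e["direction"]:
            return False  # driving direction disagrees
        if abs(s["speed_limit"] - e["speed_limit"]) > speed_tol:
            return False  # speed policy disagrees beyond tolerance
    return True

stored = {"lane_1": {"direction": "NORTH", "speed_limit": 15.0}}
estimated = {"lane_1": {"direction": "NORTH", "speed_limit": 14.0}}
ok = maps_consistent(stored, estimated)  # small deviation, still consistent
```

An inconsistency result would typically trigger a conservative fallback behavior rather than continued reliance on the stored map.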
-
Publication No.: US20240300540A1
Publication Date: 2024-09-12
Application No.: US18179939
Filing Date: 2023-03-07
Applicant: GM Cruise Holdings LLC
Inventor: Burkay Donderici
CPC classification number: B60W60/0027 , G01S13/862 , G01S13/865 , G01S13/867 , G01S15/86 , G01S17/86 , B60W2420/403 , B60W2420/408 , B60W2420/54 , B60W2556/35
Abstract: Systems and techniques are provided for fusing sensor data from multiple sensors. An example method can include obtaining a first set of sensor data from a first sensor and a second set of sensor data from a second sensor; detecting an object in the first set of sensor data and the object in the second set of sensor data; aligning the object in the first set of sensor data and the object in the second set of sensor data to a common time; and fusing the aligned object from the first set of sensor data with the aligned object from the second set of sensor data.
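The align-then-fuse idea can be sketched with constant-velocity extrapolation to a common timestamp followed by a naive average. The state layout and the averaging rule are assumptions; a production system would use proper motion models and covariance-weighted fusion:

```python
# Sketch: align two sensors' detections of the same object to a common
# time before fusing them, assuming constant velocity over the gap.

def align_to_time(obj, t_common):
    """Propagate an object's position to t_common at constant velocity."""
    dt = t_common - obj["t"]
    return {
        "x": obj["x"] + obj["vx"] * dt,
        "y": obj["y"] + obj["vy"] * dt,
        "t": t_common,
    }

def fuse(a, b):
    """Naive fusion: average the two time-aligned position estimates."""
    return {"x": (a["x"] + b["x"]) / 2, "y": (a["y"] + b["y"]) / 2, "t": a["t"]}

radar_obj = {"x": 0.0, "y": 0.0, "vx": 10.0, "vy": 0.0, "t": 0.00}
camera_obj = {"x": 0.6, "y": 0.0, "vx": 10.0, "vy": 0.0, "t": 0.05}
t_common = 0.10
fused = fuse(align_to_time(radar_obj, t_common), align_to_time(camera_obj, t_common))
```

Without the alignment step, averaging detections captured 50 ms apart would smear a 10 m/s object by half a meter.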
-
Publication No.: US12080078B2
Publication Date: 2024-09-03
Application No.: US17895940
Filing Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
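The chained-stages idea, one view feeding the next, can be shown schematically with plain functions standing in for the DNN stages. Everything below is a stand-in for the real networks; the height heuristic and grid grouping are invented for illustration:

```python
# Schematic of the multi-view chain: stage one "segments" points in a
# perspective view; stage two consumes those labels in a top-down view
# and groups them into object instances.

def stage1_perspective_segmentation(points):
    """Stand-in for class segmentation: label 3D points by height (z)."""
    return [("vehicle" if z < 2.0 else "pole", (x, y)) for x, y, z in points]

def stage2_topdown_instances(labeled):
    """Stand-in for instance regression: group same-class points by
    coarse top-down (bird's-eye-view) cell."""
    instances = {}
    for cls, (x, y) in labeled:
        key = (cls, round(x), round(y))  # coarse top-down cell
        instances.setdefault(key, []).append((x, y))
    return instances

points = [(1.0, 1.0, 0.5), (1.2, 1.1, 0.7), (5.0, 5.0, 3.0)]
instances = stage2_topdown_instances(stage1_perspective_segmentation(points))
```

Each instance group would then be fitted with a 2D/3D bounding box and handed, with its class label, to the drive stack.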
-
Publication No.: US12072443B2
Publication Date: 2024-08-27
Application No.: US17377053
Filing Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/00 , G06N3/045 , G06T19/00 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58 , G06V10/10
CPC classification number: G01S7/4802 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V10/25 , G06V10/26 , G06V10/454 , G06V10/764 , G06V10/774 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/58 , G06V20/584 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261 , G06V10/16
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US12065171B2
Publication Date: 2024-08-20
Application No.: US17535357
Filing Date: 2021-11-24
Applicant: Zoox, Inc.
Inventor: Gowtham Garimella , Marin Kobilarov , Andres Guillermo Morales Morales , Ethan Miller Pronovost , Kai Zhenyu Wang , Xiaosi Zeng
CPC classification number: B60W60/0027 , G05D1/0274 , G06N3/08 , G06N5/04 , B60W2552/53 , B60W2554/20 , B60W2554/402 , B60W2554/4041 , B60W2554/4042 , B60W2554/4043 , B60W2554/80 , B60W2555/20 , B60W2555/60
Abstract: Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a predicted position of the object at a subsequent timestep. Further, a predicted trajectory of the object may be determined using predicted positions of the object at various timesteps.
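The node-and-edge structure described above, object features on nodes, relative positions on edges, can be illustrated with one hand-rolled round of message passing. The update rule and scaling constant are invented for illustration; the patent's GNN is a learned network, not this toy:

```python
# Toy message passing: nodes hold object positions, edges carry the
# relative offset of one object as seen from another; each node's state
# is nudged by the mean neighbor offset.

def message_pass(nodes, edges):
    """nodes: {id: [x, y]}; edges: {(i, j): (dx, dy)} meaning node j
    sits at node_i + (dx, dy). Returns updated node states."""
    updated = {}
    for i, (x, y) in nodes.items():
        mx = my = 0.0
        deg = 0
        for (a, _b), (dx, dy) in edges.items():
            if a == i:
                mx += dx
                my += dy
                deg += 1
        # Simple aggregation: mean neighbor offset, scaled by 0.1.
        if deg:
            updated[i] = [x + 0.1 * mx / deg, y + 0.1 * my / deg]
        else:
            updated[i] = [x, y]
    return updated

nodes = {"car": [0.0, 0.0], "ped": [5.0, 0.0]}
edges = {("car", "ped"): (5.0, 0.0), ("ped", "car"): (-5.0, 0.0)}
out = message_pass(nodes, edges)
```

In the patent's framing, decoding such updated node states at successive timesteps yields the predicted positions that are strung together into a trajectory.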
-
Publication No.: US20240273919A1
Publication Date: 2024-08-15
Application No.: US18647415
Filing Date: 2024-04-26
Applicant: NVIDIA CORPORATION
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.