-
Publication No.: US20240362928A1
Publication Date: 2024-10-31
Application No.: US18766127
Application Date: 2024-07-08
Applicant: NVIDIA Corporation
Inventor: Josh Abbott , Miguel Sainz Serra , Zhaoting Ye , David Nister
CPC classification number: G06V20/588 , G06T7/12 , G06T7/70 , G06T11/20 , G06T2207/20084 , G06T2207/20132 , G06T2207/30256 , G06T2210/12
Abstract: In various examples, object fences corresponding to objects detected by an ego-vehicle may be used to determine overlap of the object fences with lanes on a driving surface. A lane mask may be generated corresponding to the lanes on the driving surface, and the object fences may be compared to the lanes of the lane mask to determine the overlap. Where an object fence is located in more than one lane, a boundary scoring approach may be used to determine a ratio of overlap of the object fence, and thus the object, with each of the lanes. The overlap with one or more lanes for each object may be used to determine lane assignments for the objects, and the lane assignments may be used by the ego-vehicle to determine a path or trajectory along the driving surface.
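As a rough, hypothetical sketch of the boundary-scoring idea described above (the function name, grid-based lane mask, and sampling scheme are all assumptions, not the patented implementation), overlap ratios could be computed by sampling points along each object fence and tallying which lane of the mask each sample lands in:

```python
# Hypothetical sketch: assign an object to lanes by sampling its fence
# boundary against a rasterized lane mask and scoring the overlap ratio.
import numpy as np

def assign_lanes(fence_points, lane_mask, samples_per_edge=10):
    """fence_points: (N, 2) array of (row, col) fence vertices in mask coordinates.
    lane_mask: 2D integer array where each cell holds a lane id (0 = no lane).
    Returns {lane id: fraction of boundary samples falling in that lane}."""
    counts, total = {}, 0
    n = len(fence_points)
    for i in range(n):
        a, b = fence_points[i], fence_points[(i + 1) % n]
        # sample evenly along the fence edge from vertex a toward vertex b
        for t in np.linspace(0.0, 1.0, samples_per_edge, endpoint=False):
            r, c = np.round(a + t * (b - a)).astype(int)
            if 0 <= r < lane_mask.shape[0] and 0 <= c < lane_mask.shape[1]:
                lane = int(lane_mask[r, c])
                if lane:  # ignore samples that fall outside every lane
                    counts[lane] = counts.get(lane, 0) + 1
            total += 1
    return {lane: k / total for lane, k in counts.items()}
```

A fence straddling two lanes might score, say, {1: 0.6, 2: 0.4}, assigning the object primarily to lane 1 while flagging its partial presence in lane 2.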
-
Publication No.: US20240336285A1
Publication Date: 2024-10-10
Application No.: US18745919
Application Date: 2024-06-17
Applicant: NVIDIA Corporation
Inventor: Julia Ng , David Nister , Zhenyi Zhang , Yizhou Wang
IPC: B60W60/00 , B60W30/095
CPC classification number: B60W60/00272 , B60W30/0953 , B60W60/0011 , B60W60/0018 , B60W2554/4041 , B60W2554/4042 , B60W2554/80
Abstract: In various examples, systems and methods are disclosed for weighting one or more candidate paths based on obstacle avoidance or other safety considerations. In some embodiments, the obstacle avoidance considerations may be computed using a comparison of trajectories representative of safety procedures at present and future projected time steps of an ego-vehicle and other actors, to ensure that each actor is capable of implementing its respective safety procedure while avoiding collisions at any point along the trajectory. This comparison may include filtering out one or more paths of an actor at one or more time steps (e.g., using a one-dimensional lookup) based on spatial relationships between the actor and the ego-vehicle at those time steps. Where a particular path, or a point along the path, does not satisfy a collision-free standard, the path may be penalized more heavily with respect to the obstacle avoidance considerations, or may be removed from consideration as a potential path.
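A minimal sketch of how such per-time-step filtering and penalizing might look, assuming straight-line distances and a simple reaction-plus-braking safety procedure (all names and parameters here are illustrative, not from the patent):

```python
# Hypothetical sketch: drop or penalize candidate ego paths whose points leave
# too little room for both actors to execute a stop-based safety procedure.
import numpy as np

def stopping_distance(speed, decel=6.0, reaction=0.5):
    # distance covered during the reaction time plus braking to a full stop
    return speed * reaction + speed ** 2 / (2.0 * decel)

def score_paths(paths, ego_speeds, actor_positions, actor_speeds, penalty=1e3):
    """paths: list of (T, 2) arrays of ego positions per time step.
    ego_speeds, actor_speeds: (T,) speeds; actor_positions: (T, 2) positions.
    Returns (scores, keep): lower score is better; infeasible paths get inf."""
    scores, keep = [], []
    for path in paths:
        cost, feasible = 0.0, True
        for t in range(len(path)):
            gap = np.linalg.norm(path[t] - actor_positions[t])
            margin = stopping_distance(ego_speeds[t]) + stopping_distance(actor_speeds[t])
            if gap < margin:  # the two actors could not both stop collision-free
                feasible = False
                break
            cost += penalty / max(gap - margin, 1e-3)  # softly penalize tight gaps
        scores.append(cost if feasible else np.inf)
        keep.append(feasible)
    return np.asarray(scores), np.asarray(keep)
```

A real planner would substitute the actual safety procedures and the one-dimensional lookup the abstract mentions; the structure (filter out infeasible paths, then penalize tight survivors) is the point here.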
-
Publication No.: US12080078B2
Publication Date: 2024-09-03
Application No.: US17895940
Application Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
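To make the two-stage, multi-view structure concrete, here is a heavily simplified PyTorch sketch; the layer sizes, the grid_sample-based view transform, and all names are assumptions standing in for a real calibrated perspective-to-top-down projection:

```python
# Hypothetical sketch of a two-stage multi-view DNN: stage one segments in the
# perspective view; its features are resampled into a top-down grid for a
# second stage that segments classes and regresses per-cell instance geometry.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 1),
        )

    def forward(self, x):
        return self.net(x)

class MultiViewDNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.perspective = Stage(3, n_classes)          # class logits, perspective view
        self.topdown_seg = Stage(n_classes, n_classes)  # class logits, top-down view
        self.topdown_geo = Stage(n_classes, 4)          # per-cell geometry (e.g., box params)

    def forward(self, image, grid):
        # grid: (B, H, W, 2) normalized coords mapping each top-down cell back
        # into the perspective image; a real system derives this from sensor
        # calibration rather than passing it in directly.
        persp = self.perspective(image)
        bev = F.grid_sample(persp, grid, align_corners=False)
        return persp, self.topdown_seg(bev), self.topdown_geo(bev)

# Example shapes only:
net = MultiViewDNN()
persp, seg, geo = net(torch.randn(1, 3, 64, 64), torch.rand(1, 32, 32, 2) * 2 - 1)
```

The same chained structure appears in the two continuation records below, which share this abstract.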
-
Publication No.: US12072443B2
Publication Date: 2024-08-27
Application No.: US17377053
Application Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/00 , G06N3/045 , G06T19/00 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58 , G06V10/10
CPC classification number: G01S7/4802 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G05D1/0088 , G06N3/045 , G06T19/006 , G06V10/25 , G06V10/26 , G06V10/454 , G06V10/764 , G06V10/774 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/58 , G06V20/584 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261 , G06V10/16
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US20240273919A1
Publication Date: 2024-08-15
Application No.: US18647415
Application Date: 2024-04-26
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , B60W60/0011 , B60W60/0016 , B60W60/0027 , G01S17/89 , G01S17/931 , G06N3/045 , G06T19/006 , G06V20/58 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication No.: US20240174219A1
Publication Date: 2024-05-30
Application No.: US18432887
Application Date: 2024-02-05
Applicant: NVIDIA Corporation
Inventor: David Nister , Hon-Leung Lee , Julia Ng , Yizhou Wang
IPC: B60W30/09 , B60W30/095
CPC classification number: B60W30/09 , B60W30/095 , B60W2520/06 , B60W2520/10 , B60W2520/14 , B60W2520/16 , B60W2520/18 , B60W2554/00
Abstract: In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory, corresponding to a first safety procedure of the vehicle, and at least one object-occupied trajectory, corresponding to a second safety procedure of an object, may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the vehicle-occupied trajectory to the object-occupied trajectory. Based on the intersection, the vehicle may then execute the first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements the second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure.
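As a toy illustration of the intersection test (the point-set representation and the collision radius are assumptions; the claimed point sets would come from the respective safety procedures):

```python
# Hypothetical sketch: find the first time step at which the ego vehicle's
# claimed trajectory points come within a collision radius of an object's.
import numpy as np

def first_intersection(ego_traj, obj_traj, radius=1.0):
    """ego_traj, obj_traj: lists of (N, 2) arrays of claimed points, one per
    time step. Returns the first colliding time step index, or None."""
    for t, (ego_pts, obj_pts) in enumerate(zip(ego_traj, obj_traj)):
        # pairwise distances between the two claimed point sets at step t
        d = np.linalg.norm(ego_pts[:, None, :] - obj_pts[None, :, :], axis=-1)
        if d.min() < radius:
            return t
    return None
```

If this returns a time step, the planner would fall back to whichever of the first safety procedure or an alternative is scored as less likely to produce a collision.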
-
Publication No.: US20240135173A1
Publication Date: 2024-04-25
Application No.: US18343291
Application Date: 2023-06-27
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Sashank Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
IPC: G06N3/08 , B60W30/14 , B60W60/00 , G06F18/214 , G06V10/762 , G06V20/56
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth-predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters, such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
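One plausible reading of the safety-bounds step, sketched here under a flat-ground assumption (the bound itself, the camera parameters, and the tolerance are all illustrative, not the patent's method):

```python
# Hypothetical sketch: clamp DNN distance predictions into a permissible band
# around a cheap geometric estimate derived from each detection's pixel row.
import numpy as np

def flat_ground_distance(rows, horizon_row, focal_px, cam_height_m):
    # distance implied by a pixel row when the object rests on flat ground
    drop = np.maximum(rows - horizon_row, 1.0)  # pixels below the horizon
    return focal_px * cam_height_m / drop

def apply_safety_bounds(pred_dist, rows, horizon_row=360.0,
                        focal_px=1000.0, cam_height_m=1.5, tol=0.3):
    geo = flat_ground_distance(np.asarray(rows, dtype=float),
                               horizon_row, focal_px, cam_height_m)
    lo, hi = geo * (1.0 - tol), geo * (1.0 + tol)  # safety-permissible range
    return np.clip(pred_dist, lo, hi)
```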
-
Publication No.: US11960026B2
Publication Date: 2024-04-16
Application No.: US17976581
Application Date: 2022-10-28
Applicant: NVIDIA Corporation
Inventor: Alexander Popov , Nikolai Smolyanskiy , Ryan Oldja , Shane Murray , Tilman Wekel , David Nister , Joachim Pehserl , Ruchi Bhargava , Sangmin Oh
CPC classification number: G01S7/417 , G01S13/865 , G01S13/89 , G06N3/04 , G06N3/08
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
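The label-propagation filter lends itself to a short sketch; the axis-aligned top-down boxes and the threshold value are assumptions for illustration:

```python
# Hypothetical sketch: keep only LIDAR-derived labels that capture at least a
# threshold number of RADAR detections once both are in a shared top-down frame.
import numpy as np

def filter_labels(labels, radar_points, min_detections=3):
    """labels: list of dicts with 'center' and 'size' as (2,) numpy arrays
    describing axis-aligned top-down boxes. radar_points: (N, 2) detections.
    Returns the labels containing at least `min_detections` RADAR points."""
    kept = []
    for box in labels:
        lo = box["center"] - box["size"] / 2.0
        hi = box["center"] + box["size"] / 2.0
        inside = np.all((radar_points >= lo) & (radar_points <= hi), axis=1)
        if inside.sum() >= min_detections:
            kept.append(box)
    return kept
```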
-
Publication No.: US20240116538A1
Publication Date: 2024-04-11
Application No.: US18545856
Application Date: 2023-12-19
Applicant: NVIDIA Corporation
Inventor: Zhenyi Zhang , Yizhou Wang , David Nister , Neda Cvijetic
IPC: B60W60/00 , B60W30/095 , B60W30/18 , B60W40/105
CPC classification number: B60W60/0011 , B60W30/0956 , B60W30/18163 , B60W40/105 , B60W2420/403 , B60W2420/408 , B60W2552/53
Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the longitudinal speed profile candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from among the candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile.
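A stripped-down sketch of scoring constant-acceleration speed-profile candidates against a target gap (the cost terms, clearances, and the assumption that the ego is already longitudinally inside the gap are all illustrative):

```python
# Hypothetical sketch: simulate constant-acceleration speed profiles and pick
# the lowest-cost one that stays clear of both vehicles bounding the target gap.
import numpy as np

def profile_positions(v0, accel, dt, horizon):
    t = np.arange(1, horizon + 1) * dt
    v = np.maximum(v0 + accel * t, 0.0)  # no reversing
    return np.cumsum(v * dt)

def select_profile(accels, v0, gap_front, gap_rear, dt=0.1, horizon=50,
                   min_clearance=5.0, comfort_weight=1.0):
    """gap_front, gap_rear: (horizon,) positions of the lead and trail vehicles
    bounding the gap, relative to the ego's start. Returns (best_accel, cost)."""
    best = (None, np.inf)
    for a in accels:
        s = profile_positions(v0, a, dt, horizon)
        if np.any(s > gap_front - min_clearance) or np.any(s < gap_rear + min_clearance):
            continue  # would tailgate the lead vehicle or crowd the trail vehicle
        cost = comfort_weight * a ** 2  # prefer gentle accelerations
        if cost < best[1]:
            best = (a, cost)
    return best

# e.g., with a 14 m/s traffic flow bounding the gap:
# steps = np.arange(1, 51) * 0.1
# select_profile(np.linspace(-2.0, 2.0, 9), v0=15.0,
#                gap_front=30.0 + 14.0 * steps, gap_rear=-20.0 + 14.0 * steps)
```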
-
Publication No.: US11928822B2
Publication Date: 2024-03-12
Application No.: US17864026
Application Date: 2022-07-13
Applicant: NVIDIA Corporation
Inventor: Trung Pham , Berta Rodriguez Hervas , Minwoo Park , David Nister , Neda Cvijetic
IPC: G06T7/11 , G05B13/02 , G06F18/21 , G06F18/24 , G06N3/04 , G06N3/08 , G06T5/00 , G06T11/20 , G06V10/26 , G06V10/34 , G06V10/44 , G06V10/82 , G06V20/56 , G06V30/19 , G06V30/262
CPC classification number: G06T7/11 , G05B13/027 , G06F18/21 , G06F18/24 , G06N3/04 , G06N3/08 , G06T3/4046 , G06T5/002 , G06T11/20 , G06V10/267 , G06V10/34 , G06V10/454 , G06V10/82 , G06V20/56 , G06V30/19173 , G06V30/274 , G06T2207/20081 , G06T2207/20084 , G06T2207/30252 , G06T2210/12
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of the vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs, such as signed distance functions, that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
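Decoding a signed distance function into instance masks is simple enough to sketch directly (the threshold and connected-component labeling are assumptions about the post-processing; the patent's decoding may differ):

```python
# Hypothetical sketch: threshold a predicted signed distance function (SDF),
# where negative values lie inside a contention area, and split the interior
# into connected instances.
import numpy as np
from scipy import ndimage

def decode_sdf(sdf, threshold=0.0):
    """sdf: (H, W) array of predicted signed distances to area boundaries.
    Returns (instance_mask, n): interior regions labeled 1..n, background 0."""
    interior = sdf < threshold                   # inside some contention area
    instance_mask, n = ndimage.label(interior)   # connected-component instances
    return instance_mask, n
```

Converting the resulting image-space masks to world-space would then use the camera model, as the abstract notes.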
-