-
Publication No.: US12013244B2
Publication Date: 2024-06-18
Application No.: US16848102
Filing Date: 2020-04-14
Applicant: NVIDIA Corporation
Inventor: Trung Pham , Hang Dou , Berta Rodriguez Hervas , Minwoo Park , Neda Cvijetic , David Nister
IPC: G05D1/00 , G01C21/26 , G06N3/04 , G06N3/08 , G06V10/44 , G06V10/46 , G06V10/764 , G06V10/82 , G06V20/56 , B60W30/18 , B60W60/00 , G06F18/2413 , G06N3/02 , G06N3/044 , G06N3/045 , G06N3/047 , G06N3/048 , G06N3/088 , G06N5/01 , G06N7/01 , G06N20/00 , G06N20/10 , G08G1/16
CPC classification number: G01C21/26 , G05D1/0083 , G05D1/0246 , G06N3/04 , G06N3/08 , G06V10/454 , G06V10/462 , G06V10/764 , G06V10/82 , G06V20/56 , G06F2218/12
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection.
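A common way to decode key-point heat maps like those described above is non-maximum suppression: keep only pixels that are local maxima above a confidence threshold. The following is a minimal illustrative sketch of that idea; the function name, the 3x3-window test, and the threshold are assumptions for illustration, not the decoding method claimed in the patent.

```python
# Decode a 2D key-point heat map into peak locations via a simple
# 3x3 local-maximum test (non-maximum suppression).
def decode_heatmap(heat, threshold=0.5):
    """Return (row, col, score) for cells that are local maxima >= threshold."""
    h, w = len(heat), len(heat[0])
    peaks = []
    for r in range(h):
        for c in range(w):
            v = heat[r][c]
            if v < threshold:
                continue
            # A peak must dominate its 3x3 neighborhood.
            is_peak = all(
                v >= heat[rr][cc]
                for rr in range(max(0, r - 1), min(h, r + 2))
                for cc in range(max(0, c - 1), min(w, c + 2))
                if (rr, cc) != (r, c)
            )
            if is_peak:
                peaks.append((r, c, v))
    return peaks

heat = [
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.2],
    [0.1, 0.2, 0.7],
]
print(decode_heatmap(heat))  # only (1, 1) survives: 0.7 loses to the 0.9 neighbor
```

In practice such peaks would then be combined with the predicted vector fields and intensity maps to reconstruct lane geometry, but that fusion step is beyond this sketch.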
-
Publication No.: US11897471B2
Publication Date: 2024-02-13
Application No.: US18162576
Filing Date: 2023-01-31
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
IPC: B60W30/18 , G06N3/08 , G08G1/01 , B60W30/095 , B60W60/00 , B60W30/09 , G06V20/56 , G06V10/25 , G06V10/764 , G06V10/80 , G06V10/82 , G06V20/70 , G06V10/75
CPC classification number: B60W30/18154 , B60W30/09 , B60W30/095 , B60W60/0011 , G06N3/08 , G06V10/25 , G06V10/751 , G06V10/764 , G06V10/803 , G06V10/82 , G06V20/56 , G06V20/588 , G06V20/70 , G08G1/0125
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
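One plausible way to post-process a distance coverage map together with a per-pixel distance map, as the abstract describes, is a coverage-weighted average: pixels that confidently belong to an intersection vote for the final distance. This is a hedged sketch of that idea; the weighting scheme and all names are assumptions, not the patent's decoding procedure.

```python
# Fuse a per-pixel distance map with its coverage map by
# coverage-weighted averaging over confidently covered pixels.
def fuse_distance(coverage, distance, min_cov=0.5):
    """Return the coverage-weighted mean distance, or None if nothing is covered."""
    num = den = 0.0
    for cov_row, dist_row in zip(coverage, distance):
        for cov, dist in zip(cov_row, dist_row):
            if cov >= min_cov:  # only confident pixels vote
                num += cov * dist
                den += cov
    return num / den if den else None

coverage = [[0.9, 0.1], [0.8, 0.2]]
distance = [[30.0, 99.0], [32.0, 80.0]]
print(fuse_distance(coverage, distance))  # weighted mean of 30.0 and 32.0
```

The low-coverage pixels (0.1 and 0.2) are excluded, so their outlier distances (99.0, 80.0) do not corrupt the estimate.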
-
Publication No.: US11884294B2
Publication Date: 2024-01-30
Application No.: US17130667
Filing Date: 2020-12-22
Applicant: NVIDIA Corporation
Inventor: Zhenyi Zhang , Yizhou Wang , David Nister , Neda Cvijetic
IPC: B60W60/00 , B60W30/18 , B60W30/095 , B60W40/105
CPC classification number: B60W60/0011 , B60W30/0956 , B60W30/18163 , B60W40/105 , B60W2420/42 , B60W2420/52 , B60W2552/53
Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the plurality of longitudinal speed profiles candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from the longitudinal speed profile candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile.
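The candidate-evaluation step above can be sketched as scoring each longitudinal speed profile against a set of criteria and taking the maximum. The criteria below (a comfort penalty on acceleration and a reward for gap margin) and their weights are invented for illustration only; the patent's actual criteria are not specified here.

```python
# Score candidate longitudinal speed profiles and select the best one.
# Each candidate carries an acceleration (comfort cost) and a gap margin.
def select_profile(candidates, weights=(1.0, 1.0)):
    """Return the candidate maximizing (margin reward - comfort penalty)."""
    w_comfort, w_margin = weights

    def score(c):
        # Larger gap margin is better; harder acceleration/braking is worse.
        return w_margin * c["gap_margin"] - w_comfort * abs(c["accel"])

    return max(candidates, key=score)

candidates = [
    {"name": "brake", "accel": -2.0, "gap_margin": 5.0},
    {"name": "hold", "accel": 0.0, "gap_margin": 4.0},
    {"name": "accelerate", "accel": 1.5, "gap_margin": 8.0},
]
print(select_profile(candidates)["name"])  # "accelerate" scores 6.5, the highest
```

A real planner would evaluate each candidate several times against multiple criteria sets, as the abstract notes, before committing to a lane change.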
-
Publication No.: US20230152801A1
Publication Date: 2023-05-18
Application No.: US18151012
Filing Date: 2023-01-06
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Xiaolin Lin , Hae-Jong Seo , David Nister , Neda Cvijetic
IPC: G05D1/00 , G06N3/04 , G06V20/56 , G06F18/214 , G06F18/23 , G06F18/2411 , G06V10/764 , G06V10/776 , G06V10/82 , G06V10/44 , G06V10/48 , G06V10/94
CPC classification number: G05D1/0077 , G05D1/0088 , G06F18/23 , G06F18/2155 , G06F18/2411 , G06N3/0418 , G06V10/48 , G06V10/82 , G06V10/457 , G06V10/764 , G06V10/776 , G06V10/955 , G06V20/588 , G05D2201/0213
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
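The embedding-based grouping step can be illustrated with a greedy clustering rule: a line pixel joins an existing cluster if its embedding is close to that cluster's mean, and otherwise starts a new cluster. The 1-D embeddings and the fixed margin below are assumptions made for a compact example; learned embeddings are typically higher-dimensional.

```python
# Group predicted line pixels into per-line clusters by greedy
# thresholding of distance in a (toy, 1-D) embedding space.
def cluster_line_pixels(pixels, embeddings, margin=0.5):
    """Assign each pixel to the first cluster within `margin` of its mean
    embedding, or start a new cluster; return lists of pixels per line."""
    clusters = []  # each: {"pixels": [...], "sum": float, "n": int}
    for px, emb in zip(pixels, embeddings):
        for cl in clusters:
            if abs(emb - cl["sum"] / cl["n"]) < margin:
                cl["pixels"].append(px)
                cl["sum"] += emb
                cl["n"] += 1
                break
        else:
            clusters.append({"pixels": [px], "sum": emb, "n": 1})
    return [cl["pixels"] for cl in clusters]

pixels = [(0, 0), (0, 1), (5, 5), (5, 6)]
embeddings = [0.1, 0.15, 2.0, 2.1]
print(cluster_line_pixels(pixels, embeddings))  # two clusters: one per line
```

Pixels with nearby embeddings (0.1, 0.15) end up on one line and the distant pair (2.0, 2.1) on another, mirroring how an embedding loss pulls same-line pixels together and pushes different lines apart.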
-
Publication No.: US20230012645A1
Publication Date: 2023-01-19
Application No.: US17952881
Filing Date: 2022-09-26
Applicant: NVIDIA Corporation
Inventor: Hae-Jong Seo , Abhishek Bajpayee , David Nister , Minwoo Park , Neda Cvijetic
Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as associated blindness classifications and/or blindness attributes associated therewith. In addition, the DNN may predict a usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
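The filtering behavior described above can be sketched as a gate on the DNN's per-frame usability score: frames below a per-operation threshold are dropped before they reach downstream perception. The frame schema, field names, and threshold here are assumptions for illustration only.

```python
# Gate sensor frames on a predicted usability score, reporting which
# frames were dropped as too compromised for the downstream operation.
def filter_frames(frames, usability_threshold=0.7):
    """Split frames into (kept, dropped_ids) by predicted usability."""
    kept, dropped = [], []
    for frame in frames:
        if frame["usability"] >= usability_threshold:
            kept.append(frame)
        else:
            dropped.append(frame["id"])
    return kept, dropped

frames = [
    {"id": 0, "usability": 0.95, "blind_regions": []},
    {"id": 1, "usability": 0.30, "blind_regions": [("glare", (0, 0, 64, 64))]},
    {"id": 2, "usability": 0.80, "blind_regions": []},
]
kept, dropped = filter_frames(frames)
print([f["id"] for f in kept], dropped)  # frame 1 is dropped for glare
```

Per the abstract, a finer-grained variant would instead mask out only the predicted blindness regions (e.g., the glare box on frame 1) rather than discarding the whole frame.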
-
Publication No.: US11436837B2
Publication Date: 2022-09-06
Application No.: US16911007
Filing Date: 2020-06-24
Applicant: NVIDIA Corporation
Inventor: Trung Pham , Berta Rodriguez Hervas , Minwoo Park , David Nister , Neda Cvijetic
IPC: G06V20/56 , G06N3/04 , G06T5/00 , G06N3/08 , G05B13/02 , G06T3/40 , G06T7/11 , G06T11/20 , G06K9/62 , G06V30/262
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
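Decoding a signed distance function into a segmentation mask commonly reduces to thresholding at the zero level set. The sketch below assumes the convention that the SDF is negative inside a contention area and positive outside; that convention, like the function name, is an assumption, not something specified in the abstract.

```python
# Threshold a signed distance function at its zero level set to get a
# binary mask (1 = inside the contention area, 0 = outside).
def sdf_to_mask(sdf):
    """Cells with non-positive signed distance are inside the region."""
    return [[1 if v <= 0.0 else 0 for v in row] for row in sdf]

sdf = [
    [2.0, 1.0, 2.0],
    [1.0, -1.0, 0.0],
    [2.0, -0.5, 1.0],
]
print(sdf_to_mask(sdf))
```

A full pipeline would then separate connected components into instances and reproject the image-space masks into world-space coordinates, as the abstract describes.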
-
Publication No.: US20210063578A1
Publication Date: 2021-03-04
Application No.: US17005788
Filing Date: 2020-08-28
Applicant: NVIDIA Corporation
Inventor: Tilman Wekel , Sangmin Oh , David Nister , Joachim Pehserl , Neda Cvijetic , Ibrahim Eden
IPC: G01S17/894 , G01S17/931 , G01S7/481
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
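The camera-to-LiDAR label propagation step can be illustrated by projecting each LiDAR point into the camera image and copying the image-space class label at that pixel. The pinhole projection, intrinsics, and label layout below are assumptions chosen for a self-contained example, not NVIDIA's calibration or data format.

```python
# Propagate image-space class labels to LiDAR points by projecting each
# point into the camera with a toy pinhole model (z forward, in meters).
def propagate_labels(points, image_labels, fx=1.0, fy=1.0, cx=1.0, cy=1.0):
    """Return one label per point, or None if it projects outside the image
    or lies behind the camera."""
    h, w = len(image_labels), len(image_labels[0])
    labels = []
    for x, y, z in points:
        if z <= 0:  # behind the camera: no valid projection
            labels.append(None)
            continue
        u = int(fx * x / z + cx)  # pinhole projection to pixel column
        v = int(fy * y / z + cy)  # pinhole projection to pixel row
        labels.append(image_labels[v][u] if 0 <= u < w and 0 <= v < h else None)
    return labels

image_labels = [
    ["road", "road", "road"],
    ["road", "car", "road"],
    ["road", "road", "sky"],
]
points = [(0.0, 0.0, 5.0), (1.0, 1.0, 1.0), (0.0, 0.0, -1.0)]
print(propagate_labels(points, image_labels))  # ['car', 'sky', None]
```

This mirrors the abstract's idea: annotations made once in the image domain become ground truth for the LiDAR range image without manual LiDAR-domain labeling.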
-
Publication No.: US20200341466A1
Publication Date: 2020-10-29
Application No.: US16848102
Filing Date: 2020-04-14
Applicant: NVIDIA Corporation
Inventor: Trung Pham , Hang Dou , Berta Rodriguez Hervas , Minwoo Park , Neda Cvijetic , David Nister
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection.