-
31.
Publication No.: US10997435B2
Publication Date: 2021-05-04
Application No.: US16535440
Filing Date: 2019-08-08
Applicant: NVIDIA Corporation
Inventor: Josh Abbott , Miguel Sainz Serra , Zhaoting Ye , David Nister
Abstract: In various examples, object fences corresponding to objects detected by an ego-vehicle may be used to determine overlap of the object fences with lanes on a driving surface. A lane mask may be generated corresponding to the lanes on the driving surface, and the object fences may be compared to the lanes of the lane mask to determine the overlap. Where an object fence is located in more than one lane, a boundary scoring approach may be used to determine a ratio of overlap of the boundary fence, and thus the object, with each of the lanes. The overlap with one or more lanes for each object may be used to determine lane assignments for the objects, and the lane assignments may be used by the ego-vehicle to determine a path or trajectory along the driving surface.
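A minimal sketch of the overlap-scoring idea described above, assuming a rasterized representation: the object fence is a boolean occupancy grid, the lane mask is an integer grid of lane IDs, and the per-lane overlap ratio drives the lane assignment. All names, the grid representation, and the threshold are illustrative, not the patented method.

```python
import numpy as np

def assign_lanes(fence_mask: np.ndarray, lane_mask: np.ndarray, min_ratio: float = 0.2):
    """fence_mask: boolean HxW grid where the object fence occupies cells.
    lane_mask:  integer HxW grid of lane IDs (0 = no lane).
    Returns {lane_id: overlap_ratio} for lanes above min_ratio."""
    fence_cells = fence_mask.sum()
    if fence_cells == 0:
        return {}
    overlaps = {}
    for lane_id in np.unique(lane_mask[lane_mask > 0]):
        ratio = np.logical_and(fence_mask, lane_mask == lane_id).sum() / fence_cells
        if ratio >= min_ratio:
            overlaps[int(lane_id)] = float(ratio)
    return overlaps

# Example: a fence straddling lanes 1 and 2 yields two ratios summing to <= 1.0,
# which a planner could use to treat the object as occupying both lanes.
```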
-
32.
Publication No.: US20200293796A1
Publication Date: 2020-09-17
Application No.: US16814351
Filing Date: 2020-03-10
Applicant: NVIDIA Corporation
Inventor: Sayed Mehdi Sajjadi Mohammadabadi , Berta Rodriguez Hervas , Hang Dou , Igor Tryndin , David Nister , Minwoo Park , Neda Cvijetic , Junghyun Kwon , Trung Pham
Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
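As a rough illustration of the decoding step mentioned above (not the patented procedure), the sketch below thresholds an intersection coverage map and reads the regressed bounding-box coordinates and distances at the surviving cells. Tensor layouts, field names, and the threshold are assumptions.

```python
import numpy as np

def decode_intersections(coverage: np.ndarray, bboxes: np.ndarray,
                         distances: np.ndarray, threshold: float = 0.5):
    """coverage: HxW confidence map, bboxes: HxWx4 (x1, y1, x2, y2),
    distances: HxW metric distance map. Returns a list of raw detections."""
    detections = []
    ys, xs = np.where(coverage > threshold)
    for y, x in zip(ys, xs):
        detections.append({
            "score": float(coverage[y, x]),
            "box": bboxes[y, x].tolist(),
            "distance_m": float(distances[y, x]),
        })
    # A real pipeline would cluster or apply non-maximum suppression here so
    # that each physical intersection is reported once.
    return detections
```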
-
33.
Publication No.: US20200249684A1
Publication Date: 2020-08-06
Application No.: US16781893
Filing Date: 2020-02-04
Applicant: NVIDIA Corporation
Inventor: Davide Marco Onofrio , Hae-Jong Seo , David Nister , Minwoo Park , Neda Cvijetic
Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
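One plausible way to metricize agreement among several path perception inputs, assuming each input is a lane-center polyline sampled at common longitudinal positions, is a mean pairwise lateral deviation, as sketched below. The representation and the acceptance bound are assumptions rather than the disclosed metric.

```python
from itertools import combinations
import numpy as np

def path_disagreement(paths: list[np.ndarray]) -> float:
    """paths: list of Nx2 arrays (x = longitudinal, y = lateral), all sampled
    at the same x positions. Returns mean pairwise lateral deviation in meters."""
    diffs = [np.abs(a[:, 1] - b[:, 1]).mean() for a, b in combinations(paths, 2)]
    return float(np.mean(diffs))

# If the score stays below a small bound (e.g. 0.3 m), the ensemble output can
# be trusted; a large score signals that at least one input disagrees and the
# system should fall back to more conservative behavior.
```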
-
34.
Publication No.: US20200090322A1
Publication Date: 2020-03-19
Application No.: US16570187
Filing Date: 2019-09-13
Applicant: NVIDIA Corporation
Inventor: Hae-Jong Seo , Abhishek Bajpayee , David Nister , Minwoo Park , Neda Cvijetic
Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as associated blindness classifications and/or blindness attributes associated therewith. In addition, the DNN may predict a usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
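A hedged sketch of how outputs like those described above could be used to filter sensor data: frames whose predicted blindness coverage or usability falls outside assumed thresholds are rejected, and compromised regions are masked otherwise. Field names and thresholds are illustrative.

```python
import numpy as np

def filter_frame(image: np.ndarray, blindness_mask: np.ndarray,
                 usability: float, max_blind_fraction: float = 0.25,
                 min_usability: float = 0.5):
    """blindness_mask: boolean HxW map of compromised pixels.
    usability: scalar in [0, 1] predicted for the whole frame.
    Returns the (possibly masked) image, or None if the frame is unusable."""
    blind_fraction = blindness_mask.mean()
    if usability < min_usability or blind_fraction > max_blind_fraction:
        return None  # reject the whole instance
    masked = image.copy()
    masked[blindness_mask] = 0  # zero out only the compromised regions
    return masked
```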
-
35.
Publication No.: US20190266736A1
Publication Date: 2019-08-29
Application No.: US16051263
Filing Date: 2018-07-31
Applicant: Nvidia Corporation
Inventor: Ishwar Kulkarni , Ibrahim Eden , Michael Kroepfl , David Nister
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for the 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. The processing improvements also include representing the point cloud data by a two-dimensional depth map or a texture map and using the depth map or texture map to provide object motion, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
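The spherical range-image projection below is one common way to build the kind of two-dimensional depth map the abstract refers to; the grid resolution and field-of-view values are assumptions, not parameters from the disclosure.

```python
import numpy as np

def pointcloud_to_depth_map(points: np.ndarray, h: int = 64, w: int = 1024,
                            fov_up: float = 15.0, fov_down: float = -25.0):
    """points: Nx3 array of (x, y, z) in the sensor frame.
    Returns an h x w depth map holding the range of the nearest point per cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-9
    azimuth = np.arctan2(y, x)                       # [-pi, pi]
    elevation = np.arcsin(z / r)                     # radians
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((fov_up - elevation) / (fov_up - fov_down) * h).astype(int).clip(0, h - 1)
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v, u), r)                  # keep the nearest return per cell
    return depth
```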
-
36.
Publication No.: US20240410981A1
Publication Date: 2024-12-12
Application No.: US18810728
Filing Date: 2024-08-21
Applicant: NVIDIA CORPORATION
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/81 , G06N3/045 , G06T19/00 , G06V10/10 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
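The skeleton below illustrates the general shape of such a chained, multi-view pipeline (it is not the disclosed network): a perspective-view segmentation stage, a view transform, and a top-down stage that adds regression channels. Layer sizes and the transform are placeholders.

```python
import torch
import torch.nn as nn

class TwoStagePerception(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.perspective_stage = nn.Sequential(        # stage 1: perspective view
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1))
        self.topdown_stage = nn.Sequential(            # stage 2: top-down view
            nn.Conv2d(num_classes, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes + 4, 1))         # class logits + box regression

    def forward(self, image, view_transform):
        persp_logits = self.perspective_stage(image)
        topdown_feats = view_transform(persp_logits)   # e.g. projection into a top-down grid
        return self.topdown_stage(topdown_feats)

# Usage sketch: out = TwoStagePerception()(img, lambda t: t)  # identity transform as a stand-in
```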
-
37.
Publication No.: US12159417B2
Publication Date: 2024-12-03
Application No.: US17678835
Filing Date: 2022-02-23
Applicant: NVIDIA Corporation
Inventor: David Nister , Soohwan Kim , Yue Wu , Minwoo Park , Cheng-Chieh Yang
IPC: G06T7/60 , G06T7/215 , G06V10/422
Abstract: In various examples, an ego-machine may analyze sensor data to identify and track features in the sensor data. Geometry of the tracked features may be used to analyze motion flow to determine whether the motion flow violates one or more geometrical constraints. As such, tracked features may be identified as dynamic features when the motion flow corresponding to the tracked features violates the one or more geometrical constraints that hold for static features. Tracked features that are determined to be dynamic features may be clustered together according to their location and feature track. Once features have been clustered together, the system may calculate a detection bounding shape for the clustered features. The bounding shape information may then be used by the ego-machine for path planning, control decisions, obstacle avoidance, and/or other operations.
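As an illustration of the constraint-checking idea, the sketch below flags features whose tracked motion violates a static-scene epipolar constraint, groups the dynamic features by image location, and computes an axis-aligned bounding box per group. The specific test, the naive clustering, and all thresholds are assumptions.

```python
import numpy as np

def epipolar_residual(p1: np.ndarray, p2: np.ndarray, F: np.ndarray) -> np.ndarray:
    """p1, p2: Nx2 matched feature positions in consecutive frames.
    F: 3x3 fundamental matrix between the frames (e.g. from ego-motion).
    Returns |x2^T F x1| per feature; large values violate the static constraint."""
    x1 = np.hstack([p1, np.ones((len(p1), 1))])
    x2 = np.hstack([p2, np.ones((len(p2), 1))])
    return np.abs(np.einsum("ni,ij,nj->n", x2, F, x1))

def dynamic_boxes(p2: np.ndarray, residuals: np.ndarray,
                  thresh: float = 1.0, radius: float = 40.0):
    dynamic = p2[residuals > thresh]
    boxes, used = [], np.zeros(len(dynamic), dtype=bool)
    for i, seed in enumerate(dynamic):                 # greedy clustering by location
        if used[i]:
            continue
        member = np.linalg.norm(dynamic - seed, axis=1) < radius
        used |= member
        cluster = dynamic[member]
        boxes.append((*cluster.min(0), *cluster.max(0)))  # (xmin, ymin, xmax, ymax)
    return boxes
```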
-
38.
Publication No.: US12093824B2
Publication Date: 2024-09-17
Application No.: US18343291
Filing Date: 2023-06-28
Applicant: NVIDIA Corporation
Inventor: Yilin Yang , Bala Siva Sashank Jujjavarapu , Pekka Janis , Zhaoting Ye , Sangmin Oh , Minwoo Park , Daniel Herrera Castro , Tommi Koivisto , David Nister
IPC: G06K9/00 , B60W30/14 , B60W60/00 , G06F18/214 , G06N3/08 , G06V10/762 , G06V20/56
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
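A small sketch of what a post-processing safety-bounds step could look like, assuming a flat-ground geometric estimate defines the permissible range for each prediction; the bound computation and margin are illustrative only.

```python
import numpy as np

def safety_bounded_distance(pred_distance: float, bbox_bottom_y: float,
                            cam_height: float, focal_y: float, cy: float,
                            margin: float = 0.25) -> float:
    """Flat-ground estimate: distance ~= cam_height * focal_y / (bbox_bottom_y - cy).
    The DNN prediction is clamped into a band around that estimate."""
    geometric = cam_height * focal_y / max(bbox_bottom_y - cy, 1e-3)
    lower, upper = geometric * (1 - margin), geometric * (1 + margin)
    return float(np.clip(pred_distance, lower, upper))

# e.g. a prediction of 80 m for an object whose geometry implies ~40 m would be
# pulled back into the permissible [30 m, 50 m] band.
```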
-
39.
Publication No.: US12073325B2
Publication Date: 2024-08-27
Application No.: US18337854
Filing Date: 2023-06-20
Applicant: NVIDIA Corporation
Inventor: Junghyun Kwon , Yilin Yang , Bala Siva Sashank Jujjavarapu , Zhaoting Ye , Sangmin Oh , Minwoo Park , David Nister
IPC: G06K9/00 , B60W30/14 , B60W60/00 , G06F18/214 , G06N3/08 , G06V10/762 , G06V20/56
CPC classification number: G06N3/08 , B60W30/14 , B60W60/0011 , G06F18/2155 , G06V10/763 , G06V20/56
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN.
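A hedged example of training with separate loss terms for different portions of the scene, assuming simple boolean masks for object pixels and the free-space boundary and an L1 penalty; the weights and the loss choice are placeholders, not the disclosed formulation.

```python
import torch
import torch.nn.functional as F

def depth_loss(pred: torch.Tensor, target: torch.Tensor,
               object_mask: torch.Tensor, boundary_mask: torch.Tensor,
               w_obj: float = 1.0, w_boundary: float = 2.0) -> torch.Tensor:
    """pred, target: Bx1xHxW depth; masks: Bx1xHxW booleans selecting the
    object/obstacle pixels and the free-space boundary pixels."""
    obj_term = F.l1_loss(pred[object_mask], target[object_mask])
    boundary_term = F.l1_loss(pred[boundary_mask], target[boundary_mask])
    return w_obj * obj_term + w_boundary * boundary_term
```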
-
40.
Publication No.: US12051332B2
Publication Date: 2024-07-30
Application No.: US17940664
Filing Date: 2022-09-08
Applicant: NVIDIA Corporation
Inventor: Davide Marco Onofrio , Hae-Jong Seo , David Nister , Minwoo Park , Neda Cvijetic
CPC classification number: G08G1/167 , G05D1/0088 , G05D1/0214 , G05D1/0219 , G05D1/0223 , G06F18/23 , G06N3/08 , G06V20/588
Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
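A tiny worked example of the independence argument in the abstract, with purely illustrative numbers: if each path perception input fails with some small probability and the failure modes are approximately independent, the probability that all of them fail at once shrinks multiplicatively.

```python
# Three near-independent inputs, each with an assumed 1% failure rate.
p_single = 0.01
p_all_fail = p_single ** 3
print(f"simultaneous failure probability: {p_all_fail:.0e}")  # 1e-06
```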
-