-
Publication No.: US12172667B2
Publication Date: 2024-12-24
Application No.: US17452747
Application Date: 2021-10-28
Applicant: NVIDIA Corporation
Inventor: Kang Wang , Yue Wu , Minwoo Park , Gang Pan
Abstract: In various examples, a 3D surface structure such as the 3D surface structure of a road (3D road surface) may be observed and estimated to generate a 3D point cloud or other representation of the 3D surface structure. Since the estimated representation may be sparse, a deep neural network (DNN) may be used to predict values for a dense representation of the 3D surface structure from the sparse representation. For example, a sparse 3D point cloud may be projected to form a sparse projection image (e.g., a sparse 2D height map), which may be fed into the DNN to predict a dense projection image (e.g., a dense 2D height map). The predicted dense representation of the 3D surface structure may be provided to an autonomous vehicle drive stack to enable safe and comfortable planning and control of the autonomous vehicle.
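For illustration only, the sparse projection step described above could look roughly like the sketch below, which bins vehicle-frame road points into a top-down 2D height grid. The function name, ranges, and cell size are assumptions for the example, not details from the patent.

```python
import numpy as np

def sparse_height_map(points_xyz, x_range=(0.0, 60.0), y_range=(-15.0, 15.0),
                      cell_size=0.25):
    """Bin sparse 3D road-surface points (vehicle frame: x forward, y left,
    z up) into a top-down 2D height map; empty cells stay NaN."""
    w = int((x_range[1] - x_range[0]) / cell_size)
    h = int((y_range[1] - y_range[0]) / cell_size)
    height_map = np.full((h, w), np.nan, dtype=np.float32)

    cols = ((points_xyz[:, 0] - x_range[0]) / cell_size).astype(int)
    rows = ((points_xyz[:, 1] - y_range[0]) / cell_size).astype(int)
    ok = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)

    for r, c, z in zip(rows[ok], cols[ok], points_xyz[ok, 2]):
        height_map[r, c] = z  # last point wins; a real pipeline might average

    return height_map, ~np.isnan(height_map)  # height grid plus validity mask
```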
-
Publication No.: US12100230B2
Publication Date: 2024-09-24
Application No.: US17452752
Application Date: 2021-10-28
Applicant: NVIDIA Corporation
Inventor: Kang Wang , Yue Wu , Minwoo Park , Gang Pan
IPC: G06V20/64 , G01S17/89 , G01S17/931 , G06F18/214 , G06V20/58 , B60G17/0165 , B60K31/00 , B60W60/00
CPC classification number: G06V20/64 , G01S17/89 , G01S17/931 , G06F18/214 , G06V20/58 , B60G17/0165 , B60K31/00 , B60W60/001 , B60W2420/408
Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated from real-world data. For example, one or more vehicles may collect image data and LiDAR data while navigating through a real-world environment. To generate input training data, 3D surface structure estimation may be performed on captured image data to generate a sparse representation of a 3D surface structure of interest (e.g., a 3D road surface). To generate corresponding ground truth training data, captured LiDAR data may be smoothed, subject to outlier removal, subject to triangulation to filling missing values, accumulated from multiple LiDAR sensors, aligned with corresponding frames of image data, and/or annotated to identify 3D points on the 3D surface of interest, and the identified 3D points may be projected to generate a dense representation of the 3D surface structure.
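As a rough illustration of the ground-truth side, the sketch below drops gross height outliers from accumulated LiDAR points and fills missing cells by Delaunay-based linear interpolation, one possible reading of the "triangulation to fill missing values" step. It assumes NumPy and SciPy and is not the patented pipeline.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_ground_truth(points_xyz, grid_x, grid_y, z_sigma=3.0):
    """Toy ground-truth densification for accumulated LiDAR road points:
    remove gross height outliers with a simple sigma test, then fill a raster
    (grid_x, grid_y are 2D meshgrid arrays) by linear interpolation over the
    Delaunay triangulation of the remaining points."""
    z = points_xyz[:, 2]
    keep = np.abs(z - np.median(z)) < z_sigma * (np.std(z) + 1e-6)
    pts = points_xyz[keep]

    dense = griddata(pts[:, :2], pts[:, 2], (grid_x, grid_y), method="linear")
    return dense  # cells outside the points' convex hull remain NaN
```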
-
Publication No.: US20240078695A1
Publication Date: 2024-03-07
Application No.: US18504916
Application Date: 2023-11-08
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Michael Grabner , Cheng-Chieh Yang
CPC classification number: G06T7/60 , G06T7/579 , G06V20/588 , G06T2200/08 , G06T2207/10028 , G06T2207/30256
Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
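A minimal sketch of the sliding-window idea, assuming a vehicle-frame point cloud with x as the longitudinal direction; the window size, step, smoothing, and bump threshold are illustrative values, not figures from the patent.

```python
import numpy as np

def surface_profile(points_xyz, window=1.0, step=0.5, x_max=50.0):
    """Slide a window along the longitudinal (x) axis and record the median
    height per window, yielding a 1D surface profile."""
    centers, heights = [], []
    for x0 in np.arange(0.0, x_max, step):
        in_win = (points_xyz[:, 0] >= x0) & (points_xyz[:, 0] < x0 + window)
        if np.any(in_win):
            centers.append(x0 + window / 2.0)
            heights.append(np.median(points_xyz[in_win, 2]))
    return np.asarray(centers), np.asarray(heights)

def detect_bumps(centers, heights, min_rise=0.05):
    """Flag profile samples rising more than min_rise meters above a smoothed
    local baseline (a crude stand-in for bump/hump detection)."""
    baseline = np.convolve(heights, np.ones(5) / 5.0, mode="same")
    return centers[heights - baseline > min_rise]
```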
-
Publication No.: US11900629B2
Publication Date: 2024-02-13
Application No.: US18174770
Application Date: 2023-02-27
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Michael Grabner , Cheng-Chieh Yang
CPC classification number: G06T7/60 , G06T7/579 , G06V20/588 , G06T2200/08 , G06T2207/10028 , G06T2207/30256
Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
-
Publication No.: US11840238B2
Publication Date: 2023-12-12
Application No.: US17456835
Application Date: 2021-11-29
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Cheng-Chieh Yang
IPC: B60W40/02 , H04N13/271 , G06V20/58 , B60W60/00 , G06T7/30 , G06T7/80 , H04N13/296 , G06T15/10 , G06T7/593
CPC classification number: B60W40/02 , B60W60/001 , G06T7/30 , G06T7/593 , G06T7/85 , G06T15/10 , G06V20/58 , H04N13/271 , H04N13/296 , B60W2420/42 , G06T2207/10012 , G06T2207/20081 , G06T2207/20084 , G06T2207/20132 , G06T2207/20228 , G06T2207/30244 , G06T2207/30261
Abstract: In various examples, systems and methods are disclosed that detect hazards on a roadway by identifying discontinuities between pixels on a depth map. For example, two synchronized stereo cameras mounted on an ego-machine may generate images that may be used to extract depth or disparity information. Because a hazard's height may cause an occlusion of the driving surface behind the hazard from the perspective of a camera(s), a discontinuity in disparity values may indicate the presence of a hazard. For example, the system may analyze pairs of pixels on the depth map and, when the system determines that a disparity between a pair of pixels satisfies a disparity threshold, the system may identify the pixel nearest the ego-machine as a hazard pixel.
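The discontinuity test can be illustrated with a small sketch that compares vertically adjacent pixels of a disparity map and flags the nearer pixel of any pair whose disparity jump exceeds a threshold; the threshold value and the choice of vertical neighbors are assumptions made for the example.

```python
import numpy as np

def hazard_pixel_mask(disparity, jump_threshold=4.0):
    """Compare vertically adjacent pixels of a disparity map and flag the
    pixel with the larger disparity (i.e., the one nearer the ego-machine)
    whenever the disparity jump between the pair exceeds jump_threshold."""
    upper = disparity[:-1, :]          # row i
    lower = disparity[1:, :]           # row i + 1 (further down the image)
    diff = lower - upper

    mask = np.zeros(disparity.shape, dtype=bool)
    big_jump = np.abs(diff) > jump_threshold
    mask[1:, :] |= big_jump & (diff > 0)    # lower pixel is the nearer one
    mask[:-1, :] |= big_jump & (diff < 0)   # upper pixel is the nearer one
    return mask
```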
-
Publication No.: US20230230273A1
Publication Date: 2023-07-20
Application No.: US18174770
Application Date: 2023-02-27
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Michael Grabner , Cheng-Chieh Yang
CPC classification number: G06T7/60 , G06T7/579 , G06V20/588 , G06T2207/10028 , G06T2207/30256 , G06T2200/08
Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
-
Publication No.: US20230135088A1
Publication Date: 2023-05-04
Application No.: US17452747
Application Date: 2021-10-28
Applicant: NVIDIA Corporation
Inventor: Kang Wang , Yue Wu , Minwoo Park , Gang Pan
Abstract: In various examples, a 3D surface structure such as the 3D surface structure of a road (3D road surface) may be observed and estimated to generate a 3D point cloud or other representation of the 3D surface structure. Since the estimated representation may be sparse, a deep neural network (DNN) may be used to predict values for a dense representation of the 3D surface structure from the sparse representation. For example, a sparse 3D point cloud may be projected to form a sparse projection image (e.g., a sparse 2D height map), which may be fed into the DNN to predict a dense projection image (e.g., a dense 2D height map). The predicted dense representation of the 3D surface structure may be provided to an autonomous vehicle drive stack to enable safe and comfortable planning and control of the autonomous vehicle.
-
Publication No.: US20230122119A1
Publication Date: 2023-04-20
Application No.: US18067176
Application Date: 2022-12-16
Applicant: NVIDIA Corporation
Inventor: Yue Wu , Pekka Janis , Xin Tong , Cheng-Chieh Yang , Minwoo Park , David Nister
Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequences of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
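One of the predicted quantities, time-to-collision (TTC), has a simple constant-velocity definition that the sketch below illustrates; this is the generic kinematic formula, not the DNN's learned estimate.

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Constant-velocity TTC: seconds until the gap closes, given the current
    distance to an object and the closing speed (ego speed minus object speed
    along the ego path). Returns None when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return None
    return distance_m / closing_speed_mps

# Example: an object 30 m ahead closing at 6 m/s gives a TTC of 5.0 s.
print(time_to_collision(30.0, 6.0))
```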
-
Publication No.: US20220301186A1
Publication Date: 2022-09-22
Application No.: US17678835
Application Date: 2022-02-23
Applicant: NVIDIA Corporation
Inventor: David Nister , Soohwan Kim , Yue Wu , Minwoo Park , Cheng-Chieh Yang
IPC: G06T7/215 , G06T7/60 , G06V10/422
Abstract: In various examples, an ego-machine may analyze sensor data to identify and track features in the sensor data. Geometry of the tracked features may be used to analyze motion flow to determine whether the motion flow violates one or more geometrical constraints. As such, tracked features may be identified as dynamic features when the motion flow corresponding to the tracked features violates the one or more static constraints for static features. Tracked features that are determined to be dynamic features may be clustered together according to their location and feature track. Once features have been clustered together, the system may calculate a detection bounding shape for the clustered features. The bounding shape information may then be used by the ego-machine for path planning, control decisions, obstacle avoidance, and/or other operations.
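One common geometrical constraint satisfied by static scene points is the epipolar constraint; the sketch below flags tracked features whose point-to-epipolar-line distance exceeds a threshold as dynamic. The fundamental matrix F, the threshold, and the use of this particular constraint are assumptions for the example, not the patent's specific formulation.

```python
import numpy as np

def dynamic_feature_mask(pts_prev, pts_curr, F, max_epi_dist=2.0):
    """Flag tracked features whose motion violates the epipolar (static-scene)
    constraint between two frames. pts_prev/pts_curr are (N, 2) pixel
    positions of the same features; F is the 3x3 fundamental matrix relating
    the two camera poses."""
    n = pts_prev.shape[0]
    x1 = np.hstack([pts_prev, np.ones((n, 1))])   # homogeneous, frame t-1
    x2 = np.hstack([pts_curr, np.ones((n, 1))])   # homogeneous, frame t

    lines = x1 @ F.T                              # epipolar lines in frame t
    num = np.abs(np.sum(lines * x2, axis=1))      # |x2^T F x1|
    denom = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-9
    dist = num / denom                            # point-to-line distance (px)
    return dist > max_epi_dist                    # True => likely dynamic
```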
-
Publication No.: US12159417B2
Publication Date: 2024-12-03
Application No.: US17678835
Application Date: 2022-02-23
Applicant: NVIDIA Corporation
Inventor: David Nister , Soohwan Kim , Yue Wu , Minwoo Park , Cheng-Chieh Yang
IPC: G06T7/60 , G06T7/215 , G06V10/422
Abstract: In various examples, an ego-machine may analyze sensor data to identify and track features in the sensor data. Geometry of the tracked features may be used to analyze motion flow to determine whether the motion flow violates one or more geometrical constraints. As such, tracked features may be identified as dynamic features when the motion flow corresponding to the tracked features violates the one or more static constraints for static features. Tracked features that are determined to be dynamic features may be clustered together according to their location and feature track. Once features have been clustered together, the system may calculate a detection bounding shape for the clustered features. The bounding shape information may then be used by the ego-machine for path planning, control decisions, obstacle avoidance, and/or other operations.