-
Publication Number: US20240161342A1
Publication Date: 2024-05-16
Application Number: US18166121
Filing Date: 2023-02-08
Applicant: NVIDIA Corporation
Inventor: Ayon Sen , Gang Pan , Cheng-Chieh Yang , Yue Wu
IPC: G06T7/80 , G01S17/86 , G01S17/89 , G01S17/931 , H04N17/00
CPC classification number: G06T7/80 , G01S17/86 , G01S17/89 , G01S17/931 , H04N17/002 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30244
Abstract: In various examples, sensor configuration for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that may use image feature correspondences between camera images along with an assumption that image features are locally planar to determine parameters for calibrating an image sensor with a LiDAR sensor and/or another image sensor. In some examples, an optimization problem is constructed that attempts to minimize a geometric loss function, where the geometric loss function encodes the notion that corresponding image features are views of the same point on a locally planar surface (e.g., a surfel or mesh) that is constructed from LiDAR data generated using a LiDAR sensor. In some examples, performing such processes to determine the calibration parameters may remove structure estimation from the optimization problem.
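The abstract above describes a geometric loss built from image feature correspondences and locally planar surfels recovered from LiDAR. The sketch below is a rough illustration only, not the patented method: for a single correspondence, the ray through a feature in camera A is intersected with a surfel, and the intersection is reprojected into camera B under candidate extrinsics; the reprojection residual is the loss term. The pinhole model, function names, and inputs are assumptions made for this sketch.

```python
import numpy as np

def ray_plane_intersection(origin, direction, surfel_point, surfel_normal):
    """Intersect a camera ray with a locally planar surfel (point + normal)."""
    t = ((surfel_point - origin) @ surfel_normal) / (direction @ surfel_normal)
    return origin + t * direction

def correspondence_loss(K, R_ab, t_ab, uv_a, uv_b, surfel_point, surfel_normal):
    """Geometric residual for one feature correspondence (uv_a in camera A,
    uv_b in camera B) under candidate extrinsics (R_ab, t_ab).

    The 3D point is never estimated directly; it is implied by the
    LiDAR-derived surfel, which is what removes structure estimation
    from the optimization.
    """
    # Back-project pixel uv_a into a viewing ray in camera A's frame.
    ray_a = np.linalg.inv(K) @ np.array([uv_a[0], uv_a[1], 1.0])
    ray_a /= np.linalg.norm(ray_a)

    # Hit the locally planar surface reconstructed from LiDAR data.
    X = ray_plane_intersection(np.zeros(3), ray_a, surfel_point, surfel_normal)

    # Reproject the surface point into camera B and compare with uv_b.
    X_b = R_ab @ X + t_ab
    uv_b_hat = (K @ X_b)[:2] / X_b[2]
    return np.sum((uv_b_hat - np.asarray(uv_b, dtype=float)) ** 2)
```

Summing residuals of this form over many correspondences and minimizing over the extrinsic parameters would give an optimization of the general shape the abstract describes.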
-
Publication Number: US11854401B2
Publication Date: 2023-12-26
Application Number: US18067176
Filing Date: 2022-12-16
Applicant: NVIDIA Corporation
Inventor: Yue Wu , Pekka Janis , Xin Tong , Cheng-Chieh Yang , Minwoo Park , David Nister
IPC: G08G1/16 , G06V10/82 , G06V20/58 , G06V20/10 , G06F18/214 , G05D1/00 , G05D1/02 , G06N3/04 , G06T7/20
CPC classification number: G08G1/166 , G05D1/0088 , G05D1/0289 , G06F18/214 , G06N3/0418 , G06T7/20 , G06V10/82 , G06V20/10 , G06V20/58 , G05D2201/0213
Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
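The abstract describes a sequential DNN, trained with cross-sensor-fused ground truth, that predicts quantities such as time-to-collision from images alone. As context only, the sketch below computes TTC from the classical scale-change relation for an object's image-space bounding box; this is not the patented network, and the function name and numbers are illustrative.

```python
def time_to_collision(width_prev: float, width_curr: float, dt: float) -> float:
    """Estimate time-to-collision (seconds) from the growth of an object's
    image width between two frames captured dt seconds apart.

    For an object closing at roughly constant speed, image width w is
    inversely proportional to range Z, so TTC = Z / (-dZ/dt) = w / (dw/dt).
    """
    dw_dt = (width_curr - width_prev) / dt
    if dw_dt <= 0.0:
        return float("inf")  # object is not closing on the camera
    return width_curr / dw_dt

# A bounding box growing from 80 px to 88 px over 0.1 s gives roughly 1.1 s.
print(time_to_collision(80.0, 88.0, 0.1))
```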
-
Publication Number: US20230351638A1
Publication Date: 2023-11-02
Application Number: US17733497
Filing Date: 2022-04-29
Applicant: NVIDIA Corporation
Inventor: Yue WU , Liwen Lin , Cheng-Chieh Yang , Gang Pan
IPC: G06T7/00 , G06V20/56 , G06V10/762 , G06V10/764 , G06V10/75 , G06V10/25
CPC classification number: G06T7/97 , G06V20/56 , G06V10/762 , G06V10/764 , G06V10/751 , G06V10/25 , G06T2207/10012 , G06T2207/20021 , G06T2207/20228 , G06T2207/30252
Abstract: In various examples, systems and methods for stereo disparity-based hazard detection for autonomous machine applications are presented. Example embodiments may assist an ego-machine in detecting hazards within its path of travel. The systems and methods may use disparity between a stereo pair of images to generate a baseline path disparity model and further identify hazards from detected disparities that deviate from that path disparity model. A disparity map for the image pair is constructed in which each pixel represents a disparity for a corresponding element of the captured image. Blockwise division may optionally be used to subdivide the disparity map into a plurality of smaller disparity maps, each corresponding to a block of pixels of the disparity map. A V-space disparity map, in which one axis corresponds to disparity values and the other axis corresponds to pixel rows, may be used to simplify estimation of the path disparity model.
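A minimal sketch of the V-disparity idea the abstract refers to, under assumed array shapes: each pixel row of the disparity map is histogrammed, the driving surface traces a dominant line in (row, disparity) space that serves as the path disparity model, and pixels whose disparity deviates strongly from that model are flagged as hazard candidates. This is not the patented pipeline; the line fit and thresholds are illustrative assumptions.

```python
import numpy as np

def v_disparity(disparity_map: np.ndarray, max_disp: int) -> np.ndarray:
    """Build the V-disparity image: one disparity histogram per pixel row."""
    rows = disparity_map.shape[0]
    v_disp = np.zeros((rows, max_disp), dtype=np.int32)
    for r in range(rows):
        d = disparity_map[r]
        d = d[(d >= 0) & (d < max_disp)]
        v_disp[r] = np.bincount(d.astype(int), minlength=max_disp)
    return v_disp

def fit_path_model(v_disp: np.ndarray) -> np.ndarray:
    """Fit a line disparity = a*row + b to the dominant bin of each row,
    approximating the path (driving surface) disparity model."""
    rows = np.arange(v_disp.shape[0])
    dominant = v_disp.argmax(axis=1)
    valid = v_disp.max(axis=1) > 0
    a, b = np.polyfit(rows[valid], dominant[valid], 1)
    return a * rows + b  # expected path disparity per row

def hazard_mask(disparity_map: np.ndarray, expected: np.ndarray,
                tol: float = 3.0) -> np.ndarray:
    """Flag pixels whose disparity deviates from the path model by > tol."""
    return np.abs(disparity_map - expected[:, None]) > tol
```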
-
Publication Number: US20200293064A1
Publication Date: 2020-09-17
Application Number: US16514404
Filing Date: 2019-07-17
Applicant: NVIDIA Corporation
Inventor: Yue Wu , Pekka Janis , Xin Tong , Cheng-Chieh Yang , Minwoo Park , David Nister
Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
-
Publication Number: US12288363B2
Publication Date: 2025-04-29
Application Number: US18166118
Filing Date: 2023-02-08
Applicant: NVIDIA Corporation
Inventor: Ayon Sen , Gang Pan , Cheng-Chieh Yang , Yue Wu
IPC: G06T7/80 , G01S17/86 , G01S17/89 , G01S17/931 , H04N17/00
Abstract: In various examples, sensor configuration for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that may use image feature correspondences between camera images along with an assumption that image features are locally planar to determine parameters for calibrating an image sensor with a LiDAR sensor and/or another image sensor. In some examples, an optimization problem is constructed that attempts to minimize a geometric loss function, where the geometric loss function encodes the notion that corresponding image features are views of the same point on a locally planar surface (e.g., a surfel or mesh) that is constructed from LiDAR data generated using a LiDAR sensor. In some examples, performing such processes to determine the calibration parameters may remove structure estimation from the optimization problem.
-
Publication Number: US20240078695A1
Publication Date: 2024-03-07
Application Number: US18504916
Filing Date: 2023-11-08
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Michael Grabner , Cheng-Chieh Yang
CPC classification number: G06T7/60 , G06T7/579 , G06V20/588 , G06T2200/08 , G06T2207/10028 , G06T2207/30256
Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height, to factor out objects or obstacles other than the driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
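A minimal sketch of the sliding-window profiling the abstract describes, under assumed inputs: a point cloud (N x 3, in metres) already filtered to drivable free-space and clipped to a height band, and a heading direction along which windows advance. Each window contributes one sample to a 1D height profile, and samples that rise sharply above a local baseline are reported as bump candidates. Function names, window size, and thresholds are illustrative assumptions, not the patented method.

```python
import numpy as np

def height_profile(points: np.ndarray, heading: np.ndarray,
                   window: float = 0.5):
    """Slide a window along the heading direction over a filtered point cloud
    and record a representative surface height per window (a 1D signal profile)."""
    heading2d = heading[:2] / np.linalg.norm(heading[:2])
    s = points[:, :2] @ heading2d            # longitudinal coordinate of each point
    edges = np.arange(s.min(), s.max() + window, window)
    centers, heights = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_win = (s >= lo) & (s < hi)
        if in_win.any():
            centers.append(0.5 * (lo + hi))
            heights.append(np.median(points[in_win, 2]))  # robust height estimate
    return np.array(centers), np.array(heights)

def detect_bumps(heights: np.ndarray, rise: float = 0.05, k: int = 5) -> np.ndarray:
    """Indices where the profile exceeds a moving-average baseline by > rise metres."""
    baseline = np.convolve(heights, np.ones(k) / k, mode="same")
    return np.flatnonzero(heights - baseline > rise)
```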
-
Publication Number: US11900629B2
Publication Date: 2024-02-13
Application Number: US18174770
Filing Date: 2023-02-27
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Michael Grabner , Cheng-Chieh Yang
CPC classification number: G06T7/60 , G06T7/579 , G06V20/588 , G06T2200/08 , G06T2207/10028 , G06T2207/30256
Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height, to factor out objects or obstacles other than the driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
-
Publication Number: US11840238B2
Publication Date: 2023-12-12
Application Number: US17456835
Filing Date: 2021-11-29
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Cheng-Chieh Yang
IPC: B60W40/02 , H04N13/271 , G06V20/58 , B60W60/00 , G06T7/30 , G06T7/80 , H04N13/296 , G06T15/10 , G06T7/593
CPC classification number: B60W40/02 , B60W60/001 , G06T7/30 , G06T7/593 , G06T7/85 , G06T15/10 , G06V20/58 , H04N13/271 , H04N13/296 , B60W2420/42 , G06T2207/10012 , G06T2207/20081 , G06T2207/20084 , G06T2207/20132 , G06T2207/20228 , G06T2207/30244 , G06T2207/30261
Abstract: In various examples, systems and methods are disclosed that detect hazards on a roadway by identifying discontinuities between pixels on a depth map. For example, two synchronized stereo cameras mounted on an ego-machine may generate images that may be used to extract depth or disparity information. Because a hazard's height may cause an occlusion of the driving surface behind the hazard from a perspective of a camera(s), a discontinuity in disparity values may indicate the presence of a hazard. For example, the system may analyze pairs of pixels on the depth map and, when the system determines that a disparity between a pair of pixels satisfies a disparity threshold, the system may identify the pixel nearest the ego-machine as a hazard pixel.
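A minimal sketch of the pixel-pair test the abstract describes, under assumed stereo conventions (disparity increases toward the camera; image rows increase downward, so the lower pixel of a vertical pair is nearer the ego-machine). Vertically adjacent pixels in the disparity map are compared, and where the jump exceeds a threshold the nearer pixel is marked as a hazard pixel. The column-wise pairing and threshold are illustrative assumptions, not the patented system.

```python
import numpy as np

def hazard_pixels(disparity: np.ndarray, jump_threshold: float = 4.0) -> np.ndarray:
    """Mark hazard pixels where vertically adjacent disparities jump sharply.

    A raised hazard occludes the road behind it, so within a column the
    disparity drops abruptly just above the hazard's top edge. Where the
    lower pixel's disparity exceeds the pixel above it by more than the
    threshold, the lower (nearer) pixel is flagged as a hazard pixel.
    """
    mask = np.zeros_like(disparity, dtype=bool)
    # Compare each pixel (row r) with the one directly above it (row r-1).
    jump = disparity[1:, :] - disparity[:-1, :]
    mask[1:, :] = jump > jump_threshold
    return mask
```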
-
Publication Number: US20230230273A1
Publication Date: 2023-07-20
Application Number: US18174770
Filing Date: 2023-02-27
Applicant: NVIDIA Corporation
Inventor: Minwoo Park , Yue Wu , Michael Grabner , Cheng-Chieh Yang
CPC classification number: G06T7/60 , G06T7/579 , G06V20/588 , G06T2207/10028 , G06T2207/30256 , G06T2200/08
Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height, to factor out objects or obstacles other than the driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
-
Publication Number: US20230122119A1
Publication Date: 2023-04-20
Application Number: US18067176
Filing Date: 2022-12-16
Applicant: NVIDIA Corporation
Inventor: Yue Wu , Pekka Janis , Xin Tong , Cheng-Chieh Yang , Minwoo Park , David Nister
Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.