-
Publication No.: US20200050208A1
Publication Date: 2020-02-13
Application No.: US16534515
Filing Date: 2019-08-07
Applicant: THE TORO COMPANY
Inventor: Alexander Steven Frick , Jason Thomas Kraft , Ryan Douglas Ingvalson , Christopher Charles Osterwood , David Arthur LaRose , Zachary Irvin Parker , Adam Richard Williams , Stephen Paul Elizondo Landers , Michael Jason Ramsay , Brian Daniel Beyer
Abstract: Autonomous machine navigation techniques may generate a three-dimensional point cloud that represents at least a work region based on feature data and matching data. Pose data associated with points of the three-dimensional point cloud may be generated that represents poses of an autonomous machine. A boundary may be determined using the pose data for subsequent navigation of the autonomous machine in the work region. Non-vision-based sensor data may be used to determine a pose. The pose may be updated based on the vision-based pose data. The autonomous machine may be navigated within the boundary of the work region based on the updated pose. The three-dimensional point cloud may be generated based on data captured during a touring phase. Boundaries may be generated based on data captured during a mapping phase.
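A minimal Python sketch of the boundary idea in this abstract: 2D positions extracted from recorded machine poses are treated as a boundary polygon, and a later pose is tested against it. The helper names (boundary_from_poses, point_in_boundary) and the ray-casting containment test are assumptions for illustration, not details taken from the patent.

```python
# Illustrative sketch only: derives a work-region boundary from recorded
# machine poses and tests whether a new pose lies inside it.
from typing import List, Tuple

Pose = Tuple[float, float]  # (x, y) position extracted from a full pose

def boundary_from_poses(recorded_poses: List[Pose]) -> List[Pose]:
    """Treat the ordered perimeter poses themselves as the boundary polygon."""
    return list(recorded_poses)

def point_in_boundary(point: Pose, boundary: List[Pose]) -> bool:
    """Ray-casting point-in-polygon test against the boundary polygon."""
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of `point`.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    perimeter = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
    boundary = boundary_from_poses(perimeter)
    print(point_in_boundary((5.0, 4.0), boundary))   # True: inside work region
    print(point_in_boundary((12.0, 4.0), boundary))  # False: outside boundary
```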
-
Publication No.: US20240248484A1
Publication Date: 2024-07-25
Application No.: US18564173
Filing Date: 2022-06-17
Applicant: THE TORO COMPANY
Inventor: Alexander Steven Frick , David Arthur LaRose , Stephen Paul Elizondo Landers , Eckhard Schwendemann
IPC: G05D1/246 , A01D34/00 , A01D101/00 , G05D1/648 , G05D105/15 , G05D111/10 , G06T7/73
CPC classification number: G05D1/2462 , A01D34/008 , G05D1/648 , G06T7/73 , A01D2101/00 , G05D2105/15 , G05D2111/10 , G06T2207/10028
Abstract: An autonomous work vehicle generates a localization image of a part of a scene surrounding the autonomous work vehicle and generates a check image of a different part of the scene. A controller of the vehicle performs a localization process that involves generating the localization image and the check image. The localization image is used to determine an estimated pose of the autonomous work vehicle within the work region via a stored 3D point cloud (3DPC). The estimated pose and the 3DPC are used to determine predicted features within the check image. The estimated pose is validated when a comparison between the predicted features and corresponding features in the check image satisfies a threshold.
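A hedged sketch of the check-image validation idea, assuming a simple pinhole camera model: 3D points are projected into the check camera using the estimated pose, and the pose is accepted if enough predicted features land near the observed ones. The helper names (project_points, validate_pose) and the inlier-ratio threshold are illustrative, not the patent's actual method.

```python
# Minimal pinhole-model sketch: project map points with an estimated pose
# and compare the predictions to detected features in the check image.
import numpy as np

def project_points(points_3d: np.ndarray, pose_R: np.ndarray,
                   pose_t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project world points into the check camera given a pose estimate."""
    cam = (pose_R @ points_3d.T + pose_t.reshape(3, 1)).T   # world -> camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]                          # perspective divide

def validate_pose(predicted_px: np.ndarray, observed_px: np.ndarray,
                  pixel_tol: float = 3.0, min_inlier_ratio: float = 0.6) -> bool:
    """Accept the pose if enough predicted features land near observed ones."""
    err = np.linalg.norm(predicted_px - observed_px, axis=1)
    return float(np.mean(err < pixel_tol)) >= min_inlier_ratio

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    R, t = np.eye(3), np.zeros(3)
    pts = np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 5.0], [0.1, -0.4, 6.0]])
    predicted = project_points(pts, R, t, K)
    observed = predicted + np.random.normal(0.0, 1.0, predicted.shape)
    print(validate_pose(predicted, observed))
```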
-
Publication No.: US20230225241A1
Publication Date: 2023-07-20
Application No.: US18011259
Filing Date: 2021-06-23
Applicant: Alexander Steven FRICK , Michael Jason RAMSAY , David Arthur LAROSE , Stephen Paul Elizondo LANDERS , Zachary Irvin PARKER , David Ian ROBINSON , Christopher Charles OSTERWOOD , THE TORO COMPANY
Inventor: Alexander Steven Frick , Michael Jason Ramsay , David Arthur LaRose , Stephen Paul Elizondo Landers , Zachary Irvin Parker , David Ian Robinson , Christopher Charles Osterwood
CPC classification number: A01D34/008 , G05D1/0253 , G06T7/215 , G06T7/246 , A01D2101/00
Abstract: Vision systems for autonomous machines and methods of using same during machine localization are provided. Exemplary systems and methods may reduce computing resources needed to perform vision-based localization by selecting the most appropriate camera from two or more cameras, and optionally selecting only a portion of the selected camera's field of view, from which to perform vision-based location correction. Other embodiments may provide camera lens coverings that maintain optical clarity while operating within debris-filled environments.
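A rough sketch of the camera-selection idea described above: score each camera by how many map features it currently matches, pick the best one, and optionally crop to a region of interest around its matches before running localization. The CameraView structure, scoring rule, and ROI padding are assumptions for illustration, not the claimed method.

```python
# Hedged sketch of picking the most useful camera (and a sub-region of its
# frame) before vision-based localization, to limit compute.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CameraView:
    name: str
    matched_feature_px: List[Tuple[int, int]]  # pixels where map features matched

def select_camera(views: List[CameraView]) -> Optional[CameraView]:
    """Pick the camera currently seeing the most matched map features."""
    views_with_matches = [v for v in views if v.matched_feature_px]
    return max(views_with_matches, key=lambda v: len(v.matched_feature_px),
               default=None)

def feature_roi(view: CameraView, pad: int = 20) -> Tuple[int, int, int, int]:
    """Bounding box (x0, y0, x1, y1) around the matches, padded, for cropping."""
    xs = [p[0] for p in view.matched_feature_px]
    ys = [p[1] for p in view.matched_feature_px]
    return min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad

if __name__ == "__main__":
    views = [CameraView("front", [(100, 120), (400, 300), (250, 200)]),
             CameraView("rear", [(50, 60)])]
    best = select_camera(views)
    print(best.name, feature_roi(best))  # front camera, ROI around its matches
```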
-
Publication No.: US20220253063A1
Publication Date: 2022-08-11
Application No.: US17732255
Filing Date: 2022-04-28
Applicant: THE TORO COMPANY
Inventor: Alexander Steven Frick , Jason Thomas Kraft , Ryan Douglas Ingvalson , Christopher Charles Osterwood , David Arthur LaRose , Zachary Irvin Parker , Adam Richard Williams , Stephen Paul Elizondo Landers , Michael Jason Ramsay , Brian Daniel Beyer
Abstract: Autonomous machine navigation techniques may generate a three-dimensional point cloud that represents at least a work region based on feature data and matching data. Pose data associated with points of the three-dimensional point cloud may be generated that represents poses of an autonomous machine. A boundary may be determined using the pose data for subsequent navigation of the autonomous machine in the work region. Non-vision-based sensor data may be used to determine a pose. The pose may be updated based on the vision-based pose data. The autonomous machine may be navigated within the boundary of the work region based on the updated pose. The three-dimensional point cloud may be generated based on data captured during a touring phase. Boundaries may be generated based on data captured during a mapping phase.
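To illustrate the pose-update step mentioned in this abstract (a non-vision pose corrected with vision-based pose data), here is a hedged complementary-filter sketch. The blending gain and helper names (predict, correct) are assumptions for illustration; the patent does not specify this particular filter.

```python
# Illustrative complementary-filter sketch: dead-reckon from non-vision
# sensors, then blend the result toward a vision-based pose estimate.
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    heading: float  # radians

def predict(pose: Pose2D, distance: float, dheading: float) -> Pose2D:
    """Dead-reckon from wheel-encoder / IMU increments (non-vision sensors)."""
    heading = pose.heading + dheading
    return Pose2D(pose.x + distance * math.cos(heading),
                  pose.y + distance * math.sin(heading),
                  heading)

def correct(pose: Pose2D, vision: Pose2D, gain: float = 0.3) -> Pose2D:
    """Blend the dead-reckoned pose toward the vision-based pose estimate."""
    dh = math.atan2(math.sin(vision.heading - pose.heading),
                    math.cos(vision.heading - pose.heading))
    return Pose2D(pose.x + gain * (vision.x - pose.x),
                  pose.y + gain * (vision.y - pose.y),
                  pose.heading + gain * dh)

if __name__ == "__main__":
    pose = predict(Pose2D(0.0, 0.0, 0.0), distance=1.0, dheading=0.05)
    pose = correct(pose, vision=Pose2D(1.02, 0.08, 0.04))
    print(pose)
```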
-
Publication No.: US20220151144A1
Publication Date: 2022-05-19
Application No.: US17439465
Filing Date: 2020-04-09
Applicant: THE TORO COMPANY
Inventor: Michael Jason Ramsay , David Arthur LaRose , Zachary Irvin Parker , Matthew John Alvarado , Stephen Paul Elizondo Landers , David Ian Robinson
Abstract: Autonomous machine (100) navigation techniques include using simulation to configure camera (133) capture parameters. A method may include capturing image data of a scene, generating irradiance image data, determining at least one test camera capture parameter, determining a simulated scene parameter, and generating at least one updated camera capture parameter. Image data for camera capture configuration may be captured while the autonomous machine is moving. Camera (133) capture parameters may be used to capture images while the autonomous machine (100) is slowed or stopped, particularly in low-light conditions.
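A rough sketch, in the spirit of this abstract, of tuning one capture parameter (exposure time) by simulating how candidate values would render an irradiance image. The linear sensor model, candidate list, and target mean brightness are assumptions for illustration, not the patented procedure.

```python
# Sketch: simulate pixel brightness for candidate exposures against an
# irradiance image and pick the exposure closest to a target mean brightness.
import numpy as np

def simulate_brightness(irradiance: np.ndarray, exposure_s: float,
                        gain: float = 1.0, full_scale: float = 255.0) -> np.ndarray:
    """Predict pixel values for a candidate exposure, assuming a linear sensor."""
    return np.clip(irradiance * exposure_s * gain, 0.0, full_scale)

def choose_exposure(irradiance: np.ndarray, candidates_s: list,
                    target_mean: float = 118.0) -> float:
    """Pick the candidate whose simulated mean brightness is closest to target."""
    means = [simulate_brightness(irradiance, e).mean() for e in candidates_s]
    return candidates_s[int(np.argmin([abs(m - target_mean) for m in means]))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    irradiance = rng.uniform(500.0, 1500.0, size=(480, 640))  # arbitrary units
    candidates = [0.01, 0.05, 0.1, 0.2]
    print("selected exposure (s):", choose_exposure(irradiance, candidates))
```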
-
Publication No.: US11334082B2
Publication Date: 2022-05-17
Application No.: US16534515
Filing Date: 2019-08-07
Applicant: THE TORO COMPANY
Inventor: Alexander Steven Frick , Jason Thomas Kraft , Ryan Douglas Ingvalson , Christopher Charles Osterwood , David Arthur LaRose , Zachary Irvin Parker , Adam Richard Williams , Stephen Paul Elizondo Landers , Michael Jason Ramsay , Brian Daniel Beyer
Abstract: Autonomous machine navigation techniques may generate a three-dimensional point cloud that represents at least a work region based on feature data and matching data. Pose data associated with points of the three-dimensional point cloud may be generated that represents poses of an autonomous machine. A boundary may be determined using the pose data for subsequent navigation of the autonomous machine in the work region. Non-vision-based sensor data may be used to determine a pose. The pose may be updated based on the vision-based pose data. The autonomous machine may be navigated within the boundary of the work region based on the updated pose. The three-dimensional point cloud may be generated based on data captured during a touring phase. Boundaries may be generated based on data captured during a mapping phase.
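As a companion to this abstract's point-cloud step (building 3D points from feature data and matching data), here is a toy two-view triangulation sketch. The camera poses, intrinsics, and DLT formulation are generic textbook choices assumed for illustration, not details taken from the patent.

```python
# Toy sketch: linear (DLT) triangulation of one matched feature pair into a
# 3D point, one ingredient of building a point cloud from feature matches.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                px1: np.ndarray, px2: np.ndarray) -> np.ndarray:
    """Triangulate a 3D point from one feature match seen in two cameras."""
    A = np.vstack([
        px1[0] * P1[2] - P1[0],
        px1[1] * P1[2] - P1[1],
        px2[0] * P2[2] - P2[0],
        px2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    # Two camera poses along a short baseline (second camera shifted 0.5 m in x).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
    true_point = np.array([1.0, 0.5, 5.0, 1.0])
    px1 = (P1 @ true_point)[:2] / (P1 @ true_point)[2]
    px2 = (P2 @ true_point)[:2] / (P2 @ true_point)[2]
    print(triangulate(P1, P2, px1, px2))  # ~[1.0, 0.5, 5.0]
```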