-
Publication Number: US11461958B2
Publication Date: 2022-10-04
Application Number: US17216719
Application Date: 2021-03-30
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Xi Luo , Mingguo Zhao , Youjun Xiong
Abstract: A scene data obtaining method, as well as a model training method and a computer readable storage medium using the same, are provided. The method includes: building a three-dimensional virtual simulation scene corresponding to an actual scene; determining a view frustum corresponding to preset view angles in the virtual simulation scene; collecting, using the view frustum, one or more two-dimensional images in the virtual simulation scene together with ground truth object data associated with those images; and using the collected two-dimensional images and the associated ground truth object data as scene data corresponding to the actual scene. In this manner, data collection requires no manual annotation, and the obtained data can be used to train deep learning-based perception models.
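The abstract's key idea is that in a simulated 3D scene, every object's position is already known, so projecting objects through a camera's view frustum yields 2D images paired with ground truth annotations for free. Below is a minimal Python sketch of that idea under assumed conventions (a pinhole camera looking along +z, a hypothetical `render` callback, and a `scene_objects` dict of known 3D positions); it is illustrative only, not the patented implementation.

```python
# Minimal sketch of frustum-based ground-truth collection; the camera
# convention, class, and function names are illustrative assumptions.
import numpy as np

class FrustumCamera:
    """Pinhole camera placed at a preset view angle in the virtual scene."""

    def __init__(self, position, rotation, fov_deg=60.0, width=640,
                 height=480, near=0.1, far=50.0):
        self.t = np.asarray(position, float)    # camera position, world frame
        self.R = np.asarray(rotation, float)    # camera-to-world rotation
        self.w, self.h = width, height
        self.near, self.far = near, far
        # Focal length in pixels, derived from the horizontal field of view.
        self.f = (width / 2) / np.tan(np.radians(fov_deg) / 2)

    def project(self, p_world):
        """Return (u, v, depth) for a world point, or None if it falls
        outside the view frustum (depth or image-bounds test fails)."""
        pc = self.R.T @ (np.asarray(p_world, float) - self.t)  # camera frame
        depth = pc[2]                           # camera looks along +z here
        if not (self.near < depth < self.far):
            return None
        u = self.f * pc[0] / depth + self.w / 2
        v = self.f * pc[1] / depth + self.h / 2
        if not (0 <= u < self.w and 0 <= v < self.h):
            return None
        return u, v, depth


def collect_scene_data(camera, render, scene_objects):
    """Render one 2D image and pair it with simulator-known ground truth.

    `render` is a hypothetical simulator callback returning the image for
    `camera`; `scene_objects` maps labels to known 3D positions, so the
    annotations come from the simulation itself, with no manual labeling.
    """
    image = render(camera)
    annotations = []
    for label, position in scene_objects.items():
        hit = camera.project(position)
        if hit is not None:
            u, v, depth = hit
            annotations.append({"label": label, "pixel": (u, v),
                                "depth": depth})
    return image, annotations
```

Sweeping `collect_scene_data` over many preset view angles would yield (image, annotation) pairs without manual labeling, which is the kind of training data the abstract describes.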
-
Publication Number: US20220139027A1
Publication Date: 2022-05-05
Application Number: US17216719
Application Date: 2021-03-30
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Xi Luo , Mingguo Zhao , Youjun Xiong
Abstract: A scene data obtaining method, as well as a model training method and a computer readable storage medium using the same, are provided. The method includes: building a three-dimensional virtual simulation scene corresponding to an actual scene; determining a view frustum corresponding to preset view angles in the virtual simulation scene; collecting, using the view frustum, one or more two-dimensional images in the virtual simulation scene together with ground truth object data associated with those images; and using the collected two-dimensional images and the associated ground truth object data as scene data corresponding to the actual scene. In this manner, data collection requires no manual annotation, and the obtained data can be used to train deep learning-based perception models.
-
Publication Number: US11067997B2
Publication Date: 2021-07-20
Application Number: US16236364
Application Date: 2018-12-29
Applicant: UBTECH Robotics Corp
Inventor: Youjun Xiong , Xi Luo , Sotirios Stasinopoulos
Abstract: The present disclosure provides a map generation method, a localization method, and a simultaneous localization and mapping method. The method includes: recognizing fiducial markers in a motion area of a robot; taking a position as the origin of a global coordinate system of the robot and obtaining pose information of the recognized fiducial markers; moving the robot to a next position, recognizing fiducial markers whose coordinate information is determined and fiducial markers whose coordinate information is undetermined, and obtaining the pose information of each undetermined marker with respect to the origin based on that of the determined markers; repeating the previous step until the pose information of all the fiducial markers is obtained; and generating a marker map associating the coordinate information of all the fiducial markers. The method can generate a map of the motion area from the fiducial markers and thereby enables autonomous localization.
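The core step the abstract describes is chaining relative poses: once one marker's global pose is known, any undetermined marker observed from the same robot position can be anchored to the origin by composing transforms. Here is a minimal sketch of that composition using 4x4 homogeneous transforms; the representation and function names are assumptions for illustration, not necessarily the patent's implementation.

```python
# Minimal sketch of the pose-chaining step between fiducial markers;
# 4x4 homogeneous transforms are an assumed representation.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R, float)
    T[:3, 3] = np.asarray(t, float)
    return T

def chain_marker_pose(T_world_known, T_cam_known, T_cam_new):
    """Global pose of a new marker, anchored via an already-known marker.

    T_world_known: pose of a marker with determined coordinates in the
                   global (origin) frame.
    T_cam_known:   that same marker's pose measured in the camera frame.
    T_cam_new:     the undetermined marker's pose in the same camera frame.
    """
    # Recover the camera's pose in the world frame from the known marker:
    T_world_cam = T_world_known @ np.linalg.inv(T_cam_known)
    # The new marker's global pose follows by composing the two transforms:
    return T_world_cam @ T_cam_new

# Usage: anchor marker 0 at the origin, then propagate to marker 1
# seen in the same camera view (all measurements here are placeholders).
T_world_m0 = np.eye(4)                                 # marker 0 is the origin
T_cam_m0 = make_transform(np.eye(3), [0.0, 0.0, 2.0])  # measured by the camera
T_cam_m1 = make_transform(np.eye(3), [1.0, 0.0, 2.5])  # measured in same view
T_world_m1 = chain_marker_pose(T_world_m0, T_cam_m0, T_cam_m1)
```

Repeating this step as the robot moves propagates coordinates from determined to undetermined markers, which matches the abstract's iteration until every marker's pose is expressed with respect to the single origin.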
-
Publication Number: US20190278288A1
Publication Date: 2019-09-12
Application Number: US16236364
Application Date: 2018-12-29
Applicant: UBTECH Robotics Corp
Inventor: Youjun Xiong , Xi Luo , Sotirios Stasinopoulos
Abstract: The present disclosure provides a map generation method, a localization method, and a simultaneous localization and mapping method. The method includes: recognizing fiducial markers in a motion area of a robot; taking a position as the origin of a global coordinate system of the robot and obtaining pose information of the recognized fiducial markers; moving the robot to a next position, recognizing fiducial markers whose coordinate information is determined and fiducial markers whose coordinate information is undetermined, and obtaining the pose information of each undetermined marker with respect to the origin based on that of the determined markers; repeating the previous step until the pose information of all the fiducial markers is obtained; and generating a marker map associating the coordinate information of all the fiducial markers. The method can generate a map of the motion area from the fiducial markers and thereby enables autonomous localization.