-
Publication No.: US20220281113A1
Publication Date: 2022-09-08
Application No.: US17192517
Filing Date: 2021-03-04
Applicant: X Development LLC
Inventor: Ammar Husain
Abstract: A method includes receiving, from a sensor on a robotic device, a captured image representative of an environment of the robotic device when the robotic device is at a location in the environment. The method also includes determining, based at least on the location of the robotic device, a rendered image representative of the environment of the robotic device. The method further includes determining, by applying at least one pre-trained machine learning model to at least the captured image and the rendered image, a property of one or more portions of the captured image.
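The claimed flow (capture an image at a location, obtain a rendered image for that same location, feed both to a pre-trained model to get a per-portion property) can be sketched as follows. This is an illustrative toy only; the function names, the lookup-based "renderer," and the threshold comparison standing in for the pre-trained model are all assumptions, not the patent's implementation.

```python
# Toy sketch of the captured-vs-rendered comparison pipeline.
# All names and the threshold heuristic are illustrative assumptions.

def render_expected_image(location, world_model):
    """Stand-in renderer: look up the expected appearance at a location."""
    return world_model.get(location, [[0] * 4 for _ in range(4)])

def classify_portions(captured, rendered, threshold=10):
    """Stand-in for the pre-trained model: label each image portion by
    whether it deviates from the rendering beyond a threshold."""
    labels = []
    for cap_row, ren_row in zip(captured, rendered):
        labels.append([
            "anomalous" if abs(c - r) > threshold else "expected"
            for c, r in zip(cap_row, ren_row)
        ])
    return labels

# Toy usage: a 4x4 "image" with one bright patch absent from the rendering.
world = {"dock": [[100] * 4 for _ in range(4)]}
captured = [[100] * 4 for _ in range(4)]
captured[1][2] = 250  # unexpected object in the captured image
labels = classify_portions(captured, render_expected_image("dock", world))
```

The point of the sketch is the data flow: the robot's location selects the rendered reference, and the model consumes both images rather than the captured image alone.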
-
Publication No.: US20230084774A1
Publication Date: 2023-03-16
Application No.: US17932271
Filing Date: 2022-09-14
Applicant: X Development LLC
Inventor: Ammar Husain, Mikael Persson
Abstract: A method includes determining, for a robotic device that comprises a perception system, a robot planner state representing at least one future path for the robotic device in an environment. The method also includes determining a perception system trajectory by inputting at least the robot planner state into a machine learning model trained based on training data comprising at least a plurality of robot planner states corresponding to a plurality of operator-directed perception system trajectories. The method further includes controlling, by the robotic device, the perception system to move through the determined perception system trajectory.
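A minimal sketch of the idea of mapping a robot planner state to a camera (perception-system) trajectory using a model trained on operator-directed examples. The nearest-neighbor lookup here is a deliberately simple stand-in for the trained machine learning model, and every name is an assumption for illustration.

```python
# Stand-in "learned" mapping from planner state to camera trajectory,
# mimicking training on operator-directed demonstrations.
# The nearest-neighbor approach and all names are illustrative assumptions.

def train_lookup(planner_states, operator_trajectories):
    """Pair each training planner state with the perception-system
    trajectory an operator directed for it."""
    return list(zip(planner_states, operator_trajectories))

def predict_trajectory(model, planner_state):
    """Return the camera trajectory whose training state is closest
    (squared Euclidean distance) to the queried planner state."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_state, best_traj = min(model, key=lambda pair: dist(pair[0], planner_state))
    return best_traj

# Toy usage: planner states are (heading, speed); trajectories are camera pan angles.
model = train_lookup(
    [(0.0, 1.0), (1.5, 0.5)],
    [[0, 10, 20], [90, 80, 70]],
)
camera_trajectory = predict_trajectory(model, (1.4, 0.6))
```

The design point is that the perception system's motion is predicted from the *planned* future path, so the camera can look where the robot is about to go rather than where it is.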
-
Publication No.: US20210316448A1
Publication Date: 2021-10-14
Application No.: US16720498
Filing Date: 2019-12-19
Applicant: X Development LLC
Inventor: Ammar Husain, Joerg Mueller
Abstract: Implementations set forth herein relate to generating training data, such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for the instance of vision data. A drivability label can be determined using first vision data from a first vision component that is connected to the robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with a robot(s) in furtherance of enabling the robot(s) to determine drivability of areas captured in vision data, which is being collected in real-time using one or more vision components.
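The label-generation scheme (derive drivability labels geometrically from one vision component, then correlate them with a second component's data to form training instances) can be sketched as below. The height-step heuristic, the threshold value, and all names are illustrative assumptions, not the patent's actual geometric method.

```python
# Hedged sketch of drivability training-data generation: labels come from
# geometric data (per-cell height readings), then are attached to camera
# patches covering the same cells. Threshold and names are assumptions.

MAX_DRIVABLE_STEP = 0.05  # assumed height-change tolerance, in meters

def drivability_labels(heights):
    """Geometric heuristic: a cell is drivable if its height differs
    from the previous cell's by less than a step threshold."""
    labels = ["drivable"]
    for prev, cur in zip(heights, heights[1:]):
        labels.append(
            "drivable" if abs(cur - prev) < MAX_DRIVABLE_STEP else "non-drivable"
        )
    return labels

def build_training_data(camera_patches, heights):
    """Correlate each camera patch with the geometrically derived label,
    yielding (vision data, drivability label) training instances."""
    return list(zip(camera_patches, drivability_labels(heights)))

# Toy usage: four floor cells; the third has a 0.3 m step (an obstacle).
patches = ["patch0", "patch1", "patch2", "patch3"]
heights = [0.00, 0.01, 0.31, 0.32]
training_data = build_training_data(patches, heights)
```

A model trained on such pairs can then predict drivability from camera data alone, which is the payoff described in the abstract.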
-
Publication No.: US20210080970A1
Publication Date: 2021-03-18
Application No.: US16580714
Filing Date: 2019-09-24
Applicant: X Development LLC
Inventor: Ammar Husain, Mikael Persson
Abstract: Implementations set forth herein relate to a robot that employs a stereo camera and LIDAR for generating point cloud data while the robot is traversing an area. The point cloud data can characterize spaces within the area as occupied, unoccupied, or uncategorized. For instance, an uncategorized space can refer to a point in three-dimensional (3D) space where occupancy of the space is unknown and/or where no observation has been made by the robot, such as in circumstances where a blind spot is located at or near a base of the robot. In order to efficiently traverse certain areas, the robot can estimate the resource costs of sweeping the stereo camera indiscriminately between spaces versus specifically focusing the stereo camera on uncategorized space(s) during the route. Based on such resource cost estimations, the robot can adaptively maneuver the stereo camera during routes while also minimizing resource consumption by the robot.
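The cost comparison at the heart of this abstract (full indiscriminate sweep versus targeted looks at uncategorized cells) can be illustrated with a toy occupancy grid. The cell codes, cost constants, and function name are assumptions for illustration, not values from the patent.

```python
# Illustrative sketch: count uncategorized cells in a toy occupancy grid
# and pick the cheaper camera strategy. Costs and names are assumptions.

OCCUPIED, FREE, UNKNOWN = "occupied", "free", "unknown"

SWEEP_COST = 10.0        # assumed fixed cost of one full camera sweep
COST_PER_TARGET = 3.0    # assumed cost of pointing at one unknown cell

def choose_camera_strategy(grid):
    """Return 'focus' when targeting each uncategorized cell individually
    is cheaper than a full indiscriminate sweep, else 'sweep'."""
    unknown_cells = [c for row in grid for c in row if c == UNKNOWN]
    focus_cost = COST_PER_TARGET * len(unknown_cells)
    return "focus" if focus_cost < SWEEP_COST else "sweep"

# Toy usage: a blind spot near the robot base leaves two cells unknown,
# so targeted focusing (2 * 3.0 = 6.0) beats a full sweep (10.0).
grid = [
    [FREE, FREE, OCCUPIED],
    [UNKNOWN, UNKNOWN, FREE],
]
strategy = choose_camera_strategy(grid)
```

When many cells are uncategorized, the targeted cost exceeds the sweep cost and the same function returns "sweep", which mirrors the adaptive choice the abstract describes.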