Abstract:
The invention relates to methods and apparatus that use a visual sensor and dead reckoning sensors to perform Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map. Unlike laser rangefinders, the visual techniques are economically practical in a wide range of applications and can be used in relatively dynamic environments, such as environments in which people move. Certain embodiments contemplate improvements to the front-end processing in a SLAM-based system. In particular, certain of these embodiments contemplate a novel landmark matching process, and certain of these embodiments also contemplate a novel landmark creation process. Certain embodiments contemplate improvements to the back-end processing in a SLAM-based system. In particular, certain of these embodiments contemplate algorithms for modifying the SLAM graph in real time to achieve a more efficient structure.
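The back-end restructuring described above can be pictured as operations on a pose graph. The following is a minimal sketch, not the patented algorithm: a graph of pose/landmark nodes and constraint edges, with a simple node-removal step that reconnects the removed node's neighbors so the structure shrinks without discarding their shared connectivity. All class and node names are illustrative assumptions.

```python
# Minimal pose-graph sketch: nodes are poses/landmarks, edges are
# constraints. remove_node() is a toy stand-in for marginalizing a
# node to keep the graph structure small.

class SlamGraph:
    def __init__(self):
        self.nodes = set()           # pose/landmark identifiers
        self.edges = set()           # frozenset({a, b}) constraints

    def add_node(self, nid):
        self.nodes.add(nid)

    def add_edge(self, a, b):
        self.edges.add(frozenset((a, b)))

    def neighbors(self, nid):
        return {x for e in self.edges if nid in e for x in e if x != nid}

    def remove_node(self, nid):
        # Reconnect the removed node's neighbors pairwise so the
        # constraints they shared through `nid` are not lost entirely.
        nbrs = sorted(self.neighbors(nid))
        self.nodes.discard(nid)
        self.edges = {e for e in self.edges if nid not in e}
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                self.add_edge(a, b)

g = SlamGraph()
for n in ("x0", "x1", "x2", "L0"):
    g.add_node(n)
g.add_edge("x0", "x1"); g.add_edge("x1", "x2"); g.add_edge("x1", "L0")
g.remove_node("x1")      # drop the intermediate pose
print(sorted(g.nodes))   # ['L0', 'x0', 'x2']
print(len(g.edges))      # 3: x0-x2, x0-L0, x2-L0
```

A real system would also combine the measurement information of the merged edges; this sketch only shows the structural change.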
Abstract:
Apparatus and methods for carpet drift estimation are disclosed. In certain implementations, a robotic device includes an actuator system to move the body across a surface. A first set of sensors can sense an actuation characteristic of the actuator system. For example, the first set of sensors can include odometry sensors for sensing wheel rotations of the actuator system. A second set of sensors can sense a motion characteristic of the body. The first set of sensors may be a different type of sensor than the second set of sensors. A controller can estimate carpet drift based at least on the actuation characteristic sensed by the first set of sensors and the motion characteristic sensed by the second set of sensors.
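One way to picture the two-sensor comparison above is to contrast the heading change predicted from wheel odometry (the actuation characteristic) with the heading change reported by a gyroscope (the motion characteristic), and attribute the persistent residual to carpet drift. The fusion rule and function names below are assumptions for illustration, not the claimed method.

```python
# Illustrative sketch: estimate a heading drift from the mismatch
# between wheel-odometry-predicted rotation and gyro-measured rotation.

def heading_from_odometry(dl, dr, wheel_base):
    """Heading change predicted from left/right wheel travel (radians)."""
    return (dr - dl) / wheel_base

def estimate_carpet_drift(odometry, gyro, wheel_base, alpha=0.2):
    """Exponential average of (measured - predicted) heading residuals."""
    drift = 0.0
    for (dl, dr), dtheta_gyro in zip(odometry, gyro):
        predicted = heading_from_odometry(dl, dr, wheel_base)
        residual = dtheta_gyro - predicted
        drift = (1 - alpha) * drift + alpha * residual
    return drift

# Straight-line commands, but the gyro reports a steady rightward turn:
odo = [(0.05, 0.05)] * 10    # equal wheel travel -> predicted zero turn
gyro = [-0.01] * 10          # measured heading change per step
print(round(estimate_carpet_drift(odo, gyro, wheel_base=0.23), 4))
```

The estimate converges toward the per-step residual (-0.01 rad here), which a controller could then feed forward into its motion commands.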
Abstract:
A method includes constructing a map of an environment based on mapping data produced by an autonomous cleaning robot in the environment during a first cleaning mission. Constructing the map includes providing a label associated with a portion of the mapping data. The method includes causing a remote computing device to present a visual representation of the environment based on the map, and a visual indicator of the label. The method includes causing the autonomous cleaning robot to initiate a behavior associated with the label during a second cleaning mission.
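The label-to-behavior association described above can be sketched as a small lookup: each label anchors a region of the map and a behavior, and on a later mission the robot queries the label covering its position. Region bounds and behavior names below are hypothetical.

```python
# Sketch: map labels tied to regions, each triggering a behavior
# when the robot enters that region on a later cleaning mission.

labels = {
    "kitchen": {"region": (0, 0, 4, 3), "behavior": "deep_clean"},
    "rug":     {"region": (5, 1, 7, 2), "behavior": "avoid"},
}

def in_region(pt, region):
    x, y = pt
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def behavior_at(pt):
    for info in labels.values():
        if in_region(pt, info["region"]):
            return info["behavior"]
    return "default_clean"

print(behavior_at((1, 1)))    # deep_clean
print(behavior_at((6, 1.5)))  # avoid
print(behavior_at((9, 9)))    # default_clean
```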
Abstract:
Described herein are systems, devices, and methods for maintaining a valid semantic map of an environment for a mobile robot. A mobile robot comprises a drive system, a sensor circuit to sense occupancy information, a memory, a controller circuit, and a communication system. The controller circuit can generate a first semantic map corresponding to a first robot mission using first occupancy information and first semantic annotations, and transfer the first semantic annotations to a second semantic map corresponding to a subsequent second robot mission. The controller circuit can generate the second semantic map to include second semantic annotations generated based on the transferred first semantic annotations. User feedback on the first or the second semantic map can be received via the communication system. The controller circuit can update the first semantic map and use it to navigate the mobile robot in a future mission.
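The annotation transfer between missions can be sketched as follows: an annotation from the first map is carried into the second map only if its anchor location is still consistent with the new occupancy information. The validity test and data layout are assumptions for illustration, not the described system.

```python
# Sketch: transfer semantic annotations to a new map, keeping only
# those whose anchor cell is still free in the new occupancy grid.

def transfer_annotations(annotations, new_grid):
    """annotations: {name: (row, col)}; new_grid[r][c]: 0 free, 1 occupied."""
    transferred = {}
    for name, (r, c) in annotations.items():
        inside = 0 <= r < len(new_grid) and 0 <= c < len(new_grid[0])
        if inside and new_grid[r][c] == 0:
            transferred[name] = (r, c)   # annotation still valid
    return transferred

first = {"dock": (0, 0), "couch": (1, 2), "table": (2, 2)}
second_grid = [
    [0, 0, 0],
    [0, 0, 1],   # cell (1, 2) is now occupied -> "couch" is dropped
    [0, 0, 0],
]
print(sorted(transfer_annotations(first, second_grid)))  # ['dock', 'table']
```

Annotations that fail the test would be candidates for the user-feedback step, rather than being silently carried forward.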
Abstract:
A system and method for mapping parameter data acquired by a robot mapping system is disclosed. Parameter data characterizing the environment is collected while the robot localizes itself within the environment using landmarks. Parameter data is recorded in a plurality of local grids, i.e., sub-maps associated with the robot's position and orientation when the data was collected. The robot is configured to generate new grids or reuse existing grids depending on the robot's current pose, the poses associated with other grids, and the uncertainty of these relative pose estimates. The pose estimates associated with the grids are updated over time as the robot refines its estimates of the locations of the landmarks from which it determines its pose in the environment. Occupancy maps or other global parameter maps may be generated by rendering the local grids into a comprehensive map that indicates the parameter data in a global reference frame spanning the dimensions of the environment.
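The create-or-reuse decision described above can be sketched as a gate on two quantities: the distance from the robot's current pose to a grid's anchor pose, and the uncertainty of that relative pose estimate. The thresholds and the scalar uncertainty model below are illustrative assumptions.

```python
# Sketch: reuse a nearby local grid when the relative-pose estimate is
# both close and confident; otherwise start a new grid.

import math

def choose_grid(robot_pose, grids, max_dist=2.0, max_sigma=0.5):
    """robot_pose: (x, y); grids: list of {'id', 'pose': (x, y), 'sigma'}."""
    best = None
    for g in grids:
        dx = robot_pose[0] - g["pose"][0]
        dy = robot_pose[1] - g["pose"][1]
        d = math.hypot(dx, dy)
        if d <= max_dist and g["sigma"] <= max_sigma:
            if best is None or d < best[0]:
                best = (d, g["id"])
    return best[1] if best else "new_grid"

grids = [
    {"id": "g0", "pose": (0.0, 0.0), "sigma": 0.1},
    {"id": "g1", "pose": (5.0, 0.0), "sigma": 0.9},  # too uncertain
]
print(choose_grid((0.5, 0.5), grids))   # g0
print(choose_grid((5.2, 0.1), grids))   # new_grid: g1 is too uncertain
```

Because grid anchor poses are re-estimated as landmark positions improve, the same decision rule naturally favors reuse more often as the map converges.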
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The mobile robot includes a controller circuit that directs a drive of the mobile robot to navigate through the environment using a camera-based navigation system, and a camera including optics defining a camera field of view and a camera optical axis. The camera is positioned within a recessed structure and is tilted so that the camera optical axis is aligned at an acute angle above a horizontal plane in line with the top surface and is aimed in a forward drive direction of the robot body, and the camera is configured to capture images of the operating environment of the mobile robot.
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The mobile robot includes a machine vision system comprising a camera that captures images of the operating environment; detects the presence of an occlusion obstructing a portion of the field of view of the camera based on the captured images; generates a notification when an occlusion obstructing the portion of the field of view of the camera is detected; and maintains occlusion detection data describing occluded and unobstructed portions of images being used by the SLAM application.
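One simple way to detect such an occlusion from captured images is to flag regions whose pixel values stay nearly constant (and dark) across many frames while the robot moves, since unobstructed regions should change with viewpoint. The variance/brightness thresholds below are illustrative assumptions, not the claimed detector.

```python
# Sketch of occlusion detection: mark pixels that are both static
# (low temporal variance) and dark (low mean) across a frame sequence.

def detect_occlusion(frames, var_thresh=1.0, dark_thresh=30):
    """frames: list of equal-size 2D grids of pixel intensities (0-255).
    Returns a mask: True where the region looks occluded."""
    rows, cols = len(frames[0]), len(frames[0][0])
    n = len(frames)
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [f[r][c] for f in frames]
            mean = sum(vals) / n
            var = sum((v - mean) ** 2 for v in vals) / n
            mask[r][c] = var < var_thresh and mean < dark_thresh
    return mask

# Left column is covered (constant, dark); right column varies with motion.
frames = [
    [[5, 120], [5, 80]],
    [[5, 200], [5, 140]],
    [[5, 60],  [5, 210]],
]
print(detect_occlusion(frames))   # [[True, False], [True, False]]
```

A SLAM front end could then consult the mask to ignore features extracted from the occluded portions, matching the occlusion detection data described above.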