Abstract:
Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
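A minimal sketch of the corrective-command idea described above: a linear policy maps sensor features to actions and is nudged toward the user's corrections until it can act autonomously. All class and method names here are illustrative, not from the disclosure.

```python
import numpy as np

class CorrectivePolicy:
    """Linear policy mapping sensor features to actions, refined by
    user corrective commands (hypothetical sketch)."""

    def __init__(self, n_features, n_actions, learning_rate=0.1):
        self.W = np.zeros((n_actions, n_features))
        self.learning_rate = learning_rate

    def act(self, features):
        # Derive a control command from sensory data.
        return self.W @ features

    def correct(self, features, corrected_action):
        # Nudge the policy output toward the user's corrective command.
        error = corrected_action - self.act(features)
        self.W += self.learning_rate * np.outer(error, features)
```

With repeated corrections in a given sensory context, the policy output converges to the corrected action, after which the robot can behave autonomously in that context.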
Abstract:
A robotic device may be operated by a learning controller comprising a feature-learning process configured to determine a control signal based on sensory input. An input may be analyzed in order to determine the occurrence of one or more features. Features in the input may be associated with the control signal during online supervised training. During training, the learning process may be adapted based on the training input and the predicted output. A combination of the predicted and the target output may be provided to the robotic device to execute a task. Feature determination may comprise online adaptation of input and sparse encoding transformations. Computations related to learning-process adaptation and feature detection may be performed on board by the robotic device in real time, thereby enabling autonomous navigation by trained robots.
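One way to read the combination of predicted and target output is as a trust-weighted blend that shifts from the trainer toward the learner as training proceeds. The sketch below assumes a linear predictor and a fixed trust schedule; these choices are illustrative, not from the disclosure.

```python
import numpy as np

class LearningController:
    """Online supervised learner: associates input features with a control
    signal and blends its prediction with the trainer's target output.
    (Hypothetical sketch; names are not from the disclosure.)"""

    def __init__(self, n_features, rate=0.05):
        self.w = np.zeros(n_features)
        self.rate = rate
        self.trust = 0.0  # weight given to the predictor vs. the trainer

    def predict(self, features):
        return float(self.w @ features)

    def step(self, features, target):
        predicted = self.predict(features)
        # Adapt the learning process from the training input and prediction.
        self.w += self.rate * (target - predicted) * features
        self.trust = min(1.0, self.trust + 0.01)
        # Combined predicted/target output sent to the robotic device.
        return self.trust * predicted + (1.0 - self.trust) * target
```

Early in training the combined output is dominated by the target; once the predictor has converged, the same call yields the predicted control signal, matching the abstract's handoff to autonomous operation.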
Abstract:
Apparatus and methods for training of robotic devices are disclosed. Robotic devices may be trained by a user guiding the robot along a target trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate control commands based on one or more of the user guidance, sensory input, and/or a performance measure. Training may comprise a plurality of trials. During a first trial, the user input may be sufficient to cause the robot to complete the trajectory. During subsequent trials, the user and the robot's controller may collaborate so that the user input may be reduced while the robot's control may be increased. Individual contributions from the user and the robot's controller during training may be inadequate (when used exclusively) to complete the task. Upon learning, the user's knowledge may be transferred to the robot's controller to enable task execution in the absence of subsequent inputs from the user.
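The trial-by-trial handoff described above can be sketched as a blended command whose gains shift from the user to the controller across trials. The linear schedule below is an assumption for illustration; the disclosure does not specify the blending rule.

```python
def combined_command(user_cmd, robot_cmd, trial, n_trials):
    """Blend user guidance with the adaptive controller's output.
    Early trials rely on the user; later trials on the robot.
    (Illustrative linear schedule, not from the disclosure.)"""
    robot_gain = min(1.0, trial / max(1, n_trials - 1))
    user_gain = 1.0 - robot_gain
    return user_gain * user_cmd + robot_gain * robot_cmd
```

Note that at intermediate trials neither contribution alone reaches the full command magnitude, mirroring the abstract's point that each contribution used exclusively may be inadequate to complete the task.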
Abstract:
Systems and methods for training a robot to autonomously travel a route are disclosed. In one embodiment, a robot can detect an initial placement in an initialization location. Beginning from the initialization location, the robot can create a map of a navigable route and the surrounding environment during a user-controlled demonstration of the navigable route. After the demonstration, the robot can later detect a second placement in the initialization location and then autonomously navigate the navigable route. The robot can subsequently detect errors associated with the created map. Methods and systems associated with the robot are also disclosed.
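The demonstrate-then-navigate flow can be reduced to a record-and-replay skeleton keyed by the initialization location. This is a deliberately minimal sketch (real systems would store a map and localize against it, not replay raw poses), and all names are hypothetical.

```python
class RouteLearner:
    """Records a user-demonstrated route from an initialization location,
    then replays it when the robot is placed there again.
    (Hypothetical sketch of the demonstrate-then-navigate flow.)"""

    def __init__(self):
        self.routes = {}  # initialization location -> list of poses

    def demonstrate(self, init_location, poses):
        # Map the navigable route during the user-controlled demonstration.
        self.routes[init_location] = list(poses)

    def navigate(self, init_location):
        # On detecting a second placement, replay the recorded route.
        if init_location not in self.routes:
            raise KeyError("no demonstrated route for this location")
        return self.routes[init_location]
```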
Abstract:
Systems and methods for robotic path planning are disclosed. In some implementations of the present disclosure, a robot can generate a cost map associated with an environment of the robot. The cost map can comprise a plurality of pixels each corresponding to a location in the environment, where each pixel can have an associated cost. The robot can further generate a plurality of masks having projected path portions for the travel of the robot within the environment, where each mask comprises a plurality of mask pixels that correspond to locations in the environment. The robot can then determine a mask cost associated with each mask based at least in part on the cost map and select a mask based at least in part on the mask cost. Based on the projected path portions within the selected mask, the robot can navigate a space.
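The mask-scoring step described above can be sketched directly: each mask is a boolean array marking the pixels of a projected path portion, its cost is aggregated from the cost map, and the cheapest mask is selected. The sum aggregation is an assumption; the disclosure's cost function may differ.

```python
import numpy as np

def select_mask(cost_map, masks):
    """Score each candidate mask (a boolean array of projected path pixels)
    against the cost map and return the index of the cheapest one plus
    all mask costs. (Illustrative aggregation, not from the disclosure.)"""
    mask_costs = [float(cost_map[mask].sum()) for mask in masks]
    return int(np.argmin(mask_costs)), mask_costs
```

The robot would then navigate along the path portions inside the selected mask.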
Abstract:
Systems and methods for dynamic route planning in autonomous navigation are disclosed. In some exemplary implementations, a robot can have one or more sensors configured to collect data about an environment, including detected points on one or more objects in the environment. The robot can then plan a route in the environment, where the route can comprise one or more route poses. The route poses can include a footprint indicative at least in part of a pose, size, and shape of the robot along the route. Each route pose can have a plurality of points therein. Based on forces exerted on the points of each route pose by other route poses, objects in the environment, and other factors, each route pose can be repositioned. Based at least in part on interpolation performed on the route poses (some of which may be repositioned), the robot can dynamically plan its route.
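The force-based repositioning can be illustrated with a simple relaxation: each interior route pose (here reduced to a single 2-D point) is pushed away from obstacle points and pulled toward its neighbors, and the relaxed poses can then be interpolated into a path. The force laws and constants below are assumptions for illustration only.

```python
import numpy as np

def reposition(route, obstacles, repulse=1.0, smooth=0.5, iters=50):
    """Push route poses (2-D points) away from obstacle points and pull
    them toward their neighbors. (Simplified force model; illustrative.)"""
    pts = np.asarray(route, dtype=float).copy()
    obs = np.asarray(obstacles, dtype=float)
    for _ in range(iters):
        for i in range(1, len(pts) - 1):        # endpoints stay fixed
            diff = pts[i] - obs                  # vectors away from obstacles
            d = np.linalg.norm(diff, axis=1) + 1e-9
            # Repulsive force with 1/d^2 magnitude along each direction.
            push = (diff / d[:, None] / d[:, None] ** 2).sum(axis=0)
            # Attractive force toward the midpoint of the two neighbors.
            pull = pts[i - 1] + pts[i + 1] - 2 * pts[i]
            pts[i] += 0.01 * (repulse * push + smooth * pull)
    return pts
```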
Abstract:
Systems and methods for robotic mapping are disclosed. In some exemplary implementations, a robot can travel in an environment. From travelling in the environment, the robot can create a graph comprising a plurality of nodes, wherein each node corresponds to a scan taken by a sensor of the robot at a location in the environment. In some exemplary implementations, the robot can generate a map of the environment from the graph. In some cases, to facilitate map generation, the robot can constrain the graph to start and end at a substantially similar location. The robot can also perform scan matching on extended scan groups, determined from identifying overlap between scans, to further determine the location of features in a map.
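The start-equals-end constraint can be illustrated with a toy loop closure: the mismatch (drift) between the final and initial node positions is distributed evenly back over the trajectory. This is a simplified stand-in for full pose-graph optimization, which the abstract does not spell out.

```python
import numpy as np

def close_loop(poses):
    """Constrain a scan graph's trajectory to start and end at the same
    location by spreading the accumulated drift evenly over the nodes.
    (Illustrative simplification of pose-graph optimization.)"""
    poses = np.asarray(poses, dtype=float)
    drift = poses[-1] - poses[0]
    n = len(poses) - 1
    fractions = np.arange(len(poses))[:, None] / n  # 0 at start, 1 at end
    return poses - fractions * drift
```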
Abstract:
Robotic devices may be trained by a user guiding the robot along a target action trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate a control signal based on one or more of the user guidance, sensory input, a performance measure, and/or other information. Training may comprise a plurality of trials, wherein for a given context the user and the robot's controller may collaborate to develop an association between the context and the target action. Upon developing the association, the adaptive controller may be capable of generating the control signal and/or an action indication prior to and/or in lieu of user input. The predictive control functionality attained by the controller may enable autonomous operation of robotic devices, obviating a need for continuing user guidance.
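The context-to-action association can be sketched as a simple evidence counter: the controller defers to the user until a context has been paired with the same action often enough, then emits that action on its own. The count threshold is an assumed stand-in for whatever confidence measure the controller actually uses.

```python
from collections import Counter, defaultdict

class ContextAssociator:
    """Learns which target action follows a given context across trials;
    once confident, it emits the action before (or without) user input.
    (Hypothetical sketch.)"""

    def __init__(self, threshold=3):
        self.counts = defaultdict(Counter)
        self.threshold = threshold

    def observe(self, context, action):
        # One training trial: the user supplied this action in this context.
        self.counts[context][action] += 1

    def predict(self, context):
        # Return the associated action once the association is strong
        # enough, otherwise None (i.e., defer to user guidance).
        if not self.counts[context]:
            return None
        action, n = self.counts[context].most_common(1)[0]
        return action if n >= self.threshold else None
```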
Abstract:
The safe operation and navigation of robots is an active research topic for many real-world applications, such as the automation of large industrial equipment. This technological field often requires heavy machines with arbitrary shapes to navigate very close to obstacles, a challenging and largely unsolved problem. To address this issue, a new planning architecture is developed that allows wheeled vehicles to navigate safely and without human supervision in cluttered environments. The inventive methods and systems disclosed herein belong to the Model Predictive Control (MPC) family of local planning algorithms. The technological features disclosed herein work in the space of two-dimensional (2D) occupancy grids and plan in motor-command space using a black-box forward model for state inference. Compared to conventional methods and systems, the inventive methods and systems disclosed herein include several properties that make them scalable and applicable to a production environment. The inventive concepts disclosed herein are at least deterministic and computationally efficient, run in constant time, and can be deployed in many common non-holonomic systems.
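A minimal sketch of one MPC step in this style: roll each candidate motor command through a black-box forward model over a fixed horizon, score the resulting trajectory on a 2-D occupancy grid, and pick the best command. A fixed candidate set and horizon make the step deterministic and constant-time, as the abstract claims; the scoring function and state representation below are illustrative assumptions, not the production planner.

```python
import numpy as np

def plan_step(grid, state, forward_model, commands, horizon, goal):
    """One MPC step over a 2-D occupancy grid using a black-box forward
    model. Returns the best motor command, or None if all candidates
    collide. (Illustrative sketch; the disclosed planner differs in detail.)"""
    best_cmd, best_cost = None, float("inf")
    for cmd in commands:
        s, cost = state, 0.0
        for _ in range(horizon):
            s = forward_model(s, cmd)           # black-box state inference
            x, y = int(round(s[0])), int(round(s[1]))
            in_bounds = 0 <= y < grid.shape[0] and 0 <= x < grid.shape[1]
            if not in_bounds or grid[y, x]:
                cost = float("inf")             # off the map or in an obstacle
                break
            cost += np.hypot(s[0] - goal[0], s[1] - goal[1])
        if cost < best_cost:
            best_cmd, best_cost = cmd, cost
    return best_cmd
```

Because the forward model is treated as a black box, the same loop accommodates non-holonomic vehicle kinematics without changing the planner.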