Abstract:
A machine vision system for a controllable robotic device proximal to a workspace includes an image acquisition sensor arranged to periodically capture vision signal inputs, each including an image of a field of view including the workspace. A controller operatively couples to the robotic device and includes a non-transitory memory component including an executable vision perception routine. The vision perception routine includes a focus loop control routine operative to dynamically track a focus object in the workspace and a background loop control routine operative to monitor a background of the workspace. The focus loop control routine executes asynchronously and in parallel with the background loop control routine to determine a combined resultant including the focus object and the background based upon the periodically captured vision signal inputs. The controller is operative to control the robotic device to manipulate the focus object based upon the focus loop control routine.
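The dual-loop structure can be pictured as two concurrent routines sharing the latest captured frame: a fast loop that tracks the focus object and a slower loop that models the background, with their most recent outputs merged on demand. The Python sketch below illustrates only that structure; the tracking and background-modeling callables (track_focus_object, model_background) and the loop periods are hypothetical stand-ins, not the patented routines.

```python
import threading
import time

# Minimal sketch of the dual-loop perception idea: a fast "focus" loop tracks
# the object of interest while a slower "background" loop monitors the rest of
# the scene. Both run asynchronously; their latest results are combined on demand.
# All callables passed in (camera_read, track_focus_object, model_background)
# are hypothetical stand-ins.

latest_frame = None
frame_lock = threading.Lock()
focus_result = None
background_result = None
stop_event = threading.Event()

def capture_loop(camera_read, period_s=0.03):
    """Periodically capture vision signal inputs (frames)."""
    global latest_frame
    while not stop_event.is_set():
        frame = camera_read()                 # stand-in for the image sensor
        with frame_lock:
            latest_frame = frame
        time.sleep(period_s)

def focus_loop(track_focus_object, period_s=0.03):
    """Fast loop: dynamically track the focus object."""
    global focus_result
    while not stop_event.is_set():
        with frame_lock:
            frame = latest_frame
        if frame is not None:
            focus_result = track_focus_object(frame)    # e.g. pose of the part
        time.sleep(period_s)

def background_loop(model_background, period_s=0.5):
    """Slow loop: monitor the workspace background."""
    global background_result
    while not stop_event.is_set():
        with frame_lock:
            frame = latest_frame
        if frame is not None:
            background_result = model_background(frame)  # e.g. obstacle map
        time.sleep(period_s)

def combined_resultant():
    """Merge the most recent focus and background results."""
    return {"focus": focus_result, "background": background_result}
```

In this arrangement the robot controller would read combined_resultant() at its own rate, so the fast tracking loop is never blocked by the slower background update.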
Abstract:
A method for calibrating an articulable end effector of a robotic arm employing a digital camera includes commanding the end effector to achieve a plurality of poses. At each commanded end effector pose, an image of the end effector is captured with the digital camera and a scene point cloud including the end effector is generated based upon the captured image. A synthetic point cloud including the end effector is generated based upon the commanded end effector pose, a first position of the end effector is determined based upon the synthetic point cloud, and a second position of the end effector is determined based upon the scene point cloud. A position of the end effector is calibrated based upon the first position and the second position of the end effector for the plurality of commanded end effector poses.
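One plausible way to realize the final calibration step is to treat each commanded pose as yielding a matched pair of end-effector positions (one from the synthetic cloud, one from the scene cloud) and to fit a least-squares rigid transform between the two sets. The sketch below assumes that interpretation; the abstract does not specify the actual estimation math, and the function names are hypothetical.

```python
import numpy as np

# Hedged sketch: for each commanded pose we have a "first" end-effector position
# from the synthetic point cloud and a "second" position from the camera's scene
# point cloud. A least-squares rigid fit (Kabsch) between the two position sets
# is one plausible way to realize the calibration; it is not necessarily the
# method claimed in the abstract.

def cloud_centroid(points):
    """Estimate an end-effector position as the centroid of its point cloud."""
    return np.asarray(points, dtype=float).mean(axis=0)

def fit_rigid_transform(p_synthetic, p_scene):
    """Rigid transform mapping synthetic positions onto scene positions.

    p_synthetic, p_scene: (N, 3) arrays of matched end-effector positions,
    one pair per commanded pose.
    """
    p_synthetic = np.asarray(p_synthetic, dtype=float)
    p_scene = np.asarray(p_scene, dtype=float)
    mu_a, mu_b = p_synthetic.mean(axis=0), p_scene.mean(axis=0)
    a, b = p_synthetic - mu_a, p_scene - mu_b
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_b - r @ mu_a
    return r, t   # scene_position ≈ r @ synthetic_position + t
```

The returned rotation and translation could then be used to correct commanded end-effector positions for the camera-to-robot offset, again as one possible reading of the calibration step.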
Abstract:
A method of training a robot to autonomously execute a robotic task includes moving an end effector through multiple states of a predetermined robotic task to demonstrate the task to the robot in a set of n training demonstrations. The method includes measuring training data, including at least the linear force and the torque via a force-torque sensor while moving the end effector through the multiple states. Key features are extracted from the training data, which is segmented into a time sequence of control primitives. Transitions between adjacent segments of the time sequence are identified. During autonomous execution of the same task, a controller detects the transitions and automatically switches between control modes. A robotic system includes a robot, force-torque sensor, and a controller programmed to execute the method.
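As a rough illustration of segmenting a demonstration into control primitives, the sketch below splits a recorded force-torque trace wherever the contact-force magnitude jumps sharply and picks a toy control mode per segment. The threshold-based changepoint rule and the mode-selection heuristic are stand-ins for the key-feature extraction the abstract describes; all names and thresholds are hypothetical.

```python
import numpy as np

# Minimal sketch of segmenting a demonstrated task into control primitives from
# force-torque data. A simple threshold on the change in force magnitude stands
# in for whatever segmentation method the controller actually uses.

def segment_demonstration(wrench, jump_threshold=5.0):
    """Split a (T, 6) force-torque trace [fx, fy, fz, tx, ty, tz] into segments
    wherever the force magnitude jumps sharply.

    Returns (segments, transitions): (start, end) index pairs and the indices
    where transitions between adjacent segments occur.
    """
    wrench = np.asarray(wrench, dtype=float)
    force_mag = np.linalg.norm(wrench[:, :3], axis=1)
    jumps = np.abs(np.diff(force_mag))
    transitions = list(np.nonzero(jumps > jump_threshold)[0] + 1)
    bounds = [0] + transitions + [len(wrench)]
    segments = list(zip(bounds[:-1], bounds[1:]))
    return segments, transitions

def control_mode_for(segment_wrench):
    """Toy mode selection: force control when contact forces dominate,
    position control otherwise (purely illustrative)."""
    mean_force = np.linalg.norm(np.asarray(segment_wrench)[:, :3], axis=1).mean()
    return "force_control" if mean_force > 2.0 else "position_control"
```

During autonomous execution, detecting the same kind of transition online would be the trigger for switching between the per-segment control modes.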
Abstract:
A method for localizing and estimating a pose of a known object in a field of view of a vision system is described, and includes developing a processor-based model of the known object, capturing a bitmap image file including an image of the field of view including the known object, extracting features from the bitmap image file, matching the extracted features with features associated with the model of the known object, localizing an object in the bitmap image file based upon the extracted features, clustering the extracted features of the localized object, merging the clustered extracted features, detecting the known object in the field of view based upon a comparison of the merged clustered extracted features and the processor-based model of the known object, and estimating a pose of the detected known object in the field of view based upon the detecting of the known object.
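A conventional way to approximate this pipeline with off-the-shelf tools is feature matching followed by robust geometric fitting. The sketch below uses OpenCV's ORB features, brute-force matching, and a RANSAC homography as a 2D stand-in for detection and pose estimation; the abstract's clustering and merging steps are not reproduced, RANSAC's inlier selection plays only a loosely similar role, and all paths and thresholds are hypothetical.

```python
import cv2
import numpy as np

# Hedged sketch of the match-then-estimate idea using standard OpenCV pieces.
# This is not the patented method, only a familiar approximation of its shape.

def detect_known_object(model_image_path, scene_image_path, min_matches=15):
    model = cv2.imread(model_image_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_image_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_m, des_m = orb.detectAndCompute(model, None)   # model features
    kp_s, des_s = orb.detectAndCompute(scene, None)   # scene (field-of-view) features
    if des_m is None or des_s is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_s), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None   # known object not detected

    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # The homography maps model coordinates into the scene: a 2D stand-in for
    # the detected object's pose in the field of view.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

A full 6-DOF pose would additionally require 3D model points and camera intrinsics (for example via a PnP solver), which this 2D sketch deliberately omits.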
Abstract:
A robotic system includes a robot, sensors which measure status information including a position and orientation of the robot and an object within the workspace, and a controller. The controller, which visually debugs an operation of the robot, includes a simulator module, a marker generator module, an action planning module, and a graphical user interface (GUI). The simulator module receives the status information and generates visual markers, in response to marker commands, as graphical depictions of the object and robot. The action planning module selects a next action of the robot. The marker generator module generates and outputs the marker commands to the simulator module in response to the selected next action. The GUI displays the visual markers and the selected next action, and receives input commands. Via the action planning module, the position and/or orientation of the visual markers are modified in real time to change the operation of the robot.
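The marker flow can be sketched as a small set of cooperating objects: the planner's selected action is turned into a marker command, the simulator stores the resulting marker for display, and a GUI callback that moves a marker changes where the planned action will execute. The Python below is purely illustrative; the class and field names are hypothetical and do not reflect the actual modules.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Illustrative sketch of the planner -> marker generator -> simulator -> GUI
# flow described above. All names are hypothetical.

@dataclass
class Marker:
    marker_id: int
    position: Tuple[float, float, float]       # where the action will occur
    orientation: Tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0)
    label: str = ""

class SimulatorModule:
    """Holds the current set of visual markers for display in the GUI."""
    def __init__(self) -> None:
        self.markers: Dict[int, Marker] = {}

    def apply_marker_command(self, marker: Marker) -> None:
        self.markers[marker.marker_id] = marker

class MarkerGenerator:
    """Turns the planner's selected next action into marker commands."""
    def __init__(self, simulator: SimulatorModule) -> None:
        self._sim = simulator
        self._next_id = 0

    def emit_for_action(self, action_name: str, target_xyz) -> Marker:
        marker = Marker(self._next_id, tuple(target_xyz), label=action_name)
        self._next_id += 1
        self._sim.apply_marker_command(marker)
        return marker

def on_gui_marker_moved(simulator: SimulatorModule, marker_id: int, new_xyz) -> None:
    """GUI callback: moving a marker changes where the planned action executes."""
    old = simulator.markers[marker_id]
    simulator.markers[marker_id] = Marker(marker_id, tuple(new_xyz),
                                          old.orientation, old.label)
```

In such a scheme, the planner would re-read the (possibly user-moved) marker pose before commanding the robot, which is what makes the debugging interactive.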