Abstract:
Automatic scanning and representing an environment with collision avoidance includes, for example, obtaining a first representation of the environment using a first scanning path; determining, based on the first representation of the environment, a second scanning path operable to avoid contact with the environment when obtaining a second representation of the environment; and obtaining the second representation of the environment based on the second scanning path, wherein the second representation of the environment is different from the first representation. The method may be employed in imaging and/or representing a rock wall having a plurality of spaced-apart holes for receiving charges for mining.
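The two-pass idea above can be sketched in a few lines. This is an illustrative assumption, not the patented method: the environment is reduced to a 1-D depth profile per scan station, and the second path simply keeps a fixed standoff from the surface measured in the first pass.

```python
# Hypothetical sketch of the two-pass scan: a coarse first pass yields an
# approximate surface profile; the second scanning path is derived from it
# by keeping a fixed clearance so the sensor never contacts the environment.
# The 1-D profile model and all names are illustrative only.

def coarse_profile(samples):
    """First pass: approximate the surface as per-station depth readings (m)."""
    return list(samples)

def plan_second_path(profile, standoff=0.5):
    """Second pass: follow the measured profile, backed off by a clearance
    margin, so the path avoids contact even where the first representation
    was only approximate."""
    return [depth - standoff for depth in profile]

profile = coarse_profile([2.0, 1.8, 1.5, 1.9])
path = plan_second_path(profile, standoff=0.5)
assert all(p < d for p, d in zip(path, profile))  # always clear of surface
```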
Abstract:
Apparatus and method are disclosed for determining the position of a robot relative to objects in a workspace, using a camera, scanner, or other suitable device in conjunction with object recognition. The camera or other device receives information from which a point cloud of the scene viewed by the camera can be developed. The point cloud will be appreciated to be in a camera-centric frame of reference. Information about a known datum is compared to the point cloud through object recognition. For example, a link of the robot could be the identified datum so that, when recognized, the coordinates of the point cloud can be converted to a robot-centric frame of reference, since the position of the datum is known relative to the robot.
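The frame conversion described above can be sketched as follows. Everything here is an illustrative assumption: the rigid-transform helpers, the example datum poses, and the tiny point cloud stand in for the recognized robot link and the real camera data.

```python
# Sketch: a point cloud arrives in a camera-centric frame; object
# recognition locates a known datum (e.g. a robot link) in that cloud.
# Because the datum's pose in the robot frame is known from kinematics,
# every point can be re-expressed robot-centrically via
#   T_robot<-camera = T_robot<-datum * inverse(T_camera<-datum).

import math

def rot_z(theta):
    """3x3 rotation about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, t, p):
    """Apply rigid transform (R, t) to point p."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def invert(R, t):
    """Invert a rigid transform: (R^T, -R^T t)."""
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, ti

def compose(Ra, ta, Rb, tb):
    """Compose transforms: (Ra, ta) after (Rb, tb)."""
    R = [[sum(Ra[i][k] * Rb[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    return R, apply(Ra, ta, tb)

# Datum pose as recognized in the camera frame (from the point cloud).
R_cd, t_cd = rot_z(math.pi / 2), [1.0, 0.0, 0.5]
# The same datum's known pose in the robot frame (from robot kinematics).
R_rd, t_rd = rot_z(0.0), [0.2, 0.0, 0.0]

R_dc, t_dc = invert(R_cd, t_cd)
R_rc, t_rc = compose(R_rd, t_rd, R_dc, t_dc)  # robot <- camera

cloud_cam = [[1.0, 0.0, 0.5], [2.0, 0.0, 0.5]]
cloud_robot = [apply(R_rc, t_rc, p) for p in cloud_cam]
```

The first cloud point coincides with the datum, so after conversion it lands exactly at the datum's known robot-frame position.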
Abstract:
A calibration article is provided for calibrating a robot and 3D camera. The calibration article includes side surfaces that are angled inward toward a top surface. The robot and camera are calibrated by capturing positional data of the calibration article relative to the robot and the camera. The captured data is used to generate correlation data between the robot and the camera. The correlation data is used by the controller to align the robot with the camera during operational use of the robot and camera.
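One common way to turn such captured positional data into correlation data is a least-squares rigid alignment between corresponding points measured in both frames. The sketch below assumes this (Kabsch/SVD) approach and synthetic corner points of a hypothetical calibration article; the abstract does not specify the actual algorithm.

```python
# Sketch of generating camera-to-robot correlation data from positional
# data captured on a calibration article. Corresponding points of the
# article are known in both the camera frame and the robot frame; a
# least-squares rigid transform (Kabsch/SVD) relates the two frames.
# The point data and ground-truth pose here are synthetic assumptions.

import numpy as np

def estimate_transform(P_cam, P_rob):
    """Rigid transform (R, t) minimizing ||R p_cam + t - p_rob||^2."""
    P_cam, P_rob = np.asarray(P_cam, float), np.asarray(P_rob, float)
    c_cam, c_rob = P_cam.mean(axis=0), P_rob.mean(axis=0)
    H = (P_cam - c_cam).T @ (P_rob - c_rob)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_rob - R @ c_cam
    return R, t

# Non-coplanar feature points of the article as seen by the camera.
P_cam = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])

# Ground-truth pose used only to synthesize matching robot-frame data:
# a 90-degree rotation about z plus a translation.
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([0.5, 0.0, 0.1])
P_rob = P_cam @ R_true.T + t_true

R_est, t_est = estimate_transform(P_cam, P_rob)
```

The estimated `(R_est, t_est)` is the "correlation data": the controller can apply it to align camera measurements with robot coordinates during operation.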
Abstract:
Automatic scanning and representing an environment having a plurality of features includes, for example, scanning the environment along a scanning path; interspersing a plurality of localized scans of the plurality of features during the scanning along the scanning path, wherein the interspersed localized scans of the features differ from the scanning of the environment along the scanning path; and obtaining a representation of at least a portion of the environment based on the scanning of the environment and the interspersed localized scans of the plurality of features.
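The interleaving described above can be sketched as a simple loop: the scanner walks the global path and, whenever it passes near a known feature, inserts a denser localized scan before resuming. The 1-D path, feature positions, and scan records are illustrative assumptions.

```python
# Sketch: global scan along a path with localized feature scans
# interspersed. A "local" event stands in for the different (e.g. denser,
# slower) scan performed at each feature such as a drill hole.

def scan(path, features, near=0.25):
    """Walk the global path; trigger a localized scan when a pending
    feature comes within `near` of the current station."""
    log = []
    pending = sorted(features)
    for x in path:
        log.append(("global", x))
        while pending and abs(pending[0] - x) <= near:
            log.append(("local", pending.pop(0)))  # interspersed local scan
    return log

events = scan(path=[0.0, 0.5, 1.0, 1.5], features=[0.4, 1.5])
```

The resulting log shows local scans interleaved with, not appended after, the global pass, so both contribute to the final representation.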
Abstract:
Robot positioning is facilitated by obtaining, for each time of a first sampling schedule, a respective indication of a pose of a camera system of a robot relative to a reference coordinate frame, the respective indication of the pose of the camera system being based on a comparison of multiple three-dimensional images of a scene of an environment, the obtaining providing a plurality of indications of poses of the camera system; obtaining, for each time of a second sampling schedule, a respective indication of a pose of the robot, the obtaining providing a plurality of indications of poses of the robot; and determining, using the plurality of indications of poses of the camera system and the plurality of indications of poses of the robot, an indication of the reference coordinate frame and an indication of a reference point of the camera system relative to the pose of the robot.
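A heavily simplified, translation-only sketch of that estimation follows. It assumes the two schedules are reconciled by nearest-timestamp matching and that the camera reference point sits at a fixed offset from the robot pose; real systems solve the full rotational hand-eye problem, which the abstract does not detail.

```python
# Sketch: camera poses (first sampling schedule) and robot poses (second
# schedule) are collected independently; after matching samples by nearest
# timestamp, the fixed offset between robot pose and camera reference
# point is estimated by averaging. All data and names are assumptions.

def nearest(samples, t):
    """Pose from the sample whose timestamp is closest to t."""
    return min(samples, key=lambda s: abs(s[0] - t))[1]

def estimate_offset(cam_samples, rob_samples):
    """cam_samples / rob_samples: lists of (time, (x, y, z)) on
    separate sampling schedules."""
    diffs = []
    for t, cam in cam_samples:
        rob = nearest(rob_samples, t)
        diffs.append([c - r for c, r in zip(cam, rob)])
    n = len(diffs)
    return tuple(sum(d[i] for d in diffs) / n for i in range(3))

cam = [(0.0, (1.1, 0.0, 0.5)), (0.2, (1.3, 0.0, 0.5)), (0.4, (1.5, 0.0, 0.5))]
rob = [(0.01, (1.0, 0.0, 0.0)), (0.19, (1.2, 0.0, 0.0)), (0.41, (1.4, 0.0, 0.0))]
offset = estimate_offset(cam, rob)
```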
Abstract:
In one embodiment, the present disclosure provides a robot automated mining method. In one embodiment, a method includes a robot positioning a charging component for entry into a drill hole. In one embodiment, a method includes a robot moving a charging component within a drill hole. In one embodiment, a method includes a robot filling a drill hole with explosive material. In one embodiment, a method includes operating a robot within a mining environment.
Abstract:
A method for performing an inspection inside a machine includes inserting a remote controlled vehicle into the machine, wherein the remote controlled vehicle includes at least one camera. The remote controlled vehicle is directed along a path inside the machine. A plurality of images are captured inside the machine with the at least one camera. The images are processed to generate a panoramic image and/or a 3D image. The panoramic image and/or the 3D image are displayed on a display.
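The stitching step can be illustrated with a toy model. Here each "image" is a 2-D grid of pixel values and the horizontal overlap between consecutive captures is assumed known in advance; real pipelines estimate it by feature matching, which the abstract leaves unspecified.

```python
# Toy sketch: images captured along the inspection path are combined into
# one panoramic image by concatenating rows and dropping the columns that
# repeat the previous view. All data and names are illustrative.

def stitch(images, overlap):
    """Concatenate images row-wise, dropping `overlap` leading columns of
    each image after the first (those columns duplicate the prior view)."""
    pano = [row[:] for row in images[0]]
    for img in images[1:]:
        for pano_row, row in zip(pano, img):
            pano_row.extend(row[overlap:])
    return pano

a = [[1, 2, 3], [4, 5, 6]]      # first capture, 2x3 pixels
b = [[3, 7, 8], [6, 9, 10]]     # next capture; first column overlaps `a`
pano = stitch([a, b], overlap=1)
```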
Abstract:
A machine having at least one actuated mechanism is remotely located from a control station. A two-way real-time communication link connects the machine location with the control station. A controller at the machine location has program code that is configured to determine, from data from one or more sensors at the machine location, whether an actual fault has occurred in the machine when the machine is performing its predetermined function, to determine one or more types for an actual fault, and to transmit the one or more fault types to the control station for analysis. The code in the controller is configured as a preprogrammed trap routine, specific to the machine function, that is automatically executed when an error in machine operation is detected at the machine location. The controller also has a default trap routine that is executed when a specific routine does not exist.
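The specific-plus-default trap dispatch described above reduces to a lookup with a fallback. The fault names and handler bodies below are illustrative assumptions, not from the disclosure.

```python
# Sketch: the controller classifies a detected error into a fault type,
# runs the machine-specific trap routine preprogrammed for that type if
# one exists, and falls back to a default trap routine otherwise.

def overheat_trap():
    return "spindle stopped, coolant on"

def default_trap():
    return "machine halted, fault type sent to control station"

# Preprogrammed trap routines, keyed by fault type for this machine.
TRAP_ROUTINES = {"overheat": overheat_trap}

def handle_fault(fault_type):
    """Dispatch to the specific trap routine, or the default if none exists."""
    return TRAP_ROUTINES.get(fault_type, default_trap)()
```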
Abstract:
Unique systems, methods, techniques, and apparatuses of a robotic control system are disclosed. One exemplary embodiment is a robotic control system comprising a robot including a memory device, a sensor, a processing device, and a communication device. The processing device is structured to receive data from the sensor, compare the data from the sensor to a first 3D model of the workspace, and identify differences between the data from the sensor and the first 3D model. The robotic control system also comprises a remote computing device, located in a second workspace, including a communication device, a memory device, a processing device structured to update a second 3D model of the workspace using the identified differences, and a user interface structured to receive user input and display the updated 3D model. The robot is structured to move in response to receiving the user input by way of the remote computing device.
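The difference-based model update can be sketched compactly. The voxel-set representation and helper names below are assumptions made for illustration; the abstract does not specify how the 3D models are encoded.

```python
# Sketch: the robot compares sensor data against its local (first) 3-D
# model and transmits only the differences; the remote computing device
# applies them to its copy (the second model). Models are represented as
# sets of occupied voxel coordinates purely for illustration.

def diff_models(sensor_voxels, model_voxels):
    """Robot side: what the sensor sees that the model lacks, and vice versa."""
    added = sensor_voxels - model_voxels
    removed = model_voxels - sensor_voxels
    return added, removed

def apply_diff(model_voxels, added, removed):
    """Remote side: update the second 3-D model using the differences."""
    return (model_voxels - removed) | added

first_model = {(0, 0, 0), (1, 0, 0)}
sensor = {(1, 0, 0), (2, 0, 0)}          # an obstacle has moved
added, removed = diff_models(sensor, first_model)
second_model = apply_diff(first_model, added, removed)
```

Transmitting only `added` and `removed` keeps the link traffic proportional to the change, not to the full model size.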