Abstract:
Disclosed are methods and devices for transitioning a mixed-mode autonomous vehicle from a human-driven mode to an autonomously driven mode. Transitioning may include stopping the vehicle on a predefined landing strip and detecting a reference indicator. Based on the reference indicator, the vehicle may be able to determine its exact position. Additionally, the vehicle may use the reference indicator to obtain an autonomous vehicle instruction via a URL. Once the vehicle knows its precise location and has an autonomous vehicle instruction, it can operate in autonomous mode.
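As a rough illustration of that flow, the sketch below assumes the reference indicator has already been decoded into a position and an instruction URL, and that the vehicle exposes hypothetical is_stopped, set_reference_position, load_instruction, and set_mode methods; none of these names come from the disclosure.

```python
import json
import urllib.request

def transition_to_autonomous(vehicle, indicator):
    # 'indicator' is assumed to be data decoded from the reference indicator,
    # carrying the precise landing-strip position and an instruction URL.
    if not vehicle.is_stopped():          # vehicle must be stopped on the strip
        return False

    # Fix the vehicle's exact position from the reference indicator.
    vehicle.set_reference_position(indicator["lat"], indicator["lon"])

    # Obtain the autonomous vehicle instruction via the encoded URL.
    with urllib.request.urlopen(indicator["url"]) as resp:
        instruction = json.loads(resp.read())

    # With a precise location and an instruction, switch driving modes.
    vehicle.load_instruction(instruction)
    vehicle.set_mode("autonomous")
    return True
```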
Abstract:
The present teachings provide an autonomous mobile robot that includes a drive configured to maneuver the robot over a ground surface within an operating environment; a camera mounted on the robot with a field of view that includes the floor adjacent to the mobile robot in its drive direction; a frame buffer that stores image frames obtained by the camera while the mobile robot is driving; and a memory device configured to store a learned data set of a plurality of descriptors corresponding to pixel patches in image frames that represent portions of the operating environment and are determined by mobile robot sensor events.
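One way to picture the frame buffer and the learned data set is the minimal sketch below; the class and field names, the buffer length, and the event labels are illustrative assumptions rather than details from the teachings.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class LearnedDescriptor:
    patch_descriptor: list   # feature vector computed from a pixel patch
    frame_location: tuple    # (row, col) of the patch within the image frame
    sensor_event: str        # e.g. "bump", "cliff", "wheel_drop"

@dataclass
class RobotPerception:
    # Rolling buffer of image frames captured while the robot is driving.
    frame_buffer: deque = field(default_factory=lambda: deque(maxlen=30))
    # Learned data set associating pixel-patch descriptors with sensor events.
    learned_set: list = field(default_factory=list)

    def on_new_frame(self, frame):
        self.frame_buffer.append(frame)

    def on_sensor_event(self, event, descriptor, location):
        # When a sensor event fires, keep a descriptor for the floor patch the
        # camera saw ahead of the robot, tagged with that event.
        self.learned_set.append(LearnedDescriptor(descriptor, location, event))
```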
Abstract:
Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring, and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can derive control commands directly from the sensory data. Using the learned control policy, the robot can behave autonomously.
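The toy sketch below illustrates the correct-and-refine idea; it stands in a simple nearest-neighbor lookup for the learned policy, which is an assumption for illustration and not the learning method of the invention.

```python
import math

class CorrectablePolicy:
    def __init__(self):
        self.examples = []   # (sensor_vector, command) pairs from corrections

    def correct(self, sensor_vector, corrective_command):
        # Each user correction becomes a training example that refines the
        # mapping from sensor data to actions.
        self.examples.append((list(sensor_vector), corrective_command))

    def act(self, sensor_vector):
        # After learning, derive a control command directly from sensor data
        # by returning the command of the closest stored example.
        if not self.examples:
            return None
        return min(self.examples,
                   key=lambda ex: math.dist(ex[0], sensor_vector))[1]

policy = CorrectablePolicy()
policy.correct([0.2, 0.9], "turn_left")    # corrective command during training
policy.correct([0.8, 0.1], "turn_right")
print(policy.act([0.25, 0.85]))            # -> "turn_left", chosen autonomously
```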
Abstract:
A random k-nearest neighbors (RKNN) approach may be used for a regression/classification model in which the input comprises the k closest training examples in the feature space. The RKNN process may use video images as input in order to predict motor commands for controlling navigation of a robot. In some implementations of robotic vision-based navigation, the input space may be highly dimensional and highly redundant. When visual inputs are augmented with data of another modality that is characterized by fewer dimensions (e.g., audio), the visual data may overwhelm the lower-dimension data. The RKNN process may partition the available data into subsets comprising a given number of samples from the lower-dimension data. Outputs associated with individual subsets may be combined (e.g., averaged). Selection of the number of neighbors, the subset size, and/or the number of subsets may be used to trade off speed against accuracy of the prediction.
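A minimal sketch of one way such an RKNN regressor could look is given below: k-NN runs over several random subsets of the high-dimensional (visual) features, each subset also carries the lower-dimension features so they are not overwhelmed, and the per-subset predictions are averaged. Function and parameter names are illustrative assumptions.

```python
import math
import random

def knn_predict(train_x, train_y, query, k, feat_idx):
    # Distance is computed only over the chosen feature subset.
    def dist(row):
        return math.sqrt(sum((row[i] - query[i]) ** 2 for i in feat_idx))
    nearest = sorted(range(len(train_x)), key=lambda n: dist(train_x[n]))[:k]
    return sum(train_y[n] for n in nearest) / len(nearest)

def rknn_predict(train_x, train_y, query, low_dim_idx,
                 k=3, n_subsets=5, subset_size=4):
    # low_dim_idx: indices of the lower-dimension modality (e.g., audio
    # features), included in every subset.
    high_dim_idx = [i for i in range(len(train_x[0])) if i not in low_dim_idx]
    preds = []
    for _ in range(n_subsets):
        chosen = random.sample(high_dim_idx, min(subset_size, len(high_dim_idx)))
        preds.append(knn_predict(train_x, train_y, query, k,
                                 list(low_dim_idx) + chosen))
    # Averaging the subset outputs; k, subset size, and number of subsets
    # trade speed against accuracy.
    return sum(preds) / len(preds)
```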
Abstract:
A global navigation satellite system (GNSS) and gyroscope control system for vehicle steering control comprising a GNSS receiver and antennas at a fixed spacing to determine a vehicle position, velocity, and at least one of a heading angle, a pitch angle, and a roll angle based on carrier phase position differences. The system also includes a control system configured to receive the vehicle position, heading, and at least one of roll and pitch, and configured to generate a steering command to a vehicle steering system. The system includes gyroscopes for determining system attitude change with respect to multiple axes; this information is integrated with GNSS-derived positioning information to determine vehicle position, velocity, rate-of-turn, attitude, and other operating characteristics. Relative orientations and attitudes between motive and working components can be determined using optical sensors and cameras. The system can also be used to guide multiple vehicles in relation to each other.
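For illustration only, the sketch below blends gyro-integrated heading with GNSS-derived heading using a simple complementary filter and forms a proportional steering command from cross-track and heading errors; the gains, the filter, and the angle handling (wrap-around is ignored) are assumptions, not values from the disclosure.

```python
def fuse_heading(prev_heading, gyro_rate, dt, gnss_heading, alpha=0.98):
    # Integrate the rate gyro for short-term attitude change, then pull the
    # result toward the GNSS carrier-phase heading to bound the gyro drift.
    # (Angle wrap-around at 360 degrees is ignored for brevity.)
    predicted = prev_heading + gyro_rate * dt
    return alpha * predicted + (1.0 - alpha) * gnss_heading

def steering_command(cross_track_error, heading_error,
                     k_xte=0.5, k_hdg=1.0, max_angle=30.0):
    # Proportional steering from cross-track and heading errors, limited to
    # the steering actuator's range.
    angle = k_xte * cross_track_error + k_hdg * heading_error
    return max(-max_angle, min(max_angle, angle))
```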
Abstract:
A global navigation satellite system (GNSS) and gyroscope control system for vehicle steering control comprising a GNSS receiver and antennas at a fixed spacing to determine a vehicle position, velocity, and at least one of a heading angle, a pitch angle, and a roll angle based on carrier phase position differences. The system also includes a control system configured to receive the vehicle position, heading, and at least one of roll and pitch, and configured to generate a steering command to a vehicle steering system. A vehicle control method includes the steps of computing a position and a heading for the vehicle using GNSS positioning, and determining vehicle attitude with a rate gyro, which together are used to generate a steering command. Relative orientations and attitudes between tractors and implements can be determined using optical sensors and cameras. Laser detectors and rangefinders can also be used.
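As a geometric aside, with two antennas mounted fore and aft at a fixed spacing, the carrier-phase position difference gives a baseline vector from which heading and pitch can be read off, roughly as sketched below; the axis conventions and function name are assumptions.

```python
import math

def baseline_attitude(d_east, d_north, d_up, spacing):
    # d_east, d_north, d_up: position of the forward antenna relative to the
    # rear antenna (metres), from carrier-phase differencing.
    # spacing: fixed antenna separation (metres).
    heading = math.degrees(math.atan2(d_east, d_north))   # 0 deg = due north
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, d_up / spacing))))
    return heading, pitch
```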
Abstract:
A computer-implemented method and system for controlling operation of an autonomous driverless vehicle in response to detection of a hazard in the path of the vehicle.
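Purely as an illustration of one possible decision rule (not a detail of the disclosure), a hazard response might compare the stopping distance at the current speed with the distance to the detected hazard:

```python
def respond_to_hazard(speed_mps, distance_to_hazard_m, decel_mps2=4.0):
    # Distance needed to stop at an assumed comfortable deceleration.
    stopping_distance = speed_mps ** 2 / (2.0 * decel_mps2)
    if distance_to_hazard_m <= stopping_distance:
        return "emergency_brake"
    return "slow_and_monitor"
```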
Abstract:
Disclosed are devices and methods for a low-latency data telecommunication system for video, audio, control data, and other data for use with one or more robots and remote controls. The data transmission can be digital. The data telecommunication system can enable the use of multiple robots and multiple remote controls in the same location with encrypted data transmission.
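One conceivable datagram layout for such a link is sketched below: a short header identifying the robot, the payload type, and a sequence number, followed by an encrypted payload (here using the third-party cryptography package's Fernet as a stand-in cipher). The field layout and cipher choice are assumptions for illustration, not the disclosed protocol.

```python
import struct
from cryptography.fernet import Fernet

VIDEO, AUDIO, CONTROL = 0, 1, 2
HEADER = struct.Struct("!HBL")   # robot id, payload type, sequence number

def build_packet(key, robot_id, payload_type, seq, payload):
    # Encrypting the payload lets several robot/remote pairs operate in the
    # same location without interpreting each other's traffic.
    return HEADER.pack(robot_id, payload_type, seq) + Fernet(key).encrypt(payload)

def parse_packet(key, packet):
    robot_id, payload_type, seq = HEADER.unpack(packet[:HEADER.size])
    return robot_id, payload_type, seq, Fernet(key).decrypt(packet[HEADER.size:])

key = Fernet.generate_key()
pkt = build_packet(key, robot_id=7, payload_type=CONTROL, seq=1,
                   payload=b"forward 0.5 m/s")
print(parse_packet(key, pkt))
```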