Abstract:
According to one embodiment, an information processor includes a memory and processing circuitry. The circuitry receives area information indicating a second area in a first area around a movable body apparatus and third areas in the first area, wherein the movable body apparatus is movable in the second area and an object is present in each of the third areas. The circuitry receives movement information including at least one of a velocity, a movement direction, or an acceleration of the apparatus. The circuitry acquires evaluation values, each indicative of the damage that would be caused when the apparatus collides with the object in a corresponding third area, and determines, based on the evaluation values, a position corresponding to a first object that causes the least damage.
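The selection step described above can be sketched as a minimal argmin over per-object evaluation values; the data layout and damage scores below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: pick the position of the object whose collision
# would cause the least damage, given precomputed evaluation values.

def least_damage_position(objects):
    """objects: list of dicts with a 'position' and a 'damage' evaluation
    value. Returns the position of the least-damage object."""
    best = min(objects, key=lambda o: o["damage"])
    return best["position"]

objects = [
    {"position": (2.0, 1.0), "damage": 0.9},   # e.g., high-damage object
    {"position": (5.0, -3.0), "damage": 0.2},  # e.g., low-damage object
]
print(least_damage_position(objects))  # -> (5.0, -3.0)
```

In practice the evaluation values would be derived from the received area and movement information (e.g., object type and relative velocity); here they are given directly to keep the sketch self-contained.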
Abstract:
An apparatus and methods for training and/or operating a robotic device to follow a trajectory. A robotic vehicle may utilize a camera and store, in an ordered buffer, the sequence of images of the visual scene observed while following a trajectory during training. Motor commands associated with a given image may also be stored. During autonomous operation, an acquired image may be compared with one or more images from the training buffer in order to determine the most likely match. An evaluation may be performed to determine whether the new image corresponds to a shifted (e.g., left/right) version of a previously observed stored image. If the new image is shifted left, a right-turn command may be issued; if the new image is shifted right, a left-turn command may be issued.
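The shift-to-steering rule above can be sketched as follows. This is a hedged toy version assuming images are reduced to 1-D column signatures; the function names, the sum-of-absolute-differences matcher, and the padding scheme are illustrative assumptions.

```python
# Toy sketch: match a new image signature against shifted versions of a
# stored training signature and map the best shift to a turn command.

def sad(a, b):
    """Sum of absolute differences between two equal-length signatures."""
    return sum(abs(x - y) for x, y in zip(a, b))

def shift(sig, k):
    """Shift a signature by k pixels (positive = right), edge-padding."""
    if k == 0:
        return sig
    if k > 0:
        return [sig[0]] * k + sig[:-k]
    return sig[-k:] + [sig[-1]] * (-k)

def steering_command(new_sig, stored_sig, max_shift=3):
    """Find the shift of the stored signature that best matches the new
    one: a left-shifted view yields a right turn, and vice versa."""
    best_k = min(range(-max_shift, max_shift + 1),
                 key=lambda k: sad(new_sig, shift(stored_sig, k)))
    if best_k < 0:
        return "turn_right"   # scene appears shifted left
    if best_k > 0:
        return "turn_left"    # scene appears shifted right
    return "straight"
```

A real implementation would match against 2-D images from the ordered training buffer and likely blend the result with the stored motor commands; the 1-D signature keeps the idea visible in a few lines.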
Abstract:
A method, apparatus, and/or system for providing an action with respect to a mobile device using a robotic device that tracks a user. In accordance with at least one embodiment, a request to perform an action with respect to an electronic device is received. Information may be sent to one or more robotic devices within a proximity of the electronic device. A robotic device of the one or more robotic devices may be selected to perform the action. An indication may be received from the robotic device that indicates that the user has interacted with the robotic device. Instructions may be sent to the robotic device to perform the action with respect to the electronic device. A location of the user may be tracked while charging is performed by the robotic device. The robotic device may be instructed to follow the user at a threshold distance from the user.
Abstract:
A random k-nearest neighbors (RKNN) approach may be used for regression/classification models wherein the input includes the k closest training examples in the feature space. The RKNN process may utilize video images as input in order to predict motor commands for controlling navigation of a robot. In some implementations of robotic vision-based navigation, the input space may be high-dimensional and highly redundant. When visual inputs are augmented with data of another modality characterized by fewer dimensions (e.g., audio), the visual data may overwhelm the lower-dimension data. The RKNN process may partition the available data into subsets comprising a given number of samples from the lower-dimension data. Outputs associated with individual subsets may be combined (e.g., averaged). Selection of the number of neighbors, the subset size, and/or the number of subsets may be used to trade off between speed and accuracy of the prediction.
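The partition-and-average scheme above can be sketched in a few lines. This is an illustrative sketch assuming a plain kNN regressor over feature tuples; the function names, the squared-distance metric, and the default parameters are assumptions, not from the patent.

```python
import random

# Sketch of the RKNN idea: run kNN regression on several random subsets
# of the training data, then average the per-subset predictions.

def knn_predict(train, query, k):
    """Average the labels of the k training samples closest to the query.
    train: list of (features, label) pairs."""
    nearest = sorted(
        train,
        key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)),
    )[:k]
    return sum(label for _, label in nearest) / len(nearest)

def rknn_predict(train, query, k=3, subset_size=50, n_subsets=10, rng=None):
    """Draw random subsets, predict on each, and average the outputs.
    subset_size and n_subsets trade speed against accuracy."""
    rng = rng or random.Random(0)
    preds = []
    for _ in range(n_subsets):
        subset = rng.sample(train, min(subset_size, len(train)))
        preds.append(knn_predict(subset, query, min(k, len(subset))))
    return sum(preds) / len(preds)
```

Per the abstract, a practical variant would build each subset around a fixed quota of lower-dimension (e.g., audio) samples so the visual modality does not swamp them; uniform sampling is used here only to keep the sketch short.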
Abstract:
Robotic devices may be operated by users remotely. A learning controller apparatus may detect remote transmissions comprising user control instructions. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The learning apparatus may monitor one or more wavelengths (e.g., infrared light, a radio channel) and detect transmissions from the user's remote control device to the robot during its operation by the user. The learning apparatus may be configured to develop associations between the detected user remote control instructions and actions of the robot for a given context. When a given sensory context occurs, the learning controller may automatically provide the control instructions to the robot that are associated with that context. The provision of control instructions by the learning controller may obviate the need for user remote control of the robot, thereby enabling autonomous operation by the robot.
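The association mechanism above can be sketched as a frequency table from discrete contexts to observed commands. The class name, the discrete context keys, and the majority-vote rule are illustrative assumptions; a real controller would operate on continuous sensory features.

```python
from collections import Counter, defaultdict

# Sketch: associate user remote-control commands (as detected on an
# IR/radio channel) with the sensory context in which they occurred,
# then replay the most common command when a known context recurs.

class LearningController:
    def __init__(self):
        self._log = defaultdict(Counter)

    def observe(self, context, command):
        """Record a detected user command together with its context."""
        self._log[context][command] += 1

    def suggest(self, context):
        """Return the command most often associated with this context,
        or None if the context has not been seen before."""
        if context not in self._log:
            return None
        return self._log[context].most_common(1)[0][0]
```

Once `suggest` returns a command for every context the robot encounters, the user's remote control becomes unnecessary, which is the autonomy claim in the abstract.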
Abstract:
An aircraft is provided and includes a frame, drive elements configured to drive movements of the frame, and a computer configured to receive mission planning and manual commands and to control operations of the drive elements in a safe mode in which mission commands are accepted but manual commands are refused, a manual mode in which mission commands are refused but manual commands are accepted, and an enroute mode. The computer is further configured to allow mode transitions only between the safe and manual modes and between the safe and enroute modes.
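The transition rule above amounts to a small state machine in which the safe mode is the hub: manual and enroute can only be reached from (and left for) safe. A minimal sketch, with mode names and the stay-put behavior on a refused transition assumed:

```python
# Sketch of the hub-and-spoke mode transitions: only safe<->manual and
# safe<->enroute are permitted, so manual -> enroute must pass through safe.

ALLOWED = {("safe", "manual"), ("manual", "safe"),
           ("safe", "enroute"), ("enroute", "safe")}

def try_transition(current, requested):
    """Return the new mode if the transition is allowed, else stay put."""
    return requested if (current, requested) in ALLOWED else current
```

This hub topology guarantees the aircraft always passes through the safe mode (where manual commands are refused) when switching between manual and mission-driven flight.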
Abstract:
The present invention provides a work vehicle including a control system that can switch between a first driving mode, for allowing the work vehicle to travel in a manned state, and a second driving mode, for allowing the work vehicle to travel in an unmanned state. The control system performs control such that, during execution of the second driving mode, the number of types of information exchanged by communication in the control system becomes less than that during the first driving mode, or the communication interval of information exchanged in the control system becomes longer than that during the first driving mode.
Abstract:
A method, apparatus, and/or system for providing an action with respect to a mobile device using a robotic device that tracks a user and interacts with a charging management engine. In accordance with at least one embodiment, a request to perform an action with respect to an electronic device is received. Information may be sent to one or more robotic devices within a proximity of the electronic device. A robotic device of the one or more robotic devices may be selected to perform the action. An indication may be received from the robotic device that indicates that the user has interacted with the robotic device. Instructions may be sent to the robotic device to perform the action with respect to the electronic device.
Abstract:
Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
Abstract:
A robotic device may be operated by a learning controller comprising a feature-learning process configured to determine a control signal based on sensory input. An input may be analyzed in order to determine the occurrence of one or more features. Features in the input may be associated with the control signal during online supervised training. During training, the learning process may be adapted based on the training input and the predicted output. A combination of the predicted and the target output may be provided to the robotic device to execute a task. Feature determination may comprise online adaptation of the input and sparse-encoding transformations. Computations related to learning-process adaptation and feature detection may be performed on board the robotic device in real time, thereby enabling autonomous navigation by trained robots.
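The training loop above, in which the robot is driven by a blend of the teacher's target and the learner's prediction while the learner adapts, can be sketched as follows. The linear feature model, learning rate, and fixed mixing weight are assumptions for illustration; the patent's feature learning would be richer (e.g., sparse encoding).

```python
# Sketch of online supervised training: each step adapts the learner
# toward the target signal and returns the blended control actually
# sent to the robot.

class OnlineLearner:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # weights of a toy linear model
        self.lr = lr

    def predict(self, features):
        """Predicted control signal for the given feature vector."""
        return sum(wi * xi for wi, xi in zip(self.w, features))

    def train_step(self, features, target, mix=0.5):
        """Adapt weights toward the target and return the combination of
        target and predicted output provided to the robot."""
        pred = self.predict(features)
        err = target - pred
        self.w = [wi + self.lr * err * xi
                  for wi, xi in zip(self.w, features)]
        return mix * target + (1 - mix) * pred
```

As training progresses the prediction approaches the target, so the blended control converges to what the learner would produce alone; at that point the teacher signal can be withdrawn and the robot operates autonomously.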