Abstract:
Apparatus and methods for training and controlling of, e.g., robotic devices. In one implementation, a robot may be utilized to perform a target task characterized by a target trajectory. The robot may be trained by a user using supervised learning. The user may interface with the robot, such as via a control apparatus configured to provide a teaching signal to the robot. The robot may comprise an adaptive controller comprising a neuron network, which may be configured to generate actuator control commands based on the user input and the output of the learning process. During one or more learning trials, the controller may be trained to navigate a portion of the target trajectory. Individual trajectory portions may be trained during separate training trials. Some portions may be associated with the robot executing complex actions and may require additional training trials and/or denser training input compared to simpler trajectory actions.
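The trial-based scheme above can be illustrated with a minimal sketch. All names, values, and the learning rule here are hypothetical: a scalar command is trained per trajectory portion, and a more complex portion simply receives more trials (denser teaching) than a simple one.

```python
# Sketch of per-portion trial-based training (hypothetical names/values).
# Complex trajectory portions receive more training trials than simple ones.

def train_portion(weight, target, trials, rate=0.5):
    """Run several supervised trials on one trajectory portion."""
    for _ in range(trials):
        output = weight             # controller's current command
        teaching = target - output  # user's corrective teaching signal
        weight += rate * teaching   # adjust the learning process
    return weight

# Simple portion: few trials; complex portion: more trials.
portions = [("straight", 1.0, 3), ("sharp turn", -2.0, 10)]
weights = {name: train_portion(0.0, tgt, n) for name, tgt, n in portions}
```

With this illustrative rule the residual error shrinks geometrically with trial count, so the portion given more trials ends closer to its target command.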
Abstract:
Robotic devices may be trained by a trainer guiding the robot along a target trajectory using physical contact with the robot. The robot may comprise an adaptive controller configured to generate control commands based on one or more of the trainer input, sensory input, and/or a performance measure. The trainer may observe task execution by the robot. Responsive to observing a discrepancy between the target behavior and the actual behavior, the trainer may provide a teaching input via a haptic action. The robot may execute the action based on a combination of the internal control signal produced by a learning process of the robot and the training input. The robot may infer the teaching input based on a comparison of a predicted state and the actual state of the robot. The robot's learning process may be adjusted in accordance with the teaching input so as to reduce the discrepancy during a subsequent trial.
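The haptic-teaching inference described above can be sketched in a few lines. This is an illustrative 1-D model, not the patented implementation: the teaching input is taken as the difference between the predicted and actually measured state, and a hypothetical gain is nudged to reduce the discrepancy on the next trial.

```python
# Sketch of inferring a haptic teaching input (hypothetical 1-D model).
# The robot compares its predicted state with the state actually reached
# after the trainer physically guided it; the difference is treated as
# the teaching input.

def infer_teaching(predicted_state, actual_state):
    """Teaching input inferred from the state discrepancy."""
    return actual_state - predicted_state

def update_policy(gain, teaching, rate=0.3):
    """Adjust the learning process to reduce the discrepancy next trial."""
    return gain + rate * teaching

gain = 0.0
for _ in range(20):                      # repeated training trials
    predicted = gain * 1.0               # internal control prediction
    actual = predicted + (1.0 - gain)    # trainer nudges toward target 1.0
    gain = update_policy(gain, infer_teaching(predicted, actual))
```

Over repeated trials the inferred teaching input shrinks, so the trainer's haptic corrections become unnecessary as the learned gain approaches the target.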
Abstract:
A framework may be implemented for transferring knowledge from an external agent to a robotic controller. In an obstacle avoidance/target approach application, the controller may be configured to determine a teaching signal based on a sensory input, the teaching signal conveying information associated with a target action consistent with the sensory input, the sensory input being indicative of the target/obstacle. The controller may be configured to determine a control signal based on the sensory input, the control signal conveying information associated with a target approach/avoidance action. The controller may determine a predicted control signal based on the sensory input and the teaching signal, the predicted control signal conveying information associated with the target action. The control signal may be combined with the predicted control signal in order to cause the robotic apparatus to execute the target action.
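The final combination step can be sketched as follows. The weighting scheme is an assumption for illustration; the abstract does not fix a particular combiner.

```python
# Sketch of combining the controller's signal with the predictor's
# output (hypothetical weighting; the combiner itself is an assumption).

def combine(control, predicted, w=0.5):
    """Weighted combination driving the actuators."""
    return w * control + (1.0 - w) * predicted

# e.g., an avoidance command from the controller blended with the
# learned predicted control signal, steering away from an obstacle.
command = combine(control=-1.0, predicted=-0.8)
```

As the predicted control signal improves with training, the same combiner lets the learned component carry an increasing share of the executed action.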
Abstract:
Apparatus and methods for processing inputs by one or more neurons of a network. The neuron(s) may generate spikes based on receipt of multiple inputs. Latency of spike generation may be determined based on an input magnitude. Inputs may be scaled using, for example, a non-linear concave transform. Scaling may increase neuron sensitivity to lower-magnitude inputs, thereby improving latency encoding of small-amplitude inputs. The transformation function may be configured to be compatible with existing non-scaling neuron processes and used as a plug-in to existing neuron models. Use of input scaling may allow for improved network operation and reduced task simulation time.
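The scaling idea can be sketched numerically. Both the square-root transform (one example of a non-linear concave function) and the logarithmic latency rule below are illustrative assumptions, not the specific functions claimed.

```python
import math

# Sketch of concave input scaling before latency encoding.
# sqrt is one example of a non-linear concave transform; the latency
# rule (latency falls with input magnitude) is also illustrative.

def latency(u, tau=1.0, u_max=1.0):
    """Spike latency decreases as input magnitude grows."""
    return tau * math.log(u_max / u) if 0 < u <= u_max else float("inf")

def scaled(u):
    return math.sqrt(u)  # concave: boosts small magnitudes

small = 0.01
# Scaling shortens the latency assigned to a small-amplitude input,
# improving its representation in the latency code.
assert latency(scaled(small)) < latency(small)
```

Because the concave transform compresses the input range, small inputs spike earlier than they otherwise would, while large inputs are affected comparatively little.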
Abstract:
Systems and methods for robotic detection of escalators are disclosed herein. According to at least one non-limiting exemplary embodiment, a robot may navigate a learned route and utilize one or more methods of detecting an escalator using data from its sensors. The robot may subsequently avoid the area comprising the escalator.
Abstract:
Systems and methods for a universal connection interface between a robot and a plurality of modular attachments are disclosed. The connection interface includes a data connection and a dynamic amplifier configured to adjust the output of at least one electromechanically coupled mechanical output, and a processor configured to control the gain of the dynamic amplifier.
Abstract:
An optical object detection apparatus and associated methods. The apparatus may comprise a lens (e.g., a fixed-focal-length, wide-aperture lens) and an image sensor. The fixed focal length of the lens may correspond to a depth of field area in front of the lens. When an object enters the depth of field area (e.g., due to relative motion between the object and the lens), the object representation on the image sensor plane may be in focus. Objects outside the depth of field area may be out of focus. In-focus representations of objects may be characterized by a greater contrast parameter compared to out-of-focus representations. One or more images provided by the detection apparatus may be analyzed in order to determine useful information (e.g., an image contrast parameter) of a given image. Based on the image contrast meeting one or more criteria, a detection indication may be produced.
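The contrast-based detection step can be sketched as follows. Intensity variance is one simple choice of contrast parameter, and the threshold is a hypothetical value; the abstract does not prescribe either.

```python
# Sketch of contrast-based in-focus detection (hypothetical threshold;
# variance of pixel intensities is one simple contrast parameter).

def contrast(image):
    """Contrast parameter: variance of pixel intensities."""
    n = len(image)
    mean = sum(image) / n
    return sum((p - mean) ** 2 for p in image) / n

def detect(image, threshold=0.05):
    """Detection indication when the object is in focus (high contrast)."""
    return contrast(image) >= threshold

blurred  = [0.45, 0.50, 0.55, 0.50]  # out of focus: low contrast
in_focus = [0.1, 0.9, 0.1, 0.9]      # in-focus edge: high contrast
```

An object passing through the depth of field area produces a contrast spike in the image sequence, which crosses the threshold only while the representation is in focus.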
Abstract:
Systems and methods for automatic detection of spills are disclosed. In some exemplary implementations, a robot can have a spill detector comprising at least one optical imaging device configured to capture at least one image of a scene containing a spill while the robot moves between locations. The robot can process the at least one image by segmentation. Once the spill has been identified, the robot can then generate an alert indicative at least in part of a recognition of the spill.
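The segmentation-then-alert pipeline can be sketched minimally. Simple intensity thresholding stands in for the segmentation step here, and the image and thresholds are hypothetical; a deployed system would use a richer segmentation method.

```python
# Sketch of spill identification by threshold segmentation
# (hypothetical grayscale image and thresholds).

def segment(image, threshold=0.7):
    """Label pixels brighter than the threshold as spill candidates."""
    return [[1 if p > threshold else 0 for p in row] for row in image]

def spill_alert(image, min_pixels=3):
    """Generate an alert if the segmented spill region is large enough."""
    mask = segment(image)
    return sum(map(sum, mask)) >= min_pixels

scene = [
    [0.2, 0.2, 0.1],
    [0.8, 0.9, 0.2],  # bright reflective patch: candidate spill
    [0.8, 0.3, 0.1],
]
```

The minimum-region-size check suppresses single-pixel noise, so an alert is raised only when a coherent candidate region is found.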
Abstract:
Apparatus and methods for a modular robotic device with artificial intelligence that is receptive to training controls. In one implementation, a modular robotic device architecture may be used to provide all or most high-cost components in an autonomy module (AM) that is separate from the robotic body. The autonomy module may comprise a controller, a power supply, and actuators that may be connected to controllable elements of the robotic body. The controller may position limbs of the toy in a target position. A user may utilize a haptic training approach in order to enable the robotic toy to perform target action(s). The modular configuration of the disclosure enables users to replace one toy body (e.g., the bear) with another (e.g., a giraffe) while using the hardware provided by the autonomy module. The modular architecture may enable users to purchase a single AM for use with multiple robotic bodies, thereby reducing the overall cost of ownership.
Abstract:
Apparatus and methods for training and operating of robotic devices. A robotic controller may comprise a predictor apparatus configured to generate motor control output. The predictor may be operable in accordance with a learning process based on a teaching signal comprising the control output. An adaptive controller block may provide control output that may be combined with the predicted control output. The predictor learning process may be configured to learn the combined control signal. Predictor training may comprise a plurality of trials. During an initial trial, the control output may be capable of causing a robot to perform a task. During intermediate trials, individual contributions from the controller block and the predictor may be inadequate for the task. Upon learning, the control knowledge may be transferred to the predictor so as to enable task execution in the absence of subsequent inputs from the controller. Control output and/or predictor output may comprise multi-channel signals.
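The trial-by-trial transfer of control knowledge can be sketched with a scalar model. The learning rule and rate below are illustrative assumptions; the key point from the abstract is that the combined signal serves as the predictor's teaching signal, so the controller's contribution fades as the predictor learns.

```python
# Sketch of control transfer to the predictor over trials (illustrative
# learning rule; the combined signal is the predictor's teaching signal).

def run_trials(target=1.0, trials=50, rate=0.2):
    predictor = 0.0
    for _ in range(trials):
        controller = target - predictor             # corrective contribution
        combined = controller + predictor           # drives the robot
        predictor += rate * (combined - predictor)  # learn the combined signal
    return predictor, controller

predictor, controller = run_trials()
# Upon learning, the predictor alone suffices and the controller's
# contribution approaches zero.
```

Early trials are dominated by the controller, intermediate trials split the burden between the two, and the final trials are driven almost entirely by the predictor, mirroring the progression described above.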