Abstract:
Apparatus and methods for salient feature detection by a spiking neuron network. The network may comprise feature-specific units capable of responding to different object features (e.g., red and green color). The plasticity mechanism of the network may be configured based on a difference between two similarity measures related to the activity of different unit types obtained during network training. One similarity measure may be based on the activity of units of the same type (red); another may be based on the activity of units of one type (red) and another type (green). Similarity measures may comprise a cross-correlogram and/or mutual information determined over an activity window. During network operation, the activity-based plasticity mechanism may be used to potentiate connections between units of the same type (red-red) and to depress connections between units of different types (red-green). The plasticity mechanism may effectuate detection of salient features in the input.
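As a minimal sketch of the rule described above (function names, constants, and the correlogram-based similarity shown here are illustrative assumptions, not the disclosed implementation), connections between same-type units could be potentiated and connections between different-type units depressed by an amount scaled by the difference between the two similarity measures:

    # Hypothetical sketch of the activity-based plasticity rule; names are illustrative.
    import numpy as np

    def similarity(spikes_a, spikes_b, max_lag=5):
        """Crude cross-correlogram-based similarity between two binary spike trains."""
        lags = range(-max_lag, max_lag + 1)
        corr = [np.sum(spikes_a * np.roll(spikes_b, lag)) for lag in lags]
        return np.max(corr) / (np.sum(spikes_a) + np.sum(spikes_b) + 1e-9)

    def update_weight(w, pre_type, post_type, s_same, s_diff, lr=0.01):
        """Potentiate same-type connections, depress different-type ones,
        scaled by the difference between the two similarity measures."""
        delta = lr * (s_same - s_diff)
        return w + delta if pre_type == post_type else w - delta

    # Toy activity window for two "red" units and one "green" unit
    rng = np.random.default_rng(0)
    red_1, red_2 = rng.integers(0, 2, 100), rng.integers(0, 2, 100)
    green_1 = rng.integers(0, 2, 100)

    s_same = similarity(red_1, red_2)       # red-red similarity
    s_diff = similarity(red_1, green_1)     # red-green similarity

    w_rr = update_weight(0.5, "red", "red", s_same, s_diff)     # potentiated
    w_rg = update_weight(0.5, "red", "green", s_same, s_diff)   # depressed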
Abstract:
Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
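A minimal sketch of such a correction-driven policy update, assuming a simple linear policy and illustrative names (none taken from the disclosure): each corrective command nudges the policy toward the user's intent, and after training the same policy maps sensory data to control commands autonomously.

    # Hypothetical sketch of refining a policy from corrective commands.
    import numpy as np

    class CorrectablePolicy:
        def __init__(self, n_sensors, n_actions, lr=0.05):
            self.W = np.zeros((n_actions, n_sensors))
            self.lr = lr

        def act(self, sensors):
            return self.W @ sensors                      # proposed control command

        def correct(self, sensors, user_command):
            """Refine the policy toward the user's corrective command."""
            error = user_command - self.act(sensors)
            self.W += self.lr * np.outer(error, sensors)

    policy = CorrectablePolicy(n_sensors=4, n_actions=2)
    sensors = np.array([0.2, -0.1, 0.8, 0.0])
    policy.correct(sensors, user_command=np.array([1.0, -0.5]))  # training step
    autonomous_cmd = policy.act(sensors)                          # after learning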
Abstract:
Apparatus and methods for training and control of, for instance, robotic devices. In one implementation, a robot may be trained by a user using supervised learning. The user may be unable to control all degrees of freedom of the robot simultaneously. The user may interface with the robot via a control apparatus configured to select and operate a subset of the robot's complement of actuators. The robot may comprise an adaptive controller comprising a neuron network. The adaptive controller may be configured to generate actuator control commands based on the user input and the output of the learning process. Training of the adaptive controller may comprise partial-set training: the user may train the adaptive controller to operate a first actuator subset. Subsequent to learning to operate the first subset, the adaptive controller may be trained to operate another subset of degrees of freedom based on user input via the control apparatus.
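One hypothetical way to express the partial-set training idea, assuming a simple linear controller and illustrative names, is to restrict each training stage's weight update to the actuator subset currently under user control:

    # Illustrative sketch of partial-set training: one actuator subset at a time.
    import numpy as np

    class SubsetTrainableController:
        def __init__(self, n_sensors, n_actuators, lr=0.05):
            self.W = np.zeros((n_actuators, n_sensors))
            self.lr = lr

        def command(self, sensors):
            return self.W @ sensors

        def train_subset(self, sensors, user_command, subset):
            """Update only the rows for the actuator subset being trained."""
            error = user_command - self.command(sensors)
            for i in subset:
                self.W[i] += self.lr * error[i] * sensors

    ctrl = SubsetTrainableController(n_sensors=3, n_actuators=4)
    x = np.array([0.5, 0.1, -0.3])
    target = np.array([1.0, 0.0, 0.0, -1.0])
    ctrl.train_subset(x, target, subset=[0, 1])   # first stage: actuators 0-1
    ctrl.train_subset(x, target, subset=[2, 3])   # later stage: actuators 2-3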
Abstract:
Apparatus and methods for activity-based plasticity in a spiking neuron network adapted to process sensory input. In one approach, the plasticity mechanism of a connection may comprise a causal potentiation portion and an anti-causal portion. The anti-causal portion, corresponding to input into a neuron arriving after the neuron's response, may be configured based on the prior activity of the neuron. When the neuron is in a low-activity state, the connection, when active, may be potentiated by a base amount. When the neuron's activity increases due to another input, the efficacy of the connection, if active, may be reduced proportionally to the neuron's activity. Such functionality may enable the network to maintain strong, albeit inactive, connections available for use over extended intervals.
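A hedged sketch of the anti-causal rule described above, with illustrative constants and names (the base increment, the activity threshold, and the proportional depression factor are assumptions):

    # Hypothetical activity-dependent anti-causal update for a connection whose
    # input arrived after the post-synaptic response.
    def anticausal_update(w, recent_activity, base=0.02, k=0.05, low_activity=0.1):
        if recent_activity <= low_activity:
            return w + base                 # quiet neuron: potentiate by base amount
        return w - k * recent_activity      # active neuron: depress proportionally

    w = 0.5
    w = anticausal_update(w, recent_activity=0.05)   # low activity -> +base
    w = anticausal_update(w, recent_activity=0.8)    # high activity -> depression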
Abstract:
Robotic devices may be trained by a user guiding the robot along a target action trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate a control signal based on one or more of the user guidance, sensory input, a performance measure, and/or other information. Training may comprise a plurality of trials, wherein for a given context the user and the robot's controller may collaborate to develop an association between the context and the target action. Upon developing the association, the adaptive controller may be capable of generating the control signal and/or an action indication prior to and/or in lieu of user input. The predictive control functionality attained by the controller may enable autonomous operation of robotic devices, obviating the need for continuing user guidance.
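A minimal sketch of the collaborative-trial idea, under the assumption of a simple linear context-to-action mapping (names and the blending scheme are illustrative, not the disclosed mechanism): over repeated trials the teacher's signal and the controller's prediction are combined, and the learned association eventually lets the controller act without user input.

    # Hypothetical sketch of trial-based context-action association.
    import numpy as np

    class PredictiveController:
        def __init__(self, n_context, n_action, lr=0.1):
            self.W = np.zeros((n_action, n_context))
            self.lr = lr

        def predict(self, context):
            return self.W @ context

        def trial(self, context, teacher, mix=0.5):
            """Blend teacher guidance with the current prediction, then learn."""
            action = mix * teacher + (1.0 - mix) * self.predict(context)
            self.W += self.lr * np.outer(teacher - self.predict(context), context)
            return action

    ctrl = PredictiveController(n_context=3, n_action=2)
    context = np.array([1.0, 0.0, 0.5])
    teacher = np.array([0.8, -0.2])
    for _ in range(20):                     # repeated trials develop the association
        ctrl.trial(context, teacher)
    autonomous_action = ctrl.predict(context)   # approaches teacher; no user input needed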
Abstract:
Apparatus and methods for training and control of, e.g., robotic devices. In one implementation, a robot may be utilized to perform a target task characterized by a target trajectory. The robot may be trained by a user using supervised learning. The user may interface with the robot, such as via a control apparatus configured to provide a teaching signal to the robot. The robot may comprise an adaptive controller comprising a neuron network, which may be configured to generate actuator control commands based on the user input and the output of the learning process. During one or more learning trials, the controller may be trained to navigate a portion of the target trajectory. Individual trajectory portions may be trained during separate training trials. Some portions may be associated with the robot executing complex actions and may require additional training trials and/or denser training input compared to simpler trajectory actions.
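As an illustrative sketch only (segment names, trial counts, and the heuristic are assumptions), a training schedule could allocate more trials, i.e., denser teaching input, to the trajectory portions involving complex actions:

    # Hypothetical per-portion training schedule.
    segments = [
        {"name": "straight approach", "complex": False},
        {"name": "narrow doorway turn", "complex": True},
        {"name": "final docking", "complex": True},
    ]

    def trials_needed(segment, base=3, extra=7):
        """Simple heuristic: complex segments receive additional training trials."""
        return base + (extra if segment["complex"] else 0)

    schedule = {s["name"]: trials_needed(s) for s in segments}
    # {'straight approach': 3, 'narrow doorway turn': 10, 'final docking': 10}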
Abstract:
Apparatus and methods for a modular robotic device with artificial intelligence that is receptive to training controls. In one implementation, a modular robotic device architecture may be used to provide all or most high-cost components in an autonomy module (AM) that is separate from the robotic body. The autonomy module may comprise a controller, a power supply, and actuators that may be connected to controllable elements of the robotic body. The controller may position limbs of the toy in a target position. A user may utilize a haptic training approach in order to enable the robotic toy to perform target action(s). The modular configuration of the disclosure enables users to replace one toy body (e.g., the bear) with another (e.g., a giraffe) while reusing the hardware provided by the autonomy module. The modular architecture may enable users to purchase a single AM for use with multiple robotic bodies, thereby reducing the overall cost of ownership.
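A hypothetical sketch of the modular split, with class and attribute names invented for illustration: the autonomy module holds the controller and drives whatever body is attached, so the same AM can be reused across bodies.

    # Illustrative autonomy-module / body split; all names are assumptions.
    class AutonomyModule:
        """Holds the high-cost components: controller, power, actuator drivers."""
        def __init__(self):
            self.body = None

        def attach(self, body):
            self.body = body

        def pose_limbs(self, targets):
            for limb, angle in targets.items():
                self.body.set_limb(limb, angle)   # drive the body's controllable elements

    class RobotBody:
        """A low-cost body (e.g., bear or giraffe) with controllable limbs."""
        def __init__(self, name, limbs):
            self.name, self.limb_angles = name, {l: 0.0 for l in limbs}

        def set_limb(self, limb, angle):
            self.limb_angles[limb] = angle

    am = AutonomyModule()
    am.attach(RobotBody("bear", ["left_arm", "right_arm"]))
    am.pose_limbs({"left_arm": 30.0})             # the same AM could later drive a giraffe body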