Abstract:
Apparatus and methods for training of robotic devices. Robotic devices may be trained by a user guiding the robot along a target trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate control commands based on one or more of the user guidance, sensory input, and/or a performance measure. Training may comprise a plurality of trials. During a first trial, the user input may be sufficient to cause the robot to complete the trajectory. During subsequent trials, the user and the robot's controller may collaborate so that the user input may be reduced while the robot's control contribution may be increased. Individual contributions from the user and the robot's controller during training may be inadequate (when used exclusively) to complete the task. Upon learning, the user's knowledge may be transferred to the robot's controller to enable task execution in the absence of subsequent inputs from the user.
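The trial-by-trial handover described above can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the patented method: user and controller commands are blended, with the user's share decaying linearly across trials.

```python
# Sketch (assumed scheme, not the disclosed implementation): blend user guidance
# with a learned controller output, shifting authority to the robot per trial.

def blended_command(user_cmd: float, robot_cmd: float,
                    trial: int, total_trials: int) -> float:
    """Weight user input heavily in early trials, robot output in later ones."""
    alpha = max(0.0, 1.0 - trial / total_trials)  # user's share decays per trial
    return alpha * user_cmd + (1.0 - alpha) * robot_cmd

# Early trial: user input dominates; late trial: the robot's controller dominates.
early = blended_command(user_cmd=1.0, robot_cmd=0.0, trial=0, total_trials=10)
late = blended_command(user_cmd=1.0, robot_cmd=0.0, trial=9, total_trials=10)
```

The linear decay schedule is a placeholder; any monotone schedule (or a performance-driven one) fits the same pattern.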
Abstract:
Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
Abstract:
Apparatus and methods for a modular robotic device with artificial intelligence that is receptive to training controls. In one implementation, a modular robotic device architecture may be used to provide all or most high-cost components in an autonomy module (AM) that is separate from the robotic body. The autonomy module may comprise a controller, a power supply, and actuators that may be connected to controllable elements of the robotic body. The controller may position limbs of the toy in a target position. A user may utilize a haptic training approach in order to enable the robotic toy to perform target action(s). The modular configuration of the disclosure enables users to replace one toy body (e.g., a bear) with another (e.g., a giraffe) while reusing the hardware provided by the autonomy module. The modular architecture may enable users to purchase a single AM for use with multiple robotic bodies, thereby reducing the overall cost of ownership.
Abstract:
A robotic vehicle may be operated by a learning controller comprising a trainable convolutional network configured to determine a control signal based on sensory input. An input network layer may be configured to transfer the sensory input into hidden layer data using a filter convolution operation. An output layer may convert the hidden layer data to a predicted output using data segmentation and a fully connected array of efficacies. During training, the efficacy of network connections may be adapted using a measure determined based on a target output provided by a trainer and an output predicted by the network. A combination of the predicted and the target output may be provided to the vehicle to execute a task. The network adaptation may be configured using an error back-propagation method. The network may comprise an input reconstruction.
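The adaptation scheme above can be sketched in miniature. This is a toy 1-D analogue (all names and the gradient rule are assumptions, not the disclosed network): a single convolutional filter is adapted by gradient descent on the error between the trainer-supplied target and the network's prediction.

```python
# Minimal sketch: a 1-D convolutional predictor trained against a target output.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (correlation form, for simplicity)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def train_step(signal, kernel, target, lr=0.01):
    """One supervised update: nudge filter weights to reduce squared error."""
    pred = conv1d(signal, kernel)
    errs = [p - t for p, t in zip(pred, target)]
    grad = [sum(errs[i] * signal[i + j] for i in range(len(errs)))
            for j in range(len(kernel))]
    new_kernel = [w - lr * g for w, g in zip(kernel, grad)]
    return new_kernel, sum(e * e for e in errs)

signal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
target = [1.0, 0.0, 1.0, 0.0]  # desired output supplied by the trainer
kernel = [0.0, 0.0, 0.0]
loss_history = []
for _ in range(200):
    kernel, loss = train_step(signal, kernel, target)
    loss_history.append(loss)
```

In a deep network the same error signal would be back-propagated through the hidden layers; here there is only one layer, so the chain rule collapses to a single gradient.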
Abstract:
Robotic devices may be trained by a user guiding the robot along target action trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate control signal based on one or more of the user guidance, sensory input, performance measure, and/or other information. Training may comprise a plurality of trials, wherein for a given context the user and the robot's controller may collaborate to develop an association between the context and the target action. Upon developing the association, the adaptive controller may be capable of generating the control signal and/or an action indication prior and/or in lieu of user input. The predictive control functionality attained by the controller may enable autonomous operation of robotic devices obviating a need for continuing user guidance.
Abstract:
An optical object detection apparatus and associated methods. The apparatus may comprise a lens (e.g., a fixed-focal-length wide-aperture lens) and an image sensor. The fixed focal length of the lens may correspond to a depth of field area in front of the lens. When an object enters the depth of field area (e.g., due to a relative motion between the object and the lens), the object representation on the image sensor plane may be in focus. Objects outside the depth of field area may be out of focus. In-focus representations of objects may be characterized by a greater contrast parameter compared to out-of-focus representations. One or more images provided by the detection apparatus may be analyzed in order to determine useful information (e.g., an image contrast parameter) of a given image. Based on the image contrast meeting one or more criteria, a detection indication may be produced.
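The contrast criterion can be sketched as follows. The variance metric and threshold below are assumptions for illustration; the abstract does not specify which contrast parameter is used.

```python
# Hypothetical sketch: declare a detection when a simple contrast statistic of
# the image exceeds a threshold, i.e., the object has entered the depth of field.

def contrast(pixels):
    """Variance of pixel intensities as a crude contrast parameter."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def detect(pixels, threshold=0.05):
    """Detection indication: contrast criterion met."""
    return contrast(pixels) > threshold

sharp = [0.0, 1.0, 0.0, 1.0]      # high-contrast (in-focus) edge pattern
blurred = [0.45, 0.55, 0.5, 0.5]  # low-contrast (out-of-focus) version
```

Real focus measures often use gradient- or Laplacian-based statistics rather than raw intensity variance, but the thresholding logic is the same.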
Abstract:
Optical flow for a moving platform may be encoded into pulse output. The optical flow contribution induced by the platform's self-motion may be cancelled. The cancellation may be effectuated by (i) encoding the platform motion into pulse output; and (ii) inhibiting pulse generation by neurons configured to encode optical flow components that occur due to self-motion. The encoded motion may be coupled to the optical flow encoder via one or more connections. Connection propagation delays may be configured during encoder calibration in the absence of obstacles so as to provide a system-specific delay matrix. The inhibition may be based on a coincident arrival of the motion spiking signal via the calibrated connections at the optical flow encoder neurons. The coincident motion pulse arrival may be utilized in order to implement an addition of two or more vector properties.
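The coincidence-based inhibition can be sketched with spike times. All names and the single scalar delay are assumptions (the abstract describes a calibrated delay matrix over connections):

```python
# Illustrative sketch: optical-flow spikes are suppressed when a self-motion
# spike, shifted by a calibrated propagation delay, arrives at the same
# timestep -- flow explained by ego-motion is cancelled; the rest survives.

def cancel_self_motion(flow_spikes, motion_spikes, delay):
    """Drop flow spikes coincident with delayed self-motion spikes."""
    delayed = {t + delay for t in motion_spikes}
    return [t for t in flow_spikes if t not in delayed]

flow = [3, 5, 9]       # spike times from the optical-flow encoder
self_motion = [1, 7]   # spike times encoding the platform's own motion
residual = cancel_self_motion(flow, self_motion, delay=2)  # delay from calibration
```

Here the spikes at t=3 and t=9 coincide with delayed self-motion spikes and are inhibited; the spike at t=5 remains and indicates flow not attributable to ego-motion (e.g., an obstacle).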
Abstract:
A sensory encoder may be implemented. A visual encoder apparatus may comprise a spiking neuron network configured to receive photodetector input. The excitability of the neurons may be adjusted and output spikes may be generated based on the input. When neurons generate a spiking response, the spiking threshold may be dynamically adapted to produce a desired output rate. The encoder may dynamically adapt its input range to match the statistics of the input and to produce output spikes at an appropriate rate and/or latency. Adaptive input range adjustment and/or spiking threshold adjustment may collaborate to enable recognition of features in sensory input of varying dynamic range.
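The threshold adaptation can be sketched with a single neuron. The update rule below is an assumed homeostatic scheme for illustration, not the disclosed mechanism: the threshold rises after each spike and drifts down while silent, driving the firing rate toward a target.

```python
# Minimal sketch: a spiking threshold that adapts so the output rate tracks a
# target rate even when the input amplitude is far from the initial threshold.

def encode(inputs, target_rate=0.2, lr=0.1):
    """Threshold-crossing encoder with homeostatic threshold adaptation."""
    threshold, spikes = 1.0, []
    for x in inputs:
        fired = x > threshold
        spikes.append(fired)
        # Raise the threshold when firing, lower it when silent, so that the
        # long-run firing rate approaches target_rate.
        threshold += lr * ((1.0 if fired else 0.0) - target_rate)
    return spikes, threshold

weak = [0.1] * 50  # low-amplitude input, well below the initial threshold
spikes, final_threshold = encode(weak)
```

Even though the input never exceeds the initial threshold of 1.0, the adapted threshold eventually drops into the input's dynamic range and the neuron begins to spike.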
Abstract:
Apparatus and methods for arbitration of control signals for robotic devices. A robotic device may comprise an adaptive controller comprising a plurality of predictors configured to provide multiple predicted control signals based on one or more of a teaching input, sensory input, and/or performance. The predicted control signals may be configured to cause two or more actions that may be in conflict with one another and/or utilize a shared resource. An arbitrator may be employed to select one of the actions. The selection process may utilize winner-take-all (WTA), reinforcement, and/or supervisory mechanisms in order to inhibit one or more predicted signals. The arbitrator output may comprise target state information that may be provided to the predictor block. Prior to arbitration, the predicted control signals may be combined with inputs provided by an external control entity in order to reduce learning time.
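The winner-take-all selection can be sketched directly. The action names and representation below are assumptions for illustration:

```python
# Sketch of WTA arbitration: of several predicted control signals competing for
# a shared resource, only the strongest passes through; the rest are inhibited.

def winner_take_all(predictions):
    """predictions: dict mapping action name -> activation strength."""
    winner = max(predictions, key=predictions.get)
    return {name: (value if name == winner else 0.0)
            for name, value in predictions.items()}

arbitrated = winner_take_all({"turn_left": 0.3, "turn_right": 0.7, "stop": 0.1})
```

Reinforcement- or supervisor-driven arbitration would replace the fixed `max` rule with a selection signal learned or imposed from outside, but the inhibition of the losing predictions is the same.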