Abstract:
Robotic devices may be operated by users remotely. A learning controller apparatus may detect remote transmissions comprising user control instructions. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The learning apparatus may monitor one or more wavelengths (e.g., infrared light, a radio channel) and detect transmissions from the user's remote control device to the robot during its operation by the user. The learning apparatus may be configured to develop associations between the detected user remote control instructions and actions of the robot for a given context. When a given sensory context occurs, the learning controller may automatically provide to the robot the control instructions that may be associated with that context. The provision of control instructions to the robot by the learning controller may obviate the need for user remote control of the robot, thereby enabling autonomous operation by the robot.
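Below is a minimal Python sketch of the context-to-command association described above, not the disclosed implementation; the LearningController class, the cosine-similarity matching, and the match_threshold parameter are illustrative assumptions.

    import math

    class LearningController:
        """Hypothetical sketch: associates sensory context with detected remote-control commands."""
        def __init__(self, match_threshold=0.9):
            self.memory = []                     # stored (context_vector, command) pairs
            self.match_threshold = match_threshold

        def observe(self, context, detected_command=None):
            # Training phase: a transmission from the user's remote was detected,
            # so store the command together with the current context.
            if detected_command is not None:
                self.memory.append((list(context), detected_command))
                return detected_command
            # Autonomous phase: recall the command whose stored context is most similar.
            best_command, best_sim = None, 0.0
            for stored_context, command in self.memory:
                sim = self._similarity(context, stored_context)
                if sim > best_sim:
                    best_command, best_sim = command, sim
            return best_command if best_sim >= self.match_threshold else None

        @staticmethod
        def _similarity(a, b):
            # Cosine similarity between two equal-length context vectors.
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

In this sketch, calling observe() with a detected command corresponds to the user driving the robot, while calling it without one corresponds to the controller acting autonomously once a sufficiently similar context recurs.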
Abstract:
Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking provides advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units at higher levels of the hierarchical structure are more abstract than data in associative memory units at lower levels. The associative memory units can communicate with one another, supplying contextual data.
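A minimal sketch, under assumed simplifications, of the hierarchy described above: each associative memory unit compresses its bottom-up input (so higher levels hold more abstract data) and can blend in contextual data received from the unit above it. The class and function names and the dimensions below are illustrative, not the disclosed architecture.

    import numpy as np

    class AssociativeMemoryUnit:
        """One unit: compresses bottom-up input and mixes in top-down context."""
        def __init__(self, in_dim, out_dim, ctx_dim, rng):
            self.w_in = rng.standard_normal((out_dim, in_dim)) * 0.1    # bottom-up weights
            self.w_ctx = rng.standard_normal((out_dim, ctx_dim)) * 0.1  # context weights

        def forward(self, x, context=None):
            h = self.w_in @ x
            if context is not None:
                h = h + self.w_ctx @ context
            return np.tanh(h)

    def run_hierarchy(units, x, prev_activations=None):
        """One bottom-up pass; each unit receives the previous activation of the
        unit above it as context (None on the first pass)."""
        activations = []
        for i, unit in enumerate(units):
            ctx = None
            if prev_activations is not None and i + 1 < len(units):
                ctx = prev_activations[i + 1]
            x = unit.forward(x, ctx)
            activations.append(x)
        return activations

    rng = np.random.default_rng(0)
    dims = [64, 32, 16, 8]                     # higher levels are smaller, i.e. more abstract
    units = [AssociativeMemoryUnit(dims[i], dims[i + 1], dims[min(i + 2, len(dims) - 1)], rng)
             for i in range(len(dims) - 1)]
    frame = rng.standard_normal(dims[0])       # stand-in for a visual input frame
    acts = run_hierarchy(units, frame)         # first pass: no context yet
    acts = run_hierarchy(units, frame, acts)   # second pass: units above supply context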
Abstract:
Apparatus and methods for salient feature detection by a spiking neuron network. The network may comprise feature-specific units capable of responding to different objects (e.g., red and green colored objects). The plasticity mechanism of the network may be configured based on a difference between two similarity measures related to the activity of different unit types obtained during network training. One similarity measure may be based on the activity of units of the same type (red). Another similarity measure may be based on the activity of units of one type (red) and another type (green). The similarity measures may comprise a cross-correlogram and/or mutual information determined over an activity window. During network operation, the activity-based plasticity mechanism may be used to potentiate connections between units of the same type (red-red). The plasticity mechanism may be used to depress connections between units of different types (red-green). The plasticity mechanism may effectuate detection of salient features in the input.
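A minimal sketch, with assumed names, of the activity-based plasticity rule described above: a within-type similarity and an across-type similarity are computed from spike trains, and their difference potentiates same-type (red-red) connections and depresses different-type (red-green) connections. The normalized cross-correlation below stands in for the cross-correlogram/mutual-information measures.

    import numpy as np

    def spike_similarity(a, b, window=5):
        # Normalized cross-correlation of two binary spike trains over +/- window bins
        # (a stand-in for a cross-correlogram accumulated over an activity window).
        total = 0.0
        for lag in range(-window, window + 1):
            total += np.sum(a * np.roll(b, lag))
        norm = np.sqrt(np.sum(a) * np.sum(b)) * (2 * window + 1)
        return total / norm if norm > 0 else 0.0

    def plasticity_update(w, same_type_sim, cross_type_sim, lr=0.01, same_type=True):
        # Weight change driven by the difference between the two similarity measures:
        # potentiate same-type connections, depress different-type connections.
        delta = lr * (same_type_sim - cross_type_sim)
        return w + delta if same_type else w - delta

    rng = np.random.default_rng(1)
    red_a = (rng.random(200) < 0.1).astype(float)      # spike trains of two "red" units
    red_b = (rng.random(200) < 0.1).astype(float)
    green = (rng.random(200) < 0.1).astype(float)       # spike train of a "green" unit
    s_same = spike_similarity(red_a, red_b)
    s_cross = spike_similarity(red_a, green)
    w_red_red = plasticity_update(0.5, s_same, s_cross, same_type=True)     # potentiated
    w_red_green = plasticity_update(0.5, s_same, s_cross, same_type=False)  # depressed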
Abstract:
Computerized appliances may be operated by users remotely. A learning controller apparatus may be operated to determine an association between a user indication and an action by the appliance. The user indications, e.g., gestures, posture changes, and/or audio signals, may trigger an event associated with the controller. The event may be linked to a plurality of instructions configured to communicate a command to the appliance. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The sensory input may be used to determine the user indications. During operation, upon determining an indication using the sensory input, the controller may cause execution of the respective instructions in order to trigger an action by the appliance. Device animation methodology may enable users to operate computerized appliances using gestures, voice commands, posture changes, and/or other customized control elements.
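A minimal sketch, with hypothetical names, of linking a learned user indication to an event whose instructions communicate a command to the appliance; the ApplianceController class, the event_table mapping, and the print-based command sender are illustrative assumptions.

    class ApplianceController:
        def __init__(self, send_command):
            self.send_command = send_command   # callable that transmits a command to the appliance
            self.event_table = {}              # indication label -> linked instructions

        def learn(self, indication, instructions):
            # Associate a user indication (e.g., a gesture label) with instructions.
            self.event_table[indication] = list(instructions)

        def on_sensory_input(self, classify, sensory_input):
            # Determine the indication from the sensory input; if it matches a
            # learned indication, execute the linked instructions.
            indication = classify(sensory_input)
            for command in self.event_table.get(indication, []):
                self.send_command(command)

    # Example: a "raise_hand" gesture is linked to turning a lamp on.
    controller = ApplianceController(send_command=print)
    controller.learn("raise_hand", ["lamp:on"])
    controller.on_sensory_input(lambda frame: "raise_hand", sensory_input=None)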
Abstract:
A simple format, referred to as Elementary Network Description (END), is disclosed. The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing-dependent plasticity. The software and hardware engines are optimized to take into account short-term and long-term synaptic plasticity in the form of LTD, LTP, and STDP.
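A minimal sketch of the kind of information an END-style description carries (a list of units and plastic junctions) together with a textbook pair-based STDP update; the dictionary fields and parameter values below are illustrative and are not the actual END syntax.

    import math

    network = {
        "units": [
            {"id": 0, "type": "exc", "update": "v += I; spike = v > 30"},
            {"id": 1, "type": "exc", "update": "v += I; spike = v > 30"},
        ],
        "junctions": [
            {"pre": 0, "post": 1, "weight": 0.5, "plasticity": "stdp"},
        ],
    }

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        # Pair-based STDP: LTP when the presynaptic spike precedes the
        # postsynaptic spike (dt_ms > 0), LTD otherwise.
        if dt_ms > 0:
            return a_plus * math.exp(-dt_ms / tau_ms)
        return -a_minus * math.exp(dt_ms / tau_ms)

    # Pre spike at 10 ms, post spike at 15 ms -> potentiation (LTP).
    network["junctions"][0]["weight"] += stdp_dw(15 - 10)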