-
Publication No.: US20190392321A1
Publication Date: 2019-12-26
Application No.: US16265212
Filing Date: 2019-02-01
Applicants: Juyang Weng, Zejia Zheng, Xiang Wu
Inventors: Juyang Weng, Zejia Zheng, Xiang Wu
IPC Class: G06N3/08
Abstract: This invention includes a new type of neural network that automatically and incrementally generates an internal hierarchy, without a need to handcraft a static hierarchy of network areas, a static number of levels, or a static number of neurons in each network area or level. This capability is achieved by giving each neuron its own dynamic inhibitory zone through neuron-specific inhibitory connections.
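The abstract states the mechanism (each neuron owns a dynamic inhibitory zone realized by neuron-specific inhibitory connections) but not its update rules. The sketch below is therefore only one way such zones could behave; the cosine response, the winner-silences-weaker competition, and the nearest-responders zone rule are illustrative assumptions, not the patented method.

```python
import numpy as np

def respond_with_dynamic_inhibition(x, W, zones):
    """Post-inhibition responses for one input frame (illustrative sketch).

    x     : (d,) input vector
    W     : (n, d) bottom-up weight matrix, one row per neuron
    zones : list of index arrays; zones[i] holds the neurons that
            neuron i currently inhibits (its personal inhibitory zone)
    """
    # Pre-inhibition responses: cosine similarity with each weight vector.
    pre = W @ x / (np.linalg.norm(W, axis=1) * np.linalg.norm(x) + 1e-12)
    post = pre.copy()
    # Each neuron silences the weaker responders inside its own zone.
    for i, zone in enumerate(zones):
        for j in zone:
            if pre[j] < pre[i]:
                post[j] = 0.0
    return post

def update_zone(i, pre, k=5):
    """Illustrative rule: neuron i's zone drifts to the k neurons whose
    current responses are closest to its own, so zones grow, shrink,
    and move as learning reshapes the response landscape."""
    order = np.argsort(np.abs(pre - pre[i]))
    return order[1:k + 1]   # nearest k responders, excluding neuron i itself
```

Because each neuron carries its own zone rather than competing in a globally fixed area, groups of neurons whose zones become disjoint can settle into separate clusters, which is one way an internal hierarchy could emerge without a handcrafted number of areas or levels.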
-
Publication No.: US20230034287A1
Publication Date: 2023-02-02
Application No.: US17379344
Filing Date: 2021-07-19
Applicants: Juyang Weng, Xiang Wu
Inventors: Juyang Weng, Xiang Wu
Abstract: Traditionally, speech synthesis and speech recognition were investigated as two separate learning tasks. This separation hinders incremental development of concurrent synthesis and recognition, where partially learned synthesis and partially learned recognition must help each other throughout lifelong learning. This invention is a paradigm shift: we treat synthesis and recognition as two intertwined aspects of a lifelong learning robot. Furthermore, in contrast to existing recognition or synthesis systems, babies do not need their mothers to directly supervise their vocal tracts at every moment during learning. We argue that self-generated, non-symbolic states/actions at a fine-grained time level help such a learner as necessary temporal contexts. Here we approach a new and challenging problem: how to enable an autonomous learning system to develop an artificial motor that generates temporally dense (e.g., frame-wise) actions on the fly, without a human handcrafting a set of symbolic states. The artificial motor corresponds to a combination of a multiplicity of robotic effectors, including, but not limited to, those for speaking, singing, dancing, riding a bike, swimming, and driving a car. The self-generated states/actions are Muscle-like, High-dimensional, Temporally-dense and Globally-smooth (MHTG), so that they are directly attended for concurrent synthesis and recognition at each time frame. Human teachers are relieved from supervising the learner's motor ends. Candid Covariance-free Incremental (CCI) Principal Component Analysis (PCA) is applied to develop such an artificial speaking motor, where the PCA features drive the motor. Since each life must develop normally, each Developmental Network-2 (DN-2) reaches the same network (maximum likelihood, ML) regardless of its randomly initialized weights, where the ML network is not merely a function approximator but an emergent Turing Machine. The machine-synthesized sounds are evaluated through recognition experiments by both the neural network and humans. Our experimental results showed learning-to-synthesize and learning-to-recognize-through-synthesis for phonemes. This invention is a key step toward our goal of closing a great gap toward fully autonomous machine learning directly from the physical world.
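The abstract names CCI PCA as the mechanism whose features drive the artificial speaking motor. Below is a minimal sketch of the published CCIPCA update (amnesic, covariance-free estimation of principal components one sample at a time), assuming a standard formulation; the class name, parameter names, mean handling, and small-n guard are illustrative choices, not text taken from the patent.

```python
import numpy as np

class CCIPCA:
    """Sketch of Candid Covariance-free Incremental PCA (the CCI PCA named
    in the abstract). Names and small-n handling are illustrative."""

    def __init__(self, dim, n_components, amnesic=2.0):
        self.k = n_components
        self.amnesic = amnesic                  # amnesic averaging parameter
        self.n = 0                              # samples seen so far
        self.mean = np.zeros(dim)
        self.v = np.zeros((n_components, dim))  # unnormalized component estimates

    def update(self, x):
        """Fold one observation (e.g., one acoustic frame) into the estimates."""
        self.n += 1
        n = self.n
        u = x - self.mean                       # residual against the previous mean
        self.mean += u / n                      # incremental mean update
        l = self.amnesic if n > self.amnesic + 1 else 0.0
        for i in range(min(self.k, n)):
            vi = self.v[i]
            norm = np.linalg.norm(vi)
            if i == n - 1 or norm < 1e-12:
                self.v[i] = u.copy()            # initialize a fresh component
                break
            proj = u @ vi / norm
            # Amnesic average of the (unnormalized) eigenvector estimate.
            self.v[i] = ((n - 1 - l) / n) * vi + ((1 + l) / n) * proj * u
            # Deflate: remove this direction from u before the next component.
            vi, norm = self.v[i], np.linalg.norm(self.v[i]) + 1e-12
            u = u - (u @ vi / norm) * (vi / norm)

    def components(self):
        """Unit-length principal component estimates, one per row."""
        return self.v / (np.linalg.norm(self.v, axis=1, keepdims=True) + 1e-12)
```

In the spirit of the abstract, each incoming frame would be passed to update(), and its projection onto components() would serve as the low-dimensional, temporally dense signal that drives the speaking motor, with no frame-level human supervision of the motor end.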
-
Publication No.: US20200257503A1
Publication Date: 2020-08-13
Application No.: US16270553
Filing Date: 2019-02-07
Applicants: Juyang Weng, Zejia Zheng, Xiang Wu, Juan Castro-Garcia, Shengjie Zhu
Inventors: Juyang Weng, Zejia Zheng, Xiang Wu, Juan Castro-Garcia, Shengjie Zhu
Abstract: This invention presents a method and an apparatus for general-purpose auto-programming, as well as a new kind of operating system that uses a general-purpose learning engine to learn any open-ended practical task or application. Experimental systems based on the method are applied to vision, audition, and natural language understanding.
-
Publication No.: US20220339781A1
Publication Date: 2022-10-27
Application No.: US17511525
Filing Date: 2021-10-26
Applicant: Juyang Weng
Inventor: Juyang Weng
Abstract: This invention presents a new kind of robot that learns in real time, on the fly, without a need for annotation of either sensed images or motor images. Therefore, during the learning process, such annotation-free robots are always conscious throughout their lifetimes. This invention grew from the prior art called Developmental Networks, which is already supported by its Emergent Turing Machine underpinning and its maximum-likelihood property. These key properties make it practical to close the loop from the 3D world to 2D sensory images and motor images and back to the 3D world. This invention seems to be the first algorithmic-level, holistic, neural network model for developing machine consciousness. Furthermore, this model operates through conscious learning, free from annotations of sensory images and motor images. This invention also appears to be the first to model animal-like discovery through general-purpose imitation.