Apparatus and method for statistical memory network

    Publication No.: US11526732B2

    Publication Date: 2022-12-13

    Application No.: US16260637

    Filing Date: 2019-01-29

    Abstract: Provided are an apparatus and method for a statistical memory network. The apparatus includes a stochastic memory, an uncertainty estimator configured to estimate uncertainty information of external input signals from the input signals and provide the uncertainty information of the input signals, a writing controller configured to generate parameters for writing in the stochastic memory using the external input signals and the uncertainty information and generate additional statistics by converting statistics of the external input signals, a writing probability calculator configured to calculate a probability of a writing position of the stochastic memory using the parameters for writing, and a statistic updater configured to update stochastic values composed of an average and a variance of signals in the stochastic memory using the probability of a writing position, the parameters for writing, and the additional statistics.
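
The abstract describes the statistic updater only at a block level. As a rough, hypothetical sketch of how per-slot means and variances might be soft-updated with a write probability (the function name, shapes, and the mixture-style variance update are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def write_update(memory_mean, memory_var, write_prob, new_mean, new_var):
    """Soft-write new statistics into memory slots, weighted by write probability.

    memory_mean, memory_var: (slots, dim) per-slot mean and variance
    write_prob:              (slots,) probability of writing to each slot
    new_mean, new_var:       (dim,) statistics derived from the current input
    """
    p = write_prob[:, None]
    updated_mean = (1 - p) * memory_mean + p * new_mean
    # Mixture-of-Gaussians variance: blend the two variances and add the
    # contribution of the squared shift between the old and new means.
    updated_var = ((1 - p) * memory_var + p * new_var
                   + p * (1 - p) * (memory_mean - new_mean) ** 2)
    return updated_mean, updated_var
```

With `write_prob` of all zeros the memory is untouched; with all ones each slot is fully overwritten by the new statistics, so the write probability interpolates smoothly between the two.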

    MOBILE COMMUNICATION TERMINAL AND OPERATING METHOD THEREOF (Granted)

    Publication No.: US20140221043A1

    Publication Date: 2014-08-07

    Application No.: US14018068

    Filing Date: 2013-09-04

    IPC Classes: H04W8/22 H04M1/02

    Abstract: Provided is a mobile communication terminal including: a camera module which captures an image of a set area; a microphone module which, when a sound including the user's voice is input, extracts a sound level and a sound-generating position corresponding to the sound; and a control module which estimates the position of the user's lips from the image, extracts a voice level from the sound level and a voice-generating position from the sound-generating position based on the lip position, and recognizes the user's voice based on at least one of the voice level and the voice-generating position.
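
The control module's decision rule ("at least one of the voice level and the voice-generating position") can be sketched as a simple gate. This is a hypothetical illustration only; the thresholds, angle representation, and function name are assumptions, not from the patent:

```python
def accept_for_recognition(voice_level, lip_angle, sound_angle,
                           level_thresh=0.3, angle_tol_deg=15.0):
    """Gate audio for speech recognition using at least one of two cues:
    the extracted voice level, or whether the sound direction matches the
    lip position estimated from the camera image."""
    loud_enough = voice_level >= level_thresh                   # cue 1: voice level
    from_lips = abs(sound_angle - lip_angle) <= angle_tol_deg   # cue 2: position
    return loud_enough or from_lips
```

Either cue alone suffices to accept the audio, matching the "at least one of" wording of the abstract.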


    Sentence embedding method and apparatus based on subword embedding and skip-thoughts

    Publication No.: US11423238B2

    Publication Date: 2022-08-23

    Application No.: US16671773

    Filing Date: 2019-11-01

    Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, two methodologies are provided for applying intra-sentence contextual information to subword embedding during subword embedding learning: a skip-thought sentence embedding learning method based on subword embedding, and a multitask learning methodology that learns subword embedding and skip-thought sentence embedding simultaneously. This makes it possible to apply a sentence embedding approach, in a bag-of-words form, to agglutinative languages such as Korean. Also, the skip-thought sentence embedding learning methodology is integrated with the subword embedding technique so that intra-sentence contextual information can be used during subword embedding learning. The proposed model minimizes the additional training parameters beyond subword embedding, so that most training results accumulate in the subword embedding parameters.
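
As a rough illustration of the bag-of-words side of this approach, here is a minimal sketch of composing a sentence vector from subword (character n-gram) embeddings, FastText-style. The n-gram scheme, averaging, and all names are assumptions for illustration; the patent's actual model additionally trains these embeddings with a skip-thought objective:

```python
import numpy as np

def subword_ngrams(token, n_min=2, n_max=4):
    """FastText-style character n-grams with '<' and '>' boundary markers."""
    t = f"<{token}>"
    return [t[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(t) - n + 1)]

def sentence_vector(sentence, subword_vecs, dim):
    """Bag-of-words sentence embedding: average the vectors of all subwords
    of all tokens; unknown subwords are simply skipped."""
    grams = [g for tok in sentence.split() for g in subword_ngrams(tok)]
    vecs = [subword_vecs[g] for g in grams if g in subword_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```

Because tokens are decomposed into character n-grams, morphological variants of an agglutinative language like Korean share most of their subwords, which is what makes the bag-of-words composition viable.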

    Self-learning based dialogue apparatus and method for incremental dialogue knowledge

    Publication No.: US10332033B2

    Publication Date: 2019-06-25

    Application No.: US15405425

    Filing Date: 2017-01-13

    Abstract: An incremental self-learning based dialogue apparatus for dialogue knowledge includes a dialogue processing unit configured to determine an intention of a user utterance by using a knowledge base and perform processing or a response suitable for the user intention, a dialogue establishment unit configured to automatically learn a user intention stored in an intention-annotated learning corpus, store information about the learned user intention in the knowledge base, and edit and manage the knowledge base and the intention-annotated learning corpus, and a self-knowledge augmentation unit configured to store a log of dialogues performed by the dialogue processing unit, detect and classify errors in the stored dialogue log, automatically tag a user intention for each detected and classified error, and store the tagged user intention in the intention-annotated learning corpus.
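
The self-knowledge augmentation loop (detect errors in the log, auto-tag intentions, grow the corpus) might look roughly like the following sketch. The confidence-threshold heuristic, the log format, and every name here are assumptions for illustration, not details from the patent:

```python
def augment_corpus(dialogue_log, classify_intent, corpus, conf_thresh=0.9):
    """Scan a stored dialogue log for utterances whose intention was missed
    (logged as None), auto-tag the ones the classifier is now confident
    about, and append them to the intention-annotated learning corpus."""
    for utterance, logged_intent in dialogue_log:
        if logged_intent is not None:
            continue  # handled correctly at dialogue time; nothing to learn
        intent, confidence = classify_intent(utterance)
        if confidence >= conf_thresh:
            corpus.append((utterance, intent))  # newly tagged training example
    return corpus
```

Each pass through the log yields new annotated examples, so retraining on the grown corpus incrementally extends the dialogue knowledge without manual labeling.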

    Apparatus and method for linearly approximating deep neural network model

    Publication No.: US10789332B2

    Publication Date: 2020-09-29

    Application No.: US16121836

    Filing Date: 2018-09-05

    IPC Classes: G06F17/17 G06N3/04

    Abstract: Provided are an apparatus and method for linearly approximating a deep neural network (DNN) model, which is a non-linear function. In general, a DNN model shows good performance in generation or classification tasks. However, the DNN fundamentally has non-linear characteristics, and therefore it is difficult to interpret how a result has been derived from the inputs given to a black-box model. To solve this problem, linear approximation of a DNN is proposed. The method for linearly approximating a DNN model includes 1) converting a neuron constituting a DNN into a polynomial, and 2) classifying the obtained polynomial as a polynomial of input signals and a polynomial of weights.
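
Step 1 of the method, converting a neuron into a polynomial, can be illustrated with a Taylor expansion of a tanh neuron. The choice of tanh, the expansion point, and all names are assumptions for illustration; the patent does not specify the activation or expansion:

```python
import numpy as np

def tanh_poly(z, order=3):
    """Odd-order Taylor expansion of tanh around 0: z - z^3/3 (+ 2z^5/15)."""
    approx = z - z ** 3 / 3
    if order >= 5:
        approx += 2 * z ** 5 / 15
    return approx

def neuron_poly(x, w, b, order=3):
    """Polynomial stand-in for a tanh neuron. The pre-activation w.x + b is
    linear in the inputs, so the expansion is a polynomial jointly in the
    input signals and the weights, which can then be separated into the two
    groups named in step 2 of the method."""
    z = np.dot(w, x) + b
    return tanh_poly(z, order)
```

Near the expansion point the polynomial tracks the neuron closely, and because each term is an explicit product of inputs and weights, the contribution of each input to the output becomes directly readable, which is the interpretability goal stated in the abstract.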

    APPARATUS AND METHOD FOR LINEARLY APPROXIMATING DEEP NEURAL NETWORK MODEL

    Publication No.: US20190272309A1

    Publication Date: 2019-09-05

    Application No.: US16121836

    Filing Date: 2018-09-05

    IPC Classes: G06F17/17 G06N3/04

    Abstract: Provided are an apparatus and method for linearly approximating a deep neural network (DNN) model, which is a non-linear function. In general, a DNN model shows good performance in generation or classification tasks. However, the DNN fundamentally has non-linear characteristics, and therefore it is difficult to interpret how a result has been derived from the inputs given to a black-box model. To solve this problem, linear approximation of a DNN is proposed. The method for linearly approximating a DNN model includes 1) converting a neuron constituting a DNN into a polynomial, and 2) classifying the obtained polynomial as a polynomial of input signals and a polynomial of weights.