Abstract:
A controlling method of a wearable electronic apparatus includes: receiving, by an IMU sensor, a bone conduction signal corresponding to vibration of the user's face while the wearable electronic apparatus operates in an ANC mode; identifying a presence or an absence of the user's voice based on the bone conduction signal; based on identifying the presence of the user's voice, switching an operation mode of the wearable electronic apparatus to an operation mode different from the ANC mode; while the wearable electronic apparatus operates in the different operation mode, identifying the presence or the absence of the user's voice based on the bone conduction signal; and based on the absence of the user's voice being identified for a predetermined time while the wearable electronic apparatus operates in the different operation mode, returning the wearable electronic apparatus from the different operation mode to the ANC mode.
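Read as control logic, the abstract describes a small state machine. Below is a minimal Python sketch of that flow, assuming an energy threshold stands in for the voice identification (the abstract does not say how presence is identified), a transparency-style "AMBIENT" mode stands in for the different operation mode, and the names EarbudModeController, VOICE_ENERGY_THRESHOLD, and SILENCE_TIMEOUT_S are all hypothetical:

    import time

    VOICE_ENERGY_THRESHOLD = 0.02   # assumed; the abstract does not give one
    SILENCE_TIMEOUT_S = 5.0         # the "predetermined time" (value assumed)

    class EarbudModeController:
        def __init__(self):
            self.mode = "ANC"
            self._last_voice_time = None

        def on_imu_frame(self, bone_conduction_frame, now=None):
            now = time.monotonic() if now is None else now
            # Identify presence/absence of the user's voice from the bone
            # conduction signal (simple energy check as a stand-in).
            energy = sum(x * x for x in bone_conduction_frame) / len(bone_conduction_frame)
            voice_present = energy > VOICE_ENERGY_THRESHOLD

            if self.mode == "ANC" and voice_present:
                # Voice detected: leave ANC for the different operation mode.
                self.mode = "AMBIENT"
                self._last_voice_time = now
            elif self.mode == "AMBIENT":
                if voice_present:
                    self._last_voice_time = now
                elif now - self._last_voice_time >= SILENCE_TIMEOUT_S:
                    # No voice for the predetermined time: return to ANC.
                    self.mode = "ANC"
            return self.mode

    ctrl = EarbudModeController()
    ctrl.on_imu_frame([0.2, -0.3, 0.25], now=0.0)   # voice -> "AMBIENT"
    ctrl.on_imu_frame([0.0, 0.0, 0.0], now=6.0)     # 6 s of silence -> "ANC"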
Abstract:
An electronic apparatus acquires input data to be input into a text-to-speech (TTS) module for outputting a voice, acquires a voice signal corresponding to the input data through the TTS module, detects an error in the acquired voice signal based on the input data, corrects the input data based on the detection result, and acquires a corrected voice signal corresponding to the corrected input data through the TTS module.
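The detect-and-correct loop can be pictured as follows. This is a toy Python sketch, not the patented implementation: ToyTTS, detect_error, correct_input, and synthesize_with_correction are invented stand-ins, and the string-based "voice signal" only illustrates the control flow:

    class ToyTTS:
        """Hypothetical stand-in for the TTS module in the abstract."""
        def synthesize(self, text: str) -> str:
            # Toy "voice signal": mangles one word to simulate a TTS error.
            return text.replace("record", "reh-kord")

    def detect_error(voice_signal: str, input_data: str) -> bool:
        # Hypothetical detector: compare the signal against the input data.
        return voice_signal != input_data

    def correct_input(input_data: str) -> str:
        # Hypothetical correction: respell the word the toy TTS mispronounces.
        return input_data.replace("record", "rec-ord")

    def synthesize_with_correction(tts, input_data, max_retries=3):
        voice = tts.synthesize(input_data)              # initial voice signal
        for _ in range(max_retries):
            if not detect_error(voice, input_data):     # error detection
                break
            input_data = correct_input(input_data)      # correct the input data
            voice = tts.synthesize(input_data)          # corrected voice signal
        return voice

    print(synthesize_with_correction(ToyTTS(), "please record this"))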
Abstract:
An audio coding terminal and method are provided. The terminal includes a coding mode setting unit to set an operation mode, from among plural operation modes, for coding of input audio by a codec. The codec codes the input audio based on the set operation mode such that, when the set operation mode is a high frame erasure rate (FER) mode, the codec codes a current frame of the input audio according to a frame erasure concealment (FEC) mode selected from one or more FEC modes. Upon the setting of the operation mode to the High FER mode, one FEC mode is selected, from the one or more FEC modes predetermined for the High FER mode, to control the codec by incorporating redundancy within the coding of the input audio or by carrying redundancy information separately from the coded input audio, according to the selected FEC mode.
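A rough Python sketch of the mode-dependent redundancy handling may help; the actual bitstream coding is replaced by a stand-in, the half-frame redundancy is purely illustrative, and OperationMode, FecMode, CodedFrame, and code_frame are hypothetical names:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class OperationMode(Enum):
        NORMAL = "normal"
        HIGH_FER = "high_fer"     # high frame erasure rate mode

    class FecMode(Enum):
        IN_BAND = "in_band"       # redundancy incorporated within the coded audio
        SEPARATE = "separate"     # redundancy carried as separate information

    @dataclass
    class CodedFrame:
        payload: bytes
        redundancy: Optional[bytes] = None   # used only by the SEPARATE FEC mode

    def code_frame(frame: bytes, mode: OperationMode, fec: FecMode) -> CodedFrame:
        payload = frame                      # stand-in for the real codec bitstream
        if mode is OperationMode.HIGH_FER:
            half = frame[: len(frame) // 2]  # illustrative redundancy only
            if fec is FecMode.IN_BAND:
                # Incorporate redundancy within the coding of the input audio.
                return CodedFrame(payload + half)
            # Carry redundancy separately from the coded input audio.
            return CodedFrame(payload, redundancy=half)
        return CodedFrame(payload)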
Abstract:
A method and apparatus to search a codebook including pulses that model a predetermined component of a speech signal. The method includes: selecting, from among paths corresponding to pulse locations of a predetermined pulse location set allocated to at least one branch connecting one state of a predetermined Trellis structure to another state, a predetermined number of paths corresponding to the pulse locations that are most consistent with the predetermined component; performing the path selecting operation on each of the states other than the one state; and selecting, from among the paths including the selected paths, the path corresponding to the pulse locations that are most consistent with the predetermined component, wherein each path corresponds to a union of plural tracks of an Algebraic codebook. Accordingly, the number of calculations required during a codebook search is reduced.
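The selection of a fixed number of best partial paths per state is akin to a beam-pruned trellis search. A simplified Python sketch, assuming a per-position correlation score as a stand-in for the real algebraic-codebook criterion (trellis_codebook_search and n_best are hypothetical names):

    def trellis_codebook_search(target, tracks, n_best=4):
        # Each partial path is (score, pulse_positions); start with an empty path.
        paths = [(0.0, [])]
        for track in tracks:                    # one trellis stage per track
            candidates = []
            for score, positions in paths:
                for pos in track:               # a branch = one candidate location
                    if pos in positions:
                        continue
                    # Score stand-in: correlation of a unit pulse at `pos` with
                    # the target component (real codecs also weigh pulse energy).
                    candidates.append((score + target[pos], positions + [pos]))
            # Keep only the n_best partial paths most consistent with the target,
            # which bounds the number of combinations evaluated at each state.
            candidates.sort(key=lambda c: c[0], reverse=True)
            paths = candidates[:n_best]
        return max(paths, key=lambda p: p[0])   # best complete path

    # Example: 8-sample target, two interleaved tracks of allowed positions.
    target = [0.9, -0.1, 0.4, 0.7, -0.5, 0.2, 0.8, 0.1]
    tracks = [[0, 2, 4, 6], [1, 3, 5, 7]]
    best_score, best_positions = trellis_codebook_search(target, tracks)
    # best_positions == [0, 3]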
Abstract:
An apparatus and method for concealing frame erasure, and a voice decoding apparatus and method using the same. The frame erasure concealment apparatus includes: a parameter extraction unit that determines whether there is an erased frame in a voice packet, and extracts an excitation signal parameter and a line spectrum pair parameter of a previous good frame; and an erased frame concealment unit that, if there is an erased frame, restores the excitation signal parameter and line spectrum pair parameter of the erased frame by applying regression analysis to the excitation signal parameter and line spectrum pair parameter of the previous good frame. According to the method and apparatus, by predicting and restoring the parameters of the erased frame through regression analysis, the quality of the restored voice signal can be enhanced and the algorithm can be simplified.
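As a concrete reading of the regression step, a linear trend can be fitted to each parameter over the previous good frames and extrapolated to the erased frame. A minimal Python sketch under that assumption (the abstract does not specify the regression order, window length, or any clamping):

    import numpy as np

    def extrapolate_parameter(history, n_ahead=1):
        # Fit a linear trend to the parameter over the previous good frames
        # and extrapolate it to the erased frame (the regression analysis step).
        t = np.arange(len(history))
        slope, intercept = np.polyfit(t, np.asarray(history, dtype=float), deg=1)
        return slope * (len(history) - 1 + n_ahead) + intercept

    # Example: predict one LSP value of an erased frame from 4 good frames.
    lsp_history = [0.30, 0.31, 0.33, 0.34]
    predicted = extrapolate_parameter(lsp_history)   # ~0.355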
Abstract:
An electronic apparatus, based on a text sentence being input, obtains prosody information of the text sentence, segments the text sentence into a plurality of sentence elements, obtains, in parallel, speech in which the prosody information is reflected for each of the plurality of sentence elements by inputting the plurality of sentence elements and the prosody information of the text sentence to a text-to-speech (TTS) module, and merges the speech obtained in parallel for the plurality of sentence elements to output speech for the text sentence.
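The pipeline, sketched in Python below, extracts sentence-level prosody once, synthesizes each sentence element in parallel under that shared prosody, and merges the results; tts, get_prosody, segment, and merge are hypothetical hooks standing in for the modules named in the abstract:

    from concurrent.futures import ThreadPoolExecutor

    def synthesize_sentence(tts, text_sentence, get_prosody, segment, merge):
        prosody = get_prosody(text_sentence)      # prosody of the whole sentence
        elements = segment(text_sentence)         # split into sentence elements
        with ThreadPoolExecutor() as pool:
            # Each element is synthesized with the shared prosody in parallel,
            # keeping element-level speech consistent with sentence prosody.
            chunks = list(pool.map(lambda e: tts(e, prosody), elements))
        return merge(chunks)                      # merged speech for the sentence

    # Toy demo with stand-in hooks:
    speech = synthesize_sentence(
        tts=lambda e, p: f"[{p}:{e}]",
        text_sentence="Hello world. How are you?",
        get_prosody=lambda s: "neutral",
        segment=lambda s: s.split(". "),
        merge=lambda chunks: " ".join(chunks),
    )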
Abstract:
An electronic device and a controlling method of the electronic device are provided. The electronic device acquires text for responding to a received user's speech, acquires a plurality of pieces of parameter information for determining a style of an output speech corresponding to the text based on the received user's speech and information on the types of a plurality of text-to-speech (TTS) databases, identifies a TTS database corresponding to the plurality of pieces of parameter information from among the plurality of TTS databases, identifies a weight set corresponding to the plurality of pieces of parameter information from among a plurality of weight sets acquired through a trained artificial intelligence model, adjusts information on the output speech stored in the TTS database based on the weight set, synthesizes the output speech based on the adjusted information, and outputs the output speech corresponding to the text.
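A toy Python sketch of the lookup-and-adjust flow; the layouts of the TTS databases and weight sets are not given in the abstract, so tts_databases, weight_sets, style_output_speech, and the pitch/rate/energy fields are all invented for illustration:

    # Hypothetical TTS databases keyed by type, and weight sets keyed by the
    # parameter information (e.g. as produced by a trained AI model).
    tts_databases = {
        "news":   {"pitch": 1.0, "rate": 1.0, "energy": 1.0},
        "casual": {"pitch": 1.1, "rate": 0.9, "energy": 1.2},
    }
    weight_sets = {
        ("female", "calm"):  {"pitch": 0.95, "rate": 1.0,  "energy": 0.9},
        ("male", "excited"): {"pitch": 1.05, "rate": 1.15, "energy": 1.3},
    }

    def style_output_speech(text, params, db_type):
        db = tts_databases[db_type]            # TTS database matching the params
        weights = weight_sets[params]          # weight set matching the params
        # Adjust the stored output-speech information with the weight set.
        adjusted = {k: db[k] * weights[k] for k in db}
        return synthesize(text, adjusted)

    def synthesize(text, controls):
        # Toy stand-in: a real system would drive a vocoder with `controls`.
        return f"{text} {controls}"

    print(style_output_speech("Hello!", ("female", "calm"), "news"))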
Abstract:
A system for synthesising expressive speech includes: an interface configured to receive an input text for conversion to speech; a memory; and at least one processor coupled to the memory. The processor is configured to generate, using an expressivity characterisation module, a plurality of expression vectors, wherein each expression vector is a representation of prosodic information in a reference audio style file, and synthesise expressive speech from the input text, using an expressive acoustic model comprising a deep convolutional neural network that is conditioned by at least one of the plurality of expression vectors.
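One common way to condition an acoustic model on a fixed-size style vector is to broadcast it across the text time axis and concatenate it to each frame's features, so the convolutional layers see the style at every timestep; the sketch below assumes that mechanism, which the abstract does not specify (condition_on_expression is a hypothetical name):

    import numpy as np

    def condition_on_expression(text_embeddings, expression_vector):
        # Tile the expression vector (a fixed-size representation of prosody
        # in a reference audio style file) over the text time axis, then
        # concatenate it to each frame's features.
        T = text_embeddings.shape[0]
        tiled = np.tile(expression_vector, (T, 1))        # (T, expr_dim)
        return np.concatenate([text_embeddings, tiled], axis=1)

    # Example: 50 text frames of dim 256, 16-dim expression vector.
    frames = np.random.randn(50, 256)
    expr = np.random.randn(16)
    conditioned = condition_on_expression(frames, expr)   # shape (50, 272)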