METHOD AND DEVICE FOR FOCUSING SOUND SOURCE

    Publication No.: US20210096810A1

    Publication Date: 2021-04-01

    Application No.: US16703768

    Filing Date: 2019-12-04

    Abstract: Disclosed are a sound source focus method and device in which the sound source focus device, operating in a 5G communication environment, amplifies and outputs a sound source signal of a user's object of interest extracted from an acoustic signal included in video content, by executing a loaded artificial intelligence (AI) algorithm and/or machine learning algorithm. The sound source focus method includes playing video content including a video signal that contains at least one moving object and an acoustic signal in which sound sources output by the objects are mixed, determining the user's object of interest from the video signal, acquiring unique sound source information about the user's object of interest, extracting from the acoustic signal an actual sound source for the user's object of interest corresponding to the unique sound source information, and outputting the actual sound source extracted for the user's object of interest.
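    The flow in the abstract — pick an object of interest, look up its unique sound-source information, extract the matching source from the mix, then amplify and output it — can be sketched as below. Every name, data layout, and the fixed gain are assumptions for illustration only; the patent's actual extraction would use an AI/ML separation model, not this toy matching.

    ```python
    # Hypothetical sketch of the sound-source-focus pipeline; all identifiers
    # are illustrative assumptions, not the patent's implementation.

    GAIN = 2.0  # assumed amplification factor for the focused source

    def determine_object_of_interest(objects, user_selection):
        """Pick the on-screen object the user focuses on."""
        return next(o for o in objects if o["id"] == user_selection)

    def extract_actual_source(acoustic_mix, signature):
        """Toy separation: keep only samples attributed to the signature.
        A real system would run an AI source-separation model here."""
        return [s["amp"] for s in acoustic_mix if s["src"] == signature]

    def focus_sound_source(objects, acoustic_mix, user_selection):
        obj = determine_object_of_interest(objects, user_selection)
        signature = obj["signature"]        # unique sound-source information
        source = extract_actual_source(acoustic_mix, signature)
        return [a * GAIN for a in source]   # amplify and output

    objects = [{"id": "dog", "signature": "bark"},
               {"id": "car", "signature": "engine"}]
    mix = [{"src": "bark", "amp": 0.2}, {"src": "engine", "amp": 0.5},
           {"src": "bark", "amp": 0.3}]
    print(focus_sound_source(objects, mix, "dog"))  # → [0.4, 0.6]
    ```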

    ARTIFICIAL INTELLIGENCE APPARATUS FOR CORRECTING SYNTHESIZED SPEECH AND METHOD THEREOF

    Publication No.: US20200058290A1

    Publication Date: 2020-02-20

    Application No.: US16660947

    Filing Date: 2019-10-23

    Abstract: Disclosed herein is an artificial intelligence apparatus that includes a memory configured to store learning target text and human speech of a person who pronounces the text; a processor configured to generate synthesized speech in which the text is pronounced by synthesized sound, and to extract a synthesized speech feature set including information on a feature pronounced in the synthesized speech and a human speech feature set including information on a feature pronounced in the human speech; and a learning processor configured to train, based on the synthesized speech feature set and the human speech feature set, a speech correction model that outputs a corrected speech feature set, so that given synthesized speech is corrected based on a human pronunciation feature when a synthesized speech feature set extracted from that synthesized speech is input.
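    The abstract's training flow — extract paired feature sets from synthesized and human speech, then learn a correction that moves synthesized features toward human ones — can be sketched as a per-feature offset model. The feature choices and all function names are assumptions; the patent's learning processor would train a far richer model.

    ```python
    # Illustrative sketch only; names and the offset "model" are assumptions,
    # standing in for the patent's learned speech correction model.

    def extract_feature_set(utterance):
        """Toy feature extraction: mean and peak of the sample values."""
        return {"mean": sum(utterance) / len(utterance), "peak": max(utterance)}

    def train_speech_correction_model(synth_sets, human_sets):
        """Learn a per-feature offset that moves synthesized features
        toward the paired human features."""
        n = len(synth_sets)
        return {k: sum(h[k] - s[k] for s, h in zip(synth_sets, human_sets)) / n
                for k in synth_sets[0]}

    def correct(synth_features, model):
        """Apply the trained correction to a new synthesized feature set."""
        return {k: v + model[k] for k, v in synth_features.items()}

    synth_sets = [extract_feature_set([0.0, 2.0]), extract_feature_set([1.0, 3.0])]
    human_sets = [extract_feature_set([1.0, 3.0]), extract_feature_set([2.0, 4.0])]
    model = train_speech_correction_model(synth_sets, human_sets)
    print(correct({"mean": 5.0, "peak": 6.0}, model))  # → {'mean': 6.0, 'peak': 7.0}
    ```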

    METHOD AND APPARATUS FOR PERFORMING MULTI-LANGUAGE COMMUNICATION

    Publication No.: US20200043495A1

    Publication Date: 2020-02-06

    Application No.: US16601787

    Filing Date: 2019-10-15

    Abstract: A method for performing multi-language communication includes receiving an utterance, identifying the language of the received utterance, determining whether the identified language matches a preset reference language, applying to the received utterance an interpretation model that interprets the identified language into the reference language when the identified language does not match the reference language, converting to text the speech data output in the reference language as a result of applying the interpretation model, generating a response message responding to the text of the speech data, and outputting the response message. Here, the interpretation model may be a deep neural network model generated through machine learning, and may be stored in an edge device or provided through a server in an Internet of Things environment over a 5G network.
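    The branching in the abstract — identify the language, interpret into the reference language only on a mismatch, then generate a response — can be sketched as below. The lookup tables and the Hangul-range language detector are toy stand-ins; the patent describes a deep-neural-network interpretation model, and all identifiers here are assumptions.

    ```python
    # Toy sketch of the multi-language pipeline; a real system would use a
    # DNN interpretation model (edge-hosted or served over 5G), per the abstract.

    REFERENCE_LANGUAGE = "en"  # assumed preset reference language

    INTERPRETATION = {("ko", "en"): {"안녕하세요": "hello"}}  # toy model

    def identify_language(utterance):
        """Toy detector: Hangul syllables imply Korean, else English."""
        return "ko" if any("\uac00" <= ch <= "\ud7a3" for ch in utterance) else "en"

    def interpret(utterance, src, dst):
        return INTERPRETATION[(src, dst)].get(utterance, utterance)

    def respond(text):
        return {"hello": "hi, how can I help?"}.get(text, "sorry?")

    def handle_utterance(utterance):
        lang = identify_language(utterance)
        if lang != REFERENCE_LANGUAGE:          # mismatch → interpret first
            utterance = interpret(utterance, lang, REFERENCE_LANGUAGE)
        return respond(utterance)               # response message

    print(handle_utterance("안녕하세요"))  # → hi, how can I help?
    ```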

    SPEECH SYNTHESIS METHOD AND APPARATUS BASED ON EMOTION INFORMATION

    Publication No.: US20200035215A1

    Publication Date: 2020-01-30

    Application No.: US16593161

    Filing Date: 2019-10-04

    Abstract: A speech synthesis method and apparatus based on emotion information are disclosed. A speech synthesis method based on emotion information extracts speech synthesis target text from received data and determines whether the received data includes situation explanation information. First metadata corresponding to first emotion information is generated on the basis of the situation explanation information. When the received data does not include situation explanation information, second metadata corresponding to second emotion information, generated on the basis of semantic analysis and context analysis, is generated. One of the first metadata and the second metadata is added to the speech synthesis target text to synthesize speech corresponding to the received data. A speech synthesis apparatus of this disclosure may be associated with an artificial intelligence module, a drone (unmanned aerial vehicle, UAV), a robot, augmented reality (AR) devices, virtual reality (VR) devices, devices related to 5G services, and the like.
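    The abstract's two-branch metadata choice — use situation explanation information when present, otherwise derive emotion from semantic/context analysis — can be sketched as follows. The keyword check is a toy stand-in for semantic analysis, and every name and field here is an assumption, not the patent's implementation.

    ```python
    # Hypothetical sketch of the emotion-metadata branching; all names assumed.

    def build_metadata(data):
        """First metadata from situation explanation info when present,
        otherwise second metadata from a (toy) semantic analysis."""
        if "situation" in data:                  # first emotion information
            return {"emotion": data["situation"], "origin": "situation"}
        happy = any(w in data["text"] for w in ("great", "glad"))
        return {"emotion": "joy" if happy else "neutral", "origin": "semantic"}

    def synthesize(data):
        """Stand-in for TTS: tag the target text with the chosen emotion."""
        meta = build_metadata(data)
        return f"[{meta['emotion']}] {data['text']}"

    print(synthesize({"text": "We won!", "situation": "excited"}))  # → [excited] We won!
    print(synthesize({"text": "That is great news."}))              # → [joy] That is great news.
    ```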

    GATHERING USER'S SPEECH SAMPLES

    Publication No.: US20210134301A1

    Publication Date: 2021-05-06

    Application No.: US17028527

    Filing Date: 2020-09-22

    Abstract: Disclosed is a method of gathering a user's speech samples. According to an embodiment of the disclosure, a method of gathering learning samples may collect a speaker's speech data obtained while talking on a mobile terminal, together with text data generated from the speech data, and use them as training data for generating a speech synthesis model. According to the disclosure, the method of gathering learning samples may be related to artificial intelligence (AI) modules, unmanned aerial vehicles (UAVs), robots, augmented reality (AR) devices, virtual reality (VR) devices, and 5G service-related devices.
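    The pairing described in the abstract — a speaker's call speech data plus text generated from it, collected as training samples — can be sketched as below. The call-log layout and all names are assumptions; a real system would run ASR to produce the text rather than read it from the record.

    ```python
    # Illustrative sketch only; identifiers and data layout are assumptions.

    def transcribe(segment):
        """Toy speech-to-text stand-in; a real system would run ASR on audio."""
        return segment["words"]

    def gather_training_pairs(call_log, speaker):
        """Collect (speech data, generated text) pairs for one speaker from
        call segments, as training data for a speech-synthesis model."""
        return [(seg["audio"], transcribe(seg))
                for seg in call_log if seg["speaker"] == speaker]

    call_log = [
        {"speaker": "caller", "audio": b"\x01\x02", "words": "hello there"},
        {"speaker": "callee", "audio": b"\x03", "words": "hi"},
        {"speaker": "caller", "audio": b"\x04\x05", "words": "see you soon"},
    ]
    print(gather_training_pairs(call_log, "caller"))
    ```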
