-
Publication No.: US20230154463A1
Publication Date: 2023-05-18
Application No.: US17989595
Filing Date: 2022-11-17
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jisun CHOI , Seolhee KIM , Jaeyung YEO
CPC classification number: G10L15/22 , G10L15/1815 , G10L2015/088
Abstract: An electronic device includes a processor, and memory that stores instructions. The processor executes the instructions to acquire utterance data of a user, the utterance data including a quick command and an edit command for editing a task, identify a plurality of tasks associated with the quick command by using the quick command, edit the tasks associated with the quick command by excluding one task from among the plurality of tasks or adding a new task to the plurality of tasks based on the edit command, and perform the edited plurality of tasks.
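To make the quick-command editing flow described above concrete, here is a minimal Python sketch; the names (Task, QuickCommandStore, edit_tasks) and the pre-parsed edit command are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch: a quick command maps to several tasks, which an edit
# command can shrink (exclude a task) or grow (add a task) before execution.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str

    def perform(self) -> None:
        print(f"performing task: {self.name}")


@dataclass
class QuickCommandStore:
    # Maps a quick-command phrase to its associated tasks.
    mapping: dict[str, list[Task]] = field(default_factory=dict)

    def tasks_for(self, quick_command: str) -> list[Task]:
        return list(self.mapping.get(quick_command, []))


def edit_tasks(tasks: list[Task], edit_command: dict) -> list[Task]:
    """Exclude one task or add a new task based on a parsed edit command."""
    if edit_command["action"] == "exclude":
        return [t for t in tasks if t.name != edit_command["task"]]
    if edit_command["action"] == "add":
        return tasks + [Task(edit_command["task"])]
    return tasks


# Example: "good morning" runs weather + news; the user excludes "news".
store = QuickCommandStore({"good morning": [Task("weather"), Task("news")]})
tasks = store.tasks_for("good morning")
for task in edit_tasks(tasks, {"action": "exclude", "task": "news"}):
    task.perform()
```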
-
Publication No.: US20230197066A1
Publication Date: 2023-06-22
Application No.: US18113306
Filing Date: 2023-02-23
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Sungbin JIN , Seolhee KIM , Jaeyung YEO
IPC: G10L15/18 , G10L15/22 , G10L15/30 , G06F16/638
CPC classification number: G10L15/1815 , G10L15/22 , G10L15/30 , G06F16/638 , G10L2015/223
Abstract: An electronic device includes a memory storing instructions and a processor electrically connected to the memory and configured to execute the instructions. When executed by the processor, the instructions cause the processor to: receive a voice signal corresponding to an utterance; attempt to match the voice signal to at least one of a plurality of defined intents; and, when the voice signal does not match any of the defined intents, provide a response corresponding to the utterance based on an extended database (DB) that includes results of analyzing responses previously provided for utterances that were matched to the defined intents.
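A minimal sketch of the intent-matching fallback this abstract describes, with a hypothetical IntentMatcher and ExtendedDB and a deliberately naive lookup; the real analysis of prior responses would be far richer than a plain copy.

```python
# Hypothetical sketch: try to match an utterance to a defined intent; when no
# intent matches, answer from an "extended DB" built from responses given for
# previously matched utterances.
from typing import Optional


class IntentMatcher:
    def __init__(self, intents: dict[str, str]):
        # intent keyword -> canned response
        self.intents = intents

    def match(self, utterance: str) -> Optional[str]:
        for name in self.intents:
            if name in utterance:
                return name
        return None


class ExtendedDB:
    """Stores analysis of responses provided for matched utterances."""

    def __init__(self):
        self.analyzed: dict[str, str] = {}

    def record(self, utterance: str, response: str) -> None:
        # In the patent this would be a richer analysis; a plain copy here.
        self.analyzed[utterance] = response

    def lookup(self, utterance: str) -> str:
        # Naive similarity: reuse the response of any overlapping stored utterance.
        for stored, response in self.analyzed.items():
            if set(stored.split()) & set(utterance.split()):
                return response
        return "Sorry, I could not find an answer."


def respond(utterance: str, matcher: IntentMatcher, db: ExtendedDB) -> str:
    intent = matcher.match(utterance)
    if intent is not None:
        response = matcher.intents[intent]
        db.record(utterance, response)  # feed the extended DB
        return response
    return db.lookup(utterance)  # no match: fall back to the extended DB
```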
-
Publication No.: US20240071363A1
Publication Date: 2024-02-29
Application No.: US18372898
Filing Date: 2023-09-26
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jisun CHOI , Seolhee KIM , Kyungtae KIM , Hoseon SHIN
Abstract: Disclosed are an electronic device and a method of controlling a text-to-speech (TTS) rate. An electronic device may include a processor, and a memory configured to store instructions to be executed by the processor. The processor may receive a voice signal of a user. The processor may calculate a speaking rate of the voice signal based on the voice signal. The processor may generate an output text to be output to the user based on the voice signal. The processor may determine a TTS rate of the output text based on the speaking rate. The processor may convert the output text into voice data based on the TTS rate and output the voice data.
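The rate-control steps can be illustrated with a short sketch; the syllables-per-second estimate, the baseline rate, and the clamping range below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: estimate the user's speaking rate from the input voice
# signal, then scale the TTS rate of the generated output text to match.

def estimate_speaking_rate(num_syllables: int, duration_seconds: float) -> float:
    """Speaking rate as syllables per second of the input voice signal."""
    return num_syllables / max(duration_seconds, 1e-6)


def tts_rate_from_speaking_rate(speaking_rate: float,
                                baseline_rate: float = 4.0) -> float:
    """Map the user's speaking rate to a clamped TTS speed multiplier."""
    multiplier = speaking_rate / baseline_rate
    return min(max(multiplier, 0.5), 2.0)


# Example: a fast speaker (12 syllables over 2 s of audio) gets faster TTS.
rate = estimate_speaking_rate(num_syllables=12, duration_seconds=2.0)
print(tts_rate_from_speaking_rate(rate))  # 1.5
```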
-
Publication No.: US20230197079A1
Publication Date: 2023-06-22
Application No.: US18094694
Filing Date: 2023-01-09
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Kichul KIM , Seolhee KIM , Sungbin JIN , Jisun CHOI , Eunchung NOH , Sunghwan BAEK , Jaeyung YEO , Changyong JEONG
CPC classification number: G10L15/22 , G10L15/30 , G10L15/1815 , G10L2015/223
Abstract: An electronic device is disclosed that, in response to at least one of an utterance intent and a control target device not being identified from utterance data, classifies a situation factor based on the utterance data, determines one or more external devices that match the classified situation factor, generates one or more action scenarios for the determined external devices, and presents the action scenarios to a user terminal.
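A minimal sketch of the situation-factor fallback; the keyword classifier and the SITUATION_DEVICES table are hypothetical stand-ins for the classification and device-matching steps the abstract describes.

```python
# Hypothetical sketch: when neither intent nor target device is identified,
# classify a situation factor and propose action scenarios for matching devices.
from typing import Optional

SITUATION_DEVICES = {
    "hot": ["air conditioner", "fan"],
    "dark": ["living room light", "lamp"],
}


def classify_situation(utterance: str) -> Optional[str]:
    for factor in SITUATION_DEVICES:
        if factor in utterance:
            return factor
    return None


def action_scenarios(utterance: str) -> list[str]:
    factor = classify_situation(utterance)
    if factor is None:
        return []
    return [f"Turn on the {device}?" for device in SITUATION_DEVICES[factor]]


# Example: "it is too hot in here" names no explicit intent or target device.
print(action_scenarios("it is too hot in here"))
```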
-
Publication No.: US20220284197A1
Publication Date: 2022-09-08
Application No.: US17664834
Filing Date: 2022-05-24
Applicant: Samsung Electronics Co., Ltd.
Inventor: Jooyong BYEON , Seolhee KIM
Abstract: An apparatus for processing voice commands includes a memory configured to store computer-executable instructions and a processor configured to execute the computer-executable instructions. When executed, the instructions cause the processor to: receive an utterance of a user in an input language set by the user; determine an utterance intent of the utterance by analyzing the utterance in the input language; determine a standard utterance in the input language corresponding to the utterance of the user based on the determined utterance intent; determine whether the input language and an output language are different languages; extract a standard utterance in the output language corresponding to the determined standard utterance in the input language when the input language and the output language are different; generate an output response in the output language based on the extracted standard utterance in the output language; and output the output response.
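A minimal sketch of the cross-language lookup; the STANDARD_UTTERANCES and RESPONSES tables and the stubbed determine_intent are illustrative assumptions, not the patent's actual NLU pipeline.

```python
# Hypothetical sketch: resolve the utterance to a standard utterance in the
# input language, switch to the output-language standard utterance when the
# languages differ, then generate the response in the output language.

# intent -> {language -> standard utterance}
STANDARD_UTTERANCES = {
    "weather.today": {
        "en": "What is the weather today?",
        "ko": "오늘 날씨 어때?",
    },
}

RESPONSES = {
    "What is the weather today?": "It is sunny today.",
    "오늘 날씨 어때?": "오늘은 맑아요.",
}


def determine_intent(utterance: str) -> str:
    # Placeholder for intent analysis performed in the input language.
    return "weather.today"


def respond(utterance: str, input_lang: str, output_lang: str) -> str:
    intent = determine_intent(utterance)
    standard = STANDARD_UTTERANCES[intent][input_lang]
    if input_lang != output_lang:
        # Extract the standard utterance in the output language instead.
        standard = STANDARD_UTTERANCES[intent][output_lang]
    return RESPONSES[standard]


print(respond("오늘 날씨 알려줘", input_lang="ko", output_lang="en"))
```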