-
Publication No.: US20220375361A1
Publication Date: 2022-11-24
Application No.: US17682924
Filing Date: 2022-02-28
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Junkyu Han
Abstract: According to an embodiment of the disclosure, a method of analyzing and evaluating content at a display device, includes obtaining a speech input of a user of the display device. The method further includes determining an intent of the user based on a result of interpreting the speech input. The method further includes obtaining reference data based on the intent of the user. The method further includes obtaining submission content from an external device connected to the display device. The method further includes determining at least one target object to be compared with the reference data from among objects included in the submission content. The method further includes evaluating the submission content by comparing the at least one target object with the reference data.
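The evaluation flow described in the abstract can be sketched roughly as below. This is an illustrative assumption of how the steps might compose, not the patent's actual implementation; all function names, the keyword-based intent rule, and the object/reference data shapes are hypothetical.

```python
# Hypothetical sketch: speech -> intent -> reference data -> compare target objects.

def determine_intent(speech_text: str) -> str:
    """Map an interpreted speech input to a coarse user intent (assumed keyword rule)."""
    if "check" in speech_text or "grade" in speech_text:
        return "evaluate_homework"
    return "unknown"

def obtain_reference_data(intent: str) -> dict:
    """Look up reference data for the determined intent (stub table)."""
    return {"evaluate_homework": {"answer": "42"}}.get(intent, {})

def evaluate_submission(submission_objects: list, reference: dict) -> list:
    """Select target objects from the submission content and compare them
    against the reference data, returning one verdict per target object."""
    targets = [o for o in submission_objects if o.get("type") == "answer"]
    return [t.get("value") == reference.get("answer") for t in targets]
```

In this sketch the "external device" simply supplies `submission_objects`; a real system would interpret speech with an NLU model rather than keyword matching.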
-
Publication No.: US12124809B2
Publication Date: 2024-10-22
Application No.: US17248934
Filing Date: 2021-02-12
Applicant: Samsung Electronics Co., Ltd.
Inventor: Junkyu Han, Kyuho Jo
IPC: G06F40/40, G06F16/34, G06F16/93, G06F40/279
CPC classification number: G06F40/40, G06F16/345, G06F16/93, G06F40/279
Abstract: An electronic apparatus is disclosed. The electronic apparatus includes a memory configured to store at least one instruction. The electronic apparatus also includes a processor, connected to the memory, and configured to control the electronic apparatus. The processor is further configured to identify a type corresponding to each of a plurality of sentences included in a document. The processor is also configured to group the plurality of sentences into a plurality of sentence groups based on the identified type, and summarize at least one sentence included in each of the plurality of sentence groups based on a user's preference for types of each of the plurality of sentence groups.
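The grouping-and-summarizing pipeline in this abstract can be sketched as follows. The sentence typing rule, the per-type preference scores, and the truncation-based "summarization" are all illustrative stand-ins, not the patented method.

```python
# Hypothetical sketch: type each sentence, group by type, then condense each
# group in proportion to an assumed per-type user preference score.

def identify_type(sentence: str) -> str:
    """Assumed rule-based typing: questions vs. statements."""
    return "question" if sentence.rstrip().endswith("?") else "statement"

def group_by_type(sentences: list) -> dict:
    """Group the document's sentences into per-type sentence groups."""
    groups = {}
    for s in sentences:
        groups.setdefault(identify_type(s), []).append(s)
    return groups

def summarize(groups: dict, preference: dict) -> list:
    """Keep more sentences from types the user prefers; always keep at least
    one sentence per group (naive truncation stands in for real summarization)."""
    summary = []
    for sentence_type, sents in groups.items():
        keep = max(1, round(len(sents) * preference.get(sentence_type, 0.5)))
        summary.extend(sents[:keep])
    return summary
```

A production system would replace `identify_type` with a classifier and the truncation with an abstractive or extractive summarizer per group.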
-
Publication No.: US11847827B2
Publication Date: 2023-12-19
Application No.: US17372276
Filing Date: 2021-07-09
Applicant: Samsung Electronics Co., Ltd.
Inventor: Junkyu Han, Pius Lee, Hyunuk Tak
Abstract: A method for generating a summary video includes generating a user emotion graph of a user watching a first video. The method also includes obtaining a character emotion graph for a second video, by analyzing an emotion of a character in the second video that is a target of summarization. The method further includes obtaining an object emotion graph for an object appearing in the second video. Additionally, the method includes obtaining an image emotion graph for the second video, based on the character emotion graph and the object emotion graph. The method also includes selecting at least one first scene in the second video by comparing the user emotion graph with the image emotion graph. The method further includes generating the summary video of the second video, based on the at least one first scene.
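The graph-combination and scene-selection steps above can be sketched as below. Here an emotion "graph" is modeled as a simple per-timestep score list, the combination is an assumed weighted average, and similarity is a negative mean absolute difference; none of these choices come from the patent itself.

```python
# Hypothetical sketch: combine character/object emotion curves into an image
# emotion curve, then pick the scenes whose curve best matches the user's.

def image_emotion_graph(character_graph: list, object_graph: list, w: float = 0.5) -> list:
    """Combine character and object emotion curves (assumed weighted average)."""
    return [w * c + (1 - w) * o for c, o in zip(character_graph, object_graph)]

def select_scenes(user_graph: list, image_graph: list, scene_bounds: list, top_k: int = 1) -> list:
    """Rank candidate scenes by similarity of the image emotion curve to the
    user emotion curve (negative mean absolute difference) and keep top_k."""
    def score(start: int, end: int) -> float:
        diffs = [abs(u - i) for u, i in zip(user_graph[start:end], image_graph[start:end])]
        return -sum(diffs) / max(len(diffs), 1)
    ranked = sorted(scene_bounds, key=lambda b: score(*b), reverse=True)
    return ranked[:top_k]
```

The summary video would then be cut from the selected scene boundaries; scene detection itself is outside this sketch.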