-
Publication No.: US20180189571A1
Publication Date: 2018-07-05
Application No.: US15428247
Application Date: 2017-02-09
Inventors: Yong Seok SEO, Dong Hyuck IM, Won Young YOO, Jee Hyun PARK, Jung Hyun KIM, Young Ho SUH
IPC Classification: G06K9/00
CPC Classification: G06K9/00744, G06K9/00288, G06K9/6268
Abstract: A signature actor determination method for video identification includes setting a list of actors who appear in each of a plurality of videos, generating a plurality of subsets including the actors, and determining that an actor included in a single final set indicating a first video among the plurality of subsets is a signature actor of the first video. Accordingly, a video can be identified using only a small amount of information.
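On one reading of this abstract, a "signature" subset is a smallest group of actors whose co-appearance occurs in only one video. Below is a minimal Python sketch of that interpretation; the video titles, actor names, and uniqueness test are all illustrative and not taken from the patent:

```python
from itertools import combinations

# Hypothetical cast lists: video id -> set of actors appearing in it.
videos = {
    "video_A": {"Kim", "Lee", "Park"},
    "video_B": {"Kim", "Lee", "Choi"},
    "video_C": {"Park", "Choi", "Jung"},
}

def signature_actors(target, videos, max_size=3):
    """Return the smallest actor subset that indicates only `target`."""
    cast = videos[target]
    for size in range(1, max_size + 1):
        for subset in combinations(sorted(cast), size):
            # A subset "indicates" the target video if no other video
            # contains every actor in the subset.
            matches = [v for v, c in videos.items() if set(subset) <= c]
            if matches == [target]:
                return set(subset)
    return None

print(signature_actors("video_A", videos))  # e.g. {'Kim', 'Park'}
```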
-
Publication No.: US20230169709A1
Publication Date: 2023-06-01
Application No.: US17899947
Application Date: 2022-08-31
Inventors: Dong Hyuck IM, Jung Hyun KIM, Hye Mi KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO
IPC Classification: G06T11/60, G06T11/40, G06V40/16, G06F3/04842, G06F3/04845
CPC Classification: G06T11/60, G06T11/40, G06V40/161, G06F3/04842, G06F3/04845
Abstract: Provided are a face de-identification method and system, and a graphical user interface (GUI) provision method for face de-identification, employing facial image generation. The facial area including the eyes, nose, and mouth of a person detected in an input image is replaced with a de-identified facial area generated through deep learning, so that the face keeps a natural shape while the person's portrait rights are protected. Accordingly, qualitative degradation of the content is prevented and viewers' concentration on the image is increased.
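A rough pipeline sketch of the replacement step the abstract describes. The detector and generator below are hypothetical stand-ins (a real system would plug in trained models); only the "replace the detected facial area with a generated one" step mirrors the text:

```python
import numpy as np

def detect_faces(frame):
    # Stand-in for a real face detector; returns (x, y, w, h) boxes.
    # Here we pretend one face occupies the centre quarter of the frame.
    h, w = frame.shape[:2]
    return [(w // 4, h // 4, w // 2, h // 2)]

def generate_deidentified_face(patch):
    # Stand-in for the deep-learning generator in the abstract, which should
    # output a natural-looking face that is not the original person.
    # Here we simply return pixel noise of the same shape.
    return np.random.randint(0, 256, patch.shape, dtype=patch.dtype)

def deidentify(frame):
    """Replace each detected facial area with a generated, de-identified one."""
    out = frame.copy()
    for (x, y, w, h) in detect_faces(frame):
        out[y:y + h, x:x + w] = generate_deidentified_face(out[y:y + h, x:x + w])
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(deidentify(frame).shape)  # (480, 640, 3)
```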
-
Publication No.: US20210120355A1
Publication Date: 2021-04-22
Application No.: US17032995
Application Date: 2020-09-25
Inventors: Hye Mi KIM, Jung Hyun KIM, Jee Hyun PARK, Yong Seok SEO, Dong Hyuck IM, Won Young YOO
IPC Classification: H04S5/00, G10L19/008, G10L25/30
Abstract: A method for receiving a mono sound-source audio signal including phase information as an input and separating it into a plurality of signals may comprise: performing initial convolution and down-sampling on the input mono signal; generating an encoded signal by encoding the input using at least one first dense block and at least one down-transition layer; generating a decoded signal by decoding the encoded signal using at least one second dense block and at least one up-transition layer; and performing final convolution and resizing on the decoded signal.
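A hedged PyTorch sketch of the encode/decode path the abstract lists (initial convolution with down-sampling, dense blocks with down- and up-transition layers, final convolution and resizing). All layer sizes, kernel choices, and the number of separated sources are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

class DenseBlock1d(nn.Module):
    """Each layer's output is concatenated to its input (DenseNet-style)."""
    def __init__(self, in_ch, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(n_layers)
        ])
        self.out_ch = in_ch + n_layers * growth

    def forward(self, x):
        for conv in self.layers:
            x = torch.cat([x, torch.relu(conv(x))], dim=1)
        return x

class ToySeparator(nn.Module):
    """Sketch of the encode/decode path in the abstract (shapes are illustrative)."""
    def __init__(self, n_sources=2):
        super().__init__()
        self.stem = nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3)   # initial conv + down-sampling
        self.enc = DenseBlock1d(16, growth=8, n_layers=3)                   # first dense block
        self.down = nn.Conv1d(self.enc.out_ch, 32, kernel_size=3, stride=2, padding=1)  # down-transition
        self.dec = DenseBlock1d(32, growth=8, n_layers=3)                   # second dense block
        self.up = nn.ConvTranspose1d(self.dec.out_ch, 16, kernel_size=4, stride=2, padding=1)  # up-transition
        self.head = nn.Conv1d(16, n_sources, kernel_size=3, padding=1)      # final convolution

    def forward(self, mono):                          # mono: (batch, 1, samples)
        z = self.up(self.dec(self.down(self.enc(self.stem(mono)))))
        out = self.head(z)
        # "Resize" back to the input length so each separated signal aligns with it.
        return nn.functional.interpolate(out, size=mono.shape[-1])

x = torch.randn(1, 1, 16000)
print(ToySeparator()(x).shape)  # torch.Size([1, 2, 16000]) - one channel per separated signal
```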
-
Publication No.: US20240177507A1
Publication Date: 2024-05-30
Application No.: US18499717
Application Date: 2023-11-01
Inventors: Dong Hyuck IM, Jung Hyun KIM, Hye Mi KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO
IPC Classification: G06V20/70, G06F40/40, G06T11/00, G06V10/774, G06V10/82
CPC Classification: G06V20/70, G06F40/40, G06T11/00, G06V10/774, G06V10/82
Abstract: An apparatus for generating text from an image may comprise: a memory configured to store at least one instruction; and a processor configured to execute the at least one instruction, wherein the processor is further configured to generate encoding information for an image based on the image and to extract text information related to the content of the image based on a degree of association with the encoding information.
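One plausible reading of "degree of association" is a similarity score between an image encoding and candidate text encodings. The sketch below illustrates only that scoring idea; the random stand-in encoders would be trained models in a real system, and nothing here is claimed to be the patented method:

```python
import numpy as np

def encode_image(image):
    # Stand-in image encoder producing the "encoding information" of the abstract.
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2 ** 32))
    return rng.standard_normal(128)

def encode_text(text):
    # Stand-in text encoder mapping a sentence into the same embedding space.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.standard_normal(128)

def most_associated_text(image, candidates):
    """Pick the candidate text with the highest association (cosine similarity)
    to the image encoding, mirroring the 'degree of association' idea."""
    img = encode_image(image)
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda t: cosine(img, encode_text(t)))

image = np.zeros((224, 224, 3), dtype=np.uint8)
print(most_associated_text(image, ["a dog on grass", "a city at night", "a bowl of soup"]))
```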
-
Publication No.: US20210103721A1
Publication Date: 2021-04-08
Application No.: US16696354
Application Date: 2019-11-26
Inventors: Dong Hyuck IM, Jung Hyun KIM, Hye Mi KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO
Abstract: Disclosed are a learning data generation method and apparatus for learning animation characters on the basis of deep learning. The learning data generation method may include collecting various images from an external source using wired/wireless communication, acquiring character images from the collected images using a character detection module, clustering the acquired character images, selecting learning data from among the clustered images, and inputting the selected learning data to an artificial neural network for character recognition.
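A small sketch of the collect → detect → cluster → select flow. The detection and feature functions are stand-ins, and k-means is used purely as an illustrative clustering step; the selected (crop, cluster id) pairs are what would feed the character-recognition network:

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_character_crops(image):
    # Stand-in for the character detection module; a real pipeline would crop
    # each detected animation character out of the frame.
    return [image]  # pretend the whole frame is one character

def embed(crop):
    # Stand-in feature extractor turning a crop into a fixed-size vector.
    return crop.reshape(-1).astype(float)[:64]

def build_learning_data(images, n_characters=3, per_cluster=10):
    """Cluster detected character crops and keep a few samples per cluster
    as (pseudo-)labelled training data, as the abstract outlines."""
    crops = [c for img in images for c in detect_character_crops(img)]
    feats = np.stack([embed(c) for c in crops])
    labels = KMeans(n_clusters=n_characters, n_init=10).fit_predict(feats)
    selected = []
    for k in range(n_characters):
        idx = np.where(labels == k)[0][:per_cluster]
        selected += [(crops[i], k) for i in idx]   # (crop, cluster id as pseudo-label)
    return selected

images = [np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8) for _ in range(30)]
print(len(build_learning_data(images)))
```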
-
Publication No.: US20190278978A1
Publication Date: 2019-09-12
Application No.: US15992398
Application Date: 2018-05-30
Inventors: Jee Hyun PARK, Jung Hyun KIM, Yong Seok SEO, Won Young YOO, Dong Hyuck IM
Abstract: A method for determining a video-related emotion and a method for generating data for learning video-related emotions include: separating an input video into a video stream and an audio stream; analyzing the audio stream to detect a music section; extracting at least one video clip matching the music section; extracting emotion information from the music section; tagging the video clip with the extracted emotion information and outputting the tagged clip; learning video-related emotions by using the at least one tagged video clip to generate a video-related emotion classification model; and determining the emotion related to an input query video by using the video-related emotion classification model.
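The data-generation steps can be pictured as the pipeline below. Every helper is a hypothetical stand-in, and the returned (clip, emotion) pairs are the training data that would feed the emotion-classification model:

```python
# Illustrative pipeline only; every helper below is a hypothetical stand-in.

def split_streams(video_path):
    # Stand-in demuxer returning (video_stream, audio_stream).
    return object(), object()

def detect_music_sections(audio_stream):
    # Stand-in music detector returning (start_sec, end_sec) sections.
    return [(10.0, 42.5), (60.0, 95.0)]

def cut_clip(video_stream, start, end):
    return {"start": start, "end": end}

def emotion_from_music(audio_stream, start, end):
    # Stand-in music-emotion model.
    return "tense" if end - start < 40 else "calm"

def tagged_clips(video_path):
    """Return video clips tagged with the emotion of their matching music section."""
    video, audio = split_streams(video_path)
    pairs = []
    for start, end in detect_music_sections(audio):
        clip = cut_clip(video, start, end)
        pairs.append((clip, emotion_from_music(audio, start, end)))
    return pairs

print(tagged_clips("movie.mp4"))
```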
-
Publication No.: US20230153351A1
Publication Date: 2023-05-18
Application No.: US17681416
Application Date: 2022-02-25
Inventors: Jee Hyun PARK, Jung Hyun KIM, Hye Mi KIM, Yong Seok SEO, Dong Hyuck IM, Won Young YOO
IPC Classification: G06F16/683, G10H1/00, G06F16/61
CPC Classification: G06F16/683, G10H1/0008, G10H1/0041, G06F16/61, G10H2240/075, G10H2210/031, G10H2240/095, G10H2240/141, G10H2240/135, G10H2250/311
Abstract: The present invention relates to an apparatus and method for identifying music in content. The method includes extracting a fingerprint of an original audio and storing it in an audio fingerprint DB; extracting a first fingerprint of a first audio in the content; and searching the audio fingerprint DB for a fingerprint corresponding to the first fingerprint, wherein the first audio is audio data in a music section detected from the content.
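A toy illustration of fingerprint-based lookup in the spirit of the abstract: index per-frame hashes of the original tracks, then let the query (audio taken from a detected music section) vote for the best-matching original. The hashing scheme here is a stand-in, not the patented fingerprint:

```python
def fingerprint(audio, hop=1024):
    # Stand-in fingerprint: one small integer hash per frame of samples.
    return [hash(bytes(audio[i:i + hop])) & 0xFFFF for i in range(0, len(audio), hop)]

def build_db(originals):
    """Index every original track's fingerprint frames: hash -> (track, frame index)."""
    db = {}
    for name, audio in originals.items():
        for i, h in enumerate(fingerprint(audio)):
            db.setdefault(h, []).append((name, i))
    return db

def identify(db, query_audio):
    """Vote for the original track whose frames best match the query's fingerprint."""
    votes = {}
    for h in fingerprint(query_audio):
        for name, _ in db.get(h, []):
            votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get) if votes else None

originals = {"song_a": bytes(range(256)) * 64,
             "song_b": bytes(255 - b for b in range(256)) * 64}
db = build_db(originals)
print(identify(db, originals["song_a"][2048:6144]))  # 'song_a'
```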
-
Publication No.: US20190213279A1
Publication Date: 2019-07-11
Application No.: US15904596
Application Date: 2018-02-26
Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, GANGNEUNG-WONJU NATIONAL UNIVERSITY INDUSTRY ACADEMY COOPERATION GROUP
Inventors: Jung Hyun KIM, Jee Hyun PARK, Yong Seok SEO, Won Young YOO, Dong Hyuck IM, Jin Soo SEO
IPC Classification: G06F17/30
Abstract: An apparatus and method for analyzing and identifying a song with high performance identify a subject song in which global and local characteristics of a feature vector are reflected, and quickly identify a cover song in which changes in tempo and key are reflected, by using a feature vector extracting part, a feature vector condensing part, and a feature vector comparing part, and by condensing a feature vector sequence into global and local characteristics that reflect a melody characteristic.
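A sketch of the three parts named in the abstract (extracting, condensing, comparing), where "condensing" keeps one global summary plus a few local segment summaries so a long feature sequence shrinks to a fixed-size descriptor. The feature choice and distance below are illustrative assumptions:

```python
import numpy as np

def extract_melody_features(audio_frames):
    # Stand-in for the feature vector extracting part (e.g. a chroma-like
    # 12-dimensional vector per frame reflecting the melody).
    return np.asarray(audio_frames, dtype=float)

def condense(seq, n_segments=4):
    """Feature vector condensing part: one global summary plus local segment
    summaries, giving a fixed-size descriptor regardless of song length/tempo."""
    global_part = seq.mean(axis=0)
    local_parts = [chunk.mean(axis=0) for chunk in np.array_split(seq, n_segments)]
    return np.concatenate([global_part] + local_parts)

def compare(desc_a, desc_b):
    """Feature vector comparing part: smaller distance = more likely a cover.
    Key changes could be handled by also comparing circular shifts of the
    pitch dimension; omitted here for brevity."""
    return float(np.linalg.norm(desc_a - desc_b))

song = np.random.rand(500, 12)       # 500 frames of 12-dim melody features
cover = np.roll(song, 1, axis=1)     # crude stand-in for a key-shifted cover
print(compare(condense(song), condense(extract_melody_features(cover))))
```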
-
Publication No.: US20190179960A1
Publication Date: 2019-06-13
Application No.: US15880763
Application Date: 2018-01-26
Inventors: Dong Hyuck IM, Yong Seok SEO, Jung Hyun KIM, Jee Hyun PARK, Won Young YOO
Abstract: An apparatus for recognizing a person includes: a content separator configured to receive content and separate it into video content and audio content; a video processor configured to recognize a face from an image in the video content received from the content separator and obtain information on a face recognition section by analyzing the video content; an audio processor configured to recognize a speaker from voice data in the audio content received from the content separator and obtain information on a speaker recognition section by analyzing the audio content; and a person-recognized-section information provider configured to provide information on a section of the content in which a person appears, based on the information on the face recognition section and the information on the speaker recognition section.
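A minimal sketch of the final combination step: given per-person time sections from the face recognizer and the speaker recognizer, merge them into the "person appears here" sections that the provider outputs. The interval data and the merging rule are illustrative assumptions:

```python
# Hypothetical section data: per-person time intervals (seconds) from the two recognizers.
face_sections = {"Alice": [(0, 30), (100, 140)], "Bob": [(50, 80)]}
speaker_sections = {"Alice": [(25, 60)], "Bob": [(70, 90)]}

def merge_intervals(intervals):
    """Union overlapping intervals so each person gets clean appearance sections."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def person_sections(face, speaker):
    """Combine face- and speaker-recognition sections into per-person appearance sections."""
    people = set(face) | set(speaker)
    return {p: merge_intervals(face.get(p, []) + speaker.get(p, [])) for p in people}

print(person_sections(face_sections, speaker_sections))
# e.g. {'Alice': [(0, 60), (100, 140)], 'Bob': [(50, 90)]}
```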
-