-
Publication No.: US11900607B2
Publication Date: 2024-02-13
Application No.: US17743448
Filing Date: 2022-05-13
Inventor: Junyong Noh , Jung Eun Yoo , Kwanggyoon Seo , Sanghun Park , Jaedong Kim , Dawon Lee
CPC classification number: G06T7/11 , G06T7/62 , G06T15/205
Abstract: Provided is a method of framing a three-dimensional (3D) target object for generation of a virtual camera layout. The method may include analyzing a reference video image to extract a framing rule for at least one reference object in the reference video image, generating a framing rule for at least one 3D target object using the framing rule for the at least one reference object in the reference video image, and using the framing rule for the at least one 3D target object for generation of a virtual camera layout.
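The abstract above outlines a three-step pipeline (extract a framing rule from a reference video, transfer it to a 3D target, use it to place a virtual camera). Below is a minimal Python sketch of that flow, assuming a framing rule is simply the reference object's normalized screen position and height and that the virtual camera is an ideal pinhole; the function names and the distance formula are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: derive a 2D framing rule from a reference object's
# bounding box, then compute how far a pinhole virtual camera must sit so a
# 3D target fills the frame the same way. Names and formulas are assumptions.
from dataclasses import dataclass
import math

@dataclass
class FramingRule:
    center_x: float      # normalized horizontal position of the object in frame [0, 1]
    center_y: float      # normalized vertical position of the object in frame [0, 1]
    height_ratio: float  # object height as a fraction of frame height

def extract_framing_rule(bbox, frame_w, frame_h) -> FramingRule:
    """Derive a framing rule from a reference object's 2D bounding box (x, y, w, h)."""
    x, y, w, h = bbox
    return FramingRule(
        center_x=(x + w / 2) / frame_w,
        center_y=(y + h / 2) / frame_h,
        height_ratio=h / frame_h,
    )

def camera_distance_for_rule(rule: FramingRule, target_height: float, vfov_deg: float) -> float:
    """Distance a pinhole camera must keep so the 3D target spans the same
    fraction of the frame height as the reference object did."""
    visible_height = target_height / rule.height_ratio   # world height spanned by the frame
    return (visible_height / 2) / math.tan(math.radians(vfov_deg) / 2)

if __name__ == "__main__":
    rule = extract_framing_rule(bbox=(620, 180, 240, 540), frame_w=1920, frame_h=1080)
    dist = camera_distance_for_rule(rule, target_height=1.8, vfov_deg=45.0)
    print(f"framing rule: {rule}")
    print(f"place virtual camera about {dist:.2f} m from a 1.8 m tall target")
```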
-
Publication No.: US12182918B2
Publication Date: 2024-12-31
Application No.: US17847472
Filing Date: 2022-06-23
Inventor: Junyong Noh , Ha Young Chang , Kwanggyoon Seo , Jung Eun Yoo
Abstract: A method of training a deep neural network (DNN) for generating a cinemagraph is disclosed. The method may include preparing a foreground layer input by using an input video, preparing a background layer input by using the input video, providing a foreground layer output and a background layer output from the DNN by inputting, to the DNN, the foreground layer input and the background layer input, providing an output video by synthesizing the foreground layer output with the background layer output, and updating intrinsic parameters of the DNN a plurality of times, based on the input video, the foreground layer input, the foreground layer output, the background layer output, and the output video.
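As a rough illustration of the training loop this abstract outlines, here is a short PyTorch sketch: two tiny per-layer networks, an alpha-composite step that synthesizes the output video, and repeated parameter updates against a reconstruction loss. The network architecture, the compositing rule, and the MSE objective are assumptions for illustration only and are not specified by the abstract.

```python
# Minimal two-layer cinemagraph training-loop sketch (assumed details, not the
# patented method): per-layer nets produce RGBA outputs, which are composited
# into an output video and compared against the input video.
import torch
import torch.nn as nn

class LayerNet(nn.Module):
    """Toy per-layer network: maps a video frame to an RGBA layer output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 4, 3, padding=1), nn.Sigmoid(),  # RGB + alpha
        )
    def forward(self, x):
        return self.net(x)

def composite(fg, bg):
    """Alpha-composite the foreground layer output over the background layer output."""
    fg_rgb, alpha = fg[:, :3], fg[:, 3:4]
    return alpha * fg_rgb + (1 - alpha) * bg[:, :3]

fg_net, bg_net = LayerNet(), LayerNet()
optimizer = torch.optim.Adam(list(fg_net.parameters()) + list(bg_net.parameters()), lr=1e-3)

input_video = torch.rand(8, 3, 64, 64)  # placeholder batch of frames
fg_input = input_video                  # "foreground layer input" prepared from the video
bg_input = input_video                  # "background layer input" prepared from the video

for step in range(100):                 # update intrinsic parameters a plurality of times
    fg_out, bg_out = fg_net(fg_input), bg_net(bg_input)
    output_video = composite(fg_out, bg_out)
    loss = nn.functional.mse_loss(output_video, input_video)  # assumed reconstruction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```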
-
Publication No.: US20180053304A1
Publication Date: 2018-02-22
Application No.: US15291814
Filing Date: 2016-10-12
Inventor: Jun Yong Noh , Jae Dong S. Kim , Hyung Goog Seo , Sang Hun Park , Seung Hoon Cha , Jung Eun Yoo
CPC classification number: H04N13/296 , G06T7/33 , G06T7/344 , G06T7/75 , G06T2207/10028 , G06T2207/30244 , H04N13/204 , H04N13/246 , H04N13/254 , H04N13/271
Abstract: Disclosed herein are a method and apparatus for detecting a relative camera position based on skeleton data, wherein the method may include receiving skeleton information obtained using a plurality of depth cameras; detecting a position relationship between corresponding joints from the received skeleton information; and obtaining relative position and rotation information between the depth cameras using the position relationship between the detected joints.
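The joint-correspondence step in this abstract can be read as a rigid point-set registration problem. The sketch below solves it with a standard Kabsch/SVD alignment in NumPy; treating the joints this way, and the synthetic test at the bottom, are assumptions for illustration rather than the patented procedure.

```python
# Sketch: recover the relative pose between two depth cameras from corresponding
# skeleton joints via Kabsch/SVD rigid alignment (an assumed formulation).
import numpy as np

def relative_camera_pose(joints_a: np.ndarray, joints_b: np.ndarray):
    """Find R, t such that joints_b ~= R @ joints_a + t.

    joints_a, joints_b: (N, 3) arrays of the same skeleton joints expressed in
    camera A's and camera B's coordinate frames, respectively.
    """
    centroid_a = joints_a.mean(axis=0)
    centroid_b = joints_b.mean(axis=0)
    # Cross-covariance of the centered joint sets.
    H = (joints_a - centroid_a).T @ (joints_b - centroid_b)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = centroid_b - R @ centroid_a
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    joints_a = rng.normal(size=(20, 3))                 # synthetic skeleton joints
    angle = 0.3                                         # ground-truth rotation about z
    true_R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    true_t = np.array([0.5, -0.2, 1.0])
    joints_b = joints_a @ true_R.T + true_t
    R, t = relative_camera_pose(joints_a, joints_b)
    print("rotation error:", np.abs(R - true_R).max())
    print("translation error:", np.abs(t - true_t).max())
```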
-