-
Publication Number: US20180181802A1
Publication Date: 2018-06-28
Application Number: US15392597
Application Date: 2016-12-28
Applicant: Adobe Systems Incorporated
Inventor: ZHILI CHEN , DUYGU CEYLAN , BYUNGMOON KIM , LIWEN HU , JIMEI YANG
CPC classification number: G06K9/00369 , G06K9/00201 , G06K9/4628 , G06K9/6267 , G06T7/50 , G06T7/73 , G06T2207/10012 , G06T2207/20081 , G06T2207/20084
Abstract: Certain embodiments involve recognizing combinations of body shape, pose, and clothing in three-dimensional input images. For example, synthetic training images are generated based on user inputs. These synthetic training images depict different training figures with respective combinations of a body pose, a body shape, and a clothing item. A machine learning algorithm is trained to recognize the pose-shape-clothing combinations in the synthetic training images and to generate feature descriptors describing the pose-shape-clothing combinations. The trained machine learning algorithm is outputted for use by an image manipulation application. In one example, an image manipulation application uses a feature descriptor, which is generated by the machine learning algorithm, to match an input figure in an input image to an example image based on a correspondence between a pose-shape-clothing combination of the input figure and a pose-shape-clothing combination of an example figure in the example image.
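The matching step described above (comparing an input figure's feature descriptor against descriptors of example figures) can be sketched as a nearest-neighbor search by descriptor similarity. This is a minimal illustration, not the patented method: the descriptor vectors, their dimensionality, and the cosine-similarity metric are assumptions; in the described system the descriptors would come from the trained machine learning algorithm.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature-descriptor vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_example(input_descriptor, example_descriptors):
    # Return the index of the example figure whose pose-shape-clothing
    # descriptor is most similar to the input figure's descriptor.
    scores = [cosine_similarity(input_descriptor, d)
              for d in example_descriptors]
    return max(range(len(scores)), key=lambda i: scores[i])

# Toy 3-D descriptors; real descriptors would be high-dimensional
# outputs of the trained network.
examples = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.7, 0.7, 0.0]]
query = [0.9, 0.1, 0.0]
print(match_example(query, examples))  # most similar: index 0
```

The choice of similarity metric is illustrative; any distance in descriptor space that reflects pose-shape-clothing correspondence would serve the same role.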
-
Publication Number: US20180234669A1
Publication Date: 2018-08-16
Application Number: US15433333
Application Date: 2017-02-15
Applicant: ADOBE SYSTEMS INCORPORATED
Inventor: ZHILI CHEN , DUYGU CEYLAN AKSIT , JINGWEI HUANG , HAILIN JIN
CPC classification number: H04N13/117 , G06F3/012 , G06T15/20 , H04N5/23238 , H04N13/144 , H04N13/207 , H04N13/221 , H04N13/344 , H04N13/366 , H04N13/373 , H04N13/376 , H04N13/378 , H04N13/38
Abstract: Systems and methods are provided for delivering a stereoscopic six-degrees-of-freedom viewing experience from a monoscopic 360-degree video. A monoscopic 360-degree video of a subject scene can be preprocessed by analyzing each frame to recover a three-dimensional geometric representation of the subject scene, and further recover a camera motion path that includes various parameters associated with the camera, such as orientation, translational movement, and the like, as evidenced by the recording. Utilizing the recovered three-dimensional geometric representation of the subject scene and recovered camera motion path, a dense three-dimensional geometric representation of the subject scene is generated utilizing random assignment and propagation operations. Once preprocessing is complete, the processed video can be provided for stereoscopic display via a device, such as a head-mounted display. As user motion data is detected and received, novel viewpoints can be stereoscopically synthesized for presentation to the user in real time, so as to provide an immersive virtual reality experience to the user based on the original monoscopic 360-degree video and the user's detected movement(s).
-