Abstract:
A user terminal apparatus is provided. The user terminal apparatus includes a camera; a storage; a display; and a controller configured to control the camera to photograph an object, identify an object image from an image photographed by the camera, generate image metadata used to change a feature part of the object image based on the identified object image, control the storage to store a background image matched with the object image and the image metadata, and, in response to receiving a user command, control the display to display the object image overlapped with the background image and change the feature part of the object image based on the image metadata.
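As a rough illustration only (not the claimed implementation), the following Python sketch shows one hypothetical way an object image, a background image, and feature-part metadata could be stored together and later composited on a user command; all type names, fields, and the placeholder display logic are assumptions.

```python
# A minimal sketch, not the patented implementation: hypothetical structures for
# keeping an object image matched with its background image and feature-part metadata.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class FeaturePartMetadata:
    """Describes how a feature part (e.g. a mouth region) of the object image may change."""
    region: Tuple[int, int, int, int]                   # (x, y, width, height) of the feature part
    frames: List[bytes] = field(default_factory=list)   # alternative pixel data for that region


@dataclass
class StoredCapture:
    """Object image stored together with a background image and its image metadata."""
    object_image: bytes
    background_image: bytes
    metadata: FeaturePartMetadata


def display_on_command(capture: StoredCapture, frame_index: int) -> None:
    """On a user command, overlap the object image with the background image and
    swap in the feature-part pixels selected via the metadata (placeholder logic)."""
    region = capture.metadata.region
    print(f"Compositing object over background; updating region {region} "
          f"with metadata frame {frame_index}")
```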
Abstract:
Provided is a display device including: left and right displays that are spaced apart from each other; left and right lens groups respectively located at the rear of the left display and the rear of the right display; left and right display housings configured to surround the left and right displays and the left and right lens groups; a main frame provided between the left and right display housings; a connection structure capable of adjusting an angle between the main frame and a connector electrically connected to an external electronic device; and electronic components provided inside the main frame.
Abstract:
A method of processing an image by a device is provided. The method includes obtaining one or more images including captured images of objects in a target space, generating metadata including information about mapping between the one or more images and a three-dimensional (3D) mesh model used to generate a virtual reality (VR) image of the target space, and transmitting the one or more images and the metadata to a terminal.
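For illustration, a minimal sketch of what the image-to-mesh mapping metadata might look like, assuming a simple JSON-style layout; the file names, field names, and the serialize_for_terminal helper are hypothetical and not taken from the abstract.

```python
# A minimal sketch, assuming a JSON-style layout (not specified by the abstract):
# metadata mapping each captured image to faces of a 3D mesh model of the target space.
import json

metadata = {
    "mesh_model": "room_mesh.obj",             # hypothetical mesh file name
    "mappings": [
        {
            "image": "wall_north.jpg",         # hypothetical captured image
            "face_ids": [12, 13, 14],          # mesh faces textured by this image
            "uv_offsets": [[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]],
        },
        {
            "image": "floor.jpg",
            "face_ids": [40, 41],
            "uv_offsets": [[0.0, 0.0], [1.0, 1.0]],
        },
    ],
}


def serialize_for_terminal(images, meta):
    """Bundle the captured images and the mapping metadata for transmission to a terminal."""
    return {"images": images, "metadata": json.dumps(meta)}


payload = serialize_for_terminal(["wall_north.jpg", "floor.jpg"], metadata)
```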
Abstract:
A display apparatus is provided. The display apparatus includes: an inputter configured to receive a user's face shape and voice; a voice processor configured to analyze the input voice and extract translated data, and convert the translated data into translated voice; an image processor configured to detect information related to a mouth area of the user's face shape which corresponds to the translated data, and create a changed shape of a user's face based on the detected information related to the mouth area; and an outputter configured to output the translated voice and the changed shape of the user's face.
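The stages named in this abstract (voice translation, mouth-area detection, face reshaping) could be wired together as in the following placeholder pipeline; every function here is a hypothetical stub for illustration, not the patented processing.

```python
# A minimal pipeline sketch with assumed function names; it mirrors the stages in the
# abstract without any real speech or vision models.
def translate_voice(voice_samples, target_language):
    """Analyze the input voice and return translated data (placeholder)."""
    return {"text": "hello", "language": target_language}


def detect_mouth_area(face_image, translated_data):
    """Return mouth-area information corresponding to the translated data (placeholder)."""
    return {"region": (120, 200, 60, 30), "shape": "open"}


def reshape_face(face_image, mouth_info):
    """Create a changed face shape from the detected mouth-area information (placeholder)."""
    return face_image  # a real system would warp the mouth region here


def process(face_image, voice_samples, target_language="en"):
    translated = translate_voice(voice_samples, target_language)
    mouth_info = detect_mouth_area(face_image, translated)
    changed_face = reshape_face(face_image, mouth_info)
    translated_voice = translated["text"]      # placeholder for synthesized audio
    return translated_voice, changed_face
```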
Abstract:
A method of processing an image in a device, and the device thereof, are provided. The method includes: determining a distortion correction ratio of each of a plurality of vertices included in a source image, based on information about a lens through which the source image is projected; determining corrected location information of pixels located between the plurality of vertices, based on the distortion correction ratio of each of the plurality of vertices and interpolation ratios of the pixels; and rendering a distortion-corrected image including pixels determined as a result of performing interpolation on the plurality of vertices based on the corrected location information.
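A minimal numeric sketch of the idea, assuming a simple radial lens model and linear interpolation between two vertices; the abstract specifies neither, so the coefficient k1 and the formulas below are illustrative assumptions.

```python
# A minimal sketch, assuming a radial distortion model and linear interpolation;
# not the method claimed in the abstract.
def distortion_correction_ratio(vertex, k1=0.1):
    """Per-vertex correction ratio from a hypothetical radial lens coefficient k1."""
    x, y = vertex
    r2 = x * x + y * y                       # squared distance from the optical center
    return 1.0 / (1.0 + k1 * r2)


def corrected_pixel_location(v0, v1, t, k1=0.1):
    """Corrected location of a pixel between two vertices: interpolate the pixel position
    (interpolation ratio t) and the two vertex correction ratios, then scale."""
    ratio = (1.0 - t) * distortion_correction_ratio(v0, k1) + t * distortion_correction_ratio(v1, k1)
    x = (1.0 - t) * v0[0] + t * v1[0]
    y = (1.0 - t) * v0[1] + t * v1[1]
    return (x * ratio, y * ratio)


# Example: a pixel one quarter of the way between two mesh vertices.
print(corrected_pixel_location((-0.5, 0.0), (0.5, 0.0), 0.25))
```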
Abstract:
A user terminal apparatus is provided. The user terminal apparatus includes a camera unit configured to photograph an object, a controller configured to detect an object image from an image of the object photographed by the camera unit, generate image metadata used to change a feature part of the object image, and generate an image file by matching the object image with the image metadata, a storage configured to store the image file, and a display configured to, in response to the image file being selected, display the object image in which the feature part is changed based on the image metadata.
Abstract:
Provided is a method of processing an image, the method including: obtaining rotation information with respect to each of a plurality of regions included in a 360-degree image; determining representative rotation information indicating movement of a capturing device, the movement occurring when capturing the 360-degree image, based on the rotation information of each of the plurality of regions; and correcting distortion of the 360-degree image based on the determined representative rotation information.
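One way to read this method is sketched below, assuming per-region rotation is expressed as (yaw, pitch, roll) angles and the representative rotation is their simple average; neither choice is fixed by the abstract, and the correction step is a placeholder.

```python
# A minimal sketch, assuming (yaw, pitch, roll) rotation tuples per region and a simple
# average as the representative rotation; not the claimed algorithm.
from typing import List, Tuple

Rotation = Tuple[float, float, float]  # (yaw, pitch, roll) in degrees


def representative_rotation(region_rotations: List[Rotation]) -> Rotation:
    """Summarize the capturing device's movement across all regions of the 360-degree image."""
    n = len(region_rotations)
    return tuple(sum(r[i] for r in region_rotations) / n for i in range(3))


def correct_distortion(image, rotation: Rotation):
    """Counter-rotate the 360-degree image by the representative rotation (placeholder)."""
    yaw, pitch, roll = rotation
    print(f"Re-projecting image with yaw={-yaw:.2f}, pitch={-pitch:.2f}, roll={-roll:.2f}")
    return image


rep = representative_rotation([(1.0, 0.5, 0.0), (1.2, 0.4, 0.1), (0.8, 0.6, -0.1)])
correct_distortion(None, rep)
```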
Abstract:
A virtual reality display apparatus and a display method thereof are provided. The display method includes displaying a virtual reality image; acquiring object information regarding a real-world object based on a binocular view of a user; and displaying the acquired object information together with the virtual reality image.
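As a hedged example only, stereo disparity between the left- and right-eye views is one plausible way to derive object information such as distance; the sketch below assumes that reading and uses made-up parameter values, none of which come from the abstract.

```python
# A minimal sketch, assuming object information is a distance estimated from binocular
# disparity; the baseline and focal length are illustrative values.
def object_distance_from_disparity(disparity_px, baseline_m=0.063, focal_px=1400.0):
    """Rough stereo depth: distance = baseline * focal_length / disparity."""
    return baseline_m * focal_px / disparity_px


def display_with_object_info(vr_frame, object_label, disparity_px):
    """Attach the acquired object information to the virtual reality image (placeholder)."""
    distance = object_distance_from_disparity(disparity_px)
    overlay = f"{object_label}: {distance:.2f} m ahead"
    print(f"Rendering VR frame with overlay: '{overlay}'")
    return vr_frame


display_with_object_info(None, "coffee cup", 35.0)
```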