Abstract:
The present disclosure relates to an electronic device for capturing a plurality of images using a plurality of cameras, generating a left-eye-view spherical image and a right-eye-view spherical image by classifying each of the plurality of images as a left-eye-view image or a right-eye-view image, obtaining depth information using the generated left-eye-view spherical image and right-eye-view spherical image, and generating a 360-degree three-dimensional image whose three-dimensional effect is controlled using the obtained depth information, and to an image processing method therefor.
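A minimal sketch of the described pipeline, assuming the caller supplies a spherical stitching function and a depth map obtained from the stereo pair; the function names, the disparity formula, and the strength parameter are illustrative and not taken from the disclosure.

    import numpy as np

    def build_stereo_spheres(images, eye_labels, stitch):
        # Classify each captured image as a left- or right-eye view and stitch
        # each group into a spherical (e.g. equirectangular) panorama.
        # `stitch` is an assumed stitching function supplied by the caller.
        left = [img for img, eye in zip(images, eye_labels) if eye == "left"]
        right = [img for img, eye in zip(images, eye_labels) if eye == "right"]
        return stitch(left), stitch(right)

    def control_3d_effect(depth_map, strength):
        # Convert the depth information obtained from the stereo spherical pair
        # into per-pixel disparity scaled by `strength` (illustrative formula:
        # closer pixels get larger disparity, i.e. a stronger 3D effect).
        depth = np.maximum(np.asarray(depth_map, dtype=float), 1e-6)
        return strength / depth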
Abstract:
Provided are a display apparatus, a control method thereof, a server, and a control method thereof. The display apparatus includes: a processor which processes a signal; a display which displays an image based on the processed signal; a first command receiver which receives a voice command; a storage which stores a plurality of voice commands spoken by a user; a second command receiver which receives a user's manipulation command; and a controller which, upon receiving the voice command, displays a list of the stored plurality of voice commands, selects one of the plurality of voice commands of the list according to the received user's manipulation command, and controls the processor to perform processing based on the selected voice command.
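A minimal control-flow sketch of the abstract above, assuming storage, display, and processor objects with the illustrative method names shown; none of these names come from the disclosure.

    class VoiceCommandController:
        def __init__(self, storage, display, processor):
            self.storage = storage      # holds voice commands the user spoke before
            self.display = display
            self.processor = processor

        def on_voice_command(self, _voice):
            # A received voice command triggers display of the stored command list.
            commands = self.storage.load_commands()   # assumed accessor
            self.display.show_list(commands)          # assumed accessor
            return commands

        def on_manipulation_command(self, commands, selected_index):
            # A manipulation command (e.g. a remote-control key) picks one entry,
            # and the processor is driven by the selected voice command.
            chosen = commands[selected_index]
            self.processor.process(chosen)            # assumed accessor
            return chosen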
Abstract:
A method of processing an image in a device, and the device therefor, are provided. The method includes determining a distortion correction ratio of each of a plurality of vertices included in a source image, based on information about a lens through which the source image is projected; determining corrected location information of pixels located between the plurality of vertices, based on the distortion correction ratio of each of the plurality of vertices and interpolation ratios of the pixels; and rendering a distortion-corrected image including pixels determined as a result of performing interpolation on the plurality of vertices based on the corrected location information.
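A minimal sketch of the per-pixel interpolation step along one edge of the vertex mesh, assuming a simple multiplicative correction model and linear blending of the per-vertex ratios; the actual lens-dependent correction model is not specified in the abstract.

    def corrected_pixel_location(v0, v1, r0, r1, t):
        # v0, v1: 2D positions of two neighbouring vertices of the source image.
        # r0, r1: their distortion-correction ratios (derived from lens info).
        # t: the pixel's interpolation ratio in [0, 1] between v0 and v1.
        x0, y0 = v0
        x1, y1 = v1
        # Blend the per-vertex correction ratios at the pixel's position.
        r = (1.0 - t) * r0 + t * r1
        # Apply the blended ratio to the interpolated (uncorrected) location.
        x = ((1.0 - t) * x0 + t * x1) * r
        y = ((1.0 - t) * y0 + t * y1) * r
        return x, y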
Abstract:
A server is disclosed. The server, which provides content to a user terminal device providing a virtual reality service, comprises: a communication unit for performing communication with at least one source device and the user terminal device; and a processor for, when a content transmission request for a preconfigured location is received from the user terminal device, receiving content photographed in real time from a source device at the preconfigured location on the basis of location information received from the at least one source device, and providing the content to the user terminal device.
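A minimal server-side sketch, assuming each source device is represented by a dictionary with location and stream fields; exact-equality matching of locations is an assumption, and a real system would more likely use a distance threshold.

    def pick_live_stream(requested_location, source_devices):
        # Select the source device whose reported location matches the
        # preconfigured location in the user terminal's request and return
        # its live stream handle (here just a URL string).
        for device in source_devices:
            if device["location"] == requested_location:
                return device["stream_url"]
        return None   # no source device at the requested location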
Abstract:
Disclosed is an electronic device. The electronic device includes a display, a communicator comprising communication circuitry configured to communicate with an advertisement server, a storage configured to store identification information of an advertisement received from the advertisement server and image data corresponding to the identification information, and a processor configured to transmit event occurrence information to the advertisement server in response to an event to display a first screen on the display occurring, and, in response to identification information of an advertisement corresponding to the event occurrence information being received from the advertisement server, to extract the stored image data corresponding to the received identification information and provide the extracted image data on the first screen.
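A minimal sketch of the caching flow, assuming an ad-server client that returns an advertisement ID when an event is reported, a storage object mapping IDs to image data, and a display object; the method names are illustrative.

    class AdImageCache:
        def __init__(self, ad_server, storage, display):
            self.ad_server = ad_server
            self.storage = storage    # maps advertisement ID -> image data
            self.display = display

        def on_first_screen_event(self, event_info):
            # Report the event to the advertisement server and receive the
            # identification information of the matching advertisement.
            ad_id = self.ad_server.report_event(event_info)   # assumed call
            # Extract the locally stored image data for that ID and show it
            # on the first screen.
            image_data = self.storage.get(ad_id)
            if image_data is not None:
                self.display.show_on_first_screen(image_data) # assumed call
            return image_data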
Abstract:
The display apparatus includes a display; a user input interface configured to receive a user input; and a controller configured to transmit a search query input through the user input interface to a server, to classify search result data on the search query received from the server into a plurality of categories based on a content type, and to display the search result data on the display according to the classified categories, wherein the controller determines at least one of a display order of the classified categories and a number of displayed search results of each category based on a relevance to a content currently displayed on the display.
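A minimal sketch of the grouping and ordering logic, assuming the caller supplies a relevance scorer for a category (with respect to the currently displayed content) and a function mapping a relevance score to the number of items to show; both callables are assumptions, as the abstract does not define them.

    from collections import defaultdict

    def arrange_search_results(results, relevance, display_count):
        # Group result items by content type (each item is assumed to be a
        # dict with a "content_type" key).
        groups = defaultdict(list)
        for item in results:
            groups[item["content_type"]].append(item)
        # Order the categories by relevance to the currently displayed content
        # and cap how many results each category shows.
        ordered = sorted(groups, key=relevance, reverse=True)
        return [(cat, groups[cat][:display_count(relevance(cat))])
                for cat in ordered]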
Abstract:
Apparatuses and methods related to a voice recognition system, a voice recognition server, and a control method of a display apparatus are provided. More particularly, the apparatuses and methods relate to a voice recognition system which performs a voice recognition function by using at least one of a current usage status with respect to the display apparatus and a function that is currently performed by the display apparatus. A voice recognition system includes: a voice receiver which receives a voice command; and a controller which determines at least one from among a current usage status with respect to a display apparatus and a function currently performed by the display apparatus, determines an operation corresponding to the received voice command by using at least one from among the determined current usage status and the function currently performed by the display apparatus, and performs the determined operation.
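A minimal sketch of the context-dependent lookup, assuming the mapping from (command, usage status, current function) to operations is given as a plain dictionary; the table layout and fallback behaviour are illustrative only.

    def resolve_operation(voice_command, usage_status, current_function, command_table):
        # Prefer an operation registered for the apparatus's current usage
        # status and currently performed function; fall back to a context-free
        # mapping for the same utterance if none exists.
        # e.g. "up" may mean "channel up" while watching TV but "move focus up"
        # while a menu is shown.
        key = (voice_command, usage_status, current_function)
        return command_table.get(key,
                                 command_table.get((voice_command, None, None)))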
Abstract:
A method of processing an image by a device includes obtaining one or more images including captured images of objects in a target space, generating metadata including information about mapping between the one or more images and a three-dimensional (3D) mesh model used to generate a virtual reality (VR) image of the target space, and transmitting the one or more images and the metadata to a terminal.
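A minimal sketch of the metadata the device might transmit alongside the images, assuming a JSON encoding; the field names ("mesh_model", "mappings", "mesh_region") are illustrative and not taken from the disclosure.

    import json

    def build_mapping_metadata(image_ids, mesh_model_id, mesh_regions):
        # Tie each captured image to the region of the 3D mesh model it maps
        # onto, so the terminal can assemble the VR image of the target space.
        return json.dumps({
            "mesh_model": mesh_model_id,
            "mappings": [
                {"image": image_id, "mesh_region": region}
                for image_id, region in zip(image_ids, mesh_regions)
            ],
        })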
Abstract:
A display apparatus includes a display, a voice collector configured to collect a user's voice, a communication interface configured to provide the collected voice and filtering information of the display apparatus to an interactive server, and a controller configured to receive response information corresponding to the voice and to the filtering information from the interactive server, and to control the display to display the response information.
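A minimal sketch of the request/response exchange, assuming an interactive-server client with a request method and a display object with a show method; both interfaces and the payload keys are assumptions.

    def handle_user_voice(collected_voice, filtering_info, interactive_server, display):
        # Send the collected voice together with the display apparatus's
        # filtering information, then display the response information the
        # interactive server returns for that combination.
        response = interactive_server.request({
            "voice": collected_voice,
            "filtering_info": filtering_info,
        })
        display.show(response)
        return response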
Abstract:
An image transformation apparatus includes a detection unit which is configured to detect, from each of a user image and a reference image, feature points of a face and angle information of the face, a feature points adjusting unit which is configured to adjust the feature points of the user image or the reference image by using the detected angle information, a face analysis unit which is configured to compare facial features contained in the user image and the reference image by using the adjusted feature points, and an image transformation unit which is configured to transform the user image by using a result of the comparison of the facial features from the face analysis unit.
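A minimal sketch of the four stages described above, assuming the caller supplies a face detector (returning feature points and a face angle), a point-rotation helper, and a warping function; all three are assumptions, as the abstract does not specify them.

    import numpy as np

    def transform_user_image(user_img, ref_img, detect_face, rotate_points, warp):
        # Detection: feature points and face-angle information for each image.
        user_pts, user_angle = detect_face(user_img)
        ref_pts, ref_angle = detect_face(ref_img)
        # Adjustment: rotate the reference points so both faces share a pose.
        ref_aligned = rotate_points(ref_pts, user_angle - ref_angle)
        # Analysis: compare facial features via the adjusted feature points.
        offsets = np.asarray(user_pts) - np.asarray(ref_aligned)
        # Transformation: warp the user image using the comparison result.
        return warp(user_img, offsets)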