Abstract:
A method for controlling a video segmentation apparatus is provided. The method includes receiving an image corresponding to a frame of a video; estimating a motion of an object to be extracted from the received image; determining a plurality of positions of windows corresponding to the object; adjusting at least one of a size and a spacing of at least one window located at one of the plurality of determined positions based on an image characteristic; and extracting the object from the received image based on the at least one window of which the at least one of the size and the spacing is adjusted.
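
As a minimal illustration of the adjustment step, the Python sketch below shrinks windows where local edge strength is high and enlarges them where it is low. Treating the "image characteristic" as edge strength is an assumption, as are all names and size limits; the abstract does not fix these details.

    import numpy as np

    def adjust_windows(gray, positions, base_size=32):
        """Shrink windows over strong edges, enlarge them over weak ones (assumed heuristic)."""
        gy, gx = np.gradient(gray.astype(float))
        edge = np.hypot(gx, gy)                      # per-pixel edge strength
        windows = []
        for (y, x) in positions:
            half = base_size // 2
            patch = edge[max(0, y - half):y + half, max(0, x - half):x + half]
            strength = patch.mean() / (edge.mean() + 1e-6)
            size = int(np.clip(base_size / max(strength, 0.5), 16, 64))
            windows.append((y, x, size))
        return windows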
Abstract:
A method and apparatus are provided for processing information about an omni-directional image. The method includes generating a first two-dimensional (2D) image projected from a first omni-directional image, by setting points on the first omni-directional image, which intersect a straight line passing through a first position that is a center of the first omni-directional image and a second position that is a center of a second omni-directional image, to a first pole and a second pole, generating a second 2D image projected from the second omni-directional image, by setting points on the second omni-directional image, which intersect the straight line passing through the first position and the second position, to a third pole and a fourth pole, and generating a third 2D image corresponding to a 2D image projected from a third omni-directional image centered at a third position between the first position and the second position, based on the first 2D image and the second 2D image.
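
The pole-setting step amounts to re-projecting each omni-directional image so that its poles lie on the axis through the two sphere centers. The Python sketch below does this with a Rodrigues rotation and nearest-neighbor resampling; the equirectangular input format, the function names, and the sampling scheme are assumptions for illustration, not taken from the abstract.

    import numpy as np

    def rotation_to_pole(axis):
        """Rodrigues rotation taking unit vector `axis` onto the +z axis."""
        axis = axis / np.linalg.norm(axis)
        z = np.array([0.0, 0.0, 1.0])
        v = np.cross(axis, z)
        c = float(axis @ z)
        if np.isclose(c, 1.0):
            return np.eye(3)
        if np.isclose(c, -1.0):
            return np.diag([1.0, -1.0, -1.0])        # 180-degree flip
        vx = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    def reproject(erp, center_a, center_b):
        """Resample an equirectangular image so its poles lie on the a->b axis."""
        h, w = erp.shape[:2]
        R = rotation_to_pole(np.asarray(center_b, float) - np.asarray(center_a, float))
        v, u = np.mgrid[0:h, 0:w]
        lon = (u + 0.5) / w * 2.0 * np.pi - np.pi
        lat = np.pi / 2.0 - (v + 0.5) / h * np.pi
        dirs = np.stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)], axis=-1)
        src = dirs @ R                               # apply R^-1 to each output direction
        lon_s = np.arctan2(src[..., 1], src[..., 0])
        lat_s = np.arcsin(np.clip(src[..., 2], -1.0, 1.0))
        us = ((lon_s + np.pi) / (2.0 * np.pi) * w).astype(int) % w
        vs = np.clip(((np.pi / 2.0 - lat_s) / np.pi * h).astype(int), 0, h - 1)
        return erp[vs, us]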
Abstract:
A method for processing virtual reality (VR) content by a content providing device includes identifying Cartesian coordinates of a first position on the VR content, estimating a movement of a user of the content providing device, identifying Cartesian coordinates of a second position by applying a matrix representing the estimated movement of the user to the Cartesian coordinates of the first position, converting the Cartesian coordinates of the second position into spherical coordinates of the second position, and providing an area corresponding to the spherical coordinates of the second position to the user.
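
The pipeline is a rotate-then-convert computation. The Python sketch below assumes the estimated movement arrives as a 3x3 rotation matrix and that positions are unit vectors on the viewing sphere; these, and all names, are assumptions for illustration.

    import numpy as np

    def to_spherical(p):
        """Cartesian unit vector -> (yaw, pitch) in radians."""
        x, y, z = p
        yaw = np.arctan2(y, x)
        pitch = np.arcsin(np.clip(z, -1.0, 1.0))
        return yaw, pitch

    def next_viewing_position(p_first, movement_matrix):
        p_second = movement_matrix @ p_first         # apply estimated movement
        return to_spherical(p_second)                # convert for area lookup

    # Example: a 90-degree yaw turn moves the forward direction (+x) to +y,
    # i.e. yaw = pi/2, pitch = 0.
    yaw90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    print(next_viewing_position(np.array([1.0, 0.0, 0.0]), yaw90))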
Abstract:
A master device providing an image to a slave device providing a virtual reality service is provided. The master device includes: a content input configured to receive an input stereoscopic image; a communicator configured to perform communication with the slave device providing the virtual reality service; and a processor configured to determine a viewpoint region corresponding to a motion state of the slave device in the input stereoscopic image on the basis of motion information received from the slave device, and to control the communicator to transmit an image of the determined viewpoint region to the slave device.
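
A rough Python sketch of selecting the viewpoint region follows, assuming the frame is stored in equirectangular projection and the motion information reduces to yaw and pitch angles; the projection, the field of view, and all names are assumptions, not taken from the abstract.

    import numpy as np

    def viewpoint_region(frame, yaw_deg, pitch_deg, fov_deg=90.0):
        """Crop the region the slave device is looking at (nearest-pixel crop)."""
        h, w = frame.shape[:2]
        cx = int((yaw_deg % 360.0) / 360.0 * w)
        cy = int(np.clip((90.0 - pitch_deg) / 180.0, 0.0, 1.0) * (h - 1))
        rw = int(fov_deg / 360.0 * w)                # region width in pixels
        rh = int(fov_deg / 180.0 * h)                # region height in pixels
        cols = np.arange(cx - rw // 2, cx + rw // 2) % w          # wrap longitude
        rows = np.clip(np.arange(cy - rh // 2, cy + rh // 2), 0, h - 1)
        return frame[np.ix_(rows, cols)]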
Abstract:
A wearable device that is configured to be worn on a body of a user and a control method thereof are provided. The wearable device includes an image projector configured to project a virtual user interface (UI) screen, a camera configured to capture an image, and a processor configured to detect a target area from the image captured by the camera, control the image projector to project the virtual UI screen, which corresponds to at least one of a shape and a size of the target area, onto the target area, and perform a function corresponding to a user interaction that is input through the virtual UI screen.
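
As a minimal sketch of matching the virtual UI screen to the target area, the code below assumes the detected area is reported as an axis-aligned bounding box and rescales the UI bitmap with nearest-neighbor sampling; the representation and all names are assumptions.

    import numpy as np

    def fit_ui_to_target(ui, target_box):
        """Resize `ui` (H x W x 3) to the (x, y, w, h) target box for projection."""
        x, y, w, h = target_box
        src_h, src_w = ui.shape[:2]
        rows = np.arange(h) * src_h // h             # nearest-neighbor row map
        cols = np.arange(w) * src_w // w             # nearest-neighbor column map
        return ui[np.ix_(rows, cols)], (x, y)        # scaled UI plus its top-left corner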
Abstract:
A photographing device includes a photographing unit, an image processor which separates an object from a first photographing image obtained by the photographing unit, a display which displays a background live view obtained by superimposing the separated object on a live view of a background, and a controller which obtains a second photographing image corresponding to the live view of the background when a command to shoot the background is input, and generates a composite image based on the separated object and the second photographing image.
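
The final compositing step can be sketched as a masked paste, assuming the separated object carries a binary mask; the abstract does not state the representation, and all names are hypothetical.

    import numpy as np

    def composite(object_rgb, object_mask, background_rgb):
        """Paste masked object pixels over the freshly shot background image."""
        mask = object_mask.astype(bool)[..., None]   # H x W x 1, broadcasts over RGB
        return np.where(mask, object_rgb, background_rgb)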
Abstract:
Provided is a method for transmitting data about an omnidirectional image by a server. The method comprises the steps of: receiving, from a terminal, information about a viewport of the terminal; selecting, on the basis of the information about the viewport and the respective qualities of a plurality of tracks associated with the omnidirectional image, at least one track among the plurality of tracks; and transmitting data about the selected at least one track to the terminal.
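
A rough sketch of the selection step follows, assuming each track advertises the yaw interval it covers and a numeric quality rank, and that the server prefers the highest-quality tracks overlapping the reported viewport. The track format, the overlap test (which ignores longitude wrap-around), and all names are assumptions for illustration.

    def select_tracks(tracks, viewport, max_tracks=4):
        """tracks: list of dicts with 'region' = (yaw_min, yaw_max) and 'quality'."""
        def overlaps(region):
            lo, hi = region
            v_lo, v_hi = viewport
            return not (hi < v_lo or lo > v_hi)
        # Keep tracks covering the viewport, best quality first.
        candidates = [t for t in tracks if overlaps(t['region'])]
        candidates.sort(key=lambda t: t['quality'], reverse=True)
        return candidates[:max_tracks]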
Abstract:
Methods and apparatuses are provided for transmitting information about an omni-directional image based on user motion information by a server. Motion parameters are received from an apparatus worn by a user for displaying an omni-directional image. User motion information is generated based on the received motion parameters. First packing information corresponding to a user position is generated based on the user motion information. Second packing information corresponding to a position in close proximity to the user position is generated based on the user motion information. Third packing information is generated based on the first packing information and the second packing information. At least one of the first packing information, the second packing information, and the third packing information is transmitted to the apparatus.
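
How the third packing information combines the first and second is not specified; the sketch below assumes packing information reduces to per-region quality weights that can be linearly interpolated for an intermediate position, which is a strong simplification made only for illustration.

    def blend_packing(first, second, t=0.5):
        """Interpolate per-region quality weights between two packings (assumed format)."""
        return {region: (1 - t) * first[region] + t * second[region]
                for region in first.keys() & second.keys()}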