Abstract:
A content recommendation method and device for recommending content to a user are disclosed. According to one embodiment, the content recommendation device extracts features of a user from image data, audio data, and the like, and determines a recognition rate indicating the degree to which the user is recognized as matching a user model predetermined according to those features. The content recommendation device can then determine the recommended content to be provided to the user on the basis of the determined recognition rate.
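As a rough illustration of the flow described above, the following Python sketch computes a recognition rate for each predetermined user model as a similarity between an extracted feature vector and a model prototype, then picks recommended content for the best-matching model. The feature values, model prototypes, and content catalog are hypothetical placeholders, not details from the disclosure.

    import math

    # Hypothetical user models: each maps to a prototype feature vector and a content list.
    USER_MODELS = {
        "sports_fan": {"prototype": [0.9, 0.1, 0.2], "content": ["match highlights", "league news"]},
        "movie_buff": {"prototype": [0.2, 0.8, 0.5], "content": ["new releases", "director interviews"]},
    }

    def recognition_rate(features, prototype):
        """Cosine similarity used here as a stand-in for the recognition rate."""
        dot = sum(f * p for f, p in zip(features, prototype))
        norm = math.sqrt(sum(f * f for f in features)) * math.sqrt(sum(p * p for p in prototype))
        return dot / norm if norm else 0.0

    def recommend(features):
        # Score every predetermined user model and keep the best match.
        rates = {name: recognition_rate(features, m["prototype"]) for name, m in USER_MODELS.items()}
        best = max(rates, key=rates.get)
        return best, rates[best], USER_MODELS[best]["content"]

    # Features extracted (hypothetically) from image/audio data of the user.
    model, rate, content = recommend([0.85, 0.15, 0.25])
    print(f"model={model}, recognition_rate={rate:.2f}, recommended={content}")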
Abstract:
An apparatus for processing a depth image using a relative angle between an image sensor and a target object includes an object image extractor to extract an object image from the depth image, a relative angle calculator to calculate a relative angle between an image sensor used to photograph the depth image and a target object corresponding to the object image, and an object image rotator to rotate the object image based on the relative angle and a reference angle.
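A minimal sketch of the pipeline this abstract outlines, assuming the depth image is a NumPy array, the object is segmented by a simple depth threshold, and the relative angle is estimated from the dominant in-plane orientation of the object pixels as a simplified stand-in for the sensor-to-object angle. The depth range and the reference angle of 0 degrees are illustrative assumptions only.

    import numpy as np
    from scipy import ndimage

    def extract_object_image(depth, near=0.5, far=2.0):
        """Keep only pixels whose depth falls in a (hypothetical) object range."""
        mask = (depth > near) & (depth < far)
        return np.where(mask, depth, 0.0), mask

    def relative_angle(mask):
        """Estimate the object's orientation from the principal axis of its pixels."""
        ys, xs = np.nonzero(mask)
        ys = ys - ys.mean()
        xs = xs - xs.mean()
        cov = np.cov(np.stack([xs, ys]))
        eigvals, eigvecs = np.linalg.eigh(cov)
        major = eigvecs[:, np.argmax(eigvals)]
        ang = np.degrees(np.arctan2(major[1], major[0]))
        return (ang + 90) % 180 - 90   # fold to (-90, 90]: orientation, not direction

    def rotate_object_image(obj_img, rel_angle, ref_angle=0.0):
        """Rotate the object image so its orientation matches the reference angle."""
        return ndimage.rotate(obj_img, rel_angle - ref_angle, reshape=False, order=0)

    # Synthetic depth image with a rectangular object at roughly 1 m depth.
    depth = np.full((100, 100), 3.0)
    depth[40:60, 20:80] = 1.0
    obj, mask = extract_object_image(depth)
    angle = relative_angle(mask)
    aligned = rotate_object_image(obj, angle)
    print(f"estimated relative angle: {angle:.1f} degrees")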
Abstract:
In an apparatus and method for controlling an interface, a user interface (UI) may be controlled using information on a hand motion and a gaze of a user, without separate tools such as a mouse and a keyboard. That is, the UI control method provides more intuitive, immersive, and unified control of the UI. Since a region of interest (ROI) for sensing the hand motion of the user is calculated, and the UI object is controlled based on the hand motion within the ROI, the user may control the UI object with the same method and feel regardless of the distance from the user to a sensor. In addition, since the positions and directions of view points are adjusted based on the position and direction of the gaze, a binocular 2D/3D image based on motion parallax may be provided.
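One way to read the ROI idea is that the sensing region is scaled with the user's distance, so the same relative hand travel maps to the same UI motion near or far from the sensor. The sketch below, with entirely hypothetical scaling constants, normalizes a hand position inside a distance-dependent ROI before mapping it to screen coordinates.

    # Hypothetical constants: the ROI grows linearly with the user's distance to the sensor.
    ROI_BASE_SIZE = 0.3      # metres at 1 m distance
    SCREEN_W, SCREEN_H = 1920, 1080

    def roi_for_distance(distance_m):
        """Return an ROI (width, height) in metres, scaled with distance to the sensor."""
        size = ROI_BASE_SIZE * distance_m
        return size, size * SCREEN_H / SCREEN_W

    def hand_to_screen(hand_x, hand_y, roi_center, distance_m):
        """Map a hand position (metres, sensor frame) inside the ROI to screen pixels."""
        roi_w, roi_h = roi_for_distance(distance_m)
        nx = (hand_x - roi_center[0]) / roi_w + 0.5   # 0..1 across the ROI
        ny = (hand_y - roi_center[1]) / roi_h + 0.5
        nx, ny = min(max(nx, 0.0), 1.0), min(max(ny, 0.0), 1.0)
        return int(nx * (SCREEN_W - 1)), int(ny * (SCREEN_H - 1))

    # The same relative hand motion lands on the same pixel at 1 m and at 2 m.
    print(hand_to_screen(0.10, 0.05, roi_center=(0.0, 0.0), distance_m=1.0))
    print(hand_to_screen(0.20, 0.10, roi_center=(0.0, 0.0), distance_m=2.0))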
Abstract:
A method of generating three-dimensional (3D) volumetric data may be performed by generating a multilayer image, generating volume information and a type of a visible part of an object based on the generated multilayer image, and generating volume information and a type of an invisible part of the object based on the generated multilayer image. The volume information and the type of each of the visible part and the invisible part may be generated based on the generated multilayer image, which may include at least one of a ray-casting-based multilayer image, a chroma key screen-based multilayer image, and a primitive template-based multilayer image.
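A minimal sketch of the overall data flow, assuming the multilayer image is a per-pixel stack of depth layers (the first layer being the visible front surface, deeper layers being invisible). Voxelization between consecutive layers stands in for generating volume information, and the "type" is reduced to a hypothetical per-voxel label; none of this reflects the specific layer generation methods named in the abstract.

    import numpy as np

    # Hypothetical multilayer image: for each pixel, entry/exit depths of the object
    # (layer 0 = visible front surface, layer 1 = hidden back surface).
    H, W, LAYERS = 4, 4, 2
    multilayer = np.zeros((H, W, LAYERS))
    multilayer[1:3, 1:3, 0] = 1.0   # front surface at depth 1.0
    multilayer[1:3, 1:3, 1] = 1.4   # back surface at depth 1.4

    VOXEL_SIZE = 0.1

    def generate_volume(multilayer):
        """Fill voxels between the visible (front) and invisible (back) layers."""
        visible, invisible = [], []
        for y in range(multilayer.shape[0]):
            for x in range(multilayer.shape[1]):
                front, back = multilayer[y, x]
                if back <= front:
                    continue  # no object at this pixel
                depths = np.arange(front, back, VOXEL_SIZE)
                visible.append((y, x, front, "visible-surface"))
                invisible.extend((y, x, d, "interior") for d in depths[1:])
        return visible, invisible

    vis, invis = generate_volume(multilayer)
    print(f"visible-part voxels: {len(vis)}, invisible-part voxels: {len(invis)}")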
Abstract:
An interactive method includes displaying image content received through a television (TV) network, identifying an object of interest of a user among a plurality of regions or a plurality of objects included in the image content, and providing additional information corresponding to the object of interest.
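The interaction loop in this abstract can be pictured as a small event-driven skeleton: display a frame, resolve which tagged region or object the user is pointing at, and look up additional information for it. The region metadata and lookup table below are purely hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        x: int
        y: int
        w: int
        h: int

    # Hypothetical per-frame metadata and additional-information store.
    FRAME_REGIONS = [Region("news_anchor", 100, 50, 200, 300), Region("score_banner", 0, 650, 1280, 70)]
    ADDITIONAL_INFO = {"news_anchor": "Profile and related stories", "score_banner": "Full league table"}

    def object_of_interest(px, py):
        """Return the region the user's pointer/gaze position falls into, if any."""
        for region in FRAME_REGIONS:
            if region.x <= px < region.x + region.w and region.y <= py < region.y + region.h:
                return region
        return None

    def provide_additional_info(px, py):
        region = object_of_interest(px, py)
        return ADDITIONAL_INFO.get(region.name, "No additional information") if region else None

    print(provide_additional_info(150, 200))   # inside the anchor region
    print(provide_additional_info(640, 680))   # inside the score banner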
Abstract:
An object recognition system is provided. The object recognition system for recognizing an object may include an input unit to receive, as an input, a depth image representing an object to be analyzed, and a processing unit to recognize a visible object part and a hidden object part of the object, from the depth image, by using a classification tree. The object recognition system may include a classification tree learning apparatus to generate the classification tree.
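As an illustrative reading of how a classification tree might label both a visible and a hidden object part from depth features, the tiny decision-tree sketch below uses a hand-written depth-difference test and leaf distributions over (visible part, hidden part) pairs. The feature, threshold, and part names are all hypothetical and do not come from the disclosure.

    # Hypothetical classification tree: internal nodes compare a depth-difference
    # feature against a threshold; leaves hold probabilities over (visible, hidden) part pairs.
    TREE = {
        "feature": "depth_diff",
        "threshold": 0.05,
        "left": {"leaf": {("torso_front", "torso_back"): 0.8, ("arm_front", "torso_back"): 0.2}},
        "right": {"leaf": {("hand_front", "arm_back"): 0.7, ("arm_front", "arm_back"): 0.3}},
    }

    def classify(node, features):
        """Walk the tree and return the most probable (visible part, hidden part) pair."""
        while "leaf" not in node:
            branch = "left" if features[node["feature"]] <= node["threshold"] else "right"
            node = node[branch]
        return max(node["leaf"], key=node["leaf"].get)

    visible_part, hidden_part = classify(TREE, {"depth_diff": 0.02})
    print(f"visible: {visible_part}, hidden: {hidden_part}")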
Abstract:
A processor-implemented method with video processing includes: determining a first image feature of a first image of video data and a second image feature of a second image that is previous to the first image; determining a time-domain information fusion processing result by performing time-domain information fusion processing on the first image feature and the second image feature; and determining a panoptic segmentation result of the first image based on the time-domain information fusion processing result.
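A skeletal PyTorch-style sketch of the described flow, assuming a generic convolutional backbone for per-frame features, a 1x1 convolution over concatenated current/previous features as the time-domain information fusion step, and a per-pixel classifier standing in for the panoptic segmentation head. Every module here is a structural assumption, not the architecture of the disclosure.

    import torch
    import torch.nn as nn

    class TemporalFusionSegmenter(nn.Module):
        def __init__(self, channels=32, num_classes=10):
            super().__init__()
            self.backbone = nn.Conv2d(3, channels, kernel_size=3, padding=1)   # per-frame feature extractor
            self.fusion = nn.Conv2d(2 * channels, channels, kernel_size=1)     # time-domain information fusion
            self.head = nn.Conv2d(channels, num_classes, kernel_size=1)        # stand-in segmentation head

        def forward(self, frame_t, frame_prev):
            feat_t = self.backbone(frame_t)            # first image feature
            feat_prev = self.backbone(frame_prev)      # second (previous) image feature
            fused = self.fusion(torch.cat([feat_t, feat_prev], dim=1))  # fusion processing result
            return self.head(fused).argmax(dim=1)      # per-pixel class map for the first image

    model = TemporalFusionSegmenter()
    frames = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    segmentation = model(*frames)
    print(segmentation.shape)   # torch.Size([1, 64, 64])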
Abstract:
A method and apparatus with image correction are provided. A processor-implemented method includes generating, using a neural network model provided with an input image, an illumination map including illumination values dependent on respective color casts by one or more illuminants individually affecting each pixel of the input image, and generating a white-adjusted image by removing at least a portion of the color casts from the input image using the generated illumination map.
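Independent of the network that would predict it, the second step of this abstract (removing per-pixel color casts with an illumination map) amounts to a per-pixel, per-channel division. The sketch below applies a given illumination map to an image; the hand-built map here is a stand-in for the neural network output, and the renormalization choice is an assumption.

    import numpy as np

    def white_adjust(image, illumination_map, eps=1e-6):
        """Remove per-pixel color casts by dividing out the estimated illumination.

        image:            H x W x 3 array in [0, 1]
        illumination_map: H x W x 3 array of per-pixel RGB illumination values
                          (in this sketch, a stand-in for the network's output)
        """
        adjusted = image / (illumination_map + eps)
        # Renormalize so the brightest value maps back to 1.0.
        return np.clip(adjusted / max(adjusted.max(), eps), 0.0, 1.0)

    rng = np.random.default_rng(0)
    image = rng.random((8, 8, 3))
    # Hypothetical illumination map: a warm cast on the left half, a cool cast on the right.
    illum = np.ones((8, 8, 3))
    illum[:, :4] = [1.0, 0.8, 0.6]
    illum[:, 4:] = [0.6, 0.8, 1.0]
    print(white_adjust(image, illum).shape)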
Abstract:
A device and method with object recognition are provided. In one general aspect, an electronic device includes a camera sensor configured to capture a first image of a scene, where the camera sensor is configured to perform at least one type of physical camera motion relative to the electronic device, the at least one type of physical camera motion including rolling, panning, tilting, or zooming the camera sensor relative to the electronic device, and a processor configured to control the camera sensor to perform a physical motion of the physical camera motion type based on detecting an object in the first image, acquire a second image captured using the camera sensor as adjusted based on the performed physical motion, and recognize the object in the second image.
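The control loop in this abstract can be summarized as: detect, physically adjust the camera toward the detection, re-capture, then recognize. The sketch below expresses that loop against a hypothetical camera and detector interface; none of these class or method names, nor the centering/zoom heuristics, come from the disclosure.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        cx: float   # detection center, normalized [0, 1]
        cy: float
        size: float  # fraction of the frame covered by the object

    class HypotheticalCamera:
        """Stand-in for a camera sensor that can pan/tilt/zoom relative to the device."""
        def capture(self):
            return "image"
        def pan(self, degrees): print(f"pan {degrees:+.1f} deg")
        def tilt(self, degrees): print(f"tilt {degrees:+.1f} deg")
        def zoom(self, factor): print(f"zoom x{factor:.2f}")

    def recognize_with_camera_motion(camera, detect, recognize, fov_deg=60.0):
        first = camera.capture()
        det = detect(first)
        if det is None:
            return None
        # Center the detected object, then zoom until it fills a reasonable fraction of the frame.
        camera.pan((det.cx - 0.5) * fov_deg)
        camera.tilt((det.cy - 0.5) * fov_deg)
        if det.size < 0.3:
            camera.zoom(0.3 / det.size)
        second = camera.capture()
        return recognize(second, det.label)

    result = recognize_with_camera_motion(
        HypotheticalCamera(),
        detect=lambda img: Detection("cat", cx=0.7, cy=0.4, size=0.1),
        recognize=lambda img, label: f"recognized {label}")
    print(result)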