Abstract:
A wearable device includes a display, one or more cameras, and at least one processor configured to identify, in at least one image, information on a target object and a visual object related to an external object corresponding to the target object. The at least one processor is configured to identify whether a first image including the visual object is displayed through the display. The at least one processor is configured to change the first image to emphasize the visual object, based on identifying that the first image including the visual object is displayed through the display. The at least one processor is configured to display, overlapping a second image, an affordance for changing a gaze of a user so that the first image is displayed, based on identifying that the second image, which is distinct from the first image including the visual object, is displayed through the display.
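The branch described above (emphasize when the visual object's image is on screen, otherwise overlay a gaze-guiding affordance) can be sketched as follows. This is an illustrative sketch only; the names `update_display`, `emphasize`, and `show_affordance` are assumptions, not from the disclosure.

```python
def update_display(displayed_image_id, image_with_visual_object_id,
                   emphasize, show_affordance):
    """Emphasize the visual object when the image containing it is displayed;
    otherwise overlay an affordance for changing the user's gaze."""
    if displayed_image_id == image_with_visual_object_id:
        emphasize()        # change the first image to emphasize the visual object
        return "emphasized"
    show_affordance()      # overlap the second image with a gaze-changing affordance
    return "affordance"
```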
Abstract:
The present disclosure relates to a server, an electronic device and their methods of transmitting or receiving image content. In one embodiment, a method performed by the server includes obtaining sensor information including orientation information detected by the electronic device connected to the server; obtaining state information regarding the server and the electronic device; identifying a size of a first partial image and a position of the first partial image, based on the sensor information and the state information; generating a first frame by encoding the first partial image, based on the identified size of the first partial image or identified position of the first partial image; and transmitting the generated first frame to the electronic device.
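The server-side flow above (choose the partial image's size and position from sensor and state information, then encode and transmit it) might be sketched as below. All names and the margin heuristic are assumptions for illustration; the disclosure does not specify how size and position are derived.

```python
def prepare_frame(sensor_info, state_info, render, encode):
    """Identify a partial image's size and position from sensor/state
    information, render it, and encode it into a frame (illustrative)."""
    # Assumed heuristic: render a larger partial image when the
    # server/device state is degraded, to tolerate stale orientation data.
    margin = 1.2 if state_info.get("degraded") else 1.0
    width, height = int(1920 * margin), int(1080 * margin)
    # Center the partial image on the reported orientation (as pixel offsets).
    x, y = sensor_info["orientation"]
    partial = render(x, y, width, height)
    return encode(partial)
```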
Abstract:
A terminal and method for controlling the terminal using a spatial gesture are provided. The terminal includes a sensing unit which detects a user gesture moving an object in a certain direction within proximity of the terminal, a control unit which determines at least one of a movement direction, movement speed, and movement distance of the user gesture and performs a control operation associated with a currently running application according to the determined at least one of the movement direction, movement speed, and movement distance of the user gesture, and a display unit which displays an execution screen of the application under the control of the control unit.
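Determining direction, speed, and distance from proximity samples, as the control unit above does, can be sketched as a small function. This is a minimal sketch assuming timestamped `(t, x, y)` samples; a real sensing unit would filter noise and handle multi-axis gestures.

```python
import math

def classify_gesture(samples):
    """Derive movement direction, speed, and distance from (t, x, y)
    proximity samples taken over the course of one gesture."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    distance = math.hypot(dx, dy)
    speed = distance / (t1 - t0) if t1 > t0 else 0.0
    # Dominant-axis rule: the larger displacement decides the direction.
    if abs(dx) >= abs(dy):
        direction = "right" if dx >= 0 else "left"
    else:
        direction = "down" if dy >= 0 else "up"
    return direction, speed, distance
```

A control unit could then map `("left", fast, long)` to, say, a page-flip in the running application.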
Abstract:
A method and an apparatus for operating an object in a user device having a touch screen are provided. The method includes displaying one or more objects on a screen, detecting a hovering input selecting an object, displaying the selected object distinguished from other objects in response to the hovering input, detecting a touch contact input related to the object selected by the hovering input, and operating the object selected by the hovering input when the touch contact input satisfies a condition.
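The hover-select/touch-confirm sequence above is essentially a two-state interaction. A minimal sketch, with illustrative names and with "the touch lands on the hovered object" standing in for the unspecified condition:

```python
class ObjectOperator:
    """Hover selects (and would highlight) an object; a subsequent touch
    that satisfies a condition operates it. Illustrative only."""

    def __init__(self):
        self.selected = None   # object distinguished from the others
        self.operated = None   # object actually operated

    def on_hover(self, obj):
        self.selected = obj

    def on_touch(self, obj):
        # Assumed condition: the touch contacts the hover-selected object.
        if self.selected is not None and obj == self.selected:
            self.operated = obj
            self.selected = None
```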
Abstract:
A wearable device including a processor is provided. The processor is configured to obtain posture information of the wearable device in a space including the wearable device, based on classification information for selecting at least one feature point from among pixels based on differences between pixels included in first frames output from a first camera, identify a resolution of each of a plurality of areas included in a field-of-view (FoV) formed based on a display, based on the number of feature points obtained in each of the plurality of areas by the classification information, and change a resolution corresponding to a first area among the plurality of areas, from among the resolutions identified based on gaze information indicating a gaze of a user wearing the wearable device, to a resolution larger than a resolution corresponding to a second area.
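The per-area resolution selection above (more feature points and the user's gaze both push an area toward higher resolution) can be sketched as follows. The function name, base/boost values, and the linear scaling are assumptions; the disclosure only states that the gazed-at area is changed to a larger resolution.

```python
def assign_resolutions(feature_counts, gaze_area, base=720, boost=1440):
    """Assign each FoV area a resolution scaled by its feature-point count,
    then raise the gazed-at area above the others (illustrative numbers)."""
    resolutions = {}
    for area, count in feature_counts.items():
        # More feature points in an area -> assign a higher resolution.
        resolutions[area] = base + 10 * count
    # The area the user is gazing at gets a larger resolution than the rest.
    resolutions[gaze_area] = max(resolutions[gaze_area], boost)
    return resolutions
```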
Abstract:
A semiconductor device includes a substrate having first to fourth regions, first to third active regions and a first dummy active region extending on the first to fourth regions, respectively, a first gate structure intersecting the first active region on the first region and including a first gate conductive layer, a second gate structure intersecting the second active region on the second region and including a second gate conductive layer, a third gate structure intersecting the third active region on the third region and including a third gate conductive layer, a first dummy gate structure intersecting the first dummy active region on the fourth region and including a first dummy gate conductive layer, and source/drain regions on the first to third active regions and on both sides of the first to third gate structures.
Abstract:
A method of transmitting video content by using an edge computing service (e.g., a multi-access edge computing (MEC) service) is provided. The method includes obtaining sensor information including orientation information and pupil position information from an electronic device connected to an edge data network, obtaining a first partial image including a user field-of-view image and an extra field-of-view image, the user field-of-view image corresponding to the orientation information, and the extra field-of-view image corresponding to the pupil position information, generating a first frame by encoding the first partial image, and transmitting the generated first frame to the electronic device.
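Combining the orientation-based user field of view with a pupil-biased extra field of view, as described above, might look like the sketch below. It works in angular degrees for simplicity (real pipelines work in projected pixels), and the field-of-view and margin values are assumptions.

```python
def partial_image_region(orientation_deg, pupil_offset, fov_deg=110, extra_deg=20):
    """Return the angular span of a partial image: the user field of view
    centered on the head orientation, plus an extra margin biased toward
    the pupil position. pupil_offset is -1 (far left) .. +1 (far right)."""
    half = fov_deg / 2
    # Allocate more of the extra field of view on the side the eyes favor.
    left_extra = extra_deg * (1 - pupil_offset) / 2
    right_extra = extra_deg * (1 + pupil_offset) / 2
    return (orientation_deg - half - left_extra,
            orientation_deg + half + right_extra)
```

Biasing the margin toward the pupil position means a quick eye movement is more likely to stay inside the already-encoded partial image.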
Abstract:
A method and a terminal for providing feedback in response to a user input related to a touch panel are provided. The method includes displaying an object on a screen, detecting a hovering of a touch input means with respect to the object, and providing visual feedback corresponding to a distance between the object and the touch input means in response to the hovering.
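Distance-dependent visual feedback, as described above, reduces to a mapping from hover distance to feedback intensity. A minimal sketch with an assumed linear mapping and an assumed 40 mm hover range:

```python
def hover_feedback_alpha(distance_mm, max_mm=40.0):
    """Map hover distance to a highlight opacity: the closer the touch
    input means, the stronger the visual feedback; none beyond range."""
    if distance_mm >= max_mm:
        return 0.0
    return round(1.0 - distance_mm / max_mm, 3)
```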
Abstract:
A background image is displayed on a touch screen of an electronic device. A semitransparent layer is displayed overlapping the background image. When a touch-and-drag action is detected on the semitransparent layer, the transparency of the touched-and-dragged region is changed. The transparency of the semitransparent layer may also be changed according to temperature or humidity.
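The two transparency behaviors above can be sketched with an opacity mask over the background image. The weather-to-opacity mapping is an assumption for illustration (the abstract only says transparency may vary with temperature or humidity); a "fogged glass" effect where humidity makes the layer more opaque is one plausible reading.

```python
def layer_alpha(temperature_c, humidity_pct):
    """Base opacity of the semitransparent layer from temperature and
    humidity (assumed mapping), clamped to [0, 1]."""
    alpha = 0.3 + 0.5 * (humidity_pct / 100.0) + 0.002 * max(temperature_c, 0)
    return min(max(alpha, 0.0), 1.0)

def apply_drag(mask, path):
    """Make the cells along a touch-and-drag path fully transparent,
    revealing the background image there."""
    for x, y in path:
        mask[y][x] = 0.0
    return mask
```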