Abstract:
A method of processing an image is provided. The method includes obtaining an image including a face, obtaining information about whether a preset condition is satisfied after the image is obtained, obtaining location information of a face part in the image when the preset condition is satisfied, and obtaining a synthesized image by adding, to the image, an image corresponding to the satisfied condition at the location of the face part.
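The composition step above can be sketched as follows, assuming the face-part detection has already produced a location; all identifiers here are illustrative, and images are modeled as 2-D pixel lists rather than a real image format:

```python
def compose_at(base, overlay, top, left):
    """Return a copy of `base` (2-D list of pixels) with `overlay`
    pasted so its upper-left corner lands at (top, left)."""
    out = [row[:] for row in base]  # copy so the source image is untouched
    for i, row in enumerate(overlay):
        for j, px in enumerate(row):
            # skip pixels that would fall outside the base image
            if 0 <= top + i < len(out) and 0 <= left + j < len(out[0]):
                out[top + i][left + j] = px
    return out

# Example: paste a 2x2 "sticker" at the detected face-part location (1, 1).
base = [[0] * 4 for _ in range(4)]
sticker = [[9, 9], [9, 9]]
result = compose_at(base, sticker, 1, 1)
```

In a real pipeline the overlay would be alpha-blended rather than copied, but the coordinate bookkeeping is the same.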
Abstract:
A method and apparatus for displaying, on a calling party's portable terminal, a location of another subscriber's portable terminal. The display occurs during a call without switching displayed screen windows or applications. The method includes a first portable terminal performing a call mode; driving a camera upon receiving location information of the other subscriber's terminal in the call mode to display an image photographed by the camera; analyzing location information detected by a location detection unit and the received location information of the other subscriber's second portable terminal to calculate a distance between the two terminals; and displaying the location of the other subscriber's terminal on the displayed image when a direction of the calling party's terminal detected by a direction detection unit aligns with a direction of the other subscriber's terminal.
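The distance and direction-alignment computations above can be sketched as follows. The abstract does not specify the formulas, so this sketch assumes a standard haversine distance and an initial-bearing check; all function names are illustrative:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in metres between the two terminals.
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial bearing from terminal 1 to terminal 2, degrees clockwise from north.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    return math.degrees(math.atan2(y, x)) % 360

def aligned(heading_deg, target_bearing, tolerance=15.0):
    # True when the calling party's camera heading points at the other
    # terminal within a tolerance (the tolerance value is an assumption).
    diff = abs((heading_deg - target_bearing + 180) % 360 - 180)
    return diff <= tolerance
```

When `aligned(...)` is true, the other subscriber's marker would be drawn over the camera preview at a screen position derived from the bearing offset.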
Abstract:
A method of providing a multi touch interaction in a portable terminal includes receiving a first touch input, performing a first function corresponding to the received first touch input, receiving a second touch input when the first touch input is maintained, and performing a second function corresponding to the received second touch input while maintaining a movement of at least one specific object selected by the first touch input.
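The two-touch interaction above can be sketched as a small state machine: the first touch selects and moves an object (the first function), and the second function fires only while the first touch is maintained. The class, its method names, and the choice of "zoom" as the second function are illustrative assumptions:

```python
class MultiTouchController:
    """Sketch of the two-touch interaction: a second function is only
    available while the first touch (object drag) is held down."""

    def __init__(self):
        self.first_down = False
        self.drag_pos = None
        self.zoom = 1.0

    def touch_down_first(self, pos):
        # First function: select an object at `pos` and begin moving it.
        self.first_down = True
        self.drag_pos = pos

    def move_first(self, pos):
        # Movement of the selected object continues during the drag.
        if self.first_down:
            self.drag_pos = pos

    def touch_up_first(self):
        self.first_down = False

    def touch_down_second(self):
        # Second function (zoom, here) runs only while the first touch
        # is maintained; the object's movement state is left intact.
        if self.first_down:
            self.zoom *= 2.0
            return True
        return False

c = MultiTouchController()
c.touch_down_first((0, 0))
c.move_first((5, 5))
second_ok = c.touch_down_second()
```

A second touch arriving with no first touch held simply does nothing, matching the "when the first touch input is maintained" condition.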
Abstract:
An electronic device is provided. The electronic device includes a display, memory storing one or more computer programs, and one or more processors communicatively coupled to the display and the memory. The one or more computer programs include computer-executable instructions that, when executed by the one or more processors, cause the electronic device to receive a request for changing contents acquired based on a shooting input; in response to receiving the request, identify, based on metadata of the contents, whether the contents are synchronized; while in a first state of having identified that the contents are synchronized, display on the display a first screen comprising a visual object for receiving a first time section to be used to segment all of the synchronized contents; and, while in a second state different from the first state, display on the display, independently of the visual object, a second screen for receiving a second time section to be used to divide any one of the contents.
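The synchronization check and the "one time section segments all contents" behavior can be sketched as follows. The abstract does not say which metadata fields are compared, so this sketch assumes each content item carries a recorded start timestamp:

```python
def synchronized(metadata_list, tolerance_ms=0):
    """Treat contents as synchronized when every item's start timestamp
    (an assumed metadata field, `start_ms`) matches within a tolerance."""
    starts = [m["start_ms"] for m in metadata_list]
    return max(starts) - min(starts) <= tolerance_ms

def segment_all(durations_ms, section):
    """Apply one (start, end) time section to every synchronized content,
    clamping the section to each item's own duration."""
    lo, hi = section
    return [(max(0, lo), min(d, hi)) for d in durations_ms]

meta = [{"start_ms": 0}, {"start_ms": 0}]
is_sync = synchronized(meta)
segments = segment_all([10000, 3000], (1000, 5000))
```

In the non-synchronized state, each content item would instead get its own section via a separate screen, independent of the shared visual object.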
Abstract:
A processor of a wearable device is provided. The processor may identify an external object included in a displaying area of a display by using a camera. The processor may display a first visual object representing an application corresponding to the external object based on a location of the external object in the displaying area. While the first visual object is displayed, the processor may obtain at least one of first information associated with a path of the external object moved on a plane, or second information including at least one stroke drawn by the external object. The processor may display a second visual object for executing the application by using information selected, based on the location, from among the first information and the second information, according to an attribute assigned to the external object.
Abstract:
An electronic device is provided. The electronic device includes a camera, a touch panel, a display, a communication circuit, at least one processor operatively connected with the camera, the touch panel, the display, and the communication circuit, and a memory operatively connected with the at least one processor. The memory stores one or more instructions that, when executed, cause the at least one processor to display a first image captured by the camera and a second image received from an external electronic device through the communication circuit on the display, display first augmented reality content on the first image, display second augmented reality content on the second image, and display an animation effect of the first augmented reality content using the second augmented reality content based on a user input through the touch panel.
Abstract:
An electronic device is provided. The electronic device includes a display, a processor functionally connected with the display, and a memory functionally connected with the processor. The memory stores instructions configured to, when executed, enable the processor to display a first image through the display, display one or more second images through the display while displaying the first image, select a third image from among the one or more second images, identify a value of at least one property of the third image, generate a filter for applying the value of the at least one property to an image, apply the value of the at least one property to the first image using the filter, display the first image, to which the value of the at least one property is applied, through the display, and store the filter in the memory.
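The filter-generation step above can be sketched as follows, using mean brightness as the extracted property; the abstract speaks only of "at least one property," so the property choice and all identifiers are illustrative:

```python
def mean_brightness(pixels):
    # Average pixel value over a 2-D list of grayscale pixels.
    flat = [p for row in pixels for p in row]
    return sum(flat) / len(flat)

def make_brightness_filter(reference):
    """Build a reusable filter that shifts any image's mean brightness
    to match the reference (third) image's brightness."""
    target = mean_brightness(reference)

    def apply(pixels):
        delta = target - mean_brightness(pixels)
        # Clamp to the valid 8-bit pixel range after shifting.
        return [[min(255, max(0, p + delta)) for p in row] for row in pixels]

    return apply

reference = [[100, 100], [100, 100]]   # the selected third image
first_image = [[50, 50], [50, 50]]
brightness_filter = make_brightness_filter(reference)
filtered = brightness_filter(first_image)
```

Because the filter closes over the extracted property value, it can be stored and re-applied to other images later, matching the "store the filter in the memory" step.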
Abstract:
Embodiments of the present disclosure describe an electronic device for providing a visual effect corresponding to a gesture input. The electronic device includes a sensor, a wireless communication circuit, a display device, and at least one processor. The at least one processor is configured to sense a gesture input through the sensor. The at least one processor is also configured to control the display device to display a first visual effect corresponding to the gesture input. When information related to the first visual effect is received from another electronic device through the wireless communication circuit, the at least one processor is configured to update the first visual effect displayed on the display device to a second visual effect.
Abstract:
An electronic device is disclosed. The electronic device comprises: a memory including at least one command; and a processor connected to the memory to control the electronic device, wherein by executing the at least one command, the processor obtains an image according to a user's interaction with the electronic device, obtains information about the user's intention according to information about an object obtained from the image and context information obtained during the interaction, and obtains information concerning the obtained object from a knowledge base stored in the memory, according to the information about the user's intention, wherein the knowledge base includes device information about a plurality of electronic devices used during the user's activity, object information about a plurality of objects obtained according to the activity, and intention information corresponding to correlative information, and the processor obtains, from the knowledge base, information concerning the obtained object according to the intention information corresponding to the obtained object.
Abstract:
A background image is displayed on a touch screen of an electronic device. Overlapped with the background image, a semitransparent layer is displayed. When a touch and drag action is detected from the semitransparent layer, the transparency of a touch and drag region is changed. Transparency of the semitransparent layer may be changed according to temperature or humidity.
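The behavior above can be sketched as follows: the semitransparent layer's baseline alpha is derived from an environmental reading, and a touch-and-drag path makes the touched region fully transparent, like wiping condensation off glass. The mapping from humidity to alpha and the wipe radius are assumptions, not from the abstract:

```python
def base_alpha(humidity_pct):
    # Illustrative mapping: higher humidity -> a more opaque layer.
    return min(1.0, max(0.0, humidity_pct / 100.0))

def wipe(layer, path, radius=1):
    """Return a copy of `layer` (2-D list of alpha values) with cells
    within `radius` of each point on the drag path set fully transparent."""
    out = [row[:] for row in layer]
    for (r, c) in path:
        for i in range(max(0, r - radius), min(len(out), r + radius + 1)):
            for j in range(max(0, c - radius), min(len(out[0]), c + radius + 1)):
                out[i][j] = 0.0
    return out

# A 5x5 layer at 80% humidity, then a horizontal drag across row 2.
layer = [[base_alpha(80)] * 5 for _ in range(5)]
cleared = wipe(layer, [(2, 1), (2, 2), (2, 3)])
```

The background image shows through wherever the alpha has been dragged to zero, while untouched cells keep the humidity-driven transparency.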