Abstract:
A mobile terminal including a wireless communication unit configured to perform wireless communication; a touchscreen configured to display information and sense a touch input; and a controller configured to display an object between a first region and a second region of the touchscreen, adjust sizes of the first and second regions based on a shifting of the object, display an input window at the first region, display a virtual keypad at a bottom part of the touchscreen in response to the input window being selected, and display the first region above the virtual keypad.
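As a rough illustration of the behavior described above, the following Kotlin sketch models a controller that keeps two regions separated by a movable divider object, resizes them when the divider is shifted, and, when the input window is selected, shows a virtual keypad at the bottom and keeps the first region directly above it. All names, coordinates, and the keypad height are assumptions for illustration, not the claimed implementation.

```kotlin
// Hypothetical model of the described split-screen behaviour; pixel
// coordinates along the vertical axis are assumptions.
data class Region(var top: Int, var bottom: Int) {
    val height get() = bottom - top
}

class SplitScreenController(private val screenHeight: Int, dividerY: Int) {
    val first = Region(0, dividerY)               // first region, above the divider object
    val second = Region(dividerY, screenHeight)   // second region, below it
    var keypadVisible = false
        private set

    // Shifting the divider object adjusts the sizes of both regions.
    fun shiftDivider(newY: Int) {
        val y = newY.coerceIn(0, screenHeight)
        first.bottom = y
        second.top = y
    }

    // Selecting the input window shows a keypad at the bottom of the screen
    // and repositions the first region (holding the input window) above it.
    fun selectInputWindow(keypadHeight: Int) {
        keypadVisible = true
        val keypadTop = screenHeight - keypadHeight
        val h = first.height
        first.bottom = keypadTop
        first.top = (keypadTop - h).coerceAtLeast(0)
    }
}

fun main() {
    val c = SplitScreenController(screenHeight = 1920, dividerY = 960)
    c.shiftDivider(1200)          // user drags the divider object downward
    c.selectInputWindow(600)      // tapping the input window raises the keypad
    println("first=${c.first}, second=${c.second}, keypad=${c.keypadVisible}")
}
```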
Abstract:
A mobile terminal includes: a wireless communication unit configured to communicate wirelessly with an external terminal; and a controller configured to, when event information is received from the external terminal through the wireless communication unit, extract sound information related to the event and transmit the extracted sound information to the external terminal such that the sound information is associated with at least one image of the terminal that is related to the event.
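A minimal sketch of the described flow, assuming a tag-based lookup for the sound information; the event and sound types and the send callback are hypothetical:

```kotlin
// Hypothetical event/sound types and send callback.
data class EventInfo(val eventId: String, val keywords: List<String>)
data class SoundClip(val id: String, val tags: Set<String>, val uri: String)

class EventSoundController(
    private val soundLibrary: List<SoundClip>,
    private val send: (terminalId: String, clip: SoundClip) -> Unit
) {
    fun onEventReceived(fromTerminal: String, event: EventInfo) {
        // Extract sound information whose tags overlap the event keywords.
        val match = soundLibrary.firstOrNull { clip ->
            clip.tags.any { it in event.keywords }
        } ?: return
        // Transmit it back so the external terminal can associate it with
        // the image(s) related to the same event.
        send(fromTerminal, match)
    }
}
```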
Abstract:
A mobile terminal including a wireless communication unit configured to perform wireless communication; a touchscreen configured to display information; and a controller that partitions the touchscreen into a first region and a second region, displays, in the first region, a chat window for displaying chatting contents included in a chatting session with at least one counterpart terminal, displays, in the second region, data different from the chat window, receives a touch input applied to a first point inside the second region and a second point inside the first region, and transmits the data in the second region to the at least one counterpart terminal.
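The drag-to-share gesture could be modeled as below; the region, point, and transmit abstractions are assumptions made for the sake of a self-contained example:

```kotlin
// A touch that starts at a point in the second (data) region and ends at a
// point in the first (chat) region sends the touched item to the counterpart.
data class Point(val x: Int, val y: Int)
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    operator fun contains(p: Point) = p.x in left..right && p.y in top..bottom
}

class ChatShareController(
    private val chatRegion: Rect,
    private val dataRegion: Rect,
    private val dataAt: (Point) -> String?,        // item shown at a point in the data region
    private val transmit: (item: String) -> Unit   // send to the counterpart terminal(s)
) {
    fun onTouch(firstPoint: Point, secondPoint: Point) {
        if (firstPoint in dataRegion && secondPoint in chatRegion) {
            dataAt(firstPoint)?.let(transmit)
        }
    }
}
```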
Abstract:
Provided are a mobile terminal capable of capturing an image and a control method thereof. The mobile terminal includes: a display unit configured to output an automatic scrap icon and a manual scrap icon for selecting first screen information and partial screen information included in the first screen information; and a control unit configured to, in response to a preset touch input applied to the automatic scrap icon, extract metadata regarding the first screen information, select partial screen information included in the first screen information on the basis of the extracted metadata, generate second screen information including the selected partial screen information, and control the display unit to output the generated second screen information.
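One possible reading of the automatic-scrap path, sketched in Kotlin; the metadata key, the dominant-value heuristic, and the types are hypothetical:

```kotlin
// Hypothetical screen and metadata types; "category" and the dominant-value
// selection are assumptions used to make the example concrete.
data class ScreenPart(val id: String, val metadata: Map<String, String>)
data class Screen(val parts: List<ScreenPart>)

class ScrapController(private val show: (Screen) -> Unit) {
    fun onAutomaticScrap(current: Screen, key: String = "category") {
        // Extract metadata regarding the first screen information and find
        // the most common value for the chosen key.
        val dominant = current.parts
            .mapNotNull { it.metadata[key] }
            .groupingBy { it }
            .eachCount()
            .maxByOrNull { it.value }
            ?.key ?: return
        // Select the partial screen information that matches it and output
        // second screen information composed of the selection.
        show(Screen(current.parts.filter { it.metadata[key] == dominant }))
    }
}
```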
Abstract:
Provided is a mobile terminal including a display unit on which multiple divisional screen regions are output, and a controller that, when receiving an input for a division mode in which the display unit is divided into the multiple screen regions, generates a list region including an icon corresponding to an application and divides the display unit into first and second screen regions with the list region in between. When a pair icon including a first icon corresponding to a first application and a second icon corresponding to a second application is selected from the list region, the controller executes the first and second applications, each on one of the first and second screen regions, according to the positions of the first and second icons arranged on the pair icon.
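A hypothetical model of division mode and pair-icon selection; the slot names and the launch callback are illustrative only:

```kotlin
// Entering division mode creates a list region between two screen regions;
// choosing a pair icon launches its two applications on the regions that
// correspond to the icons' positions within the pair icon.
enum class Slot { FIRST_REGION, SECOND_REGION }
data class PairIcon(val firstApp: String, val secondApp: String)

class DivisionModeController(private val launch: (app: String, slot: Slot) -> Unit) {
    var divided = false
        private set
    val listRegion = mutableListOf<PairIcon>()

    fun enterDivisionMode(pairs: List<PairIcon>) {
        divided = true                 // split the display into two screen regions
        listRegion.clear()
        listRegion += pairs            // list region shown between the two regions
    }

    fun selectPair(pair: PairIcon) {
        if (!divided) return
        // The icon arranged first on the pair icon maps to the first screen
        // region; the second icon maps to the second screen region.
        launch(pair.firstApp, Slot.FIRST_REGION)
        launch(pair.secondApp, Slot.SECOND_REGION)
    }
}
```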
Abstract:
The present invention relates to a mobile terminal capable of receiving a memo while recording a video. Specifically, the present invention relates to a mobile terminal including a camera, a touch screen, and a controller. When a video is recorded using the camera, the controller is configured to control the touch screen to output a preview screen of the camera. If a touch-and-drag input is received on the output preview screen, the controller is configured to temporarily stop outputting the preview screen and store the touch path of the touch-and-drag input as a handwritten memo that is included in the recorded video.
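The memo-while-recording behavior might be modeled as follows; the touch-path and memo types are assumptions, and compositing the memo into the saved video is left out:

```kotlin
// A drag on the preview freezes the preview output, the drag path is kept as
// a handwritten memo, and the memos are attached to the recording on save.
data class TouchPoint(val x: Float, val y: Float, val timeMs: Long)
data class Memo(val path: List<TouchPoint>)

class RecordingMemoController {
    var previewPaused = false
        private set
    private val memos = mutableListOf<Memo>()

    fun onDrag(path: List<TouchPoint>) {
        previewPaused = true           // temporarily stop outputting the preview
        memos += Memo(path)            // store the touch path as a handwritten memo
    }

    fun onDragFinished() {
        previewPaused = false          // resume the preview screen
    }

    // The stored memos would be composited into the recorded video on save.
    fun memosForVideo(): List<Memo> = memos.toList()
}
```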
Abstract:
The present invention provides a mobile terminal and a method for controlling the same. The mobile terminal comprises: a display unit; a memory in which videos are stored; and a control unit which controls the display unit to display a first video of the videos stored in the memory and to display an indication corresponding to a second video, wherein the second video is relevant to the first video with respect to at least one of shooting location, shooting time, shooting direction, and subject similarity.
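A sketch of how relevant second videos could be selected for the indication; the similarity thresholds and metadata fields are assumptions, not values from the disclosure:

```kotlin
import kotlin.math.abs
import kotlin.math.hypot

// Compare stored videos to the displayed one on location, time, direction,
// and shared subjects; any single match makes a video "relevant".
data class Video(
    val id: String, val lat: Double, val lon: Double,
    val timeMs: Long, val directionDeg: Double, val subjects: Set<String>
)

fun relatedVideos(first: Video, stored: List<Video>): List<Video> =
    stored.filter { v ->
        v.id != first.id && (
            hypot(v.lat - first.lat, v.lon - first.lon) < 0.001 ||   // roughly the same location
            abs(v.timeMs - first.timeMs) < 60 * 60 * 1000 ||         // within an hour
            abs(v.directionDeg - first.directionDeg) < 15.0 ||       // similar shooting direction
            v.subjects.intersect(first.subjects).isNotEmpty()        // shared subject
        )
    }
```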
Abstract:
An electronic device and a method for controlling the same are disclosed. The electronic device includes a display unit and a controller configured to render a web page, display the rendered web page on the display unit, and, when a specific input is received through the display unit, detect, based on alignment information of objects of the web page, a body of the web page that satisfies a predetermined condition and to which a visual recognition method is applied.
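As an illustration only, body detection from alignment information might look like the following, where the shared-left-edge grouping and the area threshold are assumed heuristics:

```kotlin
// Among the rendered objects, the largest group sharing a left edge is
// treated as the page body if its total area satisfies the condition.
data class PageObject(val left: Int, val top: Int, val width: Int, val height: Int, val text: String)

fun detectBody(objects: List<PageObject>, minArea: Int = 100_000): List<PageObject>? {
    val aligned = objects
        .groupBy { it.left }                       // alignment information: shared left edge
        .maxByOrNull { (_, group) -> group.sumOf { it.width * it.height } }
        ?.value ?: return null
    val area = aligned.sumOf { it.width * it.height }
    return if (area >= minArea) aligned else null  // body only if the condition is satisfied
}
```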
Abstract:
An Intelligent Agent (IA) application, which extracts an image object from content displayed by another application and provides a search result for the image object, and an image search method using the same are disclosed. The image search method includes executing a first application that displays content including at least one image object, executing a second application that provides an image search result for an image object included in the content displayed by the first application, extracting the at least one image object from the content via the second application, providing, on top of the first application, at least one object interface corresponding to the at least one extracted image object via the second application, receiving a user input selecting a particular object interface from among the at least one object interface, and displaying a search result for the image object corresponding to the particular object interface.
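A simplified sketch of the Intelligent Agent flow; the extraction, search, and overlay callbacks are placeholders standing in for whatever the first and second applications actually provide:

```kotlin
// All callbacks are placeholders for the first application's content, the
// image-search backend, and the overlay rendering.
data class ImageObject(val id: String, val bounds: List<Int>)  // left, top, right, bottom
data class ObjectInterface(val target: ImageObject)

class IntelligentAgent(
    private val extract: (content: Any) -> List<ImageObject>,
    private val search: (ImageObject) -> List<String>,
    private val overlay: (List<ObjectInterface>) -> Unit,
    private val showResults: (List<String>) -> Unit
) {
    private var interfaces: List<ObjectInterface> = emptyList()

    // Called when the first application displays new content.
    fun onContentDisplayed(content: Any) {
        interfaces = extract(content).map { ObjectInterface(it) }
        overlay(interfaces)               // object interfaces drawn above the first app
    }

    // Called when the user selects one of the overlaid object interfaces.
    fun onInterfaceSelected(selected: ObjectInterface) {
        showResults(search(selected.target))
    }
}
```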