Abstract:
An apparatus and method are provided for recognizing a character based on a photographed image. The apparatus includes an image determining unit, an image effect unit, a binarizing unit and a character recognizing unit. The image determining unit is configured to select, from an input image, a Region Of Interest (ROI) to be used for image analysis when the input image is input, and to analyze the selected ROI to determine a type of the input image. The image effect unit is configured to apply, to the input image, an image effect for distinguishing a character region from a background region in a display screen if the type of the input image indicates that the input image is obtained by photographing a display screen. The binarizing unit is configured to binarize the input image or the output of the image effect unit according to the determined type of the input image. The character recognizing unit is configured to recognize a character from the binarized input image.
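A minimal sketch of the image determining unit's ROI analysis, assuming a centered ROI and a luminance-variance heuristic for deciding whether the photo shows a display screen; the ROI placement, threshold, and decision rule are illustrative assumptions, not the patented method.

# Hypothetical sketch: classify an input image as a photo of a display screen
# or a natural-scene photo by analyzing a central Region Of Interest (ROI).
# The ROI placement and the variance threshold are illustrative assumptions.
from statistics import pvariance

def select_roi(gray, frac=0.5):
    """Return the central region of a grayscale image given as a list of rows."""
    h, w = len(gray), len(gray[0])
    y0, y1 = int(h * (1 - frac) / 2), int(h * (1 + frac) / 2)
    x0, x1 = int(w * (1 - frac) / 2), int(w * (1 + frac) / 2)
    return [row[x0:x1] for row in gray[y0:y1]]

def determine_image_type(gray, threshold=500.0):
    """Crude type decision: display-screen shots tend to have flatter
    background luminance inside the ROI than natural photographs."""
    roi = select_roi(gray)
    pixels = [p for row in roi for p in row]
    return "display_screen" if pvariance(pixels) < threshold else "natural"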
Abstract:
Methods and apparatuses are provided for transmitting a handwriting animation message. Handwriting animation message data is received from a transmitting mobile terminal. Specifications of a receiving mobile terminal, which is to receive the handwriting animation message data, are analyzed to determine whether the receiving mobile terminal is capable of reproducing the handwriting animation message. If the receiving mobile terminal is not capable of reproducing the handwriting animation message, the handwriting animation message data is converted into a data format that can be reproduced by the receiving mobile terminal, and the converted data is transmitted. If the receiving mobile terminal is capable of reproducing the handwriting animation message, the received handwriting animation message data is transmitted.
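A sketch of the relay decision described above, assuming a simple capability flag on the receiving terminal and a fallback conversion to a static image; the field names and the conversion step are hypothetical, since the abstract does not specify the message format or the converted format.

# Hypothetical relay logic for forwarding a handwriting animation message.
# The capability flag and the fallback conversion to a static image are
# illustrative assumptions, not the actual message format or protocol.
from dataclasses import dataclass

@dataclass
class TerminalSpec:
    model: str
    supports_handwriting_animation: bool

def convert_to_static_image(message_data: bytes) -> bytes:
    """Placeholder for the format-conversion step (assumption)."""
    return message_data  # a real converter would rasterize the strokes

def relay_message(message_data: bytes, receiver: TerminalSpec, send) -> None:
    if receiver.supports_handwriting_animation:
        # Receiver can reproduce the animation: forward the data unchanged.
        send(message_data)
    else:
        # Receiver cannot reproduce it: convert to a reproducible format first.
        send(convert_to_static_image(message_data))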
Abstract:
A method and an apparatus are provided for displaying pictures according to hand motion inputs. An application is executed for displaying a picture from among a sequence of pictures on a display. Groups of skin color blocks corresponding to a hand are detected from among image frames output from a camera. A motion is detected among the groups of skin color blocks. Direction information is obtained on the detected motion. The application is controlled to display a previous picture or a next picture in the sequence of pictures on the display according to the direction information.
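A sketch of how direction information might be derived from the detected skin-color blocks, assuming the horizontal shift of the block centroid between frames maps to previous/next navigation; the centroid rule, the pixel threshold, and the direction-to-command mapping are assumptions.

# Hypothetical sketch of turning detected hand motion into "previous"/"next"
# navigation. The centroid-difference rule and the pixel threshold are
# assumptions; the abstract only states that direction information controls
# which picture is displayed.
def centroid_x(blocks):
    """blocks are (x, y, w, h) skin-color rectangles detected in one frame."""
    xs = [x + w / 2 for x, y, w, h in blocks]
    return sum(xs) / len(xs)

def navigation_command(prev_blocks, curr_blocks, min_shift=40):
    dx = centroid_x(curr_blocks) - centroid_x(prev_blocks)
    if dx > min_shift:
        return "next_picture"       # hand moved right (assumed mapping)
    if dx < -min_shift:
        return "previous_picture"   # hand moved left (assumed mapping)
    return None                     # motion too small to act on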
Abstract:
Methods and apparatus are provided for transmitting and receiving a calligraphed writing message. Writing data is received as input at a transmitting apparatus. The writing data is sampled to generate character frame data having a plurality of point data. Calligraphy control point data is generated using the character frame data. The calligraphed writing message including the calligraphy control point data is generated and transmitted to a receiving apparatus. A calligraphy outline is generated at the receiving apparatus, using the calligraphy control point data from the calligraphed writing message, for generation of a calligraphed writing image. Graphic processing is performed on the calligraphy outline. The calligraphed writing image is generated and displayed.
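A sketch of the transmit-side steps, assuming uniform downsampling of the raw writing input and midpoint-based control points suitable for a quadratic outline; the sampling rate and the curve model are illustrative choices, not the scheme defined in the abstract.

# Hypothetical sketch of sampling writing data into character frame point
# data and deriving calligraphy control point data from it. Uniform
# downsampling and midpoint control points are illustrative assumptions.
def sample_points(raw_points, step=4):
    """Keep every step-th input point as character frame data."""
    return raw_points[::step] if raw_points else []

def control_points(frame_points):
    """Pair each sampled point with the midpoint to the next point,
    usable as on-curve/off-curve data for a quadratic outline (assumed)."""
    ctrl = []
    for (x0, y0), (x1, y1) in zip(frame_points, frame_points[1:]):
        ctrl.append(((x0, y0), ((x0 + x1) / 2, (y0 + y1) / 2)))
    return ctrl

# Example: build the message payload from a short stroke.
stroke = [(0, 0), (2, 1), (5, 3), (9, 4), (12, 8)]
message = {"control_points": control_points(sample_points(stroke))}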
Abstract:
An apparatus and method for producing an animated emoticon are provided. The method includes producing a plurality of frames that constitute the animated emoticon; inputting at least one object for each of the plurality of frames; producing object information for the input object; and producing structured animated emoticon data that include each of the plurality of frames and the object information.
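A sketch of one possible structured representation: a list of frames, each holding objects with per-object information. The field names and the JSON serialization are assumptions for illustration; the abstract does not define the data layout.

# Hypothetical structured animated emoticon data: frames, per-frame objects,
# and per-object information serialized to JSON. Field names are assumptions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class EmoticonObject:
    kind: str            # e.g. "stroke", "stamp", "text" (assumed kinds)
    position: tuple      # (x, y) placement inside the frame
    attributes: dict = field(default_factory=dict)

@dataclass
class EmoticonFrame:
    duration_ms: int
    objects: list = field(default_factory=list)

def build_animated_emoticon(frames):
    """Produce structured animated emoticon data covering all frames."""
    return json.dumps({"frames": [asdict(f) for f in frames]})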
Abstract:
An apparatus and method are provided for recognizing a character based on a photographed image. The apparatus includes an image determining unit, an image effect unit, a binarizing unit and a character recognizing unit. The image determining unit is configured to select, from an input image, a Region Of Interest (ROI) to be used for image analysis when the input image is input, and to analyze the selected ROI to determine a type of the input image. The image effect unit is configured to apply, to the input image, an image effect for distinguishing a character region from a background region in a display screen if the type of the input image indicates that the input image is obtained by photographing a display screen. The binarizing unit is configured to binarize the input image or the output of the image effect unit according to the determined type of the input image. The character recognizing unit is configured to recognize a character from the binarized input image.
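A sketch of one plausible image effect for photographs of a display screen: a small box blur that smooths the display's sub-pixel grid so character and background regions separate more cleanly. The 3x3 kernel is an assumption; the abstract does not specify which effect the image effect unit applies.

# Hypothetical image effect for display-screen photographs: a 3x3 box blur
# over a grayscale image given as a list of rows of luminance values.
def box_blur(gray):
    """Return a blurred copy of the image; border pixels are left unchanged."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(gray[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9.0
    return out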
Abstract:
A touch input apparatus and method in a user terminal are provided. The apparatus includes a touch input unit for generating a touch input event according to a touch of a user; a single- or multi-touch input determiner for entering a drawing mode according to the touch input event and determining one of a single touch input and a multi-touch input; a touch point sampling unit for performing touch point sampling according to a single touch movement and providing a sample point for a drawing when there is a single touch input; and a multi-touch processor for, when there is a multi-touch input, entering a multi-touch mode and performing a multi-touch action including at least one of an enlargement, a reduction, and a movement of a drawing screen according to a multi-touch movement.
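A sketch of the dispatch logic described above, assuming touch events arrive as a list of contact points: one contact feeds the point-sampling path for drawing, two or more switch to a multi-touch mode whose pinch distance and midpoint drive enlargement, reduction, and movement. The state keys and thresholds are assumptions.

# Hypothetical single/multi-touch dispatcher for the drawing mode.
def handle_touch_event(points, state):
    """points: list of (x, y) contacts currently on the screen."""
    if len(points) >= 2:
        state["mode"] = "multi_touch"
        # Pinch distance drives enlargement/reduction; midpoint drives movement.
        (x0, y0), (x1, y1) = points[0], points[1]
        state["pinch_distance"] = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        state["pan_anchor"] = ((x0 + x1) / 2, (y0 + y1) / 2)
    elif len(points) == 1:
        state["mode"] = "drawing"
        # Sample the single-touch movement as a drawing point.
        state.setdefault("sampled_points", []).append(points[0])
    return state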
Abstract:
An apparatus and a method for recognizing a character based on an input image are provided. The apparatus includes an input unit configured to receive the input image and a controller configured to select, from the input image, a region to be used for image analysis, to analyze the selected region to determine a type of the input image, to apply, to the input image, an image effect for distinguishing a character region from a background region in the input image if the type of the input image indicates that the input image is obtained by photographing a display screen, to binarize output of the image effect according to the determined type of the input image, and to recognize a character from the binarized output of the image effect.
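A sketch of the binarization step the controller performs after the image effect, assuming a global threshold at the mean luminance; Otsu or adaptive thresholding would be common alternatives, and the choice here is purely illustrative.

# Hypothetical binarization: threshold a grayscale image (list of rows of
# luminance values) at its mean luminance, producing rows of 0/1 values.
def binarize(gray):
    pixels = [p for row in gray for p in row]
    threshold = sum(pixels) / len(pixels)
    return [[1 if p > threshold else 0 for p in row] for row in gray]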
Abstract:
An apparatus and method for encoding an image file are provided. The method includes generating an animation object according to a user input; generating animation data including the animation object; generating image data that is reproducible independently from the animation data; generating an integrated image file, which includes the image data and the animation data; and storing the generated integrated image file in a memory.
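A sketch of one container layout that would keep the image data independently reproducible: the plain image bytes come first, and the animation data is appended as a length-prefixed trailing chunk. The magic tag and chunk layout are assumptions, not the file format defined by the abstract.

# Hypothetical integrated image file: image bytes followed by a tagged,
# length-prefixed animation chunk that ordinary image viewers can ignore.
import struct

MAGIC = b"ANIM"

def encode_integrated_file(image_data: bytes, animation_data: bytes) -> bytes:
    return image_data + MAGIC + struct.pack(">I", len(animation_data)) + animation_data

def decode_integrated_file(blob: bytes):
    idx = blob.rfind(MAGIC)
    if idx == -1:
        return blob, None                      # plain image, no animation chunk
    (length,) = struct.unpack(">I", blob[idx + 4:idx + 8])
    return blob[:idx], blob[idx + 8:idx + 8 + length]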
Abstract:
An apparatus and a method are provided for authenticating a combination code using a Quick Response (QR) code. The apparatus includes a QR code receiver that receives an image frame including a QR code; a QR code recognizer that recognizes the QR code within the image frame; a combination code generator that generates a combination code including the QR code; and a combination code transmitter that transmits the combination code to an authentication server.
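A sketch of the combination-code generation and transmission steps, assuming the recognized QR payload is combined with a locally entered secret and hashed before being sent to the authentication server; the hashing scheme, field names, and transport are assumptions, since the abstract states only that the combination code includes the QR code.

# Hypothetical combination-code generator and transmitter.
import hashlib
import json

def generate_combination_code(qr_payload: str, user_secret: str) -> dict:
    """Combine the recognized QR payload with a user secret (assumed scheme)."""
    digest = hashlib.sha256((qr_payload + user_secret).encode("utf-8")).hexdigest()
    return {"qr": qr_payload, "proof": digest}

def transmit_combination_code(code: dict, send_to_server) -> None:
    """Send the combination code to the authentication server as JSON."""
    send_to_server(json.dumps(code))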