Abstract:
Embodiments of the disclosure provide a method for applying effects in a video by an electronic device. The method includes: detecting a first object and a second object in an image frame of the video; determining a type of motion of the first object and the second object in the video; determining a speed of the motion of the first object and the second object in the video; determining a first effect to be applied to the first object and a second effect to be applied to the second object based on the type and the speed of the motion of the first object and the second object; and applying the first effect to the first object and the second effect to the second object.
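A minimal sketch of the effect-selection step described above, assuming per-object detections and motion estimates are already available. The motion types, effect names, and the 5.0 px/frame speed threshold are hypothetical illustrations, not the abstract's actual implementation.

```python
# Hypothetical effect selection from motion type and speed; all names and
# thresholds here are illustrative, not from the patent.
from dataclasses import dataclass
from enum import Enum

class MotionType(Enum):
    LINEAR = "linear"
    ROTATIONAL = "rotational"
    RANDOM = "random"

@dataclass
class TrackedObject:
    label: str
    motion_type: MotionType
    speed: float  # pixels per frame, estimated across image frames

def choose_effect(obj: TrackedObject, fast_threshold: float = 5.0) -> str:
    """Map an object's motion type and speed to an effect name."""
    if obj.motion_type is MotionType.LINEAR:
        return "motion_blur" if obj.speed > fast_threshold else "trail"
    if obj.motion_type is MotionType.ROTATIONAL:
        return "spin_glow"
    return "sparkle"

# Each detected object gets its own effect, as in the abstract.
first = TrackedObject("person", MotionType.LINEAR, speed=8.2)
second = TrackedObject("ball", MotionType.ROTATIONAL, speed=2.1)
print(choose_effect(first))   # motion_blur
print(choose_effect(second))  # spin_glow
```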
Abstract:
A device and method are provided for a camera-related function in an electronic device. The method includes acquiring, via a first camera of the electronic device, a plurality of first images; acquiring, via a second camera of the electronic device, one or more second images; generating a plurality of image contents based on at least one of the plurality of first images or the one or more second images; and outputting the plurality of image contents at respective regions of a display of the electronic device.
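A minimal sketch of generating image contents from two cameras and laying them out in display regions. Frame capture is stubbed with random arrays, and the content derivations and region coordinates are hypothetical choices for illustration.

```python
# Two-camera acquisition and region layout; capture is stubbed, and the
# choice of "image contents" (burst average, single frame) is illustrative.
import numpy as np

def acquire(camera_id: int, count: int, h: int = 120, w: int = 160) -> list:
    """Stand-in for camera capture: returns `count` RGB frames."""
    rng = np.random.default_rng(camera_id)
    return [rng.integers(0, 256, (h, w, 3), dtype=np.uint8) for _ in range(count)]

first_images = acquire(camera_id=0, count=4)   # e.g. first-camera burst
second_images = acquire(camera_id=1, count=1)  # e.g. second-camera shot

contents = [
    np.mean(first_images, axis=0).astype(np.uint8),  # content from first images
    second_images[0],                                # content from second images
]

# Output each content at its own region of a display-sized canvas.
display = np.zeros((120, 320, 3), dtype=np.uint8)
regions = [(slice(0, 120), slice(0, 160)), (slice(0, 120), slice(160, 320))]
for content, (rows, cols) in zip(contents, regions):
    display[rows, cols] = content
```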
Abstract:
A method and system are provided for detecting temporal segments of talking faces in a video sequence using visual cues. The system detects talking segments by classifying talking and non-talking segments in a sequence of image frames. Temporal segments of talking faces are detected by first localizing the face, the eyes, and hence the mouth region. The localized mouth regions across the video frames are then encoded as an integrated gradient histogram (IGH) of visual features and quantified using the entropy of the IGH. The time series of per-frame entropy values is clustered using an online temporal segmentation (K-means clustering) algorithm to distinguish talking mouth patterns from other mouth movements. The segmented time series is then used to enhance an emotion recognition system.
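A minimal sketch of this pipeline: per-frame mouth crops, a gradient histogram per crop, its entropy, then 2-class K-means over the entropy time series. Mouth localization is stubbed with synthetic crops, and batch scikit-learn K-means stands in for the online temporal segmentation variant named in the abstract.

```python
# Talking/non-talking segmentation via gradient-histogram entropy + K-means.
import numpy as np
from sklearn.cluster import KMeans

def igh_entropy(mouth: np.ndarray, bins: int = 32) -> float:
    """Entropy of a gradient-magnitude histogram of the mouth crop."""
    gy, gx = np.gradient(mouth.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Synthetic crops: frames 0-49 low-texture (still mouth), 50-99 high-texture.
crops = [rng.normal(0, 1 if t < 50 else 8, (24, 40)) for t in range(100)]
entropies = np.array([igh_entropy(c) for c in crops]).reshape(-1, 1)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(entropies)
# Take the cluster containing the highest-entropy frame as "talking".
talking_cluster = labels[entropies[:, 0].argmax()]
talking_frames = np.flatnonzero(labels == talking_cluster)
```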
Abstract:
A device and method are provided for a camera-related function in an electronic device. The electronic device includes a display device; a first camera; a processor; and a memory storing instructions that, when executed, cause the processor to: acquire, through the first camera for a predetermined time when an input associated with image acquisition is received, a plurality of first images having a first attribute and one or more second images having a second attribute; generate one or more image contents based on the plurality of first images or the one or more second images; and output the one or more image contents through the display device.
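A minimal sketch of the acquisition step: on an input event, collect frames for a fixed duration, keeping "first attribute" frames (here, every frame at preview resolution) and "second attribute" frames (here, one full-resolution frame per second). The attribute choices and the capture stub are hypothetical.

```python
# Timed single-camera acquisition of two image streams with different
# attributes; capture_frame is a stand-in for a camera API call.
import time

def capture_frame(full_resolution: bool) -> dict:
    """Stand-in for a camera capture call."""
    return {"full_res": full_resolution, "t": time.monotonic()}

def acquire_on_input(duration_s: float = 2.0, fps: int = 30):
    first_images, second_images = [], []
    start = time.monotonic()
    frame_idx = 0
    while time.monotonic() - start < duration_s:
        first_images.append(capture_frame(full_resolution=False))
        if frame_idx % fps == 0:  # second attribute: sparse full-res frames
            second_images.append(capture_frame(full_resolution=True))
        frame_idx += 1
        time.sleep(1.0 / fps)
    return first_images, second_images
```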
Abstract:
Embodiments herein provide a method for detecting a candid moment in an image frame. The method includes: receiving, by an electronic device, image frames; determining, by the electronic device, a candid score of each image frame using a Machine Learning (ML) model, wherein the candid score is a quantitative measure of the candidness present in the image frame; determining, by the electronic device, whether the candid score of an image frame meets a threshold candid score; identifying, by the electronic device, that a candid moment is present in the image frame in response to determining that its candid score meets the threshold candid score; and displaying, by the electronic device, the image frame comprising the candid moment.
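A minimal sketch of the score-and-threshold step. The toy sharpness heuristic stands in for the trained ML model from the abstract, and the 0.7 threshold is illustrative only.

```python
# Candid-moment filtering: score each frame, keep frames above a threshold.
import numpy as np

def candid_model(frame: np.ndarray) -> float:
    """Hypothetical stand-in for the ML model; returns a score in [0, 1]."""
    sharpness = np.var(np.diff(frame.astype(float), axis=0))
    return float(min(sharpness / 100.0, 1.0))

def candid_frames(frames, threshold: float = 0.7):
    """Keep frames whose candid score meets the threshold."""
    return [f for f in frames if candid_model(f) >= threshold]

rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
selected = candid_frames(frames)  # frames deemed to contain a candid moment
```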
Abstract:
A method includes determining information indicative of at least one facial characteristic associated with at least one face in a source image; processing the source image using a filter based on the determined information; performing wavelet decomposition on each of the filtered image and the source image; determining, based on the determined information, weightage factors associated with the wavelet decomposition of each of the filtered image and the source image; and obtaining a wavelet image, based on the weightage factors, to generate a texture restored image from the wavelet decomposition of each of the filtered image and the source image.
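A minimal sketch of the wavelet fusion step using PyWavelets. The identity "filtered" image stands in for the face-characteristic-driven filter, and the fixed weightage factors are illustrative; the abstract derives them from the determined facial information.

```python
# Weighted fusion of wavelet coefficients from a source and a filtered image.
import numpy as np
import pywt

def fuse_texture(source: np.ndarray, filtered: np.ndarray,
                 w_src: float = 0.6, w_flt: float = 0.4) -> np.ndarray:
    """Blend wavelet coefficients of the source and filtered images."""
    cA_s, (cH_s, cV_s, cD_s) = pywt.dwt2(source, "haar")
    cA_f, (cH_f, cV_f, cD_f) = pywt.dwt2(filtered, "haar")
    fused = (
        w_src * cA_s + w_flt * cA_f,
        (w_src * cH_s + w_flt * cH_f,
         w_src * cV_s + w_flt * cV_f,
         w_src * cD_s + w_flt * cD_f),
    )
    return pywt.idwt2(fused, "haar")

rng = np.random.default_rng(2)
source = rng.integers(0, 256, (64, 64)).astype(float)
filtered = source  # stand-in; a real pipeline would smooth the face region
restored = fuse_texture(source, filtered)  # texture-restored image
```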