Abstract:
A method comprising determining a time domain video image based, at least in part, on a video media item, determining a motion start frame in which object motion begins in the video media item, determining a first frame index point that corresponds with the motion start frame, determining a motion stop frame in which object motion terminates in the video media item, determining a second frame index point that corresponds with the motion stop frame, and causing display of the time domain video image, a representation of the first frame index point, and a representation of the second frame index point is disclosed.
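A minimal sketch of the motion-bound detection step described above, using simple inter-frame differencing. The function name, the threshold, and the use of NumPy grayscale frames are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def find_motion_bounds(frames, threshold=1.0):
    """Return (start_index, stop_index) of the frames where object motion
    begins and ends, based on mean absolute inter-frame difference."""
    start, stop = None, None
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if diff > threshold:
            if start is None:
                start = i      # first frame whose change exceeds the threshold
            stop = i           # last frame whose change exceeds the threshold
    return start, stop

# Example: a synthetic clip in which a bright block moves during frames 3..7.
frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(12)]
for i in range(3, 8):
    frames[i][10 + i:20 + i, 10:20] = 255

start_idx, stop_idx = find_motion_bounds(frames)
print("motion start frame:", start_idx, "motion stop frame:", stop_idx)
```

The returned indices would then serve as the first and second frame index points whose representations are displayed alongside the time domain video image.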
Abstract:
There are disclosed various methods for video processing in a device and an apparatus for video processing. In a method, one or more frames of a video are displayed to a user and information on an eye of the user is obtained. The information on the eye of the user is used to determine one or more key frames among the one or more frames of the video and to determine one or more objects of interest in the one or more key frames. An apparatus comprises a display for displaying one or more frames of a video to a user; an eye tracker for obtaining information on an eye of the user; a key frame selector configured for using the information on the eye of the user to determine one or more key frames among the one or more frames of the video; and an object of interest determiner configured for using the information on the eye of the user to determine one or more objects of interest in the one or more key frames.
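A minimal sketch of how gaze data could drive key-frame selection and object-of-interest estimation. The per-frame gaze-sample data structure, the fixation-count criterion, and the fixed-size box are assumptions made only for illustration.

```python
from collections import defaultdict

def select_key_frames(gaze_samples, top_k=3):
    """Frames the user looked at longest (most gaze samples) become key frames."""
    ranked = sorted(gaze_samples, key=lambda f: len(gaze_samples[f]), reverse=True)
    return sorted(ranked[:top_k])

def object_of_interest(points, half_size=40):
    """Approximate the object of interest as a box centred on the mean gaze point."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return (cx - half_size, cy - half_size, cx + half_size, cy + half_size)

# gaze_samples maps frame index -> list of (x, y) gaze points from the eye tracker.
gaze_samples = defaultdict(list)
gaze_samples[10] = [(320, 180)] * 12      # long fixation on frame 10
gaze_samples[11] = [(322, 183)] * 9
gaze_samples[50] = [(100, 90)] * 2        # brief glance

for frame in select_key_frames(gaze_samples, top_k=2):
    print(frame, object_of_interest(gaze_samples[frame]))
```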
Abstract:
A method, apparatus and computer program product are provided in order to construct a visual representation of a video including one or more segmented objects from the frames of the video. In the context of a method, an object that appears in a plurality of frames of a video is identified. The method also includes segmenting the object from one or more frames of the video to create one or more segmented objects. The method further includes constructing a three-dimensional visual representation of the video including a representation of at least one frame of the video that includes the object. The three-dimensional visual representation also includes the one or more segmented objects positioned in a time sequence relative to the at least one frame of the video.
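A minimal sketch of the construction described above: segment an object from each frame and arrange the segmented objects along a time axis to form a simple three-dimensional (time, height, width) representation. Thresholding stands in for whatever segmentation the actual embodiments use.

```python
import numpy as np

def segment_object(frame, thresh=128):
    """Binary mask of the object: pixels brighter than `thresh`."""
    return (frame > thresh).astype(np.uint8)

def build_time_volume(frames, thresh=128):
    """Stack per-frame object masks into a (time, H, W) volume so the
    segmented objects sit in a time sequence relative to the frames."""
    return np.stack([segment_object(f, thresh) for f in frames], axis=0)

# Example: an object drifting to the right over five frames.
frames = []
for t in range(5):
    f = np.zeros((32, 32), dtype=np.uint8)
    f[12:20, 4 + 5 * t:12 + 5 * t] = 200
    frames.append(f)

volume = build_time_volume(frames)
print(volume.shape)                      # (5, 32, 32)
print([int(m.sum()) for m in volume])    # object area per frame
```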
Abstract:
An apparatus comprising: a processor; and a memory including computer program code, the memory and the computer program code configured, with the processor, to cause the apparatus to perform at least the following: receive control element data associated with user interface control elements of a user interface display screen of a controlled electronic device; determine a multi-level hierarchical menu structure of the controlled electronic device user interface control elements using the received control element data; and generate an adapted menu, for use by a controller electronic device to control the controlled electronic device, the adapted menu comprising a subset of levels of the multi-level hierarchical menu structure with corresponding controller electronic device user interface control elements adapted for the display of the controller electronic device.
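A minimal sketch of the menu-adaptation idea: build a multi-level menu structure from control element data, then keep only a subset of its levels for the controller device's smaller display. The (path, label) input format is an assumption for illustration.

```python
def build_menu(elements):
    """elements: list of (path, label) tuples, e.g. ("settings/display", "Brightness").
    Returns a nested dict representing the multi-level menu structure."""
    root = {}
    for path, label in elements:
        node = root
        for part in path.split("/"):
            node = node.setdefault(part, {})
        node[label] = {}
    return root

def adapt_menu(menu, max_levels):
    """Keep only the top `max_levels` levels of the hierarchy."""
    if max_levels == 0:
        return {}
    return {name: adapt_menu(child, max_levels - 1) for name, child in menu.items()}

elements = [
    ("settings/display", "Brightness"),
    ("settings/display", "Contrast"),
    ("settings/sound", "Volume"),
    ("input", "HDMI 1"),
]
full = build_menu(elements)
# -> {'settings': {'display': {}, 'sound': {}}, 'input': {'HDMI 1': {}}}
print(adapt_menu(full, max_levels=2))
```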
Abstract:
A method and technical equipment for people identification. The method comprises detecting a person segment in video frames; extracting feature vector sets for several feature categories from the person segment; generating a person feature model of the extracted feature vector sets; and transmitting the person feature model to a people identification model pool. The solution can provide more extensive people identification.
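A minimal sketch of the feature-model side of this method: extract feature vector sets per category from a detected person segment, wrap them in a person feature model, and add the model to an identification model pool. The feature categories and the stand-in extractor are illustrative only.

```python
import numpy as np

FEATURE_CATEGORIES = ["face", "clothing_color", "gait"]   # illustrative categories

def extract_features(person_segment, category):
    """Stand-in feature extractor: returns a small per-category vector."""
    seed = abs(hash((person_segment.tobytes(), category))) % 2**32
    return np.random.default_rng(seed).random(8)

def build_person_model(person_segment):
    """Person feature model: one feature vector set per feature category."""
    return {cat: extract_features(person_segment, cat) for cat in FEATURE_CATEGORIES}

model_pool = []                                    # the people identification model pool
segment = np.zeros((128, 48), dtype=np.uint8)      # cropped person region from a frame
model_pool.append(build_person_model(segment))
print(len(model_pool), list(model_pool[0].keys()))
```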
Abstract:
A method comprising: causing transfer of a displayable first item to a display of a remote apparatus by causing transfer of data to the remote apparatus, the data defining features of the displayable first item; and enabling remote user-control of interaction, in the display of the remote apparatus, between a second item displayed in the display of the remote apparatus and the transferred displayable first item.
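A minimal sketch of the transfer step: the first device serialises data defining the features of the displayable first item and sends it to the remote apparatus, which reconstructs the item in its own display scene where it can interact with a locally displayed second item. The JSON payload and its field names are assumptions for illustration.

```python
import json

def make_item_payload(item_id, shape, position, size, color):
    """Data defining the features of the displayable first item."""
    return json.dumps({
        "item_id": item_id,
        "shape": shape,
        "position": position,   # (x, y) on the remote display
        "size": size,
        "color": color,
    })

def remote_receive(payload, remote_scene):
    """Remote apparatus reconstructs the item and adds it to its scene,
    where local user input can move it relative to a second item."""
    item = json.loads(payload)
    remote_scene[item["item_id"]] = item
    return remote_scene

scene = {"second_item": {"shape": "rect", "position": [200, 120]}}
payload = make_item_payload("first_item", "circle", [40, 60], 24, "#ff8800")
print(list(remote_receive(payload, scene).keys()))
```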
Abstract:
An apparatus, the apparatus comprising at least one processor, and at least one memory including computer program code, the at least one memory and the computer program code configured, with the at least one processor, to cause the apparatus to perform at least the following: based on a detected user position indication of a facial feature associated with a face, provide for anchoring of the position of a corresponding computer generated facial feature so that facial landmark localisation for the corresponding computer generated facial feature can be anchored around the corresponding position on a computer generated image of the face.
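A minimal sketch of the anchoring idea: re-centre a mean landmark template on the user-indicated feature position and restrict each landmark's search to a window around its re-centred location. The template offsets and window size are illustrative assumptions.

```python
import numpy as np

MEAN_LANDMARKS = {                       # template offsets relative to the nose tip
    "nose_tip": (0, 0),
    "left_eye": (-30, -40),
    "right_eye": (30, -40),
    "mouth_center": (0, 45),
}

def anchor_landmarks(user_point, indicated_feature, window=25):
    """Place all template landmarks so that `indicated_feature` sits at the
    user-indicated position, and return a search window for each landmark."""
    ox, oy = MEAN_LANDMARKS[indicated_feature]
    shift = np.array(user_point) - np.array((ox, oy))
    anchored = {}
    for name, (x, y) in MEAN_LANDMARKS.items():
        anchored[name] = {"init": (x + shift[0], y + shift[1]), "search_box": window}
    return anchored

# User indicates the left eye of the computer generated face at pixel (110, 90).
print(anchor_landmarks((110, 90), "left_eye")["mouth_center"])
```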
Abstract:
A method, apparatus and computer program product for an improved facial recognition system are provided. Some embodiments may utilize a weighted block division of an image and capture a property measurement for pixels residing within a block. The measurements may be converted to vectors, compressed, and compared against compressed vectors of enrolled images to identify a characteristic or an image of a matching subject. Training processes may be utilized in order to optimize block divisions and weights.
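A minimal sketch of the block-based matching pipeline: divide the image into weighted blocks, measure a per-block property, concatenate the measurements into a vector, crudely "compress" it, and compare it against compressed enrolled vectors. The mean/standard-deviation measurements, the truncation used for compression, and the uniform weights are stand-ins for whatever the embodiments actually use.

```python
import numpy as np

def block_vector(img, grid=(4, 4), weights=None):
    """Weighted per-block measurements (mean and std) concatenated into a vector."""
    h, w = img.shape
    bh, bw = h // grid[0], w // grid[1]
    if weights is None:
        weights = np.ones(grid)
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(float)
            feats += [weights[i, j] * block.mean(), weights[i, j] * block.std()]
    return np.array(feats)

def compress(vec, keep=16):
    return vec[:keep]                    # placeholder for a real compression step

def best_match(probe, enrolled):
    """Return the enrolled identity with the smallest L2 distance to the probe."""
    return min(enrolled, key=lambda name: np.linalg.norm(probe - enrolled[name]))

rng = np.random.default_rng(0)
enrolled = {"alice": compress(block_vector(rng.integers(0, 255, (64, 64)))),
            "bob": compress(block_vector(rng.integers(0, 255, (64, 64))))}
probe_img = rng.integers(0, 255, (64, 64))
print(best_match(compress(block_vector(probe_img)), enrolled))
```

Per the abstract, the block grid and weights would themselves be tuned by a training process rather than fixed as they are here.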
Abstract:
In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating receipt of a panchromatic image and a color image associated with a scene. The panchromatic image includes first luminance data, and the color image includes second luminance data and first chrominance data. An extended dynamic range luminance image is generated based at least on the first luminance data and the second luminance data. An extended dynamic range color image of the scene is generated based on the extended dynamic range luminance image and the first chrominance data associated with the color image.
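A minimal sketch of the fusion idea: take luminance from both the panchromatic and the color captures, merge them into an extended dynamic range luminance channel, and recombine that channel with the color image's chrominance. The YCbCr-style split and the exposure-weighted blend are illustrative assumptions, not the embodiment's actual fusion.

```python
import numpy as np

def split_luma_chroma(rgb):
    """Approximate luminance (Y) and chrominance (Cb, Cr) from an RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y) * 0.564, (r - y) * 0.713

def fuse_luminance(pan_luma, color_luma):
    """Weight each pixel toward the panchromatic capture where the color capture
    is near clipping (very dark or very bright)."""
    w = np.clip(np.abs(color_luma - 0.5) * 2, 0, 1)
    return w * pan_luma + (1 - w) * color_luma

def merge(y, cb, cr):
    """Recombine the fused luminance with the color image's chrominance."""
    r = y + 1.403 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.773 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 1)

rng = np.random.default_rng(0)
color_img = rng.random((32, 32, 3))          # color capture (second luminance + chrominance)
pan_luma = rng.random((32, 32))              # panchromatic capture (first luminance)
y, cb, cr = split_luma_chroma(color_img)
edr = merge(fuse_luminance(pan_luma, y), cb, cr)
print(edr.shape, float(edr.min()), float(edr.max()))
```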