Abstract:
The present disclosure provides a method and a system for processing input fingerprint information, and a mobile terminal thereof. The method includes the following steps: a fingerprint information identification step: upon detecting that fingerprint information is input, identifying whether a fingerprint template matching the input fingerprint information is present in a fingerprint database, where the fingerprint database includes a plurality of pre-registered fingerprint templates; and an operation execution step: upon identifying that a fingerprint template matching the fingerprint information is present in the fingerprint database, performing a predetermined associated operation according to a combination of the fingerprint information and a holding manner of the current mobile terminal. In the present disclosure, a plurality of fingerprints are pre-registered as fingerprint templates, and a predetermined associated operation is performed according to a combination of the fingerprint information and the holding manner of the current mobile terminal. In this way, the associated operation may be started regardless of whether the screen is on or off, such that a user may experience more convenient and quicker services.
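The dispatch described in this abstract can be sketched as a lookup keyed on the matched template and the holding manner. This is a minimal illustration, not the patent's implementation; all names (`FINGERPRINT_DB`, `OPERATIONS`, `dispatch`, the holding-manner labels) are assumptions, and real matching would compare minutiae rather than raw bytes.

```python
HOLD_LEFT, HOLD_RIGHT = "left-hand", "right-hand"

# Pre-registered fingerprint templates, keyed by an opaque template id.
FINGERPRINT_DB = {"thumb-01": b"\x01\x02", "index-02": b"\x03\x04"}

# (template id, holding manner) -> predetermined associated operation.
OPERATIONS = {
    ("thumb-01", HOLD_LEFT): "launch_camera",
    ("thumb-01", HOLD_RIGHT): "unlock_screen",
    ("index-02", HOLD_LEFT): "open_contacts",
}

def match_template(scanned: bytes):
    """Identify which pre-registered template matches the scanned data."""
    for template_id, template in FINGERPRINT_DB.items():
        if template == scanned:  # placeholder for real minutiae matching
            return template_id
    return None

def dispatch(scanned: bytes, holding_manner: str):
    """Run the predetermined operation for (fingerprint, holding manner)."""
    template_id = match_template(scanned)
    if template_id is None:
        return None  # no matching template: do nothing
    return OPERATIONS.get((template_id, holding_manner))
```

Because the operation is selected by the *combination* of fingerprint and holding manner, the same finger can trigger different operations depending on how the terminal is held.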
Abstract:
Techniques for matching a feature of captured visual data are described in various implementations. In one example implementation, a server from among plural servers matches a feature of captured visual data of a physical target received from an electronic device with features of one of a plurality of partitions. Based on the matching, an object is identified that corresponds to the captured visual data.
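The partitioned matching described above can be sketched as follows: each partition (conceptually, each server) holds a subset of reference features, and the query feature from the captured visual data is matched within a partition to identify the corresponding object. The data layout, threshold, and function names are illustrative assumptions, not the described system.

```python
PARTITIONS = [
    # each partition maps a reference feature vector to an object label
    {(1.0, 0.0): "poster-A", (0.0, 1.0): "poster-B"},
    {(0.5, 0.5): "logo-C"},
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_in_partition(feature, partition, threshold=0.2):
    """Find the closest reference feature within one partition."""
    best_label, best_dist = None, threshold
    for ref, label in partition.items():
        d = distance(feature, ref)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

def identify(feature):
    """Query each partition in turn for a matching object."""
    for partition in PARTITIONS:
        label = match_in_partition(feature, partition)
        if label is not None:
            return label
    return None
```

Splitting the reference features across partitions lets each server search only its own subset, which is the point of distributing the match.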
Abstract:
A spatial relationship is determined between a first image of a first region of a physical space, and a second image of a second region of that space; and the first and second images are thereby stitched together into a composite image comprising first and second areas derived from the first and second images respectively. Further, there is detected an embedded signal having been embedded in light illuminating at least part of the first region of the physical space upon capture of the first image, the embedded signal conveying metadata relating to at least part of the physical space. An effect is applied to at least part of the first area of the composite image based on the detected metadata; and also, based on the detected metadata and on the determined spatial relationship between the first and second images, the effect is applied to at least part of the second area of the composite image that extends beyond the first area.
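The stitching and effect propagation above can be sketched with one-dimensional "images" for brevity: two overlapping strips are composited using their spatial offset, and an effect tied to metadata detected in the first strip is extended across the composite, beyond the first strip's area. All names and the 1-D simplification are illustrative assumptions.

```python
def stitch(first, second, offset):
    """Composite two strips, where `second` starts `offset` pixels into `first`."""
    composite = list(first)
    for i, px in enumerate(second):
        pos = offset + i
        if pos < len(composite):
            composite[pos] = px  # overlap region: take the second strip's pixel
        else:
            composite.append(px)
    return composite

def apply_effect(composite, start, end, effect):
    """Apply `effect` over [start, end), which may span both stitched areas."""
    return [effect(px) if start <= i < end else px
            for i, px in enumerate(composite)]

# Metadata detected from the coded light in the first strip says: brighten
# from pixel 2 onward; the determined spatial relationship (offset=3) lets
# the effect extend into the area derived from the second strip.
composite = stitch([10, 10, 10, 10], [20, 20, 20], offset=3)
result = apply_effect(composite, 2, 6, lambda px: px + 5)
```

The key point is that the effect region is defined once, from metadata carried by the first image's illumination, yet reaches pixels that only the second image contributed.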
Abstract:
Disclosed herein is an image processing apparatus (100), including a representative face extraction unit (114) configured to detect face images in an image frame that forms part of video image data, and select, from the detected face images, a face image to be used as index information. The representative face extraction unit (114) is configured to calculate a score for each of the face images detected in the image frame based on characteristics of the face image, and select a detected face image having a high score as the index-use face image.
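The selection logic can be sketched as scoring each detected face from simple characteristics and keeping the highest scorer. The characteristics used here (face size and frontal-ness) and their weighting are assumptions for illustration; the apparatus itself does not specify them in this abstract.

```python
def score_face(face):
    """Score a detected face from its characteristics: larger and more
    frontal faces score higher. `face` is a dict with assumed keys."""
    size_term = face["width"] * face["height"]
    frontal_term = 1.0 - abs(face["yaw_degrees"]) / 90.0
    return size_term * frontal_term

def select_index_face(faces):
    """Pick the highest-scoring face image for use as index information."""
    return max(faces, key=score_face) if faces else None

faces = [
    {"id": "a", "width": 40, "height": 40, "yaw_degrees": 60},  # large, turned away
    {"id": "b", "width": 30, "height": 30, "yaw_degrees": 0},   # smaller, frontal
]
best = select_index_face(faces)
```

Here the smaller but frontal face wins, showing why a combined score can beat any single characteristic for choosing a representative index image.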
Abstract:
A method of representing an object appearing in an image comprises deriving a plurality of view descriptors of the object, each view descriptor corresponding to a different view of the object, and indicating for each view descriptor whether or not the respective view corresponds to a view of the object appearing in the image, wherein at least one view descriptor comprises a representation of the colour of the object in the respective view.
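A minimal data-structure sketch of this representation follows: an object carries several view descriptors, each flagged as appearing in the image or not, and at least one carrying a colour representation. The class and field names are assumptions, not the method's actual encoding.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ViewDescriptor:
    view_name: str                                  # e.g. "front", "side"
    appears_in_image: bool                          # does this view appear?
    colour: Optional[Tuple[int, int, int]] = None   # RGB of object in this view

@dataclass
class ObjectRepresentation:
    descriptors: list

    def visible_views(self):
        """Names of the views flagged as appearing in the image."""
        return [d.view_name for d in self.descriptors if d.appears_in_image]

car = ObjectRepresentation(descriptors=[
    ViewDescriptor("front", appears_in_image=True, colour=(200, 0, 0)),
    ViewDescriptor("side", appears_in_image=False),
])
```

Keeping descriptors for non-appearing views alongside the per-view visibility flag is what lets a query match the object from any stored view, not only the one photographed.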