Abstract:
Provided herein are methods and systems for detecting whether a personal object is counterfeit, comprising: analyzing one or more images depicting the personal object to identify one or more wearing marks on the personal object induced by one or more wearing conditions, generating a wearing pattern comprising the one or more wearing marks, comparing the wearing pattern with one or more previous wearing patterns created for the personal object based on past images of the personal object, and determining whether the personal object is genuine or counterfeit based on the comparison.
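As one possible reading of the comparison and determination steps, the sketch below reduces each wearing mark to a position and an approximate size and declares the object genuine when a sufficient share of previously recorded marks reappears; the mark representation and the thresholds are illustrative assumptions, not taken from the abstract.

    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class WearingMark:
        x: float      # normalized horizontal position of the mark
        y: float      # normalized vertical position of the mark
        size: float   # approximate extent of the mark

    def marks_match(old: WearingMark, new: WearingMark, tol: float = 0.05) -> bool:
        # A previously recorded mark should still be present; it may have grown
        # slightly but should not have shrunk or moved.
        return hypot(old.x - new.x, old.y - new.y) <= tol and new.size >= 0.8 * old.size

    def is_genuine(previous: list[WearingMark], current: list[WearingMark],
                   required_ratio: float = 0.7) -> bool:
        # Declare the object genuine if enough of the previously recorded
        # wearing marks reappear in the newly generated wearing pattern.
        if not previous:
            return True  # no earlier wearing pattern to compare against
        matched = sum(any(marks_match(p, c) for c in current) for p in previous)
        return matched / len(previous) >= required_ratio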
Abstract:
An image acquisition unit (2020) acquires a captured image containing an inspection target instrument. An inspection information acquisition unit (2040) acquires inspection information regarding the instrument contained in the captured image. The inspection information indicates an inspection item of the instrument. A first display control unit (2060) displays, on a display device (10), an indication representing the inspection spot corresponding to the inspection item indicated by the inspection information. For example, the first display control unit (2060) displays the indication so that it is superimposed on the inspection spot on the display device (10). For example, the first display control unit (2060) displays the indication on the display device (10) at the inspection spot or near the instrument.
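As a minimal sketch of how an inspection item could be turned into an on-screen indication, the lookup table and overlay format below are illustrative assumptions; the abstract does not specify how the inspection spot is located in the captured image.

    # Illustrative lookup from inspection items to spot positions in the
    # captured image; the items, coordinates, and overlay format are assumptions.
    INSPECTION_SPOTS = {
        "valve_pressure": (120, 80),
        "meter_reading": (300, 210),
    }

    def indication_for(inspection_item: str, radius: int = 20) -> dict:
        # Return a drawing instruction to be superimposed on the inspection spot.
        x, y = INSPECTION_SPOTS[inspection_item]
        return {"shape": "circle", "center": (x, y), "radius": radius}

    print(indication_for("meter_reading"))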
Abstract:
A checkout apparatus (10) includes an image data acquisition unit (11) that acquires data of an image; an image analysis unit (12) that recognizes a plurality of products in the image using the data of the image and a feature value of the exterior of each product registered in a feature value storage unit (14); a reading necessity check unit (15) that extracts, from among the recognized products, the products for which a product code needs to be read, using an object-to-be-read storage unit (16) in which such products are registered in advance; and a reading unit (17) that reads the product code of each product extracted by the reading necessity check unit (15).
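A minimal sketch of the reading-necessity check: after image recognition, only the products registered in the object-to-be-read storage still require a code read. The product names and the form of the recognizer output are illustrative assumptions.

    # Products registered in advance as requiring a product-code read.
    NEEDS_CODE_READING = {"age-restricted liquor", "marked-down bento"}

    def products_to_scan(recognized_products: list[str]) -> list[str]:
        # Keep only the recognized products whose product code must still be read.
        return [p for p in recognized_products if p in NEEDS_CODE_READING]

    # Two of the three recognized items can be checked out by image recognition alone.
    print(products_to_scan(["canned coffee", "age-restricted liquor", "sandwich"]))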
Abstract:
A search object and m-number of first local features, each constituted by a feature vector of 1 to i dimensions of a local area of one of m-number of feature points in an image of the search object, are stored; n-number of feature points are extracted from an image in a captured video; second local features, each constituted by a feature vector of 1 to j dimensions, are generated for local areas of the n-number of feature points; the smaller of the number of dimensions i of the first local features and the number of dimensions j of the second local features is selected; and the existence of the search object in the image in the video is recognized when a prescribed ratio or more of the m-number of first local features, up to the selected number of dimensions, correspond to the n-number of second local features up to the selected number of dimensions.
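As a concrete reading of the matching rule, the following sketch truncates both sets of local features to the smaller number of dimensions and tests whether a prescribed ratio of the stored features has a close counterpart in the frame; the distance measure and the thresholds are assumptions for illustration, not taken from the abstract.

    import numpy as np

    def object_present(first: np.ndarray, second: np.ndarray,
                       ratio: float = 0.5, max_dist: float = 0.3) -> bool:
        # first: (m, i) stored local features; second: (n, j) features from the frame.
        d = min(first.shape[1], second.shape[1])  # select the smaller number of dimensions
        f, s = first[:, :d], second[:, :d]
        # A stored feature "corresponds" if some frame feature lies within max_dist.
        dists = np.linalg.norm(f[:, None, :] - s[None, :, :], axis=2)
        matched = int((dists.min(axis=1) <= max_dist).sum())
        return matched / len(f) >= ratio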
Abstract:
An object of the present invention is to reduce the size of a feature descriptor while maintaining the accuracy of object identification. A local feature descriptor extracting apparatus includes: a feature point detecting unit which detects a plurality of feature points in an image and outputs feature point information, that is, information regarding each feature point; a feature point selecting unit which selects a prescribed number of feature points, in order of importance, from the plurality of detected feature points based on the feature point information; a local region acquiring unit which acquires a local region corresponding to each selected feature point; a subregion dividing unit which divides each local region into a plurality of subregions; a subregion feature vector generating unit which generates a feature vector of a plurality of dimensions for each subregion in each local region; and a dimension selecting unit which selects dimensions from the feature vector of each subregion, based on the positional relationship between subregions in each local region, so that the correlation between neighboring subregions is lowered, and which outputs the elements of the selected dimensions as a feature descriptor of the local region.
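The dimension-selection idea can be sketched as follows, assuming a 4x4 grid of subregions with 8-dimensional subregion vectors and a checkerboard selection pattern; these concrete choices are illustrative and are not stated in the abstract.

    import numpy as np

    def select_dimensions(subregion_vectors: np.ndarray) -> np.ndarray:
        # subregion_vectors: (4, 4, 8) array holding an 8-dimensional feature
        # vector for each subregion of the 4x4 grid in one local region.
        descriptor = []
        for gy in range(4):
            for gx in range(4):
                vec = subregion_vectors[gy, gx]
                # Checkerboard pattern: adjacent subregions keep complementary
                # dimensions, so strongly correlated values are not stored twice.
                offset = (gx + gy) % 2
                descriptor.extend(vec[offset::2])
        return np.asarray(descriptor)  # 4 x 4 x 4 = 64 elements instead of 128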
Abstract:
A medical article and m-number of first local features, which are respectively constituted by a feature vector of 1 dimension to i dimensions of m-number of feature points in an image of the medical article, are stored in association with each other; n-number of feature points are extracted from an image in a captured video; n-number of second local features respectively constituted by a feature vector of 1 dimension to j dimensions are generated; the smaller number of dimensions among the number of dimensions i and the number of dimensions j is selected; and the existence of the medical article in the image in the video is recognized when it is determined that a prescribed ratio or more of the m-number of first local features up to the selected number of dimensions corresponds to the n-number of second local features up to the selected number of dimensions.
Abstract:
The size of a feature descriptor is reduced while the accuracy of object identification is maintained. A local feature descriptor extracting apparatus includes a feature point detecting unit configured to detect feature points in an image, a local region acquiring unit configured to acquire a local region for each of the feature points, a subregion dividing unit configured to divide each local region into a plurality of subregions, a subregion feature vector generating unit configured to generate a feature vector with a plurality of dimensions for each of the subregions in each local region, and a dimension selecting unit configured to select dimensions from the feature vector in each subregion so as to reduce the correlation between the feature vectors of proximate subregions, based on the positional relations among the subregions in each local region, and to output elements of the selected dimensions as a feature descriptor of the local region.
Abstract:
Advertisement information relating to an object is provided in real time while images of the object are being captured. m first local features, which are respectively feature vectors of one to i dimensions, are stored in association with an object; n feature points are extracted from a video picture; n second local features, which are respectively feature vectors of one to j dimensions, are generated; the smaller of the number of dimensions i and the number of dimensions j is selected; and the object is recognized as being present in the video picture, and advertisement information relating to that object is provided, when it is determined that at least a prescribed ratio of the m first local features of the selected number of dimensions corresponds to the n second local features of the selected number of dimensions.
Abstract:
A teaching data extending device includes: a relationship acquiring unit that obtains a relationship between a plurality of features included in each of a plurality of teaching data; a feature selecting unit that selects one or more of the plurality of features based on the relationship; and a teaching data extending unit that generates, for one or more teaching data, new teaching data in which the value of the feature selected by the feature selecting unit is replaced with the value of that feature in another teaching data item classified in the same class.
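A minimal sketch of the extension step, assuming each teaching datum is a dictionary carrying a "class" label; the record format and the random choice of the donor record are illustrative assumptions.

    import random

    def extend(teaching_data: list[dict], selected_feature: str) -> list[dict]:
        new_data = []
        for record in teaching_data:
            same_class = [r for r in teaching_data
                          if r["class"] == record["class"] and r is not record]
            if not same_class:
                continue
            donor = random.choice(same_class)
            extended = dict(record)
            # Replace the selected feature's value with the value taken from
            # another teaching record of the same class.
            extended[selected_feature] = donor[selected_feature]
            new_data.append(extended)
        return new_data

    data = [{"class": "A", "length": 3.0}, {"class": "A", "length": 4.5},
            {"class": "B", "length": 9.1}]
    print(extend(data, "length"))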
Abstract:
The information processing device 1D mainly includes a state-of-activity estimation unit 31D and a timing determination unit 32D. The state-of-activity estimation unit 31D estimates a state of activity of a meeting based on information detected in the meeting room in which the meeting is being held. The timing determination unit 32D determines a timing of mobile sales of a commodity to one or more participants of the meeting based on the state of activity estimated by the state-of-activity estimation unit 31D.
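As one hypothetical reading, the state of activity might be approximated from in-room audio levels and a lull used as the sales timing; the abstract does not state how the estimation or the timing decision is actually performed, so the proxy and the threshold below are assumptions.

    def estimate_activity(audio_levels_db: list[float]) -> float:
        # Crude activity score: mean microphone level over a recent window.
        return sum(audio_levels_db) / len(audio_levels_db)

    def is_sales_timing(audio_levels_db: list[float], lull_threshold_db: float = 45.0) -> bool:
        # Propose mobile sales when the room has gone comparatively quiet.
        return estimate_activity(audio_levels_db) < lull_threshold_db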