Unsupervised domain adaptation for video classification
Abstract:
A method is provided for unsupervised domain adaptation for video classification. The method learns a transformation for each of a plurality of target video clips taken from a set of target videos, responsive to original features extracted from the target video clips. The transformation corrects differences between a target domain corresponding to the target video clips and a source domain corresponding to source video clips taken from a set of source videos. The method adapts the target domain to the source domain by applying the transformation to the original features to obtain transformed features for the plurality of target video clips. The method converts the original and transformed features of the same ones of the target video clips into a single classification feature for each of the target videos. The method classifies a new target video relative to the set of source videos using the single classification feature for each of the target videos.
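For illustration, the sketch below follows the pipeline described in the abstract: learn a transformation from target clip features, apply it to obtain transformed features, fuse original and transformed clip features into a single video-level classification feature, and classify against the source videos. The specific choices here are assumptions not stated in the abstract: the transformation is approximated by a CORAL-style linear alignment of second-order feature statistics, clip-to-video conversion is mean pooling of concatenated original/transformed features, and classification is by nearest source-class centroid. All function and variable names are illustrative.

    # Minimal sketch of the described adaptation pipeline (assumptions noted above).
    import numpy as np

    def learn_alignment(target_feats, source_feats, eps=1e-3):
        """Learn a linear map aligning target feature covariance to the source domain."""
        def cov(x):
            xc = x - x.mean(axis=0, keepdims=True)
            return xc.T @ xc / max(len(x) - 1, 1) + eps * np.eye(x.shape[1])
        def mat_pow(c, p):
            w, v = np.linalg.eigh(c)
            return v @ np.diag(np.clip(w, eps, None) ** p) @ v.T
        # Whiten target statistics, then re-color with source statistics.
        return mat_pow(cov(target_feats), -0.5) @ mat_pow(cov(source_feats), 0.5)

    def video_feature(clip_feats, transform):
        """Fuse original and transformed clip features into one classification feature per video."""
        transformed = clip_feats @ transform
        fused = np.concatenate([clip_feats, transformed], axis=1)
        return fused.mean(axis=0)

    def classify(video_feat, class_centroids):
        """Assign the video to the nearest source-class centroid."""
        dists = {c: np.linalg.norm(video_feat - mu) for c, mu in class_centroids.items()}
        return min(dists, key=dists.get)

    # Toy usage with random stand-in features (128-dim clip features, 8 clips in one target video).
    rng = np.random.default_rng(0)
    source_feats = rng.normal(size=(200, 128))
    target_clips = rng.normal(loc=0.5, size=(8, 128))
    transform = learn_alignment(target_clips, source_feats)
    feat = video_feature(target_clips, transform)
    # Stand-in centroids in the fused (256-dim) feature space; in practice these would
    # be computed from source videos processed the same way.
    centroids = {"class_a": rng.normal(size=256), "class_b": rng.normal(size=256)}
    print(classify(feat, centroids))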