Weakly supervised natural language localization networks for video proposal prediction based on a text query
Abstract:
Systems and methods are provided for weakly supervised natural language localization (WSNLL), for example, as implemented in a neural network or model. The WSNLL network is trained with long, untrimmed videos, i.e., videos that have not been temporally segmented or annotated. The WSNLL network or model defines or generates a video-sentence pair, which corresponds to a pairing of an untrimmed video with an input text sentence. According to some embodiments, the WSNLL network or model is implemented with a two-branch architecture, where one branch performs segment-sentence alignment and the other performs segment selection. These methods and systems predict how well a video proposal matches a text query using their respective visual and text features.
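As a rough illustration of the two-branch architecture described above, the following is a minimal sketch in a PyTorch style. The module names, feature dimensions, fusion scheme, and score combination are illustrative assumptions rather than the patented implementation: one branch scores segment-sentence alignment per proposal, while the other produces selection weights over segments.

```python
import torch
import torch.nn as nn

class WSNLLSketch(nn.Module):
    """Illustrative two-branch network: alignment + selection over video segments.

    Assumptions (not taken from the patent text): segment features and the
    sentence feature are pre-extracted vectors of size `vis_dim` and `txt_dim`.
    """

    def __init__(self, vis_dim=500, txt_dim=300, hidden=256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden)   # project segment (visual) features
        self.txt_proj = nn.Linear(txt_dim, hidden)   # project sentence (text) feature
        # Branch 1: segment-sentence alignment score for each segment
        self.align_head = nn.Linear(hidden, 1)
        # Branch 2: selection weights distributed across segments
        self.select_head = nn.Linear(hidden, 1)

    def forward(self, seg_feats, sent_feat):
        # seg_feats: (num_segments, vis_dim); sent_feat: (txt_dim,)
        v = self.vis_proj(seg_feats)                  # (N, hidden)
        t = self.txt_proj(sent_feat).unsqueeze(0)     # (1, hidden)
        fused = torch.relu(v * t)                     # simple multiplicative fusion (assumed)
        align = torch.sigmoid(self.align_head(fused))               # per-segment alignment scores
        select = torch.softmax(self.select_head(fused), dim=0)      # selection distribution
        # Combine both branches into a single video-sentence matching score
        return (align * select).sum()

# Usage: score how well 10 candidate segments of an untrimmed video match a query sentence.
model = WSNLLSketch()
segments = torch.randn(10, 500)   # placeholder pre-extracted segment features
sentence = torch.randn(300)       # placeholder sentence embedding
score = model(segments, sentence)
```

Because training uses only video-sentence pairs (no temporal annotations), supervision in this kind of design typically comes from the video-level matching score rather than from segment labels.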