DEEP REINFORCEMENT LEARNING FRAMEWORK FOR CHARACTERIZING VIDEO CONTENT

    Publication number: US20210124930A1

    Publication date: 2021-04-29

    Application number: US17141028

    Application date: 2021-01-04

    Abstract: Methods and systems for performing sequence level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicting each frame. One or more scene affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames of data. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents an affective label prediction for the frame at the current time step. A pool of actions taken up to the current time step, including the action taken with the agent, is transformed into a predicted affective history for a subsequent time step. A reward is generated for predicted actions up to the current time step by comparing the predicted actions against corresponding annotated scene affective labels.
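
A short, hedged sketch of the loop this abstract describes may help. All names below (policy, NUM_LABELS, FEATURE_DIM, the frequency-based pooling) are illustrative assumptions; the abstract does not specify an architecture or a history representation.

```python
# Minimal sketch of the per-frame prediction loop described in the
# abstract above. The policy is a placeholder, and pooling past actions
# into a label-frequency vector is an assumption, not the patented form.
import numpy as np

NUM_LABELS = 4     # hypothetical number of scene affective labels
FEATURE_DIM = 128  # hypothetical per-frame feature dimension

rng = np.random.default_rng(0)

def policy(frame_features, affective_history):
    """Stand-in for the learned agent: maps the environment state
    (frame features plus predicted affective history) to an action,
    i.e. an affective label prediction for the current frame."""
    state = np.concatenate([frame_features, affective_history])
    scores = state[:NUM_LABELS]  # placeholder scoring, not a trained network
    return int(np.argmax(scores))

frames = rng.normal(size=(10, FEATURE_DIM))  # toy sequence of frame features
actions = []                                 # pool of actions taken so far
history = np.zeros(NUM_LABELS)               # predicted affective history

for frame in frames:
    action = policy(frame, history)  # per-frame affective label prediction
    actions.append(action)
    # Transform the action pool into the predicted affective history for
    # the subsequent time step (normalized label frequencies assumed).
    counts = np.bincount(actions, minlength=NUM_LABELS)
    history = counts / counts.sum()

print("predicted per-frame labels:", actions)
```

In a trained system the placeholder policy would be a network optimized against the reward described at the end of the abstract.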

    Deep reinforcement learning framework for characterizing video content

    Publication number: US11386657B2

    Publication date: 2022-07-12

    Application number: US17141028

    Application date: 2021-01-04

    Abstract: Methods and systems for performing sequence level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicting each frame. One or more scene affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames of data. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents an affective label prediction for the frame at the current time step. A pool of actions taken up to the current time step, including the action taken with the agent, is transformed into a predicted affective history for a subsequent time step. A reward is generated for predicted actions up to the current time step by comparing the predicted actions against corresponding annotated scene affective labels.

    Deep reinforcement learning framework for characterizing video content

    Publication number: US10885341B2

    Publication date: 2021-01-05

    Application number: US16171018

    Application date: 2018-10-25

    Abstract: Methods and systems for performing sequence level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicting each frame. An environment state for each time step t corresponding to each frame is represented by the video information for time step t and predicted affective information from the previous time step t−1. An action A(t) is taken with an agent controlled by a machine learning algorithm for the frame at step t, wherein an output of the action A(t) represents an affective label prediction for the frame at time step t. A pool of predicted actions is transformed into a predicted affective history at the next time step t+1. The predicted affective history is included as part of the environment state for the next time step t+1. A reward R is generated for predicted actions up to the current time step t by comparing them against corresponding annotated movie scene affective labels.
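
This grant is more explicit about the state recurrence: the environment state at step t combines the video information for t with the predicted affective history from t−1, and the pooled actions become the history for t+1. The sketch below spells that out under the same caveats as above; the concatenation and frequency pooling are assumed representations, not the patent's.

```python
# Assumed sketch of the environment-state recurrence in this claim.
import numpy as np

NUM_LABELS = 4  # hypothetical label count

def environment_state(video_features_t, predicted_history_prev):
    """State for time step t: the video information for t combined with
    the predicted affective history from step t-1 (concatenation assumed)."""
    return np.concatenate([video_features_t, predicted_history_prev])

def pool_to_history(action_pool):
    """Transform the pool of predicted actions A(1)..A(t) into the
    predicted affective history for step t+1 (frequency pooling assumed)."""
    counts = np.bincount(action_pool, minlength=NUM_LABELS)
    return counts / max(counts.sum(), 1)

# Toy usage: the history built from three past actions feeds the next state.
state_t = environment_state(np.zeros(8), pool_to_history([0, 2, 2]))
print(state_t.shape)  # (12,)
```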

    DEEP REINFORCEMENT LEARNING FRAMEWORK FOR SEQUENCE LEVEL PREDICTION OF HIGH DIMENSIONAL DATA

    Publication number: US20220327828A1

    Publication date: 2022-10-13

    Application number: US17852602

    Application date: 2022-06-29

    Abstract: In sequence level prediction of a sequence of frames of high dimensional data, one or more affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents an affective label prediction for the frame at the current time step. A pool of actions taken up to the current time step, including the action taken with the agent, is transformed into a predicted affective history for a subsequent time step. A reward is generated for predicted actions up to the current time step by comparing the predicted actions against corresponding annotated affective labels.
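
Since each affective label pertains to the entire sequence, one plausible reading of the reward step is the fraction of per-frame predictions so far that agree with the sequence-level annotation. The function below is that assumed reading, not necessarily the reward the patent claims.

```python
def reward(predicted_actions, scene_label):
    """Fraction of the actions predicted so far that match the annotated
    sequence-level affective label (an assumed reward shape)."""
    matches = sum(1 for a in predicted_actions if a == scene_label)
    return matches / len(predicted_actions)

# Toy usage: three of four per-frame predictions agree with the annotation.
print(reward([2, 2, 1, 2], scene_label=2))  # 0.75
```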