METHOD AND SYSTEM FOR GENERATING AUDIO-VISUAL CONTENT FROM VIDEO GAME FOOTAGE

    Publication No.: EP3690882A1

    Publication Date: 2020-08-05

    Application No.: EP20154861.7

    Filing Date: 2020-01-31

    Abstract: A method of generating audio-visual content from video game footage is provided. The method comprises obtaining a user-selected audio track and obtaining video game footage. Statistical analysis is performed on the audio track so as to determine an excitement level associated with respective portions of the audio track. Statistical analysis is performed on the video game footage so as to determine an excitement level associated with respective portions of the video game footage. Portions of the video game footage are matched with portions of the audio track, based on a correspondence in determined excitement level. Based on said matching, a combined audio-visual content comprising the portions of the video game footage matched to corresponding portions of the audio track is generated. In this way, calm and exciting moments within the video footage are matched to corresponding moments in the audio track. A corresponding system is also provided.
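The matching step described in the abstract could be sketched as follows. This is purely illustrative: the function name, the greedy nearest-level pairing strategy, and the assumption that excitement levels are pre-computed scalars per portion are all assumptions for the sketch, not the patented implementation.

```python
def match_by_excitement(video_levels, audio_levels):
    """Pair each audio portion (in playback order) with the as-yet-unused
    video portion whose excitement level is closest to it.

    video_levels, audio_levels: lists of scalar excitement scores, one per
    portion. Assumes there are at least as many video portions as audio
    portions. Returns a list of (video_index, audio_index) pairs.
    """
    unused = set(range(len(video_levels)))
    pairing = []
    for a_idx, a_level in enumerate(audio_levels):
        # Greedily pick the closest-matching remaining video portion.
        best = min(unused, key=lambda v: abs(video_levels[v] - a_level))
        unused.remove(best)
        pairing.append((best, a_idx))
    return pairing
```

For example, with video excitement levels `[0.1, 0.9, 0.5]` and audio levels `[0.85, 0.2, 0.4]`, the exciting video portion (0.9) is paired with the exciting audio portion (0.85), and the calm portions likewise, mirroring the abstract's matching of calm and exciting moments.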

    APPARATUS AND METHOD OF MAPPING A VIRTUAL ENVIRONMENT

    Publication No.: EP3593873A1

    Publication Date: 2020-01-15

    Application No.: EP19176223.6

    Filing Date: 2019-05-23

    IPC Class: A63F13/525

    Abstract: A method of mapping a virtual environment comprises the steps of obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and, for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtaining mapping points corresponding to a selected predetermined set of depth values corresponding to a predetermined set of positions within a respective video image; wherein, for each pair of depth values and video image positions, a mapping point has a distance from the virtual camera position based upon the depth value, and a position based upon the relative positions of the virtual camera and the respective video image position, thereby obtaining a map dataset of mapping points corresponding to the first sequence of video images.
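The geometry of the mapping-point step could be sketched as below for a top-down 2-D map: each sampled (pixel column, depth) pair places a point at the given depth along the ray through that column. The function name, the pinhole-style angular model, and the restriction to a flat 2-D map are assumptions made for this sketch, not details taken from the patent.

```python
import math

def mapping_points(cam_pos, cam_yaw, fov, image_width, samples):
    """Compute top-down 2-D map points for one video frame.

    cam_pos:     (x, y) in-game virtual camera position
    cam_yaw:     camera heading in radians
    fov:         horizontal field of view in radians
    image_width: frame width in pixels
    samples:     list of (x_pixel, depth) pairs taken from the depth buffer
                 at predetermined image positions

    Each point lies at the sampled depth, in the direction of the ray
    through its pixel column, matching the abstract's rule that a mapping
    point's distance comes from the depth value and its direction from the
    camera/image-position geometry.
    """
    points = []
    for x_pixel, depth in samples:
        # Angle of the ray through this pixel column, relative to the
        # camera's forward direction (linear approximation across the FOV).
        angle = cam_yaw + (x_pixel / image_width - 0.5) * fov
        points.append((cam_pos[0] + depth * math.cos(angle),
                       cam_pos[1] + depth * math.sin(angle)))
    return points
```

Accumulating these points across the whole sequence of frames and camera positions would yield the map dataset the abstract describes.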