Abstract:
Spherical video content may be presented on a display. Interaction information may be received during presentation of the spherical video content on the display. Interaction information may indicate a user's viewing selections of the spherical video content, including viewing directions for the spherical video content. Display fields of view may be determined based on the viewing directions. The display fields of view may define extents of the visual content viewable as a function of progress through the spherical video content. User input to record a custom view of the spherical video content may be received, and a playback sequence for the spherical video content may be generated. The playback sequence may mirror at least a portion of the presentation of the spherical video content on the display.
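The mapping described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class names, the fixed angular extent, and the sort-by-progress ordering are all assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ViewingDirection:
    """A user's viewing selection at a point of progress through the video."""
    progress: float  # position in the video, 0.0 to 1.0
    yaw: float       # horizontal viewing direction, degrees
    pitch: float     # vertical viewing direction, degrees


@dataclass
class FieldOfView:
    """A display field of view derived from a viewing direction."""
    progress: float
    yaw: float
    pitch: float
    extent_deg: float  # angular extent of the visual content viewable


def build_playback_sequence(directions: List[ViewingDirection],
                            extent_deg: float = 90.0) -> List[FieldOfView]:
    """Derive display fields of view from recorded viewing directions,
    ordered by progress so playback mirrors the original presentation."""
    return [FieldOfView(d.progress, d.yaw, d.pitch, extent_deg)
            for d in sorted(directions, key=lambda d: d.progress)]
```

A recorded custom view is then just this ordered sequence replayed against the spherical source.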
Abstract:
Video information defining video content may be accessed. One or more highlight moments in the video content may be identified. One or more video segments in the video content may be identified based on one or more highlight moments. Derivative video information defining one or more derivative video segments may be generated based on one or more video segments. The derivative video information may be transmitted over a network to a computing device. One or more selections of the derivative video segments may be received from the computing device. Video information defining one or more video segments corresponding to one or more selected derivative video segments may be transmitted to the computing device. The computing device may generate video composition information defining a video composition based on the video information defining one or more video segments corresponding to one or more selected derivative video segments.
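The step of identifying video segments from highlight moments can be sketched as below. The symmetric padding window is an illustrative assumption; the abstract does not specify how segment boundaries are derived from highlight moments.

```python
from typing import List, Optional, Tuple


def segments_from_highlights(highlights: List[float],
                             pad: float = 2.0,
                             duration: Optional[float] = None) -> List[Tuple[float, float]]:
    """Identify one video segment per highlight moment by padding each
    moment with a window before and after it (hypothetical boundary logic).

    Returns (start, end) pairs in seconds, clamped to the video's bounds.
    """
    segments = []
    for t in highlights:
        start = max(0.0, t - pad)
        end = t + pad if duration is None else min(duration, t + pad)
        segments.append((start, end))
    return segments
```

Derivative video segments (e.g., lower-resolution previews) would then be generated from these spans and transmitted to the computing device for selection.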
Abstract:
Introduced herein are techniques for improving media content production and consumption by utilizing metadata associated with the relevant media content. More specifically, systems and techniques are introduced herein for automatically producing media content (e.g., a video composition) using several inputs uploaded by a filming device (e.g., an unmanned aerial vehicle (UAV) copter or action camera), an operator device, and/or some other computing device. Some or all of these devices may include non-visual sensors that generate sensor data. Interesting segments of raw video recorded by the filming device can be formed into a video composition based on events detected within the non-visual sensor data that are indicative of interesting real world events. For example, substantial variations or significant absolute values in elevation, pressure, acceleration, etc., may be used to identify segments of raw video that are likely to be of interest to a viewer.
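The sensor-driven selection described above can be sketched as a simple peak-to-peak threshold over windows of non-visual sensor samples. The windowing scheme and threshold test are illustrative assumptions, not the disclosed detection method.

```python
from typing import List, Tuple


def interesting_windows(samples: List[float],
                        window: int,
                        threshold: float) -> List[Tuple[int, int]]:
    """Flag windows of sensor samples (e.g., acceleration or pressure
    magnitudes) whose peak-to-peak variation meets a threshold, marking
    likely-interesting spans of the raw video.

    Returns (start_index, end_index) pairs over the sample stream.
    """
    flagged = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        if max(chunk) - min(chunk) >= threshold:
            flagged.append((i, i + window))
    return flagged
```

Flagged sample spans would be mapped to the corresponding timestamps in the raw video, and those segments assembled into the composition.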
Abstract:
In one aspect, a method of managing online media is disclosed. The method comprises receiving a media file from a media source, the media file being representative of an event, and receiving a user input indicating an occurrence to be identified in the media file. The method also comprises obtaining data related to the media file from one or more sources, wherein the data comprises information describing the media file or a portion thereof and is based on the user input, and generating a media timeline associated with the media file. The method further comprises identifying the occurrence in the media file and a timestamp for the identified occurrence based on the data, the timestamp identifying a time corresponding to a data timeline of the data, and generating an output comprising the occurrence and the timestamp in relation to the media timeline.
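The occurrence-identification step can be sketched as matching the user's query against timestamped entries on the data timeline. The data shape (timestamp, text) and the substring match are illustrative assumptions; the abstract does not specify the matching method.

```python
from typing import List, Tuple


def find_occurrences(data_entries: List[Tuple[float, str]],
                     query: str) -> List[Tuple[float, str]]:
    """Scan timestamped data related to the media file (e.g., captions or
    event annotations) for entries matching the user-indicated occurrence.

    Returns (timestamp, description) pairs on the data timeline, which can
    then be placed on the media timeline in the generated output.
    """
    return [(ts, text) for ts, text in data_entries
            if query.lower() in text.lower()]
```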
Abstract:
The present invention provides a virtual-reality-based space allocation method in which, given N spaces, the number of missions that a user occupying the N spaces must perform in each space is determined by an equation, and mission success (Mission Complete) or failure (Mission Fail) is determined by an equation.
Abstract:
According to one embodiment of the present invention, disclosed is a computer program, stored on a computer-readable medium, for providing a dart game, the program being executable by one or more processors and comprising instructions that cause the one or more processors to perform the following operations. The operations may include: detecting whether a predetermined event related to the dart game occurs; when the predetermined event is detected, determining to capture video related to the event during a predetermined time interval that includes the throwing time of a dart pin; and determining to display the captured video on a dart game device.
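The event-triggered capture interval can be sketched as a window around the detected throw time. The pre- and post-throw window sizes are illustrative assumptions; the abstract only states that the interval includes the throwing time.

```python
from typing import Tuple


def capture_interval(throw_time: float,
                     before: float = 1.0,
                     after: float = 2.0) -> Tuple[float, float]:
    """Given the detected throw time of a dart pin (seconds on the game
    clock), return the predetermined time interval over which video of
    the event should be captured for display on the dart game device."""
    return (max(0.0, throw_time - before), throw_time + after)
```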
Abstract:
A system and method of video processing are disclosed. In a particular implementation, a device includes a processor configured to generate index data for video content. The index data includes a summary frame and metadata. The summary frame is associated with a portion of the video content and illustrates multiple representations of an object included in the portion of the video content. The metadata includes marker data that indicates a playback position of the video content. The playback position is associated with the summary frame. The device also includes a memory configured to store the index data.
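The index data described above can be sketched with the following structures. The field names and the pixel-coordinate representation of the object's multiple appearances are assumptions; the abstract specifies only that the index data comprises a summary frame and metadata with marker data indicating a playback position.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Marker:
    """Marker data indicating a playback position of the video content."""
    playback_position_ms: int


@dataclass
class SummaryFrame:
    """A single frame illustrating multiple representations of one object
    across a portion of the video content."""
    object_id: str
    representation_positions: List[Tuple[int, int]]  # (x, y) per appearance


@dataclass
class IndexData:
    """Index data stored in memory: the summary frame plus its metadata."""
    summary_frame: SummaryFrame
    metadata: Marker
```

Selecting the summary frame during review would seek the player to `metadata.playback_position_ms`.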
Abstract:
A system and method are provided for generating summaries of video clips and then utilizing a source of data indicative of viewers' consumption of those video summaries. In particular, summaries of videos are published, and audience data is collected regarding the usage of those summaries, including which summaries are viewed, how they are viewed, the duration of viewing, and how often. This usage information may be utilized in a variety of ways. In one embodiment, the usage information is fed into a machine learning algorithm that identifies, updates, and optimizes groupings of related videos and scores of significant portions of those videos in order to improve the selection of the summary. In this way, the usage information is used to find a summary that better engages the audience. In another embodiment, usage information is used to predict the popularity of videos. In still another embodiment, usage information is used to assist in the display of advertising to users.
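The usage signals listed above can be combined into a score per summary, which a selection step would then maximize. The weighting below is an illustrative assumption, not the patented machine-learning method.

```python
def engagement_score(views: int,
                     completions: int,
                     avg_watch_ratio: float) -> float:
    """Combine usage signals for a published summary into one score.

    views: how many times the summary was viewed
    completions: how many views ran to the end
    avg_watch_ratio: mean fraction of the summary watched (0.0 to 1.0)

    Weights are illustrative; a real system would learn them from data.
    """
    if views == 0:
        return 0.0
    completion_rate = completions / views
    return 0.5 * completion_rate + 0.5 * avg_watch_ratio
```

The summary variant with the highest score for a video (or video grouping) would be the one selected for publication.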