Abstract:
A device receives a base video that includes an area in frames of the base video reserved for being overlaid with metadata. The device determines a first set of videos to be played next for a particular user. A selection of the first set of videos is based on a second set of videos associated with the particular user. The device receives metadata for the first set of videos and populates an executable presentation template for the base video with a set of extracted metadata. The device plays the base video and synchronizes execution of the populated presentation template with the playing of the base video to overlay the reserved area of the frames with the set of metadata to create a custom interstitial transition video that informs the particular user about the videos to be played next.
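The following is a minimal Python sketch of the template-population and overlay-synchronization steps described above. The field names, the template's "slots" structure, and the frame-iteration interface are assumptions introduced for illustration, not details from the abstract.

```python
# Minimal sketch: populate a presentation template with "up next" metadata
# and pair each base-video frame with the overlay for its reserved area.
from dataclasses import dataclass


@dataclass
class VideoMetadata:
    title: str
    channel: str
    thumbnail_url: str


def populate_template(template: dict, up_next: list[VideoMetadata]) -> dict:
    """Fill the reserved overlay slots of a presentation template with
    metadata extracted for the videos to be played next."""
    populated = dict(template)
    populated["slots"] = [
        {"title": m.title, "channel": m.channel, "thumbnail": m.thumbnail_url}
        for m in up_next
    ]
    return populated


def synchronize_overlay(populated: dict, base_video_frames, reserved_region):
    """Yield (frame, overlay) pairs so the reserved area of each frame is
    overlaid in step with playback of the base video."""
    for frame in base_video_frames:
        yield frame, {"region": reserved_region, "content": populated["slots"]}
```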
Abstract:
Embodiments of the disclosure provide a method and apparatus for recommending video data. In one embodiment, a method is disclosed comprising: retrieving, by a server device, text data and video data; generating, by the server device, a relationship graph, the relationship graph representing a semantic mapping of the text data; generating, by the server device, candidate video segment data based on the video data, the candidate video segment data comprising semantic tag data; acquiring, by the server device, target video data according to the relationship graph and the candidate video segment data; and transmitting, by the server device, the target video data to a client device. The embodiments of the disclosure can screen and select personalized target video data from large-scale video data according to a relationship graph representing a semantic mapping, without human assistance at any point in the process, greatly improving users' video content browsing experience and increasing the conversion rate.
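Below is a minimal Python sketch of the server-side flow claimed above. The adjacency-map graph, the tag-overlap scoring, and the assumption that video data arrives pre-segmented with a "tags" field are illustrative stand-ins, not the disclosed implementation.

```python
# Minimal sketch: build a relationship graph from text data, keep candidate
# segments carrying semantic tags, and score segments against the graph.
from collections import defaultdict


def build_relationship_graph(text_data: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Represent the semantic mapping of the text data as an adjacency map."""
    graph: dict[str, set[str]] = defaultdict(set)
    for term_a, term_b in text_data:
        graph[term_a].add(term_b)
        graph[term_b].add(term_a)
    return graph


def generate_candidates(video_data: list[dict]) -> list[dict]:
    """Keep candidate video segments that carry semantic tag data."""
    return [segment for segment in video_data if segment.get("tags")]


def acquire_target_video(graph: dict[str, set[str]], candidates: list[dict],
                         user_terms: set[str]) -> list[dict]:
    """Rank candidate segments by overlap between their tags and the
    neighborhood of the user's terms in the relationship graph."""
    related = set(user_terms)
    for term in user_terms:
        related |= graph.get(term, set())
    scored = [(len(related & set(c["tags"])), c) for c in candidates]
    return [c for score, c in sorted(scored, key=lambda x: -x[0]) if score > 0]
```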
Abstract:
Providing enhanced video content includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of a plurality of events and to determine at least one event type for each of the plurality of events. The event type includes an entry in a relationship library detailing a relationship between two visible features. Extracting and indexing a plurality of video cuts from the video feed is performed based on the at least one event type, determined by the understanding, that corresponds to an event in the plurality of events detectable in the video cuts. Lastly, automatically and under computer control, an enhanced video content data structure is generated from the extracted plurality of video cuts based on the indexing of the extracted plurality of video cuts.
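The sketch below illustrates the extract-and-index step in Python. The spatiotemporal model is abstracted behind a `classify` callable, and the `VideoCut` and `EnhancedContent` structures are assumed names for the indexed-cuts data structure the abstract describes.

```python
# Minimal sketch: keep cuts the (assumed) classifier labels with an event
# type and index them by type for assembling enhanced content.
from dataclasses import dataclass, field


@dataclass
class VideoCut:
    start_s: float
    end_s: float
    event_type: str  # entry in a relationship library, e.g. "player-passes-ball"


@dataclass
class EnhancedContent:
    cuts: list[VideoCut] = field(default_factory=list)
    index: dict[str, list[int]] = field(default_factory=dict)


def extract_and_index(feed_events, classify) -> EnhancedContent:
    """Run the classifier over detected events, keep the cuts it labels,
    and index them by event type."""
    content = EnhancedContent()
    for start_s, end_s, features in feed_events:
        event_type = classify(features)  # spatiotemporal model output
        if event_type is None:
            continue
        content.index.setdefault(event_type, []).append(len(content.cuts))
        content.cuts.append(VideoCut(start_s, end_s, event_type))
    return content
```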
Abstract:
The present invention relates to a system and method for recording and displaying a reaction to a focus in a professional setting, an education setting, and a personal setting. A reaction stream includes instances of reaction data, wherein each of the instances of reaction data includes a user sentiment and a focus point, and wherein the focus point identifies a portion of a session to which the user sentiment is directed. The system may include a plurality of user devices, a supervisor device, and/or a server.
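A minimal Python sketch of the reaction-stream data structure described above follows. The field names, the string-valued sentiment, and the seconds-based focus point are assumptions made for illustration.

```python
# Minimal sketch: a reaction stream holding instances of reaction data,
# each pairing a user sentiment with the focus point it is directed at.
from dataclasses import dataclass, field


@dataclass
class ReactionData:
    user_id: str
    sentiment: str      # e.g. "agree", "confused", "applause"
    focus_point: float  # offset (seconds) into the session the sentiment targets


@dataclass
class ReactionStream:
    session_id: str
    reactions: list[ReactionData] = field(default_factory=list)

    def record(self, user_id: str, sentiment: str, focus_point: float) -> None:
        self.reactions.append(ReactionData(user_id, sentiment, focus_point))

    def at(self, start_s: float, end_s: float) -> list[ReactionData]:
        """Return reactions whose focus point falls within a session interval."""
        return [r for r in self.reactions if start_s <= r.focus_point < end_s]
```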
Abstract:
Various networks on which media clips are shared may benefit from appropriate handling of shared media. For example, systems involving video sharing may benefit from methods and devices that support crowdsourced video. A method can include receiving a multimedia element. The method can also include storing the multimedia element as reference content. The method can further include receiving a recording related to the reference content. The method can additionally include storing the recording and a relationship between the recording and the reference content. The relationship can include a video-editing relationship to the reference content. The method can also include providing the video-editing relationship upon receiving selection information indicative of at least one of the recording or the reference content.
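The following Python sketch shows one way to model the storage and lookup steps above. The in-memory dictionaries and the "edit_type" field are assumed placeholders for whatever store and relationship schema the method actually uses.

```python
# Minimal sketch: store reference content, store recordings with their
# video-editing relationship, and return relationships for a selection.
import uuid


class MediaStore:
    def __init__(self):
        self.reference_content: dict[str, bytes] = {}
        self.recordings: dict[str, bytes] = {}
        self.relations: dict[str, dict] = {}  # recording_id -> relation record

    def store_reference(self, data: bytes) -> str:
        ref_id = str(uuid.uuid4())
        self.reference_content[ref_id] = data
        return ref_id

    def store_recording(self, data: bytes, ref_id: str, edit_type: str) -> str:
        """Store a recording plus its video-editing relationship to the reference."""
        rec_id = str(uuid.uuid4())
        self.recordings[rec_id] = data
        self.relations[rec_id] = {"reference": ref_id, "edit_type": edit_type}
        return rec_id

    def relationships_for(self, selection_id: str) -> list[dict]:
        """Return editing relationships matching a selected recording or reference."""
        return [rel for rec_id, rel in self.relations.items()
                if rec_id == selection_id or rel["reference"] == selection_id]
```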
Abstract:
The disclosed virtual theater system includes capture devices for capturing video and audio of a live event and converting the video and audio into a data stream. The system also includes a production center for receiving the data stream and compressing the data stream to generate a compressed data stream, for determining the number of one or more viewing devices subscribed or reserved to view the live event, and for determining data bandwidth consumption characteristics of the one or more viewing devices. Also included is a server for receiving the compressed data stream, the number of one or more viewing devices, and the data bandwidth consumption characteristics from the production center over a network. The server also duplicates and divides the compressed data stream out to the one or more viewing devices depending on the number of viewing devices subscribed or reserved (cyberseats reserved) to view the live event.
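Below is a minimal Python sketch of the server-side fan-out described above. Compression and transport are abstracted away, and the per-viewer bitrate cap (including the source ceiling) is an assumed policy rather than the disclosed mechanism.

```python
# Minimal sketch: duplicate the compressed stream to each reserved cyberseat,
# tagging each copy with a delivery bitrate capped by the viewer's bandwidth.
from dataclasses import dataclass


@dataclass
class Viewer:
    viewer_id: str
    max_bitrate_kbps: int  # bandwidth consumption characteristic


def fan_out(compressed_stream: bytes,
            viewers: list[Viewer]) -> dict[str, tuple[bytes, int]]:
    """Return one (stream, target_bitrate_kbps) delivery per viewing device."""
    source_ceiling_kbps = 8000  # assumed ceiling of the compressed source
    deliveries: dict[str, tuple[bytes, int]] = {}
    for viewer in viewers:
        target_kbps = min(viewer.max_bitrate_kbps, source_ceiling_kbps)
        deliveries[viewer.viewer_id] = (compressed_stream, target_kbps)
    return deliveries
```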