Abstract:
The techniques described herein relate to methods, apparatus, and computer-readable media configured to process asset change point locations. A processor or encoder operates according to a set of constraints that constrain the encoding process for asset changes. The set of constraints configures a set of allowable asset change point locations from a set of possible asset change point locations, the set of allowable asset change point locations being a subset of the set of possible asset change point locations, and configures a set of allowable data access types from a set of possible data access types, the set of allowable data access types being a subset of the set of possible data access types. Video data is encoded based on the set of constraints to generate encoded first video data, such that the encoded first video data comprises a set of asset change point locations and associated data access types in compliance with the set of constraints.
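The constraint relationship described above (allowable locations and access types each a subset of the possible sets, with encoded output checked for compliance) can be sketched as follows. All names, the example frame indices, and the access-type labels are illustrative assumptions, not the patented encoder.

```python
# Hypothetical sketch: allowable asset change point locations and data
# access types are configured as subsets of the possible sets, and the
# encoded output is validated against them.

POSSIBLE_LOCATIONS = {0, 30, 60, 90, 120}      # e.g. candidate frame indices
POSSIBLE_ACCESS_TYPES = {"IDR", "CRA", "BLA"}  # example random-access types

def configure_constraints(allowed_locations, allowed_types):
    """Return a constraint set; both arguments must be subsets of the possible sets."""
    assert allowed_locations <= POSSIBLE_LOCATIONS
    assert allowed_types <= POSSIBLE_ACCESS_TYPES
    return {"locations": allowed_locations, "types": allowed_types}

def complies(encoded_change_points, constraints):
    """Check each (location, access_type) pair against the constraints."""
    return all(
        loc in constraints["locations"] and typ in constraints["types"]
        for loc, typ in encoded_change_points
    )

constraints = configure_constraints({0, 60, 120}, {"IDR"})
print(complies([(0, "IDR"), (60, "IDR")], constraints))  # → True
print(complies([(30, "CRA")], constraints))              # → False
```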
Abstract:
Particular embodiments can refine a seed sentinel frame signature for a seed sentinel frame. A seed sentinel frame contains predictable or partially predictable content that demarcates the beginning and/or end of certain content in a video program. The seed sentinel frame signature may first be used to detect other sentinel frames in the video program. However, other sentinel frames throughout the video program, or in other video programs, may differ slightly from the seed sentinel frame for various reasons, so the seed sentinel frame signature may not detect the sentinel frames of a video program with the desired accuracy. Accordingly, particular embodiments may refine the seed sentinel frame signature into a synthetic sentinel frame signature. The synthetic sentinel frame signature may then be used to analyze the current video program or other video programs, and may more accurately detect the sentinel frames within the video program.
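One way to picture the refinement step is below: signatures are treated as fixed-length feature vectors, and the synthetic signature is formed by averaging the seed with its near matches. The vector representation, distance threshold, and averaging rule are assumptions for illustration only, not the claimed method.

```python
# Minimal sketch: refine a seed sentinel frame signature into a synthetic
# one by averaging it with candidate signatures that loosely match it.

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def refine_signature(seed, candidate_signatures, threshold=1.0):
    """Average the seed with candidates within `threshold` of it."""
    matches = [s for s in candidate_signatures if distance(seed, s) <= threshold]
    pool = [seed] + matches
    return [sum(vals) / len(pool) for vals in zip(*pool)]

seed = [1.0, 0.0, 1.0]
candidates = [[0.9, 0.1, 1.0],   # near match: a slightly different sentinel
              [0.0, 5.0, 0.0]]   # far away: not a sentinel frame
synthetic = refine_signature(seed, candidates)
print(synthetic)  # blends the seed with the near match only
```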
Abstract:
The invention relates to a method (PRC1) for generating a video stream from a slide, said method (PRC1) comprising the following steps: - providing (100) a slide comprising: · a zone, referred to as an information zone, including at least one piece of information intended to be displayed, and · a zone, referred to as a comments zone, including at least one comment; - identifying (101), in the information zone, a piece of information to be integrated into the video stream; - associating (102) the information to be integrated with a comment from the comments zone; - entering (103) a conversion value; - associating (104) the conversion value with the comment in order to determine an animation duration for the information to be integrated; - generating (105) a video stream comprising the information to be integrated, animated according to the animation duration.
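Steps (103)-(104) derive an animation duration from a conversion value and a comment. A hedged sketch of one plausible interpretation is below; reading the conversion value as "seconds per word of the comment" is purely an assumption for illustration.

```python
# Hypothetical sketch of steps (103)-(104): an animation duration for the
# information to be integrated, derived from a conversion value associated
# with a comment. The seconds-per-word interpretation is an assumption.

def animation_duration(comment, conversion_value):
    """Duration in seconds: the conversion value applied to the comment length."""
    return len(comment.split()) * conversion_value

comment = "Revenue grew steadily over the last quarter"
print(animation_duration(comment, 0.5))  # → 3.5 (7 words * 0.5 s/word)
```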
Abstract:
Embodiments of a system and method for emotional tagging are generally described herein. A method may include receiving, at a device, biometric data and a timestamp, analyzing the biometric data to determine that an emotional reaction occurred, tagging a portion of content with an emotional content tag based on the emotional reaction, wherein the portion of content was playing during a time corresponding to the timestamp, and sending the portion of content and the emotional content tag to a server. A method may include aggregating content tagged as emotional content, generating an emotional content video segment, and providing the emotional content video segment.
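The tagging flow above can be sketched as follows: biometric data with a timestamp is analyzed, and the content portion playing at that time receives an emotional tag. The heart-rate heuristic, field names, and timeline layout are all illustrative assumptions.

```python
# Illustrative sketch of emotional tagging: analyze biometric data, then
# tag the portion of content that was playing at the given timestamp.

def analyze(biometric):
    """Toy heuristic: a heart-rate spike counts as an emotional reaction."""
    return "excited" if biometric["heart_rate"] > 100 else None

def tag_content(timeline, biometric, timestamp):
    """Tag the portion of content playing at `timestamp`, if a reaction occurred."""
    reaction = analyze(biometric)
    if reaction is None:
        return None
    for start, end, portion in timeline:
        if start <= timestamp < end:
            return {"portion": portion, "tag": reaction, "timestamp": timestamp}
    return None

timeline = [(0, 30, "intro"), (30, 90, "chase scene")]
print(tag_content(timeline, {"heart_rate": 120}, 45))
# → {'portion': 'chase scene', 'tag': 'excited', 'timestamp': 45}
```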
Abstract:
An enhanced form of edited interactive audio and video content is delivered through a subscriber-based network system and accessed by several unique multimedia interface devices. These unique multimedia interface devices can also create and edit user-generated audio and video content into this enhanced form of edited interactive audio and video content. The present invention incorporates both voice and manual activation as an embedded technology into all multimedia and user-generated audio/video content.
Abstract:
A user receiving device including at least one transceiver module, an output module, and a control module. The at least one transceiver module is configured to receive metadata and a program or video from a first backend device. The metadata indicates where in the program or video a spotted ad is included. The output module is configured to display the program or video on a display. The display is connected to the user receiving device. The at least one transceiver module is configured to receive a request signal from a mobile device. The request signal indicates that a viewer of the video has detected the spotted ad. The control module is configured to, based on the request signal, save information pertaining to the request signal and open a dialogue window or initiate a survey.
Abstract:
Encoded video data of a video bitstream (1) partitioned into multiple chunks (10, 12, 14, 16) of encoded video data is received by a video tune-in device (100). Each chunk (10, 12, 14, 16) starts with an I picture. A sub-chunk (24) of encoded video data corresponding to a tune-in point within the video bitstream (1) is downloaded. The sub-chunk (24) corresponds to a sub-portion of a chunk (14), starts with an I picture, and has a playback duration shorter than that of the chunk (14). The use of downloaded sub-chunks (24) enables a low-delay solution during video navigation when a user wants to jump to different positions (30) within the video bitstream (1).
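Locating which chunk and sub-chunk cover a tune-in point can be sketched as below, assuming each chunk spans a fixed playback duration and is subdivided into equal-length sub-chunks that each start with an I picture. The duration values are illustrative assumptions.

```python
# Sketch: map a tune-in time to the chunk and the shorter sub-chunk that
# cover it, so the tune-in device can download only the sub-chunk.

CHUNK_DURATION = 10.0      # seconds of playback per chunk (assumed)
SUB_CHUNK_DURATION = 2.0   # shorter sub-chunk, lower tune-in delay (assumed)

def tune_in_target(tune_in_time):
    """Return (chunk_index, sub_chunk_index) covering the tune-in point."""
    chunk_index = int(tune_in_time // CHUNK_DURATION)
    offset = tune_in_time - chunk_index * CHUNK_DURATION
    sub_chunk_index = int(offset // SUB_CHUNK_DURATION)
    return chunk_index, sub_chunk_index

print(tune_in_target(23.5))  # → (2, 1): third chunk, second sub-chunk
```

Downloading the 2-second sub-chunk instead of the full 10-second chunk is what gives the abstract's low-delay jump to position (30).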
Abstract:
Operating methods of a client and a server for a streaming service are disclosed. According to an embodiment, the client transmits to the server a request packet including a parameter indicating a data request. According to an embodiment, the server transmits to the client a response packet including data of an address range corresponding to the parameter.
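The request/response exchange above can be sketched as follows; the packet layout, field names, and byte-offset addressing are illustrative assumptions rather than the disclosed protocol.

```python
# Minimal sketch: the client builds a request packet whose parameter names
# a data range; the server answers with the data of that address range.

STORE = bytes(range(256))  # server-side data addressed by byte offset

def build_request(start, length):
    """Client: request packet with a parameter indicating the data request."""
    return {"parameter": {"start": start, "length": length}}

def build_response(request):
    """Server: response packet with data of the requested address range."""
    p = request["parameter"]
    return {"data": STORE[p["start"]:p["start"] + p["length"]]}

response = build_response(build_request(10, 4))
print(response["data"])  # bytes at offsets 10..13 of the store
```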