Abstract:
A method implemented in a computer system for controlling the delivery of data and audio/video content. The method delivers primary content to a subscriber device for viewing by a subscriber. The method also delivers secondary content to a companion device for viewing by the subscriber in parallel with the subscriber viewing the primary content, where the secondary content relates to the primary content. The method extracts attention estimation features from the primary content and monitors the companion device to determine an interaction measurement for the subscriber viewing the secondary content on the companion device. The method calculates an attention measurement for the subscriber viewing the primary content based on the attention estimation features and the interaction measurement, and controls the delivery of the secondary content to the companion device based on the attention measurement.
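As a rough illustration of how such a method might weigh its two inputs, the sketch below combines a weighted feature score with the companion-device interaction level and gates delivery on the result. The feature names, weights, blend factor, threshold, and pause/deliver policy are assumptions made for the example, not details taken from the abstract.

# Illustrative sketch only: combine attention-estimation features extracted
# from the primary content with an interaction measurement from the companion
# device, then gate delivery of secondary content on the result. The weights,
# blend factor, and threshold below are assumed values.

def attention_measurement(features, interaction, weights, blend=0.5):
    """Blend a weighted feature score with the companion-device interaction level."""
    feature_score = sum(weights.get(name, 0.0) * value
                        for name, value in features.items())
    # Heavy interaction with the companion device suggests the subscriber is
    # paying less attention to the primary content, so it lowers the estimate.
    return blend * feature_score + (1.0 - blend) * (1.0 - interaction)

def control_secondary_delivery(attention, threshold=0.4):
    """Assumed policy: keep delivering secondary content while attention is high enough."""
    return "deliver" if attention >= threshold else "pause"

if __name__ == "__main__":
    features = {"scene_change_rate": 0.7, "dialogue_density": 0.6}
    weights = {"scene_change_rate": 0.5, "dialogue_density": 0.5}
    interaction = 0.2  # light interaction measured on the companion device
    attention = attention_measurement(features, interaction, weights)
    print(f"attention={attention:.2f} -> {control_secondary_delivery(attention)}")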
Abstract:
A video processing device includes a histogram generating component, an analyzing component, a comparator and an encoding component. The histogram generating component can generate a histogram for image data of an image frame. The analyzing component can analyze the histogram, can identify an isolated spike in the histogram and can output at least one strobe parameter. The comparator can compare the at least one strobe parameter with at least one predetermined threshold, can output a first instruction signal when the comparison is indicative of a strobe and can output a second instruction signal when the comparison is not indicative of a strobe. The encoding component can encode the image data in a first manner based on the first instruction signal and can encode the image data in a second manner based on the second instruction signal.
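The sketch below shows one way the histogram analysis and threshold comparison could fit together. Representing the strobe parameters as a spike ratio and an isolation score, and both threshold values, are assumptions for illustration rather than details of the device.

# Illustrative sketch (assumptions, not the device's actual logic): build a
# luminance histogram, look for an isolated spike, derive strobe parameters,
# and choose between two encoding paths based on threshold comparisons.

def build_histogram(pixels, bins=256):
    hist = [0] * bins
    for p in pixels:
        hist[min(max(int(p), 0), bins - 1)] += 1
    return hist

def strobe_parameters(hist):
    """Return (spike_ratio, isolation) for the tallest histogram bin."""
    total = sum(hist) or 1
    peak_bin = max(range(len(hist)), key=lambda i: hist[i])
    neighbours = hist[max(peak_bin - 1, 0)] + hist[min(peak_bin + 1, len(hist) - 1)]
    spike_ratio = hist[peak_bin] / total           # how dominant the spike is
    isolation = hist[peak_bin] / (neighbours + 1)  # how isolated it is from its neighbours
    return spike_ratio, isolation

def choose_encoding(hist, ratio_threshold=0.5, isolation_threshold=10.0):
    spike_ratio, isolation = strobe_parameters(hist)
    if spike_ratio > ratio_threshold and isolation > isolation_threshold:
        return "encode_strobe_mode"   # corresponds to the first instruction signal
    return "encode_normal_mode"       # corresponds to the second instruction signal

if __name__ == "__main__":
    frame = [250] * 900 + [30] * 100  # mostly saturated pixels: a strobe-like frame
    print(choose_encoding(build_histogram(frame)))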
Abstract:
Systems, methods, and devices for providing an interactive viewing experience by detecting on-screen data are disclosed. One or more frames of video data are analyzed to detect regions in the visual video content that contain text. A character recognition operation can be performed on the regions to generate textual data. Based on the textual data and the regions, a graphical user interface (GUI) definition can be generated. The GUI definition can be used to generate a corresponding GUI superimposed onto the visual video content to present users with controls and functionality with which to interact with the text or enhance the video content. Context metadata can be determined from external sources or by analyzing the continuity of audio and visual aspects of the video data. The context metadata can then be used to improve the character recognition or inform the generation of the GUI.
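A minimal sketch of the detect / recognize / build-GUI flow is given below. The detect_text_regions and recognize_text helpers are stubs standing in for whatever text detector and OCR engine an implementation would use, and the GUI-definition dictionary layout is likewise an assumption.

# Illustrative sketch of the pipeline described above. The region detector and
# OCR step are stubs for a real detection/recognition engine, and the
# GUI-definition format is an assumed, not prescribed, layout.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TextRegion:
    x: int
    y: int
    width: int
    height: int
    text: str = ""

def detect_text_regions(frame) -> List[TextRegion]:
    """Stub: a real system would run a text detector over the frame pixels."""
    return [TextRegion(x=40, y=600, width=320, height=40)]

def recognize_text(frame, region: TextRegion) -> str:
    """Stub: a real system would run OCR on the cropped region."""
    return "FINAL SCORE 3 - 1"

def build_gui_definition(frame, context: Dict[str, str]) -> Dict:
    """Turn detected on-screen text into interactive GUI controls."""
    controls = []
    for region in detect_text_regions(frame):
        region.text = recognize_text(frame, region)
        controls.append({
            "bounds": (region.x, region.y, region.width, region.height),
            "label": region.text,
            # Context metadata (e.g. programme genre) can steer the action type.
            "action": "show_stats" if context.get("genre") == "sports" else "search",
        })
    return {"overlay": True, "controls": controls}

if __name__ == "__main__":
    print(build_gui_definition(frame=None, context={"genre": "sports"}))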
Abstract:
A method of classifying the shot type of a video frame, comprising loading a frame, dividing the frame into field pixels and non-field pixels based on a first playfield detection criterion, determining an initial shot type classification using the number of the field pixels and the number of the non-field pixels, partitioning the frame into one or more regions based on the initial classification, determining the status of each of the one or more regions based upon the number of the field pixels and the non-field pixels located within each region, and determining a shot type classification for the frame based upon the status of each region.
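A compact sketch of that two-pass classification is shown below. The green-hue playfield criterion, the ratio thresholds, the 3x3 region grid, and the refinement rule are all assumptions chosen for the example, not the criteria claimed in the abstract.

# Illustrative sketch only: classify a frame coarsely from the global share of
# "playfield" pixels, then refine the label from per-region field status.
# The criterion, thresholds, grid, and refinement rule are assumed.

def is_field_pixel(pixel, green_hue=(35, 90)):
    """Assumed playfield criterion: hue of an (H, S, V) pixel falls in a green band."""
    hue, sat, _ = pixel
    return green_hue[0] <= hue <= green_hue[1] and sat > 60

def initial_shot_type(field_count, total, long_ratio=0.6, medium_ratio=0.3):
    """Coarse classification from the global share of field pixels."""
    ratio = field_count / total
    if ratio >= long_ratio:
        return "long"
    if ratio >= medium_ratio:
        return "medium"
    return "close-up"

def classify_shot(frame, rows=3, cols=3):
    """frame: 2D list of (H, S, V) pixels."""
    height, width = len(frame), len(frame[0])
    field_count = sum(is_field_pixel(p) for row in frame for p in row)
    initial = initial_shot_type(field_count, height * width)

    # Partition the frame into a grid and mark each region as field-dominated or not.
    region_is_field = []
    for r in range(rows):
        for c in range(cols):
            pixels = [frame[y][x]
                      for y in range(r * height // rows, (r + 1) * height // rows)
                      for x in range(c * width // cols, (c + 1) * width // cols)]
            field = sum(is_field_pixel(p) for p in pixels)
            region_is_field.append(field > len(pixels) // 2)

    # Assumed refinement rule: demote a "long" shot whose bottom row of regions
    # is not dominated by field pixels.
    if initial == "long" and not all(region_is_field[-cols:]):
        return "medium"
    return initial

if __name__ == "__main__":
    green = (60, 200, 180)  # field-like (H, S, V) pixel
    grey = (0, 10, 120)     # crowd-like pixel
    frame = [[green] * 12 for _ in range(8)] + [[grey] * 12 for _ in range(4)]
    print(classify_shot(frame))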
Abstract:
A video processing device is provided that includes a buffer, a luminance component, a maximum threshold component, a minimum threshold component and a flagging component. The buffer can store frame image data for a plurality of video frames. The luminance component can generate a first luminance value corresponding to first frame image data and a second luminance value corresponding to second frame image data. The maximum threshold component can generate a maximum indicator signal when the difference between the second luminance value and the first luminance value is greater than a maximum threshold. The minimum threshold component can generate a minimum indicator signal when the difference between the second luminance value and the first luminance value is less than a minimum threshold. The flagging component can generate a flagged signal based on the maximum indicator signal and the minimum indicator signal.
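The sketch below illustrates the buffered comparison of consecutive frames. Using the mean luma as the luminance value, the particular threshold magnitudes, and the rule that either indicator raises the flagged signal are assumptions made for the example.

# Illustrative sketch (assumed thresholds and flagging rule): compare the mean
# luminance of consecutive buffered frames and flag a frame when the jump
# exceeds a maximum threshold or the drop falls below a minimum threshold.
from collections import deque

def mean_luminance(frame):
    """frame: flat list of 8-bit luma samples."""
    return sum(frame) / len(frame)

def flag_frames(frames, max_threshold=60.0, min_threshold=-60.0):
    """Yield (index, flagged) pairs for each frame after the first."""
    buffer = deque(maxlen=2)  # holds image data for the two frames being compared
    for index, frame in enumerate(frames):
        buffer.append(frame)
        if len(buffer) < 2:
            continue
        delta = mean_luminance(buffer[1]) - mean_luminance(buffer[0])
        maximum_indicator = delta > max_threshold  # sudden brightening
        minimum_indicator = delta < min_threshold  # sudden darkening
        yield index, (maximum_indicator or minimum_indicator)

if __name__ == "__main__":
    dark, bright = [20] * 100, [230] * 100
    for i, flagged in flag_frames([dark, bright, bright, dark]):
        print(f"frame {i}: flagged={flagged}")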