Abstract:
A method of extracting scrolling text from a sequence of moving images includes obtaining video input data representative of a sequence of image frames (8-10) representing the sequence of moving images. At least one region (11, 12) containing text within the images of the sequence is detected. For each of at least one of the at least one detected region, a displacement of the region (11) from image to image is estimated, the images are registered in accordance with the estimated displacement, such that the region (11) is generally static from one registered image to the next, and at least parts of image frames representing multiple registered images are caused to be used as input in a method of segmenting text from background based on multiple images, so as to generate an image of the text filtered from these.
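A minimal sketch of the idea in Python, assuming OpenCV and NumPy: phase correlation stands in for the unspecified displacement estimator, and a temporal median over the registered crops stands in for the multi-image text/background segmentation. The region coordinates and all thresholds are illustrative, not the patented method.

    import cv2
    import numpy as np

    def extract_scrolling_text(frames, region):
        """frames: grayscale image frames; region: (x, y, w, h) of a detected text region."""
        x, y, w, h = region
        ref = frames[0][y:y+h, x:x+w].astype(np.float32)
        registered = [ref]
        for frame in frames[1:]:
            cur = frame[y:y+h, x:x+w].astype(np.float32)
            # Estimate the displacement of the text region between images.
            (dx, dy), _ = cv2.phaseCorrelate(ref, cur)
            # Shift the crop so the region is generally static across registered images.
            m = np.float32([[1, 0, -dx], [0, 1, -dy]])
            registered.append(cv2.warpAffine(cur, m, (w, h)))
        # Combine the registered images; the temporal median suppresses
        # background that moves relative to the (now static) text.
        combined = np.median(np.stack(registered), axis=0).astype(np.uint8)
        _, text_mask = cv2.threshold(combined, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return text_mask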
Abstract:
In a system (2) for processing video (1), the system comprises a processor for processing the video. The processor comprises a shot grouper (6) for grouping shots into groups of visually similar shots. The shot grouper is operative to compare corresponding feature points in the shots. In a method for processing video, the method comprises automatically grouping shots into groups of visually similar shots, using a comparison of corresponding feature points in the shots.
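A minimal sketch of the grouping step, assuming ORB keypoints and a brute-force matcher as stand-ins for the unspecified feature points; the match-count threshold and the greedy grouping strategy are illustrative choices.

    import cv2

    def group_shots(keyframes, min_matches=40):
        """keyframes: one representative grayscale frame per shot."""
        orb = cv2.ORB_create()
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        descriptors = [orb.detectAndCompute(f, None)[1] for f in keyframes]
        groups = []
        for i, des in enumerate(descriptors):
            placed = False
            for group in groups:
                ref = descriptors[group[0]]
                # Shots sharing many corresponding feature points are
                # treated as visually similar and grouped together.
                if des is not None and ref is not None \
                        and len(matcher.match(des, ref)) >= min_matches:
                    group.append(i)
                    placed = True
                    break
            if not placed:
                groups.append([i])
        return groups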
Abstract:
The invention relates to a method of identifying a boundary (211, 212) of a content item in a content stream (201), the method comprising the steps of: (110) receiving predetermined additional data related to the content item, the additional data comprising attribute data describing substantially the whole content item; (130) using a content-analysis processor (310) for analyzing the content stream so as to detect whether the content stream corresponds to the attribute data; and (140) identifying the boundary of the content item in the content stream when the correspondence changes from valid to invalid, or vice versa. The attribute data may indicate a genre of a movie, a music style of a song, etc., or a sequence of genres/music styles. The content-analysis processor (310) uses the attribute data to detect whether the content stream belongs to the content item by analyzing the content stream for correspondence to the attribute data.
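A minimal sketch of the boundary test, assuming a hypothetical classifier matches_attributes(segment, attributes) that reports whether a stream segment corresponds to the attribute data (e.g. the expected genre or music style); segmentation of the stream into consecutive chunks is taken as given.

    def find_boundaries(segments, attributes, matches_attributes):
        """segments: consecutive chunks of the content stream."""
        boundaries = []
        previous = None
        for index, segment in enumerate(segments):
            # Analyze the segment to detect whether it corresponds to the
            # attribute data describing the content item.
            current = matches_attributes(segment, attributes)
            # A boundary is identified where the correspondence changes
            # from valid to invalid, or vice versa.
            if previous is not None and current != previous:
                boundaries.append(index)
            previous = current
        return boundaries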
Abstract:
The invention relates to an apparatus (300) and a method for analyzing a content stream (201) comprising a content item, and to a computer program product enabling a programmable device to carry out the method. The apparatus comprises a content-analysis processor (310) for identifying an exact indicator of a boundary (221, 222) of the content item in the content stream, wherein identifying comprises determining a remote indicator (231) that is remote from the boundary and analyzing the content stream starting from the remote indicator towards the boundary to identify the exact indicator.
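A minimal sketch of the scan from the remote indicator towards the boundary, assuming a hypothetical per-position test inside_item(t) that reports whether the stream at position t still corresponds to the content item; the step size and scan direction are illustrative.

    def locate_exact_boundary(remote_indicator, step, inside_item):
        """Scan from the remote indicator towards the boundary; the sign
        of `step` sets the scan direction."""
        t = remote_indicator
        # Advance in small steps until the stream no longer corresponds to
        # the content item; that first outside position is the exact indicator.
        while inside_item(t + step):
            t += step
        return t + step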
Abstract:
A transition between a first video segment (C1, ..., CM) and a second video segment (S1, ..., SN) is smoothed by determining (103) a first profile of the content of the first video segment, determining (103) a second profile of the content of the second video segment, and inserting (105) the first video segment within the second video segment at a location (Sj, Sj+1) where the determined first profile is similar to the determined second profile, thereby smoothing the transition between the first video segment and the second video segment.
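A minimal sketch of the insertion-point choice, assuming each segment's profile is a numeric feature vector (e.g. average colour or loudness) and that profile similarity is measured by Euclidean distance; both are illustrative assumptions.

    import numpy as np

    def best_insert_position(first_profile, second_profiles):
        """Return the index j so that inserting between S_j and S_{j+1}
        lands where the second segment's profile is closest to the
        first segment's profile."""
        first = np.asarray(first_profile, dtype=float)
        distances = [np.linalg.norm(first - np.asarray(p, dtype=float))
                     for p in second_profiles]
        return int(np.argmin(distances))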
Abstract:
Text data associated with an object or subject appearing in an image is retrieved. The text and recognition data of the object or subject are generated and stored so that, during subsequent appearances of the targeted object or subject, it can be recognised, the stored text associated with it can be retrieved, and that text can be displayed with the subsequent appearances of the object or subject.
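A minimal sketch of the store-and-recognise cycle, assuming the recognition data is a feature vector produced by some hypothetical embed(image) function; the nearest-neighbour lookup and distance threshold are illustrative.

    import numpy as np

    class AnnotationStore:
        def __init__(self, threshold=0.5):
            self.entries = []            # (embedding, text) pairs
            self.threshold = threshold

        def add(self, embedding, text):
            # Store the recognition data together with its text annotation.
            self.entries.append((np.asarray(embedding, dtype=float), text))

        def lookup(self, embedding):
            # On a subsequent appearance, retrieve the stored text of the
            # closest matching object, if it matches closely enough.
            if not self.entries:
                return None
            query = np.asarray(embedding, dtype=float)
            distances = [np.linalg.norm(query - e) for e, _ in self.entries]
            best = int(np.argmin(distances))
            return self.entries[best][1] if distances[best] < self.threshold else None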
Abstract:
Silences are detected when the local signal power is below a given fixed or relative threshold value, the duration for which the local signal power stays below that threshold value lies within a first range, and at least one of the parameters signal-power fall/rise rate and local power deviation falls within a respective further range. The invention further relates to the use of such silence detection in a receiver (1).
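A minimal sketch of such a silence detector, assuming the local signal power is computed over fixed 10 ms windows and using the local power deviation as the additional parameter; the window length, threshold and ranges are illustrative values.

    import numpy as np

    def detect_silences(samples, rate, threshold=1e-4,
                        min_dur=0.2, max_dur=2.0, max_deviation=5e-5):
        samples = np.asarray(samples, dtype=float)
        window = int(0.01 * rate)                      # 10 ms analysis windows
        power = np.array([np.mean(samples[i:i + window] ** 2)
                          for i in range(0, len(samples) - window, window)])
        below = power < threshold
        silences, start = [], None
        for i, low in enumerate(np.append(below, False)):
            if low and start is None:
                start = i
            elif not low and start is not None:
                duration = (i - start) * window / rate
                deviation = float(np.std(power[start:i]))
                # Keep only stretches whose duration lies within the first
                # range and whose local power deviation stays small.
                if min_dur <= duration <= max_dur and deviation <= max_deviation:
                    silences.append((start * window / rate, i * window / rate))
                start = None
        return silences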
Abstract:
The invention relates to the automatic adjustment of a lighting atmosphere based on presence detection, particularly based on the detection of the presence of people in a monitored area. An embodiment of the invention provides a system (10) for automatically adjusting a lighting atmosphere based on presence detection, comprising at least one sensor (12) for gathering information on the presence of people in a supervised area (14), and a processing unit (16) adapted for determining the presence level of people in the supervised area (14) based on the gathered information and for adjusting the lighting atmosphere based on the determined presence level by controlling the dynamics level of the lighting atmosphere depending on the determined presence level. This allows, for example, the level of dynamics of the lighting atmosphere to be reduced if the number of people in the supervised area increases, i.e. the supervised area becomes crowded.
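A minimal sketch of the control rule, assuming the presence level is a head count derived from the sensor and the dynamics level is a value between 0.0 (static) and 1.0 (fully dynamic); the linear mapping is an illustrative choice.

    def dynamics_level(presence_count, max_expected=20):
        """Map the determined presence level to a dynamics level in [0.0, 1.0]."""
        # Reduce the dynamics of the lighting atmosphere as the supervised
        # area becomes more crowded.
        occupancy = min(presence_count, max_expected) / max_expected
        return 1.0 - occupancy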