Abstract:
A video encoding method is provided for the case where three scenes are separated by two closely spaced scene changes. When scene changes are spaced further apart than a threshold, each scene change is encoded with an I-frame in the normal fashion. When they are spaced closer than the threshold, the method decides how to encode the scene changes based on the complexities of the first, second and third scenes. To compare complexities, the process denotes the complexities of the first, second and third scenes by X1, X2 and X3, respectively. If the absolute difference between X1 and X2 exceeds a first threshold and the absolute difference between X2 and X3 exceeds a second threshold, the first scene change is more significant than the second scene change; in that case the process encodes the first scene change as an I-frame and picks a quantization parameter (QP) based on a complexity blended from the complexity of scene 2 (X2) and scene 3 (X3).
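The complexity comparison described above can be sketched as follows. This is a minimal illustration: the threshold values, the linear complexity-to-QP mapping, and the simple-average blend are all assumptions, not details from the abstract.

```python
def qp_from_complexity(x, qp_min=18, qp_max=42, x_max=100.0):
    # Hypothetical linear mapping: higher complexity -> coarser quantization.
    x = max(0.0, min(x, x_max))
    return round(qp_min + (qp_max - qp_min) * x / x_max)

def plan_close_scene_changes(x1, x2, x3, t1=10.0, t2=10.0):
    """Decide which of two closely spaced scene changes gets the I-frame.

    x1, x2, x3: complexities of the first, second and third scenes.
    t1, t2: assumed significance thresholds (values not given in the abstract).
    """
    if abs(x1 - x2) > t1 and abs(x2 - x3) > t2:
        # First change is more significant: encode it as an I-frame and
        # pick the QP from a complexity blended from scenes 2 and 3
        # (a simple average here; the abstract does not define the blend).
        blended = (x2 + x3) / 2.0
        return "I_at_first_change", qp_from_complexity(blended)
    # The abstract does not describe the remaining branches; as a
    # placeholder, put the I-frame at the second scene change instead.
    return "I_at_second_change", qp_from_complexity(x3)
```

Returning a label plus a QP keeps the sketch testable without modeling a full encoder.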
Abstract:
A method for enhancing at least a section of lower-quality visual data using a hierarchical algorithm, the method comprising receiving a plurality of neighbouring sections of lower-quality visual data. A plurality of input sections is selected from the received neighbouring sections of lower-quality visual data, and features are extracted from that plurality of input sections. A target section is then enhanced based on the features extracted from the plurality of input sections of lower-quality visual data.
Abstract:
Systems and methods for reusing encoding information in the encoding of alternative streams of video data in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, encoding multimedia content for use in adaptive streaming systems includes: selecting a first encoding level from a plurality of encoding levels using a media server; determining encoding information for a first stream of video data using the first encoding level and the media server; encoding the first stream of video data using the media server, where the first stream of video data has a first resolution and a first bitrate; selecting a second encoding level from the plurality of encoding levels using the media server; and encoding a second stream of video data using the encoding information and the media server, where the second stream of video data has a second resolution and a second bitrate.
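The reuse of encoding information across alternative streams can be sketched as below. The `analyze` and `encode_with_info` functions are hypothetical toy stand-ins for the media server's encoder; the point is only that the encoding information is derived once for the first level and reused for every later level.

```python
from dataclasses import dataclass

@dataclass
class EncodingLevel:
    resolution: tuple  # (width, height)
    bitrate: int       # bits per second

def analyze(frames, level):
    # Toy stand-in for deriving encoding information (e.g. motion/mode
    # decisions) for the first encoding level.
    return {"motion_vectors": [i % 3 for i in range(len(frames))]}

def encode_with_info(frames, level, info):
    # Toy stand-in: the "encoded stream" just records the level used and
    # the encoding information it reused.
    return {"resolution": level.resolution,
            "bitrate": level.bitrate,
            "reused_info": info}

def encode_adaptive_streams(frames, levels):
    """Encode the first level normally, then reuse its encoding
    information for every alternative stream."""
    info = analyze(frames, levels[0])
    return [encode_with_info(frames, level, info) for level in levels]
```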
Abstract:
Computer processor hardware receives settings information for a first image. The first image includes a set of multiple display elements. The computer processor hardware receives motion compensation information for a given display element in a second image to be created based at least in part on the first image. The motion compensation information indicates a coordinate location within a particular display element in the first image to which the given display element pertains. The computer processor hardware utilizes the coordinate location as a basis from which to select a grouping of multiple display elements in the first image. The computer processor hardware then generates a setting for the given display element in the second image based on settings of the multiple display elements in the grouping.
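The grouping-and-blending step reads like fractional-position resampling; a minimal sketch follows. The 2x2 grouping and the bilinear weights are assumptions for illustration: the abstract only says a grouping of multiple display elements is selected from the coordinate location and its settings are blended.

```python
import math

def sample_with_motion(image, y, x):
    """Produce a setting for a display element in the second image from a
    (possibly fractional) coordinate (y, x) in the first image.

    The coordinate selects a 2x2 grouping of display elements whose
    settings are blended with bilinear weights (an assumed blend).
    """
    h, w = len(image), len(image[0])
    y0, x0 = int(math.floor(y)), int(math.floor(x))
    dy, dx = y - y0, x - x0
    # Clamp the neighbourhood to the image bounds.
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    return ((1 - dy) * (1 - dx) * image[y0][x0]
            + (1 - dy) * dx * image[y0][x1]
            + dy * (1 - dx) * image[y1][x0]
            + dy * dx * image[y1][x1])
```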
Abstract:
Deriving illumination compensation parameters and detection of illumination dominant transitions types for video coding and processing applications is described. Illumination changes such as fade-ins, fade-outs, cross-fades, and flashes are detected. Detection of these illumination changes is then used for weighted prediction to provide for improved illumination compensation.
Abstract:
Scene change detection in encoding digital pictures is disclosed. A statistical quantity µM is calculated for a given section in a current picture. A window of one or more sections is defined around a co-located section in a previous picture. A statistical sum E is calculated over the sections in the window. A difference between the statistical sum E and the statistical quantity µM is calculated. The difference between E and µM is used to determine whether the given section is a scene-change section. Whether the current picture is a scene-change picture may be determined from the number of scene-change sections. Information indicating whether or not the current picture is a scene-change picture may be stored or transferred.
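The section-level test can be sketched as follows, assuming a square window, a mean-type statistic per section, and illustrative thresholds (the abstract specifies none of these). Normalizing the window sum E to an average, so the section threshold does not depend on window size, is likewise an implementation choice.

```python
def is_scene_change_picture(cur, prev, window=1,
                            section_thresh=50.0, count_frac=0.5):
    """cur, prev: 2-D lists of per-section statistics (e.g. mean values).

    A section is flagged as a scene-change section when its statistic
    differs from the windowed statistic of the co-located region in the
    previous picture by more than section_thresh; the picture is a
    scene-change picture when more than count_frac of its sections are
    flagged (both thresholds are assumed values).
    """
    h, w = len(cur), len(cur[0])
    changed = 0
    for i in range(h):
        for j in range(w):
            mu_m = cur[i][j]  # statistical quantity for the section
            # Statistical sum E over the window around the co-located
            # section in the previous picture.
            e, n = 0.0, 0
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        e += prev[ii][jj]
                        n += 1
            if abs(e / n - mu_m) > section_thresh:
                changed += 1
    return changed > count_frac * h * w
```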
Abstract:
Detection of a change between images is performed more effectively by using, for the detection, a change measure that depends on the lengths of the code blocks into which the images are individually entropy-coded, and which are assigned to different sections of the respective image, since the lengths of these code blocks are available even without decoding. This exploits the fact that the length, i.e. the amount of data, of a code block is for the most part directly dependent on the entropy, and thus on the complexity, of the associated image section, and that changes between images are, with high probability, also reflected in a change of complexity.
Abstract:
A method and apparatus for detecting (51) in a video stream a scene cut (11, 12) between a current field of the video stream and an immediately preceding field includes determining (61), for a first plurality of image parameters, differences between the values of the image parameters for a current field and for one or more immediately preceding fields. A flag value is set (62) for each parameter, dependent on the respective difference, indicating whether a possible scene break exists between the current field and the immediately preceding field. The flag values for each parameter are combined (63) to form a combined parameter, and a scene break trigger signal is generated (64) indicating a scene break between the current field and the immediately preceding field if the combined parameter exceeds a predetermined trigger threshold. A change of criticality is determined (52) at a forthcoming scene cut. A quantisation parameter is adjusted (53) dependent on the criticality change to avoid overflowing a buffer when encoding a field following the scene cut as an intra-coded field. A field following the scene cut is encoded (54) as an intra-coded field having a quantisation parameter dependent on the criticality change, such that encoding of forward- or backward-coded fields prior to or following the scene change is based only on fields preceding or following the scene change, respectively.
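The flag-setting and combination steps (62)-(63) can be sketched as below; the per-parameter thresholds, the combination by counting set flags, and the trigger threshold are all assumptions for illustration, as the abstract does not say how the flags are combined.

```python
def scene_break_trigger(diffs, flag_thresholds, trigger_threshold):
    """diffs: per-parameter differences between the current field and
    the preceding field(s); flag_thresholds: assumed per-parameter
    thresholds for setting each flag.

    Returns True when the combined parameter (here, a simple count of
    set flags) exceeds the trigger threshold.
    """
    flags = [1 if abs(d) > t else 0
             for d, t in zip(diffs, flag_thresholds)]
    combined = sum(flags)  # assumed combination: count of set flags
    return combined > trigger_threshold
```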
Abstract:
The frame (n) following a scene cut (SCC) is usually coded as an I picture. In CBR encoding, the encoder will try to keep the bit rate (R) constant, which often causes serious picture quality degradation at scene changes. In VBR encoding, more bits will be allocated to the first frame (n) of the new scene and the bit rate will increase significantly for a short time. Subsequent frames must therefore be coded in 'skipped' mode, which often causes jerky artifacts. According to the invention, in each frame belonging to a scene change period, areas having different human attention levels are determined. In the frames (n-1, n-2, n-3) located prior to the first new-scene frame, fewer bits are assigned to the areas having a lower attention level than in the default encoding, and in the frames (n, n+1, n+2) located at and after the scene cut the bits thus saved are additionally assigned to the areas having a higher attention level.
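The bit reallocation across the scene change period can be sketched as follows, assuming a fixed 25% saving fraction on low-attention areas and an even split of the saved bits among high-attention areas at and after the cut; the abstract specifies neither choice.

```python
def reallocate_bits(frames, saving=0.25):
    """Shift bits across a scene change period by attention level.

    frames: list of (offset, areas), where offset < 0 means the frame is
    before the scene cut (offsets assumed unique) and areas is a list of
    (default_bits, attention) with attention 'low' or 'high'.
    The 25% saving fraction and the even split are assumed values.
    """
    # Bits saved on low-attention areas in frames before the cut.
    saved = sum(b * saving
                for off, areas in frames if off < 0
                for b, att in areas if att == "low")
    # High-attention areas at/after the cut share the saved bits evenly.
    n_high = sum(1
                 for off, areas in frames if off >= 0
                 for _, att in areas if att == "high")
    bonus = saved / n_high if n_high else 0.0
    result = {}
    for off, areas in frames:
        alloc = []
        for b, att in areas:
            if off < 0 and att == "low":
                alloc.append(b * (1 - saving))
            elif off >= 0 and att == "high":
                alloc.append(b + bonus)
            else:
                alloc.append(b)
        result[off] = alloc
    return result
```

Note that the total bit budget is conserved: every bit removed before the cut reappears after it.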