Abstract:
A particular implementation detects scene cut artifacts in a bitstream without reconstructing the video. A scene cut artifact is usually observed in the decoded video (1) when a scene cut picture in the original video is partially received or (2) when a picture refers to a lost scene cut picture in the original video. To detect scene cut artifacts, candidate scene cut pictures are first selected, and scene cut artifact detection is then performed on the candidate pictures. When a block is determined to have a scene cut artifact, the lowest quality level is assigned to the block.
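A minimal sketch of the two-stage procedure, assuming hypothetical picture and block objects whose fields (is_scene_cut, partially_received, lost, references, blocks, quality) are not part of the original implementation:

    # Two-stage scene cut artifact detection at bitstream level (illustrative).
    LOWEST_QUALITY = 0  # quality level assigned to blocks with a detected artifact

    def select_candidates(pictures):
        """Select pictures that may show a scene cut artifact: partially received
        scene cut pictures, or pictures that reference a lost scene cut picture."""
        candidates = []
        for pic in pictures:
            if pic.is_scene_cut and pic.partially_received:
                candidates.append(pic)
            elif any(ref.is_scene_cut and ref.lost for ref in pic.references):
                candidates.append(pic)
        return candidates

    def assign_block_quality(pictures, detect_artifact):
        """Run block-level artifact detection only on the candidate pictures."""
        for pic in select_candidates(pictures):
            for block in pic.blocks:
                if detect_artifact(block):   # bitstream-level test, no pixel reconstruction
                    block.quality = LOWEST_QUALITY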
Abstract:
A particular implementation receives a bitstream and derives parameters from the bitstream. The parameters include quantization parameters, content unpredictability parameters, ratios of lost blocks, ratios of propagated blocks, error concealment distances, motion vectors, durations of freezing, and frame rates. Using these parameters, a compression distortion factor, a slicing distortion factor, and a freezing distortion factor are estimated for the distortions resulting from video compression, slicing mode error concealment, and freezing mode error concealment, respectively. The distortion factors are then mapped to a composite video quality score. For applications with limited computational power, the estimation of the distortion factors can be simplified: the compression distortion factor, the slicing distortion factor, and the freezing distortion factor can be predicted from the quantization parameters, the ratios of lost blocks, and the durations of freezing, respectively.
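A minimal sketch of the simplified variant, with purely illustrative mapping functions and coefficients (the abstract does not specify them):

    import math

    # Placeholder coefficients; in practice they would be fitted to subjective data.
    A_SLICE, A_FREEZE = 6.0, 0.5

    def compression_factor(avg_qp):
        # higher QP -> more compression distortion (logistic shape assumed)
        return 1.0 / (1.0 + math.exp(-(avg_qp - 30.0) / 4.0))

    def slicing_factor(lost_block_ratio):
        return 1.0 - math.exp(-A_SLICE * lost_block_ratio)

    def freezing_factor(freeze_duration_s):
        return 1.0 - math.exp(-A_FREEZE * freeze_duration_s)

    def composite_quality(avg_qp, lost_block_ratio, freeze_duration_s):
        """Map the three distortion factors to a single score on a 1..5 scale."""
        d = (compression_factor(avg_qp)
             + slicing_factor(lost_block_ratio)
             + freezing_factor(freeze_duration_s))
        return max(1.0, 5.0 - 4.0 * min(d, 1.0))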
Abstract:
Accuracy and efficiency are the major challenges in video quality measurement. According to the invention, a method for accurately predicting video quality uses a rational function of the quantization parameter QP, corrected by a correction function that depends on the content unpredictability CU. Exemplarily, the correction function is a power function of CU. Both QP and CU can be computed from the video elementary stream without fully decoding the video, which ensures high efficiency.
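A minimal sketch of such a predictor, assuming a multiplicative correction and placeholder coefficients that would be fitted to subjective scores:

    def predict_quality(qp, cu, a=6.0, b=-0.05, c=1.0, d=0.02, e=0.3):
        """Illustrative form: a rational function of QP times a power function of CU."""
        rational_qp = (a + b * qp) / (c + d * qp)   # rational function of QP
        correction = cu ** e                        # power-function correction in CU
        return rational_qp * correction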
Abstract:
The invention provides a method and apparatus for detecting a gradual transition picture in a bitstream. The method comprises: accessing a bitstream including encoded pictures; and determining a gradual transition picture in the bitstream using information from the bitstream without decoding the bitstream to derive pixel information.
Abstract:
A macroblock in a video sequence may be undecodable because the corresponding compressed data is lost or the syntax is out of synchronization. An undecodable macroblock may be concealed using an error concealment technique. The level of initial visible artifacts caused by undecodable macroblocks may be estimated as a function of motion magnitude, error concealment distance, and/or residual energy. The initial visible artifacts may propagate spatially or temporally to other macroblocks through prediction. Considering both initial and propagated visible artifacts, overall artifact levels may be estimated for individual macroblocks. The visual quality of the video sequence can then be estimated by pooling the macroblock-level artifact levels.
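A minimal sketch, with illustrative weights and simple average pooling (neither is specified in the abstract):

    def initial_artifact(motion_magnitude, ec_distance, residual_energy,
                         w_mv=0.5, w_dist=0.3, w_res=0.2):
        """Initial visible artifact level of an undecodable, concealed macroblock."""
        return w_mv * motion_magnitude + w_dist * ec_distance + w_res * residual_energy

    def propagated_artifact(ref_artifact, attenuation=0.9):
        """Artifact inherited by a macroblock that predicts from a damaged reference."""
        return attenuation * ref_artifact

    def sequence_quality(mb_artifact_levels):
        """Pool macroblock-level artifact levels into one sequence-level estimate."""
        if not mb_artifact_levels:
            return 0.0
        return sum(mb_artifact_levels) / len(mb_artifact_levels)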
Abstract:
A method for estimating video quality at the bit-stream level, wherein the video quality refers to the video after error concealment and the method is performed at the bit-stream level before said error concealment, comprises: extracting and/or calculating a plurality of global condition features from a video bit-stream; extracting and/or calculating a plurality of local effectiveness features, at least for each lost macroblock (MB); calculating a numeric error concealment effectiveness level for each MB (or at least for each lost MB) by emulating the error concealment method used in said error concealment; and providing the calculated error concealment effectiveness level as an estimated visible artifacts level of video quality.
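A minimal sketch, assuming macroblock objects with hypothetical lost/features fields and an emulate_ec callable standing in for the emulated error concealment method:

    def estimate_frame_artifacts(frame, global_features, emulate_ec):
        """Emulate the decoder-side error concealment for each lost MB and use the
        resulting effectiveness level as the estimated visible artifact level."""
        levels = {}
        for mb in frame.macroblocks:
            if not mb.lost:
                continue
            local_features = mb.features                    # local effectiveness features
            levels[mb.index] = emulate_ec(global_features, local_features)
        return levels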
Abstract:
A method and apparatus are disclosed for predicting the subjective quality of a video contained in a bitstream at the packet layer. Header information of the bitstream is parsed, and frame-layer information, such as frame type, is estimated. Visible artifact levels are then estimated based on the frame-layer information. An overall artifact level and a quality metric are estimated from the artifact levels of individual frames together with other parameters. Specifically, different weighting factors are used for different frame types when estimating the levels of initial visible artifacts and propagated visible artifacts. The number of slices per frame is used as a parameter when estimating the overall artifact level for the video. Moreover, the quality assessment model considers quality loss caused by both coding and channel artifacts.
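A minimal sketch of the pooling step, with assumed frame-type weights and an assumed combination of coding and channel terms:

    FRAME_TYPE_WEIGHTS = {"I": 1.0, "P": 0.6, "B": 0.3}   # illustrative weighting

    def overall_artifact(frames, slices_per_frame):
        level = 0.0
        for f in frames:
            w = FRAME_TYPE_WEIGHTS.get(f.est_type, 0.5)     # estimated frame type
            level += w * (f.initial_artifact + f.propagated_artifact)
        # more slices per frame -> a single packet loss affects a smaller area
        return (level / max(len(frames), 1)) / max(slices_per_frame, 1)

    def quality_score(frames, slices_per_frame, coding_distortion):
        # combine channel artifacts with coding (compression) quality loss
        channel = 4.0 * min(overall_artifact(frames, slices_per_frame), 1.0)
        return max(1.0, 5.0 - coding_distortion - channel)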
Abstract:
Spatial distortion (i.e., distortion perceived when a frame is viewed independently of other frames in a video sequence) may be quite different from temporal distortion (i.e., distortion perceived when the frames are viewed continuously). To estimate temporal distortion, a sliding window approach is used. Specifically, multiple sliding windows around a current frame are considered. Within each sliding window, a large-distortion density is calculated, and the sliding window with the highest large-distortion density is selected. A distance between the current frame and the closest frame with large distortion in the selected window is calculated. The temporal distortion is then estimated as a function of the highest large-distortion density, the spatial distortion of the current frame, and the distance. In another embodiment, the median of the spatial distortion values is calculated for each sliding window, and the maximum of the median values is used to estimate the temporal distortion.
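A minimal sketch of the first embodiment, with placeholder window length, large-distortion threshold, and combining function:

    def temporal_distortion(spatial_dist, t, win=8, large_thr=0.5):
        """Estimate the temporal distortion of frame t from per-frame spatial distortions."""
        n = len(spatial_dist)
        assert n >= win, "sketch assumes the sequence is at least one window long"
        best_density, best_window = -1.0, None
        # consider every sliding window of length `win` that contains frame t
        for start in range(max(0, t - win + 1), min(t, n - win) + 1):
            window = range(start, start + win)
            density = sum(spatial_dist[i] > large_thr for i in window) / win
            if density > best_density:
                best_density, best_window = density, window
        # distance from t to the closest large-distortion frame in the selected window
        large_frames = [i for i in best_window if spatial_dist[i] > large_thr]
        dist = min((abs(i - t) for i in large_frames), default=win)
        # higher density, larger spatial distortion, and smaller distance
        # -> larger perceived temporal distortion
        return best_density * spatial_dist[t] / (1.0 + dist)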
Abstract:
Objective video quality assessment models at the media layer or the packet layer are known for estimating audio/video quality of experience, but existing models are not able to provide stable performance. A method for enabling quality assessment of a stream of frames of video data comprises receiving a sequence of packets, generating a set of parameters, and inserting said generated set of parameters as side information into said stream of frames, wherein at least one parameter refers to a video slice level. A method for assessing the quality of a stream of frames of video data comprises receiving a sequence of packets, extracting a set of parameters from said sequence of packets, and generating an estimated mean opinion score, wherein the video data comprise a slice level and the extracted set of parameters comprises at least one parameter that refers to the video slice level.
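A minimal sketch of the two cooperating methods, with hypothetical parameter names and an illustrative linear mapping to the mean opinion score:

    def make_side_info(avg_qp, slice_loss_ratio, frame_rate):
        """Bundle the generated parameters (at least one at video slice level) so
        they can be inserted into the stream as side information."""
        return {"avg_qp": avg_qp,
                "slice_loss_ratio": slice_loss_ratio,   # video-slice-level parameter
                "frame_rate": frame_rate}

    def estimate_mos(side_info):
        """Map the extracted parameter set to an estimated mean opinion score."""
        mos = (4.5
               - 0.05 * side_info["avg_qp"]
               - 3.0 * side_info["slice_loss_ratio"])
        return max(1.0, min(5.0, mos))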