Video quality objective assessment method based on spatiotemporal domain structure
    1.
    Invention Application
    Video quality objective assessment method based on spatiotemporal domain structure (Under Examination, Published)

    公开(公告)号:US20160330439A1

    公开(公告)日:2016-11-10

    申请号:US15214456

    申请日:2016-07-20

    CPC classification number: H04N17/02 H04N19/154 H04N19/89

    Abstract: A video quality objective assessment method based on a spatiotemporal domain structure first combines a spatiotemporal domain gradient magnitude with color information to calculate a spatiotemporal domain local similarity, and then uses variance fusion for spatial domain fusion. The spatiotemporal domain local similarity is fused into a frame-level objective quality value, and then a temporal domain fusion model is established by simulating three important global temporal effects of the human visual system: a smoothing effect, an asymmetric tracking effect, and a recency effect. Finally, the objective quality values of the distorted video sequence are obtained. By modeling the human visual temporal domain effects, the temporal domain weighting method of the present invention is able to accurately and efficiently evaluate the objective quality of the distorted video.
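    The abstract does not spell out the exact formulas, so the following is only a minimal Python sketch of one plausible reading of the first stage: a spatiotemporal gradient magnitude and a chroma plane are compared with an SSIM-style local similarity, and the resulting map is pooled with a variance-based rule into a frame-level score. The function names (st_gradient_magnitude, local_similarity, frame_quality) and the parameter alpha are illustrative assumptions, not the claimed implementation.

    import numpy as np

    def st_gradient_magnitude(stack):
        # Spatiotemporal gradient magnitude of a (t, h, w) luminance stack,
        # using central differences along the temporal and both spatial axes.
        gt, gy, gx = np.gradient(stack.astype(np.float64))
        return np.sqrt(gt ** 2 + gy ** 2 + gx ** 2)

    def local_similarity(a, b, c=1e-3):
        # SSIM-style pointwise similarity of two feature maps; c avoids division by zero.
        return (2.0 * a * b + c) / (a ** 2 + b ** 2 + c)

    def frame_quality(ref_stack, dist_stack, ref_chroma, dist_chroma, alpha=0.4):
        # Combine spatiotemporal gradient similarity (structure) with chroma similarity
        # (color) on the centre frame, then pool the similarity map with a
        # variance-style rule: mean similarity penalised by its spatial deviation.
        # Chroma planes are assumed nonnegative (e.g. 8-bit Cb/Cr); alpha is an assumed weight.
        mid = ref_stack.shape[0] // 2
        sim_grad = local_similarity(st_gradient_magnitude(ref_stack),
                                    st_gradient_magnitude(dist_stack))[mid]
        sim_color = local_similarity(ref_chroma.astype(np.float64),
                                     dist_chroma.astype(np.float64))
        sim = sim_grad * (sim_color ** alpha)
        return float(sim.mean() - sim.std())

    # Example usage (hypothetical arrays): per-frame scores over a sequence
    # scores = [frame_quality(ref_y[i-1:i+2], dist_y[i-1:i+2], ref_cb[i], dist_cb[i])
    #           for i in range(1, len(ref_y) - 1)]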


    2.
    Video quality objective assessment method based on spatiotemporal domain structure

    公开(公告)号:US09756323B2

    公开(公告)日:2017-09-05

    申请号:US15214456

    申请日:2016-07-20

    CPC classification number: H04N17/02 H04N19/154 H04N19/89

    Abstract: A video quality objective assessment method based on a spatiotemporal domain structure first combines a spatiotemporal domain gradient magnitude with color information to calculate a spatiotemporal domain local similarity, and then uses variance fusion for spatial domain fusion. The spatiotemporal domain local similarity is fused into a frame-level objective quality value, and then a temporal domain fusion model is established by simulating three important global temporal effects of the human visual system: a smoothing effect, an asymmetric tracking effect, and a recency effect. Finally, the objective quality values of the distorted video sequence are obtained. By modeling the human visual temporal domain effects, the temporal domain weighting method of the present invention is able to accurately and efficiently evaluate the objective quality of the distorted video.
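    As a companion to the frame-level sketch above, the second stage (temporal domain fusion) can be illustrated with an assumed weighting scheme: a causal moving average for the smoothing effect, larger weights on quality drops than on recoveries for the asymmetric tracking effect, and exponentially increasing weights toward the end of the clip for the recency effect. The function name temporal_pooling and all parameter values are hypothetical; the patent's actual model may differ.

    import numpy as np

    def temporal_pooling(frame_scores, smooth_win=8, asym_ratio=3.0, recency_tau=60.0):
        # Pool per-frame quality scores into one sequence-level score by mimicking
        # three global temporal effects of the human visual system
        # (window size, ratio and time constant are illustrative assumptions):
        #   smoothing           - causal moving average damps frame-to-frame jitter
        #   asymmetric tracking - quality drops weigh more than quality recoveries
        #   recency             - frames near the end of the clip weigh more
        q = np.asarray(frame_scores, dtype=np.float64)
        n = len(q)

        # 1) Smoothing effect: causal moving average over the last `smooth_win` frames.
        q_smooth = np.array([q[max(0, i - smooth_win + 1): i + 1].mean() for i in range(n)])

        # 2) Asymmetric tracking effect: up-weight frames where smoothed quality is falling.
        dq = np.diff(q_smooth, prepend=q_smooth[0])
        w_asym = np.where(dq < 0.0, asym_ratio, 1.0)

        # 3) Recency effect: exponentially larger weights towards the end of the sequence.
        w_recency = np.exp((np.arange(n) - (n - 1)) / recency_tau)

        w = w_asym * w_recency
        return float(np.sum(w * q_smooth) / np.sum(w))

    # Example usage (hypothetical scores): sequence_quality = temporal_pooling(scores)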
