Video temporal perception characteristics based just noticeable difference model

The existing temporal-domain JND (just noticeable distortion) models do not adequately capture the interaction between temporal parameters and HVS (human visual system) characteristics, which limits the accuracy of spatial-temporal JND models. To address this, feature parameters that accurately describe the temporal characteristics of video were explored and extracted, a homogenization method for fusing these heterogeneous feature parameters was devised, and the temporal-domain JND model was improved on this basis. The investigated feature parameters include foreground and background motion, temporal duration along the motion trajectory, residual fluctuation intensity along the motion trajectory, and the adjacent inter-frame prediction residual. Probability density functions for these feature parameters in the perceptual sense were proposed according to HVS characteristics, and the heterogeneous parameters were uniformly mapped to the scales of self-information and information entropy to achieve a homogeneous fusion measurement. The coupling of visual attention and masking was examined from the perspective of energy distribution, and a temporal-domain JND weight model was constructed accordingly. The temporal-domain weights were then integrated with the spatial JND threshold to develop a more accurate spatial-temporal JND model. A subjective quality evaluation experiment was conducted to assess the spatiotemporal JND model, and the experimental results confirm its effectiveness.
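The fusion and weighting steps described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual probability density functions are fitted in the paper and are not given in this record, so the Laplacian/Gaussian densities, the averaging fusion, and the `tanh` squashing below are all illustrative assumptions, as are the function names.

```python
import numpy as np

def self_information(x, pdf):
    """Map a feature parameter to the self-information scale: I(x) = -log2 p(x)."""
    p = np.clip(pdf(x), 1e-12, None)  # avoid log(0)
    return -np.log2(p)

# Hypothetical per-parameter densities (the paper fits its own HVS-based pdfs).
laplacian = lambda x, b=2.0: np.exp(-np.abs(x) / b) / (2 * b)                       # residual term
gaussian  = lambda x, s=4.0: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))  # motion term

def temporal_weight(features, pdfs, alpha=1.0):
    """Fuse heterogeneous temporal features into one masking weight.

    Each parameter is first mapped to self-information so that motion,
    duration and residual terms share a common (bit) scale, then averaged
    and squashed into a multiplicative JND weight >= 1.
    """
    info = np.array([self_information(x, p) for x, p in zip(features, pdfs)])
    fused = info.mean()                          # homogeneous fusion on the information scale
    return 1.0 + alpha * np.tanh(fused / 8.0)    # bounded temporal weight in [1, 1 + alpha)

def spatiotemporal_jnd(jnd_spatial, features, pdfs):
    """Spatial-temporal JND: scale the spatial threshold by the temporal weight."""
    return temporal_weight(features, pdfs) * jnd_spatial

# Example: a block with spatial JND threshold 5, residual = 3, motion = 2.
w = temporal_weight([3.0, 2.0], [laplacian, gaussian])
jnd_st = spatiotemporal_jnd(5.0, [3.0, 2.0], [laplacian, gaussian])
```

The key design point the abstract argues for is the common scale: motion magnitude, duration, and residual intensity are not directly comparable, but their self-information values are, which is what makes a single fused weight meaningful.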

Bibliographic Details
Main Authors: Yafen XING, Haibing YIN, Hongkui WANG, Qionghua LUO
Format: Article
Language: zho
Published: Beijing Xintong Media Co., Ltd, 2022-02-01
Series: Dianxin kexue
Subjects: JND; HVS characteristics; visual masking; visual attention; self-information; information entropy
Online Access: http://www.telecomsci.com/zh/article/doi/10.11959/j.issn.1000-0801.2022030/
ISSN: 1000-0801