River Surface Space–Time Image Velocimetry Based on Dual-Channel Residual Network

Bibliographic Details
Main Authors: Ling Gao, Zhen Zhang, Lin Chen, Huabao Li
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/10/5284
Description
Summary: Space–Time Image Velocimetry (STIV) estimates the one-dimensional time-averaged velocity by analyzing the main orientation of texture (MOT) in space–time images (STIs). However, environmental interference often blurs weak tracer textures in STIs, limiting the accuracy of traditional MOT detection algorithms that rely on shallow features such as the image gray-level gradient. To solve this problem, we propose a deep learning-based MOT detection model using a dual-channel ResNet (DCResNet). The model feeds gray and edge channels through ResNet18 backbones, performs weighted fusion on the features extracted from the two channels, and finally outputs the MOT. An adaptive-threshold Sobel operator in the edge channel improves the model's ability to extract edge features from STIs. An STI dataset is constructed from a typical mountainous river at the Panzhihua hydrological station in Panzhihua City, Sichuan Province. DCResNet achieves the best MOT detection at a 7:3 gray-to-edge fusion ratio, with MAEs of 0.41° in normal scenarios and 1.2° in complex noise scenarios, outperforming the single-channel models. In flow velocity comparison experiments, DCResNet demonstrates excellent detection performance and robustness: compared with current meter measurements, its MRE is 4.08%, which is better than that of the FFT method.
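The abstract describes the DCResNet architecture (gray and edge channels through ResNet18, weighted feature fusion, MOT output) but this record contains no implementation. The following is a minimal PyTorch sketch of that idea, assuming a standard single-channel ResNet18 backbone per branch, a simple mean-based adaptive Sobel threshold, fixed 7:3 fusion weights, and a single-angle regression head; the names DCResNet, make_backbone, and sobel_edge and all hyperparameters are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of a dual-channel ResNet for MOT regression on space-time images.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


def sobel_edge(gray: torch.Tensor) -> torch.Tensor:
    """Sobel gradient magnitude with a per-image adaptive threshold.

    The mean-based threshold is an assumption; the paper's adaptive scheme
    may differ in detail.
    """
    kx = torch.tensor([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]], device=gray.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2)
    thr = mag.mean(dim=(2, 3), keepdim=True)   # adaptive threshold per image
    return (mag > thr).float()                 # binary edge map


def make_backbone() -> nn.Module:
    """ResNet18 feature extractor adapted to single-channel STI input."""
    net = resnet18(weights=None)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Identity()                     # keep the 512-d pooled features
    return net


class DCResNet(nn.Module):
    """Dual-channel ResNet: gray branch + edge branch with weighted feature fusion."""

    def __init__(self, gray_weight: float = 0.7, edge_weight: float = 0.3):
        super().__init__()
        self.gray_branch = make_backbone()
        self.edge_branch = make_backbone()
        self.gray_weight = gray_weight         # 7:3 gray-to-edge ratio from the abstract
        self.edge_weight = edge_weight
        self.head = nn.Linear(512, 1)          # regress the MOT angle (degrees)

    def forward(self, gray_sti: torch.Tensor) -> torch.Tensor:
        edge_sti = sobel_edge(gray_sti)        # edge channel derived from the gray STI
        f_gray = self.gray_branch(gray_sti)
        f_edge = self.edge_branch(edge_sti)
        fused = self.gray_weight * f_gray + self.edge_weight * f_edge
        return self.head(fused)


if __name__ == "__main__":
    model = DCResNet()
    sti_batch = torch.randn(4, 1, 256, 256)    # 4 single-channel STIs
    print(model(sti_batch).shape)              # torch.Size([4, 1])
```

In this sketch the fusion weights are fixed at the 7:3 ratio reported in the abstract; whether the authors fix, tune, or learn these weights is not stated in this record.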
ISSN: 2076-3417