YOLOT: Multi-scale and diverse tire sidewall text region detection based on You-Only-Look-Once (YOLOv5)

Bibliographic Details
Main Authors: Dehua Liu, Yongqin Tian, Yibo Xu, Wenyi Zhao, Xipeng Pan, Xu Ji, Mu Yang, Huihua Yang
Format: Article
Language: English
Published: KeAi Communications Co. Ltd. 2024-01-01
Series: Cognitive Robotics
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S266724132400003X
Description
Summary: Driving safety is essential to building a people-oriented and harmonious society. Tires are among the key components of a vehicle, and the character information on the tire sidewall is critical to their storage and usage. However, because typographic fonts are diverse and highly varied, simultaneously extracting comprehensive characteristics is an extremely challenging task. To address these performance degradation issues, a multi-scale tire sidewall text region detection algorithm based on YOLOv5, called YOLOT, is introduced; it fuses comprehensive feature information in both the width and depth directions. In this study, we first propose the Width and Depth Awareness (WDA) module for the text region detection field and integrate it with the FPN structure to form WDA-FPN. The purpose of WDA-FPN is to enable the network to capture multi-scale and multi-shape features in images, thereby strengthening the algorithm’s abstraction and representation of image features while boosting its robustness and generalization performance. Experimental findings indicate that, compared to the baseline algorithm, YOLOT achieves a significant improvement in accuracy, providing higher detection reliability. The dataset and code for the paper are available at: https://github.com/Cloude-dehua/YOLOT.
ISSN: 2667-2413
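
The summary above describes a Width and Depth Awareness (WDA) block that is fused into the FPN so the network captures multi-scale and multi-shape features. The record does not give the module's internal design, so the following is only a minimal, hypothetical PyTorch sketch of one way a "width and depth aware" block could be built: parallel convolutions with different kernel sizes (width) alongside a deeper stacked-convolution branch (depth), fused by a 1x1 convolution. The class name, branch layout, and channel counts are assumptions, not the paper's actual WDA implementation (see the repository linked above for the real code).

```python
# Hypothetical sketch of a width-and-depth-aware block; NOT the paper's WDA module.
import torch
import torch.nn as nn


class WDABlockSketch(nn.Module):
    """Parallel multi-kernel branches (width) plus a deeper stacked-conv
    branch (depth), fused with a 1x1 convolution. Illustrative only."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # "Width" branches: different receptive fields for multi-scale / multi-shape text.
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.SiLU())
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 5, padding=2), nn.SiLU())
        # "Depth" branch: two stacked 3x3 convolutions for more abstract features.
        self.branch_deep = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.SiLU(),
        )
        # Fuse the concatenated branch outputs back to out_ch channels.
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.branch3(x), self.branch5(x), self.branch_deep(x)], dim=1)
        return self.fuse(feats)


if __name__ == "__main__":
    # Shape check on a dummy FPN-level feature map.
    x = torch.randn(1, 256, 40, 40)
    print(WDABlockSketch(256, 256)(x).shape)  # torch.Size([1, 256, 40, 40])
```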