Quantifying Shape and Texture Biases for Enhancing Transfer Learning in Convolutional Neural Networks

Neural networks have inductive biases owing to the assumptions associated with the selected learning algorithm, datasets, and network structure. Specifically, convolutional neural networks (CNNs) are known for their tendency to exhibit textural biases. This bias is closely related to image classification accuracy. Aligning the model’s bias with the dataset’s bias can significantly enhance performance in transfer learning, leading to more efficient learning. This study aims to quantitatively demonstrate that increasing shape bias within the network by varying kernel sizes and dilation rates improves accuracy on shape-dominant data and enables efficient learning with less data. Furthermore, we propose a novel method for quantitatively evaluating the balance between texture bias and shape bias. This method enables efficient learning by aligning the biases of the transfer learning dataset with those of the model. Systematically adjusting these biases allows CNNs to better fit data with specific biases. Compared to the original model, an accuracy improvement of up to 9.9% was observed. Our findings underscore the critical role of bias adjustment in CNN design, contributing to developing more efficient and effective image classification models.
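
Illustrative note: the abstract describes raising shape bias by enlarging convolutional kernel sizes and dilation rates. The following is a minimal sketch (assuming PyTorch) of a convolutional block with configurable kernel size and dilation; it is a hypothetical illustration of that idea, not the authors' published architecture, and the class and parameter names are invented for this example.

# Minimal sketch (assumption: PyTorch). Illustrative only; the paper's
# actual layer configuration is not reproduced here.
import torch
import torch.nn as nn

class WideReceptiveFieldBlock(nn.Module):
    """Conv block whose kernel size and dilation rate are configurable.

    Larger kernels and dilation rates enlarge the receptive field, which the
    abstract associates with a stronger shape bias.
    """

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # "same" padding so spatial size is preserved for odd kernel sizes
        padding = dilation * (kernel_size - 1) // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, dilation=dilation)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

# Example: a 7x7 kernel with dilation 2 covers the same extent as a 13x13 kernel.
block = WideReceptiveFieldBlock(3, 64, kernel_size=7, dilation=2)
out = block(torch.randn(1, 3, 224, 224))  # -> torch.Size([1, 64, 224, 224])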

Bibliographic Details
Main Authors: Akinori Iwata, Masahiro Okuda
Author Affiliation: Faculty of Science and Engineering, Doshisha University, Kyoto 602-0898, Japan
Format: Article
Language: English
Published: MDPI AG, 2024-11-01
Series: Signals, Vol. 5, No. 4, pp. 721-735
ISSN: 2624-6120
DOI: 10.3390/signals5040040
Subjects: convolutional neural networks; inductive bias; classification; shape/texture bias
Online Access: https://www.mdpi.com/2624-6120/5/4/40