Transformation-Based Data Synthesis for Limited Sample Scenario
We consider a challenging learning scenario where neither pretext training nor auxiliary data are available except for small training samples. We call this a transfer-free scenario where we cannot access any transferable knowledge or data. Our proposal for resolving this issue is to learn a pair-wise transformation function (e.g., spatial or appearance) between given samples. This simple setting yields two practical advantages. The training objective can be defined as a simple reconstruction loss, and data can be synthesized by merely manipulating or sampling the learned transformations. However, the limitation of previous transformation methods lies in a strong assumption that all images should be transformable to each other, i.e., all-to-all transformable. To relax this constraint, we propose a novel concept called ‘template,’ designed to be transformable to any other data, i.e., “template-to-all” transformable. A range of experiments on the transfer-free scenarios confirms that our model successfully learns transformation and synthesizes new data from minimal training data (less than five or ten for each class). The subsequent data augmentation experiments showed significantly improved classification performance.
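As the abstract describes it, the method has three moving parts: a learnable template, a network that infers a transformation carrying the template onto each training sample, and a plain reconstruction loss; new data are then synthesized by manipulating or interpolating the learned transformations. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the restriction to affine spatial warps, the layer sizes, and every name in it are illustrative assumptions (the paper also mentions appearance transformations).

```python
# Hypothetical sketch of "template-to-all" transformation learning.
# Not the paper's code: affine warps, sizes, and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemplateToAll(nn.Module):
    def __init__(self, img_shape=(1, 28, 28)):
        super().__init__()
        c, h, w = img_shape
        # Learnable template, meant to be transformable to any sample.
        self.template = nn.Parameter(0.1 * torch.randn(1, c, h, w))
        # Encoder maps a target image to a 2x3 affine transform.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(c * h * w, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )
        # Start at the identity transform so training is stable.
        self.encoder[-1].weight.data.zero_()
        self.encoder[-1].bias.data.copy_(
            torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def transform(self, theta):
        # Warp the shared template with per-sample affine parameters.
        n = theta.size(0)
        grid = F.affine_grid(theta.view(n, 2, 3),
                             [n, *self.template.shape[1:]],
                             align_corners=False)
        return F.grid_sample(self.template.expand(n, -1, -1, -1),
                             grid, align_corners=False)

    def forward(self, x):
        theta = self.encoder(x)  # one transform per target sample
        return self.transform(theta), theta

# Training objective: plain reconstruction of each target image.
model = TemplateToAll()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 1, 28, 28)  # stand-in for <10 samples per class
for _ in range(100):
    recon, theta = model(x)
    loss = F.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

# Synthesis: interpolate two learned transforms to get a new sample,
# i.e., "manipulating or sampling the learned transformations".
with torch.no_grad():
    _, theta = model(x)
    mix = 0.5 * (theta[0] + theta[1])
    new_sample = model.transform(mix.unsqueeze(0))
```

The sketch keeps the paper's key relaxation: only the template must be transformable to every sample, so no pair of training images ever needs to map directly onto each other.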
| Main Authors: | Chang-Hwa Lee, Sang Wan Lee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | Image synthesis; small sample classification; computer vision; deep learning |
| Online Access: | https://ieeexplore.ieee.org/document/10781377/ |
| _version_ | 1846123603992510464 |
|---|---|
| author | Chang-Hwa Lee; Sang Wan Lee |
| author_facet | Chang-Hwa Lee; Sang Wan Lee |
| author_sort | Chang-Hwa Lee |
| collection | DOAJ |
| description | We consider a challenging learning scenario where neither pretext training nor auxiliary data are available except for small training samples. We call this a transfer-free scenario where we cannot access any transferable knowledge or data. Our proposal for resolving this issue is to learn a pair-wise transformation function (e.g., spatial or appearance) between given samples. This simple setting yields two practical advantages. The training objective can be defined as a simple reconstruction loss, and data can be synthesized by merely manipulating or sampling the learned transformations. However, the limitation of previous transformation methods lies in a strong assumption that all images should be transformable to each other, i.e., all-to-all transformable. To relax this constraint, we propose a novel concept called ‘template,’ designed to be transformable to any other data, i.e., “template-to-all” transformable. A range of experiments on the transfer-free scenarios confirms that our model successfully learns transformation and synthesizes new data from minimal training data (less than five or ten for each class). The subsequent data augmentation experiments showed significantly improved classification performance. |
| format | Article |
| id | doaj-art-058a4a6afc79497587b09dcb73ace66e |
| institution | Kabale University |
| issn | 2169-3536 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | Record ID: doaj-art-058a4a6afc79497587b09dcb73ace66e; indexed 2024-12-14T00:00:55Z. Chang-Hwa Lee (https://orcid.org/0000-0001-5961-8284; Brain and Cognitive Engineering Program, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea) and Sang Wan Lee (https://orcid.org/0000-0001-6266-9613; Department of Brain and Cognitive Sciences, Department of Bio and Brain Engineering, Kim Jaechul Graduate School of AI, Center for Neuroscience-inspired AI, KAIST, Daejeon, South Korea), “Transformation-Based Data Synthesis for Limited Sample Scenario,” IEEE Access, vol. 12, pp. 184841–184852, 2024-01-01, ISSN 2169-3536, doi: 10.1109/ACCESS.2024.3512538, https://ieeexplore.ieee.org/document/10781377/ |
| spellingShingle | Chang-Hwa Lee; Sang Wan Lee; Transformation-Based Data Synthesis for Limited Sample Scenario; IEEE Access; Image synthesis; small sample classification; computer vision; deep learning |
| title | Transformation-Based Data Synthesis for Limited Sample Scenario |
| title_full | Transformation-Based Data Synthesis for Limited Sample Scenario |
| title_fullStr | Transformation-Based Data Synthesis for Limited Sample Scenario |
| title_full_unstemmed | Transformation-Based Data Synthesis for Limited Sample Scenario |
| title_short | Transformation-Based Data Synthesis for Limited Sample Scenario |
| title_sort | transformation based data synthesis for limited sample scenario |
| topic | Image synthesis; small sample classification; computer vision; deep learning |
| url | https://ieeexplore.ieee.org/document/10781377/ |
| work_keys_str_mv | AT changhwalee transformationbaseddatasynthesisforlimitedsamplescenario AT sangwanlee transformationbaseddatasynthesisforlimitedsamplescenario |