Device‐Algorithm Co‐Optimization for an On‐Chip Trainable Capacitor‐Based Synaptic Device with IGZO TFT and Retention‐Centric Tiki‐Taka Algorithm

Bibliographic Details
Main Authors: Jongun Won, Jaehyeon Kang, Sangjun Hong, Narae Han, Minseung Kang, Yeaji Park, Youngchae Roh, Hyeong Jun Seo, Changhoon Joe, Ung Cho, Minil Kang, Minseong Um, Kwang‐Hee Lee, Jee‐Eun Yang, Moonil Jung, Hyung‐Min Lee, Saeroonter Oh, Sangwook Kim, Sangbum Kim
Format: Article
Language: English
Published: Wiley 2023-10-01
Series: Advanced Science
Subjects:
Online Access: https://doi.org/10.1002/advs.202303018
Description
Summary: Analog in‐memory computing synaptic devices are widely studied for efficient implementation of deep learning. However, synaptic devices based on resistive memory have difficulty supporting on‐chip training because the amount of resistance change cannot be precisely controlled and device variations are large. To overcome these shortcomings, silicon complementary metal‐oxide‐semiconductor (Si‐CMOS) and capacitor‐based charge‐storage synapses have been proposed, but it is difficult to obtain sufficient retention time due to Si‐CMOS leakage currents, resulting in degraded training accuracy. Here, a novel 6T1C synaptic device is proposed that combines a capacitor with n‐type indium gallium zinc oxide thin‐film transistors (IGZO TFTs), whose low leakage current allows not only linear and symmetric weight updates but also sufficient retention time and parallel on‐chip training operations. In addition, an efficient and realistic training algorithm is proposed to compensate for any remaining device non‐idealities, such as drifting references and long‐term retention loss, demonstrating the importance of device‐algorithm co‐optimization.
ISSN:2198-3844
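
The retention‐centric Tiki‐Taka algorithm named in the title builds on the two‐matrix Tiki‐Taka scheme for analog in‐memory training, in which a fast array accumulates noisy gradient updates and a slow array periodically absorbs them. A minimal, idealized sketch of that general scheme follows; all variable names, learning rates, the transfer schedule, and the toy regression task are illustrative assumptions, not details from the article.

```python
import numpy as np

# Illustrative sketch of a two-matrix Tiki-Taka-style update (an assumption
# about the general scheme; the paper's retention-centric variant differs).
rng = np.random.default_rng(0)

n_in, n_out = 4, 2
A = np.zeros((n_out, n_in))   # fast array: accumulates gradient updates
C = np.zeros((n_out, n_in))   # slow array: holds the long-term weight

W_true = rng.normal(size=(n_out, n_in))  # target linear map to learn

lr_A, lr_C = 0.1, 0.05
transfer_every = 10  # periodically transfer information from A into C

for step in range(1, 501):
    x = rng.normal(size=n_in)
    y = (A + C) @ x                 # effective weight is A + C
    err = y - W_true @ x            # error signal for this sample
    # rank-1 outer-product update on the fast array (analog-friendly form)
    A -= lr_A * np.outer(err, x)
    if step % transfer_every == 0:
        # transfer step: fold the accumulated updates into the slow array,
        # then let the fast array partially decay (idealized here)
        C += lr_C * A
        A *= 0.5

loss = np.linalg.norm(A + C - W_true)
print(f"final weight error: {loss:.3f}")
```

The two‐array split matters for analog hardware because the fast array only needs symmetric, small‐granularity updates around zero, while the slow array (here, a role the capacitor‐based 6T1C cell could play) must mainly retain its state between infrequent transfers.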