Reducing Memory and Computational Cost for Deep Neural Network Training with Quantized Parameter Updates

For embedded devices, both memory and computational efficiency are essential due to their constrained resources. However, neural network training remains both computation- and memory-intensive. Although many existing studies apply quantization schemes to mitigate memory overhead, they often employ st...
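The abstract refers to quantization schemes that reduce the memory overhead of storing network parameters. As a rough illustrative sketch (not the authors' method), uniform per-tensor int8 quantization of a parameter update, with a single float scale kept alongside the quantized values, might look like this:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniformly quantize a float tensor to int8 with one per-tensor scale."""
    scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from its int8 representation."""
    return q.astype(np.float32) * scale

# Example: quantize a simulated parameter update.
rng = np.random.default_rng(0)
update = rng.normal(scale=0.01, size=(4, 4)).astype(np.float32)
q, s = quantize_int8(update)
recovered = dequantize(q, s)
# int8 storage costs 1 byte per element instead of 4 for float32,
# at the price of a rounding error bounded by half the scale step.
```

This stores 4x less data per tensor than float32, which illustrates the memory motivation; the actual scheme in the paper may differ (e.g., per-channel scales or different bit widths).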

Bibliographic Details
Main Authors: Leo Buron, Andreas Erbslöh, Gregor Schiele
Format: Article
Language: English
Published: Graz University of Technology, 2025-08-01
Series: Journal of Universal Computer Science
Subjects:
Online Access: https://lib.jucs.org/article/164737/download/pdf/