A Computational Model of Attention-Guided Visual Learning in a High-Performance Computing Software System


Bibliographic Details
Main Authors: Alice Ahmed, Md. Tanim Hossain
Format: Article
Language: English
Published: IMS Vogosca 2024-12-01
Series: Science, Engineering and Technology
Subjects:
Online Access: https://setjournal.com/SET/article/view/245
Description
Summary: This research investigates transformer architectures in high-performance computing (HPC) software systems for attention-guided visual learning (AGVL). The study examines the effects of environmental factors and non-contextual stimuli on cognitive control, showing how attention increases responses to attended stimuli and thereby normalizes activity across the population. Transformer blocks exploit parallelism and use less localized attention than recurrent or convolutional models. The study investigates transformer topologies for enhancing language modeling, focusing on attention-guided learning and attention-modulated Hebbian plasticity. The model comprises an all-attention layer with embedded input vectors, non-contextual vectors carrying generic task-relevant information, and self-attention and feedforward layers. The work employs relative two-dimensional positional encoding to address the challenge of encoding two-dimensional data such as photographs. The feature-similarity gain model proposes that attention multiplicatively strengthens neuronal responses according to how similar their feature tuning is to the attended input. The attention-guided learning approach couples learning to neural attentional response gain, which the network adjusts via gradient descent to reach the projected target outputs. The study found that supervised error backpropagation and the attention-modulated Hebbian rule outperformed the weight-gain rule on MNIST, although the concentration of attention differed between them.
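The feature-similarity gain model and the attention-modulated Hebbian rule named in the summary can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the cosine-similarity gain, the `beta` scaling, the learning rate, and all array shapes are illustrative assumptions; only the two ideas it instantiates (multiplicative gain proportional to tuning similarity, and a Hebbian update whose post-synaptic term is scaled by that gain) come from the summary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))   # synaptic weights (hypothetical init)
tuning = rng.normal(size=(n_out, n_in))         # each unit's preferred feature vector (hypothetical)

def feature_similarity_gain(attended, tuning, beta=1.0):
    """Multiplicative per-unit gain: larger when a unit's tuning is
    similar to the attended feature vector (feature-similarity gain model).
    Cosine similarity and beta are illustrative choices."""
    sim = tuning @ attended / (
        np.linalg.norm(tuning, axis=1) * np.linalg.norm(attended) + 1e-8
    )
    return 1.0 + beta * sim  # gain > 1 for similar tuning, < 1 for dissimilar

def hebbian_step(W, x, gain, lr=0.01):
    """Attention-modulated Hebbian update: pre * post outer product,
    with the post-synaptic response scaled by the attentional gain."""
    y = gain * (W @ x)               # gain-modulated responses
    return W + lr * np.outer(y, x)   # Hebbian weight change

x = rng.normal(size=n_in)            # one input stimulus
g = feature_similarity_gain(x, tuning)
W_new = hebbian_step(W, x, g)
```

With `beta=1.0` the cosine similarity keeps the gain in roughly [0, 2], so attention can both amplify well-tuned units and suppress poorly tuned ones before the Hebbian update is applied.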
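The relative two-dimensional positional encoding mentioned for image-like inputs can likewise be sketched under stated assumptions. Here a flattened H x W grid of tokens gets an attention bias computed from learned row-offset and column-offset embeddings; the additive row/column decomposition, the embedding tables `row_emb`/`col_emb`, and the dot-product form of the bias are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 4, 4   # image grid height and width (illustrative)
d = 16        # per-head query dimension (illustrative)

# Learned embeddings for row and column offsets in [-(H-1), H-1] / [-(W-1), W-1]
row_emb = rng.normal(scale=0.02, size=(2 * H - 1, d))
col_emb = rng.normal(scale=0.02, size=(2 * W - 1, d))

def rel_pos_bias(q):
    """Attention bias b[i, j] = q_i . (row_emb[dr] + col_emb[dc]),
    where (dr, dc) is the 2-D offset between grid cells i and j.
    The bias is added to the query-key logits before the softmax."""
    n = H * W
    bias = np.zeros((n, n))
    for i in range(n):
        ri, ci = divmod(i, W)          # row-major (row, col) of token i
        for j in range(n):
            rj, cj = divmod(j, W)
            e = row_emb[ri - rj + H - 1] + col_emb[ci - cj + W - 1]
            bias[i, j] = q[i] @ e
    return bias

q = rng.normal(size=(H * W, d))        # one head's query vectors
b = rel_pos_bias(q)
```

Because the embeddings are indexed only by the offset (dr, dc) and not by absolute position, the bias is translation-relative, which is the usual motivation for relative (rather than absolute) positional encoding on 2-D grids.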
ISSN: 2831-1043; 2744-2527