Leveraging Local LLMs for Secure In-System Task Automation With Prompt-Based Agent Classification

Bibliographic Details
Main Authors: Suthir Sriram, C. H. Karthikeya, K. P. Kishore Kumar, Nivethitha Vijayaraj, Thangavel Murugan
Format: Article
Language:English
Published: IEEE 2024-01-01
Series:IEEE Access
Online Access:https://ieeexplore.ieee.org/document/10766449/
Description
Summary:Recent progress in the field of artificial intelligence has led to the creation of powerful large language models (LLMs). While these models show promise in improving personal computing experiences, concerns surrounding data privacy and security have hindered their integration with sensitive personal information. In this study, a new framework is proposed that merges LLMs with personal file systems, enabling intelligent data interaction while maintaining strict privacy safeguards. The methodology organizes tasks using LLM agents, which apply designated tags to the tasks before sending them to specific LLM modules. Every module has its own function, including file search, document summarization, code interpretation, and general tasks, ensuring that all processing happens locally on the user's device. Findings indicate high accuracy across agents: the classification agent achieved an accuracy of 86%, and document summarization reached a BERTScore of 0.9243. The key point of this framework is that it splits the LLM system into modules, which enables future development by integrating new task-specific modules as required. Findings suggest that integrating local LLMs can significantly improve interactions with file systems without compromising data privacy.
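The tag-and-route scheme the abstract describes can be sketched in a few lines of Python. The tag names, prompt wording, and the `local_llm` callable below are illustrative assumptions for the sketch, not the authors' actual implementation; `local_llm` stands in for any locally hosted model invoked as a string-to-string function.

```python
# Minimal sketch of the tag-and-route idea described in the abstract.
# ASSUMPTIONS: `local_llm` is any callable wrapping a locally hosted model;
# the tag set and prompt wording are illustrative, not the paper's exact setup.
from typing import Callable

TAGS = ("file_search", "summarization", "code_interpretation", "general")

CLASSIFIER_PROMPT = (
    "Classify the user request into exactly one of these tags: "
    + ", ".join(TAGS)
    + ". Reply with the tag only.\n\nRequest: {request}"
)

def classify(request: str, local_llm: Callable[[str], str]) -> str:
    """Ask the local model for a tag; fall back to 'general' if the reply is unexpected."""
    tag = local_llm(CLASSIFIER_PROMPT.format(request=request)).strip().lower()
    return tag if tag in TAGS else "general"

def route(request: str, local_llm: Callable[[str], str],
          modules: dict[str, Callable[[str], str]]) -> str:
    """Dispatch the tagged request to its task-specific local module."""
    return modules[classify(request, local_llm)](request)
```

In the framework's design, each entry in `modules` would wrap a locally hosted, task-specific model (file search, document summarization, code interpretation, or a general fallback), so every request is classified and processed on the user's device, and new task-specific modules can be added by extending the tag set and the dispatch table.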
ISSN:2169-3536