MindLLM: Lightweight large language model pre-training, evaluation and domain application
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While general artificial intelligence is leveraged by developing increasingly large-scale models, there could be another b...
| Main Authors: | Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Yang Gao, Heyan Huang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | KeAi Communications Co. Ltd., 2024-01-01 |
| Series: | AI Open |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2666651024000111 |
Similar Items
- PreparedLLM: effective pre-pretraining framework for domain-specific large language models
  by: Zhou Chen, et al.
  Published: (2024-10-01)
- LLM-Guided Crowdsourced Test Report Clustering
  by: Ying Li, et al.
  Published: (2025-01-01)
- Enhancing intention prediction and interpretability in service robots with LLM and KG
  by: Jincao Zhou, et al.
  Published: (2024-11-01)
- Sentiment Analysis of Product Reviews Using Machine Learning and Pre-Trained LLM
  by: Pawanjit Singh Ghatora, et al.
  Published: (2024-12-01)
- BdSentiLLM: A Novel LLM Approach to Sentiment Analysis of Product Reviews
  by: Atia Shahnaz Ipa, et al.
  Published: (2024-01-01)