PreparedLLM: effective pre-pretraining framework for domain-specific large language models

The direct application of large language models (LLMs) to specific domain tasks frequently encounters challenges due to the scarcity of domain data, variations in domain semantics, and the complexity of domain knowledge. Further pretraining of advanced foundational models on extensive domain-specific...
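The "further pretraining" the abstract refers to is, in generic terms, continued causal-language-model training of an existing base model on a domain corpus. The sketch below illustrates that generic technique with Hugging Face transformers; it is not the PreparedLLM pipeline described in the paper, and the base model name, corpus file, and hyperparameters are illustrative placeholders.

    # Minimal sketch of continued ("further") pretraining on a domain corpus.
    # NOT the paper's PreparedLLM method; "gpt2" and "domain_corpus.txt" are
    # placeholders for a real base model and domain text file.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    model_name = "gpt2"  # placeholder; any causal LM checkpoint works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Hypothetical domain corpus: one raw-text document per line.
    dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="domain-adapted-lm",
            per_device_train_batch_size=4,
            num_train_epochs=1,
            learning_rate=5e-5,  # typically lower than from-scratch pretraining
        ),
        train_dataset=tokenized,
        # mlm=False: next-token (causal) objective rather than masked LM
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

In practice, continued pretraining like this is usually followed by evaluation on domain benchmarks, which is the setting the abstract's stated challenges (data scarcity, semantic shift, knowledge complexity) apply to.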


Bibliographic Details
Main Authors: Zhou Chen, Ming Lin, Zimeng Wang, Mingrun Zang, Yuqi Bai
Format: Article
Language: English
Published: Taylor & Francis Group, 2024-10-01
Series: Big Earth Data
Online Access: https://www.tandfonline.com/doi/10.1080/20964471.2024.2396159