Simulated misuse of large language models and clinical credit systems
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2024-11-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-024-01306-2 |
| Summary: | Abstract: In the future, large language models (LLMs) may enhance the delivery of healthcare, but there are risks of misuse. These methods may be trained to allocate resources via unjust criteria involving multimodal data, including financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective/systemic benefit over the protection of individual rights and could facilitate AI-driven social credit systems. |
| ISSN: | 2398-6352 | 
 
       