FakeMusicCaps: A Dataset for Detection and Attribution of Synthetic Music Generated via Text-to-Music Models
Text-to-music (TTM) models have recently revolutionized the field of automatic music generation, specifically by generating music that sounds more plausible than that of all previous state-of-the-art models and by lowering the technical proficiency needed to use them. For these reasons, th...
| Main Authors: | Luca Comanducci, Paolo Bestagini, Stefano Tubaro |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-07-01 |
| Series: | Journal of Imaging |
| Online Access: | https://www.mdpi.com/2313-433X/11/7/242 |
Similar Items

- A Study on of Music Features Derived from Audio Recordings Examples – a Quantitative Analysis
  by: Aleksandra DOROCHOWICZ, et al.
  Published: (2018-07-01)
- Neural audio instruments: epistemological and phenomenological perspectives on musical embodiment of deep learning
  by: Victor Zappi, et al.
  Published: (2025-08-01)
- Emotional response to music: the Emotify + dataset
  by: Abigail Wiafe, et al.
  Published: (2025-07-01)
- MusiQAl: A Dataset for Music Question–Answering through Audio–Video Fusion
  by: Anna-Maria Christodoulou, et al.
  Published: (2025-07-01)
- Interacting with Annotated and Synchronized Music Corpora on the Dezrann Web Platform
  by: Charles Ballester, et al.
  Published: (2025-05-01)