STAR Drums: A Dataset for Automatic Drum Transcription

Bibliographic Details
Main Authors: Philipp Weber, Christian Uhle, Meinard Müller, Matthias Lang
Format: Article
Language: English
Published: Ubiquity Press, 2025-07-01
Series: Transactions of the International Society for Music Information Retrieval
Online Access: https://account.transactions.ismir.net/index.php/up-j-tismir/article/view/244
Description
Summary: Current state‑of‑the‑art automatic drum transcription (ADT) algorithms make use of neural networks. To train such models, large amounts of annotated data are needed. We introduce the Separate–Tracks–Annotate–Resynthesize Drums (STAR Drums) dataset, derived from full audio recordings that include mixtures of drum instruments, melodic instruments, and vocals. First, we separate the music recordings into a drum stem and a non‑drum stem by applying a music source separation algorithm, then automatically annotate the drum stem with an ADT algorithm. The annotations are used for the re‑synthesis of the drum stem using sample‑based virtual drum instruments. Finally, we mix the re‑synthesized drum stem with the original non‑drum stem to obtain the final mix. In summary, STAR Drums includes annotated synthesized drum sounds mixed with real recordings of melodic instruments and vocals, offering several benefits: high temporal accuracy of annotations; training data that include recordings of instruments played by musicians, rather than solely relying on MIDI‑rendered audio; a large number of supported drum classes; the possibility to customize the final mix by, for instance, applying additional processing to the drum stem, as both drum and non‑drum stems are provided; and suitable licenses of audio files for making the dataset fully available to the research community. We demonstrate that, in the context of ADT, training with STAR Drums achieves superior performance compared to training with datasets solely relying on MIDI‑rendered data and that the synthesized nature of the drum stem does not diminish performance.
ISSN: 2514-3298
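
The summary above describes a Separate–Annotate–Resynthesize–Mix pipeline. The following is a minimal sketch of the final two steps (resynthesis from annotations and recombination with the non‑drum stem), assuming the source separation and ADT stages have already produced a non‑drum stem and a list of (onset time, drum class) annotations. The function name, the one‑shot sample dictionary, and the toy signals are illustrative assumptions, not the tools used to build the dataset.

```python
# Sketch: place one-shot drum samples at annotated onsets, then mix the
# resulting drum stem with the (separated) non-drum stem.
import numpy as np

def resynthesize_drums(annotations, samples, sr, num_samples):
    """Render annotations with sample-based one-shots.

    annotations : list of (onset_seconds, drum_class) pairs
    samples     : dict mapping drum_class -> 1-D np.ndarray (one-shot sample)
    sr          : sample rate in Hz
    num_samples : length of the output stem in samples
    """
    stem = np.zeros(num_samples, dtype=np.float64)
    for onset, drum_class in annotations:
        one_shot = samples[drum_class]
        start = int(round(onset * sr))            # exact onset position
        end = min(start + len(one_shot), num_samples)
        stem[start:end] += one_shot[: end - start]
    return stem

# --- toy usage (stand-in signals, not real dataset audio) ----------------
sr = 44100
non_drum_stem = 0.1 * np.random.randn(sr * 2)     # placeholder accompaniment
samples = {
    "kick":  np.hanning(2048) * np.sin(2 * np.pi * 60 * np.arange(2048) / sr),
    "snare": np.hanning(1024) * np.random.randn(1024),
}
annotations = [(0.0, "kick"), (0.5, "snare"), (1.0, "kick"), (1.5, "snare")]

drum_stem = resynthesize_drums(annotations, samples, sr, len(non_drum_stem))
final_mix = drum_stem + non_drum_stem             # recombine the two stems
```

Because the drum stem is rendered directly from the annotations, onset times in the final mix match the labels exactly, which is the source of the high temporal accuracy mentioned in the summary; keeping the two stems separate also allows additional processing of the drums before mixing.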