
jazznet Dataset


The paper describing the jazznet dataset will appear at ICASSP 2023. Read the preprint.

Table of contents

- Why the dataset was created
- How the dataset was created
- What can you do with the data?
- Download the data
- Samples
- Generate new data
- Citing jazznet
- Contact
- License

The jazznet dataset is an extensible dataset containing 162,520 labeled piano patterns: chords, arpeggios, scales, and chord progressions, along with their inversions, in all keys of the 88-key piano. This amounts to ~95 GB and more than 26K hours of audio. The patterns are guided by the jazz piano genre but encompass other genres, like country, pop, and blues. You can use the dataset as-is or easily generate new data using the provided piano pattern generator.

The figure below depicts the taxonomy of the dataset.

[Figure: taxonomy of the jazznet dataset]

The table below depicts the dataset's statistics.

[Table: jazznet dataset statistics]

Why the dataset was created

The dataset was created for machine learning research in music information retrieval (MIR). In general, datasets in the music arena seem to lag behind those in other areas, like image recognition. So, this dataset aims to contribute to the body of large-scale datasets available for MIR research.

The general motivation behind the dataset is simple: if you wanted to learn jazz piano, you could listen to several jazz pieces and try to replicate them. Or you could learn the fundamentals of jazz piano (chords, scales, arpeggios, chord progressions) and build on them; this would be a more effective way to learn. So, the dataset aims to mimic the data that would be used for effective learning: it contains automatically generated "fundamental jazz piano patterns".

Another reason (perhaps the most important reason) why the dataset was created is that I enjoyed doing it. It was very time-consuming to curate the dataset, develop the pattern generator, ascertain the correctness of the generated patterns and labels, etc. But it was fun, and I hope you find it useful.

The dataset is by no means "complete". For instance, it currently does not contain musical auditory attributes like dynamics and rhythmic variations, and it might not contain some of your favorite chords or progressions (partly because I probably don't know you and wasn't trying to please you). But it is extensible: with the provided piano pattern generator, you can easily generate tons of new patterns. Furthermore, the scripts are open-source, so you can easily modify them to support even more piano patterns.

How the dataset was created

A bit of background music theory is helpful for understanding how the dataset was created (though it is not necessary for using the dataset). To determine which patterns to include, several resources were surveyed, including jazz piano education books (like The Jazz Piano Book) and numerous jazz standards, to identify the basis for the most popular patterns.

The dataset is automatically generated using what I call distance-based pattern structures (DBPS). DBPS describes the structure of each musical pattern based on the distance between successive pitches within the pattern. This approach enables the data to be automatically generated in all keys of the piano using symbolic MIDI representation, where the distance between two adjacent pitches is 1 (i.e., a semitone). For example, the major triad, with note0, note1, and note2, can be represented using the structure [4 3], meaning note1 is 4 pitches from note0, and note2 is 3 pitches from note1. This holds in all keys of the piano. Similarly, the chords within progressions can be described using Roman numerals and generated based on the DBPS approach.
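
To make the DBPS idea concrete, here is a minimal Python sketch (this is not the repository's actual generator; the STRUCTURES dictionary and expand function are illustrative names). It expands the [4 3] major-triad structure from a root note and sweeps the root across the 88-key range, MIDI notes 21 (A0) to 108 (C8):

```python
# A minimal sketch of the DBPS idea: a pattern is a list of semitone
# distances between successive pitches, and transposing the root across
# all 88 keys yields the pattern in every key.

# Illustrative structures; [4, 3] (major triad) is taken from the text above.
STRUCTURES = {
    "major-triad": [4, 3],  # note1 = root + 4, note2 = note1 + 3
    "minor-triad": [3, 4],
}

def expand(root_midi, distances):
    """Return the MIDI note numbers of a pattern rooted at root_midi."""
    notes = [root_midi]
    for d in distances:
        notes.append(notes[-1] + d)
    return notes

# The 88-key piano spans MIDI notes 21 (A0) to 108 (C8).
for root in range(21, 109):
    notes = expand(root, STRUCTURES["major-triad"])
    if notes[-1] <= 108:  # keep the whole pattern on the keyboard
        print(root, notes)
```

The same loop works for any pattern expressible as a list of semitone distances, which is what makes the approach key-agnostic.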

What can you do with the data?

I imagine it would be an excellent dataset for a variety of challenging ML tasks. The basic tasks include machine music understanding, e.g., is some input a chord, scale, or arpeggio? These would be relatively easy for an ML model. More challenging tasks within machine music understanding would be identifying what kinds of patterns are present; e.g., is it an augmented or diminished chord, or what progression is it? Other possible tasks include automatic music transcription (related to music understanding); developing music recommendation systems based on the kinds of musical patterns a listener likes (someone who likes a Taylor Swift song might like other songs with only four chords: I, IV, V, vi); and understanding and generating music based on the patterns in the dataset (e.g., learning what arpeggios/scales are played over what chords/progressions). And probably a bunch of other interesting things. You can also test your image recognition models on spectrograms generated from the music in the dataset.
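
For the spectrogram idea, a minimal sketch using the librosa library might look like the following (the filename Cmaj7.wav is a hypothetical stand-in for any clip from the dataset):

```python
# Compute and save a mel spectrogram of a dataset clip, assuming librosa
# and matplotlib are installed.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("Cmaj7.wav")               # audio samples and sample rate
S = librosa.feature.melspectrogram(y=y, sr=sr)  # mel-scaled power spectrogram
S_db = librosa.power_to_db(S, ref=np.max)       # convert power to decibels

img = librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="mel")
plt.colorbar(img, format="%+2.0f dB")
plt.title("Mel spectrogram")
plt.savefig("Cmaj7_spectrogram.png")
```

Images like this can then be fed to an off-the-shelf image classifier for pattern-recognition experiments.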

Download the data

Details on how to download the data can be found here. Thanks to the kind people at Zenodo for hosting the dataset!

Samples

Listen to a few samples here.

Generate new data

Python scripts are provided to enable you to extend the dataset by generating numerous new patterns.

Details can be found in the Pattern Generator folder.
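
As a rough sketch of what an extension could look like (this is not the repository's generator; write_chord_midi is an illustrative helper, and the note list could come from a DBPS expansion like the one sketched earlier), a pattern can be written to a MIDI file with the mido library:

```python
# A sketch of writing a generated pattern to a MIDI file using mido.
import mido

def write_chord_midi(notes, path, duration_ticks=960):
    """Write a block chord (a list of MIDI note numbers) to a MIDI file."""
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    # Sound all notes simultaneously (time=0 means no delay between events).
    for n in notes:
        track.append(mido.Message("note_on", note=n, velocity=64, time=0))
    # Release all notes after duration_ticks.
    track.append(mido.Message("note_off", note=notes[0], velocity=64, time=duration_ticks))
    for n in notes[1:]:
        track.append(mido.Message("note_off", note=n, velocity=64, time=0))
    mid.save(path)

write_chord_midi([60, 64, 67], "c_major_triad.mid")  # C4 major triad
```

The resulting MIDI file can then be rendered to audio with a software synthesizer such as FluidSynth.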

Citing jazznet

If you use the jazznet dataset in your work, please cite it as follows:

@inproceedings{adegbija23_jazznet,
 title     = {jazznet: A Dataset of Fundamental Piano Patterns for Music Audio Machine Learning Research},
 author    = {Tosiron Adegbija},
 year      = {2023},
 booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
 organization = {IEEE}
}

Contact

If you have any questions, comments, or just want to say hi, feel free to email me (Tosi): tosiron[at]arizona.edu.

License

The project is licensed under the MIT license.