Generating music with resting-state fMRI data

Dataset type: Imaging, Neuroscience
Data released on October 26, 2016

Froehlich C; Dekel G; Margulies DS; Craddock RC (2016): Generating music with resting-state fMRI data. GigaScience Database. http://dx.doi.org/10.5524/100224

DOI: 10.5524/100224

Resting-state fMRI (rsfMRI) data generate time courses with unpredictable hills and valleys. People with musical training may notice that, to some degree, these time courses resemble the notes of a musical scale.
Taking advantage of this similarity, and using only rsfMRI data as input, we apply basic rules of music theory to transform the data into musical form. Our project is implemented in Python using the midiutil library.
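
The mapping itself can be quite compact. Below is a minimal sketch of the general idea, assuming a single ROI time course as a 1-D array; the C major scale, the function name, and the quantization rule are illustrative choices, not the authors' exact rules (their full implementation is in the GitHub repository linked below).

import numpy as np
from midiutil import MIDIFile

def timecourse_to_midi(timecourse, out_path="roi.mid", tempo=120):
    # Quantize a single ROI time course onto a C major scale (MIDI pitches C4-C5).
    c_major = [60, 62, 64, 65, 67, 69, 71, 72]

    # Rescale the signal to [0, 1], then map each sample to a scale degree.
    x = np.asarray(timecourse, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    degrees = np.minimum((x * len(c_major)).astype(int), len(c_major) - 1)

    midi = MIDIFile(1)  # a single track
    midi.addTempo(track=0, time=0, tempo=tempo)
    for beat, degree in enumerate(degrees):
        # One quarter note per fMRI time point.
        midi.addNote(track=0, channel=0, pitch=c_major[degree],
                     time=beat, duration=1, volume=100)

    with open(out_path, "wb") as f:
        midi.writeFile(f)

# A synthetic signal with "unpredictable hills and valleys" stands in for real data.
timecourse_to_midi(np.cumsum(np.random.randn(64)))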
We used open rsfMRI data from the ABIDE dataset, preprocessed by the Preprocessed Connectomes Project. We randomly chose 10 individual datasets preprocessed with the C-PAC pipeline under 4 different strategies. To reduce the data dimensionality, we used the CC200 atlas to downsample the voxels to 200 regions of interest.
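
For readers who want to reproduce the dimensionality-reduction step, here is a sketch using nilearn's NiftiLabelsMasker; nilearn and the file names are assumptions for illustration, since this page only specifies that the CC200 atlas was used.

from nilearn.input_data import NiftiLabelsMasker

# The atlas and functional file names below are placeholders.
masker = NiftiLabelsMasker(labels_img="cc200_atlas.nii.gz", standardize=True)

# Yields a (time points x 200 regions) array: one time course per ROI.
roi_timecourses = masker.fit_transform("rest_preprocessed.nii.gz")
print(roi_timecourses.shape)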
A framework for generating music from fMRI data, based on music theory, was developed and implemented as a Python tool yielding several audio files. When listening to the results, we noticed that the music differed across individual datasets. However, music generated from the same individual's data (across the 4 preprocessing strategies) remained similar. Our results sound different from the music obtained in a similar study using EEG and fMRI data.

Additional details

Read the peer-reviewed publication(s):

doi:10.1186/s13742-016-0147-0
Related datasets:

doi:10.5524/100224 IsPartOf doi:10.5524/100215

Additional information:

https://github.com/carolFrohlich/brain-orchestra

Dataset history:
October 26, 2016: Dataset published
November 17, 2016: Manuscript link added: doi:10.1186/s13742-016-0147-0