I’m just a layperson here (not even a SWO), but the paper from #1 in your Quick Hits looks to me like model overfitting. I’m sure I’m mistaken somewhere, but they trained their model on only 10 tracks, and their held-out song (trigger 21) wasn’t one of the examples on their linked website. I have a hunch that they’re not “reconstructing” the song so much as mapping a few rough data points onto one of the 10 training tracks.
It’s still impressive that they could extract enough signal from 1 kHz data to match a 41 kHz track, but the model might be overfitting to a few salient data points and then reconstructing the track from the training set rather than from the EEG data.
Am I missing something here?
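To make the hunch concrete, this is the kind of sanity check I have in mind. Nothing here is from the paper; the embeddings, names, and similarity metric are placeholders. If each “reconstruction” of the held-out song sits closer to one of the 10 training tracks than to the held-out audio itself, that would look more like retrieval than reconstruction.

```python
# Hypothetical retrieval check, not the paper's method. Assumes you already
# have feature embeddings for the reconstruction and for the 10 training tracks.
import numpy as np

def nearest_training_track(recon_embed, train_embeds):
    """Return the index of the training track whose embedding is closest
    to the reconstruction's embedding (cosine similarity)."""
    recon = recon_embed / np.linalg.norm(recon_embed)
    train = train_embeds / np.linalg.norm(train_embeds, axis=1, keepdims=True)
    return int(np.argmax(train @ recon))

# If reconstructions of the held-out song (trigger 21) consistently fall
# nearer to a training track than to the held-out audio, that supports the
# "mapping, not reconstructing" reading.
```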
Totally possible. My audience is fairly generalist, so I don’t always like to get into the weeds. The main contribution here, imo, is that the out-of-distribution (OOD) track prediction was better than baseline when using the EEG data. The broader context is papers like MindEye 2 (image reconstruction from fMRI) and recent advances in language reconstruction. Those findings lead me to believe that EEG data will work for music reconstruction, especially given what we know about how music is stored and processed in the brain.
But broadly speaking, you’re raising a legitimate concern about this paper. I wouldn’t want to exaggerate the impact of this study. As with many neuro/AI studies, the binding constraint is data.
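For what it’s worth, here’s a rough sketch of what “better than baseline” could mean for the held-out track. This is not the paper’s evaluation; the features and the permutation null are placeholders just to show the shape of the comparison.

```python
# Illustrative permutation-style baseline, assumed setup (not from the paper):
# compare the reconstruction's similarity to the true target features against
# its similarity to shuffled copies of those features.
import numpy as np

def similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def permutation_baseline(recon, target, n_perm=1000, seed=0):
    """Return the true similarity and the fraction of shuffled targets the
    reconstruction matches at least as well (a rough empirical p-value)."""
    rng = np.random.default_rng(seed)
    true_score = similarity(recon, target)
    null = [similarity(recon, rng.permutation(target)) for _ in range(n_perm)]
    return true_score, float(np.mean([s >= true_score for s in null]))
```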