Music meets neuroscience: A glimpse at the possible future of personalized listening

We think about listening to music as a sensory, intangible experience. Great music taps into our feelings in ways that seem impossible to identify. An article in Quartz examined how much science there can be behind music, investigating a trend that uses neuroscience to create a specific response in the listener.

One approach is being led by musician and composer Eduardo Miranda, whose work combines music with neuroscience. He has used electroencephalogram (EEG) readings to examine how listeners’ brains respond to sound. It’s a very different purpose for this technology, which is usually turned toward studying medical conditions.

In his research, Miranda shapes musical compositions to induce the listener’s brain to produce either alpha or beta waves. Those different types of waves help set a mood, whether it’s happiness, sleepiness, or neutrality.
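The article doesn’t describe Miranda’s analysis pipeline, but as a rough illustration, here is a minimal Python sketch of how alpha (8–12 Hz) and beta (13–30 Hz) activity is commonly estimated from a single EEG channel. The signal, sampling rate, and the relaxed/alert read-out are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical example: `eeg` is one channel of EEG data sampled at `fs` Hz.
# Alpha (8-12 Hz) activity is typically associated with relaxed states and
# beta (13-30 Hz) with alert ones; comparing their power is one common way
# to gauge which state a listener is drifting toward.

def band_power(eeg, fs, low, high):
    """Estimate signal power within a frequency band via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])

fs = 256                              # assumed sampling rate in Hz
eeg = np.random.randn(fs * 60)        # stand-in for one minute of recorded EEG
alpha = band_power(eeg, fs, 8, 12)
beta = band_power(eeg, fs, 13, 30)
print("relaxed" if alpha > beta else "alert")
```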

Another path toward neuroscience in music can be experienced now with Brain.fm. The website uses bot-created tracks to set a mood for the listener. The person is asked to rate the effectiveness of the track every couple of minutes, and the site adjusts the algorithm creating the music until the listener rates the track as “very effective” at putting them into the desired mindset.
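Brain.fm hasn’t published how its generator works, but the rate-and-adjust loop described above can be sketched schematically. In the Python snippet below, the track parameters, the generate_track and ask_listener helpers, and the rating scale are all hypothetical stand-ins, not the service’s actual algorithm.

```python
import random

# Schematic of the rate-and-adjust loop: generate a track, ask the listener
# to rate it every couple of minutes, and nudge the generator's parameters
# until the listener reports the track is "very effective".

RATINGS = ["not effective", "somewhat effective", "very effective"]

def generate_track(params):
    # Stand-in for the algorithmic composer: returns a description of a track.
    return f"track(tempo={params['tempo']}, intensity={params['intensity']:.2f})"

def ask_listener(track):
    # Stand-in for the periodic prompt; a real system would collect user input.
    return random.choice(RATINGS)

params = {"tempo": 60, "intensity": 0.5}
for attempt in range(10):                  # re-check after each listening interval
    track = generate_track(params)
    rating = ask_listener(track)
    print(f"{track} -> {rating}")
    if rating == "very effective":
        break                              # desired mindset reached; keep this setting
    # Otherwise adjust the generator's parameters and try again.
    params["tempo"] += random.choice([-5, 5])
    params["intensity"] = min(1.0, max(0.0, params["intensity"] + random.uniform(-0.1, 0.1)))
```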

There won’t be consumer applications of this technology for some time. But many of the building blocks that could feed into such a product have been developing nicely: smart technology and wearables are on the rise, and personalization and contextual listening are already standard features for modern music companies. “I’m very optimistic in about five or six years time we will have this thing working mainstream,” Miranda said.

Anna Washenko