Gracenote develops Sonic Style, a new metadata category for classifying music

Venerable media metadata company Gracenote (a Nielsen subsidiary) today announced a new technology product called Sonic Style, which classifies tracks into micro-categories that more precisely match what consumers wish to hear in playlists. The company says this is the first time that precise music style classification can be scaled across massive catalogs of music. “Sonic Style provides the music industry with a powerful and scalable new dataset at the recording level, enabling a more perfect playlist,” according to Gracenote’s press announcement.

The idea behind this new music intelligence layer is to break apart standard genre classifications (Rock, Hip Hop, etc.) and match tracks to playlists in a way that assists curators and pleases listeners. Gracenote uses Taylor Swift as an example. Her catalog as a whole is usually put in the Pop or Country bucket, whereas a track-by-track analysis surfaces slivers like Pop Electronica, R&B, and other fine-grained styles.

“These new turbo-charged style descriptors will revolutionize how the world’s music is organized and curated, ultimately delivering the freshest, most personalized playlists to keep fans listening,” said Brian Hamilton, General Manager of Music and Audio.

In a Q&A with Gracenote, RAIN News learned that the company has developed nearly 450 sonic styles into which tracks can be categorized. If that number sounds familiar to music intelligence geeks, it’s probably because Pandora boasts the same number of music attributes analyzed by its human-powered Music Genome. Sonic Style is different, though: each of its 450 styles is an end point, the classification where a track ultimately lands within the Sonic Style system.
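To make the recording-level distinction concrete, here is a minimal sketch of how such data might be represented, under the assumption (not confirmed by Gracenote) that each track resolves to a single style end point drawn from the roughly 450-entry taxonomy. All field names and values are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical illustration only; Gracenote has not published its schema.
# The point is that the style label lives at the recording level, so two
# tracks by the same artist can land on different end points.

STYLE_TAXONOMY = {
    "pop_electronica": "Pop Electronica",
    "contemporary_rnb": "Contemporary R&B",
    "country_pop": "Country Pop",
    # ... up to roughly 450 style end points
}

@dataclass
class TrackStyleRecord:
    track_id: str       # recording-level identifier
    artist: str
    artist_genre: str   # coarse catalog-level bucket ("Pop", "Country")
    sonic_style: str    # one of the ~450 fine-grained end points

tracks = [
    TrackStyleRecord("rec-001", "Taylor Swift", "Pop", "pop_electronica"),
    TrackStyleRecord("rec-002", "Taylor Swift", "Pop", "contemporary_rnb"),
]

# A playlist builder can filter on the fine-grained style rather than
# the artist-level genre.
electronica_leaning = [t for t in tracks if t.sonic_style == "pop_electronica"]
print([t.track_id for t in electronica_leaning])  # -> ['rec-001']
```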

Gracenote notes that Sonic Style is useful in the growing voice-first world of smart speakers and voice-controlled assistants. In that realm, music search and discovery could be driven by natural language commands rather than requests for known music genres. We asked Gracenote who its target customers are: smart-speaker OS builders like Amazon and Google? Music services that distribute on those speakers? Auto infotainment systems?

The reply: “Essentially anyone who is driving search, playlisting, discovery and recommendations of music in order to maximize consumption and engagement.  This includes digital service providers, CE device/smart speaker manufacturers and TV providers as well as music labels and publishers.”
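As a rough illustration of why recording-level metadata matters for voice, here is a toy sketch of how a natural-language request could be resolved against mood and style attributes rather than a named genre. The query phrases, tags, and catalog entries are all invented for the example, not drawn from Gracenote’s product.

```python
# Toy resolver: maps free-form voice requests to metadata filters instead of
# literal genre names. Everything here (phrases, tags, catalog) is invented.

QUERY_TO_FILTER = {
    "wake-up music": {"mood": "energetic"},
    "workout playlist": {"mood": "high-energy"},
    "something mellow for dinner": {"mood": "mellow"},
}

CATALOG = [
    {"track": "Song A", "mood": "energetic", "style": "Pop Electronica"},
    {"track": "Song B", "mood": "mellow", "style": "Acoustic Singer-Songwriter"},
    {"track": "Song C", "mood": "high-energy", "style": "Electro House"},
]

def resolve(voice_request: str) -> list[str]:
    """Return track names whose metadata satisfies the request's filters."""
    filters = QUERY_TO_FILTER.get(voice_request.lower().strip(), {})
    if not filters:
        return []  # unrecognized request in this toy example
    return [
        item["track"]
        for item in CATALOG
        if all(item.get(key) == value for key, value in filters.items())
    ]

print(resolve("Workout playlist"))  # -> ['Song C']
```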

Of course, Gracenote is not the first team to specialize in algorithmic understanding of music and in matching songs to listeners. In addition to Pandora’s Music Genome, Spotify acquired The Echo Nest years ago and has integrated that technology into the data-rich understanding of music that helps power Discover Weekly and other personalized playlists. Gracenote itself has been in the game for decades, building an iconic reputation as a metadata master that can match audio waveforms to published recordings.

We asked how Gracenote is differentiating Sonic Style from those efforts. “Metadata is at the core of voice search. The more descriptive information we have about content (in this case songs), the better results we can retrieve for music-related voice searches. We already have mood data that is enabling activity- and mood-based playlists such as ‘play wake-up music’ or ‘play a workout playlist.’ Gracenote Sonic Style data now takes the guesswork out of playlisting and ensures the most relevant songs are surfaced by analyzing the unique audio characteristics against our ‘style’ taxonomy.”
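Gracenote has not described how that analysis works internally. As a generic sketch of the kind of technique the quote implies, here is a nearest-centroid comparison of a track’s audio feature vector against per-style reference vectors; the feature names, values, and styles are made up for illustration.

```python
import math

# Generic sketch, not Gracenote's method: represent each style as a reference
# vector of audio characteristics, then assign a track to the closest one.

STYLE_CENTROIDS = {
    "Pop Electronica":  {"tempo": 0.70, "distortion": 0.20, "synthetic": 0.90},
    "Country Pop":      {"tempo": 0.55, "distortion": 0.30, "synthetic": 0.25},
    "Contemporary R&B": {"tempo": 0.45, "distortion": 0.15, "synthetic": 0.60},
}

def distance(a: dict, b: dict) -> float:
    """Euclidean distance over the shared feature keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def classify(track_features: dict) -> str:
    """Return the style whose reference vector is closest to the track."""
    return min(STYLE_CENTROIDS, key=lambda s: distance(track_features, STYLE_CENTROIDS[s]))

# A hypothetical track: fast, clean, heavily synthetic.
print(classify({"tempo": 0.75, "distortion": 0.18, "synthetic": 0.85}))
# -> Pop Electronica
```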

It’s not all machine work. Gracenote emphasized to RAIN that humans are involved at every step: human editors defined the hundreds of styles and are continually refining the algorithm.

Brad Hill