INTERVIEW: Part 2, Jim Lucchese, CEO, The Echo Nest

This is Part 2. Read Part 1 here.

You might have The Echo Nest to thank for the thing you love most about your favorite music service.

The Echo Nest is a data company that develops music intelligence technology, used by many of the most popular listening services covered by RAIN every day. Through the company’s application programming interfaces (APIs), music services can develop apps and features for their users, such as song recommendations and artist-based stations. The Echo Nest has furnished music intelligence for Spotify, Rdio, MOG, iHeartRadio, Xbox Music, and many others.



RAIN: How does The Echo Nest develop its music intelligence?

JL: Our approach to understanding music is [twofold]. On the content side, we have software that analyzes a full-length song in about three seconds, and can tell you the time signature, whether it’s live or studio, whether it’s vocal or instrumental, the structure of the song, the mood — all kinds of acoustic attributes.

We combine that with cultural analysis, where we are crawling the web, and listening to what the online world says about music every day. We’re parsing the text of about 10 million documents every day. Every blog post, news item, social media post, review — everything written about music on the web, we’re crawling it. We have the technology to understand it. We take all that understanding, and we open it up in the developer API to power a range of applications.
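The acoustic attributes Lucchese mentions (tempo, time signature, live-vs.-studio, and so on) were exposed per track through The Echo Nest’s developer API as an “audio summary.” Below is a minimal sketch of consuming such a payload; the field names follow the general shape of that API, but the values and the specific response are invented for illustration:

```python
import json

# Illustrative audio-analysis payload. Field names mirror the kind of
# per-track "audio_summary" data the interview describes; the numbers
# are made up for this example.
response = json.loads("""
{
  "artist_name": "Miles Davis",
  "title": "Someday My Prince Will Come",
  "audio_summary": {
    "tempo": 132.0,
    "time_signature": 3,
    "liveness": 0.12,
    "danceability": 0.41,
    "energy": 0.35
  }
}
""")

summary = response["audio_summary"]

# A "liveness" score near 1.0 suggests a live recording; near 0, a studio take.
# The 0.8 threshold here is an assumption, not a documented cutoff.
recording = "live" if summary["liveness"] > 0.8 else "studio"

print(f'{response["title"]}: {summary["tempo"]:.0f} BPM, '
      f'{summary["time_signature"]} beats per bar, likely a {recording} recording')
```

The point of the sketch is the shape of the data: a few seconds of analysis yields a compact bundle of acoustic attributes that downstream apps can filter and rank on.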

RAIN: You mentioned the three-second analysis. It’s natural to compare that kind of computer-based process with Pandora’s human-powered Music Genome Project. Can you give me some kind of qualitative comparison of what you get in three seconds, and the results Pandora gets in its more labor-intensive process?

JL: I don’t know enough about the Genome to do a one-for-one comparison. Pandora also has an exceptional amount of user-interaction data as well. I’m a fan of what they built. We have different technology approaches. But they’re doing a really good job.

We often hear a man-vs.-machine theme in music discovery, and I generally think it’s a myth. There’s nobody at The Echo Nest who doesn’t think that music is ultimately about people connecting with it, and how personal that is. What we’re trying to do is harness and understand how tens of millions of people are experiencing music all the time.

What we do obviously scales. We cover a lot more ground a lot more quickly [than Pandora]. But we’re actually bringing in the voices of a lot more people, which eliminates a lot of human bias. Danceability is a good example. To get a subjective measurement like that, we get experts to rate songs for whatever characteristic we’re trying to determine. Then, because we’ve got the underlying audio analysis, we’re able to machine-learn what that sound is like, and assign it to other songs. We’ve got real people, music experts, determining whether we’re getting that right. So there is a lot of human editorial subjectivity inherent in what we do, but we do it in a way that scales. We’re crowd-sourcing the entire base of music fans to understand how they’re describing every artist, song, and album. We’re able to synthesize all that.
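The danceability workflow Lucchese describes — experts score a seed set of songs, then a model generalizes those scores to new songs via their acoustic features — can be sketched with a toy nearest-neighbor regressor. The songs, features, and scores below are invented; the real system’s features and model are not public:

```python
# Expert-labeled seed set: (tempo_norm, beat_strength) -> danceability score, 0..1.
# All values here are hypothetical stand-ins for expert ratings.
labeled = {
    ("disco_track", (0.90, 0.95)): 0.92,
    ("punk_track",  (1.00, 0.70)): 0.60,
    ("ballad",      (0.30, 0.20)): 0.15,
    ("ambient_cut", (0.10, 0.05)): 0.05,
}

def predict_danceability(features, k=2):
    """Average the expert scores of the k acoustically closest labeled songs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(labeled.items(),
                     key=lambda item: dist(item[0][1], features))[:k]
    return sum(score for _, score in nearest) / k

# A new, unlabeled song with a fast tempo and a strong beat lands near the
# disco and punk seeds, so it inherits a high score.
print(predict_danceability((0.85, 0.90)))
```

This is only a sketch of the idea that expert judgment supplies the labels while machine learning supplies the scale; any production system would use far richer features and a trained model rather than raw nearest neighbors.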

Another critical element that we’ve spent a lot of time in the last two years thinking about is that music is a cultural output. It’s the backbone of a culture. It’s different all over the world. When we started digging into the music of India, it opened up a whole new world. The harmonic structure is different. What key is the song in? It’s a whole different bag. What time is it in? Well, it’s in seventeen. Most Western music is in four, three, or six, and you’re pretty much done. The point is, when you start thinking about this globally, there are so many biases that you don’t understand until you dig into the music. Having an underlying technology platform that starts with a deep cultural understanding of that music and its fans is critical.

There’s no question you need people to understand music. But you need people and technology if you’re going to scale and completely understand all the music out there in a really nuanced way.

RAIN: Do you have a music background?

JL: I’m a drummer, not a musician. Probably 75 percent of the people here are musicians or serious DJs, or have some background in playing. That’s one of the cool things about the company. People can combine their obsession for music with their ability to write code. And it’s critical. The problems that we’re solving are big, thorny problems, but if you’re a huge music fan, they are some of the most fun problems out there.

RAIN: In the office, is everyone buried in their headphones, or is there music in the air?

JL: I have to moderate my response, because I could talk about this for an hour! We have a communal music queue in the office, where anyone can put on a song. We have speakers throughout the office. One of our guys wrote an app tying together the two offices, so in San Francisco and Boston, we’re listening to the same thing. You can put a song on the common queue right now. Anyone can delete that song. We have a chat area where you can comment and complain about selections. It has really taken off. It’s every man for himself. This morning … [pause as Lucchese brings up the Echo Nest office music app] … right now Dire Threat is on; before that it was Miles Davis’ “Someday My Prince Will Come”; coming up is a whole bunch of straight-edge punk. It’s chaos every day.

RAIN: Does anybody get any work done?

JL: It’s part of the work.

RAIN: Sounds like fun. Wouldn’t that make a good public-facing service?

JL: [laughs] We get that a lot with the stuff we build in the office.

RAIN: It’s easy to imagine your office’s social music queue as a Spotify app.

JL: We’re hacking on this stuff a lot, and sometimes share with our customers, and hopefully, sometimes, it influences their product direction.

Brad Hill