Bicycles, Butlers, Stans, and Stories: A Framework for Understanding How AI Can Shape the Future of Music

John Greene
11 min read · May 28, 2024


When Drake released his AI-assisted diss track, he brought Tupac back to life (again), used AI to give Snoop a verse (though Snoop is still very much alive), and stirred up a bunch of questions. Is this another sign that the robots are taking over? Is this a part of an elaborate promotional campaign for an AI startup? Is any of this legal?

Though these questions are valid (as is the question of whether it’s a good idea to antagonize Kendrick), I believe the popular discussion around AI and the future of music is narrow and limiting. So I’d like to share a simple framework that maps the different kinds of opportunities AI can create for both artists and fans. My aim is to illustrate how expansive those opportunities are, and how they can be fantastic for artists and fans alike, rather than the existential threat that so many in the legacy music industry make AI out to be.

Like lots of things in business (and life), this framework breaks down into a four-quadrant chart (though, in this rare case, the answer isn’t just to pursue the upper-right quadrant).

Let’s get into it.

The X-Axis: Consume ↔ Create

Artists and fans have always been the two fundamental building blocks of the music business. As I’ve explored previously, a lot is changing about the nature of the relationship between artists and fans: most notably, we are all increasingly becoming creators in different ways. So the traditional definition of artists as the ones who create and fans as the ones who passively consume no longer holds; the line between creation and consumption is increasingly blurred.

GenAI is going to meaningfully transform how music is created and consumed, changing the nature of both artistry and fandom. To explore how this is going to be the case, let’s add in the y-axis.

The Y-Axis: Lean Back ↔ Lean Forward

To explore how AI is going to reshape music artistry and fandom, I think the most interesting axis to add runs from “leaning back” to “leaning forward”. What I mean by this is how much time and energy the artist or fan puts into the experience, whether because they want to (they love the experience so much that they lean in and go deeper) or because they have to (the experience demands an investment of time and money).

To be clear, this axis is neither a value judgment nor is it static: just because one experience is “lean back” does not mean that it can’t be amazing, and just because another experience requires “lean forward” energy today doesn’t mean that it will tomorrow.

Now that we’ve got the two axes, let’s look at how the framework comes together.

Quadrant 1: Bicycles, and How AI Can Strengthen the Craft of Song-Making

For every one story about a great song appearing fully formed to an artist in their dream, there are thousands of stories about an artist spending three weeks to find the right pitch for a single drum note. Making a great song has, historically, involved a lot of hard work and a non-trivial amount of suffering.

With the rise of GenAI, many have speculated that the craft of song-making will become a relic of the past. I disagree. AI is going to transform the craft of great song-making and, in a way, enable artists to scale the suffering that is inherent in the process of creation.

The best example of this is Mad Men. In the writers’ room of Mad Men, showrunner Matthew Weiner had a simple rule: whenever they were coming up with a major plot development, the writers vowed to throw out the first five ideas they came up with. The reasoning was simple: even though the writers who worked on Mad Men were talented, and even though at least some of those first five ideas were probably pretty strong, their commitment to greatness meant committing to push past the obvious.

When we think about AI-assisted music, it’s important to remember that GenAI is, at its core, a tool that completes patterns. Based on all of the data used to train it, GenAI will very quickly produce output that fits the patterns in that data. If asked to create a club track about dancing all night, it will generate a song that fits the pattern of every club track already in its training set.

Because of AI’s ability to instantaneously match the patterns of music that have been created to date, it is an outstanding tool to help an artist come up with those “first five ideas” that they can then set aside as they seek something fresh. Much like Steve Jobs thought of a computer as a “bicycle for the mind,” AI can be a bicycle for the craft of song creation: scaling and accelerating the creative exploration of artists so they can go faster and farther than ever before.
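To make the “first five ideas” workflow concrete, here is a minimal sketch of how an artist (or their tooling) could batch-generate the obvious, pattern-matching takes up front so they can be set aside. It assumes access to OpenAI’s Chat Completions API purely for illustration; any text-generation model would do, and the prompt and helper name are my own invention, not an existing product feature.

```python
# A minimal sketch of the "first five ideas" workflow described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; any text-generation model could stand in here.
from openai import OpenAI

client = OpenAI()

def first_five_ideas(brief: str, n: int = 5) -> list[str]:
    """Ask the model for the n most obvious takes on a creative brief.

    These are the pattern-matching ideas the artist deliberately sets
    aside (the Mad Men rule) before hunting for something fresh.
    """
    response = client.chat.completions.create(
        model="gpt-4o",          # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                "Give me one obvious, conventional concept for this song "
                f"brief, in two sentences: {brief}"
            ),
        }],
        n=n,                     # request n independent completions
        temperature=1.0,         # some variety, but still "the obvious"
    )
    return [choice.message.content for choice in response.choices]

# The artist reviews these, throws them out, and starts digging past them.
for idea in first_five_ideas("a club track about dancing all night"):
    print("-", idea)
```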

Quadrant 2: Stans, and How AI Can Personalize the Artist-to-Fan Relationship

For all that has been written about how AI could affect artists, surprisingly little has been written about how AI could transform what it means to be a music fan: specifically, a music superfan. Now, one might think that AI will never transform the life of a superfan because, after all, who will ever become a superfan of music made by GenAI? I’d argue that the far more interesting way to consider the role of AI for superfans is how AI can transform how a superfan interacts with the human artist they love. AI becomes an interface for superfandom, not a replacement for the human artist. To dig into this a little bit more, let’s enlist the help of Bill Gates.

In an essay written at the end of last year, Bill Gates argued that AI is about to completely change how you use computers (this was not the novel part of his argument). The exciting bit came when he described the emergence of AI-enabled agents: software interfaces “that respond to natural language and can accomplish many different tasks based on its knowledge of the user.” But what on earth could an AI-powered digital assistant have to do with music superfans? This is a huge (and largely unexplored) area of opportunity. To see why, let’s dig into the nature of superfans.

At its simplest, a superfan is someone who has an irrational level of enthusiasm for an artist. Fueled by this enthusiasm, a superfan is constantly seeking to gain greater emotional proximity to the artist they love. Far beyond wanting to listen more, superfans want to understand more, they want to see more, they want to feel more. And yes, they’re very interested in buying more. But, importantly, superfans are all different from each other, and the nature of their superfandom changes constantly. And that’s where AI can come in.

Right now, the interfaces between an artist and their superfans tend to be one-size-fits-all. You’re all seeing the same social posts, you’re all visiting the same website, you’re all standing in the same merch line.

But if you think about AI as an interface (an agent) that facilitates the connection between a superfan and the artist they love, what is now one-size-fits-all could become completely customized. An AI agent can go a long way toward knowing what I already know about the artist I love, the kind of merch I like, the shows I’ve already been to, and on and on. With that knowledge, an AI agent can fuel my superfandom in a way that is tailored to me specifically, keeping me constantly surprised (and wanting more).
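As a thought experiment, here is a small sketch of what that agent’s personalization layer could look like. Everything here is hypothetical: the fan-profile fields, the update types, and the ranking rule are stand-ins for whatever a real artist team’s data (and a real model) would provide.

```python
# A hypothetical sketch of an AI agent's personalization layer for superfans.
# All names and rules here are illustrative stand-ins, not a real product API.
from dataclasses import dataclass, field

@dataclass
class FanProfile:
    name: str
    owned_merch: set[str] = field(default_factory=set)
    attended_shows: set[str] = field(default_factory=set)   # city names
    seen_updates: set[str] = field(default_factory=set)     # update ids
    home_city: str = ""

@dataclass
class ArtistUpdate:
    update_id: str
    kind: str        # e.g. "merch_drop", "tour_date", "behind_the_scenes"
    city: str = ""   # only meaningful for tour dates
    item: str = ""   # only meaningful for merch drops

def personalize(feed: list[ArtistUpdate], fan: FanProfile) -> list[ArtistUpdate]:
    """Filter the one-size-fits-all feed down to what this fan hasn't
    already seen, bought, or attended, and surface local shows first."""
    fresh = [
        u for u in feed
        if u.update_id not in fan.seen_updates
        and u.item not in fan.owned_merch
        and u.city not in fan.attended_shows
    ]
    # Local tour dates are the most actionable, so rank them to the top.
    return sorted(fresh, key=lambda u: 0 if u.city == fan.home_city else 1)

# Example: two fans would get two different views of the same artist feed.
feed = [
    ArtistUpdate("u1", "merch_drop", item="tour hoodie"),
    ArtistUpdate("u2", "tour_date", city="Chicago"),
    ArtistUpdate("u3", "behind_the_scenes"),
]
fan = FanProfile("Ari", owned_merch={"tour hoodie"}, home_city="Chicago")
for update in personalize(feed, fan):
    print(update.kind, update.city or update.item)
```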

Quadrant 3: Butlers, and How AI Can Usher in the Post-App Era of Music Streaming

Since the arrival of streaming apps (such as Spotify and Apple Music), most people have assumed that apps like these will be the way music is consumed for the foreseeable future. I disagree. I believe that apps are an era of music consumption (much as CDs were), and that this era will last only until new technology replaces it (just as streaming replaced CDs).

AI has the opportunity to usher in the post-app era of lean-back music listening. Tech titans like Apple and Google already know my whereabouts, my calendar, my daily routine, and my preferences. With this information, they already have a very good idea of what music I would love at any moment in time. Don’t make me open a music app at the same time that I always work out and then track down a workout playlist. Don’t make me tell you information you already know, or do tasks that should be automatic. Instead, like a great butler, serve up the music I want to listen to without me ever having to lift a finger. With AI having developed to where it is today, there is a terrific opportunity for this to fully come to life.

The advent of the post-app music streaming era is really good news for Apple (and potentially Google) and could be really bad news for Spotify. For years, Apple has inexplicably chosen to compete with Spotify as Apple Music and not as Apple. That is, Apple has tried to compete with Spotify as if Apple Music were a standalone product and company, rather than a key ingredient in the broader Apple experience. And, competing with (at least) one hand tied behind its back, Apple has continued to lag far behind Spotify (with a 13.7% market share compared to Spotify’s 30.5%).

What AI can enable Apple to do is leverage contextual clues (your calendar, your whereabouts, etc.) to lift music out of a standalone app and into constantly dynamic recommendations on the home screen of your iPhone. More specifically, AI can make music a seamless part of everyday activities, rather than music apps insisting on being a destination. Melding music and context would make for a more amazing listening experience, would lead to more music being streamed, and would open up new use cases for how music can enhance the different parts of our lives.
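To make the “butler” idea concrete, here is a rough sketch of how context signals could be mapped to a listening session without the user ever opening an app. The context fields, playlist seeds, and rules below are all hypothetical; in practice this logic would live inside the OS and be learned from behavior rather than hand-written.

```python
# A rough, hypothetical sketch of context-to-music mapping for the
# "butler" experience: the rules and names below are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    now: datetime
    location: str          # e.g. "gym", "home", "commute"
    calendar_event: str    # e.g. "workout", "dinner party", or ""
    is_moving: bool        # rough signal from accelerometer/GPS

def pick_session(ctx: Context) -> str:
    """Map contextual clues to a playlist seed, so music starts
    without the listener ever opening a music app."""
    if ctx.location == "gym" or ctx.calendar_event == "workout":
        return "high-energy workout mix"
    if ctx.calendar_event == "dinner party":
        return "warm dinner-party backdrop"
    if ctx.is_moving and ctx.location == "commute":
        return "commute queue: new releases"
    if ctx.now.hour >= 22:
        return "wind-down ambient"
    return "daily mix based on recent listening"

# Example: 6 a.m. at the gym -> the workout mix is already playing.
ctx = Context(datetime(2024, 5, 28, 6, 0), "gym", "workout", is_moving=False)
print(pick_session(ctx))
```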

Quadrant 4: Stories, and How AI Can Create a New Language of Social Expression

One of the most powerful aspects of music is its ability to express meaning and emotion more powerfully than words ever could. If a picture is worth a thousand words, a song is worth ten thousand. The issue, however, is that songwriting has historically been available only to those with the talent (and dedication) to create a powerful song. For example, I think of myself as a reasonably good writer, but crafting song lyrics (let alone the music to accompany them) feels far beyond my reach.

The same used to be true about photography. It wasn’t long ago that taking a great photograph involved expensive equipment, loads of arcane technical knowledge, and a lot of practice. But once digital cameras effectively eliminated the cost of taking a photograph and mobile phones eliminated the need for extra equipment, the stage was set for Instagram to radically democratize photography. What once was necessarily a lean-in form of creation became one that could be lean-back (and also great).

And that’s when things got really interesting. Because, when photography became lean-back creation in terms of the time/skill/money required, people didn’t just take more photographs. Photography became a democratized language of expression. What once would have been a set of words in a social post describing a feeling or experience became a photograph (or short video) that expressed that feeling or experience better than words ever could. And the same is about to happen with music.

As AI-powered tools like Suno and Udio have burst onto the scene, there has been lots of speculation and hand-wringing about whether any of these AI-generated songs could become hits. This discussion entirely misses the true use case for these emergent tools.

It’s not very interesting to debate whether an AI-generated song will ever ring up 400 million streams. The far more interesting discussion is how to democratize the “lightweight” creation of songs in a way that enables 400 million people to express themselves through music more powerfully than words (or photos) ever could. This isn’t about AI-assisted music competing with humans for streams; it’s about radically increasing the amount of music that is streamed by adding an entirely new type of music: millions of songs meant to be conversations among friends and family, not chart-toppers. Rather than searching for a song that sort-of-kind-of matches the feeling you want to express through your IG story, you could instead create a custom song that precisely expresses how you and you alone feel at that moment.

Summing Up: It’s Time to Stop Bickering and Start Building

Right now, the dominant dialogue around AI and the future of music is bogged down in bickering about who is stealing from whom. Clearly, there are important IP issues to be sorted out, as is always the case with the advent of any new technology. But this bickering is all too familiar: for decades now, the various constituents of the music industry have spent most of their time arguing over who gets a bigger slice of the status quo. Meanwhile, industries like video gaming have spent their time creating innovations that radically increased the size of the entire industry. The end result is embarrassing: major label execs high-five each other because the music industry has grown ~17% over the past twenty years, when they should be mortified that the video game industry has grown more than 700% over the same period.

It’s time to stop bickering about AI’s impact on the music industry status quo and start focusing on creating ideas made possible by AI that will dramatically grow the entire music industry. With this framework, I think it is clear how GenAI can strengthen the craft of songwriting, personalize the relationship between artist and fan, usher in the post-app era of music streaming, and create an entirely new language of social sharing. I believe that each of these ideas is rich with value creation, and none of them compete with each other. So let’s get after those bicycles, butlers, stans, and stories.
