In a World Where You Can Make Anything, Make Kindness

The Solipsis Project · December 30, 2022

(This is the first in a series of posts exploring the ramifications of generative AI.)

I live with a foot in two worlds.

According to people in one world, generative AI tools like Stable Diffusion and ChatGPT are going to usher in a new golden age of independent creation by massively lowering the barrier to entry and the labor costs for independent artists.

According to people in the other world, these same generative AI tools are going to usher in an era of exploitation and poverty for artists, whose sources of income dry up as they find themselves replaced with automation. The Disney Behemoth trudges on, dictating its vision of human culture. Only this time, the artists are removed from the equation entirely.

I think what the debates around generative AI tend to miss is that these two worlds are not contradictory; they can and will coexist. As a result, the two sides tend to talk past each other.

Let’s break this down; for lack of a better term, let’s call the people living in these two worlds the fanatics and the skeptics.

The fanatics have an instinct to cast the skeptics as worrywarts. They point out (correctly) that automation has never been the death of creativity. MIDI did not kill the orchestra. Chess bots did not kill competitive chess. They point out (again, correctly) that automation frees up labor, which can (in theory) now be spent on additional creative pursuits. They call the skeptics Luddites.

(For the unaware, the Luddites were a secret organization of textile workers who sabotaged textile machinery. Many of them were the former owners of independent workshops that had been forced out of business by larger factories. While the term has been watered down into a pejorative for anyone who rejects modern technology, its roots are important.)

The arguments of the fanatics are not technically wrong, but they get rejected by the skeptics anyway, for a simple reason: they don't address any of the real problems the skeptics face.

The comparison to Luddites is especially ironic, given that the real Luddites had a real grievance, and a real point: not that automation would destroy their industry (it didn’t), but that automation would displace many workers and remove a source of income upon which they relied. That even though automation increases the total productivity and cultural output of a society, those gains are not distributed evenly. (Who specifically would benefit from those gains would depend on the economic system, and America has never had an economic system that favors the independent laborer, if such a thing is even possible.) And that when the productivity of a job increases, fewer of those jobs are needed: people will, necessarily, be put out of work.

When people express concerns about how their livelihoods will be upended by new technology, it’s not helpful to point out that these tools will enable additional creativity in aggregate by decreasing the barrier to entry, or reduce the labor costs in art, or make it easier for artists to balance their art with another revenue stream (assuming they’re privileged enough to have one), or boost the aggregate quality of living of humanity.

These are all true… but for someone worried about their income, it’s a small comfort. Especially if those purported gains will mostly be enjoyed by someone else.

The fanatics, unable to think beyond the abstract and unable to see beyond their own noses, completely miss the point. They're not wrong, but at best they're callously dismissive of the actual problems presented to them.

The skeptics, meanwhile, facing a real sense of impending doom and economic anxiety, attempt to identify the source of their discomfort, in the hopes that naming the problem will reveal a solution. The problem, they say, is that these machine learning models are trained on copyrighted data without the permission of the owners. It must be, they say, because where else does the apparent creativity of the generated content originate? The solution, then, must be to expand copyright law to explicitly forbid the use of unauthorized copyrighted material in training data (or perhaps even ban generative AI altogether).

The problem with this proposal is that it will actually make the (again, very real) problems introduced by generative AI worse, not better. This is because the skeptics have misidentified the actual threat. (Hint: it’s almost never the tools.)

Let’s suppose that the skeptics succeed in their goal of expanding copyright to exclude training data from fair use. What will be the consequences of such a decision? Who will suffer the most?

I can tell you who will suffer the least: large publishing companies that have already amassed considerable copyright holdings, the very same organizations that the skeptics claim to be fighting against.

In such a world, Disney will still be able to make their own generative AI models, trained on their own massive library of content. They will integrate these models into their workflows, reducing the labor required to produce their immensely detailed animated films and allowing them to hire fewer artists. The great displacement of artists and creative vision from an increasingly procedurally generated culture will continue, unabated.

The only people who are hurt in such a world are independent artists who, by the skeptics' own hand, now lack the ability to use the same tools as their larger competition.

And what about the bolder proposal: to ban all use of generative AI, regardless of the ownership of the training data (or alternatively, a proposed cap on the proportion of a project that can consist of AI generated work)? That will never happen, at least in the US. If not because of the obvious First Amendment implications, then because there is a concentrated amount of capital in Hollywood and elsewhere that has a vested interest in this not happening. The genie cannot be put back in the bottle.

There is no outcome where generative AI does not fundamentally change the art industry. (And like it or not, art is an industry, and under our current economic system where only that which is profitable is able to exist, nearly all art participates in this industry.) The art market is a zero-sum game: while new tools can boost productivity for everyone, there is ultimately a finite amount of customer time, attention, and disposable income. Any artist using the market as a source of income is competing for a slice of this pie.

The impact of generative AI on the industry is pretty clear: by reducing labor and removing barriers to entry, these technologies increase competition. Unfortunately, corporations who own the world’s capital and means of production are best poised to take advantage of this new technology and take the lion’s share of the subsequent monetary benefits… but even if they weren’t, the introduction of new competition will make the field less profitable anyway.

The losses suffered by the Luddites from the invention of the textile factory were real and measurable. The fact that we today enjoy the benefits of textile automation doesn't change that. Likewise, the losses that will be suffered by truck drivers from self-driving vehicles will be real and measurable. And the losses that will be suffered by independent artists from generative AI will be real and measurable.

This could very well push important fields like art and culture into the realm of unprofitability. These concerns are real and valid. But the flaw isn’t in any tool used by artists or laborers, but in the framework of capitalism that these artists and laborers are forced to participate in. It’s not generative AI that’s devaluing art: it’s the market, and the fact that we have tied the worthiness of art to its market value.

The production of art in our economy is only possible when it can be profitable. But the unfortunate reality is that not every business is guaranteed to be profitable, even businesses that we consider essential for society. And when we have an economic system that says that only things that are profitable are able to exist, there’s a temptation to enact policy to keep a specific business model profitable, with the argument that doing so serves a greater good.

(That temptation is doubly present when your own well-being and career also depend on preserving that business model: if you're significantly invested in a business model, any technology that disrupts that model is now an existential threat.)

But it’s not the technology that’s the threat: it’s the social and economic relations that all art production is forced to participate in. It’s the lack of social safety nets. It’s the fact that the means of cultural production are owned by large corporations. It’s the fact that all artists must compete with each other in a market that will always be rigged in favor of larger publishers. @ckjong on Twitter does an excellent job of summing this up.

And like he says, those relations can be changed! But only if we can recognize their culpability.

At the core of being a modern artist is a conflict: Culture benefits when the barrier to participation is low: when everyone has a voice and the dominant cultural attitude isn’t dictated by a class of elites. But sellers in markets benefit when the barrier to participation is high: preventing new competition from entering the market helps maintain the value of what the seller is selling.

When art is a market, these two facts are at odds with each other. Anyone who participates in the art market as a seller (which is to say, all professional artists) is forced to grapple with this conflict.

The skeptics, forced to prioritize their own well-being and unable to question the very system they're made to participate in, miss the point. They're not wrong, but being (understandably) unable to think beyond their own immediate self-preservation blinds them to the real threat.

The remedy for both fanatics and skeptics is the same: to challenge the social and economic conditions that lead to the commodification of art in the first place. The conflict, it turns out, is not fanatics vs skeptics. It’s fanatics and skeptics vs a system that says that your value is only as much as people are willing to pay for it, a system that exacerbates inequities because capital attracts capital.

Generative AI can be a great cultural boon, but only when the fruits of those labors are shared, only when the intrinsic value of art is separated from its market value, and only when art can be produced free from the corrupting and coercive hand of consumerism and wage slavery.

I think that’s something worth fighting for.

But at the very least, the next time that you get in an argument with someone about generative AI… be kind. Be patient. Be empathetic. Recognize that our positions in such debates are often derived from our lived experiences, which are different from the lived experiences of others. Recognize the role that your own financial and social situation plays in shaping your viewpoint, just as it does for them. Seek to understand, not to win.

In a world where you can make anything, make kindness.
