Citizen Voices presents unique and unedited opinion pieces from our citizens and allies. The views expressed within these articles may not necessarily be those of Citizen State as a whole, and are presented in the spirit of democratic and free discourse. We are open to pitches and submissions made to firstname.lastname@example.org.
AI is going to kill us all. Supposedly.
Hypothetical AI-driven extinction events are referenced so routinely in coverage of AI tech that they’ve become background noise. Running jokes about welcoming our new machine overlords turn up in a substantial proportion of the comments sections beneath that coverage. Even industry heads have recently taken to proclaiming dire warnings about the existential threat their own products could pose to humanity’s future.
Perhaps it’s because they know how good a distraction it makes from the far more concrete harm those products are actually doing right now.
The saturation of AI doomsday scenarios in popular culture – and in the mainstream discussion about emerging AI tech – has redirected the public’s focus away from the real and immediate problems this emerging technology poses. The Skynets and Roko’s Basilisks that dominate the debate on AI risk are still far-future hypotheticals, and relatively unlikely ones at that. Barring someone actively trying to create a fully autonomous mass-killer AI, it’s entirely likely they’ll remain that way. The threats posed by AI development as it currently stands – threats that aren’t being taken anywhere near as seriously as they should be – are ultimately mundane ones. The core problem comes not from any issue with the technology itself, but from harmful trends in our socioeconomic system that AI threatens to worsen by enabling them more efficiently.
Illegal, unethical, or just damagingly incompetent uses of generative AI pop up with depressing regularity. Deepfakes used for disinformation, propaganda, or even non-consensual pornography are now a fact of life online. Even without direct malice, generative AI can still produce harmful outcomes through pure random misfires. The use of ChatGPT for research has already led to a lawyer citing nonexistent cases in court, and to defamatory claims being spread about an Australian politician and whistleblower and about an American radio host. These instances – however damaging – were accidents born of inadequate safeguards and unprofessional use, and are currently the subject of legal proceedings. What’s being done with generative AI deliberately and legally is, if anything, even more disturbing.
Generative AI exists, first and foremost, to eliminate jobs and undermine workers’ leverage. In an era of increasing unionization and labor pushback against the dire pay and conditions in many sectors, it’s easy to understand why so much effort and money has been funneled into its fast-track development with so little interest – beyond lip service to the black-swan possibility of an AI apocalypse – in the risks involved. Especially considering that most of what we would see as risks are, as far as the investors backing the generative AI boom are concerned, exactly what they want: fewer workers, with fewer rights, paid less, while productivity and profitability go through the roof. Buzzfeed recently laid off its entire news staff as it pivoted to publishing more AI-generated content. IBM is so committed to replacing thousands of jobs with AI that it’s already cut back on hiring for roles it thinks could be automated, even in the absence of tech that can actually do the jobs in question. An eating disorder helpline recently used a chatbot to replace its entire human workforce in a blatant act of union-busting. The results were predictably harmful to those who needed its services, ultimately forcing the organization to shut the chatbot down. But the precedent of using such technology to retaliate against a unionizing workforce – something that would no doubt have continued were it not for the damaging consequences and public backlash – should be enormously worrying.
Perhaps the most extreme potential for both the destruction of livelihoods and cultural harm, though, lies in the rapid infiltration of generative AI into the arts. As a market sector that combines high profitability with a workforce that’s already undervalued, disrespected, and precariously employed, arts and entertainment are a perfect target for predation by the generative AI industry.
All AI content generation in this context – and it’s important to use more accurate terms like content generation, as opposed to the softened and fundamentally misleading concept of “AI art” – functions by mashing together the work of actual artists, writers, or performers, almost invariably used without compensation or consent. The tech industry has, in practice, automated not creativity but plagiarism; it has been able to get away with this because the existing IP law framework still lacks the provisions to tackle it. For all the novelty of the technology itself, this is a fairly standard example of the Silicon Valley tech-bro style of business “innovation”: finding legal loopholes to circumvent workers’ rights, and marketing the exploitation of those loopholes as some kind of dynamic step forward.
And this is, when we get down to the nuts and bolts, a workers’ rights issue. Netflix, for example, recently released an anime short containing background art generated by AI, claiming a “labor shortage” as justification. Only there is, in reality, no “labor shortage”; just a shortage of people willing to work for less than they can afford to live on, especially when we factor in the poor conditions and long hours that are well-documented problems in the Japanese animation industry. The use of AI, whatever disingenuous spin the producers put on it, was a conscious and deliberate choice to use it instead of paying an actual artist enough to do the job. Over the past couple of years, the problem has spread with wildfire speed and ferocity. Voice actors are increasingly finding themselves expected to sign over the rights to their performances for use in AI voice synthesis. Within film and television, the serious and growing threat of AI being used to replace the screenwriters whose work has been fed into it – and industry executives’ flat rejection of proposals to regulate it – has been a major motivating factor behind the recent Writers Guild of America strike; some senior Hollywood execs are, allegedly, already putting out feelers to see whether AI can serve as automated scab labor to break it. The end result – and the end goal – of normalizing AI content generation is mass layoffs and crushed wages. Without serious measures to prevent it, the upper management of the arts and media industries will end up making a simple, predictable calculation and coming to a simple, predictable conclusion: why pay a real person to produce your content when you can get an algorithm to generate an infinite volume of half-decent facsimiles for the cost of a software license?
It will not be the death of the arts themselves – just of the ability to make a living from them. The only people able to produce creative work, and to develop the skills to do it well, will be those who don’t need to be paid for their time. The creative arts will no longer be a profession; only a hobby for the extremely wealthy. Unless you’re a tech company exec salivating over the possibility of finally putting swathes of artists and writers whose skills you consider valueless out of a job, it goes without saying that this will be a scorched-earth cultural disaster as much as a devastating blow to the lives of enormous numbers of skilled professionals. Authentically created art will survive – but only as a dulled, diluted, and radically diminished fringe that the most privileged alone will ever see or participate in, while the bulk of what passes for the creative arts in the cultural mainstream loses its last vestiges of human creativity and becomes nothing but a range of hollow consumer products generated from the inputs of a marketing department.
This is to say nothing of the impacts on free expression. Cutting actual human creative input out of the loop bakes in, at the foundational level, a degree of control over the arts unimagined even in the wildest dreams of totalitarianism. With content generated precisely to the specifications of big-business media companies, conventional censorship becomes beside the point: anything produced will be, by definition, fully and meticulously aligned with the interests of executive boards and shareholders. That is yet another factor that will make AI content far more appealing to companies who’ll no longer have to concern themselves with corralling artists who have independent thoughts and views of their own.
Some people will dismiss these predictions out of hand as nothing but Luddite doomsaying. Others will point out – correctly – that AI is, at best, only capable of producing bland, parroted mashups of whatever’s fed into it, without any particular flair, and claim that this will curtail its attractiveness to the media industries. Those people do not have a grounding in economics.
There are two things to remember here.
Firstly, the commercial publishers that produce a high enough turnover to be a viable, paying option for creative professionals are dominated by a corporate culture that doesn’t care about creativity, only continually increasing profit margins. The function of any capitalist enterprise, after all, is not to provide jobs or to create social or cultural value; only to maximize profit for its owners. Between the most brilliant artistic talent that they have to pay – even if they pay as little for that talent as is typically the case in today’s market – and an algorithm that can, at a fraction of the cost, generate something a fraction as good but still just good enough to be marketable, the raw foundational logic of capitalism will always choose the latter. We can see in any number of other industries how the logic of the market has repeatedly pushed the adoption of business practices aimed at reducing employee numbers even when those practices have led to a worse product or service, because a worse product made more cheaply by fewer workers serves the goal of profit maximization more effectively than a better product made at a higher cost by more workers. AI-generated content doesn’t have to be as good as the work of an actual creative professional in order to become more attractive; it simply has to deliver output at a drastically increased profit margin. Creatives already established at the upper levels of the industry might have enough pull and enough proven market value to be relatively safe, at least for a while – but next to a generative AI program that can crank out blandly consumable material in extreme bulk, no business is going to take the risk of hiring new talent.
The ladder will be permanently withdrawn, and those currently employed in most creative positions will find their pay and working conditions worsening until they’re forced to quit and find other employment in order to pay their rent, or until they’re laid off in favor of even further automation.
Secondly, attempting to mitigate this problem via modifications of existing intellectual property law will be entirely ineffective. Simply tightening the current laws to protect against the outright theft of creative works for AI content generation, without significant additions to the legal framework, will not be adequate to protect against other AI-related business practices designed to facilitate the replacement of creative professionals. Established content publishers will, for the most part, already legally own enough material to feed content-generation algorithms, with the employees or freelancers who produced the original work having very little legal recourse to do anything about it. The larger the company, the larger the volume of material it will already own and be able to use as algorithmic feedstock. Consider what happens when you give that technology to a company with the back catalogue of, for example, Disney or Warner Brothers. If and when the technology matures – something that, at the current rate of advancement, could be only a few years away – it would be entirely possible for such a company to carry out every stage of production on a feature film, from screenplay to post-production visual effects, entirely using generative AI; the only human involvement being that of curating the output. The fact that actual creativity is the one thing that can’t be automated, with AI only able to produce extremely formulaic material, will if anything be considered another selling point by an industry that more often than not prefers reliable mediocrity over the risks of originality. Imagine being able to crank out a new Marvel blockbuster every week, precisely engineered to the latest market-research inputs, without having to pay a single writer, actor, director, designer, or member of crew. You can bet that at least one Hollywood executive has a.) imagined it and b.) immediately got an erection so hard he passed out.
Many large companies will also continue to do what they already do with all but the most successful creators – simply ignore the existing copyright laws and steal whatever they like, safe in the knowledge that those they’ve stolen from either can’t afford to fight them in court or, if they do, that the fine will be a fraction of the profit made from the stolen content. After all, if the punishment for something is a fine, what that means in practice is that it’s functionally legal for anyone with enough money to pay it.
The simple logic of business will always result in an inexorable push towards whatever will maximize profits. So long as we operate under a capitalist system, this will remain the case. No amount of IP-law tinkering aimed at making AI content generation more technically lawful or superficially ethical is going to make the companies that see the potential for turning a profit from it give it up – or make their legal teams stop endlessly searching for loopholes to exploit in the interests of making their workforce obsolete. The only effective way to protect creative professionals’ rights and livelihoods from AI content generation is to remove the thing that makes it appealing to their employers – the ability to profit from it in the first place. As the industry can’t be trusted to regulate itself, we need a new legal framework in place to take it out of their hands before the problem becomes too big and too profitable to do anything about it. The following legal proposals could serve as an effective foundation:
1.) Any work of audiovisual or textual media content which has been:
a.) created either wholly or in any part by the use of any form of generative AI technology, or
b.) modified using any application of generative AI technology in a way that tangibly alters its content
cannot be copyrighted or trademarked, and cannot be sold, monetized, or otherwise used for any commercial purpose or in any commercial context, either in itself or as part of a larger work, product, or service.
2.) Any content producer or publisher found to be in violation of these terms will be subject to a fine for each individual offense equivalent to either:
a.) five times the standard industry rate of pay for the maximum number of creative professionals who would otherwise have been needed to create the content in question, as assessed by the relevant trade union or unions, or
b.) double the sum total of all turnover generated in any part from the offending content,
whichever sum is greater, and will also be open to unlimited civil litigation by the specific parties whose work, voice, or likeness has been unlawfully used.
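To make the penalty structure in point 2 concrete, here is a minimal sketch of the calculation it describes: the fine is the greater of option a.) (five times the union-assessed industry rate for the displaced headcount) and option b.) (double the turnover from the offending content). The function name and the example figures are purely hypothetical illustrations, not part of the proposal itself.

```python
def proposed_fine(industry_rate: float, max_professionals: int, turnover: float) -> float:
    """Illustrative calculation of the penalty proposed in point 2.

    industry_rate     -- standard industry rate of pay per professional,
                         as assessed by the relevant trade union(s)
    max_professionals -- maximum number of creative professionals who
                         would otherwise have been needed
    turnover          -- total turnover generated in any part from the
                         offending content
    """
    option_a = 5 * industry_rate * max_professionals  # point 2.a
    option_b = 2 * turnover                           # point 2.b
    return max(option_a, option_b)                    # whichever is greater


# Hypothetical example: 10 writers at a $50,000 standard rate,
# against $4,000,000 in turnover from the offending content.
# Option a.) gives $2,500,000; option b.) gives $8,000,000, so b.) applies.
print(proposed_fine(50_000, 10, 4_000_000))
```

The point of the "whichever is greater" clause is visible in the arithmetic: a low-revenue infringement is still punished at a multiple of the wages avoided, while a high-revenue one cannot be profitable even after the fine.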
These proposals might run into potential conflicts with a number of current laws – but ultimately this is unavoidable. We’re dealing with a novel problem that the current legal framework simply lacks adequate measures to tackle, and the difficulty of pushing through preventative measures now will be far less than that of trying to put the problem back in its box once it has already proven profitable and become entrenched across multiple major industries. When the law is inadequate, the law needs to evolve. We have a narrow and rapidly closing window to prevent irreparable devastation both to a swathe of livelihoods and to the continued presence of vibrant, organic, human art as a part of mainstream culture. Either we take the necessary steps now to prevent the damage before it becomes too great – or we can watch as a coalition of tech bros hell-bent on creating the Torment Nexus and soulless marketing executives who see human imagination and creative skill as nothing but an inefficiency to be automated away drive yet more workers into unemployment and poverty, and bulldoze yet another part of what makes our lives worth living.