The Great AI Scam

01 February 2024 | Inclusive Journalism Cymru

I recently attended an event at the Millennium Centre in Cardiff organised by the Institute of Welsh Affairs and Creative Wales. Whilst it was an encouraging event overall, I was surprised to hear that Creative Wales had no policy regarding so-called “artificial intelligence” (AI) software. Such things take time, but writers’ unions that I’m a member of have been all over the issue and already have policies in place. More worryingly, several speakers seemed to view AI as an exciting new development that Wales should be investing in.

What we call AI is not really intelligent. ChatGPT and similar programs are trained to recognise and repeat patterns. Such systems are used in everything from astronomy to medical diagnosis to making art. ChatGPT is specifically trained to recognise patterns in language, and is more properly called a Large Language Model (LLM).

Like any other piece of technology, AI has its place. I am particularly excited by the project at the University of Kentucky that uses AI to try to read the text on carbonised scrolls recovered from Herculaneum, the town buried by the eruption of Vesuvius. Other uses of AI are much more worrying.

For example, everyone should be concerned about AI’s use of valuable resources. The software requires an enormous amount of processing power. While improvements in technology may help with this, the rapid growth of AI is likely to result in huge increases in electricity demand. It has been predicted that AI systems will soon be consuming as much electricity as a small country such as Ireland (so way more than we use here in Wales).

All of this computing power requires cooling too. Each time you ask ChatGPT a question, it uses between 10ml and 100ml of water. Then there is the need for rare elements to make the computers. The quest for these often leads to mining companies coming into conflict with indigenous peoples.


That’s all before we even consider how AI works, and what it is being used for.

Multinational media companies are interested in AI because they think it can replace writers, artists and actors. Why bother to pay anyone to write a book when you can feed AI the world’s literature and get “new” novels by Charles Dickens, Arthur Conan Doyle and Agatha Christie every year? Issues like this were at the heart of the recent strikes by writers and actors in Hollywood.

There are arguments that the use of AI will help regional newspapers. It is suggested that the software can take over a lot of the grunt work of day-to-day reporting and writing, freeing up journalists to do more interesting and impactful work. But will that actually happen, or will newspaper owners simply use the software as an excuse to cut staff? The latest academic research suggests that demand for freelancers is dropping, and pay rates are falling too.

Of course, we must remember that AI is trained predominantly in English. If AI becomes an essential part of journalism, what does that mean for reporting yn Gymraeg (in Welsh), or in any of the other languages spoken by people in Wales?

We should also consider how AI works, and what that will mean for people from minority backgrounds. AI cannot reason; it is essentially just a very sophisticated pattern-matching system. For example, AI is behind the facial recognition technology that is now being widely used by police forces in the UK, despite evidence that it is much less accurate for people of colour, and particularly for Black people.

This is partly due to a problem with how the AI systems are trained. If a facial recognition system isn’t shown a lot of Black faces, it won’t learn to distinguish between them. But fixing that requires police forces and their software suppliers to care whether their tools are biased in the first place. If they don’t, those tools will never get better.

AI also works to reinforce dominant cultural narratives. For example, when ChatGPT is used to write recommendation letters, it picks up on the sexist bias that many humans already have, and reinforces it. Now imagine similar software being used to produce news, having been trained on the output of UK tabloids. Inevitably there will be articles about Muslims being terrorists, about Black youth being criminals, about trans women being rapists, and so on.

AIs cannot distinguish truth from falsehood. All they can do is check what opinions are most popular in the data they have been given. But decisions made by supposedly intelligent computers are often believed too easily. In the USA one health insurance company is using AI to review claims, despite research suggesting that the software is wrong 90% of the time.

You can train AI software to avoid bias, but that’s not a simple process. Once AI is up and running, it will continue to learn from the input that it gets. It is depressingly easy to “poison” AI by feeding it nonsense. All of this training and bias checking takes time and money, and requires actual expert humans to carry it out. You can guess what is likely to get cut next time the software company’s shareholders complain that profits are too low.

So yes, there are all sorts of exciting uses that AI software can be put to. But its use in creative industries, and in the public sphere, is fraught with danger. That is something that we, as a country, need to be very much aware of, and to approach with great caution.

You can follow Cheryl on Mastodon. 

Chart courtesy of John Burn-Murdoch / Financial Times, based on the paper by Xiang Hui, Oren Reshef and Luofeng Zhou, “The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market” (31 July 2023).