
Another uncalled-for blog post about the ethics of using AI

I read a couple of posts about AI recently, which seemed to hold opposing ideas, but I agreed with them both to some extent. (It’s a radical idea, I know.)

First of all, let’s just say that generative AI, including large language models (LLMs), might be considered a form of artificial intelligence, but AI is not just LLMs. Also, as Michelle Barker put it, “It’s plausible that AI (or more accurately, machine learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy. [This] could feasibly constitute a legitimate application of AI.” LLMs are not this. They synthesise text, which is not the same as data. Particularly when they are trained on the entire internet, which we all know includes a lot of incorrect, discriminatory and dangerous information.

With that in mind I had a good chuckle at John Willshire’s response to Microsoft’s whine that British firms are ‘stuck in neutral’ over AI.

“I can’t believe more people aren’t buying these wonderful clothes,” exclaimed the emperor.

Nothing says ruh-roh like a leader complaining that a market isn’t buying enough of their services. If AI really is as seamless to integrate as is claimed, then ‘wait and see’ is probably a perfectly acceptable strategy for most businesses.

But at almost the same time Mark Boulton had an interesting take on the role of generative AI in the context of design tools:

AI will disrupt. What I’m hoping is, it will encourage more designers to use AI to build things. Prototyping – as part of a process, not just the end, or for presenting your intent – is a critical way of learning and designing. I’m a big fan of learning by making. Designing by making. Using AI to plug skills gaps (code!) will be a brilliant tool for many designers to build what they are designing. […] Being scrappy with code to prove out an idea is something AI could give us right now.

LLMs are pretty crap at coding; mediocre at best. Mediocrity is intrinsic to how LLMs work – they apply statistics to guess which word should come next. You don’t need to know your median from your mode to appreciate that this means they’ll go with the average every time. The very definition of mediocre.
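To sketch what I mean (with invented numbers – this illustrates greedy next-word selection, not the output of any real model):

# Toy next-word prediction. The probabilities are made up for
# illustration; a real model scores tens of thousands of tokens.
next_word_probs = {
    "the": 0.31,      # the bland, statistically safe choice
    "a": 0.24,
    "slithy": 0.001,  # anything unusual scores badly
}

# Greedy decoding: always take the single most likely next word.
print(max(next_word_probs, key=next_word_probs.get))  # prints "the"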

But mediocre might suffice in Mark’s scenario. I agree with his point about the importance of prototyping in the design process. And if all you need is crappy code to try out a concept or a solution, then an LLM might well enable you (the designer) to do that. Let’s just hope a human coder wasn’t available to do the job for you, and that you won’t, in turn, be putting a developer out of a job.

This got me thinking back to January and the UK Government’s AI press release. It ends with an effluence of quotes from vested interests, like this one from the Tony Blair Institute, which shows the hype is all just so much snake oil:

AI can help take care of drudgery in the public sector, freeing people up to focus on high-value tasks that require the human touch. TBI research shows that we can generate up to £40 billion a year in productivity gains and savings.

That kind of statement is both utter bollocks and despicably disingenuous. Even if AI of some sort could replace £40 billion worth of government work – and there is zero evidence of that thus far – the only way you’ll get £40 billion in savings is by sacking people. Which is one way of freeing people from the ‘drudgery’ of public service, I guess.

And then it’s hard to escape how we got here. As my friend and colleague Jeremy Keith put it to me recently, every large language model currently available is trained on data that has been scraped without permission (for example, your books). How well the resulting models work doesn’t mitigate that. As OpenAI itself told the UK Parliament, it would be “impossible to train today’s leading AI models without using copyrighted materials”.

The government has been hoodwinked so hard into this that they ran a ‘consultation’ which would allow LLM companies to continue to ride roughshod over existing copyright law. I put scare quotes around the word consultation because there’s clearly a pre-defined outcome: one which would allow LLM companies to data mine any otherwise copyrighted work they had access to.

In theory the copyright holder could reserve their rights, preventing scraping through an ‘agreed mechanism’, but no such mechanisms were provided. For my own website, I have stated explicitly that I do not give consent for any content to be used to train LLMs (even if that horse has bolted), and I have added such rules to my robots.txt file, for what it’s worth. Whether those steps, which categorically show my intent, prove to be a suitable mechanism for this government is another question.
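Those robots.txt rules look something like this – a minimal sketch rather than my exact file. GPTBot, CCBot and Google-Extended are documented crawler user agents associated with AI training, but the list is illustrative, not exhaustive (and robots.txt is only ever advisory):

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /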

I haven’t even touched on the environmental impact, and I’m not going to now, because I can’t get past my final thought. It’s hard (and reckless) to ignore the heartfelt and cogent perspective laid out by Miriam on the role of AI companies in the current geopolitical crisis:

The AI projects currently mid-hype are being developed and sold by billionaires and VCs with companies explicitly pursuing surveillance, exploitation, and weaponry. They are eager to jump on board an authoritarian movement that wants to exterminate trans and disabled people, fire black people, and deport all my immigrant friends and colleagues.

The beliefs of these CEOs aren’t incidental to the AI product they’re selling us. These are not tools designed for us to benefit from, but tools designed to exploit us.

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

So maybe LLMs are good at coding prototypes. Maybe they are good at summarising documents, drawing out patterns in text and data. But as Hidde put it, they also promote biases, including those that I want to break down, such as sexism, racism, ableism and transphobia. They are increasingly anti-woke (whatever “woke” means, beyond a lexical tool for hatred).

I’m not sure what to do with all that. Does it come down to personal choice, or should I mandate a ban on the current crop of LLMs at Clearleft? That doesn’t seem helpful. Should I ban my kids from using them? What if they use them at school? I can try to educate them, but telling a sensitive 11-year-old about this shit is pretty tricky.

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

