[Illustration: a pile of papers flanked by a cup of coffee and a pen on the left, and a cup of tea and another pen on the right.]
Illustration by Hope Lumsden-Barry, Communications Designer

Artificial Intelligence (AI) and Machine Learning (ML) seem to be the latest quick-fix, silver-bullet technologies – with businesses rushing to apply them to everything from online shopping and self-driving cars to medical diagnosis, criminal justice, and sentencing.

Because AI and ML contain the words ‘intelligence’ and ‘learning’, it’s natural to attribute a science-fiction level of agency to this technology. But at its core, each AI is still a computer program, and its intelligence is strictly limited to what it has been programmed to do – run through a set of instructions (an algorithm) when it encounters a specific situation.

These algorithms are already being used to make decisions with real impact on people’s lives. Dan’s link below speaks to one of the newer algorithms in use today (emotion recognition), but there are many others – and countless numbers of them have flaws whose negative human consequences are either already documented or waiting to happen:

How would you like to be ‘diagnosed’ by medical technology so flawed that it favours white patients over black patients – even when the white patients are less sick?

Or get a ‘job interview’ from a face-scanning algorithm that decides whether your “facial expressions, eye-movements, body movements, details of clothes, and nuances of voice” are right for the job?

Or be wrongly identified by a “permanent line-up” law-enforcement algorithm capable of scanning the biometric data of almost every citizen in Australia?

Or – and as a father with a son about to enter primary school, this is the most terrifying one for me – have your child’s emails, web history and social media use intensively surveilled by an AI that doesn’t understand teen sarcasm and has the authority to notify police?

If we step back from the AI angle for a moment, these are all examples of technological solutions to social problems. Which would be bad enough. But these companies are worse than just misguided – they barely even seem to care that their solutions don’t actually fix the problems. They are just exploiting problems for profit, using technology that is already known to have serious flaws and ethical concerns.

For example: companies that push school surveillance technology argue that “the technology is part of educating today’s students in how to be good ‘digital citizens’, and that monitoring in school helps train students for constant surveillance after they graduate.” Now, this might be true, as far as it goes. But statements like this only underline that our current workforce trajectory is one of privacy intrusion and corporate surveillance – something we should be fighting against.

Now – I’m not saying that AI is always or will always be bad, or that ML can never be used for anything positive. All I’m suggesting is that relying on companies that sell AI to fix things for us is a risk – one we need to be aware of, take an ethical stance on, and approach with caution and care.

Conversations around AI are extremely useful and enlightening, because they bring to light societal and political problems that need social solutions, not technological ones. Whenever you see AI put forward as a ‘solution’, ask yourself: What is the real problem that’s been identified here, and how might we fix that?

— Reuben Stanton & the PG Team

For this week’s newsletter, I stole most of my links from @hypervisible – please follow them on Twitter if that’s your thing.

Read the rest of Issue #45 here.