By Jo Kingston, Thwaites Communications Account Executive
Here at Thwaites we are lucky enough to have not one, but two offices – our Shoreditch HQ, and our Northern home at The Federation in Manchester. There we share co-working space with lots of brilliant digital and tech firms who have signed up to a pledge outlining a broad set of values that chime with us – to be open, honest and ethical.
As well as providing a great space to work, Federation Manchester also gives us access to excellent talks by leading speakers from around the world – most recently The Federation Presents series, which explored ethics in the tech industry and wider society.
Trust in tech
Naturally, one of the topics that’s arisen (more than once) is AI, and the seemingly boundless scope of machine learning. Yet despite the many ways in which intelligent systems can transform our lives for the better, there is still an underlying mistrust.
One thing that is certain is that the changes brought about by AI are unlikely to be reversed. According to a report released this month by research company Gartner, global business value derived from AI is projected to reach $1.2 trillion in 2018, an increase of 70% on 2017, and to climb to $3.9 trillion by 2022.
Last month we published a blog advising companies that innovate with AI how to implement developments in a way that will instill trust in their users by being open and accountable. However, collective unease about technology and its capacity to escape our control has existed for centuries – think of Mary Shelley’s (fictional but apposite) Victor Frankenstein in 1818, who rejected his creation when he found he could not exercise his authority over it, and ended up dying in its pursuit. Two hundred years later, with tech that ‘thinks’ for itself a reality, this anxiety is even more pervasive – which is why ethics and risk assessment are as important as the technology itself in developing new products and services.
When tech becomes toxic
Two of the Federation Presents talks recently tackled this issue from different angles. Toxic Tech, by Philadelphia-based web consultant Sara Wachter-Boettcher, was a fascinating look at what can happen when algorithms go bad. Drawing on examples from her book ‘Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech’, Sara took us through some of the scenarios that can arise when the datasets and programming behind machine learning tools create unintended – and often harmful – consequences.
Some examples cited include:
- Facebook tagging an image of a gravestone posted on the anniversary of a child’s death as the user’s ‘most loved photo of 2016’
- Timehop dinosaur Abe popping up to commemorate 9/11 with a flippant ‘some say this day will never be topped – but if anyone can do it, it’s you’
- The bizarre Tumblr notification ‘Beep Beep – #Neo-nazis is here’ popping up on the phone of novelist Sally Rooney, assuming she followed far-right groups because she had read a news article about the rise in fascism
- Another Tumblr user receiving a message ‘Beep Beep – #mental illness is here’.
This is what happens when algorithms created without due attention become unfit for purpose. We sincerely hope that no Tumblr employee would sit and write ‘Beep Beep’ to introduce topics such as mental illness, but a text string – generic copy created to personalise apps and websites, into which topics are inserted automatically – makes no such distinctions.
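The failure mode is easy to sketch: one playful template is reused for every topic, with no check on tone. The code below is a hypothetical illustration of that pattern (the template, function names and sensitive-topic list are invented for this example, not Tumblr's actual code) – alongside a minimal fix that falls back to neutral copy.

```python
# Hypothetical sketch: a one-size-fits-all notification template has no
# awareness of the topic being inserted into it.

PLAYFUL_TEMPLATE = "Beep Beep - #{topic} is here"

# Topics a human writer would never pair with playful copy; a naive
# system has no such list and treats every topic identically.
SENSITIVE_TOPICS = {"mental illness", "neo-nazis"}

def naive_notification(topic: str) -> str:
    """Insert any topic into the generic template, no questions asked."""
    return PLAYFUL_TEMPLATE.format(topic=topic)

def careful_notification(topic: str) -> str:
    """Fall back to neutral wording when the topic is on the sensitive list."""
    if topic.lower() in SENSITIVE_TOPICS:
        return f"New posts tagged #{topic}"
    return PLAYFUL_TEMPLATE.format(topic=topic)

print(naive_notification("mental illness"))    # the tone-deaf result
print(careful_notification("mental illness"))  # a neutral fallback
```

Even this crude blocklist shows the point Sara makes: the distinction has to be designed in deliberately, because the template itself will never make it.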
So far, so inappropriate – but these tendencies are also dangerous. A University of Warwick study of social media and hate crime hit the news recently when it revealed, amongst other things, that attacks on refugees in Germany were significantly more frequent in areas where Facebook usage was above average.
An algorithm built to promote content that maximises user engagement intuits what people believe and siphons them into echo chambers, free from moderating voices. Posts that tap into negative emotions like fear and anger perform well – so they proliferate, with alarming results. And yet, in terms of the platform's original aim, this is a success: the algorithm is doing exactly what it was intended to do – engaging the audience. As Sara says, ‘our systems aren’t just vulnerable to abuse – they’re optimised for it’.
In her book, Sara makes a clear call to companies to consider the ethical implications of their products and services – not just as an add-on but as a core part of the design process, even if this has an impact on profit – because ‘if we only discuss ethics when they are revenue neutral, then we don’t really have ethics’.
Tech for good
This point was echoed in another Federation Presents talk, Humanity and Tech by Shannon Vallor, a Professor at Santa Clara University in Silicon Valley and a consulting AI Ethicist supporting Google Cloud.
Shannon made a distinction between being human and humane. Our technology can be created by the former but does not necessarily embrace the latter, a theme which she explores in her book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting.
Professor Vallor is researching the impact of emerging technologies – especially automation and AI – on the moral and intellectual habits of human beings. She explains how we are living in a world where we are entrusting virtually everything to technology, but we need courage and wisdom to harness these innovations for good.
Getting the balance right
The skills that make machines more effective than humans at certain tasks – repetition, calculation, statistical analysis and prediction – can generate economic growth, increase efficiency across a range of areas and transform many aspects of our lives for the better.
However, the skills that are still unique to humans – empathy, compassion, social acuity, critical thinking and ethical insight – need to be given equal weight. As Shannon’s book claims, if humanity is to have any real hope of not merely surviving but flourishing in the 21st century and beyond, then we are going to need more than just better machines. We will also need better humans.
For more information on the CIPR AIinPR work, visit www.cipr.co.uk/ai