The ethics of algorithms and the risks of getting it wrong
Machines started showing signs of life in the human imagination as far back as the mid-19th century – perhaps most notably in Samuel Butler’s 1872 novel Erewhon, where he wrote about the possibility of machines developing consciousness through Darwinian selection.
The idea of artificial intelligence (AI) has since become one of the most popular tropes in science fiction. With it have come important questions about both the utopian and dystopian impact of increasingly clever machines on society, and about the way we use them to understand the world.
While modern-day AI doesn’t look quite like Data from Star Trek or Jude Law in the Steven Spielberg film AI, technology and machine learning are evolving at a rapid rate and being used to make sense of an incomprehensible amount of data about people and the world.
As such, algorithms are being given responsibility for decisions that affect our lives more than ever before. But it is becoming increasingly clear that those decisions are not always made fairly or transparently. In some cases, algorithms are actively being used to do harm and to influence behaviour in morally dubious ways.
From Google’s secret work with the US government on its military AI drone initiative and Facebook’s ad-serving algorithm discriminating by race and gender; to Amazon’s internal recruiting tool disadvantaging female candidates and its allegedly biased facial recognition tech; to Instagram resetting its algorithms after exposing children to harmful content – numerous high-profile stories have brought conversations around the ethics of AI to the fore in the last year.
For the first time, Google and Microsoft have acknowledged in investor statements that “flawed” algorithms could result in “brand or reputational harm” and have an “adverse effect” on financial performance. These companies have been using AI for years; that they are only now flagging it as a risk shows just how much the issue has escalated.
So as brands in all sectors increasingly rely on AI to predict and understand the behaviour of their customers, as well as improve their marketing and customer experience, what does best practice AI look like, how can it be achieved and what role does marketing play?
READ MORE: Why are marketers kidding themselves that AI is about more than sales?
Data, diversity, bots and spots
When Microsoft’s Twitter chatbot Tay started spouting racist, anti-feminist and pro-Trump tweets three years ago – forcing Microsoft to pull the bot in less than 24 hours – it was merely doing what it had learned from other Twitter users.
Algorithms themselves are not biased; the bias comes from the data that is used to train the machine learning system. And because bias runs deep in humans on many levels, whether around race, age or gender, building algorithms that are completely free of those biases is no easy task.
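To make that concrete, here is a minimal sketch in Python – using invented toy data rather than any company’s real system – of how a model that simply memorises historical records reproduces whatever bias those records contain:

```python
from collections import defaultdict

# Hypothetical training records: (group, approved) pairs reflecting a
# biased history in which group "B" was approved far less often.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def learn_approval_rates(records):
    """'Train' by memorising the per-group approval rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved  # bool counts as 0 or 1
    return {g: approvals[g] / totals[g] for g in totals}

model = learn_approval_rates(history)
print(model)  # {'A': 0.8, 'B': 0.3} – the historical bias, learned verbatim
```

Nothing in the algorithm is prejudiced; it is faithfully optimising against a prejudiced record.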
It is especially difficult when the tech world is still very much skewed towards white men.
A new study published by the AI Now Institute, a New York University research centre, finds that more than 80% of AI professors are men. Just 15% of AI researchers at Facebook are women, while at Google the figure drops to 10% – and only 2.5% of Google’s workforce is black.
While Amazon wasn’t measured in this study, the findings go some way to explaining why Amazon’s facial recognition technology had much higher error rates when classifying the gender of darker-skinned women than of lighter-skinned men.
“The industry has to acknowledge the gravity of the situation and admit its existing methods have failed to address these problems,” says Kate Crawford, one of the authors of the report. “The use of AI systems for classification, detection, and prediction of race and gender is in urgent need of re-evaluation.”
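That re-evaluation starts with disaggregated measurement: reporting error rates for each subgroup rather than a single aggregate score. A minimal sketch of the idea, using invented audit records rather than output from any real system, might look like this:

```python
from collections import defaultdict

# Invented (subgroup, predicted_gender, actual_gender) audit records,
# for illustration only.
results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),   # misclassified
    ("darker-skinned women", "female", "female"),
]

def error_rates_by_group(records):
    """Report the error rate per subgroup, not one aggregate figure."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += predicted != actual
    return {g: errors[g] / totals[g] for g in totals}

print(error_rates_by_group(results))
# {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
# An aggregate accuracy of 75% would have hidden that gap entirely.
```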
The opportunities for AI are amazing but we need people to adopt the capabilities – and they will if they can trust it.
James Luke, IBM
As a business that operates in 180 countries, global telecoms giant Ericsson understands how imperative it is to have diverse data sets.
“Non-discrimination, non-bias, is something we need to check all the time,” says Elena Fersman, research director of artificial intelligence at Ericsson Research.
“We are trying to overcome that potential bias in data by using very varied data sets. How do we make sure the data sets are diverse? It has to come from diverse teams. Having a diverse organisation helps us to create more diverse data sets and then the algorithms are behaving in a more correct way. It’s all about symbiosis of the AI algorithms and humans.”
For Ericsson, “it’s about prioritisation” and ensuring all aspects of its business and how people use its products are considered. For example, its machine learning needs to prioritise things like someone doing remote surgery, or operating heavy machinery, and not just entertainment or gaming.
Another example of the benefit of using diverse data sets can be seen in L’Oréal’s new AI-powered spot diagnosis tool.
Co-developed with dermatologists, the algorithm has been trained on more than 6,000 dermatologists’ patient photos covering all ethnicities and genders and a range of skin conditions, each of which has been graded by acne experts.
Similarly, Oral B’s latest smart toothbrush combines the knowledge of thousands of human brushing behaviours to assess individual brushing styles and coach users towards better habits. The AI technology is able to track where in the mouth people are brushing and give personalised feedback on the areas that need more attention.
From pearly whites to finance, Virgin Money is another brand that sees “immense potential” in this area. It is currently using AI to gain a deeper understanding of its customers so it can improve the customer experience, enhance risk modelling, and enable more effective and targeted marketing spend.
The transformative effects of AI will be felt especially quickly in the financial services industry. According to IHS, the global business value of AI in banking is on track to reach $300bn (£230bn) by 2030.
But with potentially tens of millions of banking and financial services jobs set to be affected in the coming decade, this is a sector that will need to work hard to ensure the algorithms it uses are trained on diverse data and applied fairly.
“AI is poised to challenge and blur our concepts of computing and the ‘natural’ human. AI technology will reconfigure the financial industry’s structure, making the banking sector more humane and intelligent,” says IHS analyst Don Tait.
Marketing’s role in humanising AI
The majority of consumers, however, are more frightened (52%) about the future impact of AI on society than excited (48%), according to research by Kantar.
And given the growing list of examples of AI gone wrong in recent years, it is hardly surprising there are big challenges around getting people to trust and adopt this kind of tech.
People’s willingness to use even a simple AI-powered chatbot is polarised. According to the Kantar research, 39% of global consumers have no problem talking to an automated bot if it means their question is answered faster, but 33% ‘completely object’.
This is where marketers can step in.
“The first wave of AI development has been led tech-outward. The second wave will have to be human-centric and human-outward, and that’s where brands and marketers can play a really big role because they bring their human perspective to the technologists,” explains Tara Prabhakar, global director of qualitative at Kantar’s insights division.
“The language we use when we are thinking about technology and the creation of AI tends to be quite dehumanised. So long as you think your technology is meant to drive traffic, you are not really thinking about the person that you are using this technology on. How many times is this person likely to receive communication, how much targeting is happening? You’re not thinking about the annoyance value that you are creating.”
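One concrete way to take that ‘annoyance value’ seriously is a frequency cap on communications. The sketch below is hypothetical – the cap of three contacts a week is an assumed policy for illustration, not an industry standard:

```python
from collections import deque
import time

MAX_CONTACTS_PER_WEEK = 3           # assumed policy, not an industry rule
WINDOW_SECONDS = 7 * 24 * 3600

contact_log: dict[str, deque] = {}  # user_id -> timestamps of past sends

def may_contact(user_id: str, now: float | None = None) -> bool:
    """Return True (and record the send) only if the cap is not yet hit."""
    now = time.time() if now is None else now
    log = contact_log.setdefault(user_id, deque())
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()               # forget sends older than the window
    if len(log) >= MAX_CONTACTS_PER_WEEK:
        return False                # cap reached: skip this message
    log.append(now)
    return True
```

A serving pipeline would call a check like this before every send, so ‘don’t annoy this person’ is enforced in code rather than left as an afterthought.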
READ MORE: Helen Edwards – use technology to show more humanity, not less
Indeed, Ericsson’s AI and marketing teams work together to make sure as much customer insight goes into the algorithms as possible.
“That’s very important because that gives outside-in perspective,” Fersman explains. “In my team of researchers, they are pretty inside-out driven but we cannot do it only inside-out. We need to bring in the outside-in perspective and that puts it into context. Then context can be translated to the objective function: what is it exactly that we need to optimise and how do we want it to be in the end?”
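What translating context into the objective function might look like can be sketched as a composite score in which customer-insight terms enter as weighted penalties. The terms and weights below are invented for illustration and are not Ericsson’s actual formulation:

```python
# All terms and weights are invented for the sketch; a real objective
# would be tuned to the product and its users.
def objective(engagement: float, annoyance: float, fairness_gap: float) -> float:
    W_ENGAGEMENT = 1.0   # value delivered to the business
    W_ANNOYANCE = 0.5    # penalise over-contacting people
    W_FAIRNESS = 2.0     # penalise unequal outcomes across groups
    return (W_ENGAGEMENT * engagement
            - W_ANNOYANCE * annoyance
            - W_FAIRNESS * fairness_gap)

# A campaign with high engagement but a large fairness gap can score
# worse than a modest, even-handed one:
print(objective(engagement=0.9, annoyance=0.2, fairness_gap=0.3))  # ≈ 0.2
print(objective(engagement=0.6, annoyance=0.1, fairness_gap=0.0))  # ≈ 0.55
```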
An important example of forward-thinking, human-first innovation can be seen in the development of the world’s first genderless voice assistant this year, built to reflect the growing number of people who define themselves as gender-neutral.
Q is modelled on the voices of hundreds of people who identify as male, female, transgender or non-binary and was tested on over 4,600 people who were asked to rate the voice on a scale of 1 (male) to 5 (female).
In a world of Alexa, Siri and Cortana, where softly-spoken subservient female voices are the norm, Q highlights the importance of diversification in everyday customer communications.
Looking ahead, brands will need to think carefully about how they adapt gender and the tone of their voice for different consumer groups and purposes.
But what if the fundamental belief systems and language these algorithms are built on are, as one scientist puts it, completely “fucked up”?
We are trying to overcome bias in data by using very varied data sets. How? It has to come from diverse teams.
Elena Fersman, Ericsson Research
Armed with a PhD in quantum physics and artificial intelligence from Princeton, Heidi Dangelmaier set up female-led innovation lab Girlapproved. For the past 12 years, Dangelmaier and her team of young female scientists from all over the world have been doing experiments on the female brain to build a new language, metrics and algorithm for the female mind.
“Right now we don’t have the mechanism to create things that are universally beneficial,” she says. “There is a deep, pre-existing bias before we even get to algorithms. The bias is in the paradigm we’re using to understand nature. This is a big problem.”
Dangelmaier believes the problem with current algorithms driving culture and economics is that they are all built on male-made paradigms of science, humanity and design. As such, data on the female brain is “full of errors”.
“The fundamental algorithms that have been running our understanding of human agencies, human expression, human evolution, are fucked to no end,” she says. “Unless we correct those, AI is an illusion in marketing.”
Rules, regulation and responsibility
As AI becomes increasingly integrated in society, the likes of Google, Facebook, Microsoft and Amazon, alongside governments and regulatory bodies, will need to work fast to ensure they are minimising unethical AI practices before it is too late.
More stringent rules and guidelines will no doubt come into place. Earlier this year the UK government launched an investigation into the potential for bias in algorithmic decision-making in society.
Now the EU has released guidelines to encourage ethical AI development, including principles around diversity, non-discrimination and fairness; privacy and data governance; societal and environmental well-being; and accountability.
Automated decision-making based on personal data – including profiling – is also covered by the EU’s General Data Protection Regulation (GDPR): where such a decision has significant effects on a person, it must be necessary to perform a contract, authorised by law or based on explicit consent.
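As a thought experiment, that rule can even be expressed as a pre-flight check in code. The sketch below is purely illustrative – an assumed encoding of the rule, not legal advice or a real compliance library:

```python
from enum import Enum, auto

class LawfulBasis(Enum):
    CONTRACT_NECESSITY = auto()  # necessary to perform a contract
    AUTHORISED_BY_LAW = auto()   # authorised by law
    EXPLICIT_CONSENT = auto()    # based on the person's explicit consent

def may_automate_decision(has_significant_effect: bool,
                          basis: LawfulBasis | None) -> bool:
    """Allow an automated decision only when the GDPR-style rule is met."""
    if not has_significant_effect:
        return True              # the restriction targets significant effects
    return basis is not None     # otherwise one of the three bases is required

assert not may_automate_decision(True, None)
assert may_automate_decision(True, LawfulBasis.EXPLICIT_CONSENT)
```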
“Having an unbiased, trusted AI is not a nuisance; this is something that is really important because it’s good for business,” says James Luke, chief technology officer of IBM, which helped to develop the EU’s AI guidelines.
“The opportunities for AI to improve our lives are amazing but we need people to adopt the capabilities and they will only adopt it if they can trust it. So getting these ethics right is not just about the moral stance, it’s the business as well.
“One of the questions I often ask clients upfront is: can what you’re doing – what you want to do – be done by a human being at present? That’s a good test when you’re starting your AI journey. If it can’t be done by a human being, why do you assume that a machine can do it?”
Opinions are divided over whether AI will ever need more specific regulation. Luke believes any rules will come through existing trade bodies, while Ericsson’s Fersman believes bad practice will force regulation into being.
“The drawback if we are using algorithms that are not optimised is of course, in the end, there will be regulations that will be stopping the algorithms because it is being detected and seen by society that it is not optimal,” she says.
“This is not where we want to be. I want to have my AI as a trusted partner, something I can rely on, and for me it’s much more important that it’s all correct by construction and behaves in a good way. It’s a prerequisite – then performance and optimisation, that comes next.”
For better and for worse, artificial intelligence is transforming the world we live in. It is impossible to predict exactly what that future world will look like, but we can say with a large degree of certainty that machines and algorithms will only become more intelligent, and that they will have ever greater influence on marketing, society and human behaviour.
Any brand, business, developer or data scientist that does not work to tackle algorithm bias is complicit in accelerating social inequality.
Good business ethics are no longer a nice-to-have; they are a necessity, paving the way for a fair future in which society can thrive. Without them, we are opening ourselves up to long-term chaos and destruction.