
Surviving Progress: the downside of AI

Artificial intelligence has become a pervasive technology, affecting businesses across industries throughout the world. In recent decades, AI research has gone from relative obscurity to enormous influence. The objectives of AI research, however, are shaped by several factors: the personal preferences of researchers, the interests of academic institutions, the availability of funding, and related work in adjacent fields. These factors shape patterns in research topics, as well as who benefits from the research…and who doesn’t.

A recent study has shown that the majority of AI research disproportionately favours “the needs of research communities and large firms over broader social needs”. It also found that consideration of AI’s negative consequences is extremely rare. A big part of the problem is the private sector’s pursuit of profit above all else.

This is only the latest research to point out flaws within the AI industry. It is well known by now that AI can have built-in bias. Princeton computer science professor Olga Russakovsky explains, “A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities…it’s a challenge to think broadly about world issues”. This lack of diversity within the AI industry can be harmful, particularly because the industry rarely clarifies how, why and to whom specific biases cause harm. If AI algorithms have a built-in bias, they will produce biased results. This can lead to unintended consequences, like Microsoft’s Twitter chatbot that turned racist within a day, or Amazon’s automated recruitment tool, which favoured male candidates.
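To make that mechanism concrete, here is a minimal sketch in Python. The data, the group labels “A” and “B”, and the predict_hire rule are all hypothetical and deliberately simplistic (this is not Amazon’s actual system); the point is only to show how a model trained on skewed historical decisions reproduces the skew.

```python
# A minimal sketch: a toy "recruiter" learns from hypothetical historical
# hiring decisions. If those decisions were skewed, so is the learned rule.

from collections import defaultdict

# Hypothetical history: (group, hired?) pairs reflecting a biased past.
history = [("A", True)] * 90 + [("A", False)] * 10 + \
          [("B", True)] * 30 + [("B", False)] * 70

# "Training": estimate the hire rate per group from the biased history.
counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total seen]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict_hire(group: str) -> bool:
    """Recommend hiring whenever the historical hire rate exceeds 50%."""
    hired, total = counts[group]
    return hired / total > 0.5

# The model simply echoes the bias it was trained on.
print(predict_hire("A"))  # True  -- group A keeps being favoured
print(predict_hire("B"))  # False -- group B keeps being rejected
```

No one wrote “prefer group A” into the code; the preference is inherited entirely from the data, which is how biased inputs quietly become biased outputs.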

The progress trap 

AI, by and large, is developed to increase the efficiency of processes, and thereby profit. As long as this remains the main focus of AI, we run the risk of these drivers being optimised at the expense of other things we find important: fairness, equal opportunities, privacy, the continued existence of the human race…

One of the values AI research papers most frequently invoke to justify themselves is “building on previous work”. But by continually building on previous work without solving the problems within it, the negative aspects of AI research go unresolved. Research moves on, and so does AI. Society learns to adapt to new technologies, and by then the benefits of comfort and efficiency are far too great to give up. As long as human ingenuity drives us to keep developing AI into something that cements a significant role in our everyday lives, we may be creating something that ends up doing more harm than good. This, in short, is the progress trap: the condition human societies experience when, in pursuing progress through ingenuity, they inadvertently introduce problems they lack the resources or political will to solve, for fear of short-term losses in status, stability or quality of life. Take Twitter, for example: created to bring people together and help them learn from one another, it is now a racist, homophobic, death-threat-loving platform with the occasional semi-funny cat meme.

Trust in AI

As society falls deeper and deeper into this progress trap, trust in artificial intelligence is diminishing. A recent survey found that only 9% of respondents felt very comfortable with businesses using AI to interact with them. And it’s easy to see why, when even leading brands like Microsoft and Amazon can’t seem to get it right. By relying solely on AI data, organisations risk neglecting the needs of a large proportion of the population (i.e. anyone who is neither white nor male). So, until the AI industry can ensure genuine equality, brands should tread carefully when using AI research. After all, it only tells part of the story.


To read about the importance of human context, check out this blog post.

P.S. Like what you see? Subscribe to our Signals, a weekly spark of inspiration.