In 2016, Microsoft pulled the plug on Tay (short for "Thinking About You"), a chatbot designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users on Twitter. According to Microsoft CEO Satya Nadella, Tay was "an important influence on how Microsoft is approaching AI" and taught the company the importance of taking accountability.
It can be argued that as we come to depend on data and on technology to make decisions, we also need to consider the implications such dependence has on the outcomes.
Unbiased machines
Brandon Purcell, VP and principal analyst at Forrester, is quick to point out that there are two types of bias.
"One is algorithmic bias, which is machine learning algorithms finding useful patterns and anomalies in data. The bias lies in the data itself: with algorithmic bias, the data used to teach the machine is not representative of the entire population," he explained.
The other type is human bias. "All the past inequity has now been codified in the data training these systems to make decisions, and machine learning algorithms pick up on disparities in the way that we make decisions about groups," he added.
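Purcell's point about unrepresentative training data can be illustrated with a minimal sketch. The scenario below is entirely hypothetical (invented groups, scores, and thresholds, not from the article): a simple classifier is trained on a sample dominated by one group, and its learned decision rule then performs noticeably worse on the underrepresented group.

```python
import random

random.seed(0)

# Hypothetical setup: group "A" qualifies above score 60, group "B" above
# score 40, but the training sample is 95% group A.
def make_applicant(group):
    score = random.uniform(0, 100)
    qualified = score > (60 if group == "A" else 40)
    return score, qualified, group

train = [make_applicant("A") for _ in range(95)] + \
        [make_applicant("B") for _ in range(5)]

# "Learn" the single decision threshold that maximizes training accuracy.
def best_threshold(data):
    candidates = sorted(s for s, _, _ in data)
    return max(candidates, key=lambda t: sum((s > t) == q for s, q, _ in data))

threshold = best_threshold(train)

# Evaluate on a representative test population: half A, half B.
test = [make_applicant("A") for _ in range(500)] + \
       [make_applicant("B") for _ in range(500)]

def accuracy(data, group):
    rows = [(s, q) for s, q, g in data if g == group]
    return sum((s > threshold) == q for s, q in rows) / len(rows)

print(f"learned threshold: {threshold:.1f}")
print(f"accuracy on group A: {accuracy(test, 'A'):.2f}")
print(f"accuracy on group B: {accuracy(test, 'B'):.2f}")
```

Because group A dominates the training data, the learned threshold settles near group A's qualifying score, and qualified group B applicants scoring between the two thresholds are systematically rejected. The bias comes from the sample, not from any rule written into the code.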
Impact of bias on AI development and adoption
Purcell warned that while the people creating these systems are aware that AI bias is real, many companies still want to adopt AI because it is the new, hot, shiny object.
"With AI, you start to optimize decisions at scale, and in the rush for AI, many companies are overlooking some of the potential pitfalls. One of the best emerging practices is that companies have started talking to impacted stakeholders about how to ensure the system does not harm them," he warned.
Questions leaders must ask about biases
He suggested that corporate values around AI need to be a top-down initiative. Many companies, even governments and non-governmental organizations, have adopted ethical AI principles such as trust, transparency, fairness, and accountability.
"The problem is that companies that have created the ethical frameworks find it challenging to transform them into everyday practice. A layer of accountability below the top level of leadership ensures that the data scientists creating the systems are not defining these values themselves," he continued.
The road to ethical AI
Purcell acknowledged that much still needs to be done before ethical AI becomes a reality.
Shannon Vallor, a professor at the University of Edinburgh in Scotland, said, "Bias is precisely what we want these systems to have. We just want them to have the right type of bias."
Purcell said this remains true today. "The beauty of machine learning is that it is great at identifying differences between groups of people and exploiting those differences in ways that are sometimes very useful," he commented.
Forrester’s take on unbiased AI
"There are over 20 different mathematical representations of fairness from an AI perspective, but broadly two different ways to measure it," said Purcell. "One is focused mostly on equality, which means equalizing accuracy across groups. The other is focused more on equity of outcome. These are two very different approaches and could lead to two very different outcomes. The best practice is to measure multiple metrics."
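The distinction Purcell draws can be made concrete with a small, invented example (the numbers below are illustrative, not from the Forrester report): the same set of loan decisions can look fair under an equality-of-accuracy metric and unfair under an equity-of-outcome metric.

```python
# Hypothetical toy data: (group, predicted_approval, actual_repayment).
records = (
    [("A", 1, 1)] * 5 + [("A", 1, 0)] + [("A", 0, 0)] + [("A", 0, 1)]
    + [("B", 0, 0)] * 4 + [("B", 0, 1)] * 2 + [("B", 1, 1)] * 2
)

def accuracy(group):
    rows = [(p, y) for g, p, y in records if g == group]
    return sum(p == y for p, y in rows) / len(rows)

def approval_rate(group):
    approvals = [p for g, p, _ in records if g == group]
    return sum(approvals) / len(approvals)

# Equality-style metric: accuracy is identical across groups (0.75 each),
# so the model looks fair by this measure.
print("accuracy:", accuracy("A"), accuracy("B"))
# Equity-style metric: approval rates differ sharply (0.75 vs 0.25),
# so the same decisions look unfair by this measure.
print("approval rate:", approval_rate("A"), approval_rate("B"))
```

This is why measuring multiple metrics matters: a single fairness number can certify a system that another, equally defensible definition would flag.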
He conceded that once the policies are in place and adhered to regularly, compliance is not that expensive.
"But if you are in a regulated industry, like financial services, using AI, and it has been found that your AI lending algorithm has a disparate impact on minorities, you could be fined a lot of money. There is also a real opportunity: by using AI more ethically, you could extend your products and services to new groups of people," concluded Purcell.
Click on the Chatbot player and listen to Purcell discuss in detail the nuances around building an unbiased AI.
- One of the attributes of machines is that they are "supposedly" unbiased, executing based on a pre-defined set of "rules". And yet, studies from the World Economic Forum and commentaries from Harvard Business Review suggest AI is biased. Where does the fault (if any) lie? In the code? In the algorithms?
- Would you consider these concerns about AI bias as having a significant impact on how AI will be adopted in commercial environments?
- What should leadership ask of their data science/AI research teams to mitigate the risks that may come from perceived AI bias?
- In your view, how far away are we from achieving ethical AI?
- You contributed to the Forrester report, How to Measure AI Fairness. What was the conclusion of the report?