These words come from a quote by Brad Smith, the current President of Microsoft, on how the company views the deployment of Artificial Intelligence. The full quote is:
Artificial intelligence is already changing society and empowering people in new ways by enabling breakthrough advances in areas like healthcare, agriculture, education and transportation. As this technology continues to grow, we will work to deploy AI around the world **ethically, inclusively, and with transparency** to ensure that it works for everyone.
The words I put in bold caught my attention when I read that quote, and I decided to write this article about them. Whether we know it or not, we are becoming more and more dependent on A.I. algorithms. Actually, scratch that. We have been reliant on these algorithms for years now. They are only becoming more applicable and disruptive because of a convergence of factors: the enormous amounts of data generated every day, increasing computing power at falling costs, and better algorithms. This growing ubiquity of A.I. systems presents challenges, some of which go beyond the technical aspects of building them:
1. Ethical challenges
One of the biggest challenges in developing an A.I. algorithm is the potential for bias, in either the algorithm itself or the data we train it on. Let’s say your algorithm is meant to help you assess whether a banking client deserves to get a loan or not. You might expect this algorithm to be objective and devoid of emotion. The problem is that machine learning algorithms need to be trained on data from loans you have issued previously before they can accurately predict whether future clients will be good debtors. This is where bias can creep in. If your dataset of previous loans is prejudiced against clients of a certain demographic, your A.I. algorithm, which is supposed to be objective, will carry that bias forward. Algorithmic bias is introduced when these systems are trained on data that is incomplete or skewed, which can negatively impact those who are underserved by traditional financial offerings. When such clients come to your A.I.-powered online loan application service, the algorithm runs the risk of reproducing the biases in your training data and judging applicants unfairly. The question, then, is: how do you deploy your A.I.-powered offering in such a way that it is ethical and does not discriminate against certain demographics?
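To see how this plays out in code, here is a minimal sketch, not any bank’s actual system. The data is synthetic and the features (income, credit score, demographic group) are hypothetical. Historical decisions that disadvantaged one group are fed to a standard classifier, which dutifully learns the prejudice:

```python
# A minimal sketch (synthetic data, hypothetical features) of how historical
# bias leaks into a loan-approval model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: income (in thousands), credit score, demographic group.
income = rng.normal(50, 15, n)
credit = rng.normal(650, 80, n)
group = rng.integers(0, 2, n)  # 0 = majority group, 1 = minority group

# Historical decisions: mostly creditworthiness, but group 1 was
# systematically approved less often -- the bias the model will inherit.
score = 0.04 * income + 0.01 * credit - 0.8 * group
approved = (score + rng.normal(0, 0.5, n) > 8.5).astype(int)

X = np.column_stack([income, credit, group])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, approved)

# Two applicants with identical finances who differ only in demographic group:
applicants = [[50, 650, 0], [50, 650, 1]]
print(model.predict_proba(applicants)[:, 1])  # group 1 gets a lower probability
```

Notice that nothing in the model is malicious; it simply reproduces the pattern baked into its training data.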
2. Inclusivity challenges
Tied very closely to the ethical challenges of deploying A.I.-powered solutions is the inclusivity challenge. How do you make sure the data you are using to train the algorithm is inclusive enough? Artificial Intelligence has the potential to increase inequalities based on race, gender, sexual orientation, religion, nationality, age, or educational or economic background if engineers do not take care to understand the intricacies of their solutions. The other day I searched Google for images of “African schools”. Here is a screenshot:
Google is powered by complex A.I. algorithms, and this is a screenshot of the top results. Six out of the nine images are of dilapidated schools, and it gets even worse as you scroll down the results. Are 95% of our African schools rural? According to Google, the top additional search filters after searching for “African schools” include words such as poor, rural, traditional, and bad. If you type “American schools”, on the other hand, the suggestions include beautiful, modern, high school, poor, and classroom. If the world at large employs A.I. algorithms to help inform critical decisions, won’t this lack of inclusivity introduce biases against under-represented countries? I admit, this is a very complex topic. You can read the research by the International Telecommunication Union titled “Assessing the Economic Impact of Artificial Intelligence” to understand more. Without making this article longer than it should be: given that A.I. algorithms are trained on data, and that the more of it you have, the better your algorithms get, do you agree that the benefits of these systems will potentially be skewed towards companies and countries with large datasets, such as Facebook and Google? The increasing ubiquity of A.I. algorithms will potentially widen the gap between those who can create the best algorithms and those who will just scrape by. Wherever resources are skewed, there is inequality.
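One practical first step is simply auditing representation before training anything. Here is a minimal sketch, using a toy pandas DataFrame with a hypothetical `region` column, that flags under-represented groups in a dataset:

```python
# A minimal sketch of a representation audit before training: check how
# balanced the data is across a hypothetical `region` column.
import pandas as pd

df = pd.DataFrame({
    "region": ["US", "US", "US", "EU", "EU", "Africa"],  # toy stand-in data
    "label":  [1, 0, 1, 1, 0, 0],
})

shares = df["region"].value_counts(normalize=True)
print(shares)  # Africa is ~17% of rows -- underrepresented relative to the rest

# Flag any group that falls below a chosen representation threshold.
threshold = 0.25  # arbitrary cutoff, purely for illustration
print(shares[shares < threshold].index.tolist())  # ['Africa']
```

A check this simple will not fix skewed data, but it at least makes the skew visible before the model inherits it.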
3. Transparency challenges
Anyone who has played around with A.I. algorithms knows the black-box problem all too well. In simple terms, it is often quite difficult to explain why an algorithm makes the decisions that it does. We can measure the accuracy of the algorithm without quite understanding how it works under the hood. For example, you can train an algorithm to figure out the factors that contributed to someone surviving the sinking of the Titanic by training it on a dataset of passengers whose fates you already know. You can measure the accuracy of its predictions because you know who survived, but it might be difficult to explain exactly how the algorithm arrives at them. This problem has spawned a hot research field: explainable artificial intelligence. How do we deploy these algorithms in areas such as education or criminal investigation if we cannot explain exactly what they are doing? The problem is compounded by the fact that many people who are not plugged into this field do not quite understand what A.I. can and cannot do. Media hype clouds reality and fuels impossible expectations. Researchers need to be more transparent by communicating better. Transparency in artificial intelligence is critical if these systems are going to be accepted by the general public.
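To make the black-box problem concrete, here is a minimal sketch using a synthetic stand-in for the Titanic data so it runs without any download; the column names and coefficients are invented for illustration. Accuracy is trivial to measure, and a tool like permutation importance can rank which inputs matter overall, but neither explains any single passenger’s prediction:

```python
# A minimal sketch of the black-box problem on a synthetic stand-in for
# the Titanic data: accuracy is easy to measure, explanations are not.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, n)     # 0 = male, 1 = female
pclass = rng.integers(1, 4, n)  # passenger class 1-3
age = rng.normal(30, 12, n)
X = np.column_stack([sex, pclass, age])

# Synthetic "fates": women and first-class passengers more likely to survive.
logit = 2.2 * sex - 0.8 * pclass - 0.01 * age + 1.0
survived = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, survived, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Measuring accuracy is the easy part...
print("accuracy:", model.score(X_te, y_te))

# ...but the model is hundreds of trees. Permutation importance ranks which
# inputs matter overall, yet still cannot say *why* the model decided any
# individual passenger's fate.
imp = permutation_importance(model, X_te, y_te, random_state=0)
for name, val in zip(["sex", "pclass", "age"], imp.importances_mean):
    print(name, round(val, 3))
```

This is the gap explainable-A.I. research is trying to close: global feature rankings exist, but faithful per-decision explanations remain hard.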
It is good that large and powerful companies like Microsoft are considering these three challenges in how they deploy their A.I.-powered solutions.
It’s a brave new world!