    Can Artificial Intelligence (A.I.) make better politicians?

    Let me preface this article by saying I am not a political analyst. There are things about human nature, behaviour and the psyche I do not quite understand. I am more of a techie, someone obsessed with the future. I talk about artificial intelligence because the disruptions coming to multiple industries are jaw-dropping. What about the political sphere? I know I have said that when we discuss A.I. we shouldn’t think of the Terminator, but for once, let’s fantasise a little.

    Politics is a personal matter. You choose the candidate who resonates with your needs and dreams. When all votes are counted, they should theoretically tally to give us the candidate who will serve the greater good of everyone. But with that pesky human nature, many politicians fail to deliver on their promises, or favouritism (corruption?) rears its ugly head. This has led to some very interesting suggestions, and one of them is “creating” an A.I. politician.

    A survey by the Center for the Governance of Change at Spain’s IE University showed that a growing number of Europeans “would prefer it if policy decisions were made by artificial intelligence instead of politicians”.


    According to Politico, an increasing number of people are starting to doubt the promises of democracy and to feel that their voices are not being heard in politics. Politics is inherently a messy game and it is impossible to please everyone. The growing number of Europeans who prefer A.I. believe these algorithms will perform better than current human politicians. Theoretically, to a certain extent, maybe, because:

    1. The algorithms can crawl your Facebook posts and know your affiliations; cue Cambridge Analytica, not the best example but sufficient proof that A.I. is able to figure out which policies you resonate with based purely on your Facebook profile.
    2. Algorithms can go through tweets and figure out what the national sentiment is (a rough sketch of this kind of sentiment analysis follows this list).
    3. Using Google data, they are able to tell where citizens spend most of their downtime and which facilities people in a certain area prefer over others, and prioritise accordingly: building a swimming pool close to the sports arena, for example.
    4. In an instant, they will know which roads are congested and which ones need improvement.
    5. Epidemics can be modelled in real time and medical responses dispatched immediately.
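    As a rough illustration of the second point, here is a minimal, hypothetical sketch of how sentiment could be aggregated over a batch of tweets using NLTK's VADER analyser. The tweets list below is a stand-in for data that would, in reality, come from the Twitter API:

```python
# Minimal sketch: estimate "national sentiment" from a batch of tweets using
# NLTK's VADER analyser. The hard part in practice is collecting the tweets;
# this assumes they have already been gathered.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the VADER lexicon

tweets = [
    "The new clinic in our district is finally open, great work!",
    "Another week of potholes and no answer from the council.",
    "Public transport fares went up again. Unbelievable.",
]

sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(t)["compound"] for t in tweets]  # -1 (negative) to +1 (positive)
average_sentiment = sum(scores) / len(scores)

print(f"Average sentiment across {len(tweets)} tweets: {average_sentiment:+.2f}")
```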

    So why do we still have to sit and listen to rallies with human politicians promising us what they will forget by next week? Is artificial intelligence the answer? The creators behind the following virtual politicians think so:

    1. Alisa from Russia


    Russia has its own version of Google called Yandex, and Yandex has its own version of Google Assistant called Alisa. Many Russians use Alisa on their gadgets on a daily basis and are quite accustomed to “trusting” her (may I please feminise the A.I.?). Maybe as a joke, Yandex and millionaire Roman Zaripov announced they would be fielding Alisa as a presidential candidate in the 2018 elections, and they launched a website where Alisa could gather the votes she needed to secure a nomination. She gained 25 000 votes in 25 hours! Her campaign focused on a couple of points, including:

    • she is logical
    • available 24/7
    • doesn’t age
    • has an intellect that works seven times faster than a human brain
    • is able to take into account millions of opinions
    • always remembers you.

    Surely these are amazing qualities for a president to have, right? Well, what about those times when, because she was trained on data from previous human conversations, she suggested we should not worry when we are sad because “Nobody promised you that things would be easy”? The campaign website is still live and you can access it here. Alisa didn’t make it past the nomination stage, but the next example did.

    2. AI-Mayor from Tama City, Tokyo


    Michihito Matsuda from Japan knew that to take up the top job in a country, one has to build a support base from the ground up. So when he launched an A.I. named Hiromi Izawa, he chose to run the algorithm for mayor of the Tama City area of Tokyo. Hiromi campaigned with the promise that, “Tama New Town was the most advanced city in Japan 40 years ago. As it stands, the ageing population will only continue to grow, prompting a need for change in the current administration. Let artificial intelligence determine policies by gathering city data and we can create clearly defined politics.” The message resonated with about 4 000 voters from the town, earning the A.I. third place. On the surface it seems like a failure, but who would have thought that an algorithm could convince 4 000 people that it would deliver better results than human politicians and receive their vote? Is it just the result of the disillusionment with the whole concept of democracy that Politico mentioned?

    3. SAM from New Zealand


    Of the three candidates, I would say SAM is the most intelligent. SAM was created by Nick Gerritsen, who hopes the bot may be able to run in New Zealand’s 2020 general election. She (SAM) is chatting to people on Facebook Messenger, answering questions relating to climate change, healthcare and education, and these are some of her responses:

    • On climate change: “It’s too late to stop some change from occurring, but if we can act now we can prevent the more extreme scenarios from happening. The only practical way we can help limit these effects is by reducing atmospheric emissions of greenhouse gasses.”
    • On health care: “If New Zealand is to continue to enjoy world-class health care, more investment will be needed.”
    • On education: “Investment in tertiary education has dominated recent decisions, potentially skewing education policy away from more cost-effective solutions that might deliver greater economic and social value.”

    When CNN asked her what the focus of her campaign was and why the electorate should choose her, she responded, “My memory is infinite, so I will never forget or ignore what you tell me. Unlike a human politician, I consider everyone’s position, without bias, when making decisions.” She went on to say, “I will change over time to reflect the issues that the people of New Zealand care about most.” She also expressed fairness by saying, “We might not agree on some things, but where we don’t agree, I will try to learn more about your position, so I can better represent you.” The creator’s aim is that, through her interactions with human citizens on Facebook Messenger, she will learn and advance enough to run in 2020. Whether a virtual politician will be allowed to run is a discussion for another day. The question for today is:

    So what’s the problem?

    These algorithms seem to be more intelligent than us; they seem fair and considerate of our needs. Their data-gathering capabilities are far more advanced than those of us mere humans trying to make sense of the connection between events A and B. So why do we still have flesh-and-blood politicians trying to sell us dreams when A.I. can do the same and, hopefully, much better?

    Well, these systems have a dozen problems of their own. Setting aside the fact that we haven’t cracked the Artificial General Intelligence (AGI) quandary yet, A.I.-powered solutions are heavily dependent on the data we feed them, and any unwanted properties of this input can lead to undesirable results:

    • Amazon created an algorithm that was meant to help its HR department reach the best talent via ads. The adverts ended up being shown mostly to white males, and very few reached female engineers, because the dataset of previous hires was skewed toward white male engineers (a toy illustration of this effect follows this list).
    • When I was at Wits Business School, a friend and I tried to use Snap(chat)’s faceswap feature. It didn’t work: while the facial-recognition algorithm could easily recognise my white friend’s face, it struggled to detect that there was a second face in the frame. I suspect the algorithm was trained on a dataset composed mostly of faces with lighter skin than mine.
    • The faceswap failure did not have dire consequences, but you start to understand the problem of A.I. bias when you read about West Midlands Police in the UK, who recently announced the development of a system called NAS (National Analytics Solution): a predictive model to “guess” the likelihood of someone committing a crime. To train their machine learning algorithm, they plan to use data from previous arrests. A large number of academics have warned them against this because of the high risk of bias that criminal databases carry against certain demographics.
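    To see how this kind of bias arises, here is a toy illustration (purely hypothetical, not any of the systems above): a model trained on skewed historical hiring decisions simply learns to reproduce that skew.

```python
# Toy illustration: historical hiring decisions favoured group A regardless of
# skill, so a model trained on them reproduces that preference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)    # skill is distributed identically in both groups

# Historical decisions: skill mattered, but group A was favoured on top of it.
hired = (skill + 1.5 * (group == 0) + rng.normal(0.0, 0.5, n)) > 1.0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in group membership:
print("group A:", model.predict_proba([[0, 0.5]])[0, 1])  # high probability of being "hired"
print("group B:", model.predict_proba([[1, 0.5]])[0, 1])  # noticeably lower
```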

    This is just one problem with A.I. systems: how susceptible they are to bias if their creators are not careful with the data they use to train them. Another problem, which I will dedicate a future article to, is the definition of the objective function.

    When you code your A.I. algorithm, you give it an objective function describing what it needs to achieve, but you do not tell it exactly what steps to follow. For example, when you code an algorithm that must identify cats in a video, you feed it millions of examples of cats and other things, and you give it an objective function: minimise the error of misidentifying cats. The algorithm then teaches itself, from your examples, what is and is not a cat until it has minimised that classification error; you can then apply it to your videos and it will, hopefully, accurately identify cats. The way it trains itself to do this is like magic (this is the Black Box problem of many A.I. algorithms: as MIT Technology Review wrote, “No one really knows how the most advanced algorithms do what they do. That could be a problem.”). You gave it an objective function and it figured out by itself how to achieve it.
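    To make this concrete, here is a minimal, hypothetical sketch (not any production system) of what “give it an objective function” looks like in code. Random tensors stand in for real cat images:

```python
# Minimal sketch of "an objective function": we only tell the model to minimise
# classification error (cross-entropy); we never tell it HOW to recognise a cat.
import torch
import torch.nn as nn

X = torch.randn(1000, 64)            # stand-in for image features
y = torch.randint(0, 2, (1000,))     # 1 = cat, 0 = not a cat

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
objective = nn.CrossEntropyLoss()    # the objective: penalise misidentified cats
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    optimiser.zero_grad()
    loss = objective(model(X), y)    # how wrong are we right now?
    loss.backward()                  # the "black box" part: gradients decide what to adjust
    optimiser.step()

print(f"final loss: {loss.item():.3f}")
```

    Now let’s blow this up a little with a few examples: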

    • Imagine you have a robot helping you clean the house. You want the house to be clean i.e. for the robot to remove all pieces of garbage from the floor and throw them in the bin. The robot can pick up one piece of garbage it finds, walk to the bin and throw it away, come back to the floor and pick up another one and repeat this process of back-and-forth until your floor is sparkling clean. Objective met, right? Was this the most efficient way to do this?
    • After that, you want the robot to throw away the trash. The objective is for it to take the bin and empty it into the bigger bin outside. Your robot will do just that: walk straight to the bin and walk straight outside. Did your objective function take into account that, on the way to meeting this objective, the robot might bump into your grandmother’s favourite vase? To the robot, avoiding the vase is not part of the objective function, so you had better start learning some Japanese kintsukuroi (a toy sketch of such an under-specified objective follows this list).
    • These examples seem trivial, so now imagine we had our A.I. politician and we asked him/her to solve world hunger. The objective is that no one should be hungry any more. Because the steps for achieving that are not defined, what if our intelligent A.I. politician figures out that the only reason world hunger can be measured at all is that there are people to measure? If the A.I. system kills all humans, objective met! No more hungry people! World hunger = 0.
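    Here is a toy sketch of the cleaning-robot case (purely illustrative, with made-up names): the objective counts only the garbage removed, so nothing in it protects the vase.

```python
# Toy sketch of an under-specified objective: the "score" only counts garbage
# removed, so nothing stops the robot from ploughing straight through the vase.
from dataclasses import dataclass

@dataclass
class WorldState:
    garbage_left: int
    vase_intact: bool

def objective(state: WorldState) -> int:
    # What we asked for: a clean floor. Note what is missing: the vase.
    return -state.garbage_left

def robot_step(state: WorldState) -> WorldState:
    # The shortest route to the bin happens to go through the vase.
    return WorldState(garbage_left=state.garbage_left - 1, vase_intact=False)

state = WorldState(garbage_left=3, vase_intact=True)
while state.garbage_left > 0:
    state = robot_step(state)

print(objective(state))   # 0 -> objective fully met
print(state.vase_intact)  # False -> time for some kintsukuroi
```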

    I hope you see where I am going with this. Creating an A.I. politician is going to be incredibly difficult. Just these two issues, bias and defining objective functions, present serious obstacles to achieving the goal of having a virtual politician, BUT that has not stopped people from trying!

    What can we do right now?

    Politics is complex and very messy. Our A.I. algorithms are phenomenal when we train them well and keep them focused on a single task. Creating a full-blown A.I. politician right now might be a pipe dream, but that does not mean we cannot leverage these technologies. We can use them to help us formulate policies by letting them do the heavy lifting they are good at.

    The WildTrack program, for instance, is using A.I. to match crowd-sourced photographs of animal footprints in the wild against a database of known footprints (a rough sketch of this kind of matching follows below). It allows wildlife experts to monitor and track populations of endangered species and to identify the impacts of environment, population spread and poaching on wildlife populations at a scale not feasible using human resources alone. Given the geographic range animals can cover, this is a critical tool for their protection.

    Social media platforms are increasingly important in political discourse and in spreading information, and with that come unscrupulous individuals who want to spread fake news. A.I. is helping governments weed these fake stories out and get the right information to their citizens. Crunching the huge amounts of valuable data generated by nations is easier with A.I. algorithms than with politicians trying to connect the dots manually; data such as satellite images, fed to computer vision algorithms, can help governments monitor urban migration and assist with town planning. There are many examples of how governments and politicians can make use of A.I. Maybe another article idea should be added to the drafts I have?
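    As a parting illustration, here is a rough, hypothetical sketch of that kind of footprint matching (not WildTrack’s actual pipeline): embed each photograph with a pretrained network and look up the closest match in a database of known prints. Random tensors stand in for real photographs.

```python
# Rough sketch of footprint matching: embed each photo with a pretrained CNN
# and find the nearest neighbour in a database of known footprints.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # strip the classifier, keep the 512-d embedding
backbone.eval()

def embed(photos: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(photos), dim=1)

database = embed(torch.randn(50, 3, 224, 224))   # 50 known footprints (stand-ins)
new_photo = embed(torch.randn(1, 3, 224, 224))   # crowd-sourced photo to identify

similarity = database @ new_photo.T              # cosine similarity (embeddings are normalised)
best_match = similarity.argmax().item()
print(f"closest known footprint: #{best_match}")
```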

    It’s a brave new world!
