Twenty-odd years ago, I was a Computer Science student at Queen’s University in Belfast. It was time to choose modules for my final year and my (more intelligent) peers all told me to choose Computer Algebra. “It’s a cakewalk”, they said. “Easy exam”, they said. “You don’t even need to go to the lectures”, they said. It was right up my street. There was another module that caught my eye though: “210CSC306 – Artificial Intelligence”.
The module looked at concepts like fuzzy logic (the idea of varying degrees of truth, as opposed to the 1s and 0s of traditional computing) and computational linguistics, Chomsky's grammars, building computer programs that understood human language using rule-based systems. Sounds sexy, right? Well, as you might imagine, it was difficult.
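To give a flavour of the fuzzy logic idea: instead of a statement being strictly true (1) or false (0), membership in a set is a degree between 0.0 and 1.0. Here's a toy sketch in Python (my own hypothetical example, not anything from the course):

```python
# Fuzzy membership: "is it hot?" isn't a yes/no question.
# Below 20°C isn't hot at all, above 35°C is fully hot,
# and in between the answer is a matter of degree.

def hot_membership(temp_c: float) -> float:
    """Degree (0.0 to 1.0) to which a temperature counts as 'hot'."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    # Linear ramp between the two thresholds
    return (temp_c - 20) / 15

print(hot_membership(10))    # 0.0 — definitely not hot
print(hot_membership(27.5))  # 0.5 — somewhat hot
print(hot_membership(40))    # 1.0 — definitely hot
```

Traditional computing forces that middle case to round to a 1 or a 0; fuzzy systems carry the 0.5 through the whole calculation.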
In mainstream popular culture, the idea was taking off too. Steven Spielberg wrote and directed 2001's "A.I. Artificial Intelligence", based on Brian Aldiss's 1969 (gasp!) short story "Supertoys Last All Summer Long". It told the story of a future of intelligent machines and the advent of an artificial human companion capable of love. Is 'utopian sci-fi' a genre?
Fast forward a few years and AI as a term has taken a backseat to a new hotness: Machine Learning (ML). Everyone is taking better photos on their iPhones because of ML. You can search for dogs in your pictures because Google has really advanced its image processing, benefiting from your hard work solving captchas for them. Privacy is a concern, though: Apple was doing all its learning on the device, while Google was doing it in the cloud, with your photos. Commoditisation of ML means that data (your data, all of the data) is helping computers make predictions and even complete tasks they hadn't been explicitly programmed for. Welcome to Deep Learning.
But AI is making a comeback. "Large Language Models" or "LLMs" like the one behind ChatGPT have gone from fringe research tools to the mass market; ChatGPT became the fastest-growing consumer application in history. With help from Microsoft (read: $10 billion), you've got AI in your browser and AI in your word processor. Soon there'll be AI in your toaster. This is conversational, two-way, contextually relevant text. It's nothing like the rule-based systems of the early 2000s, but its pedigree has caused some issues. Deep Learning learns from, for example, the entire web: all of the hate speech on Twitter, all of the memes on Reddit, and so on. That creates an AI with biases, and researchers have to compensate, pouring in positivity to balance it out. And while some work with the best of intentions, others work towards things like the Palantir platform and AI warfare. Now we're in a world full of moral quandaries, with human decision-making being replaced by AI 'objective analysis'.
Just last week I watched a 2022 movie called "The Artifice Girl", written and directed by Franklin Ritch. The basic premise is that an AI chatbot of sorts is tasked with catching child predators (wow, how movies have changed). My favourite question raised in the movie was "How does the AI feel about being used to interact with (but ultimately catch) the worst of human society?" As AI gets closer to being "good enough" to pass the Turing test, the question we must ask is "Are we good enough for them?".