The AI-ed Generation
You wake up in the morning to the (sweet?) voice of Alexa, the Amazon AI assistant. Before you even ask, she knows you want the coffee machine turned on.
A steaming espresso is ready by the time you're done brushing your teeth.
To some, that’s sheer dystopia. To others, absolute bliss. But any way you look at it, there’s no denying that AI is here to stay for good.
While there are a zillion blogs and articles on the apparent threats posed by AI, I thought I’d take a detour and spin the bottle around for a round of truth or dare.
First, let’s begin with an idiot’s 30-second crash course on how AI works:
AI runs on a pretty complex algorithm that, once turned on, acts like a self-learning system. Even the creator of the program can’t always explain a particular behaviour of the software agent. That’s because AI works much like a human mind on a budget. Its actions are more of a reactive process, based on its own interpretation of the events happening around it. The interpretation arrived at by the AI’s decision hub is, in turn, a function of two factors:
- The knowledge fed to it at inception, also called the training set. After the initial structuring, the weights for opposing classes of data (right and wrong, black and white) are chosen at random. Then the training, or learning, begins. There are two approaches to training: supervised and unsupervised. Supervised training involves providing the network with the desired output, either by manually “grading” the network’s performance or by supplying the desired outputs along with the inputs. In unsupervised training, the network has to make sense of the inputs without outside help.
- The real-world scenarios thrown at it after training. Based on what it has learned, the neural network (the brain of the AI) interprets these new inputs as right or wrong.
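The supervised case described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not from the article): a single perceptron whose weights start out random, then get nudged toward the desired outputs of a tiny hand-labelled training set — here, learning the OR rule.

```python
import random

# Hypothetical toy example: a single "neuron" learning to label
# inputs as 1 ("right") or 0 ("wrong") from a hand-made training set.
# The weights start out random, exactly as described above.
random.seed(42)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

# Supervised training set: each input comes paired with its desired output.
training_set = [
    ([0.0, 0.0], 0),
    ([0.0, 1.0], 1),
    ([1.0, 0.0], 1),
    ([1.0, 1.0], 1),
]

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

# Perceptron learning rule: whenever the prediction disagrees with the
# "graded" desired output, nudge the weights toward the right answer.
for _ in range(200):
    for x, desired in training_set:
        error = desired - predict(x)
        for i in range(len(weights)):
            weights[i] += 0.1 * error * x[i]
        bias += 0.1 * error

# After training, the network interprets inputs on its own.
print([predict(x) for x, _ in training_set])  # prints [0, 1, 1, 1]
```

Note that the network never sees a rule like “output 1 if either input is 1” — it infers that behaviour purely from the examples it was graded on, which is exactly why the choice of training set matters so much.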
So the process is clearly the same as how humans learn. As children we are taught what’s right and what’s wrong by our elders, and then we are set free. Based on our conditioning, we judge other humans, our surroundings, and pretty much everything else.
Drumroll, because this is where it gets tricky. Consider the following:
- New research published in Nature’s journal Scientific Reports shows how, in a simulation, a network of AI agents autonomously developed not only an in-group preference for agents similar to themselves but also an active bias against those that were different. In fact, the scientists concluded that it takes very little cognitive ability to develop these biases, which means they could pop up in code all over the place.
- An algorithm called COMPAS, used by law enforcement agencies across multiple US states to assess a defendant’s risk of re-offending, was found to falsely flag black individuals almost twice as often as white ones, according to a ProPublica investigation.
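The mechanism behind results like these is easy to reproduce in miniature. The sketch below is entirely hypothetical — it is not the COMPAS model, and the names (`make_person`, the arrest threshold) are invented for illustration. It shows how a model trained on skewed historical data produces skewed outcomes: two groups re-offend at exactly the same true rate, but one group carries more recorded prior arrests, so a naive “flag anyone with many arrests” rule falsely flags its innocent members far more often.

```python
import random

random.seed(0)

def make_person(group):
    # Both groups have the SAME true re-offending rate (30%)...
    reoffends = random.random() < 0.3
    arrests = random.randint(0, 2)
    # ...but group B's records reflect heavier historical policing,
    # so its members accumulate extra prior arrests.
    if group == "B":
        arrests += random.randint(1, 2)
    if reoffends:
        arrests += 1
    return {"group": group, "arrests": arrests, "reoffends": reoffends}

people = [make_person(g) for g in ("A", "B") for _ in range(5000)]

# Naive "model": flag anyone with more than 2 prior arrests as high risk.
def flagged(person):
    return person["arrests"] > 2

def false_positive_rate(group):
    # Among people who do NOT re-offend, how many get flagged anyway?
    innocent = [p for p in people if p["group"] == group and not p["reoffends"]]
    return sum(flagged(p) for p in innocent) / len(innocent)

print(f"False positive rate, group A: {false_positive_rate('A'):.2f}")
print(f"False positive rate, group B: {false_positive_rate('B'):.2f}")
```

The model never looks at the group label at all — the bias rides in on the data, which is the whole point of the article.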
I’m sure you see what I’m getting at.
The world is in bad shape right now. Every day we come across heartbreaking news of bullying and prejudice based on skin colour, race, ethnicity, sexual orientation and so on. The problem, however, is more deep-seated than that. This heartbreaking video shows that even kids, who obviously have far less cognitive capability than adults, are easily conditioned to harbour hate towards others who are “different” from them.
This article is not just about technology.
Unless we exercise caution while creating, training and maintaining AI in a neutral, bias-free way, we as a collective human race are, without doubt, going to face consequences ranging from the inconvenient to the outright catastrophic in the time to come. The same principle applies when guiding future generations, our kids.
Technology has brought us a long way from the time when we were scared of fire and lightning. We are now far more rational, knowledgeable and pragmatic. We are standing at a crossroads of technological evolution: we are the first generation of a civilization powered by artificial intelligence.
So the next time we code a decision-making algorithm or prepare a training set for an AI, the next time we write an appraisal for a junior colleague or hand down a verdict on an accused, the next time we teach our kids what’s right and what’s wrong, let’s keep in mind the journey from the caves to the boardroom, from helpless fear of natural forces to the incredible power to harness them. Let’s be a little more compassionate, a little more accommodating, a little more inclusive.
Let’s remove bias.
OK, firstly, a very good read and nicely written, but I’m a little confused. I understand the human mind has biases and AI picks up experiential learning, but isn’t the entire point of AI to identify any bias in the system and eliminate it?
What I agree with is that AI will probably never have a conscience to help it distinguish what “needs to be done” vis-à-vis what is “good to be done”…
Great question! It’s something of a recursive process; I’ll try to explain. Generally speaking, any AI program is built with a few specific intents in mind and is trained to do a particular set of things. The rules of thumb are taught to it, but real-time interpretation is left to the program itself. Now, just as an external independent consultant is often called upon to find problems in a system, here you would need another AI program — one built, perhaps, on some neutral, unbiased learning — that can tell right from wrong when given the training set of another AI program.
That’s where it gets recursive, isn’t it?
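A crude version of that “auditor” idea can be sketched without any AI at all. The code below is a hypothetical illustration (the `audit` function, the tolerance, and the toy data are all invented): before a training set is handed to another model, a separate check compares the rate of positive labels across a sensitive attribute and flags the set when the rates diverge too much.

```python
# Hypothetical training set for some other AI program: each example
# carries a sensitive attribute ("group") and a supervised label.
training_set = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
    {"group": "B", "label": 0},
    {"group": "B", "label": 1},
]

def positive_rate(rows, group):
    members = [r for r in rows if r["group"] == group]
    return sum(r["label"] for r in members) / len(members)

def audit(rows, tolerance=0.2):
    # Compare positive-label rates across groups; a large gap suggests
    # the labels encode a bias the downstream AI would then learn.
    rates = {g: positive_rate(rows, g) for g in {r["group"] for r in rows}}
    gap = max(rates.values()) - min(rates.values())
    return gap <= tolerance, rates

ok, rates = audit(training_set)
print(ok, rates)  # False here: group A gets positive labels twice as often as B
```

Of course, this only pushes the problem back one level — someone still has to decide what counts as a fair label distribution and what tolerance is acceptable, which is exactly the recursion the reply describes.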