Today I want to talk a little about AI – the hype, the reality, and our fundamentally misplaced objectives (!).
These days we see all kinds of businesses touting ‘AI-powered-this’ and ‘AI-powered-that’. In many cases, these companies are, at best, using some basic machine learning techniques.
Of course, one might then look at Google DeepMind’s AlphaGo, which used training data from thousands of human games plus neural networks to beat the best human players. Soon after, they went a step further and created AlphaGo Zero, which was given only the rules of the game and left alone to train itself through self-play. That’s pretty amazing.
Moving on from these real-life examples, I see a lot of articles ruminating on when we’ll achieve ‘real’ AI. These articles are usually referring to ‘strong’ AI – AI that matches the capabilities of a human mind.
I have been struck by the complete meaninglessness of this as a goal. We already have several billion ‘strong AIs’ in the form of human beings, and human minds are pretty flawed really. What insane force is driving us to make more minds just like our own?
In recent months, I’ve been drawn towards content on false memories and cognitive biases. There is a huge amount of research showing that our (human) ability to gather, process, store, and use information is fundamentally flawed. The linked article and picture from Wikipedia are excellent, displaying some of the (very many) known cognitive biases. I also absorbed some other interesting content on the ‘Mandela effect’ – a name for the strange human tendency to create shared false memories of events that never actually occurred. There’s plenty of evidence out there.
We already have machines that far outpace us in sheer effectiveness across a great many tasks. They can calculate better than us, store information better, recall information more accurately, and have direct global connectivity & interoperability with a network of other machines. They can identify the cat pictures (important, clearly) in a library of 10 million images in a tiny fraction of the time it would take a human. They already manage & control much of our critical infrastructure, and our economy fundamentally revolves around our symbiosis with phones, computers, and the information within them. They are different, and that’s a good thing. We need to lean into that.
Given the above, what should we make of the (linked below) draft US bill that posits that “understanding and preparing for the ongoing development of artificial intelligence is critical” (to the US)? It then defines artificial intelligence as, in effect, computers that are just like humans.
We’re looking at this in a fundamentally wrong way. The human mind is not the pinnacle of intelligence. It feels like we’re back in the days of assuming that the sun and stars revolve around the earth. We need to get over our own egos and drive this exponential computational capability towards a new, better goal. It would be better to build on the inherent strengths of our computational brethren, developing notably different & symbiotic capabilities (vs humans), rather than doggedly bending them towards our own flawed existence. Perhaps we’re too flawed to figure this out?