1st October, 2015
Did you know that Ben Ross can tell you how you’re going to die? The statistics show that most Australians will die from one of just ten causes. But what does this have to do with unbelievably intelligent Artificial Intelligence (AI)?
For the answer to that – and more – here are my Top 5 takeaways from Ben Ross’s TEDx talk, What should we do about unbelievably intelligent AI?, which I was lucky enough to hear last Saturday.
In Australia’s top ten causes of death, Alzheimer’s disease comes in at number 3. Alzheimer’s is a form of dementia that damages the brain, and today 1 in 4 Australians over the age of 85 have it.
Alzheimer’s is the only disease in this top ten that we have no way to stop, prevent or even slow down. Anyone living with Alzheimer’s today is relying entirely on advances in technology to solve the problem.
Unlike the human brain, AI isn’t physically constrained in processing power, size or reliability: it can easily be upgraded and reproduced. AI also gives computers instant, global, collective knowledge – anything that happens anywhere can be incorporated into its knowledge base immediately and made available for search. We humans rely on much slower channels, such as social media, to disseminate information.
Technology is advancing fast, but that raises a brand new risk that may be far more dangerous than Alzheimer’s – or any of the top 10.
The risk is that technology will advance to the point where AI is as smart as humans, known as human-level AI. Superintelligence is what happens once AI becomes smarter than a human in every way.
How likely is it that computers will reach a human level of artificial intelligence?
Based on our progress with computing hardware and software, it is very likely. Since 1965, Moore’s Law has accurately predicted that computing power will double roughly every two years, a trend that has now held for 50 years. That steady doubling compounds into an exponential rate of improvement.
This means the smartphones we all carry in our pockets are more powerful than the multi-million-dollar supercomputers of 20 years ago, like Deep Blue.
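To get a feel for what that exponential doubling actually means, here is a minimal back-of-the-envelope sketch in Python. The fixed two-year doubling period and Deep Blue’s rough 11 GFLOPS figure are my illustrative assumptions, not numbers from the talk:

```python
# Back-of-the-envelope: what steady Moore's Law doubling does to compute.
# Assumptions (mine, for illustration): a fixed two-year doubling period,
# and Deep Blue (1997) at roughly 11 GFLOPS peak.

DOUBLING_PERIOD_YEARS = 2
DEEP_BLUE_GFLOPS = 11  # approximate peak of the 1997 chess machine

def projected_gflops(start_gflops: float, years: float) -> float:
    """Project compute after `years` of steady doubling."""
    doublings = years / DOUBLING_PERIOD_YEARS
    return start_gflops * 2 ** doublings

# Ten doublings over 20 years is a ~1,000x improvement - roughly why a
# room-sized 1990s supercomputer's performance now fits in a pocket.
print(f"{projected_gflops(DEEP_BLUE_GFLOPS, 20):,.0f} GFLOPS")  # ~11,264
```

Ten doublings is about a thousand-fold improvement, which is why yesterday’s supercomputer performance keeps turning up in today’s consumer devices.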
These advancements in hardware and software are creating a path for the simple AI we experience today (like Siri, Google Maps and so on) to reach human-level AI in the near future.
With the rate of change we are seeing in technology, we are around 20 to 30 years (depending on who you ask) away from superintelligent AI.
A superintelligent agent could solve most of the problems we face today as humans. Superintelligence could develop a cure for Alzheimer’s and cancer, and fix environmental issues like global warming.
But superintelligent AI could also decide that it doesn’t want or need humans sharing its planet.
In our time on this planet, we have been fortunate to discover many technological advancements, but only a few have had the potential to wipe out humanity – and superintelligence is one of them.
We need to talk about this risk and create rules to protect ourselves.
In the 1960s, in the face of nuclear technology advancements, we humans regulated access to nuclear weapons with intergovernmental non-proliferation treaties to guard against widespread harm.
We did something similar in the 1970s, when biotechnology advanced to the point where we could combine DNA from different species, raising the possibility of viruses and mutations that could have threatened humankind. At that time, thought leaders gathered at the Asilomar Conference Grounds to develop containment guidelines, which have served us well.
Today, we are moving rapidly towards human-level AI – and we have no such guidelines in place. Every day, each of us helps advance the technology through seemingly innocuous activities such as surfing the internet. Every search you run online helps create and curate the fact base for a superintelligent AI – a superintelligence that, today, we have no approach for ensuring we can control.