Hello Future Fans,
I’ve not posted for a while, since I’ve been travelling. Today I want to reconnect with one of my favourite topics: the exponential nature of computing, in terms of power, size, and architectural advancement. I’m surely going to be a little guilty of confirmation bias (the tendency to seek out information that confirms one’s own world view); however, the raw numbers can’t be denied!
Firstly, I wanted to touch on Nvidia’s ‘DGX-2’ announcement. This is basically an AI-focused supercomputer-in-a-box, and it can be yours for only $399,000.
The DGX-2 is remarkable for a few reasons. Firstly, its claimed 2 petaflops (quadrillion calculations) per second puts it in the class of the world’s fastest supercomputers from just 9-10 years ago. IBM’s Roadrunner supercomputer, based at Los Alamos National Laboratory, cost $120 million to build and broke the 1-petaflop barrier back in 2008. These machines aren’t directly comparable for various reasons; however, the pace of change is both evident and exceptional.
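Just for fun, here’s a rough back-of-the-envelope comparison of cost per petaflop, using only the figures above (and again, bearing in mind that the two machines aren’t directly comparable):

```python
# Back-of-the-envelope price-performance comparison.
# Figures are the headline numbers quoted above; peak vs. sustained flops,
# precision, and inflation are all ignored here.
roadrunner_cost_usd = 120_000_000   # IBM Roadrunner, 2008
roadrunner_petaflops = 1.0

dgx2_cost_usd = 399_000             # Nvidia DGX-2, claimed peak
dgx2_petaflops = 2.0

cost_per_petaflop_2008 = roadrunner_cost_usd / roadrunner_petaflops
cost_per_petaflop_dgx2 = dgx2_cost_usd / dgx2_petaflops

improvement = cost_per_petaflop_2008 / cost_per_petaflop_dgx2
print(f"Roughly {improvement:,.0f}x cheaper per petaflop")
```

Even on this crude reckoning, that’s a roughly 600x improvement in cost per petaflop in about a decade.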
Another thing that makes the DGX-2 remarkable is that it achieves 10x AI model performance, and up to 160x performance on very specific tasks, versus Nvidia’s own DGX-1, which was released just LAST YEAR. That’s partly due to more powerful hardware but, more importantly, is driven by architectural changes. Nvidia has developed technologies called NVSwitch and NVLink, which basically allow the internal computational devices to communicate with each other much, much faster; this is a critically important aspect of computation.
Staying on the supercomputer topic for a little while, it’s always nice to visit the supercomputer page on Wikipedia. There’s a table on that page showing the raw power of the fastest supercomputers over time. Looking at the table, you’ll see that it’s pretty common for our fastest supercomputers to double or triple in performance every 1-3 years. That’s a simply ridiculous pace. Back when we were doubling the performance of largely incapable machines, that wasn’t particularly relevant to the world. Now we’re talking about doubling and tripling the performance of machines that are already delivering almost 100 QUADRILLION calculations per second, and there’s no end in sight.
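To put that doubling pace in perspective, here’s a tiny sketch (a hypothetical helper of my own, not derived from any benchmark data) of what a doubling-every-N-years curve compounds to:

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times faster a machine gets after `years`,
    if performance doubles every `doubling_period_years`."""
    return 2.0 ** (years / doubling_period_years)

# Doubling every 2 years, over a decade:
print(growth_factor(10, 2))   # 32.0
# Doubling every year, over a decade:
print(growth_factor(10, 1))   # 1024.0
```

That’s the thing about exponentials: the doubling period matters far more than intuition suggests, because halving it squares the end result rather than merely doubling it.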
Changing tack, I wanted to drill into the very small and mention IBM’s recent announcement of a computer the size of a grain of salt. It’s claimed to be about as powerful as a standard computer from 1990 and, of course, it costs only 10 cents and sips a tiny, tiny fraction of the power that was required in 1990. Note that the picture on the page showing the device on a fingertip is actually 64 of these computers; individually, they are only 1mm x 1mm!
So, what does this all mean? It’s the same as we’ve always known, but I think it’s important to keep coming back to it. We’re rapidly moving deeper into a computationally-driven world, where, over time, almost all decisions will be made by machines rather than human minds. Yes, smarter-than-human AI will emerge (even though, as I’ve pointed out before, that’s a rubbish goal). That smarter-than-human AI will likely become 2-3x more capable every 1-3 years, and will inevitably expand into a globally-connected computational web.

Everything around us will have some computation and/or data in it. Cars will very clearly become mobile supercomputers in their own right, and every can of beans will be chipped; the roads will have tiny sensors; our shoes, socks, plants, blood, soil, sand, and probably even the air will end up packed with tiny information and computational machines. Our best and only option will be to directly interface our minds with these machines, or else we risk becoming largely irrelevant; a curious and quaint historical footnote. This interfacing will squash the equivalent of a billion years’ worth of organic evolution into a small number of decades … a shift of epic proportions, in which we all become part of an interconnected consciousness.
On a less serious note to wrap up, it’s going to be quite cool to order pizza directly from our brains!