Artificial Intelligence
Of the total number of participants, 76% declared that they have no plans to introduce AI into production, 12% are already working with AI, and 12% plan to deploy AI in their IT within the year. In other words, 24% will be running AI systems in their organizations in the foreseeable future.
I have to add a word of caution here. In reality, this number is most likely a bit smaller. The more than 200 participants in the poll certainly make for a statistically meaningful result; previous polls on 4sysops have shown that after 100 votes, the numbers usually change only slightly. However, the problem is that IT pros who are already interested in AI are more likely to take part in the poll, whereas IT people who have never been confronted with the topic at work more often ignore the poll altogether.
One thing is for sure, though: AI is on the rise, and it is coming fast. If you haven't noticed this, that's probably because so far AI only plays a role in rather specialized fields where huge amounts of data need to be analyzed. The typical brick-and-mortar IT shop running Windows 7 and Microsoft Office has no need for AI—not yet.
I was very skeptical about AI for quite a while. When some AI researchers started to claim that the second AI winter was over, I preferred to think that we were running right into the third AI hype. That was a couple of years ago. I can tell you that I have changed my mind recently.
The reason I previously had doubts is that I was once very enthusiastic about AI, at a time when most IT people didn't even know what the term meant. About 30 years ago, when I was studying at university, I considered a career in AI. I was coding a lot in Prolog, an AI language based on mathematical logic. In my master's thesis, I compared symbolic AI and neural networks with regard to their natural language processing capabilities.
This work opened my eyes. I was very much impressed by the performance of neural networks, even though the computers in the university's data center could only process networks with a few nodes. I concluded that symbolic AI is a dead end and that the future belongs to neural networks, something almost all AI researchers vehemently denied at the time. But the opposite of what they predicted has occurred: today, almost all AI in production is based on neural network technology. Nevertheless, I gave up pursuing AI because I realized that it would take decades until computers would be fast enough to run neural networks of the size needed to make a real impact. That time has now come.
The reason AI applications are now materializing everywhere is not really because AI research has made significant progress. As far as I can see, the networks used today are pretty much of the same type as those we worked with decades ago. The only real difference is that computers are now a couple of orders of magnitude more powerful. And because Moore's law still holds, computing power is growing at an exponential rate. Considering that AI systems are already doing remarkable stuff in real-world production environments, it is easy to predict that we are now on the cusp of a new revolution in computation.
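To illustrate what those networks of decades past looked like, here is a toy sketch in plain Python (the weights are hand-set rather than learned, purely for illustration): a minimal two-layer feedforward network that computes XOR, a function famously beyond the reach of a single perceptron. Today's deep networks are essentially stacks of this same building block, just vastly larger.

```python
import math

def sigmoid(x):
    """Classic squashing activation used since the early days of neural nets."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_w, hidden_b, out_w, out_b):
    """One hidden layer feeding one output unit: a textbook feedforward pass."""
    hidden = [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_w, hidden_b)]
    return sigmoid(sum(w * h for w, h in zip(out_w, hidden)) + out_b)

# Hand-set weights that make the network compute XOR:
# the first hidden unit behaves like OR, the second like NAND,
# and the output unit combines them (XOR = OR AND NAND).
hidden_w = [[6.0, 6.0], [-6.0, -6.0]]
hidden_b = [-3.0, 9.0]
out_w = [8.0, 8.0]
out_b = -12.0

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b], hidden_w, hidden_b, out_w, out_b)))
```

Running it prints the XOR truth table (0 0 0, 0 1 1, 1 0 1, 1 1 0). The point of the sketch is that nothing here is new: weighted sums and nonlinear activations are exactly the machinery we had decades ago, only now scaled up by vastly faster hardware.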
Books keep popping up about the why, how, and when of the AI revolution. The most interesting text about the topic I recently read is AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee, the former president of Google China. In my view, this book is a must-read for everyone working in IT (and everyone who believes in a world order).
Artificial General Intelligence
As mentioned in the introduction, the result of our second poll is somewhat surprising. The most remarkable number is the 19% who believe that we already have AGI. In the article about the poll, I speculated that perhaps some big intelligence agencies already consult AGIs running secretly on the supercomputers in their basements. And if you add the 29% who think that we will reach AGI level within the next 10 years, you have to conclude that the majority of IT pros are not only confident that AGI is possible, they actually expect to make the acquaintance of some kind of HAL 9000 within their lifetime. That's stunning to me.
Whereas I am now quite bullish about AI in general, I am more skeptical than ever about Artificial General Intelligence (AGI). I don't want to offend anyone, but I have to admit now that asking IT people about such a topic is perhaps not such a good idea. You guys are just watching too much sci-fi.
In fact, I feel it doesn't even make sense to let AI researchers predict when we will reach AGI level. The Oxford philosopher Nick Bostrom, who is quite popular in the AI community, published the results of a survey in his book Superintelligence, in which he asked AI scientists about the arrival date of human-level machine intelligence (HLMI). This is the result (Kindle location 653):
10% chance: 2030
50% chance: 2050
90% chance: 2100
So it appears that IT experts and AI researchers share a similar optimism (or pessimism, if you are afraid of AGI) with regard to the arrival of HLMI. Just in case you plan to read Bostrom's book (I don't really recommend it for IT people because this is hardcore philosophy), I predict you will be more scared than excited about AGI.
As mentioned above, I am neither excited nor scared. I belong to the smallest group in our poll (2%); that is, I believe we will never build AGI. It is beyond the scope of a little blog post to lay out the argument for this view in detail. Suffice it to say that the real experts on this matter, who I believe are the neuroscientists, are far more skeptical about the topic. Actually, I am not aware of a single neuroscientist who believes that we will ever be able to run a complete simulation of a human brain in real time on a computer.
The reason is that the complexity of the brain is mind-boggling, something most AI researchers hopelessly underestimate. They usually claim that it is not necessary to copy all the features of a real brain to reach AGI. They believe that they only have to replicate the functionality of the brain at some highly abstract level.
The truth is that computer scientists have used this argument since Alan Turing (the war hero after whom the Turing test was named), who wondered more than 80 years ago whether his electromechanical machine, called the bombe, was actually "thinking" when it cracked the code of the Enigma cipher machine. After all, his machine could accomplish something that the best mathematicians were unable to do.
It is exactly this enormous respect for a machine that appears superior in a domain previously confined to human intelligence that leads computer scientists to the false belief that we just need to add a bit more of Moore's juice to create a computer that can pass a kindergarten test. But all that Turing really did was automate the deciphering of the Enigma code. And yes, automation is already an old chestnut in IT.
But make no mistake about it. Whereas AI will be no match for the general intelligence of a toddler any time soon, highly specialized AI systems are already replacing many white-collar workers, and I do believe that literally no job is safe from AI. Thus, I very much recommend taking a closer look at the AI services at your cloud provider next door. Those who can build and manage AI systems are certainly on the safe side because this kind of work requires general intelligence. Also, read Kai-Fu Lee's book to get a taste of the disruptions we can expect in the years to come.
What's your take? Already bored with DevOps and interested in doing something really cool in IT?