More than 200 IT pros voted in our Artificial Intelligence polls. Whereas the numbers about the current situation in IT departments met my expectations, I am a bit surprised by the opinions IT experts hold about Artificial General Intelligence.

Artificial Intelligence

Of the total number of participants, 76% declared that they have no plans to introduce AI in production, 12% are already working with AI, and 12% plan to deploy AI in their IT within the year. In other words, 24% will run AI systems in their organization in the foreseeable future.

I have to add a word of caution here. In reality, this number is most likely a bit smaller. With more than 200 participants, the sample is large enough to produce a reasonably stable result; previous polls on 4sysops have shown that after 100 votes, the numbers usually change only slightly. The problem, however, is selection bias: IT pros who are already interested in AI are more likely to take part in the poll, whereas IT people who have never been confronted with the topic at work more often ignore it altogether.
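To put a rough number on the pure sampling error (a back-of-the-envelope sketch of my own, not part of the poll analysis), the standard margin-of-error formula for a proportion gives roughly plus or minus six percentage points for the 24% figure at about 200 votes:

import math

def margin_of_error(p, n, z=1.96):
    # Approximate 95% margin of error for a proportion p observed among n votes,
    # assuming a simple random sample (which, as noted above, this poll is not).
    return z * math.sqrt(p * (1 - p) / n)

print(f"+/- {margin_of_error(0.24, 200) * 100:.1f} percentage points")  # roughly +/- 5.9

Keep in mind that this only accounts for random sampling noise; the self-selection effect described above is a separate source of error and is much harder to quantify.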

One thing is for sure, though: AI is on the rise, and it is coming fast. If you haven't noticed this, that's probably because so far AI only plays a role in rather specialized fields where huge amounts of data need to be analyzed. The typical brick-and-mortar IT shop running Windows 7 and Microsoft Office has no need for AI, at least not yet.

I was very skeptical about AI for quite a while. When some AI researchers started to claim that the second AI winter was over, I preferred to think that we were running right into the third AI hype. That was a couple of years ago. I can tell you that I have changed my mind recently.

The reason I previously had doubts is that I was very enthusiastic about AI at a time when most IT people didn't even know what the term meant. About 30 years ago, when I was studying at university, I considered a career in AI. I was coding a lot in Prolog, an AI language based on mathematical logic. In my master's thesis, I compared symbolic AI and neural networks with regard to their natural language processing capabilities.

This work opened my eyes. I was very much impressed by the performance of neural networks, even though the computers in the university's data center could only process networks with a few nodes. I concluded that symbolic AI was a dead end and that the future belonged to neural networks, something almost all AI researchers vehemently denied at the time. History has proven them wrong: today, almost all AI in production is based on neural network technology. Nevertheless, I gave up pursuing AI because I realized that it would take decades before computers would be fast enough to run neural networks of the size needed to make a real impact. That time has now come.

The reason AI applications are now materializing everywhere is not really that AI research has made significant progress. As far as I can see, the networks used today are pretty much the same type as those we worked with decades ago. The only real difference is that computers are now a couple of orders of magnitude more powerful. And because Moore's law still holds, computing power is growing at an exponential rate. Considering that AI systems are already doing remarkable things in real-world production environments, it is easy to predict that we are now on the cusp of a new revolution in computation.
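Here is a quick back-of-the-envelope illustration of how that exponential growth compounds over the roughly 30 years since my university days (assuming the classic two-year doubling period, which is an approximation of my own, not a measured figure):

# Rough Moore's-law arithmetic: assumed doubling every 2 years over 30 years
years = 30
doubling_period = 2  # years per doubling; an assumption, real periods vary
speedup = 2 ** (years / doubling_period)
print(f"~{speedup:,.0f}x more raw compute")  # about 32,768x, i.e. four to five orders of magnitude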

Books keep popping up about the why, how, and when of the AI revolution. The most interesting text about the topic I recently read is AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee, the former president of Google China. In my view, this book is a must-read for everyone working in IT (and everyone who believes in a world order).

Artificial General Intelligence

As mentioned in the introduction, the result of our second poll is somewhat surprising. The most remarkable number is the 19% who believe that we already have AGI. In the article about the poll, I speculated that perhaps some big intelligence agencies already consult AGIs running secretly on the supercomputers in their basements. And if you add the 29% who think that we will reach AGI level within the next 10 years, you have to conclude that the majority of IT pros are not only confident that AGI is possible, they actually expect to make the acquaintance of some kind of HAL 9000 within their lifetime. That's stunning to me.

HAL 9000 in Arthur C. Clarke's Space Odyssey

Whereas I am now quite bullish about AI in general, I am more skeptical than ever about Artificial General Intelligence (AGI). I don't want to offend anyone, but I have to admit now that asking IT people about such a topic is perhaps not such a good idea. You guys are just watching too much sci-fi.

In fact, I feel it doesn't even make sense to let AI researchers predict when we will reach AGI level. The Oxford philosopher Nick Bostrom, who is quite popular in the AI community, published the results of a survey in his book Superintelligence, where he asked AI scientists about the arrival dates of human-level machine intelligence (HLMI). This is the result (Kindle location 653):

10% chance: 2030

50% chance: 2050

90% chance: 2100

So it appears that IT experts and AI researchers share a similar optimism (or pessimism, if you are afraid of AGI) with regard to the arrival of HLMI. Just in case you plan to read Bostrom's book (I don't really recommend it for IT people because it is hardcore philosophy), I predict you will be more scared than excited about AGI.

As mentioned above, I am neither excited nor scared. I belong to the smallest group in our poll (2%); that is, I believe we will never build AGI. It is outside the scope of a little blog post to lay out the argument for this view in detail. Suffice it to say that the real experts on this matter, who I believe are the neuroscientists, are far more skeptical about the topic. In fact, I am not aware of a single neuroscientist who believes that we will ever be able to run a complete simulation of a human brain in real time on a computer.

The reason is that the complexity of the brain is mind-boggling, something most AI researchers hopelessly underestimate. They usually claim that it is not necessary to copy all the features of a real brain to reach AGI. They believe that they only have to replicate the functionality of the brain at some highly abstract level.

The truth is that computer scientists have used this argument since Alan Turing (the war hero after whom the Turing test was named), who wondered more than 80 years ago whether his electromechanical machine, called the bombe, was actually "thinking" when it cracked the code of the Enigma cipher machine. After all, his machine could accomplish something that the best mathematicians were unable to do.

It is exactly this enormous respect for a machine that seems superior in a domain previously confined to human intelligence that leads computer scientists to the false belief that we just need to add a bit more of Moore's juice to create a computer that can pass a kindergarten test. But all Turing really did was automate the deciphering of the Enigma code. Yes, automation is already an old chestnut in IT.

But make no mistake about it. Whereas AI will be no match for the general intelligence of a toddler any time soon, highly specialized AI systems are already replacing many white-collar workers, and I do believe that literally no job is safe from AI. Thus, I very much recommend taking a closer look at the AI services at your cloud provider next door. Those who can build and manage AI systems are certainly on the safe side because this kind of work requires general intelligence. Also, read Kai-Fu Lee's book to get a taste of the disruptions we can expect in the years to come.


What’s your take? Already bored with DevOps and interested in doing something really cool in IT?

6 Comments
  1. Alen Mikic 3 years ago

    AI is the biggest thing ever, and it will only get bigger with time. Asking "Is AI the next big thing in IT?" in 2020 is not a serious question! The answer is "of course it is!" The only thing that's up for debate is when it will exceed human intelligence and whether it will be the thing that ends the human race or makes us near gods.

    • Author

      I wonder what makes you so confident? Are you working with AI?

      • Alen Mikic 3 years ago

        I'm not directly professionally involved with AI, but I'm following developments closely and have a few good friends who are deeply involved in AI as software engineers/architects. It is just my opinion, and time will show.

  2. Author

    I have also followed the developments closely for the last 30 years or so, and I have not seen any kind of progress with regard to AGI. Today I "talked" to one of Microsoft's chat bots that is supposed to provide support for Microsoft 365. This was a total embarrassment. The machine didn't "understand" the simplest sentences. I asked "How can I get an invoice?" and the silly bot answered with something totally unrelated. After 3 attempts I had to give up. Even ELIZA, which was created more than 50 years ago, did a much better job.

  3. Alen Mikic 3 years ago

    I would tend to agree that not much progress has been made so far, but progress is exponential. I believe once it really takes off (my guesstimate is around 2030), we will see unbelievable progress in the 10-15 years after that. So probably by 2050 we will have human-level AGI (again, my guess based on info I gathered over time; I'm not an expert, nor do I pretend to be!). BTW, what do you think about GPT-3 and IBM Watson?

    • Author

      I wouldn't say that there was "not much progress." With regard to AGI, there has been exactly 0 progress. So far, nobody has ever been able to build something that has any kind of general intelligence.

      The main progress with regard to AI was the insight that decades of research in symbolic AI were a waste of time and resources. The same people who make these unrealistic predictions about the "exponential progress" in AI (Kurzweil, for example) were unable to predict that symbolic AI was a dead end. Thus, their new predictions are not credible at all. AI researchers have been making such unrealistic predictions for decades.

      There has also been only little progress in the design of neural networks. As mentioned in the article, AI is beginning to have an impact because computers are now fast enough to run neural networks that can do useful things. However, because these specialized AI systems will not lead to any kind of AGI no matter how fast our computers are, progress toward AGI is still 0.

      GPT-3 is a nice tool, but hopelessly overhyped. It can create nice texts, but these texts are only the result of a statistical analysis of syntactic structures; that is, for GPT-3, the text it creates has no meaning at all. There is no semantics involved. Thus, it has exactly 0 understanding of these texts because it does not know what the symbols in the text refer to in the real world. You can feed GPT-3 meaningless chains of symbols, and it will then create meaningless chains of symbols, as long as there is a coherent syntactic structure in the chain of symbols. Just as with ELIZA, people are easily fooled by its performance.
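      To illustrate what "statistics over symbols without semantics" means, here is a deliberately crude word-level Markov chain (a toy illustration of purely statistical text generation; GPT-3 is of course a vastly larger neural language model, but the point about missing meaning is the same):

      import random
      from collections import defaultdict

      def train(text):
          # Record which word follows which; the model captures nothing but co-occurrence.
          words = text.split()
          model = defaultdict(list)
          for current, following in zip(words, words[1:]):
              model[current].append(following)
          return model

      def generate(model, start, length=12):
          # Emit words by sampling observed successors; no notion of meaning is involved.
          word, output = start, [start]
          for _ in range(length):
              if word not in model:
                  break
              word = random.choice(model[word])
              output.append(word)
          return " ".join(output)

      corpus = "the bot answered the question the bot ignored the invoice the user asked the bot"
      print(generate(train(corpus), "the"))

      Feed it coherent text and it produces text-shaped output; feed it meaningless symbol chains and it produces meaningless symbol chains just as happily.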

      Watson didn't impress me at all. It is old-fashioned symbolic AI. Watson is essentially just a search engine. I find it funny when people say Watson "learned" on its own by "reading" Wikipedia and other texts on the internet. This is like claiming that the Google bots "read" the internet. It was silly to let a search engine compete against humans in a quiz show. It is like letting the best mathematicians compete against a pocket calculator. Pointless, isn’t it?
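      To make the search-engine comparison concrete, here is a minimal keyword-overlap retriever (a toy sketch of plain document retrieval in general, not of Watson's actual DeepQA pipeline, whose details I am not reproducing here):

      def score(question, document):
          # Rank a document by how many question words it shares; no understanding required.
          q_words = set(question.lower().replace("?", "").split())
          d_words = set(document.lower().split())
          return len(q_words & d_words)

      documents = [
          "HAL 9000 is the computer in the movie 2001: A Space Odyssey",
          "ELIZA was an early chat program written in the 1960s",
          "The Enigma cipher machine was used by the German military",
      ]
      question = "Which machine used the Enigma cipher?"
      print(max(documents, key=lambda d: score(question, d)))

      A retriever like this can look impressive on quiz-style questions without having any grasp of what a cipher or a machine actually is.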
