There is no denying that Google has been the most crucial resource for IT professionals for the past two decades. Most IT professionals are adept problem solvers, and there are only a handful of IT issues that cannot be resolved by the vast resources of the internet. As AI-powered search engines continue to emerge, a question arises: will Google be replaced by GPT-4-based chatbots and search engines like ChatGPT and Bing Chat?
By Michael Pietroforte

If you have attempted to seek solutions for IT issues using the assistance of these novel AI-powered systems, you were most likely impressed with their remarkable performance.

Do you trust me?


There is one small catch. Let me quote from ChatGPT's disclaimer:

ChatGPT may produce inaccurate information about people, places, or facts.

The ChatGPT disclaimer admits that responses can be inaccurate


It may be premature to conduct a poll on this subject, given that AI-powered search engines are still in their infancy. However, Microsoft appears to have confidence in the readiness of this technology, as they have made GPT-4-based Bing Chat accessible to the general public.

I would like to know about your experiences using Bing Chat or ChatGPT to search for IT-related information.


How reliable are ChatGPT and Bing Chat?


My opinion on the reliability of ChatGPT shifted after submitting a very personal prompt. Please cast your vote before reading my article, as I want your choice to remain unbiased.

  1. Thomas 4 months ago

    The problem with ChatGPT & co. is that:
    1) They make claims without citing a source.
    2) They merely parrot what they have absorbed from the Internet and other data sources.
    3) So, for example, a Russian ChatGPT is quite likely to color all its answers with Putin’s ideology.
    4) And since plenty of unhealthy ideology circulates in the democratic world as well, ChatGPT will present things as correct that, on closer and more sober inspection, are anything but.
    5) This creates a vicious circle: it won’t take long for users to declare that something must be true because ChatGPT said it. These users feed the Internet, and ChatGPT then learns from it again.

    Just as liars should be held accountable (especially if they are presidents), so should ChatGPT operators be held accountable if their products lie or promulgate harmful ideologies.

    Thanks a lot, kind regards,

  2. Jason Coltrin 4 months ago

    On the one hand, I’ve had success using ChatGPT to create basic shell scripts and PowerShell scripts. On the other hand, it has hallucinated procedures and troubleshooting steps, prompting me to press buttons that do not exist. An IT pro should never forward ChatGPT responses to clients or users without first verifying the veracity of the information it produces.

    • Author

      Yeah, it can accelerate generating code, but you must verify every line. I am afraid many developers will rely too much on AI-generated code. I encountered several cases where ChatGPT fabricated the existence of built-in functions. It seems it confused functions found on the web with built-in functions.

      • In my experience, the result of script generation is not 100 percent reliable; a minimum of troubleshooting must be done before production deployment.
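One cheap safeguard against the hallucinated built-in functions mentioned above is to check programmatically that a name actually exists before trusting AI-generated code that calls it. A minimal sketch in Python (the `normalize_all` name is an invented example of a plausible-sounding function that does not exist):

```python
import importlib

def function_exists(module_name: str, func_name: str) -> bool:
    """Return True if the named module exposes a callable with that name."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False  # the module itself does not exist
    return callable(getattr(module, func_name, None))

# A real standard-library function passes the check ...
print(function_exists("os.path", "join"))           # → True
# ... while a plausible-sounding but invented one does not.
print(function_exists("os.path", "normalize_all"))  # → False
```

The same idea applies in PowerShell, where `Get-Command` can confirm that a suggested cmdlet actually exists before a script goes anywhere near production.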

  3. A factor to consider is also the version: ChatGPT (based on GPT-3.5) uses a neural network model with 175 billion parameters, whereas GPT-4 is much more precise, with a far larger number of parameters, more than 100 trillion. Depending on the version, the result is more or less accurate.

    • Author

      GPT-4 only uses one trillion parameters. Some AI researchers claim the human brain works with 100 trillion parameters. I don’t want to go into the details here, but most neuroscientists will tell you that this is a hopeless underestimation. The human brain is a couple of orders of magnitude more complex.

      It’s worth mentioning that there was a significant increase in the number of parameters between GPT-3.5 and GPT-4 – a factor of 5.7. However, the accuracy only improved by a factor of 1.3. It’s unlikely that we’ll see significant improvements in accuracy anytime soon, as Microsoft has already had to restrict the number of prompts individuals can use due to the extensive resources required by GPT-4.
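The back-of-the-envelope arithmetic behind that 5.7 factor is easy to reproduce (parameter counts as stated in this thread; GPT-4’s true size has not been disclosed, so one trillion is an assumption):

```python
# Parameter counts as used in the discussion above.
gpt35_params = 175e9  # 175 billion (GPT-3.5)
gpt4_params = 1e12    # 1 trillion (GPT-4, the figure quoted above)

growth = gpt4_params / gpt35_params
print(f"Parameter growth: {growth:.1f}x")  # → 5.7x
```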

  4. About ChatGPT integrated into Microsoft Bing: it is probably just an attempt to open another front against Google. From an IT point of view, generative AI will have to be integrated into more specific productivity and development tools. I don’t think it is a real game-changer in this case.

    • Author

      Microsoft doesn’t have much at stake in implementing GPT for their search engine. Bing isn’t a crucial part of their business model, and Microsoft isn’t necessarily known for its reliability. On the contrary, Google has a lot riding on its reputation as the leading search engine. This is a significant reason why they have hesitated to publicize Bard. Google recognizes that this technology hasn’t reached maturity to replace search engines yet.

  5. I find Windows 365 Copilot more interesting: a promising proposal for improving individual productivity.

    • Author

      Undoubtedly, there are practical applications for this technology. It is useful when AI serves as support to humans in accomplishing their tasks. This approach is sensible because if the AI makes any mistakes, the human can correct them.

      I just have doubts about AI-powered search engines. ChatGPT wants to be a Wikipedia for everything. Imagine if 10-20% of Wikipedia’s content was fabricated. How beneficial would it be as an encyclopedia?

      • Bot-generated articles were a problem even before ChatGPT (see Wikipedia:Bot-created articles); in this case, however, generative AI greatly increases the level of automatic content generation. A further problem is one of awareness: beyond possible hallucinations, the generated text becomes indistinguishable from human-written text.

  6. Author

    It seems AI is not even “intelligent” enough to drive a car properly. A whistleblower leaked data about countless complaints and problems with Tesla’s Autopilot:

    More than 2,400 complaints allege sudden unintended acceleration problems. Although Autopilot and FSD have been the focus of headlines for the last few years, during the mid-2010s there were plenty of reports of Teslas taking off on their own accord—at least 232 cases have been reported in the US, although (as often turns out in cases like these) the National Highway Traffic Safety Administration found no evidence for a hardware or software problem, instead blaming driver error.

    More than 1,500 complaints allege problems braking, including 139 cases of phantom braking and 383 cases of phantom stops. In February 2022, we learned that NHTSA had opened a safety investigation into Tesla’s phantom braking problem after it received hundreds of complaints after an article in The Washington Post drew attention to the issue. But the problem has persisted, causing an eight-car collision over Thanksgiving after Tesla opened up its FSD Beta program to all owners.

    Handelsblatt says there were more than 1,000 crashes linked to brake problems and more than 3,000 entries where customers reported safety concerns with the driver assists.

    The problem is related to ChatGPT’s reliability issues. The compute power we have is simply not enough for applications where an AI acts autonomously. Imagine driving at 200 km/h when your Tesla hits the brakes because it hallucinates that Elon Musk is busy tweeting in the middle of the highway. 😉

  7. The Microsoft AI search has really been an issue for me. It constantly sends me off on completely unrelated results that are not even close to what I was searching for.
    Carl Webster (well known in the Citrix world) tried asking ChatGPT about a Citrix documentation script, and it started replaying his own (very well known) script back to him, including with his own variable names etc.
    I tried a couple of PowerShell scripts, and they were flawed but got me going in the right direction. Out of curiosity, I deleted one of the “conversations” and started again, and it went down the exact same path, even after I had asked about substantial flaws in the original answer.

    From a scripting/learning perspective, I do like being able to ask about things I don’t really know or understand, but I know that, at best, I’ll get a framework to start with, and nothing that could be considered ready without very close examination and a lot of correction.

    David F.

    • Author

      I experienced similar cases in ChatGPT. I searched for WordPress hooks because I couldn’t find them on Google. The answers were too good to be true, so I asked about the sources. ChatGPT responded, “Sure, no problem,” and gave me links totally unrelated to its responses. In the end, it turned out that the hook didn’t exist at all.

  8. ChatGPT’s limitations are not confined to IT discussions. A lawyer used ChatGPT to do legal research, cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge:

    Schwartz’s firm has been suing the Colombian airline Avianca on behalf of Roberto Mata, who claims he was injured on a flight to John F. Kennedy International Airport in New York City. When the airline recently asked a federal judge to dismiss the case, Mata’s lawyers filed a 10-page brief arguing why the suit should proceed. The document cited more than half a dozen court decisions, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines” and “Miller v. United Airlines.” Unfortunately for everyone involved, no one who read the brief could find any of the court decisions cited by Mata’s lawyers. Why? Because ChatGPT fabricated all of them. Oops.

    • Author

      It’s a prime illustration of the perilous nature of this technology. The responses are so convincing that individuals are inclined to rely on them. It’s hard for people to fathom that a computer could fabricate such content.

  9. Citation and monetisation of third-party content have been a problem for a while now. The development of digital assistants (e.g., Alexa) has offered little in the way of credit to content authors and certainly no recompense for the use of their works. It feels to me that a world of AI-derived searches will be fairly bleak without some consideration for the authors of the original content.

    • Author

      It is uncertain how search engine providers could effectively compensate site owners for their content. One possible solution would be requiring crawlers to pay for accessing web content. While technically feasible, it remains doubtful whether politicians possess the ability to implement the corresponding laws, given the mess they have created with GDPR.

© 4sysops 2006 - 2023

