Poll: How reliable are ChatGPT and Bing Chat? - Tue, May 23 2023
If you have tried to solve IT problems with the help of these novel AI-powered systems, you were most likely impressed with their remarkable performance.
There is one small catch. Let me quote from ChatGPT's disclaimer:
ChatGPT may produce inaccurate information about people, places, or facts.
It may be premature to conduct a poll on this subject, given that AI-powered search engines are still in their infancy. However, Microsoft appears to have confidence in the readiness of this technology, as they have made GPT-4-based Bing Chat accessible to the general public.
I would like to know about your experiences using Bing Chat or ChatGPT to search for IT-related information.
My opinion on the reliability of ChatGPT shifted after submitting a very personal prompt. Please cast your vote before reading my article, as I want your choice to remain unbiased.
The problem with ChatGPT & Co. is that:
1) They claim things without citing a source.
2) They just parrot what they have heard (from the Internet and other data sources).
3) So, for example, a Russian ChatGPT is quite likely to color all answers with Putin’s ideology.
4) And because plenty of unhealthy ideology can also be found in the democratic world, ChatGPT presents things as correct that, on closer and more sober inspection, turn out to be unhealthy.
5) This creates a vicious circle: it won’t take long for users to declare that something must be true because ChatGPT said it. These users then feed the Internet, and ChatGPT learns from it all over again.
Just as liars should be held accountable (especially if they are presidents), so should ChatGPT operators be held accountable if their products lie or promulgate harmful ideologies.
Thanks a lot, kind regards,
Thomas
It will be most interesting to see how many lawsuits Microsoft will face because of the false information they spread with Bing Chat.
Zero, due to the disclaimer.
Only ChatGPT has the disclaimer; Bing Chat does not.
On the one hand, I’ve had success using ChatGPT to create basic shell scripts and PowerShell scripts. On the other hand, it has hallucinated procedures and troubleshooting steps, prompting me to press buttons that do not exist. An IT pro should never forward ChatGPT responses to clients or users without first verifying the veracity of the information it produces.
Yeah, it can accelerate code generation, but you must verify every line. I am afraid many developers will rely too much on AI-generated code. I encountered several cases where ChatGPT fabricated the existence of built-in functions; it seems to have confused functions it found on the web with built-in ones.
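One way to catch such fabrications before running a generated snippet is to let the PowerShell parser list every command the code calls and then check whether each one resolves on your system. This is just a minimal sketch; the sample snippet and the Get-MagicReport cmdlet are invented for illustration:

```powershell
# Sanity-check AI-generated PowerShell without executing it: parse the
# snippet, extract every command it calls, and verify that each command
# actually exists on this system.

$code = @'
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
Get-MagicReport -Format HTML   # a cmdlet an AI might simply invent
'@

$tokens = $null
$parseErrors = $null
$ast = [System.Management.Automation.Language.Parser]::ParseInput(
    $code, [ref]$tokens, [ref]$parseErrors)

# Collect the unique command names referenced in the snippet
$commandNames = $ast.FindAll(
    { $args[0] -is [System.Management.Automation.Language.CommandAst] },
    $true) |
    ForEach-Object { $_.GetCommandName() } |
    Where-Object { $_ } |
    Sort-Object -Unique

# Warn about every command that does not resolve locally
foreach ($name in $commandNames) {
    if (-not (Get-Command -Name $name -ErrorAction SilentlyContinue)) {
        Write-Warning "'$name' does not exist on this system."
    }
}
```

Of course, this only proves that the commands exist; it says nothing about whether the generated logic is correct.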
In my experience, the result of script generation is not 100 percent reliable; at a minimum, some troubleshooting must be done before production deployment.
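Along the same lines, a cheap pre-deployment check is to run the PowerShell parser over the generated file and surface syntax errors before a single line executes. A minimal sketch, where Deploy-Generated.ps1 is just a placeholder name for the generated script:

```powershell
# Parse a generated script file and report syntax errors without running it.

$scriptPath = '.\Deploy-Generated.ps1'   # placeholder path
$tokens = $null
$parseErrors = $null

[System.Management.Automation.Language.Parser]::ParseFile(
    $scriptPath, [ref]$tokens, [ref]$parseErrors) | Out-Null

if ($parseErrors.Count -gt 0) {
    # List each syntax error with its location in the file
    foreach ($err in $parseErrors) {
        Write-Warning ("Line {0}: {1}" -f $err.Extent.StartLineNumber, $err.Message)
    }
} else {
    'No syntax errors found. The logic still needs a manual review.'
}
```

A clean parse is only the first hurdle; hallucinated cmdlets, wrong parameters, and flawed logic will still pass it, which is why a test run in a lab environment remains essential.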
Another factor to consider is the version. ChatGPT (based on GPT-3.5) uses a neural network model with 175 billion parameters; GPT-4, in contrast, is much more precise, with a much larger number of parameters: more than 100 trillion. Depending on the version, the results are more or less accurate.
GPT-4 only uses one trillion parameters. Some AI researchers claim the human brain works with 100 trillion parameters. I don’t want to go into the details here, but most neuroscientists will tell you that this is a hopeless underestimation; the human brain is a couple of orders of magnitude more complex.
It’s worth mentioning that there was a significant increase in the number of parameters between GPT-3.5 and GPT-4: from 175 billion to roughly one trillion, a factor of 5.7. However, the accuracy only improved by a factor of 1.3. It’s unlikely that we’ll see significant improvements in accuracy anytime soon, as Microsoft has already had to restrict the number of prompts individuals can use due to the extensive resources required by GPT-4.
As for the ChatGPT integration in Microsoft Bing, it is probably just an attempt to open another front against Google. From an IT point of view, generative AI will have to be integrated into more specific productivity and development tools. I don’t think it is a real game-changer in this case.
Microsoft doesn’t have much at stake in implementing GPT for their search engine. Bing isn’t a crucial part of their business model, and Microsoft isn’t necessarily known for its reliability. On the contrary, Google has a lot riding on its reputation as the leading search engine. This is a significant reason why they have hesitated to release Bard to the public. Google recognizes that this technology hasn’t matured enough to replace search engines yet.
I find Windows 365 Copilot more interesting; it's a promising proposal for improving individual productivity.
Undoubtedly, there are practical applications for this technology. It is useful when AI serves as support to humans in accomplishing their tasks. This approach is sensible because if the AI makes any mistakes, the human can correct them.
I just have doubts about AI-powered search engines. ChatGPT wants to be a Wikipedia for everything. Imagine if 10-20% of Wikipedia’s content was fabricated. How beneficial would it be as an encyclopedia?
Bot-generated articles were a problem even before ChatGPT (see Wikipedia:Bot-created articles); in this case, however, generative AI vastly increases the scale of automatic content generation. And another problem arises: beyond possible hallucinations, the generated text becomes indistinguishable from human-written text.
Seems AI is not even “intelligent” enough to drive a car properly. A whistleblower leaked data about countless complaints and problems with Tesla’s autopilot:
The problem is related to ChatGPT’s reliability issues. The compute power we have is simply not enough for applications where an AI acts autonomously. Imagine you are driving at 200 km/h and your Tesla hits the brakes because it hallucinates that Elon Musk is busy tweeting in the middle of the highway. 😉
The Microsoft AI search has really been an issue for me. It constantly sends me off on completely unrelated results that are not even close to what I was searching for.
Carl Webster (well known in the Citrix world) tried asking ChatGPT about a Citrix documentation script, and it started replaying his own (very well-known) script back to him, including his own variable names, etc.
I tried a couple of PowerShell scripts, and they were flawed but got me going in the right direction. Out of curiosity, I deleted one of the “conversations” and started again, and it went down the exact same path, even though I had asked about substantial flaws in its original answer.
From a scripting/learning perspective, I do like being able to ask about things I don’t really know or understand, but I know that, at best, I’ll get a framework to start with, and nothing that could be considered ready without very close examination and a lot of correction.
David F.
I experienced similar cases with ChatGPT. I searched for WordPress hooks because I couldn’t find them on Google. The answers were too good to be true, so I asked about the sources. ChatGPT responded, “Sure, no problem,” and gave me links totally unrelated to its responses. In the end, it turned out that the hook didn’t exist at all.
ChatGPT’s limitations are not confined to IT discussions … A lawyer used ChatGPT to do legal research, cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge:
Schwartz’s firm has been suing the Colombian airline Avianca on behalf of Roberto Mata, who claims he was injured on a flight to John F. Kennedy International Airport in New York City. When the airline recently asked a federal judge to dismiss the case, Mata’s lawyers filed a 10-page brief arguing why the suit should proceed. The document cited more than half a dozen court decisions, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines” and “Miller v. United Airlines.” Unfortunately for everyone involved, no one who read the brief could find any of the court decisions cited by Mata’s lawyers. Why? Because ChatGPT fabricated all of them. Oops.
It’s a prime illustration of the perilous nature of this technology. The responses are so convincing that individuals are inclined to rely on them. It’s hard for people to fathom that a computer could fabricate such content.