ChatGPT hype and why it will end sooner than you think

The general hype around AI isn’t going away anytime soon. AI is appearing in almost every industry, from customer service to medicine. However, a clear technical understanding of what these tools can and cannot do remains elusive.

Large Language Models (LLMs) can mimic human-like conversation, but they cannot truly understand it. They are trained on huge amounts of data to produce a particular output for a given input, yet they lack the ability to grasp the true meaning of the words involved. Any response an LLM generates therefore lacks a basic understanding of context.

While LLMs can produce formal, structured prose and poetry, the writing tends to be uninspiring and dull. OpenAI’s ChatGPT is an LLM-based chatbot that generates new text after training on a huge corpus of data. While teachers fear that ChatGPT’s popularity will put an end to take-home assignments and exams, a close examination of the algorithm behind ChatGPT reveals its inability to produce creative, interesting, human-like prose. That shortcoming raises a fundamental question about the technology’s usefulness for real business cases.

Misplaced fears?

According to industry statistics, the chatbot market is expected to grow at a CAGR of 23.5%, reaching $10.6 billion by 2026.

ChatGPT is a popular generative AI tool, but it is far from the first AI-based chatbot; it competes in a market full of language bots. Still, the free version of ChatGPT gained momentum quickly, attracting 1 million users in just one week. ChatGPT also depends on large numbers of people to classify, tag, label and annotate huge amounts of data to enhance its capabilities. Some technological determinists have even speculated that ChatGPT may replace Google Search.

However, the likelihood of inaccuracies in ChatGPT’s responses forces users to verify them against external sources. Such verification is complicated by the fact that ChatGPT provides specific answers without links to sources (unlike Google) or any indication of its confidence. Fears that it will replace Google therefore seem overstated.

Disadvantages of ChatGPT

As discussed above, ChatGPT can write prose and poetry, answer complex questions and hold conversations, but some shortcomings cannot be overlooked, including:

Wrong answers

ChatGPT is a large LLM that improves the accuracy of its responses through continuous training. But because the model is still relatively new, it has not yet received enough training, and as a result it can give inaccurate answers.

For this reason, Stack Overflow banned answers generated by ChatGPT, saying they were harmful to the community and to users looking for correct answers. And although ChatGPT produces inaccurate answers at a high rate, it delivers every answer with such confidence that the output appears not only correct, but authoritative.

Training Data Limitations

Like all other AI models, ChatGPT is limited by its training data. Gaps, biases and errors in that data can produce inaccurate results, disproportionately affect minority groups and perpetuate stereotypes. Reducing such bias requires greater data transparency.

Sustainability

ChatGPT is free to use, but running the technology is extremely costly: operating expenses are estimated at around $100,000 per day, or $3 million per month. That raises questions about its long-term sustainability. OpenAI’s partnership with Microsoft may offset some of those costs, but the operation is by no means cheap.

Advances in artificial intelligence: a bumpy road ahead

While many tech determinists have called ChatGPT a “code red” for Google, the reality is far from it. Testing has shown that ChatGPT generates thoughtless, inconsistent responses that reveal the system does not understand what it is talking about. And while it guards against offensive responses (a major problem with other generative AI bots), it does so via keyword filters and doesn’t understand what it is filtering out.

The other, more significant problem with ChatGPT is hallucination: it mixes together related material that doesn’t actually answer the question. It essentially paraphrases and combines different pieces of information from its training data, between which there may be only an accidental or tenuous connection. As a result, an answer may sound plausible yet be far from reality.

Unlike traditional chatbots that map keywords to intents, LLMs like ChatGPT are text predictors: they learn the statistical relationships between texts, words and sentences, and they use those relationships to predict the next string of text.
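The “text predictor” idea can be sketched with a toy bigram model. This is a drastic simplification (real LLMs use learned embeddings and transformer networks, not raw word counts, and the corpus and function names here are invented for illustration), but it shows the core mechanic: learn which words follow which, then predict the most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows another in the corpus."""
    model = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

Note what is missing: the model has no idea what a cat is. It only knows which token tends to come next, which is precisely why fluent-sounding output can still be meaningless.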

While a Google search costs less than a penny, a ChatGPT query is quite expensive once data collection, manual data labeling and massive computation are accounted for. It also takes a while to compose an answer, whereas a Google search is near-instantaneous. These economics and speed issues put ChatGPT behind Google.

The above discussion of LLMs and ChatGPT suggests the hype may be exaggerated. When people start imagining the possibilities, emotions run high. But before long, those who actually test these tools in specific business scenarios find that we are still far from the great AI singularity in the sky.

Srini Pagidyala is the co-founder


Welcome to the VentureBeat community!

DataDecisionMakers is a place where experts, including data techs, can share data insights and innovations.

