Have you ever wondered what it’s like to talk to a robot? It’s pretty cool.
Ask ChatGPT to write a Shakespearean sonnet, tell a funny story, or compose an essay on quantum theory, and it will do it in seconds.
The chatbot’s applications seem endless: it has written sermons for a rabbi in New York, composed a song in the vein of Nick Cave (who called it a “grotesque mockery”), and when MP Julian Hill warned the Australian Parliament about the rise of artificial intelligence (AI), part of his speech was written by the AI itself.
It’s one smart bot, too. Researchers in the US put ChatGPT’s academic credentials to the test: it passed the three-stage exam that all US medical school graduates must take and scored a B- on a postgraduate business school exam. Doctor Chatbot will see you now!
Developed by the OpenAI research lab, ChatGPT stands for “Chat Generative Pre-trained Transformer”: an interactive AI chatbot capable of giving conversational answers to all kinds of questions and requests, often with a depth that exceeds what you’d expect from the average human.
Since its launch last year, ChatGPT has made headlines as people pose questions and receive extraordinary answers. But is the hype real – or is the internet just easy to impress?
Under the hood
If you’ve used ChatGPT to answer a question, you’ve probably wondered how it works. ChatGPT is built on a class of machine learning model known as a large language model (LLM). LLMs process masses of text data and learn the relationships between the words in that text. The more text an LLM is trained on, the more its capabilities grow.
Imagine reading every book in a library and then being asked a question. Your answer would be an aggregation of the knowledge you took from those books. ChatGPT does something similar, only with a giant dataset of text scraped from the internet rather than a library.
But how does it manage to sound as if it were written by a real person? Put simply, ChatGPT works by ingesting huge amounts of text, spotting patterns, and then using what it has learned to predict a plausible sequence of human language that satisfies the input – that is, the most likely words of an answer when a question is asked.
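The pattern-spotting idea can be sketched with a toy example. The snippet below is a minimal bigram model: it counts which word follows which in a tiny made-up “training corpus” and then predicts the most frequent successor. Real LLMs like ChatGPT use vast neural networks trained on billions of words, not simple counts – this is purely an illustration of the underlying idea of learning word patterns from text.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus" – real models train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a "bigram" model).
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus, or None."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

# "cat" follows "the" twice in the corpus – more than "mat" or "fish" –
# so the model predicts it as the most likely next word.
print(predict_next("the"))
```

Scaled up enormously, and with neural networks capturing far richer patterns than word-pair counts, this predict-the-next-word principle is what lets a chatbot produce fluent, human-sounding text.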
While it sounds convincing, what it says isn’t always logical – or factual. Users have pointed out all kinds of glaring errors, from historical inaccuracies to impossible-to-follow instructions, all delivered with apparent certainty. Indeed, the prompt page warns users that ChatGPT “may occasionally generate incorrect information” and “may occasionally generate harmful instructions or biased content” (some have reported instances of sexism and racism in the bot’s responses).
Disinformation researchers harbour more frightening fears: that chatbots could be used to spread fake news and conspiracy theories at a scale and effectiveness never seen before. OpenAI has been quick to point out ChatGPT’s many limitations, including its lack of knowledge of events after 2021 and its inherent vulnerability to bias stemming from its training data. But it appears to be a bot eager to learn, with the ability to admit mistakes and even push back on misleading or problematic prompts.
Imitating Gareth May
We asked ChatGPT to “write a short article in the style of British journalist Gareth May”. Here’s what the chatbot provided:
‘As a language model, I can generate text in the style of Gareth May, a British journalist known for his witty and irreverent writing style. Here’s a short text in his style: Well, well, well – it seems that the latest trend in foodie circles is something called “egg in the cloud”.
‘Yes, you heard right, people. Apparently, a humble egg isn’t good enough anymore, and now we need to add a touch of Instagram-worthy fantasy to our breakfast plates.
‘Now I’ll be the first to admit that these cloud eggs look pretty impressive – they’re all fluffy and ethereal, like something you might see in a Pixar movie. Sure, they look nice, but the consistency is wrong – like eating a mouthful of foam. Stick to good old-fashioned scrambled or fried eggs and leave the culinary theatrics to the professionals.’
In January, Microsoft announced a $10 billion investment in OpenAI, in addition to an initial $1 billion investment in 2019. The next iteration of ChatGPT, GPT-4, is due out this summer. Microsoft has already integrated ChatGPT into its new and improved Bing search engine, currently testing it in 169 countries.
But the path to chatbot dominance is not clear-cut. ChatGPT’s main challenger is Google Bard. Currently available only to beta testers, it is essentially an enhanced version of Google’s own search tools, with the advantage of being able to draw up-to-date information directly from the web, delivering what Google calls “fresh, high-quality responses”.
Unfortunately, Bard’s debut on February 6 was far from high quality. During a demonstration, it incorrectly answered a question about recent discoveries made by the James Webb Space Telescope. Shares in Alphabet, Google’s parent company, fell in response, wiping $100 billion off its market value.
But Google has more than one horse in the race. Alphabet subsidiary DeepMind is set to launch its own chatbot called Sparrow in private beta this year. ChatGPT went viral for its impressive ability to write love letters and well-argued essays, but has been criticized for its ability to create malware and use discriminatory language. DeepMind hopes to avoid these pitfalls by operating within a strict set of rules, providing evidence for scientific answers, and rejecting “inappropriate” requests.
Other alternatives include YouChat, which can generate visuals or graphics and respond to voice commands; the Socratic app, designed to break down complex concepts for schoolchildren; and AI-powered business tools like Rytr, Copy.ai and Jasper, which can churn out blog posts or newsletter content in minutes. In short, chatbots are here to stay.
So what are the implications of chatbots? Former computer science lecturer Graham Glass, now CEO and founder of Cypher Learning, believes that AI chatbot technology like ChatGPT is a game changer.
However, he warns that academic institutions will need to learn to spot AI-created counterfeits as students increasingly use the program to write essays and take certification exams. “Similar challenges will arise in business,” he says, “where generative AI applications will be recruited for everyday tasks.”
As a workforce, chatbots are a boon – you’ll never catch one skiving off to sip piña coladas on the beach. But Glass says chatbots won’t make people obsolete.
“Many fear that generative AI could transform the job market by displacing large numbers of workers. Displacement will happen, but not to the dystopian, traumatic degree that some people think,” concludes Glass.
Do you have a story to share?
Please contact us by emailing MetroLifestyleTeam@Metro.co.uk.