The viral bot ChatGPT was launched by OpenAI, the AI startup behind the AI art generator DALL-E.
The bot, which quickly amassed more than 1 million users, is drawing more investors to generative AI.
If you haven’t heard about the GPT craze, here’s a primer on how the technology works and which projects are using it in place of human workers.
For older millennials who grew up with IRC chat rooms, a text-based instant messaging system, conversations with the bot may conjure the feeling of chatting online. But ChatGPT, the latest in a class of technology known as “large language model tools,” does not communicate with sentience and does not “think” the way humans do.
According to experts, even though ChatGPT can explain quantum physics or produce a poem on demand, a full AI takeover is not exactly imminent.
“There’s an adage that an infinite number of monkeys will eventually give you Shakespeare,” said Matthew Sag, an Emory University law professor who studies the copyright implications of training and using large language models like ChatGPT.
“There are a lot of monkeys here, doing great things — but there is an intrinsic difference between how humans produce language and how large language models do it,” he added.
Chatbots like ChatGPT use vast amounts of data and computing techniques to predict how to string words together in a meaningful way. They not only tap into a huge vocabulary and body of information but also grasp words in context. This lets them mimic speech patterns while dispensing encyclopedic knowledge.
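The next-word prediction idea above can be illustrated with a deliberately tiny sketch. This is not ChatGPT's actual architecture (which uses neural networks trained on enormous datasets); it is a toy bigram model with a made-up corpus, showing only the core loop of "predict a plausible next word, append it, repeat":

```python
import random
from collections import defaultdict

# Toy corpus (invented for illustration) from which the model learns
# which word tends to follow which.
corpus = (
    "the bot answers questions the bot writes poems "
    "the model predicts the next word"
).split()

# Count the observed successors of every word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

A real large language model replaces the frequency table with a neural network that scores every word in its vocabulary given all the preceding context, which is what lets it stay coherent over whole paragraphs rather than word pairs.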
Other technology companies, including Google and Meta, have built language model tools whose algorithms take in human prompts and devise sophisticated responses. In a breakthrough move, OpenAI also created a user interface that lets the general public experiment with the technology directly.
Some recent attempts to deploy chatbots for real-world services have proven problematic, with unexpected results. This month, the mental health company Koko came under fire after its founder wrote about how the company used GPT-3 in an experiment to reply to users.
On Twitter, Koko co-founder Rob Morris quickly clarified that users were not speaking directly with a chatbot, but that AI was used to “help craft” responses.
The founder of the controversial DoNotPay service, which claims its GPT-3-powered chatbot helps users resolve customer service disputes, has also said an AI “lawyer” will advise defendants in real time during actual courtroom traffic cases.
Other researchers appear to be adopting generative AI tools more cautiously. Daniel Linna Jr., a Northwestern University professor who works with the nonprofit Lawyers’ Committee for Better Housing, studies the use of technology in the legal system. He told Insider he is helping develop a chatbot called “Rentervention,” designed to support tenants.
The bot currently uses technology like Google Dialogflow, another language model tool. Linna said he is testing whether ChatGPT can help “Rentervention” come up with better responses and draft more detailed letters, while also gauging its limitations.
“I think there’s a lot of hype around ChatGPT, and tools like this have a place,” Linna said. “But it can’t do everything — it’s not magic.”
OpenAI has acknowledged as much, noting on its website that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”