- OpenAI, the company behind the AI art generator DALL·E, released the viral bot ChatGPT.
- The bot, which drew more than 1 million users soon after its launch, is attracting more investors to generative AI.
- If you haven’t followed the GPT craze, here’s how it works and which experiments are using it to replace humans.
Since OpenAI released its blockbuster bot ChatGPT in November, users have casually experimented with the tool, with even Insider reporters trying to simulate news stories or message potential dates.
To older millennials who grew up with IRC chat rooms — a text instant message system — the personal tone of conversations with the bot can evoke the experience of chatting online. But ChatGPT, the latest in technology known as “large language model tools,” doesn’t speak with sentience and doesn’t “think” the way people do.
That means that even though ChatGPT can explain quantum physics or write a poem on command, a full AI takeover isn’t exactly imminent, according to experts.
“There’s a saying that an infinite number of monkeys will eventually give you Shakespeare,” said Matthew Sag, a law professor at Emory University who studies copyright implications for training and using large language models like ChatGPT.
“There’s a large number of monkeys here, giving you things that are impressive — but there is intrinsically a difference between the way that humans produce language, and the way that large language models do it,” he said.
Chat bots like ChatGPT are powered by large amounts of data and statistical techniques that predict the most likely next word, stringing words together in a meaningful way. They not only tap into a vast vocabulary and store of information, but also interpret words in context. This helps them mimic speech patterns while dispensing encyclopedic knowledge.
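The next-word prediction described above can be sketched with a toy "bigram" model that simply counts which word tends to follow which. This is a deliberately simplified illustration, not how ChatGPT actually works: real large language models use neural networks trained on billions of words, and the tiny corpus and function names here are invented for the example.

```python
# Toy sketch of next-word prediction, the core idea behind large
# language models. Real systems use neural networks over huge corpora;
# this example just counts word pairs (bigrams) in a tiny sample text.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate a short string of text by repeatedly predicting the next word.
word, generated = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Scaled up by many orders of magnitude, and with counting replaced by learned neural-network weights, this is the sense in which such models "predict" rather than "think."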
Other tech companies like Google and Meta have developed their own large language model tools, which take in human prompts and devise sophisticated responses. OpenAI, in a revolutionary move, also created a user interface that lets the general public experiment with ChatGPT directly.
Some recent efforts to use chat bots for real-world services have proved troubling, producing odd results. The mental health company Koko came under fire this month after its founder wrote about how the company used GPT-3 in an experiment to reply to users.
Koko cofounder Rob Morris hastened to clarify on Twitter that users weren’t speaking directly to a chat bot, but that AI was used to “help craft” responses.
The founder of the controversial DoNotPay service, which claims its GPT-3-driven chat bot helps users resolve customer service disputes, also said an AI "lawyer" would advise defendants in real time during actual courtroom traffic cases, though he later walked the plan back over concerns about its risks.
Other researchers seem to be taking more measured approaches with generative AI tools. Daniel Linna Jr., a professor at Northwestern University who works with the non-profit Lawyers’ Committee for Better Housing, researches the effectiveness of technology in the law. He told Insider he’s helping to experiment with a chat bot called “Rentervention,” which is meant to support tenants.
That bot currently uses technology like Google's Dialogflow, a conversational AI platform. Linna said he's experimenting with ChatGPT to help "Rentervention" come up with better responses and draft more detailed letters, while gauging its limitations.
“I think there’s so much hype around ChatGPT, and tools like this have potential,” said Linna. “But it can’t do everything — it’s not magic.”
OpenAI has acknowledged as much, explaining on its own website that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
Read Insider’s coverage on ChatGPT and some of the strange new ways companies are using chat bots:
The tech world’s reception to ChatGPT:
Microsoft is chill with employees using ChatGPT — just don’t share ‘sensitive data’ with it.
Microsoft’s investment into ChatGPT’s creator may be the smartest $1 billion ever spent
ChatGPT and generative AI look like tech’s next boom. They could be the next bubble.
The ChatGPT and generative-AI ‘gold rush’ has founders flocking to San Francisco’s ‘Cerebral Valley’
I asked ChatGPT to do my work and write an Insider article for me. It quickly generated an alarmingly convincing article filled with misinformation.
I asked ChatGPT to reply to my Hinge matches. No one responded.
Developments in detecting ChatGPT:
Teachers rejoice! ChatGPT creators have released a tool to help detect AI-generated writing
A Princeton student built an app which can detect if ChatGPT wrote an essay to combat AI-based plagiarism
ChatGPT in society:
BuzzFeed writers react with a mix of disappointment and excitement at news that AI-generated content is coming to the website
ChatGPT is testing a paid version — here’s what that means for free users
A top UK private school is changing its approach to homework amid the rise of ChatGPT, as educators around the world adapt to AI
Princeton computer science professor says don’t panic over ‘bullshit generator’ ChatGPT
DoNotPay’s CEO says threat of ‘jail for 6 months’ means plan to debut AI ‘robot lawyer’ in courtroom is on ice
It might be possible to fight a traffic ticket with an AI ‘robot lawyer’ secretly feeding you lines to your AirPods, but it could go off the rails
Online mental health company uses ChatGPT to help respond to users in experiment — raising ethical concerns around healthcare and AI technology
ChatGPT is coming for classrooms, hospitals, marketing departments, and everything else as the next great startup boom emerges
Read the original article on Business Insider