ChatGPT doesn’t come with an instruction manual. But maybe it should. Only a quarter of Americans who have heard of the AI chatbot say they have used it, Pew Research Center reported this week.
“The hardest lesson” for new AI chatbot users to learn, says Ethan Mollick, a Wharton professor and chatbot enthusiast, “is that they’re really difficult to use.” Or at least, to use well.
The Washington Post talked with Mollick and other experts about how to get the most out of AI chatbots — from OpenAI’s ChatGPT to Google’s Bard and Microsoft’s Bing — and how to avoid common pitfalls. Often, users’ first mistake is to treat them like all-knowing oracles, instead of the powerful but flawed language tools that they really are.
Here’s our guide to their favorite strategies for asking a chatbot to help with explaining, writing and brainstorming.
AI chatbots can be impressive, especially once you start to learn how to coax better answers from them. But understanding their limitations is at least as important as discovering their strengths, says programmer Simon Willison.
It’s crucial to remember that they’re not human, and they’re not reliable sources of information, even about themselves. So if a chatbot makes a factual claim, verify it elsewhere. And if it’s acting like it has thoughts and feelings — or wants to break up your marriage — remember that it’s just playing off your prompts, drawing on billions of human interactions in its training data to predict the most likely response.
Similarly, if chatbots show cultural biases or say offensive things, it’s a reminder that they’ve ingested some of the ugliest material the internet has to offer, and they lack the independent judgment to filter that out.
AI may or may not be coming for your job. But if you familiarize yourself with its strengths and weaknesses, you’ll be better positioned to fend it off, or even turn it to your advantage.