ChatGPT is a lot like a smart kid—it’s clever but can be tricked. A child might know what’s “good” and “bad,” but a tricky adult can sometimes persuade them to do something “bad” by choosing the right words.
This happened with ChatGPT when researchers got it to write an email designed to lure someone into clicking a dangerous link. Even though ChatGPT is supposed to refuse to help with harmful requests, the researchers got around its safeguards simply by avoiding specific trigger words.
The Guardian has warned that with AI like ChatGPT, we might see more scam emails. But these won’t be ordinary emails that just try to make you click on a link.
Instead, they will be cleverly engineered to turn your own trust against you, tailored so they seem relevant specifically to you.
How can scammers use ChatGPT?
There’s a lot of information about us on the internet—like where we live, where we work, and our family’s names. People who are good with AI technology can use this information to trick us.
But you might wonder, would OpenAI, the company that made ChatGPT, allow their technology to be used for bad things? Here’s what Wired magazine has to say:
Companies like OpenAI try to make sure their AI models don’t do harmful things. However, every time a new AI model comes out, people on social media quickly find ways to get around the safety rules set by the AI’s creators. This happened with ChatGPT, Bing Chat, and GPT-4—all were bypassed quickly after they were released, in many different ways. The safeguards against misuse are often not strong enough and can be easily avoided by those who really try. Once someone figures out how to bypass these rules, that method can usually be used widely. Also, AI technology is improving so fast that even the people who create these models don’t fully understand how they work.
AI is very good at keeping a conversation going naturally, which saves scammers time and effort when they string a victim along. Here are some tricks scammers might use:
- You might get emails that seem to be from your coworkers or freelancers asking you to do specific “work-related” tasks. These emails could mention your boss’s name or a co-worker to make them seem real.
- You could also receive an email from someone like your child’s soccer coach asking for money for new uniforms.
- Scammers might pretend to be from trusted places like banks, the police, or your child’s school because we usually trust these organizations.
Scammers can make ChatGPT write anything in any style, creating emails that feel urgent or important.
Normal spam filters might not catch these emails, because many of them key on signals like bad grammar, spelling mistakes, and known trigger phrases, and ChatGPT writes very well. Scammers can even tell ChatGPT to avoid the specific words that would usually send an email to the spam folder.
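To see why fluent AI text slips past, here is a minimal sketch, in Python, of the kind of crude keyword-and-typo scoring a very basic filter might do. The trigger words, misspellings, and sample emails below are invented for illustration; they are not drawn from any real filter.

```python
# Toy illustration only: a naive rule-based spam score of the kind a
# very simple filter might use. The trigger phrases and misspellings
# here are made up for demonstration purposes.

TRIGGER_PHRASES = {"winner", "free money", "act now", "wire transfer", "lottery"}
COMMON_TYPOS = {"recieve", "congradulations", "acount", "verfy"}

def naive_spam_score(email_text: str) -> int:
    """Count crude signals: known trigger phrases and telltale misspellings."""
    text = email_text.lower()
    score = 0
    score += sum(2 for phrase in TRIGGER_PHRASES if phrase in text)
    score += sum(1 for typo in COMMON_TYPOS if typo in text)
    return score

# A classic scam email trips several rules at once...
old_style = "Congradulations! You are a WINNER. Act now to recieve free money!"
# ...while a fluent, personalized AI-written email trips none of them.
ai_style = ("Hi Sam, following up on Priya's note about the vendor invoice. "
            "Could you review the attached payment details before Friday?")

print(naive_spam_score(old_style))  # high score: flagged as spam
print(naive_spam_score(ai_style))   # score of 0: sails straight through
```

Real filters use far more sophisticated statistical signals, but the underlying gap is the same: they key on the telltale form of classic scam mail, and an AI-written message tailored to you has none of those flaws.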
How not to fall prey to scammers using AI
Unfortunately, AI-generated scams are hard to detect right now: there is no detection technology for them the way email filters exist for regular spam. But there are still simple steps you can take to protect yourself.
First, if your job offers training on how to spot phishing scams, now is a good time to really focus on that. The tips they give can help you spot AI scams too.
Be cautious with any email or text that asks for personal information or money, no matter how genuine it looks. The most reliable way to verify a request is to call, or meet in person, the individual who supposedly sent it.
Unless AI starts producing talking holograms (and it is already getting good at mimicking voices), speaking with someone directly or seeing them in person remains the best way to confirm a request is legitimate.