ChatGPT is now being used to make scams much more dangerous
Phishing on the internet could become a lot more dangerous, thanks to scammers having unfettered access to ChatGPT, the AI-powered chatbot that never seems to leave the headlines.
That's according to a report published earlier this month by cybersecurity researchers at Norton. In it, the company outlined three main ways threat actors could abuse ChatGPT to carry out internet scams more effectively: deepfake content generation, large-scale phishing, and faster malware creation.
Norton also argues that the ability to generate “high-quality misinformation or disinformation at scale” could help bot farms operate more efficiently, allowing threat actors to “sow distrust and shape narratives in different languages” with ease.
Fight against misinformation
The researchers say scammers looking to curate fake reviews could also have a field day with ChatGPT, generating them in bulk and in different tones.
Finally, the already well-known chatbot could be used in “harassment campaigns” on social media, to silence or bully people, Norton said, adding that the consequences could be “chilling”.
Hackers can also use ChatGPT in phishing campaigns. These campaigns are, in many cases, run by attackers who are not native English speakers, and spelling and grammar mistakes often help victims spot an otherwise convincing phishing attempt. With ChatGPT, threat actors can create highly convincing emails at scale.
Finally, writing malware may no longer be reserved for seasoned hackers. With the right prompts, novice malware authors can describe what they want to achieve and get working code, the researchers said.
As a result, we could see an increase in both the volume and the sophistication of malware, they said. In addition, with ChatGPT’s ability to quickly and easily “translate” source code into less common programming languages, more malware could slip past antivirus solutions.
As with any new tool before it, ChatGPT will most likely be used by scammers and hackers to further their goals. The response to these new threats rests with individual users, as well as the broader cybersecurity community, the researchers conclude.