AI phone scams sound scary. Do these 6 things to protect yourself and your family



You may have heard the stories: a family picks up the phone and hears the sobbing, frightened voice of a loved one, followed by a kidnapper demanding an immediate money transfer.

But in these cases, there was no kidnapping. The voices were real; they had simply been cloned by scammers using AI models (as in the faked Joe Biden robocall that tried to discourage voters ahead of the New Hampshire primary). No matter how realistic the voices sound, a quick call is often all it takes to confirm that no child, spouse, or parent has actually been kidnapped.

Also: How to find and remove spyware from your phone

The problem is, by the time the truth comes out, panicked families may already have coughed up large sums of money to the fake kidnappers. What's worse, as these technologies become cheaper and more widespread, and as our personal data becomes easier to access, more people are likely to become susceptible to these scams.

So how do you protect yourself from these scams?

How AI phone scams work

First, some background: how do scammers clone individuals’ voices?

While creating a convincing deepfake video is much more complicated, an audio deepfake is easy to produce, especially for a quick scam. If you or a loved one has posted a video to YouTube or TikTok, a scammer needs only a few seconds of that recording to clone your voice. Once they have that clone, they can manipulate it to say anything.

Also: This AI-generated crypto invoice scam almost got me, and I'm a security expert

OpenAI built a voice cloning service called Voice Engine, but has held back public access to it since March, apparently due to the potential for abuse. Even so, a number of free voice cloning tools of varying quality are already available on GitHub.
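
To get a sense of how low the barrier is, here is a minimal sketch using Coqui TTS, one such open-source library, and its XTTS v2 voice cloning model. The model name and function calls follow Coqui's documented API, but treat the snippet as illustrative only, and only ever clone a voice you own or have explicit permission to use.

```python
# Minimal voice cloning sketch with the open-source Coqui TTS library.
# Assumes: pip install TTS, and a short consented reference clip on disk.
from TTS.api import TTS

# Download and load the multilingual XTTS v2 voice cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference speech is all the model needs to mimic a voice.
tts.tts_to_file(
    text="Hi, it's me. Please call me back as soon as you can.",
    speaker_wav="my_consented_sample.wav",  # your own voice; placeholder filename
    language="en",
    file_path="cloned_output.wav",
)
```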

There are also versions of this technology with guardrails. Using your own voice, or a voice you have legal access to, the voice AI company ElevenLabs lets you create 30 minutes of cloned audio from a one-minute sample. Subscription tiers let users add more voices, clone a voice into another language, and generate more minutes of cloned audio, and the company has safety checks in place to prevent fraudulent cloning.

Also: Traveling? Bring this $50 anti-spy camera and bug finder

In the right circumstances, AI voice cloning can be very useful. ElevenLabs offers an impressive variety of synthetic voices from around the world, in many languages, that you can use with nothing more than text prompts, which can help many industries reach wider audiences more easily.

As voice AI improves, the lag and odd pauses that once gave fakes away are becoming rarer, which makes them harder to detect, especially when scammers can spoof their calls to appear to come from a legitimate number. Here's what you can do to protect yourself now and in the future.

1. Ignore suspicious calls

It may sound obvious, but the first step to avoiding AI phone scams is to ignore calls from unknown numbers. Sure, answering, identifying the call as spam, and hanging up may seem simple enough, but doing so risks leaking your voice data.

Also: The NSA recommends turning your phone off and back on once a week – here’s why

Scammers can use these calls for voice phishing, calling you specifically to capture the few seconds of audio they need to clone your voice. If you don't recognize the number, decline the call without saying anything and look the number up online; that can help you determine whether the caller is legitimate. If you do want to answer to check, say as little as possible.

You probably already know that anyone who calls to ask for personal or banking information shouldn't be trusted. You can always verify the authenticity of a call by contacting the organization directly through a verified channel such as its official phone line, text, support chat, or email.

Thankfully, most carriers now pre-screen calls from unknown numbers and label them as potential spam, doing some of the work for you.

2. Call your loved one

If you receive an alarming call that sounds like someone you know, the fastest and easiest way to expose an AI kidnapping scam is to verify that your loved one is safe with a text or a call. That can be hard to do if you're panicking or don't have another phone handy, but remember that you can send a text while you're still on the line with the potential scammer.

3. Set a code word

With loved ones, especially children, agree on a shared secret word they can use if they're in trouble but can't say so outright. If you receive a suspicious call and your supposed loved one can't produce the code word, you'll know it could be a scam.

4. Ask questions

You can also ask the caller impersonating your loved one for a specific detail, such as what they had for dinner last night, while you try to reach your loved one separately. Don't budge: chances are the scammer will give up and hang up.

5. Be conscious of what you post

Minimize your digital footprint on social media and public websites. You can also use digital watermarking to make your content harder to tamper with. It isn't foolproof, but it's the next best thing until we figure out how to protect metadata from being altered.

If you plan to upload audio or video to the internet, consider first running it through AntiFake, free software developed by researchers at Washington University in St. Louis.

Also: How to find out if an AirTag is tracking you

The software, whose source code is available on GitHub, infuses audio with extra sounds and perturbations. To human listeners, the original speaker still sounds the same, but to an AI cloning model the recording sounds completely different, which hinders attempts to clone it.
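
For intuition only, here's a toy sketch of the underlying idea: mix a low-level perturbation into a clip before uploading it. AntiFake's real defense optimizes its perturbations adversarially against actual voice-encoder models, so the plain random noise below is just an illustration and would not reliably stop cloning; the file names are placeholders.

```python
# Toy illustration of perturbing audio before upload (not AntiFake's method).
# Assumes: pip install numpy soundfile, and a WAV clip with samples in [-1, 1].
import numpy as np
import soundfile as sf

audio, rate = sf.read("my_clip.wav")  # returns float samples and the sample rate
rng = np.random.default_rng(0)

# Keep the perturbation roughly 40 dB below the signal peak so it stays subtle.
scale = 0.01 * np.max(np.abs(audio))
perturbation = rng.uniform(-scale, scale, size=audio.shape)

# Clip to the valid range and write the perturbed copy.
sf.write("my_clip_protected.wav", np.clip(audio + perturbation, -1.0, 1.0), rate)
```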

6. Don’t rely on deepfake detectors

Several services, including Pindrop Security, AI or Not, and AI Voice Detector, claim to be able to detect AI-generated audio. However, most charge a subscription fee, and some experts don't think they're even worth your while. VS Subrahmanian, a computer science professor at Northwestern University, tested 14 publicly available detection tools. "You can't rely on deepfake audio detectors these days, and I can't recommend one for use," he told Poynter.

"There is no tool yet that is considered completely reliable for the public to detect deepfake audio," added Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas. "A combined approach using multiple detection methods is what I would advise at this stage."
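
For what that combined approach might look like in practice, here is a hedged sketch that polls several detectors and flags a clip only when a majority agree. The detector functions are hypothetical placeholders, not real vendor APIs; you would wrap whatever services you actually trust.

```python
# Sketch of a majority-vote ensemble over audio deepfake detectors.
# Each detector is a hypothetical callable returning a score in [0, 1].
from typing import Callable, List

def combined_verdict(clip_path: str,
                     detectors: List[Callable[[str], float]],
                     threshold: float = 0.5) -> bool:
    """Return True if a majority of detectors score the clip as AI-generated."""
    scores = [detect(clip_path) for detect in detectors]
    votes = sum(score >= threshold for score in scores)
    return votes > len(detectors) / 2

# Usage (with your own wrappers around real services):
# combined_verdict("suspicious_call.wav", [detector_a, detector_b, detector_c])
```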

Also: 80% of people think deepfakes will influence the election. 3 ways you can prepare

Meanwhile, computer scientists are working on better deepfake detection systems, such as the DeepFake-O-Meter from the University at Buffalo's Media Forensic Lab, which is preparing to launch soon. Until then, in the absence of a reliable public service, trust your judgment and follow the steps above to protect yourself and your loved ones.
