How to Protect Yourself and Your Business From Innovative New AI Scams

Artificial intelligence (AI) and machine learning (ML) are two innovations changing the way organizations and employees work. While these technologies hold great promise and are, in many cases, helping numerous industries improve efficiency and reduce costs, they are not without risks. Cyber criminals, too, are adopting AI and ML to make their fraudulent activities more effective.

AI, technology that permits computers to mimic human thinking and responses, assists criminals by simplifying the development and deployment of fraudulent schemes. Hackers can then employ information and software developed and honed with AI to perfect social engineering attacks, phishing campaigns, malware attacks and other malicious activities.

ML, an AI technology that permits software programs to learn from previous events and become more accurate, particularly assists cyber criminals by automating the process by which malicious programs and attacks are refined and improved. The corresponding threats become much more effective as a result.

In fact, the cyber threats generative AI platforms such as ChatGPT present are the “biggest issue we’re going to deal with this century,” Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly stated at a cybersecurity conference in April 2023, as reported by Recorded Future News. AI threats are so dangerous, she noted, because of the many ways criminals can employ the technology to generate compelling new attacks.

 Just how are bad actors wielding AI and ML to assist their scams? From voice cloning to deepfakes, criminals are embracing these innovations in creative and surprising new ways. Here’s a look at each. 

Voice Cloning

Beware the next time you receive a call from a family member claiming they need money due to an emergency. Fraudulent, urgent calls stating a loved one has been kidnapped and ransom must be paid are another risk AI technologies are helping polish, making such scams appear more realistic and shocking victims into making an ill-advised payment.

Among other techniques, malicious actors are employing AI to clone voices. Such scams have become so prevalent that the Federal Trade Commission (FTC) issued a consumer alert regarding the practice in March 2023. The agency requests that individuals report such attempts.

AI-enabled voice cloning works so well that criminals are defeating biometric protections that use a person’s voice to access bank accounts and, subsequently, scamming victims out of cash. To protect yourself and your business, whenever you or staff (especially those working in accounting and finance departments) receive voice calls or email messages requesting electronic payments or cash transfers, add another fraud prevention step to your standard workflow to help avoid becoming a victim.

It is important to confirm the individual’s identity, especially as AI chatbots effectively mimic natural conversation and new technologies perform well at imitating specific voices. In the case of family, call the individual back at a known-good telephone number. If the person proves unavailable at that number, contact other trusted individuals who can help reach them. In a business, consider requiring face-to-face meetings whenever authorizing payments or, at a minimum, implement and enforce accounts payable processes that require multiple confirmation steps to ensure payment requests are legitimate.

Your suspicions should immediately be raised whenever a request involves payment by wire transfer, gift card or cryptocurrency. The same is true whenever urgency or an unexpected request enters the equation. Should any of those circumstances arise, suspend communications and activities until you can implement aggressive fraud prevention steps and confirm the request is legitimate and warranted.
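As an illustration only, an organization building these red flags into an internal approval tool could encode them as a simple screening check. This is a minimal sketch; the function and field names are hypothetical, not part of any standard or vendor product:

```python
# Hypothetical screening check for incoming payment requests.
# Flags the red-flag conditions described above: risky payment
# methods, urgency and unexpected requests.

RISKY_METHODS = {"wire transfer", "gift card", "cryptocurrency"}

def screen_payment_request(method: str, is_urgent: bool, is_expected: bool) -> list[str]:
    """Return a list of red flags; an empty list means none were raised."""
    flags = []
    if method.lower() in RISKY_METHODS:
        flags.append(f"risky payment method: {method}")
    if is_urgent:
        flags.append("request marked urgent")
    if not is_expected:
        flags.append("request was unexpected")
    return flags

# Example: an urgent, unexpected gift-card request raises all three flags.
flags = screen_payment_request("Gift Card", is_urgent=True, is_expected=False)
print(flags)
```

Any nonempty result would route the request to a human reviewer rather than block it outright; automated checks supplement, never replace, the confirmation steps described above.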


Deepfakes

Deepfakes, in which AI and ML are used to create realistic but fake audio tracks, images and videos, are another growing AI-related threat. The US Department of Homeland Security (DHS) published a document for educational and informational purposes warning that deepfake threats will increase as the required technologies become more readily available.

 The DHS paper—Increasing Threat of Deepfake Identities—explores the various techniques used to create deepfakes, including: 

  • Face Swap – A common practice, which predates AI and ML technologies, face swaps involve replacing the face of one person with that of another. Today’s AI technologies, including Deep Neural Network (DNN) ML innovations, enable creating more convincing versions of these deepfakes. 
  • Lip Syncing – Using Recurrent Neural Network (RNN) technology and tools such as Wav2Lip, deepfake creators can manipulate video recordings, replacing the audio to make it appear the subject said things they never actually said. As AI and ML innovations make such realistic trickery easier to accomplish in convincing ways, the corresponding risks will only increase, the agency warns.
  • Puppet – Generative Adversarial Network (GAN) technology enables deepfake creators to edit videos to make subjects appear to move in ways the subject did not actually move. The technology permits manipulating both facial and body movements in realistic ways. 

DHS warns it is likely cyber criminals and other malicious parties will be “undeterred from creating synthetic media,” another term by which deepfakes are coming to be known. Noting that criminals seeking to perpetrate financial fraud will disregard synthetic content rules and regulations, the agency predicts low-impact, high-probability attacks will likely outnumber high-impact, low-probability attacks. Believing the public may prove wiser once deepfakes become more commonplace, the agency states malicious users may fear opportunities to defraud individuals are decreasing and, subsequently, feel compelled to attempt a “big score” quickly before the public becomes more resistant to these new AI- and ML-enabled threats.

Individuals and businesses should be wary and on the alert for deepfake-based attacks. As DHS notes, deepfakes can be used to generate misinformation about a company, its staff or its products and services. Deepfakes can also bolster the effectiveness of traditional social engineering attacks, in which malicious actors seek to mislead an employee into completing fraudulent activities. For example, attackers may study an organization’s social media posts, organizational structure and messaging, then incorporate that information into the attack to make it more convincing and realistic.

To help reduce deepfake threats, DHS recommends users report suspicious, inappropriate and abusive behaviors by:

  • Contacting law enforcement to report victimization. 
  • Reporting specific cases to the FBI. 
  • Alerting the Securities and Exchange Commission (SEC) when deepfake activities involve financial crimes. 
  • Reporting inappropriate or abusive behavior to corresponding social media platforms using those platforms’ established reporting processes.
  • Reporting underage victimization to the National Center for Missing and Exploited Children.

Remain Vigilant

Because of these innovations’ capabilities, cyber criminals will continue finding new and unique ways to leverage AI and ML technologies. From simplifying and automating identity theft to improving traditional cyber attacks, count on bad actors to incorporate AI and ML within their efforts to steal identities, set up fraudulent bank accounts, imitate successful social media influencers and dupe an influencer’s or business’s customers into supporting a fake entity.

While online and cyber scams are sure to continue evolving, consumers and businesses can remain vigilant by staying current with the new fraudulent techniques and methods criminals are employing to perpetrate cyber crimes. The following agencies and organizations all offer bulletins, updates and information to help you remain vigilant, recognize cyber threats and understand how best to respond: