Malicious actors are already using AI technologies to attack users, networks and organizations.
- Artificial Intelligence (AI) is achieved when machines carry out tasks that are not pre-programmed, and in a way that we consider ‘smart’. Machine Learning (ML), meanwhile, refers to machines using algorithms to recognize data patterns and “learn” from experience.
- Hackers are also leveraging AI and ML to search the Internet for vulnerable networks and, without having to report back or require human intervention, attack vulnerabilities the AI-powered efforts identify.
- Social engineering used to require considerable time and often multiple deliberate machinations. With AI and ML pressed into service, that’s no longer true.
- AI and ML techniques can also be used to identify new zero-day exploits, then generate immediate attacks targeting those specific vulnerabilities.
- Fortunately, AI- and ML-powered strategies work both ways. Traditional firewall hardware is being upgraded into Next-Generation Firewalls (NGFWs) programmed with smarter AI- and ML-enabled services.
If you view AI only as a futuristic, still-to-be-realized technology (due possibly to such fantastical big-screen depictions as Terminator’s evil Skynet and Battlestar Galactica’s ruthless Cylons), the discomforting truth is that you’re wrong. Whether you associate AI with an aspirational future in which sophisticated robots perform everyday tasks or believe the technology holds industrial promise today, malicious actors are already using AI to attack.
That last point bears repeating because of its magnitude: malicious actors are already using AI technologies to attack users, networks and organizations.
Almost everyone is, by now, familiar with the damage and expense commonly resulting from computer virus, spyware and ransomware infections and disruptions. Bad actors have learned to refine phishing techniques, making ransomware efforts more effective than ever, judging by last year’s widespread headlines. Emboldened, hackers are becoming more successful at seizing sensitive data, compromising systems, misdirecting payments and even stealing money outright.
Miscreants are now adopting both AI and ML techniques to power these malicious activities more effectively. Most worrisome, those efforts are working.
To better understand how hackers are using AI and ML to target you, your organization and its network, it helps to understand exactly what AI and ML really are and how they work.
Think of AI as machines performing actions that imitate human intelligence. According to one white paper, AI “is achieved when machines carry out tasks that are not pre-programmed, and in a way that we consider ‘smart’.” ML, meanwhile, refers to machines using algorithms to recognize data patterns and “learn” from experience.
Combine the two technologies—AI and ML—and you’ve got a potent pairing uniquely impactful to computing. And hackers are already taking advantage of the technologies’ benefits.
Malicious actors employ AI to generate more effective phishing messages. AI also enables more precise targeting of spear-phishing attacks, in which a single user’s social media history can be exploited to fool coworkers into revealing sensitive information, clicking links that install compromising software, or generating or redirecting electronic payments.
Hackers are also leveraging AI and ML to search the Internet for vulnerable networks and, without having to report back or require human intervention, attack vulnerabilities the AI-powered efforts identify. Then, rather than simply launching a static attack, the AI and ML technologies the hackers employ permit studying networks, monitoring firewalls for conflicts and other vulnerabilities and waiting for the best moment—such as when multiple compromising agents have been installed or a firewall is most at risk due to a missing patch—to launch an all-out offensive.
By enabling hackers to target specific users more effectively, with more precise information, faster, AI and ML combine to assist bad actors in wielding one of hacking’s most damaging and effective weapons: social engineering. In such attacks, hackers strive to deceive users into revealing sensitive information or even performing seemingly authorized actions under false pretenses. While social engineering is one of the most effective tools hackers have for infiltrating networks, it used to require considerable time and often multiple deliberate machinations. With AI and ML pressed into service, that’s no longer true. These previously time-intensive tasks can now be performed in the background, by computers, with impressive speed and results. AI is reportedly even being used to generate voice calls and deep-fake images that target specific victims with surprising success.
For example, hackers might adopt AI and ML technologies within sophisticated social engineering processes to monitor electronic interactions between an organization’s accountant and CEO, both of whom are listed and active on the LinkedIn social network. Once these clandestine efforts collect sufficient information, the AI attack proceeds with a computerized offensive in which the AI process sends the accountant an email appearing to be a legitimate payment request from the CEO. Such attacks can include an AI-generated telephone call to the accountant that uses a representation of the CEO’s voice to authorize the transaction, which is in reality a wire payment to a foreign bank account set up by the hackers.
AI and ML techniques can also be used to identify new zero-day exploits, then generate immediate attacks targeting those specific vulnerabilities. Taking advantage of ML-derived capabilities, the hackers can also program such attacks to randomly change the names of their executable files and even modify malicious payload behaviors to better escape detection and prevention, including by common anti-malware engines.
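The reason such mutation defeats traditional scanning can be seen in miniature. A classic signature match tests a file’s hash against a list of known-bad hashes, so even a one-byte change to the payload produces a new hash and slips past. The snippet below is a toy illustration of that limitation, with made-up byte strings standing in for files; it is not real malware analysis:

```python
import hashlib

# A hypothetical signature database: SHA-256 hashes of known-bad files.
KNOWN_BAD_HASHES = {hashlib.sha256(b"payload-v1").hexdigest()}

def signature_match(file_bytes: bytes) -> bool:
    """Return True only if the file's hash exactly matches a known signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# The unmodified payload is caught, but a trivially mutated copy is not,
# which is why ML-based behavioral detection (discussed below) matters.
```

This brittleness is exactly what ML-modified payloads exploit, and why defenders are moving toward behavior- and feature-based detection.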
But there are yet additional ways hackers are leveraging AI and ML to attack organizations. Applied to new viruses and worms, the AI- and ML-equipped payloads that malicious actors deliver to networks, which then spread throughout an organization and permit hackers to take control of systems, encrypt data and perform a host of other malevolent actions, are capable of changing and evolving so the viruses and worms operate and evade detection and resolution far more effectively. Worse, AI and ML working together permit these nefarious acts to evolve much more quickly, and in much more precise ways, than was previously possible, including by taking advantage of elements unique to each network.
Fortunately, AI- and ML-powered strategies work both ways. Cybersecurity developers are already building AI and ML techniques into anti-malware programs that use AI and ML technologies to improve predictive analytics, threat detection and corresponding automated responses. The techniques, in fact, are leading to a new type of protection known as Next-Generation Antivirus, or NGAV, of which Microsoft Defender for Endpoint is but one example.
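To make the idea of ML-driven predictive detection concrete, here is a minimal sketch of one simple technique, a nearest-centroid classifier over file features. The feature names, training values and labels are all illustrative assumptions, not any NGAV vendor’s actual model:

```python
import math

# Hypothetical file features: (byte entropy, suspicious-API import count,
# packed-section ratio). Values and labels are invented for illustration.
TRAINING = [
    ((7.8, 12, 0.9), "malware"),
    ((7.5, 9,  0.8), "malware"),
    ((4.1, 1,  0.1), "benign"),
    ((3.9, 0,  0.0), "benign"),
]

def centroid(samples):
    """Average each feature across the samples of one class."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

CENTROIDS = {
    label: centroid([feats for feats, lbl in TRAINING if lbl == label])
    for label in ("malware", "benign")
}

def classify(features):
    """Predict the label of the class whose centroid is nearest."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))
```

Unlike the hash matching described earlier, a model like this can flag a file it has never seen before, because it generalizes from feature patterns rather than exact byte sequences; production NGAV engines apply far richer features and models to the same end.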
The same technologies are being applied to firewall services. Known as Next-Generation Firewalls, or NGFWs, these traditional hardware devices are being programmed with smarter AI- and ML-enabled services, not only to run anti-malware detection and response actions at the network perimeter, but also to adopt AI and ML techniques that learn from experience and improve threat detection and response, even as the very nature of those threats changes and evolves.
There are additional steps, too, that help protect against all forms of intrusion and infection. Traditional best practices are still effective and, in fact, help reduce the threat surfaces AI- and ML-powered threats seek to exploit.
Ensuring centrally managed antivirus is deployed throughout an organization provides multiple advantages. Effective platforms report whenever an endpoint agent experiences trouble, surfacing issues that need resolution and simplifying the task of spotting systems with missing or inactive scanners. Managed platforms also typically update themselves and their agents automatically, helping ensure the latest anti-malware information is available to each machine.
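The kind of report a central console produces can be sketched in a few lines. The inventory records and field names below are assumptions for illustration, not any specific product’s schema:

```python
from datetime import date

# Hypothetical endpoint inventory as a management console might hold it.
ENDPOINTS = [
    {"host": "ws-01",  "agent_active": True,  "signatures": date(2024, 5, 2)},
    {"host": "ws-02",  "agent_active": False, "signatures": date(2024, 5, 2)},
    {"host": "srv-01", "agent_active": True,  "signatures": date(2024, 3, 15)},
]

def needs_attention(endpoint, today, max_age_days=7):
    """Flag hosts whose agent is inactive or whose signatures are stale."""
    age = (today - endpoint["signatures"]).days
    return (not endpoint["agent_active"]) or age > max_age_days

def report(endpoints, today):
    """List the hostnames an administrator should investigate."""
    return [e["host"] for e in endpoints if needs_attention(e, today)]
```

Running the report surfaces both the inactive agent and the host with weeks-old signatures, which is precisely the visibility central management adds over standalone scanners.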
Deploying firewalls properly mated to the organization’s specific needs helps ensure effective perimeter protections are in place, too. Regularly updating the device’s firmware and maintaining active intrusion detection and prevention services are two more steps that help ensure known threats are blocked at the network’s edge, thereby preventing many issues.
Patching workstations, servers and applications is another best practice. Often overlooked, downloading and installing performance patches and security updates quickly upon release helps block many of the exploits AI- and ML-enabled malicious actors regularly attempt.
Enabling multi-factor authentication, while not a foolproof method of preventing bad actors’ social engineering or malevolent automated attacks, complicates such efforts and helps prevent unauthorized access. Regularly reviewing and updating sensitive processes, especially those surrounding electronic payments and funds transfers, pays dividends, too. In light of new AI- and ML-powered techniques that specifically target such activities, organizations should consider requiring face-to-face approval of such transactions, at least when they are sizable or above specific thresholds. Or, if the logistics of in-person authorization are too burdensome, requiring a secondary face-to-face Teams or Zoom call, in addition to the accounts payable department’s regular secure process, can help prevent some malicious hacks from succeeding.
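A threshold policy like the one described can be expressed as a simple rule. The threshold amount and step labels here are illustrative assumptions, not a standard:

```python
# Payments above this amount require live human confirmation; the figure
# is an example only, and each organization should set its own threshold.
THRESHOLD = 10_000

def required_approvals(amount):
    """Return the approval steps a payment of this size should pass."""
    steps = ["standard accounts-payable process"]
    if amount > THRESHOLD:
        steps.append("face-to-face or live video confirmation with requester")
    return steps
```

The value of the second step is that a forged email or an AI-generated voice recording cannot, on its own, satisfy a live, interactive confirmation.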
Lastly, investing in regularly scheduled user education is an intelligent move. Keeping threats and vulnerabilities front-of-mind is more important now than ever. As social engineering, viruses, phishing, spyware and AI- and ML-powered attacks become more sophisticated and common, ensuring the organization’s users understand proper behaviors and the threat vectors offenders commonly attempt will better equip them to spot and thwart fraudulent efforts.