Google and Microsoft are raising everyone’s hopes by perfecting yet another AI-based application. Only this time, the technology isn’t being used to tell the weather or play music. It’s being used for something human beings can hardly do themselves: keeping cybercriminals under control.
A Different Kind of Use Case
“Machine learning is a very powerful technique for security—it’s dynamic, while rules-based systems are very rigid,” explains Dawn Song from the University of California, Berkeley. With hackers becoming more inventive and adaptable every day, this flexibility is of crucial importance.
But it’s even more important that AI is finding productive new fields of application. Machine learning, with pattern recognition as its defining capability, has been used for data analysis many times before. However, it wasn’t until recently that we started hearing about AI-powered cybersecurity at this scale.
Information security officers at Google and Microsoft, as well as at other industry giants such as Amazon, are hopeful that machine learning may finally change things for the better. Cybercrime can hardly be uprooted, but it can be controlled with greater confidence.
Before Machine Learning
In February 2019, Microsoft was able to retroactively discover a series of attacks on European political think tanks. Its security teams were two months too late to prevent them, but that’s not the point: the technology now exists to detect similar attempts on political targets.
The chances are better than ever before, according to Stephen Schmidt from Amazon. Before AI, the most we could hope for was to avoid phishing scams using our good judgment or to be notified when someone tried to access our bank accounts from a suspicious place.
Not only were these systems insufficient, but they were also inconvenient for the users themselves. Before AI, having our credit cards locked while on vacation was a sign that our banks’ security teams were cutting-edge. Crude geo-blocking was the best we could ask for.
This posed a great challenge to modern-day cybersecurity scientists as well. Blocking a user from a system on suspicion of unauthorized access is easy; it’s distinguishing a legitimate user from a hacker that’s hard. AI and machine learning are currently helping tech leaders untie this knot.
That’s how Microsoft’s security team was able to prevent a cyberattack on one of the company’s major clients. An attempted login from Romania instead of the usual New York address raised a red flag. This prompted experts from Azure to block the entrance to their cloud.
But usual login locations are only a small part of the greater behavioural patterns tracked by AI. Thanks to machine learning, the technology can learn from user data, and it can customize security protocols for each individual user based on their typical online behaviour and history.
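The idea can be illustrated with a minimal sketch: keep a per-user baseline of login countries and hours, then flag logins that match neither. All class and parameter names here are invented for illustration; production systems weigh far more signals (device fingerprints, typing cadence, network reputation).

```python
from collections import Counter

class LoginAnomalyDetector:
    """Toy per-user baseline of login locations and hours."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold  # minimum share of past logins to count as "usual"
        self.locations = Counter()
        self.hours = Counter()

    def record(self, country, hour):
        """Add one observed legitimate login to the baseline."""
        self.locations[country] += 1
        self.hours[hour] += 1

    def is_suspicious(self, country, hour):
        """Flag logins from rarely seen countries at unusual hours."""
        total = sum(self.locations.values())
        if total == 0:
            return False  # no baseline yet, nothing to compare against
        loc_share = self.locations[country] / total
        hour_share = self.hours[hour] / total
        return loc_share < self.threshold and hour_share < self.threshold


detector = LoginAnomalyDetector()
for _ in range(100):
    detector.record("US", 9)  # typical New York workday logins

print(detector.is_suspicious("RO", 3))  # Romania at 3 a.m. → True
print(detector.is_suspicious("US", 9))  # usual pattern → False
```

A real deployment would replace the frequency thresholds with a learned model, but the core trade-off is the same one described above: strict thresholds block more attackers and more vacationing customers alike.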
Inhumane Amounts of Data
Learning to differentiate a legitimate user from hackers requires a security team to generate and analyze a massive amount of raw data. That data “keeps growing at a rate that is too large for humans to write rules one by one,” says Mark Risher from Google.
Considering that millions of people log into Gmail every day, and taking into account that this is only one of many of Google’s services, Risher’s observation may actually be an understatement. It’s simply impossible for a human to track every login and monitor every user.
Luckily, no amount of data is too big for machine-learning algorithms to crunch. At Google, AI sorts through data not only on logins and on-site behaviour but also on previous cyberattacks. The attack data allows it to stay one step ahead of the hackers, no matter how inventive they are.
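Learning from past attack records can be sketched with a toy naive Bayes scorer: train on labeled historical events, then score new events by how much they resemble past attacks versus benign activity. The feature names and sample data below are invented for illustration, not drawn from any real system.

```python
import math
from collections import defaultdict

class EventClassifier:
    """Toy naive Bayes scorer over sets of event features."""

    def __init__(self):
        self.counts = {"attack": defaultdict(int), "benign": defaultdict(int)}
        self.totals = {"attack": 0, "benign": 0}

    def train(self, features, label):
        """Record one labeled historical event ('attack' or 'benign')."""
        for f in features:
            self.counts[label][f] += 1
        self.totals[label] += 1

    def score(self, features):
        """Log-likelihood ratio: positive means 'looks like past attacks'."""
        ratio = 0.0
        for f in features:
            # Laplace smoothing so unseen features don't zero out the estimate.
            p_attack = (self.counts["attack"][f] + 1) / (self.totals["attack"] + 2)
            p_benign = (self.counts["benign"][f] + 1) / (self.totals["benign"] + 2)
            ratio += math.log(p_attack / p_benign)
        return ratio


clf = EventClassifier()
clf.train({"new_device", "foreign_ip", "odd_hour"}, "attack")
clf.train({"known_device", "home_ip", "work_hour"}, "benign")
clf.train({"known_device", "home_ip", "odd_hour"}, "benign")

print(clf.score({"foreign_ip", "new_device"}) > 0)  # resembles past attacks → True
print(clf.score({"known_device", "home_ip"}) > 0)   # resembles benign logins → False
```

The point of the sketch is Risher’s: nobody writes these rules by hand. The scorer’s behaviour comes entirely from the training data, so it updates itself as new attack patterns are recorded.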
Improved Internal Control
While Google keeps training algorithms to leverage hackers’ most powerful weapons against them, Microsoft and Amazon are developing AI-powered technology and security protocols for their biggest clients to use internally. So far, the results have been more than satisfying.
For instance, Microsoft’s Advanced Threat Protection helps the Dutch insurance company NN Group NV manage access for around 27,000 employees and partners without compromising any data. Over at Amazon, the AI-based GuardDuty does the same for all customers’ systems.
Machine learning helps corporations improve internal control, but their security teams are right to worry. Just as failed hacking attempts can be used against cybercriminals, AI tools can be employed by hackers to bypass the very state-of-the-art security designed to fend them off.
But still, AI is making things much harder for hackers.
Cybercrime won’t be eliminated anytime soon, even though the future looks a little brighter with every new pattern detected and learned. In addition to regular antivirus software, users are advised to protect themselves with a fast, reliable VPN, given that nobody is entirely safe for now.