Artificial Intelligence in Security is Not The Terminator
Published May 11, 2017 by Karen Walsh • 5 min read
Artificial intelligence in security strikes fear into the heart of the average person. The term alone seems to indicate something straight out of a science fiction movie, complete with Arnold Schwarzenegger's voice growling grittily, "I'll be baaack." In the information security space, however, experts agree that artificial intelligence and its much-less-interesting-sounding cousin, machine learning, will shape the future of the industry.
Artificial Intelligence in Security to Take on Hackers
Artificial Intelligence: Not The Terminator
People who grew up in the 1980s assume that artificial intelligence automatically means robots coming to life to destroy the human population. While the idea of the singularity continues to haunt people, the present-day reality is far more boring. Artificial intelligence means nothing more than engaging machines in tasks otherwise done by people. AI can be applied or general: applied AI handles a specific, well-defined task (think Roomba), while general AI would handle any task a human could. Machine learning is the narrower pursuit of training computers to think like humans when analyzing large amounts of data. This often occurs through the use of neural networks, which use the probabilities found in compiled data to make decisions and categorize information. In doing so, they can act similarly to humans. Despite machine learning being a subset of artificial intelligence, the infosec space often uses the two terms interchangeably.
Artificial Intelligence in Security Assists CISOs
With the ability to analyze large amounts of data in a short time, artificial intelligence can be a major asset to CISOs. Unlike the science-fiction concept of independently acting AI, artificial intelligence in security incorporates human interaction with the data to better predict suspicious activity. AI currently helps CISOs; it does not remove humans from the mix. Doug Drinkwater at CSOOnline talked to author Martin Ford, who noted, "…AI will be increasingly critical in detecting threats and defending systems. Unfortunately, a lot of organizations still depend on a manual process — this will have to change if systems are going to remain secure in the future."
Some CISOs, though, are preparing to do just that.
“It is a game changer,” Intertek CISO Dane Warren said. “Through enhanced automation, orchestration, robotics, and intelligent agents, the industry will see greater advancement in both the offensive and defensive capabilities.”
Warren adds that improvements could include responding quicker to security events, better data analysis and “using statistical models to better predict or anticipate behaviors.”
From a statistical point of view, the larger the number of data points, the greater the accuracy of the predictions. The goal of artificial intelligence in security, like the use of an automated GRC tool, is to streamline the process to obtain better results.
MIT’s Artificial Intelligence in Security
One of the most noteworthy updates to the use of artificial intelligence in security has come from the Massachusetts Institute of Technology ("MIT"). MIT's new methodology utilizes active learning: the computer logs data and sends it to an analyst, who labels the data as good or bad. The system then incorporates that feedback, building an updated model and refining its algorithms.
According to the MIT website, the new AI methodology incorporates some hefty learning that makes it more effective than other strategies on the market.
AI2’s secret weapon is that it fuses together three different unsupervised-learning methods, and then shows the top events to analysts for them to label. It then builds a supervised model that it can constantly refine through what the team calls a “continuous active learning system.” Specifically, on day one of its training, AI2 picks the 200 most abnormal events and gives them to the expert. As it improves over time, it identifies more and more of the events as actual attacks, meaning that in a matter of days the analyst may only be looking at 30 or 40 events a day. “This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” says Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame.
AI2's strength lies in working with analysts, not replacing them. In science fiction terms, AI2 acts as the security version of the Jetsons' Rosie. Rosie learns how to make a sandwich, takes feedback on taste from the Jetsons, and incorporates that information to make a better sandwich next time. MIT's AI2 does the same thing for information security. In the end, just as the Jetsons might only need to adjust the mustard on their sandwiches instead of building the whole thing from scratch, artificial intelligence in security allows the analyst to focus on the smaller details to determine whether the red flags are warranted.
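The AI2 loop described above can be sketched in a few lines. This is a minimal illustration, not MIT's actual system: the synthetic events, the distance-from-the-mean anomaly score, and the query budget are all assumptions standing in for AI2's three unsupervised-learning methods and its real analyst interface.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical event log: rows are feature vectors; a hidden 5% are "attacks"
# whose features sit far from the benign cluster. Only the "analyst" knows truth.
benign = rng.normal(0.0, 1.0, size=(950, 4))
attacks = rng.normal(6.0, 1.0, size=(50, 4))
events = np.vstack([benign, attacks])
truth = np.array([0] * 950 + [1] * 50)

# Unsupervised scoring stand-in: distance of each event from the overall mean.
center = events.mean(axis=0)
scores = np.linalg.norm(events - center, axis=1)

# Active-learning step: surface the top-K most abnormal events for labeling,
# the way AI2 hands its most abnormal events to a human expert each day.
K = 20
queried = np.argsort(scores)[-K:]
labels = truth[queried]  # the analyst "labels" only the queried events

print(f"{int(labels.sum())} of {K} queried events were real attacks")
```

In the real system those labels would feed a supervised model that reshapes the next day's ranking; here the point is simply that ranking by abnormality lets the analyst label 20 events instead of reviewing 1,000.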
MIT is not the only university working toward big-data artificial intelligence in security solutions. At the University of Arizona, ongoing research aims to support e-commerce security solutions, which may be of interest to organizations that need to be PCI-DSS compliant. According to the University of Arizona website,
For cyber security, we conduct advanced research in autonomic intrusion detection, botnets and malware analysis, and cyber terrorism research. For e-commerce security, we are performing advanced research of relevance to fake e-commerce site detection, customer-based fraud detection, botnet related e-commerce transaction analysis, and social media analytics based e-commerce opinion mining.
KDD (Knowledge Discovery from Databases) techniques promise easy, convenient, and practical exploration of very large collections of data for organizations and users, and have been applied in marketing, finance, manufacturing, biology, and many other domains. Many of the KDD technologies could be applied in ISI studies (Chen 2006). Keeping in mind the special characteristics of crimes and security-related data, we categorize existing ISI technologies into six classes: information sharing and collaboration, crime association mining, crime classification and clustering, intelligence text mining, spatial and temporal crime pattern mining, and criminal network analysis.
PCI-DSS requirements protect customer information in the e-commerce space. Using data mining for large numbers of transactions can more easily show when customer behaviors seem abnormal. Being able to use artificial intelligence in security processes can help reduce the cost of PCI-DSS compliance while also adding value to the programs through better detection.
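The idea of flagging abnormal customer behavior statistically can be illustrated with a toy check. This is a deliberately simple sketch under assumed data: a per-customer z-score against a hypothetical purchase history, not a production fraud model, which would draw on far more features than the transaction amount.

```python
import statistics

# Hypothetical single customer's recent transaction amounts (USD).
history = [22.50, 18.75, 25.00, 19.99, 30.25, 21.10, 27.80, 24.40]

def is_abnormal(amount, history, threshold=3.0):
    """Flag a transaction whose z-score against this customer's own
    history exceeds the threshold (an assumed cutoff)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(amount - mean) / stdev
    return z > threshold

print(is_abnormal(23.00, history))   # within the customer's usual pattern
print(is_abnormal(950.00, history))  # far outside it
```

Run over millions of transactions, even a crude per-customer baseline like this narrows the pile a human has to review, which is where the compliance cost savings come from.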
Artificial Intelligence in Security Needs to Be More Robust
Despite the advances in artificial intelligence and machine learning, the technology still has a long way to go before it can be relied on completely. Looking at the Google video searching AI, researchers at the Paul G. Allen School of Computer Science & Engineering at University of Washington noted how easily it can be deceived.
“Machine learning systems are generally designed to yield the best performance in benign settings. But in real-world applications, these systems are susceptible to intelligent subversion or attacks,” said senior author Radha Poovendran, chair of the UW electrical engineering department and director of the Network Security Lab. “Designing systems that are robust and resilient to adversaries is critical as we move forward in adopting the AI products in everyday applications.”
“Such vulnerability of the video annotation system seriously undermines its usability in real-world applications,” said lead author and UW electrical engineering doctoral student Hossein Hosseini. “It’s important to design the system such that it works equally well in adversarial scenarios.”
“Our Network Security Lab research typically works on the foundations and science of cybersecurity,” said Poovendran, the lead principal investigator of a recently awarded MURI grant, where adversarial machine learning is a significant component. “But our focus also includes developing robust and resilient systems for machine learning and reasoning systems that need to operate in adversarial environments for a wide range of applications.”
There may not seem to be a clear link between fooling Google’s video search AI and the use of artificial intelligence in security, but infosec professionals recognize that while the capabilities exist, the holes remain easily exploitable. Hacking a video search has no real security implications; that kind of machine learning can afford to be imperfect because its failures carry no monetary or reputational risk. It may therefore not seem like a one-to-one comparison with artificial intelligence in security. The important takeaway, however, is that while machine learning can help fortify an organization’s security profile, it cannot do so without human staff to steer it in the correct direction.
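The kind of subversion the UW researchers describe can be shown on a toy model. This sketch is an assumption-laden illustration, not the UW attack or Google's system: a made-up linear classifier is fooled by nudging each input feature in the direction that raises its score, the core trick behind many adversarial-example attacks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear detector: score(x) = w @ x + b, positive means "attack".
w = rng.normal(size=8)
b = 0.0

def flags_as_attack(x):
    return float(w @ x + b) > 0

# A sample the detector confidently scores as benign (it points away from w).
x = -w / np.linalg.norm(w)

# Adversarial nudge: for a linear model the gradient of the score is just w,
# so stepping each feature along sign(w) maximally raises the score.
epsilon = 1.0
x_adv = x + epsilon * np.sign(w)

print(flags_as_attack(x), flags_as_attack(x_adv))
```

The same benign input, shifted by a small structured perturbation, flips the verdict. Real detectors are nonlinear, but the lesson carries over: a model tuned only for benign inputs can be steered by an adversary who knows which direction to push, which is why the human in the loop still matters.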
Machine learning and artificial intelligence in security promote the next wave of protection for companies and their customers. While information security professionals need to understand the ongoing research to stay ahead of the curve, they also need to remember that these advancements are intended to help them, not replace them.
Has your organization instituted machine learning to help with security concerns? How do you foresee the future of artificial intelligence in security? Tell us in the comments!