Chatbots: Phase 1 Of AI Innovation In Cyber Security?
While artificial intelligence (AI) is no doubt appealing to the enterprise – cutting down practitioner workload, streamlining protocols and offering cutting-edge approaches to what were once menial tasks – it is accompanied by its share of challenges.
For one, the current shape of AI in cyber security is far different from how it may look in wider settings – say, in weapons automation, retail or customer service. For perimeter defense of the enterprise, AI is in its infancy.
Security practitioners have cautioned chief information security officers (CISOs) and members of the security team against transitioning all back-end defense to AI solutions. That’s because the technology is still relatively green: it is still working out kinks, such as reliably pinging security teams with urgent notifications. The process, it seems, must still mature – although the prospect of AI reducing workload in a field depleted of talent amid a skills shortage is profoundly appealing.
Security experts appear to caution end users against full reliance on AI, but they’re not telling them to abandon the principle altogether. In the near term, it seems, AI must mature considerably before it can function as a useful component atop various solutions.
Speaking previously with the Cyber Security Hub, IPsoft Chief Security Officer (CSO) John Alford acknowledged that AI in cyber security today mostly takes the form of chat-bots that convert frequently asked questions (FAQs) into simulated conversations. They can reset forgotten passwords or grant additional access.
But is AI adding “security value” when it’s parsing questions for keywords and displaying generic content? Alford wasn’t sold. Rather than serving as a “glitzy front end for solutions that have existed for years,” he said, AI must prove its value over time by addressing dynamic issues.
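To make the “glitzy front end” critique concrete, here is a minimal, hypothetical sketch of the kind of keyword-matching FAQ bot described above (the keywords and canned answers are invented for illustration): the “bot” simply scans a question for known keywords and returns pre-written content.

```python
# Hypothetical sketch of a keyword-matching FAQ chat-bot.
# The keywords and responses below are invented examples, not any vendor's product.

FAQ_RESPONSES = {
    "password": "To reset a forgotten password, visit the self-service portal.",
    "access": "Access requests are routed to your manager for approval.",
    "vpn": "Install the corporate VPN client and sign in with your SSO account.",
}

def faq_bot(question: str) -> str:
    """Return the first canned answer whose keyword appears in the question."""
    lowered = question.lower()
    for keyword, answer in FAQ_RESPONSES.items():
        if keyword in lowered:
            return answer
    # No keyword matched: hand the conversation to a person.
    return "Sorry, I don't know; escalating to a human operator."
```

There is no understanding of the question here, only substring matching, which is Alford’s point: the conversational surface is new, but the underlying mechanics have existed for years.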
Still, are these chat-bots a microcosm of the exponential AI growth to come? It’s certainly possible.
The CSO told Cyber Security Hub that the biggest challenge for AI as a whole is deriving signal from noise. Some solutions produce false positives or even false negatives. He also cited time management as a difficulty: security engineers must sift through thousands (or tens of thousands) of alerts and gigabytes of data to uncover actual issues.
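The triage burden Alford describes can be sketched in a few lines. The following is a hypothetical illustration (the `Alert` fields and thresholds are invented): given a flood of raw alerts, keep only the high-severity ones and collapse duplicates so each issue surfaces once.

```python
# Hypothetical sketch of alert triage: separating signal from noise.
# Field names and the severity threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "ids", "av", "dlp"
    severity: int      # 1 (informational) .. 5 (critical)
    repeat_count: int  # identical alerts seen in the last hour

def triage(alerts, severity_floor=3):
    """Keep high-severity alerts and drop duplicates from the same source."""
    seen = set()
    actionable = []
    for a in alerts:
        key = (a.source, a.severity)
        if a.severity >= severity_floor and key not in seen:
            seen.add(key)
            actionable.append(a)
    return actionable
```

A rule like this is cheap and transparent, but it is also brittle: set the floor too high and real incidents (false negatives) slip through; too low and engineers drown in false positives, which is exactly the trade-off a maturing AI solution would need to beat.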
The wider technology, then, will have to prove helpful in identifying the “bad guys” amid piles of unstructured data.
For enhancements, Alford said that end-users can rely on “good block lists” that add value and are far less expensive than solutions with the AI tag.
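The block-list approach Alford favors as a cheaper baseline can be sketched simply. This is a hypothetical illustration (the domains are invented): a plain set lookup, with parent domains checked so subdomains of a blocked zone are caught too.

```python
# Hypothetical sketch of a domain block list; the entries are invented examples.

BLOCK_LIST = {"malware.example", "phish.example", "c2.example"}

def is_blocked(domain: str) -> bool:
    """Check the domain and each parent domain against the block list."""
    labels = domain.lower().strip(".").split(".")
    # Check "a.b.c", then "b.c", then "c" so subdomains of blocked zones match.
    return any(".".join(labels[i:]) in BLOCK_LIST for i in range(len(labels)))
```

The appeal is obvious: a well-maintained list is fast, auditable and inexpensive. The limitation is equally obvious, as it only stops threats someone has already identified, which is the gap an AI solution would need to fill to justify its price.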
Still, the next big shift for AI in cyber security is the move beyond these chat-bots, which serve as, perhaps, an enterprise’s soft introduction to AI.
Alford said, “The breakthrough requires moving beyond the chat-bot AI prevalent today to a solution that is empowered with sufficient context and adaptive decision trees so that signal is maximized, noise is minimized and protection exceeds what can already be done with much cheaper and easily implemented blocking lists, threat-aware DNS, anti-malware, DLP, IPS, and such.”
See Related: The Boardroom Needs To Take Cyber Seriously
Despite their small sample size, chat-bots could be indicative of the innovation to come, from interaction to resolution. For all their glossy appearance, though, the technology is not catching on as quickly as some might have thought.
The adoption of such a technology has been slow, but also quite industry-specific. For example, a recent survey of 500 senior marketers – mainly based in the U.S. and U.K. – conducted by ClickZ and Freedman International suggests that only 7% of marketing decision-makers are currently using AI-powered chat-bots.
Twenty-seven percent of marketers said they’re thinking about the technology, and reasons for delay include internal teams not being prepared for the rollout, according to the survey, as relayed by MediaPost.
Yet, in a Gartner survey called “Predicts 2017: Artificial Intelligence,” experts project that by 2019, more than 10% of IT hires in customer service will mostly write scripts for bot interactions. Further, Gartner predicts that the bot scripters utilized for the technology will also handle “exceptions” – where the bot cannot identify the requisite steps and the resolution duties are passed to a human operator.
It’s feasible to envision, then, a security operation modeled on the same system over time.
For more information on AI and enterprise security, be sure to check out Cyber Security Hub’s February Market Report.