How AI Can Improve Cybersecurity, Right Now



Oliver Tavakoli, CTO at Vectra Networks, has written an insightful piece for Bloomberg Government on the application of AI technologies to the field of cybersecurity.

Tavakoli argues that AI and machine learning are best used to augment cybersecurity capabilities, not to replace human experts. Federal government agencies appear increasingly keen to apply AI approaches to cybersecurity, but confusion over applications and product differentiation is hampering the early stages of this process. This may stem from the tendency of media and marketing to wrap AI and related technologies such as machine learning and deep learning into neat buzzwords, implying that some all-encompassing AI power exists to tackle any issue.

The truth, for Tavakoli, is that rather than focusing on hazy, distant ambitions of general AI completely replacing human functions in cybersecurity, there are real opportunities to improve security systems right now by applying already capable AI subsets such as machine learning and deep learning. Besides improving detection of and response time to threats, these techniques can free human analysts from monotonous, repetitive tasks and allow them to focus on higher-value work. Rather than threatening analysts' jobs, such technological improvements would amplify their effectiveness in applying unique skill sets to the aim of thwarting attacks. The net result of integrating human experts with AI automation would be an improvement in cybersecurity.

As he states it:
"A new generation of security technologies wield machine learning as a flexible automation enabler.  Adopting and improving on decades-old "expert system" learning processes, security anomalies (false positive, true positive and unlabeled alerts) are initially responded to by a skilled security analyst, and their deduction processes and conclusions are learned by the system.  Thereafter, if a similar (or highly correlated) event is uncovered again, the system can automatically propose a triage solution to the analyst.  As trust in such systems grows, it is inevitable that the human analyst no longer needs to supervise the easy and known events, and instead can focus on the next toughest event, learning and applying that newfound knowledge and skill to the task at hand."
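The loop Tavakoli describes can be sketched in a few lines: an analyst labels alerts, the system remembers those verdicts, and when a sufficiently similar alert arrives it proposes the analyst's prior conclusion instead of escalating. The class and feature representation below are hypothetical illustrations, not part of any actual product; a minimal sketch assuming alerts can be reduced to numeric feature vectors compared by cosine similarity.

```python
# Hypothetical sketch of a triage-learning loop: analyst verdicts are stored,
# and a new alert that is highly correlated with a past one gets an
# auto-proposed triage; novel alerts fall through to the human analyst.

from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class TriageMemory:
    def __init__(self, threshold=0.95):
        self.labeled = []           # (features, verdict) pairs from the analyst
        self.threshold = threshold  # how correlated an alert must be to auto-propose

    def record(self, features, verdict):
        """Store an analyst's conclusion for a triaged alert."""
        self.labeled.append((features, verdict))

    def propose(self, features):
        """Return a proposed verdict if a highly correlated alert was seen before."""
        score, verdict = max(
            ((cosine(features, f), v) for f, v in self.labeled),
            default=(0.0, None),
        )
        return verdict if score >= self.threshold else None

# Example usage with toy feature vectors
memory = TriageMemory()
memory.record([1.0, 0.0, 3.0], "false positive")
print(memory.propose([1.0, 0.0, 3.1]))  # near-duplicate -> "false positive"
print(memory.propose([0.0, 5.0, 0.0]))  # novel alert -> None (escalate to analyst)
```

Real systems would use far richer alert features and learned models, but the design point survives even in this toy: easy, known events are answered from memory, and only genuinely new ones consume analyst attention.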

This seems an apt summary of what AI technology really means to us today.  It is likely not going to be the sensationalized, easy-to-market dream of full artificial intelligence, but the specific application of breakthroughs in AI subsets, so-called 'weak AI', that will impact us the most.

Read the full article here.
