Michel Helal

Cybersecurity, Artificial Intelligence and Maple Syrup


Is the maple syrup on your waffles genuine or artificial? How do you know? Years of research and complex molecular engineering have been invested in making a gooey liquid look, taste and behave like the real thing. Your taste buds may not know the difference between real and artificial syrup, but you can read the label on the back of the bottle to determine what is real and what is artificial.


Syrup and Artificial Intelligence are similar, but for AI there are often no labels to reveal what is real and what is artificial. If there were a label explaining the contents, benefits and vulnerabilities of AI, it might include the following items.


Benefits

The benefits of artificial intelligence for Cybersecurity have been discussed for many years. They are usually framed in terms of cost reduction and efficiency, but AI offers Cybersecurity more than just lower costs:

  • AI will automate many activities, much as accounting processes were automated by financial applications in the early days of computers. AI may enable quicker and more efficient review of logs and analysis of endpoint data, improving threat detection and response times. It also has the potential to remove human error from the review and analysis process.

  • AI is adaptable to new security threats, and may be able to forecast threats that are not yet known. Cybersecurity SIEM tools currently use a baseline of known use cases to guide analysis of security activities, but those use cases have to be adapted and updated for new situations. Once an AI algorithm has the baseline use cases, it may be able to identify new use cases and determine their risk to the business. It may also prioritize those risks and propose a likely response to new threats.

  • AI has the potential to learn the rhythm of a business and note any changes. New activities that are a legitimate addition to the standard business profile can be assessed, and AI can then analyze the updated profile to report risks, predict outcomes and recommend actions (a small sketch after this list illustrates the baseline-and-deviation idea).

  • AI is already used, and will likely become a staple, for meeting government regulations as well as Cybersecurity compliance frameworks such as PCI DSS, ISO 27001 and ISO 27002. It may even be used to demonstrate compliance with privacy regulations.

  • AI can be used to free Cybersecurity resources from manual (i.e. boring and repetitive) tasks so that they can focus on more gratifying, interesting and valuable work.
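
As a rough illustration of the baseline-and-deviation idea above, the sketch below trains a simple anomaly detector on a handful of "normal" sessions and flags an unusual one. It is a minimal sketch, not a production design: it assumes Python with scikit-learn is available, and the feature values (login hour, megabytes transferred) are made up for illustration.

    # Minimal sketch: learn a baseline of "normal" activity, then flag deviations.
    # Assumes scikit-learn; feature values are illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical baseline: [login_hour, megabytes_transferred] per session.
    baseline = np.array([
        [9, 12], [10, 15], [11, 9], [14, 20], [16, 11], [9, 14], [13, 18],
    ])

    # Fit the detector on the known-good business rhythm.
    detector = IsolationForest(contamination=0.1, random_state=0)
    detector.fit(baseline)

    # New sessions: one looks routine, one is a 3 a.m. bulk transfer.
    new_sessions = np.array([[10, 13], [3, 900]])
    for session, score in zip(new_sessions, detector.predict(new_sessions)):
        # predict() returns 1 for normal, -1 for anomaly.
        print(session, "anomaly - review" if score == -1 else "normal")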


Weaknesses

Existing cybersecurity tools have weaknesses: false positives, false negatives, incomplete or incorrect implementation. AI is no different; it has its weaknesses too.

  • AI runs on infrastructure and relies on security policies just like any other application, and it is vulnerable to the same threats. All the controls and defenses that are implemented for non-AI operations must also be implemented for AI operations.

  • Data poisoning: The old saying “garbage-in, garbage-out” applies to AI. An AI algorithm may not know that it is being fed data designed to lead it to a specific response or conclusion, and confusing an AI algorithm can be fun and profitable. AI has the ability to adapt to new situations and generate appropriate responses, but that ability depends on the data provided to the AI software, as well as the use cases that have been programmed into the AI algorithm (a small illustration follows this list).

  • Validating use cases and AI analysis, and establishing a series of tests with expected outcomes, will be crucial to certifying that an AI algorithm is working as intended. Something like a Turing test, but for reliability and dependability (a sketch of such a test appears after this list).

  • AI development appears to be sensitive to the developer’s personality. One is reminded of a quote from Yeats: “How can we know the dancer from the dance?” A developer may create an AI algorithm that omits items the developer does not consider important. Separating the creator of the algorithm from the algorithm itself may be difficult, which creates doubt about whether the algorithm is completely objective.

  • We are seeing many breakthroughs and advances in the AI field, but it is still early days for AI, and much needs to be considered when implementing it in Cybersecurity. It is prudent to implement AI with care and to monitor the results to be sure it is operating as expected.

  • The same rules and protections that apply to any other sensitive piece of infrastructure apply to AI. Patching, bug fixes and network security are all critical to maintaining AI. AI may be capable of alerting us to risks, but it still takes time and resources to implement patches and test changes to be sure there are no adverse effects on business operations.
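
To make the data-poisoning point concrete, here is a toy sketch (my own illustration, not from the article) in which an attacker flips a share of training labels and the resulting model tends to perform worse than one trained on clean data. It assumes Python with scikit-learn and uses a synthetic dataset; real poisoning attacks are usually far subtler.

    # Toy illustration of data poisoning: flipping training labels
    # ("garbage in") degrades a simple classifier. All data is synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Model trained on clean labels.
    clean_model = LogisticRegression().fit(X_train, y_train)
    print("clean accuracy:   ", clean_model.score(X_test, y_test))

    # An attacker silently relabels 40% of one class in the training data.
    rng = np.random.default_rng(0)
    ones = np.flatnonzero(y_train == 1)
    flipped = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[flipped] = 0

    # Same model, same features, poisoned labels: typically lower accuracy.
    poisoned_model = LogisticRegression().fit(X_train, y_poisoned)
    print("poisoned accuracy:", poisoned_model.score(X_test, y_test))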
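
And here is a minimal sketch of the “series of tests with expected outcomes” idea: a fixed set of inputs paired with the responses the model is expected to give, run on every change to the model. The classify_event function and the test cases are hypothetical placeholders, not a real product interface.

    # Sketch of regression tests with expected outcomes for an AI component.
    # classify_event is a hypothetical stand-in for the deployed model.
    def classify_event(event: dict) -> str:
        # Placeholder logic; in practice this would call the real model.
        return "anomaly" if event.get("megabytes", 0) > 500 else "normal"

    EXPECTED_OUTCOMES = [
        ({"user": "alice", "hour": 10, "megabytes": 12}, "normal"),
        ({"user": "bob", "hour": 3, "megabytes": 900}, "anomaly"),
    ]

    def test_known_cases():
        for event, expected in EXPECTED_OUTCOMES:
            result = classify_event(event)
            assert result == expected, f"{event} -> {result}, expected {expected}"

    if __name__ == "__main__":
        test_known_cases()
        print("all expected outcomes matched")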


Summing Up

AI for Cybersecurity is exciting and holds great promise, but we should not expect AI to solve every Cybersecurity problem. Nor is AI a set-it-up-once-and-forget-it tool. AI requires monitoring, and the same security policies and protections as any other sensitive piece of infrastructure, probably more so.



And we should not forget that such a powerful tool as AI can be used by threat actors to improve attack campaigns, make their attacks less costly, identify and prioritize victims, and predict chances of success.


AI is another tool, one to be used judiciously and with thought given to where it can be applied most effectively.


