SDLC for Trusted AI in cybersecurity

Gary Givental
Jan 29, 2024 · 6 min read


As enterprises and Managed Security Service Providers (MSSPs) increasingly deploy AI systems, the stakes in cybersecurity have never been higher. With a growing reliance on AI for automation of critical functions, from threat detection to incident response, the imperative question arises: Can we trust these AI systems to do the right thing?

In the cybersecurity realm, we’re in a relentless search for needles in a haystack. A shortage of skilled professionals and the rising complexity of systems only amplify the need for reliable AI. As we depend on AI’s growing intelligence, its trustworthiness must scale accordingly. We’re faced with the daunting task of ensuring these advanced systems do what they’re designed to do, especially when they operate as black boxes with a degree of non-determinism.

Beyond the Fear

So, how do we move beyond the fear and truly take advantage of this powerful technology? I propose organizations consider the following methodology:

1. Risk versus Reward

In the world of cybersecurity, trust is a currency that’s hard-earned and easily spent. Adopting AI systems comes with its own calculus of risk versus reward. The fundamental question I’ve encountered in numerous customer interactions is the trustworthiness of AI — can it reliably detect true threats without fail? The answer lies not in blind faith but in a careful analysis of the potential risks and the substantial rewards that AI automation brings to the table. Do not be paralyzed by the risk of your AI application doing the wrong thing — use data to assess the true reward and potential risk impact. If the risks are low and rewards are high — move forward!
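To make that concrete, here is a minimal sketch of what such a data-driven assessment might look like. The event volumes, error rate, and cleanup cost below are hypothetical placeholders, not figures from any real deployment:

```python
# Hypothetical sketch of a data-driven risk/reward assessment.
# All rates, costs, and thresholds are illustrative placeholders.

def expected_value(events_per_month: int,
                   minutes_saved_per_event: float,
                   error_rate: float,
                   cost_per_error_minutes: float) -> float:
    """Net analyst minutes saved per month by automating a task."""
    reward = events_per_month * minutes_saved_per_event
    risk = events_per_month * error_rate * cost_per_error_minutes
    return reward - risk

# Example: 10,000 alerts/month, 5 minutes saved each,
# 1% error rate, 60 minutes of cleanup per error.
net = expected_value(10_000, 5.0, 0.01, 60.0)
print(f"Net minutes saved per month: {net:,.0f}")  # 44,000
```

If the number comes out strongly positive, as in this toy example, the reward justifies moving forward with the controls described below.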

2. Humans to the rescue!

The best way to lean into minimizing risk is to take a look in the mirror. Yes: the answer is real people. Our human experts are pivotal not only in guiding these risk-reward conversations but also in the very process of training AI systems. Human-in-the-loop controls are an absolute must while the industry matures the accuracy, transparency, explainability, and decision provenance of AI.

3. Never Trust, Always Verify

The adage “never trust, always verify” is key here. We need to establish the right metrics, audit processes, and configuration controls to allow human experts to observe and operate the AI solution. This cautious approach means starting from a position of skepticism towards the system’s functionality and gradually building trust through rigorous verification.

The new AI-based SDLC

Figure: AI SDLC loop (Gary Givental)

The journey from innovative AI solutions to robust, production-ready systems is marked by several critical steps. To support this, we developed the following process:

  • Proof of Concept & Impact Analysis: Evaluating the AI’s potential effectiveness in a controlled environment.
  • Passive Mode & Metrics: Observing the AI’s decisions without enacting them, to establish a baseline of trustworthiness.
  • Opt-In and Opt-Out Controls: Gradually introducing the system to real-world scenarios, with the flexibility to revert or adjust as needed.
  • Human in the Loop: Ensuring that every automated decision can be overseen or reversed by a human expert.
  • Audit Processes: Implementing rigorous checks to maintain accountability and refine the AI’s decision-making processes.

By adhering to these steps, we navigate the risk landscape, build up trust, and ultimately access the rewards of AI automation in a production environment.
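One way to picture this loop in code is as a simple state machine over these stages (including the Proof step detailed below). The transition rules here are my own illustrative assumption, not a prescribed implementation:

```python
# Minimal sketch of the SDLC stages as a state machine.
# Stage names follow the six steps in this article; the allowed
# transitions are an assumption for illustration only.
from enum import Enum, auto

class Stage(Enum):
    POC = auto()
    PASSIVE_MODE = auto()
    PROOF = auto()
    OPT_IN = auto()
    HUMAN_IN_LOOP = auto()
    AUDIT = auto()

# Forward path, plus loops back: weak metrics or audit findings
# feed new experiments rather than halting the program.
ALLOWED = {
    Stage.POC: {Stage.PASSIVE_MODE},
    Stage.PASSIVE_MODE: {Stage.PROOF, Stage.POC},
    Stage.PROOF: {Stage.OPT_IN, Stage.PASSIVE_MODE},
    Stage.OPT_IN: {Stage.HUMAN_IN_LOOP},
    Stage.HUMAN_IN_LOOP: {Stage.AUDIT},
    Stage.AUDIT: {Stage.POC, Stage.OPT_IN},  # feedback loop
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move to the next stage only along an allowed transition."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target
```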

Let’s dive deeper into each step!

1) Proof of Concept (POC) and Impact Analysis

The journey begins with a Proof of Concept (POC), where the AI’s functionality is reviewed to ensure it aligns with the intended design, whether that’s enriching data, automating processes, or performing complex decision-making. Impact analysis follows, estimating the effect the system would have in a production environment. Before we take this to production, it is wise to consider whether the design will have enough of an impact to warrant the investment. In other words, is this a great opportunity, or do we need to refine the approach and keep experimenting?

2) Passive Mode and Metrics: The Observation Phase

Transitioning from the POC, the AI system enters a Passive Mode: an observational phase where the system’s outputs are monitored and compared to actual outcomes without impacting live environments. During this step, we do additional engineering and development to build out the system and deploy it so that it has access to real data to decorate. This step is critical for gathering metrics on the AI’s performance, providing a non-intrusive glimpse into its efficacy. Key metrics include agreement rates with human analysts, automation volumes, and the accuracy of threat classifications.
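As a sketch of what those metrics might look like in practice, the snippet below computes an agreement rate between AI and analyst verdicts. The record shape and verdict labels are hypothetical:

```python
# Sketch of the kind of agreement metric gathered in Passive Mode.
# Field names and verdict labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    alert_id: str
    ai_verdict: str       # e.g. "escalate" or "close"
    analyst_verdict: str  # what the human actually did

def agreement_rate(decisions: list[Decision]) -> float:
    """Fraction of alerts where the AI matched the analyst."""
    if not decisions:
        return 0.0
    matches = sum(d.ai_verdict == d.analyst_verdict for d in decisions)
    return matches / len(decisions)

log = [
    Decision("a1", "close", "close"),
    Decision("a2", "escalate", "escalate"),
    Decision("a3", "close", "escalate"),  # disagreement: audit this one
]
print(f"Agreement with analysts: {agreement_rate(log):.0%}")  # 67%
```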

3) Proof: Establishing Credibility

Figure: What shall we measure? (Gary Givental)

Proof is the validation we seek through the metrics gathered during Passive Mode. It demonstrates to stakeholders, from SOC analysts to leadership, that the AI system can deliver on its promises. This stage is about solidifying the AI’s value proposition, showcasing tangible benefits such as time savings, cost efficiency, manual labor avoidance, and operational efficiency.
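As a hedged illustration, Passive Mode data might roll up into a stakeholder-facing proof summary like the one below; the field names and the minutes-per-alert figure are assumptions for the example:

```python
# Hypothetical roll-up of Passive Mode metrics into a "proof"
# summary for stakeholders. All inputs are illustrative.
def proof_summary(total_alerts: int, agreed: int,
                  minutes_per_alert: float) -> dict:
    return {
        "agreement_rate": agreed / total_alerts,
        "automatable_volume": agreed,
        "analyst_hours_saved": agreed * minutes_per_alert / 60,
    }

print(proof_summary(total_alerts=12_000, agreed=10_800, minutes_per_alert=5))
# {'agreement_rate': 0.9, 'automatable_volume': 10800, 'analyst_hours_saved': 900.0}
```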

4) Opt-in Controls, Enablement, and Opt-out Options

Opt-in controls allow for a selective and controlled introduction of AI capabilities to a production application. In this phase, we can develop incremental enablement waves to apply the new system to small batches of clients. We also develop surgical opt-out controls as a safety mechanism to withdraw or restrict the AI-based automation, as sketched after the workflow figure below.

Figure: AI SDLC workflow (Gary Givental)
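Here is a minimal sketch of how such enablement waves and surgical opt-outs might be wired; the client IDs, wave groupings, and function names are hypothetical:

```python
# Sketch of opt-in enablement waves with a per-client opt-out
# override. Client IDs and wave sizes are illustrative.
ENABLEMENT_WAVES = [
    {"client-001", "client-002"},                # wave 1: pilot clients
    {"client-003", "client-004", "client-005"},  # wave 2
]
OPTED_OUT: set[str] = {"client-004"}             # surgical opt-out

def automation_enabled(client_id: str, current_wave: int) -> bool:
    """True if the client is in an enabled wave and has not opted out."""
    if client_id in OPTED_OUT:
        return False
    enabled = set().union(*ENABLEMENT_WAVES[:current_wave])
    return client_id in enabled

assert automation_enabled("client-001", current_wave=1)
assert not automation_enabled("client-004", current_wave=2)  # opted out
assert not automation_enabled("client-003", current_wave=1)  # wave not live
```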

5) Human in the Loop: The Safety Net

Human oversight is an indispensable part of the process. By keeping a human in the loop at all stages, the AI’s decisions are vetted and verified, ensuring reliability and allowing for real-time correction and guidance. This step reinforces the system’s accountability and nurtures the AI with expert knowledge and ethical considerations. We also leverage an Expert System to provide additional human-defined configuration.
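A simple sketch of one way to implement that safety net is to route low-confidence or high-impact decisions to an analyst instead of auto-executing them. The confidence threshold and action names here are illustrative assumptions, not values from any real system:

```python
# Sketch of a human-in-the-loop gate: risky or uncertain decisions
# go to an analyst queue rather than executing automatically.
CONFIDENCE_THRESHOLD = 0.9                       # assumed cutoff
HIGH_IMPACT_ACTIONS = {"isolate_host", "disable_account"}

def route_decision(action: str, confidence: float) -> str:
    """Decide whether an AI action runs or waits for human review."""
    if action in HIGH_IMPACT_ACTIONS or confidence < CONFIDENCE_THRESHOLD:
        return "queue_for_analyst_review"
    return "auto_execute"

print(route_decision("close_alert", 0.97))   # auto_execute
print(route_decision("isolate_host", 0.99))  # queue_for_analyst_review
```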

6) Audit Process and Feedback Loop: Continuous Improvement

Finally, the Audit Process and Feedback Loop close the cycle of trust-building. Regular audits provide scrutiny of the AI’s decisions, flagging any discrepancies or anomalies. The feedback loop then facilitates continuous learning and improvement of the AI system, ensuring that the AI evolves in sync with the dynamic threat landscape and organizational needs. The Audit process also allows us to identify opportunities for additional tuning of existing security systems, as well as insight into threats affecting our clients.
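As an illustration, an audit pass can be as simple as diffing automated verdicts against later human-confirmed outcomes; the record fields below are hypothetical:

```python
# Sketch of a periodic audit pass: flag decisions where the AI
# verdict diverged from the human-confirmed ground truth.
def audit(decisions: list[dict]) -> list[dict]:
    """Return decisions whose AI verdict did not match the outcome."""
    return [d for d in decisions
            if d["ai_verdict"] != d["confirmed_outcome"]]

log = [
    {"id": "a1", "ai_verdict": "close", "confirmed_outcome": "close"},
    {"id": "a2", "ai_verdict": "close", "confirmed_outcome": "escalate"},
]
for finding in audit(log):
    print(f"Discrepancy on {finding['id']}: feed back into training/tuning")
```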

Summary

As we navigate the complexities of integrating AI into cybersecurity, the emphasis on maintaining ‘opt-in’ and ‘opt-out’ controls, coupled with the non-negotiable presence of a ‘human in the loop’ and a robust ‘feedback loop,’ is paramount. We’re evolving a collaborative environment where expert systems and machine learning coexist, supported and validated by solid metrics and thorough audits. This disciplined approach helps us transition from a position of skepticism — ‘never trust, always verify’ — to a more mature, trust-based model, with the confidence that our systems will perform as intended.

The exploration of large language models builds on this foundation, allowing us to make data-driven decisions rather than fear-driven reactions. In cybersecurity, where the mantra ‘move fast and break things’ can have dire consequences, we advocate for a more measured approach: decorate, validate, and control.

Complementary AI technologies produce a better outcome! — Author

In conclusion, the process of developing and deploying AI systems in cybersecurity is a little bit akin to parenting.

We must ‘teach by example, verify by observation, and trust by results.’ — Gary Givental

This philosophy allows us to be proactive in building and refining AI systems with reliable metrics and audit processes that foster trust.

In closing:

“Inaction breeds doubt and fear. Action breeds confidence and courage. If you want to conquer fear, do not sit at home and think about it. Go out and get busy.” -Dale Carnegie
