CIOs Turn to NIST Frameworks to Manage Generative AI Risks Effectively

September 30, 2024

Generative AI presents businesses with unprecedented opportunities and challenges, and Discover Financial Services is one organization navigating these waters carefully. The financial institution has been systematically integrating generative AI solutions across its operations, but with an emphasis on stringent risk management frameworks. By applying guardrails calibrated to the level of risk, Discover ensures that technology deployment aligns with its standards, expectations, and policies. This cautious but forward-looking approach offers a useful model as CIOs strive to harness AI's potential while minimizing the associated risks.

The National Institute of Standards and Technology (NIST) has emerged as a pivotal entity in guiding enterprises through the complexities of generative AI adoption. In July, NIST released a draft of its generative AI risk management framework, aimed at helping organizations like Discover manage AI's multifaceted risks. Discover's CIO, Jason Strle, finds the NIST guidelines particularly compatible with financial risk management practices, underscoring the framework's utility across industries. As businesses cautiously integrate AI into their operations, NIST's comprehensive approach offers a dependable starting point for risk mitigation.

1. Pinpoint Capability-Induced Risks

Understanding where generative AI introduces risk is the first crucial step for any organization looking to employ this advanced technology effectively. Strle has translated NIST's voluntary framework into actionable steps for his organization, the first of which is identifying where capabilities create risk. This preliminary phase entails a thorough evaluation of all AI applications within the organization to determine the potential risks they pose. Whether it handles customer-facing interactions or internal back-office tasks, every AI application comes with unique challenges that need specific attention.

By segmenting tasks and functions into smaller, manageable parts, organizations can better identify areas where generative AI capabilities might introduce vulnerabilities. Discover aims to ensure that no aspect of its operations is overlooked, maintaining a broad yet detailed perspective on risk. This meticulous approach helps both in understanding potential pitfalls and in framing suitable mitigation strategies. The focus is on preemptive identification, which allows organizations to act before risks translate into tangible threats. This stage therefore serves as a foundation for the subsequent steps in the NIST framework.
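
To make that segmentation concrete, here is a minimal sketch of what a use-case risk inventory could look like in Python. The use cases, risk categories, and 1-to-5 severity scale are hypothetical illustrations, not Discover's actual register:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One segmented unit of work where generative AI is applied."""
    name: str
    customer_facing: bool
    risks: dict[str, int] = field(default_factory=dict)  # risk name -> severity, 1 (low) to 5 (high)

# Hypothetical inventory entries, for illustration only.
inventory = [
    AIUseCase("internal document summarization", customer_facing=False,
              risks={"data leakage": 3, "hallucination": 2}),
    AIUseCase("customer service assistant", customer_facing=True,
              risks={"hallucination": 5, "bias": 4, "data leakage": 4}),
]

# Review the highest-severity use cases first so scrutiny follows risk.
for uc in sorted(inventory, key=lambda u: max(u.risks.values()), reverse=True):
    print(f"{uc.name}: worst risk = {max(uc.risks, key=uc.risks.get)}")
```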

Moreover, pinpointing capability-induced risks is not a one-off task but an ongoing process. As AI applications evolve, new risks may emerge that were previously unforeseen. Continuous monitoring and reassessment are essential to keeping the risk management strategy relevant and effective. This aligns with the NIST framework’s recommendation of ongoing evaluation and modification, ensuring that the risk management approach evolves alongside the technological advancements. It also creates a culture of vigilance and responsiveness, which is crucial for businesses operating in the dynamic landscape of artificial intelligence.

2. Demonstrate Comprehension of Risk Assessment and Mitigation

The second step in NIST’s framework, as adopted by Discover, involves demonstrating a deep understanding of how to assess and mitigate these identified risks. This involves not just the application of theoretical knowledge but also practical evidence that the organization can handle these challenges effectively. For Strle, this means proving the organization comprehends how to quantify and mitigate risks, which is an extension of traditional operational risk management paradigms familiar to banks and financial institutions.

Quantification of risk is a multifaceted exercise that requires advanced tools and methodologies. It is crucial to determine the potential impact of identified risks in numerical terms. This process often involves statistical models and simulations, which can provide a clearer picture of what’s at stake. For financial institutions, such as Discover, these models are instrumental as they translate abstract risks into measurable data points that can inform decision-making processes. This quantitative approach often involves cross-functional teams, incorporating insights from various departments to create a holistic risk profile.
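
As a rough illustration of how simulation can turn an abstract risk into a number, the sketch below runs a simple Monte Carlo estimate of annual loss from erroneous AI outputs. Every parameter is an invented placeholder, not a figure from Discover or NIST; a real model would be calibrated against the institution's own incident and loss data:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_loss(interactions=1_000_000, incident_rate=2e-5,
                         mean_loss=250.0, trials=10_000):
    """Monte Carlo estimate of annual loss from erroneous AI outputs.

    All parameters are hypothetical placeholders for illustration.
    """
    # Number of incidents in each simulated year, then a skewed
    # (lognormal) loss per incident, summed to an annual total.
    incidents = rng.binomial(interactions, incident_rate, size=trials)
    return np.array([rng.lognormal(np.log(mean_loss), 1.0, n).sum()
                     for n in incidents])

losses = simulate_annual_loss()
print(f"expected annual loss:  ${losses.mean():,.0f}")
print(f"95th-percentile loss: ${np.percentile(losses, 95):,.0f}")
```

The tail percentiles, not just the mean, are what inform decision-making: two applications with the same expected loss can carry very different worst-case exposure.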

Mitigation, on the other hand, revolves around implementing strategies designed to minimize or eliminate the risks. Discover’s approach includes setting up specific guardrails, continuously monitoring AI outputs, and ensuring a ‘human in the loop’ strategy. Human oversight is crucial in early stages to ensure that AI outputs are checked for accuracy and alignment with organizational policies. This mitigates risks such as data biases, hallucinations, and incorrect information generation, which could have severe repercussions if left unchecked.
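
A minimal sketch of such a gate, assuming a model-reported confidence score and a hypothetical list of banned policy phrases, might route low-confidence or policy-violating drafts to a human reviewer rather than releasing them:

```python
from typing import NamedTuple

class Draft(NamedTuple):
    text: str
    confidence: float  # 0..1, from the model or a downstream classifier

REVIEW_QUEUE: list[Draft] = []        # stand-in for a real review workflow
BANNED_PHRASES = ("rate guarantee",)  # hypothetical policy terms

def release_or_escalate(draft: Draft, threshold: float = 0.85):
    """Release the text only if it clears the automated guardrails;
    otherwise route it to a human reviewer and return None."""
    risky = draft.confidence < threshold or any(
        phrase in draft.text.lower() for phrase in BANNED_PHRASES)
    if risky:
        REVIEW_QUEUE.append(draft)    # human in the loop for risky outputs
        return None
    return draft.text

print(release_or_escalate(Draft("Your balance is $120.", 0.97)))       # released
print(release_or_escalate(Draft("We offer a rate guarantee!", 0.90)))  # None: escalated
```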

Furthermore, NIST’s guidelines encompass over 200 risk-mitigating actions, offering a menu of strategies that organizations can adapt based on their specific needs and risk tolerance levels. From establishing minimum performance thresholds to defining approval policies for deployment, these actions provide a robust framework for addressing various aspects of risk. By integrating these strategies, Discover and other enterprises can secure a more controlled and reliable generative AI deployment. Moreover, these measures extend beyond initial implementation, requiring continuous fine-tuning and recalibration to remain effective.
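
One way such minimum performance thresholds could be encoded is as a simple approval gate run against pre-deployment evaluation results. The metric names and limits below are illustrative assumptions, not values taken from the NIST guidance:

```python
# Hypothetical minimum-performance thresholds an application must clear
# before deployment approval; metric names and limits are illustrative.
THRESHOLDS = {
    "groundedness_min": 0.95,   # share of answers supported by cited sources
    "toxicity_rate_max": 0.001,
    "pii_leak_rate_max": 0.0,
}

def approve_deployment(evaluation: dict) -> bool:
    """Approve deployment only if every metric clears its threshold."""
    return (evaluation["groundedness"] >= THRESHOLDS["groundedness_min"]
            and evaluation["toxicity_rate"] <= THRESHOLDS["toxicity_rate_max"]
            and evaluation["pii_leak_rate"] <= THRESHOLDS["pii_leak_rate_max"])

print(approve_deployment({"groundedness": 0.97, "toxicity_rate": 0.0005,
                          "pii_leak_rate": 0.0}))  # True: clears every gate
```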

3. Conduct Daily Oversight

The final step that Discover incorporates from the NIST framework is the daily monitoring of AI applications. Continuous oversight ensures that potential risks are promptly identified and managed, preventing them from escalating. For Discover, this involves a robust system of checks and balances where AI applications are constantly evaluated against predefined criteria. Regular reviews and audits are conducted to assess performance, accuracy, and adherence to company policies. This proactive approach helps catch anomalies early, thus mitigating their impact on the business.

In practice, daily oversight requires a combination of automated monitoring tools and human supervision. Automated systems can track performance metrics in real-time, flagging any deviations that warrant closer inspection. Human supervisors, meanwhile, provide the necessary contextual understanding that automated systems might lack. This hybrid approach leverages the strengths of both technology and human judgment, creating a more resilient oversight mechanism. For example, issues like data integrity, privacy concerns, and biased outputs can be better managed through continuous human involvement, ensuring that AI applications remain aligned with ethical and operational standards.
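
The automated half of that hybrid might look like the following sketch, which flags a metric that drifts well outside its recent baseline so that a person can investigate with full context. The baseline values and three-sigma tolerance are assumptions for illustration:

```python
import statistics

# Hypothetical seven-day baseline for one monitored metric, e.g. the
# daily rate of answers flagged as unsupported; values are illustrative.
history = [0.011, 0.009, 0.010, 0.012, 0.010, 0.011, 0.009]

def flag_deviation(today: float, baseline: list, k: float = 3.0) -> bool:
    """Flag today's value if it sits more than k standard deviations
    above the recent baseline, queuing it for human inspection."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return today > mean + k * stdev

if flag_deviation(0.035, history):
    print("deviation flagged: escalate to a human reviewer for context")
```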

Moreover, daily oversight aligns with NIST’s broader risk management philosophy of ongoing evaluation. Risks associated with generative AI are not static; new vulnerabilities can emerge as the technology and its applications evolve. Therefore, businesses must remain vigilant, adapting their oversight mechanisms to stay ahead of potential threats. This involves not just monitoring current performance but also anticipating future risks based on trends and insights gained from ongoing operations. By maintaining this level of vigilance, organizations can sustain the effectiveness of their risk management strategies over the long term.

Effective risk mitigation also includes preparing for worst-case scenarios. For Discover and other organizations following NIST guidelines, having an off-switch or a contingency plan if AI applications go awry is essential. This ensures that the business can quickly revert to safer operational modes without compromising service quality or customer trust. Therefore, daily oversight isn’t just about monitoring current performance but also about being prepared for unforeseen challenges. It creates a safety net that allows organizations to experiment and innovate with AI technologies while keeping risks within manageable limits.
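
In code, such an off-switch could be as simple as a flag consulted on every request; the flag store and fallback behavior below are hypothetical, not Discover's actual mechanism:

```python
# Hypothetical off-switch: a flag checked on every request so operators
# can instantly revert to a safe, pre-AI mode if the model misbehaves.
# In practice the flag would live in a shared configuration store.
KILL_SWITCH_ENABLED = False

def generative_answer(query: str) -> str:
    return f"[model-generated reply to: {query}]"  # stand-in for a real model call

def fallback_answer(query: str) -> str:
    return "Please hold while we connect you with an agent."

def answer(query: str) -> str:
    if KILL_SWITCH_ENABLED:
        return fallback_answer(query)  # safe operational mode
    return generative_answer(query)

print(answer("What is my APR?"))
```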

Executives across the board are contending with evolving regulations and an increasing spotlight on AI ethics and governance. From the European Union's stringent AI regulations to proposals like California's Senate Bill 1047, businesses are preparing for a future where compliance and ethical AI deployment will be non-negotiable. As regulatory landscapes shift, stringent oversight and compliance become even more critical. A commitment to daily oversight thus helps organizations like Discover stay prepared and compliant with both current and forthcoming AI regulations.

Conclusion

Discover's experience shows how NIST's voluntary framework can be translated into a practical discipline: pinpoint where generative AI capabilities create risk, demonstrate that those risks can be quantified and mitigated, and subject every application to daily oversight backed by contingency plans. None of these steps is a one-off exercise; each demands continuous reassessment as models, applications, and regulations evolve. For CIOs facing the same pressures, from European Union rules to proposals like California's Senate Bill 1047, the framework offers a dependable starting point for deploying generative AI ambitiously while keeping its risks within manageable limits.
