Don Cox
Chief Information Security Officer, American Public Education, Inc.

Don Cox is a seasoned cybersecurity and IT executive with over 20 years of experience driving digital transformation, risk management, and enterprise security strategy. As the CISO and VP of IT Service Management at American Public Education, Inc. (APEI), Don leads cybersecurity, compliance, and IT operations, ensuring resilience in an evolving threat landscape. A strategic leader with expertise in healthcare, product development, and logistics, he has collaborated with federal agencies on cybercrime investigations. Recognized for visionary leadership, Don is passionate about AI, innovation, and fostering a security-first culture to enable business growth and operational excellence.

The Growing Need for AI Governance, Risk, and Compliance

Artificial intelligence (AI) is rapidly transforming enterprises by automating complex processes, enhancing predictive analytics, and improving decision-making. However, AI’s expansion brings significant challenges, including data privacy risks, security vulnerabilities, regulatory compliance issues, and ethical concerns. As organizations embed AI into their core business functions, they must establish a structured AI Governance, Risk, and Compliance (AI GRC) framework to ensure that AI-driven systems are transparent, secure, and aligned with business objectives.

For a successful AI GRC implementation, collaboration between the Chief Information Officer (CIO) and the Chief Information Security Officer (CISO) is essential. The CIO is responsible for AI innovation, strategy, and integration into the business, while the CISO ensures security, compliance, and risk management. When these two executives work together, they can develop a robust AI governance structure that enables innovation while safeguarding against AI-related risks.

Aligning AI Strategy with Security and Compliance

The first step in establishing an AI GRC framework is defining a shared strategy that balances innovation with risk management. CIOs must ensure AI deployments align with business priorities, while CISOs must enforce security and compliance protocols to protect data and maintain regulatory adherence.

A comprehensive AI policy framework should be developed to define governance structures, outline AI use cases, and set clear accountability for AI-related decisions. This framework should specify how AI models are developed, deployed, and monitored to ensure fairness, transparency, and security. Establishing a cross-functional AI governance committee that includes IT, security, legal, compliance, and business leaders will help enforce these policies, ensuring that AI initiatives support organizational objectives while minimizing security threats.

Identifying and Mitigating AI-Related Risks

AI introduces a range of unique risks, from algorithmic bias and adversarial attacks to model drift and regulatory violations. To mitigate these risks, CIOs and CISOs must collaborate to conduct AI risk assessments that evaluate vulnerabilities across the AI lifecycle. These assessments should focus on potential biases in AI models, security threats related to adversarial manipulation, and compliance risks stemming from data privacy regulations.

CISOs should lead threat modeling exercises to identify potential cybersecurity weaknesses in AI applications. By integrating security measures such as data encryption, role-based access controls, and real-time AI activity monitoring, organizations can reduce exposure to AI-driven security threats. Meanwhile, CIOs should work with IT teams to embed security and bias detection mechanisms directly into AI development workflows, ensuring that AI models remain ethical and trustworthy.
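One of the controls named above, role-based access for AI systems, can be sketched in a few lines. The role names and permission matrix below are illustrative assumptions, not a prescribed standard; a real deployment would back this with the organization's identity provider.

```python
# Illustrative role-based access control for AI model operations.
# Roles and permissions are hypothetical examples for this sketch.
PERMISSIONS = {
    "data_scientist": {"train", "evaluate"},
    "ml_engineer":    {"train", "evaluate", "deploy"},
    "auditor":        {"evaluate", "read_logs"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it;
    unknown roles are denied by default (least privilege)."""
    return action in PERMISSIONS.get(role, set())

print(authorize("ml_engineer", "deploy"))     # explicitly granted
print(authorize("data_scientist", "deploy"))  # not granted
print(authorize("intern", "train"))           # unknown role, denied
```

The deny-by-default lookup is the key design choice: an unrecognized role or action fails closed rather than open.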

Monitoring AI decision-making processes is also critical. Organizations should establish a system that continuously evaluates AI outputs for unintended bias or anomalies. If discrepancies are detected, corrective measures must be implemented promptly to ensure that AI remains aligned with compliance requirements and business ethics.
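The continuous evaluation described above can be made concrete with one common fairness signal: demographic parity, the gap in positive-outcome rates between groups. The group labels, the 0.10 threshold, and the review flag below are illustrative assumptions, not a regulatory requirement.

```python
# Sketch of a batch check that flags AI output for human review when the
# positive-prediction rate diverges too far between groups. Threshold and
# group names are hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

def check_outputs(predictions, groups, threshold=0.10):
    """Flag the batch for corrective review if the gap exceeds the threshold."""
    gap = demographic_parity_gap(predictions, groups)
    return {"gap": round(gap, 3), "review_required": gap > threshold}

# Example batch: group B receives no positive outcomes at all.
preds  = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(check_outputs(preds, groups))  # gap 0.75 -> review required
```

Demographic parity is only one of several fairness definitions; which metric and threshold apply is itself a governance decision for the committee described earlier.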

Ensuring AI Compliance with Evolving Regulations

As AI regulations become more stringent, enterprises must ensure their AI implementations comply with legal and industry standards. Regulatory frameworks such as the EU AI Act, NIST AI Risk Management Framework, GDPR, and other industry-specific mandates dictate how AI should be governed, monitored, and reported.

The CISO plays a vital role in overseeing AI compliance efforts, working closely with legal and regulatory teams to ensure adherence to established standards. Meanwhile, the CIO must ensure that compliance requirements are built into AI deployments from the outset, rather than being addressed after implementation.

To maintain regulatory alignment, organizations should establish a structured compliance review process that evaluates AI systems against applicable laws. AI models used for decision-making should be transparent and explainable, allowing regulatory bodies and internal auditors to assess their fairness and legality. In addition, organizations should develop third-party risk management protocols to ensure AI vendors and partners meet the same security and compliance standards required internally.

Implementing AI Monitoring, Auditing, and Incident Response

AI governance does not end once an AI system is deployed. Continuous monitoring is essential to detect security threats, model drift, compliance violations, and operational failures. By implementing an AI performance monitoring framework, organizations can track AI systems in real time, flagging potential anomalies that could compromise their integrity.
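One widely used drift signal such a monitoring framework might compute is the Population Stability Index (PSI), which compares a live score distribution against the baseline the model was validated on. The bucket edges and the 0.2 alert threshold below are illustrative assumptions.

```python
# Sketch of model-drift detection via the Population Stability Index.
# Bucket edges and the alert threshold are hypothetical choices.
import math

def psi(expected, actual, edges):
    """PSI between two score samples over fixed buckets; 0 means identical."""
    def frac(sample, lo, hi):
        count = sum(1 for x in sample if lo <= x < hi)
        return max(count / len(sample), 1e-6)  # floor to avoid log(0)
    value = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        value += (a - e) * math.log(a / e)
    return value

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # validation scores
live     = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9]  # scores shifted upward
edges    = [0.0, 0.25, 0.5, 0.75, 1.01]
score = psi(baseline, live, edges)
print(f"PSI={score:.2f}, drift alert: {score > 0.2}")
```

A PSI near zero means the live distribution matches the baseline; a breach of the threshold would feed the anomaly-flagging process the framework describes.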

Regular AI audits should be conducted to ensure that models function as intended. These audits should examine AI decision-making patterns, verifying that AI-generated outcomes are consistent, fair, and unbiased. The results of these audits should be reviewed by IT, security, and compliance teams, who can make necessary adjustments to maintain AI reliability.

In addition to ongoing monitoring, organizations must develop an AI incident response plan that outlines how to address AI-related security breaches and compliance failures. This plan should detail procedures for identifying AI vulnerabilities, escalating security concerns, and implementing remediation strategies. A well-structured response plan will enable enterprises to swiftly address AI failures while minimizing operational disruptions.
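The escalation logic such a plan might encode can be sketched as a simple routing table from incident type and severity to an owner and response target. The categories, owners, and time targets below are illustrative assumptions, not a recommended playbook.

```python
# Sketch of AI incident escalation routing. All owners, categories, and
# response-time targets are hypothetical examples.
from dataclasses import dataclass

PLAYBOOK = {
    # (incident_type, severity) -> (owner, response_time_hours)
    ("security_breach", "high"):      ("CISO on-call", 1),
    ("security_breach", "low"):       ("security team", 24),
    ("compliance_violation", "high"): ("legal and compliance", 4),
    ("compliance_violation", "low"):  ("compliance team", 48),
    ("model_failure", "high"):        ("ML engineering on-call", 2),
    ("model_failure", "low"):         ("ML engineering backlog", 72),
}

@dataclass
class Escalation:
    owner: str
    response_time_hours: int
    disable_model: bool

def escalate(incident_type: str, severity: str) -> Escalation:
    """Route an AI incident per the playbook; high-severity incidents also
    take the affected model out of service pending remediation."""
    owner, hours = PLAYBOOK[(incident_type, severity)]
    return Escalation(owner, hours, disable_model=(severity == "high"))

print(escalate("security_breach", "high"))
```

Encoding the plan as data rather than prose makes it auditable and testable, which supports the audit cycle described above.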
Building an AI-Responsible Culture

Ensuring that AI is used responsibly across the enterprise requires a culture of awareness and accountability. CIOs and CISOs should work together to educate employees, business leaders, and IT teams about AI risks, ethical considerations, and compliance requirements.

Training programs should be introduced to help IT and security teams understand AI security risks, bias detection techniques, and regulatory frameworks. Business leaders should also receive AI literacy training to help them make informed decisions about AI implementations.

Encouraging responsible AI experimentation can further reinforce governance best practices. Organizations should establish AI sandboxes—controlled environments where teams can test AI models while ensuring security and compliance protocols are in place. This allows for innovation without exposing the organization to undue risks.

By fostering a company-wide AI-aware culture, enterprises can empower employees to use AI responsibly while maintaining alignment with security and regulatory expectations.

Conclusion

As AI becomes increasingly integrated into enterprise operations, the collaboration between CIOs and CISOs is more critical than ever. The CIO’s role in AI adoption and innovation must be balanced by the CISO’s focus on security, compliance, and risk management. Together, these executives can build a comprehensive AI GRC framework that promotes AI-driven growth while protecting the organization from AI-related threats.

By establishing governance policies, conducting risk assessments, ensuring regulatory compliance, monitoring AI performance, and fostering an AI-responsible culture, CIOs and CISOs can ensure that AI remains an asset rather than a liability. A strong partnership between these two leaders will not only enhance AI security and compliance but will also enable organizations to leverage AI’s full potential with confidence and accountability.
