eBook: Preparing for the age of AI
Discover how to navigate AI adoption responsibly and transform operations across your organization with these practical insights.
Managing Consultant at Avertium
Organizations are rapidly integrating artificial intelligence into core business operations to enhance efficiency and inform decision-making. However, as adoption grows, so does the risk of shadow AI: the use of AI tools and models without IT or corporate oversight or approval.
In addition to the inherent risks associated with unapproved software, the adoption of new AI technologies presents further challenges in security, compliance, and governance that organizations must proactively manage. With shadow AI, organizations are exposed to potential data leakage, regulatory violations, and unsupervised AI-powered decision-making, often without adequate oversight or transparency regarding AI utilization.
This article explores shadow AI risks and how to mitigate them.
Shadow AI describes the use of artificial intelligence tools within an organization without proper authorization or approval, frequently circumventing established IT protocols, security standards, and executive oversight. Similar risks occur when workforce members use unapproved software, and the risks are significant when shadow AI models, chatbots, or automation tools are used without IT or management being aware of them. While some traditional software might store information, AI models and tools can access and use data in new and different ways. Sensitive information could even become part of a model’s training data, moving it from inside the company into the public domain, where the organization has no control over security configurations.
Without a proper governance structure and monitoring program, organizations may unknowingly expose personally identifiable information (PII), protected health information (PHI), customer information, or proprietary information. Managing AI has rapidly become a crucial part of an organization’s operational governance practices to identify and mitigate risks while still enabling innovation and efficiency gains.
The rapid proliferation of artificial intelligence tools has made AI more accessible than ever, empowering employees to integrate advanced models into their daily work without IT’s knowledge. Understanding the underlying causes of shadow AI is essential for building resilient strategies that balance innovation with security and compliance:
Accessibility
AI tools are more available now than ever before. Good examples are free access to tools such as Gemini or ChatGPT, or the free version of Copilot bundled with the Windows 11 operating system. Employees can now incorporate AI models into their everyday tasks with ease, and open-source artificial intelligence, such as large language models (LLMs) and image-generation tools, makes it even easier for anyone to get started. It is a mistake to assume that workforce members are not using these free, publicly available tools.
Overly restrictive controls
Some organizations block access to public open-source tools, including AI models, to maintain compliance and security. However, without strong AI governance, employees may bypass these restrictions to work efficiently. A balanced approach to security, compliance, governance, education, and innovation is essential.
Slow formal adoption
While employees are experimenting with AI to enhance their personal lives, some organizations are slow to formally recognize or implement AI in the workplace. This gap can lead to employee frustration and may encourage them to seek out alternative methods to boost their work performance.
SaaS provider AI implementation
Many third-party SaaS providers are integrating AI into existing platforms, often without requiring separate purchases, approvals, or notifications to customers. As a result, companies may be introducing AI into their environment without realizing the extent of the exposure. This makes it difficult for IT and management to track and assess AI-related risks.
Insufficient governance and policies
Without established AI governance structures in place, AI tools and models enter the organization without security or compliance due diligence. Many organizations lack formal policies and procedures for AI use, or are only beginning to implement them, which creates inconsistencies in how different departments or team members adopt and manage AI solutions. As with any procurement procedure, the security and functionality of AI components must be assessed to reduce or eliminate uncontrolled adoption.
Workforce education
Employees often lack the training to use approved AI tools responsibly and ethically, which leads to unintentional risks. Workforce members may upload or prompt AI with sensitive or classified information, or may trust AI outputs without verification. A lack of AI education and training across the workforce contributes to the uncontrolled spread of shadow AI.
AI value gap
AI tools and models typically promise significant efficiency gains but may not align with business objectives. Organizations must strategically evaluate the use cases and operational value of introducing AI into business practices, and develop formal AI governance, risk, and compliance programs so that only what is needed and approved is implemented.
AI tools and models used without proper oversight pose significant challenges. When employees leverage unapproved or ungoverned AI solutions, organizations face a complex array of risks that extend beyond traditional IT concerns.
Proactively understanding and preparing for the undeniable changes that AI has introduced is key to ensuring a solid AI governance, risk, and compliance program. This technology cannot be ignored. Improving your shadow AI security by identifying and addressing all associated risks is imperative. Here’s what you can do today:
Get executive sponsorship and form a team: Assemble a cross-functional team with executive leadership buy-in, then define clear objectives. Identify all current AI uses and risks.
Define scope and objectives: Identify the business needs, assess current AI adoption, and set clear goals assigned to team members. Hold recurring, documented team meetings, and update or revise goals throughout the process. Goals should include an assessment of which use cases the company wants to pursue (e.g., internal use only, AI for software development, AI for public-facing customer software, etc.).
Adopt a framework: Educate your team members on the publicly available AI risk management frameworks (e.g., NIST AI RMF, ISO 42001, ISO 23894, etc.) and choose one to align the business’s security and compliance programs with.
Conduct a gap analysis: Once a framework has been adopted, review all existing compliance and security controls against the standards of the chosen framework to identify baseline gaps.
Define acceptable AI use: Define what tools, platforms, models, etc., are acceptable and approved by the organization for the workforce to use, and ensure business justification is tied to each approval. Formalize and document this list as part of the overall organization’s acceptable software inventory. To assist with AI tool or model approval, a thorough cost/benefit analysis should be conducted.
Establish roles and responsibilities: Assign ownership for AI tool approvals, AI monitoring processes, AI data quality oversight, and management of third-party vendors and partners that use or could use AI.
Update policies and procedures: Create AI-specific policies, especially an Acceptable Use Policy (AUP) for employees, and revise existing policies to incorporate appropriate AI use standards.
Implement controls: Develop and implement controls such as formal AI approval processes, identity and access management, AI logging and monitoring capabilities, data classification and labeling, data protection mechanisms, and AI model data integrity checks and evaluation (a minimal sketch of one such control follows this list).
Educate the workforce: Revise current training programs to include acceptable, ethical, and proper use of AI within the organization, and ensure training includes a comprehension assessment. If applicable, ensure developers, data scientists, and similar roles are also educated on general security best practices for the use of AI (e.g., OWASP GenAI Security Project, OWASP Top 10 for LLM Applications, OWASP AI Security Guidance, etc.).
GRC: Ensure the AI programs include components covering all essential pillars of governance, risk, and compliance.
Document the programs: Maintain detailed documentation of all GRC objectives, decisions, monitoring, and assessment processes, and publish this to the workforce.
Test the AI programs: Select a single department or location to pilot the necessary AI rules and objectives before an organization-wide rollout.
Monitor the programs: Develop and implement ongoing program monitoring by conducting formal AI risk assessments. Update documentation and controls as needed, or when major changes occur within the IT infrastructure.
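To make the "implement controls" step more concrete, the sketch below pairs an approved-AI-tool allowlist with a basic pre-prompt check and usage logging, as such a check might run in an internal gateway in front of external AI services. It is a minimal illustration only: the tool names, regex patterns, log destination, and the review_ai_request function are hypothetical assumptions, not a recommended or complete data-protection implementation.

```python
# Illustrative sketch of two controls: an approved-AI-tool allowlist and a
# rough pre-prompt scan for obviously sensitive data, with logging of every
# request. All names and patterns here are hypothetical examples.
import logging
import re

logging.basicConfig(filename="ai_usage.log", level=logging.INFO)

# Approved AI tools, formalized as part of the acceptable software inventory,
# each tied to a documented business justification.
APPROVED_AI_TOOLS = {
    "copilot-enterprise": "software development assistance",
    "internal-llm": "internal knowledge-base search",
}

# Very rough patterns for data that should never leave the organization.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def review_ai_request(user: str, tool: str, prompt: str) -> bool:
    """Log the request and allow it only if both checks pass."""
    if tool not in APPROVED_AI_TOOLS:
        logging.warning("%s attempted unapproved AI tool: %s", user, tool)
        return False
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    if findings:
        logging.warning("%s prompt to %s blocked (%s)",
                        user, tool, ", ".join(findings))
        return False
    logging.info("%s used %s (%s)", user, tool, APPROVED_AI_TOOLS[tool])
    return True


if __name__ == "__main__":
    # Example: a prompt containing an SSN-like string is blocked and logged.
    allowed = review_ai_request("jdoe", "copilot-enterprise",
                                "Summarize account 123-45-6789 for the call")
    print("allowed" if allowed else "blocked")
```

In practice, organizations typically enforce these checks with dedicated data loss prevention, proxy, and monitoring tooling rather than custom scripts; the point of the sketch is simply that an approved-tool inventory and data classification rules can be applied automatically before a prompt ever reaches an external model.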
Avertium’s Professional Services can help organizations create and implement an AI GRC program that advances you beyond one-off compliance audits, reduces shadow AI security risks, and moves you to a continuous, proactive approach.
AI Governance, Risk, and Compliance (GRC) Services: Avertium offers consultative GRC services that assess the status of responsible, ethical, and secure AI adoption within an organization. AI GRC consulting services include evaluating existing AI-related policies and providing a roadmap of recommendations for creating a comprehensive AI GRC program, including proactively managing and mitigating shadow AI risks.
Policy and Procedure Development: Avertium will assist the organization in revising existing policies and procedures or creating new documents to help build the foundation for an AI GRC Program.
NIST AI RMF Assessment: For organizations that have chosen to align with the voluntary NIST AI RMF framework, Avertium can apply a structured approach to identifying, assessing, managing, and monitoring AI-related risks throughout the AI system lifecycle, following the four core functions of Govern, Map, Measure, and Manage. This assessment is intended for organizations that develop, deploy, or currently use AI systems, and helps keep your organization’s policies aligned with the latest compliance mandates.
Microsoft Purview Services: Avertium’s Microsoft Purview services help organizations leverage Purview to monitor AI usage, protect sensitive data, and maintain compliance with evolving standards, which is essential for mitigating the risks associated with unauthorized or unsupervised AI adoption. From introductory workshops to Purview solution deep dives, Avertium helps organizations use Microsoft Purview to proactively build resilient governance structures and maintain trust with stakeholders while enabling responsible innovation.