AI Governance · Two Separate Problems

Your people are using AI tools right now, and you probably do not know how. That is one problem. Autonomous agents making decisions is another. I have watched companies try to solve both with the same policy. It never works. This page gives you a framework for each.

The critical distinction: AI usage governance covers how your people use AI tools. Agent governance covers how autonomous agents operate. I have seen companies write a single "AI policy" that tries to cover both. Six months later, shadow AI is everywhere and nobody knows what the agents are doing. Conflating these leads to frameworks that solve neither problem.

AI Usage Governance

How your people use AI tools in their daily work. Someone pasted customer data into ChatGPT today. Your leadership team has no idea. Your Head of People or COO typically owns this layer. You set the expectation.

The Reality of Shadow AI

Your employees are using ChatGPT, Claude, Gemini, and Copilot right now. Copying code. Pasting customer conversations. This is not a broken culture. It is people trying to do better work with the tools available. But your compliance team cannot govern what it cannot see.

Data Leakage Risk

Once data enters a public AI tool, it is no longer under your control. It may be used to train future models. It could surface to competitors. The question is not whether your organization has leaked data. It is how much.

The Productivity Paradox

AI tools increase productivity. Without governance, they increase risk faster. Your best people use these tools most and have access to your most sensitive information.

The AI Usage Policy (One Page, Four Questions)

Answer these four questions, document the answers, and you have a policy:

1. What Tools Are Approved?

You cannot ban AI usage. You can clarify which tools are sanctioned, which are permitted with restrictions, and which are prohibited. Example: ChatGPT approved for brainstorming. Claude approved for code review. No external tools for customer data.

2. What Data Can/Cannot Go In?

Classify your data. Public information can go into any tool. Internal strategy should not. Customer data, financial information, and source code require tool-specific review. Employee personal data: never.

3. What Outputs Require Human Review?

AI outputs are not facts. They are approximations. Code requires code review. Marketing copy requires human review. Technical documentation requires fact-checking. Define what needs verification before it ships.

4. Who Do You Ask When Unsure?

Policies create edge cases. Your security team is the escalation point. Make it easy for employees to ask before they make a mistake, not after.
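For teams that want to enforce the policy rather than just publish it, the four answers can be encoded as a simple policy-as-code check, for example in a browser plugin or data loss prevention hook. A minimal sketch in Python; the tool names, data classes, and rules below are illustrative assumptions, not recommendations.

```python
# Policy-as-code sketch for the four questions above.
# Tool names, data classes, and contact address are illustrative assumptions.

APPROVED_TOOLS = {
    # tool -> data classes it may receive
    "chatgpt": {"public"},                    # approved for brainstorming
    "claude": {"public", "source_code"},      # approved for code review
}

NEVER_ALLOWED = {"employee_personal_data"}    # never goes into any external tool

ESCALATION_CONTACT = "security@example.com"   # question 4: who to ask when unsure


def check_usage(tool: str, data_class: str) -> str:
    """Return 'allowed', 'blocked', or 'escalate' for a proposed tool/data pairing."""
    if data_class in NEVER_ALLOWED:
        return "blocked"
    if tool not in APPROVED_TOOLS:
        # Unsanctioned tool: ask before use, not after (question 4)
        return "escalate"
    if data_class in APPROVED_TOOLS[tool]:
        return "allowed"
    return "escalate"
```

The useful property is the default: anything not explicitly approved routes to escalation rather than silently passing.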

AI Usage Maturity Model

Maturity is not about company size. A 50-person company can be at Integrated. A 5,000-person company can be at Ad Hoc. Maturity depends on decisions you make, not headcount you have.

Stage: Ad Hoc
Reality: Individual employees using AI tools on their own. No coordination. Invisible to leadership.
What you need:
  • AI usage policy (one page)
  • Tool inventory (what are people actually using?)
  • Data classification framework

Stage: Sanctioned
Reality: Company has approved specific AI tools. Teams are using them consistently. Some governance exists.
What you need:
  • Expanded policy (tool-by-tool guidance)
  • Output review standards
  • Usage tracking and audit logs

Stage: Integrated
Reality: AI tools are embedded in core workflows. Decisions are made faster. Productivity has increased.
What you need:
  • Dependency risk assessment (what breaks if tools go down?)
  • Fallback procedures
  • Skills taxonomy (what skills does AI replace vs. augment?)

Stage: AI-First
Reality: Every process starts with the AI question. Work architecture is designed around AI-human collaboration from the beginning.
What you need:
  • Work architecture map (how is work distributed between AI and humans?)
  • Human-AI collaboration standards
  • Continuous governance and evolution process

AI Agent Governance

What happens when AI makes decisions on its own. A fundamentally different governance problem. I have sat in rooms where a CEO found out an AI agent was scoring job candidates and nobody in leadership had approved it. Your technology leader typically owns this layer. You and your COO set risk tolerance and escalation boundaries.

Tool vs. Automation vs. Agent

The distinctions matter. They require different governance approaches:

A Tool

Something a human uses. GitHub Copilot suggests code. The developer decides whether to accept, reject, or modify. The human has full agency.

An Automation

Executes predefined logic without human input. If (condition), then (action). A workflow automation that closes completed Jira tickets. A process that moves emails to folders based on keywords. Zero agency.

An Agent

Makes decisions with meaningful autonomy. An agent that schedules meetings evaluates calendars, duration, and project timelines to find optimal times. No predefined rules for every scenario. That is the difference.

Agent Risk Classification

Classify agents by their impact on people. Impact determines governance intensity.

Tier 1: Operational
Definition: Low people impact. Operational efficiency only.
Examples: Email classification. Log analysis. Report generation. Meeting scheduling.
Governance: Standard IT oversight. Quarterly performance review.

Tier 2: People-Adjacent
Definition: Moderate people impact. Decisions affect workflow but not employment.
Examples: Project resource allocation. Customer prioritization. Help desk triage.
Governance: Monthly output review. Human override mandatory. Semi-annual bias check. Audit trail required.

Tier 3: People-Critical
Definition: High people impact. Decisions affect employment, compensation, or evaluation.
Examples: Hiring scoring. Performance evaluation. Compensation adjustment. Promotion ranking.
Governance: Full lifecycle management. Quarterly bias audits. Human-in-the-loop mandatory. Regulatory compliance required.
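The tier table can double as a machine-readable registry, so every deployed agent carries its governance requirements with it instead of leaving them in a document nobody opens. A sketch, assuming a simple in-code registry; the field names and cadences mirror the table above but are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceTier:
    name: str
    review_cadence_months: int   # how often output/performance is reviewed
    human_override: bool         # must a human be able to override decisions?
    bias_check_months: int       # 0 = no scheduled bias check
    audit_trail: bool

TIERS = {
    1: GovernanceTier("Operational", review_cadence_months=3,
                      human_override=False, bias_check_months=0, audit_trail=False),
    2: GovernanceTier("People-Adjacent", review_cadence_months=1,
                      human_override=True, bias_check_months=6, audit_trail=True),
    3: GovernanceTier("People-Critical", review_cadence_months=3,
                      human_override=True, bias_check_months=3, audit_trail=True),
}

def requirements(tier: int) -> GovernanceTier:
    """Look up governance requirements; unknown tiers default to the strictest."""
    return TIERS.get(tier, TIERS[3])
```

Defaulting unknown tiers to the strictest treatment means a misclassified agent gets over-governed, never under-governed.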

Agent Lifecycle Management

Not a one-time setup. Continuous governance across six stages.

Stage 1: Scoping

Define what the agent will do. Classify its risk tier. Identify stakeholders. Document assumptions about data quality, fairness, and safety. Establish baseline metrics.

Stage 2: Onboarding

Test the agent against real-world scenarios. Verify it meets safety and fairness criteria. Build feedback loops. Train users on the agent. Establish human oversight protocols.

Stage 3: Monitoring

Track agent performance and decisions. Watch for drift (is it behaving differently than expected?). Monitor for bias. Collect user feedback. Maintain audit logs.

Stage 4: Performance Review

Regular audits of agent decisions. Are fairness assumptions holding? Is accuracy consistent across demographic groups? Are edge cases being handled correctly? Escalate problems.

Stage 5: Escalation / Override

Humans must be able to override agent decisions. Escalation paths must be clear. When an agent makes a decision that could harm someone, escalation should be automatic, not optional.

Stage 6: Evolution / Retirement

As new data comes in, agents may need to be retrained. Business requirements change. Agents may become obsolete. Have a plan to retire agents safely. Archive historical decisions.
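The six stages above can be tracked as an explicit state machine, so no agent skips onboarding or lingers past retirement. A minimal sketch; the transition rules are one reasonable reading of the lifecycle, not a prescribed standard.

```python
# Illustrative lifecycle state machine for the six stages.
# Transition rules are an assumption, not a prescribed standard.
LIFECYCLE = {
    "scoping": {"onboarding"},
    "onboarding": {"monitoring"},
    "monitoring": {"performance_review", "escalation"},
    "performance_review": {"monitoring", "evolution", "retired"},
    "escalation": {"monitoring", "retired"},
    "evolution": {"onboarding"},   # retraining re-enters testing, not production
    "retired": set(),              # terminal: archive decisions, no way back
}

def advance(current: str, target: str) -> str:
    """Move an agent to the next stage, rejecting illegal transitions."""
    if target not in LIFECYCLE.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

The point of the model is the rejected transitions: an agent cannot go from scoping straight to monitoring, and an evolved agent must pass through onboarding again.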

Bias Monitoring Framework

Agents perpetuate bias. This is documented, not theoretical. You need a monitoring framework.

Tier 1
Audit cadence: Annual
Fairness testing: Performance by user role
Audit logging: Decision metadata only
Auditor: IT/Operations
Remediation SLA: 6 months

Tier 2
Audit cadence: Semi-annual
Fairness testing: Performance by demographic group (if available)
Audit logging: Full decision context logged
Auditor: Internal compliance
Remediation SLA: 3 months

Tier 3
Audit cadence: Quarterly
Fairness testing: Fairness testing across protected classes. Disparate impact analysis.
Audit logging: Full audit trail. Every decision. Every override.
Auditor: External auditor (legal/compliance required)
Remediation SLA: 30 days
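One concrete test behind the Tier 3 disparate impact analysis is the selection-rate comparison from the US EEOC's four-fifths rule: flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch; the threshold is the conventional 0.8, and the group labels are illustrative.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected: bool) -> {group: selection rate}."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the top
    group's rate (the four-fifths rule from US employment-discrimination
    analysis). Returns {group: flagged?}."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}
```

A flagged group is a trigger for investigation within the remediation SLA, not automatic proof of bias; small samples and legitimate job-related factors still need human review.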

Regulatory Awareness

Agent governance is not optional. It is regulated. Here is what you need to know.

EU AI Act

High-risk AI systems require risk assessment, transparency, and human oversight. Hiring and HR systems are explicitly high-risk. Non-compliance carries significant fines.

GDPR Article 22

Individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Humans must be able to review and override agent decisions.

NYC Local Law 144

Employment-related AI systems must be audited for bias. Employers must notify candidates when AI is used in hiring. Audit results must be published.

NIST AI Risk Management Framework & ISO/IEC 42001

Voluntary frameworks that set standards for AI risk management, transparency, and accountability. Increasingly expected by investors and customers.

Your people using AI tools requires a policy. Your agents require a lifecycle. The frameworks are different. The stakes are different. Treat them differently.