Your Employees Are Using AI Anyway. Now What?
ChatGPT is already in your organization—whether you know it or not. Here's how to set guardrails without killing productivity.

The Elephant in the Browser Tab
Here's something every manager should know: your employees are almost certainly using ChatGPT, Claude, or similar AI tools. They're using them to draft emails, summarize documents, write reports, and answer questions.
This is happening whether or not you have an official AI policy. Whether or not you've approved any AI tools. Whether or not you even know about it.
The question isn't whether to allow AI—that ship has sailed. The question is how to manage it intelligently.
The Real Risks
Let's be clear about what you're actually worried about:
Data Leakage
When someone pastes confidential information into ChatGPT, that data goes to OpenAI's servers. Depending on your settings and their policies, it might be used to train future models. This is a legitimate concern for sensitive investment data, client information, or proprietary strategies.
Inaccuracy
AI confidently produces wrong answers. If someone uses AI to generate content and doesn't verify it, you might end up with errors in reports, correspondence, or analysis. This is embarrassing at best, harmful at worst.
Overreliance
If people use AI as a crutch instead of developing their own skills, you might end up with a team that can't function when the AI is wrong or unavailable.
Compliance
Depending on your regulatory environment, there may be specific rules about automated systems, record-keeping, or data handling that AI use could implicate.
What Doesn't Work
The Total Ban
Some organizations try to ban AI entirely. This rarely works. The tools are too accessible, too useful, and too tempting. All a ban does is push usage underground, where you have zero visibility and zero ability to set guardrails.
Ignoring It
Pretending AI isn't being used doesn't make the risks go away. It just means problems will surface eventually, and you won't have policies or processes to address them.
Surveillance
Monitoring every browser tab and keystroke is invasive, damages trust, and is probably less effective than you think anyway. People will find workarounds.
What Does Work
Set Clear, Reasonable Policies
Create an AI use policy that's:
- Specific: what's allowed, what's not, and why
- Practical: rules that make sense and can actually be followed
- Communicated: everyone knows the policy exists and understands it
Example policy elements:
- You may use AI for drafting and brainstorming.
- You may not input confidential client data, material non-public information, or proprietary investment strategies.
- All AI-generated content must be reviewed and verified before use.
- Disclose AI assistance when relevant (e.g., in formal reports).
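For teams with some engineering support, the "no confidential data" rule can be partially automated. The sketch below is a minimal, illustrative prompt pre-filter that flags obviously sensitive text before it reaches an external AI tool; the `check_prompt` name and the patterns are assumptions for illustration, not a production data-loss-prevention system.

```python
import re

# Illustrative patterns only; a real deployment would tune these to the
# organization's own data (client IDs, deal code names, document labels, etc.).
BLOCKED_PATTERNS = [
    (r"\b(?:confidential|internal only|do not distribute)\b", "confidentiality marking"),
    (r"\b\d{3}-\d{2}-\d{4}\b", "possible SSN"),
]

def check_prompt(prompt: str) -> list[str]:
    """Return the policy flags raised by a prompt (empty list = clear to send)."""
    hits = []
    for pattern, label in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            hits.append(label)
    return hits
```

A filter like this is a safety net, not a substitute for training and policy; simple pattern matching will miss plenty of sensitive content, which is why human review remains part of the rules above.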
Provide Approved Tools
If you give people tools that address their needs within acceptable guardrails, they're less likely to go outside those boundaries.
Options include:
- Enterprise AI platforms: Microsoft Copilot, Google Gemini for Workspace (formerly Duet AI), and similar. These offer better data protection and administrative controls than consumer versions.
- Specialized tools: AI systems designed for specific use cases (document analysis, research, etc.) and configured to protect your data.
- Self-hosted solutions: running AI models on your own infrastructure, so data never leaves your control.
The right choice depends on your sensitivity requirements and budget.
Train Your Team
People need to know:
- What AI is good at and what it's bad at
- How to write effective prompts
- How to verify AI output
- What the risks are and how to avoid them
This isn't a one-time training. AI is evolving fast, and your team's understanding should evolve with it.
Create Feedback Loops
You want to know how AI is being used—not to punish people, but to learn and improve. Create ways for people to share:
- What's working well
- What they wish they could do but can't under current policies
- Mistakes or near-misses
Use this feedback to refine your approach.
The Opportunity Cost
While you're figuring out guardrails, consider what you're leaving on the table. AI can meaningfully improve productivity for routine tasks. Every month you delay is a month of manual work that could have been automated.
The goal is to find a balance: protect against real risks while enabling legitimate productivity gains.
A Practical Approach
- Week 1 — Survey your team anonymously. Find out how AI is actually being used today.
- Week 2 — Draft a policy based on your findings and your risk tolerance. Get legal and compliance input if relevant.
- Week 3 — Communicate the policy. Not in a scary way—frame it as "here's how to use AI effectively and safely."
- Week 4 — Evaluate approved tools. If there are use cases you want to enable, find solutions that work within your guardrails.
- Ongoing — Train, refine, and iterate based on experience.
Bottom Line
Your employees are using AI. The question is whether they're using it safely and effectively. Get ahead of this by setting reasonable policies, providing good tools, and training your team to use AI as the productivity tool it can be—while avoiding the real risks.
Want to discuss this further?
We're always happy to talk through challenges like these. No pitch—just a conversation.
Get in touch