The shadow AI problem
Unlike Slack or Microsoft 365, which are deployed through IT procurement, ChatGPT often enters organizations through individual use. Employees sign up with personal accounts, paste organizational data into prompts, and use the output in their work. This is shadow AI — AI tools used without organizational oversight, procurement review, or compliance documentation.
The compliance implications are significant. Every prompt containing personal information, client data, strategic documents, or internal communications constitutes a cross-border transfer of that data to US-based infrastructure operated by a US-incorporated company. Under Law 25, each of these transfers should be documented in a Transfer Impact Assessment (TIA). In practice, organizations can't assess what they don't know about.
The training data question
OpenAI's data practices have evolved and vary by product tier. For free and Plus consumer accounts, OpenAI has historically used conversation data to improve its models unless the user opts out through data controls — meaning data entered into ChatGPT could influence future model outputs. For Team, Enterprise, and API accounts, OpenAI states that customer data is not used for model training.
This distinction matters enormously for compliance. If an employee pastes client personal information into a consumer ChatGPT account, that data may be incorporated into OpenAI's models and become irrecoverable — it cannot be deleted in any meaningful sense because it has been absorbed into model weights. This goes beyond a data transfer problem into a data retention and deletion problem that most privacy frameworks are not designed to address.
Organizations should verify which OpenAI product tier is in use and understand the specific data handling terms for that tier. The difference between consumer and enterprise accounts is not just a pricing question — it is a fundamental compliance boundary.
What data flows through ChatGPT
The data risk from ChatGPT is uniquely unpredictable because users determine what they input. Unlike a CRM that holds defined data categories, ChatGPT can receive anything an employee decides to paste or type. In practice, organizations have found employees submitting: client emails for summarization, internal strategy documents for analysis, code containing proprietary logic, HR documents for rewriting, financial data for interpretation, and personal information of clients, employees, and partners.
This makes ChatGPT exposure inherently harder to assess than a tool with a defined data scope. The TIA cannot specify which data categories are processed because any category might be.
ChatGPT Enterprise and API: better but not sovereign
OpenAI's Enterprise tier and API access provide meaningful improvements: data is not used for training, conversations are encrypted in transit and at rest, and OpenAI maintains SOC 2 Type II attestation. These are genuine safeguards that make enterprise deployment defensible.
But the structural jurisdiction issue remains. OpenAI is US-incorporated. Data is processed on US infrastructure (primarily Azure, which means the infrastructure is also under Microsoft's US jurisdiction). There is no Canadian data residency option. A CLOUD Act order could compel OpenAI to produce conversation logs, uploaded files, and any retained data.
The compliance position
Organizations have three practical options:
Formalize and assess: Deploy ChatGPT Enterprise or API access through official procurement, complete a TIA, establish usage policies that restrict input of personal information and sensitive data, and train employees on acceptable use. This is the defensible path for organizations that want to use AI.
Block and restrict: Prohibit ChatGPT use on organizational networks and devices. This is simple in policy but difficult in practice — employees can use personal devices, and enforcement is nearly impossible without invasive monitoring.
Ignore and hope: This is what most organizations are currently doing. It is the least defensible position. When a regulator or auditor asks about AI tool usage and data transfers, having no policy, no assessment, and no documentation is a clear compliance failure.
The minimum defensible position is a documented policy acknowledging AI tool usage, a TIA covering the organizational ChatGPT deployment, and training on what data categories should not be entered into any AI tool.
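The "what not to enter" training can be reinforced with a lightweight screening step that flags obvious personal information before a prompt leaves the organization. A minimal sketch in Python — the category names, regex patterns, and `screen_prompt` helper are all illustrative assumptions, not a substitute for a proper DLP tool:

```python
import re

# Illustrative patterns only. A production deployment would use a dedicated
# DLP or redaction service; these regexes catch only the most obvious cases.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "sin": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Canadian SIN format
    "phone": re.compile(r"\b(?:\+1[- ]?)?\(?\d{3}\)?[- ]?\d{3}[- ]?\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the PII categories detected in a draft prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# A prompt containing a client email address and a SIN would be flagged
# for both categories before being sent to any AI tool.
flagged = screen_prompt("Contact Marie at marie@example.com, SIN 123-456-789")
```

A screen like this cannot catch free-text personal information (names, health details, client narratives), which is exactly why it complements rather than replaces employee training.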
OpenAI is US-incorporated and subject to the CLOUD Act. BC public bodies using OpenAI services with personal information should exercise particular caution — AI processing can involve data in ways that are difficult to scope, and OpenAI's data handling practices have been subject to regulatory scrutiny. A thorough FIPPA privacy impact assessment (PIA) is essential.