Corporate structure
OpenAI's structure is uniquely complex. The OpenAI Foundation (nonprofit) controls OpenAI Group PBC (for-profit Delaware Public Benefit Corporation) through special voting rights. The Foundation appoints all PBC board members and can replace them at any time. A Safety and Security Committee remains under the Foundation's governance. This structure was approved by the Delaware and California Attorneys General in October 2025 after nearly a year of negotiations.
For sovereignty purposes, the relevant fact is simple: the operating entity is a US-incorporated corporation, and the nonprofit governance layer does not remove it from CLOUD Act jurisdiction.
[Diagram: Your prompts and data (conversations, files; unpredictable inputs) flow to OpenAI Group PBC (a Delaware PBC, ~27% owned by Microsoft), which is subject to US legal process (CLOUD Act, subpoenas) with full data access.]
The shadow AI problem
Unlike Slack or Microsoft 365, which are deployed through IT procurement, ChatGPT often enters organizations through individual use. Employees sign up with personal accounts, paste organizational data into prompts, and use the output in their work. This is shadow AI — AI tools used without organizational oversight, procurement review, or compliance documentation.
Every prompt containing personal information, client data, or internal documents constitutes a cross-border transfer to US-based infrastructure. Under Law 25, each transfer should be documented. In practice, organizations can't assess what they don't know about. This makes ChatGPT exposure inherently harder to assess than tools with defined data scopes — any data category might be input at any time.
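Because any data category might appear in any prompt, some organizations screen outbound prompts for obvious personal-information patterns before they leave the network. A minimal sketch of that idea follows; the names (`PATTERNS`, `personal_info_categories`) and regexes are illustrative assumptions, not any real DLP product or OpenAI feature:

```python
import re

# Illustrative patterns for personal information that commonly appears in
# prompts: email addresses, phone numbers, and Canadian Social Insurance
# Numbers. Real deployments would use far more robust detection.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "sin": re.compile(r"\b\d{3}[-\s]\d{3}[-\s]\d{3}\b"),
}

def personal_info_categories(prompt: str) -> set[str]:
    """Return the categories of personal information detected in a prompt."""
    return {name for name, rx in PATTERNS.items() if rx.search(prompt)}

# Hypothetical example prompt an employee might paste into ChatGPT:
print(sorted(personal_info_categories(
    "Summarize the complaint from jane.doe@example.com, 514-555-0199."
)))  # → ['email', 'phone']
```

A scan like this cannot make the transfer lawful, but it can turn "we can't assess what we don't know about" into at least a partial inventory of what employees are sending.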
The training data question — tier matters
OpenAI's data practices vary dramatically by product tier. This is the single most important distinction:
| Tier | Data Training | Canadian Residency | EKM (BYOK) | Retention Controls |
|------|---------------|--------------------|------------|--------------------|
| Free / Plus | May train models | No | No | No |
| Team | Not used | No | No | Limited |
| Enterprise | Not used | Available | Available | Full |
| Edu | Not used | Available | Limited | Full |
| API Platform | Not used | Available | Available | Zero retention |
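The tier distinctions above can be encoded as a simple lookup for internal compliance tooling. A minimal sketch, with assumed names (`TIER_POLICY`, `training_risk`) that are illustrative and not any real OpenAI API:

```python
# Encode the tier table as a lookup so tooling can flag tiers whose
# prompts may be used for model training. "EKM" = enterprise key management.
TIER_POLICY = {
    "free":       {"may_train": True,  "cdn_residency": False, "ekm": False},
    "plus":       {"may_train": True,  "cdn_residency": False, "ekm": False},
    "team":       {"may_train": False, "cdn_residency": False, "ekm": False},
    "enterprise": {"may_train": False, "cdn_residency": True,  "ekm": True},
    "edu":        {"may_train": False, "cdn_residency": True,  "ekm": False},  # EKM limited
    "api":        {"may_train": False, "cdn_residency": True,  "ekm": True},
}

def training_risk(tier: str) -> bool:
    """True if prompts on this tier may be incorporated into model training."""
    return TIER_POLICY[tier]["may_train"]

assert training_risk("plus")            # consumer tiers: training risk
assert not training_risk("enterprise")  # business tiers: no training
```

The point of the exercise: the single question "which tier are employees actually on?" determines most of the risk profile, so it is the first thing a TIA or PIA should pin down.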
If an employee pastes client personal information into a consumer ChatGPT account, that data may be incorporated into OpenAI's models and become irrecoverable — it cannot be deleted because it has been absorbed into model weights. This goes beyond a data transfer problem into a data retention and deletion problem that most privacy frameworks are not designed to address.
Canadian data residency — what it covers
Since October 2025, eligible ChatGPT Enterprise, Edu, and API customers can store customer content at rest in Canada. This covers conversations, uploaded files, custom GPTs, and image generation artifacts. However:
- Storage only, not inference: Inference residency (GPU processing) is currently available in the US and Europe only — not Canada. Data stored in Canada may be processed on US or EU infrastructure.
- System data excluded: Account data, billing, metadata, usage statistics, and logs are not covered by data residency and may be stored globally.
- Connectors and integrations: Data flowing through connectors may be limited to US residency regardless of your workspace configuration.
- New workspaces only: Data residency can only be configured for new workspaces — existing workspaces cannot be retroactively moved.
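The coverage rules above amount to a small classifier: customer content at rest is covered, while system data and inference processing are not. A hedged sketch, using illustrative category labels (not OpenAI's terminology):

```python
# Data categories covered by Canadian residency (customer content at rest),
# per the bullets above. Labels are illustrative shorthand.
COVERED_AT_REST = {"conversations", "uploaded_files", "custom_gpts", "image_artifacts"}

# Categories explicitly outside residency scope: system data and the
# inference (GPU processing) step, which may run on US or EU infrastructure.
NOT_COVERED = {"account_data", "billing", "metadata", "usage_logs", "inference_processing"}

def canadian_residency_covers(category: str) -> bool:
    """Return whether Canadian data residency applies to a data category."""
    if category in COVERED_AT_REST:
        return True
    if category in NOT_COVERED:
        return False
    raise ValueError(f"unknown category: {category}")

assert canadian_residency_covers("conversations")
assert not canadian_residency_covers("inference_processing")
```

The asymmetry is the key takeaway: a workspace can be fully "resident in Canada" for storage while every prompt is still processed, and its metadata stored, outside the country.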
Quebec Law 25
Quebec organizations must complete a Transfer Impact Assessment. The TIA should document: which OpenAI product tier is in use, whether Canadian data residency is configured, whether EKM is enabled, what policies govern employee prompts, and the shadow AI risk. The minimum defensible position is a documented AI usage policy, a TIA covering the organizational deployment, and training on what data categories should not be entered. Upper Harbour provides compliance-ready TIA documentation starting at $99.
Alberta POPA
Alberta public bodies must complete a PIA. The shadow AI problem is particularly acute in government: employees using consumer ChatGPT accounts to draft communications, summarize documents, or analyze data may be transferring citizen personal information without any organizational awareness. The PIA Research Tool generates these answers automatically.