AI adoption is no longer a future-state discussion for businesses. It is already happening. Informally, inconsistently, and largely outside organisational control.
This is not a technology issue. It is an operating model, governance, and risk management issue.
The most material risk facing organisations today is not that AI will replace people, but that people are already using generative AI in ways the organisation hasn’t designed, approved, or governed.
Quietly. Individually. And often with good intentions.
Do Organisations Actually Learn from Experience?
Every board has heard the phrase: “fail forward, learn fast.” But learning at organisational scale does not happen through intent or encouragement. It happens through systems and process design.
The uncomfortable truth is this: Organisations rarely learn from experience fast enough.
They repeat what feels safe, scalable, and familiar. Until the cost is no longer tolerable.
Generative AI exposes this weakness brutally.
Instead of organisations deliberately redesigning work to embed AI responsibly, individuals do what they’ve always done when systems lag behind reality. They improvise.
How AI Is Entering the Business, Without Permission
AI is not entering through enterprise strategy or approved platforms. It is entering through:
Free browser-based tools
Personal accounts
Work done under time pressure
This means organisations are “adopting” AI, but without visibility, consistency, or control.
From an executive perspective, this is no different to unmanaged shadow IT. Except faster, cheaper, and far harder to detect. And employees across the business are already doing it.
What Employees Are Already Doing Today
This is happening now, whether policies exist or not.
Pasting client emails into free AI tools to draft responses
Using personal AI accounts to summarise contracts and proposals
Asking generative AI to rewrite financial models or assumptions
Drafting HR communications and performance feedback with AI
Uploading internal documents to “see what AI suggests”
To the individual, this feels efficient and harmless.
To the organisation, it creates cumulative exposure and large-scale risk.
The Business Impact Executives Must Understand
1. Data Is Leaving the Organisation. By Design, Not Malice.
Free generative AI tools operate entirely outside corporate firewalls, policies, and monitoring.
Once data is entered:
The organisation loses visibility
Control over downstream usage is gone
Auditability disappears
The provider may retain the data or use it to train future models
This affects:
Client confidentiality
Intellectual property
Commercially sensitive information
Most importantly, boards should note there is often no intent to misuse data, only urgency to get work done. The end result, however, remains the same: negligence. Intent does not mitigate exposure.
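Part of this exposure can be reduced mechanically, by screening text before it leaves the organisation. The sketch below illustrates the idea, assuming an organisation maintains its own list of prohibited patterns; the pattern names and regexes here are illustrative examples, not a production DLP ruleset.

```python
import re

# Illustrative patterns an organisation might classify as prohibited
# in external AI prompts. A real deployment would use a maintained
# data-loss-prevention ruleset, not this hypothetical list.
PROHIBITED_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "sa_id_number": re.compile(r"\b\d{13}\b"),          # SA ID format
    "card_number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def screen_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings). The prompt is blocked if any
    prohibited pattern appears anywhere in the text."""
    findings = [name for name, pattern in PROHIBITED_PATTERNS.items()
                if pattern.search(text)]
    return (not findings, findings)

allowed, findings = screen_prompt(
    "Please draft a reply to jane.doe@client.com about the renewal."
)
# Blocked: the text contains a client email address.
```

A gate like this does not replace governance, but it turns an invisible leak into a visible, auditable event.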

2. Regulatory and Contractual Risk Is Created Accidentally
Employees do not need to act irresponsibly to create legal risk. Uploading personal or sensitive information into external AI tools can constitute:
Unauthorised data processing
Cross-border data transfers
Breach of POPIA, GDPR, or sector-specific regulations
Violation of client and supplier agreements
From a governance perspective, these risks sit not with the individual, but with the organisation and its leadership.
3. Decision-Making Quietly Degrades
At scale, unmanaged AI use leads to invisible variability.
Different teams use different tools, receive different answers, and apply different levels of trust.
The result is fragmented decision-making that is hard to explain, impossible to reproduce, and difficult to defend. This undermines consistency, confidence, and accountability. Core executive concerns.
4. Trust Is Lost Before AI Is Even Formally Introduced
When unmanaged AI produces unexpected outcomes, trust evaporates quickly.
Once lost, trust is:
Slow to rebuild
Expensive to repair
Often used as justification to halt innovation entirely
This is rarely caused by the technology itself, but by a lack of upfront design and governance.
Learning in the Flow of Work Requires Work to Be Understood
“Learning in the flow of work” has become a familiar phrase, but it is often misunderstood.
You cannot embed AI into workflows you do not understand.
Real AI literacy begins with:
Recognising what data is captured, where, and for what reason
Observing how that data actually flows through daily work
Identifying where decisions are made
Understanding what information is genuinely required
Designing where AI may assist and where it must not act
This is an engineering discipline, not an experimentation exercise.
In previous transformation work, significant operational savings were achieved not by new technology, but by architecting processes intentionally. AI requires the same discipline, only faster.
AI Should Live Inside the Workflow, Not Around It
When AI is embedded properly:
Uses approved data sources
Operates within clear permission boundaries
Supports consistent, explainable decisions
Augments people without exposing the organisation
When it is not, it leaks through browser tabs and personal accounts. Unseen, unmanaged, and increasingly relied upon.
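The boundary conditions above can be made concrete in code. This is a minimal sketch of a policy-gated AI step; the policy object, mode names, and audit log are illustrative assumptions, not a specific vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class AIPolicy:
    """Illustrative policy: which data sources are approved, and
    whether AI may act autonomously or only recommend."""
    approved_sources: set[str]
    may_act: bool = False  # default: AI recommends, a human decides

@dataclass
class AuditLog:
    """Append-only record so every AI-assisted step is explainable."""
    entries: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        self.entries.append(event)

def assisted_step(source: str, policy: AIPolicy, log: AuditLog) -> str:
    """Gate one AI-assisted step: unapproved sources are blocked;
    approved ones run in 'act' or 'recommend' mode per policy."""
    if source not in policy.approved_sources:
        log.record(f"BLOCKED: '{source}' is not an approved data source")
        return "blocked"
    mode = "act" if policy.may_act else "recommend"
    log.record(f"ALLOWED: '{source}' used in '{mode}' mode")
    return mode

policy = AIPolicy(approved_sources={"crm_export"})
log = AuditLog()
assisted_step("crm_export", policy, log)      # runs in 'recommend' mode
assisted_step("personal_upload", policy, log) # blocked and logged
```

The point of the design is that permission boundaries and auditability are properties of the workflow itself, not of individual discretion.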
The Hardest Risk: Human Bias
Even with governance, AI challenges another organisational reality: confirmation bias.
People trust what reinforces their experience. They resist what contradicts it.
When left unmanaged, AI is ignored when inconvenient, overridden without reflection, or quietly sidelined.
AI most often fails not through error, but through gradual disengagement.
A Practical Executive-Level Checklist
If a business’s executive team cannot confidently answer these questions, AI risk already exists:
Visibility & Control
Do we know where generative AI is currently being used?
Are employees using AI tools outside approved environments?
Data & Compliance
What data is explicitly prohibited from external AI tools?
Can we evidence due diligence if challenged by a regulator or client?
Operating Model
Is AI embedded in workflows, or used ad hoc?
Can AI-influenced decisions be explained and reproduced?
Governance & Trust
Have we defined how AI is allowed to assist, recommend, or act?
Are accountability and decision rights clear?
Closing Perspective
Organisations do not fail because of AI. They fail because they allow AI to enter the business without intention, design, or governance.
AI literacy is not just training people to write better prompts. It is not purchasing tools. And it is not experimentation without boundaries. It is the deliberate redesign of work, controls, and trust so AI accelerates value creation without eroding compliance, confidence, or reputation.
If nothing changes, nothing changes. Except for the risk.
Written by Craig McKenzie and Devaan Parbhoo

