You Can’t Delegate AI Transformation
- Posted by Lucy Dinu
- On 12/02/2026
BCG published a report in September 2025 finding that leaders who personally experiment with AI tools achieve outcomes up to 12 times better than those who delegate exploration to their teams.
12 times!
The technology is the same across organizations; the difference lies in how leaders interpret it, guide it, and integrate it into the organization.
Most executives still approach AI as observers: they attend demos, approve budgets, mandate training programs, and expect value to emerge downstream. Six months later, when little has changed, confusion sets in. The tools work for other companies, so why not theirs?
The difference in the companies that achieve 12 times better outcomes lies in who is willing to struggle with AI. And most leaders are not.
The Training Reflex and Why It Fails
Organizations respond to AI the same way they respond to every technology: they buy tools, run pilots, train employees, and encourage experimentation.
On paper, this makes perfect sense. Social media reinforces the idea that AI offers immediate and effortless productivity gains, vendor demonstrations show seamless workflows with pristine data, and AI is evolving so quickly that leaders want teams to stay current, test possibilities and surface opportunities.
The implication: value is simply waiting to be unlocked.
Reality, however, looks different.
Employees return from training to fragmented data, undocumented processes, unclear priorities, workflows that exist more in people’s heads than in systems, and success metrics that are poorly defined, if defined at all. In this environment, experimentation happens in a vacuum, disconnected from any meaningful outcome.
At the same time, employees are asked to explore tools that are fundamentally reshaping or eliminating their roles – often outside of core working hours – while still meeting existing performance expectations.
It is no surprise that most experimentation revolves around low-risk applications such as drafting emails, summarizing documents, and polishing language for performance reviews.
Meanwhile, leadership grows frustrated. Tools were provided and training was delivered. Why isn’t anything changing? Why isn’t innovation happening?
Well, because leadership has not clarified what problem the organization is actually trying to solve with AI.
AI as an Innovation Capability
The organizations that see real value from AI don’t necessarily try harder or have secret frameworks. They just lead differently.
They recognize that AI adoption is not primarily about tools, but about how leaders make sense of emerging capabilities and translate them into organizational direction.
There are four leadership behaviors that consistently distinguish progress from stagnation.
1. Leaders develop a realistic sense of what is possible
Executives who engage directly with AI tools – spending entire weekends exploring the ins and outs – quickly learn the difference between controlled demonstrations and operational reality. They experience the real constraints of data quality, integration complexity, risk tolerance, and human judgement.
This matters because decisions made without this grounding are based on abstractions rather than real context. Many leaders approve solutions that appear impressive but fail to align with how work is actually done in their organizations.
Hands-on experience builds discernment, which prevents organizations from investing in solutions that cannot scale within their real conditions.
2. Leaders stop funding solutions to problems they don’t have
Decades of research on digital and organizational transformation point to the same conclusion: how leaders interpret technology matters more than the technology itself.
Interpretation shaped by personal experience differs fundamentally from interpretation shaped by a presentation. A presentation focuses on the technology’s capabilities and tries to fit the technology into the organization.
Leaders who struggle with the tools themselves – who encounter the limitations, the errors, the character limits, the unexpected behavior – develop a sharper sense for which problems are worth solving, which require foundational work first, and which should not be pursued at all.
Some call this skepticism, but it is a capability: it protects organizational focus and resources, and it builds trust with teams who see their time directed toward work that actually matters.
3. Leaders create shared learning
When leaders explore the tools themselves, they talk differently: they ask better questions, they admit uncertainty, they stop taking every output at face value, and they change the quality of conversation inside the organization.
They start talking about their failures, their learning, their curiosities, and the things that surprised them.
This vulnerability and realism – rather than abstract enthusiasm – make employees more willing to share their own challenges and ideas instead of hiding them. Over time, learning becomes collective and faster, and everyone gets on the same page.
Just as important: leaders who struggle to articulate what they need from an LLM often struggle to articulate expectations to their teams – and AI makes this issue visible.
This clarity, or lack of it, shapes how experimentation scales across the organization.
4. Leaders understand the human cost of change
AI adoption places a significant cognitive and emotional demand on people. It challenges competence, identity, and perceived value. For many, work is not just income but a core part of their identity. Asking employees to experiment with a technology that threatens that identity – especially without visible leadership participation – creates fear and uncertainty.
But leaders who have experienced the learning curve firsthand and understand its implications are better equipped to set realistic expectations, protect psychological safety, and guide change without eroding trust. They understand that enthusiasm alone does not sustain experimentation. Trust, support, and credibility do.
This leadership capability can only be developed through direct exposure to unfamiliar situations that demand judgement, empathy, adaptation, and accountability. It cannot be outsourced.
What Leadership Engagement Looks Like in Practice
In practice, effective leadership engagement with AI has little to do with the technology or its performance, and everything to do with discipline and behavior change.
Some organizations have started running regular leadership experimentation sessions, where executives block time to work on real problems using the same tools their teams are expected to adopt. They document what worked, what didn’t, which questions they had to ask or reformulate, what surprised them, and what gave them the biggest headaches.
Most importantly, they share these learnings with their teams.
The goal here is not technical mastery. The CEO or CFO does not need to understand model architecture or GPU power, but they do need to understand where tools add value, where they introduce risk, and that the quality of the input shapes the quality of the output.
A Non-Negotiable Starting Point
Block 90 minutes this week. Select one real problem you are accountable for. Use the AI tool your organization is rolling out. Work on your task.
You will encounter friction. That is the point. That is the learning.
Then ask your leadership team what they learned from doing the same experiment. If they cannot answer, the reason your AI initiatives are stalling will be clear.
AI does not replace leadership, but it exposes whether leadership is present.
The question you should be asking right now is: “How do we develop leaders capable of navigating continuous technological change?”
