
Bridging the Gap: Seven Leadership Practices for Successful AI Transformation | LSE AI Leadership Accelerator
- FourthRev Team
The promise and reality of AI in business
Record levels of investment show that business leaders recognise AI’s game-changing potential. Yet the gulf between promise and impact remains stubbornly wide:
- Only 1% of organisations describe their AI deployments as “mature.”
- 74% of companies still struggle to turn AI into measurable value.
- 42% of firms abandoned most AI initiatives in 2025, up from 17% last year.
By contrast, Deloitte’s year-end State of Generative AI pulse shows the opposite end of the spectrum: 74% of the most advanced Gen-AI initiatives are already meeting or exceeding ROI targets, while only one in five projects overall achieves that level of success — evidence of a widening leader–laggard gap.
Real-world lessons from LSE and Deloitte
The insights shared in this blog are drawn from a recent live event hosted by the London School of Economics and Political Science (LSE) and FourthRev, which brought together LSE faculty experts and an industry leader from Deloitte to explore what it really takes to translate AI strategy into business impact.
The session also introduced the new LSE AI Leadership Accelerator, a programme purpose-built for senior professionals and consultants who need to lead or advise on AI transformation. Developed in collaboration with experts from Deloitte’s Office of Gen AI, as well as Heads of AI at Inchcape Digital, Thredd and others, the Accelerator focuses not on technical training, but on strategic leadership, organisational change, and responsible implementation — the very themes explored during the webinar panel.
The industry quotes that follow are taken directly from this event. They illustrate, in the speakers’ own words, the real barriers organisations face and the leadership mindsets required to overcome them.
The surprising truth: It’s rarely a technical problem
A growing body of research confirms what many executives feel intuitively: technology isn’t the main barrier.
According to BCG’s 2024 AI Radar, around 70% of adoption challenges stem from people and process issues, not technical limitations like algorithms or infrastructure.
Dr. Dorottya Sallai, an Associate Professor (Education) of Management at the LSE Department of Management, calls AI adoption a “cultural transition.” As Dr. Sallai explained during the webinar:
“With every digital transformation, it all depends on the people. You can introduce a new system, but if people don’t use it, or they revert to the old systems, it’s not going to work. With generative AI and AI more generally, it’s more complex because it is not like traditional technology; it will impact a lot of different aspects of the organisation — the decision-making processes. The biggest issues are the psychological dimension, the cultural dimension, and the leadership dimension.”
Dr. Dorottya Sallai, Associate Professor (Education) of Management, LSE Department of Management
Put simply, most AI failures stem from strategic misalignment, cultural resistance, and leadership inertia, not model accuracy.
In many organisations, technical teams build capable models but struggle to communicate business value. Business teams, meanwhile, face internal friction: employees worry about job security, and executives grapple with scepticism rooted in legacy systems and past failures.
Perhaps most telling is McKinsey’s 2025 finding: “The biggest barrier to scaling AI is not employees — who are ready — but leaders, who are not steering fast enough.” Their research shows that leaders are twice as likely to blame employee resistance as to acknowledge their own gaps in guiding transformation.
Seven leadership practices that drive AI success
Based on comprehensive research across multiple industries and feedback from organisations actively implementing AI today, seven leadership practices consistently emerge as critical factors in successful AI transformation. These practices address the most common challenges we hear from executives: lack of strategic vision, organisational resistance, and difficulty demonstrating tangible value.
1. Foster trust and transparency
Trust is the foundation of successful AI adoption. When employees don’t understand AI’s purpose or implementation, resistance naturally follows. Harvard Business Review puts it succinctly: “Employees won’t trust AI if they don’t trust their leaders.”
Effective leaders:
- Communicate openly about AI initiatives and their intended impact
- Acknowledge uncertainties and address concerns proactively
- Involve employees in AI experiments and decision-making
- Demonstrate how AI decisions are made and can be verified
- Establish formal ‘responsible AI’ processes
The impact is measurable: according to IBM research, organisations that implement formal responsible AI frameworks report significantly higher workforce adoption and engagement.
What’s more, employees back what they understand. Deloitte’s C-suite ethics survey shows 88% of companies now communicate openly about how they use AI, and 52% involve the board when drafting AI-ethics policy.
In short, when leaders communicate clearly and implement responsible AI principles, they create the trust and clarity teams need to adopt AI confidently.
2. Lead with a clear vision and business case
LSE researchers warn that “AI strategies without a compelling ‘why’ rarely survive first contact with reality.” The best leaders link every AI use case to a strategic outcome and publish an enterprise AI roadmap.
Deloitte finds C-suite alignment is a top-three predictor of scaling success. AI initiatives succeed when they’re connected to strategic business outcomes rather than implemented as technology experiments. Leaders must articulate a compelling ‘why’ behind AI adoption.
Successful organisations:
- Connect AI directly to business strategy and priorities
- Develop compelling narratives about AI’s purpose
- Establish clear ROI metrics and milestones
- Focus on solving real business problems rather than deploying technology for its own sake
IBM emphasises that the most successful AI adopters ground their initiatives in clearly defined business strategies, supported by structured roadmaps that align AI efforts with enterprise goals — a sharp contrast to the ad hoc approaches seen in less mature organisations.
Strategic programme managers emphasise that developing validated narratives around AI adoption — supported by concrete business cases — is essential for earning stakeholder buy-in, particularly in organisations grappling with legacy systems or recovering from previous failed technology initiatives.
3. Establish strong governance and ethics
Ethics is now a revenue issue. Deloitte reports that 55% of C-level executives say robust AI guidelines are very important to growth, and 49% already have formal policies in place (another 37% are nearly ready). When the CEO or board takes direct oversight, McKinsey observes a 3.6× boost in bottom-line impact.
As AI becomes more embedded in critical business processes, governance becomes increasingly important. Leaders must establish appropriate guardrails while enabling innovation.
Effective governance includes:
- Clear ‘guardrails’ for appropriate AI use cases
- Multi-disciplinary oversight spanning technical and business perspectives
- Responsible AI principles addressing bias, privacy and transparency
- Proactive compliance management
- Executive-level accountability
This level of executive oversight has become a consistent marker of maturity. Companies that embed governance at the highest levels tend to achieve stronger alignment, greater impact, and more sustainable results.
IKEA provides an instructive example, having established a multidisciplinary AI governance team, comprising technologists, legal experts, policy professionals, and designers, that ensures AI initiatives align with business priorities and uphold responsible AI principles.
4. Invest in people and skills
AI fluency must extend across the enterprise. Nearly two-thirds of organisations prefer upskilling existing employees over external hiring for new AI roles; almost half are already reskilling staff for Gen-AI. LSE’s “human-centred” guidance stresses training in critical thinking, change management, and prompt engineering to turn fear into curiosity.
AI fluency is rapidly becoming as important as digital literacy. Organisations must build capabilities across all levels, not just among technical specialists.
Leading organisations:
- Develop organisation-wide data literacy programmes
- Provide role-based AI capability training
- Establish AI academies and learning paths
- Build communities of practice to share knowledge
- Train employees in “prompt engineering” for generative AI
BCG research indicates that organisations with a strategic focus on AI — allocating substantial resources and upskilling their workforce — achieve significantly higher ROI on their AI investments compared to their peers.
Technical leaders increasingly highlight the importance of soft skills alongside technical fluency — including the ability to frame AI solutions in terms of business value, communicate outcomes to stakeholders, and align innovations with strategic goals. As AI becomes more integrated into decision-making, this blend of business and technical acumen is proving essential for driving adoption and delivering real impact.
5. Encourage experimentation and learn from failure
Innovation demands controlled testing environments. Deloitte’s Gen-AI survey shows 76% of leaders will give AI projects at least 12 months to resolve ROI or adoption challenges before shrinking budgets, signalling patience for iterative learning.
Innovation requires experimentation, and experiments sometimes fail. How organisations handle failure often determines their long-term success with AI.
Successful approaches include:
- Creating safe spaces for AI experimentation
- Destigmatising failure as an essential part of the learning process
- Applying agile methodologies to AI projects
- Systematically reviewing lessons learned across the organisation
- Celebrating both successes and valuable failures
BCG’s research surfaced a counterintuitive insight: organisations that acknowledge and ‘celebrate’ failures in AI pilots tend to create more long-term value, likely because they learn faster and iterate more effectively.
The hidden cost of hesitation
While experimentation is essential, many organisations remain cautious — particularly those navigating economic pressure or still carrying the weight of past transformation failures.
George Johnston from Deloitte offered a candid reflection on this dilemma during the LSE webinar:
“You don’t experiment for nothing… there is a cost to these things. You’ll ask, is now the time to be spending money on something that may not work? Or shall we hold on to that for the time being, understand what others are doing? We don’t necessarily need to be the first mover.”
George Johnston, AI Technology, Media and Telecoms (TMT) EMEA leader, Partner at Deloitte
Pausing may feel prudent, but in fast-moving fields like AI, the bigger risk is falling behind while others build capability, confidence, and momentum.
6. Show visible leadership and align from the top
Culture follows example. High-achieving companies are three times more likely to trust AI insights over ‘gut feel’, yet they also invest heavily in change management and training to channel that confidence productively. Cross-functional steering committees, and leaders who personally use AI tools, signal that transformation is non-negotiable.
George noted in the event:
“When I think about some of the experimentation that’s not worked so well… probably the most common factor that we’ve seen is that there has not been sufficient senior sponsorship, and/or there’s a limited path to that scaling. That tends to be where things fail — not understanding the end-to-end process of how you are going to transform, and having the senior buy-in.”
George Johnston, AI Technology, Media and Telecoms (TMT) EMEA leader, Partner at Deloitte
When leaders demonstrate personal commitment to AI adoption, it signals importance to the entire organisation. Alignment at the top is particularly crucial given AI’s cross-functional nature.
Effective leadership alignment includes:
- Ensuring C-suite consensus on AI strategy and priorities
- Establishing cross-functional steering committees
- Creating dedicated transformation offices when appropriate
- Leaders personally using and championing AI tools
- Regular board-level engagement on AI progress
IBM found that in the most successful AI organisations, the C-suite and IT leaders work in lockstep — a sharp contrast to companies where AI remains siloed in technical teams.
7. Prioritise ethical leadership & responsible AI
Beyond compliance, ethics is a brand differentiator. According to Deloitte, board-level involvement in AI-ethics policy is becoming standard practice, with 52% of boards always engaged. Organisations with mature ethical frameworks are also 2.5× more likely to earn customer trust. LSE experts add that open dialogue about risks creates the psychological safety needed for rapid, responsible experimentation. As Dr. Dorottya Sallai put it:
“My advice would be — equip yourself with the knowledge. If you have the knowledge and you understand what’s happening, you will be able to take leadership. I think the biggest challenge for leaders today is to understand what’s going on.”
Dr. Dorottya Sallai, Associate Professor (Education) of Management, LSE Department of Management
As AI systems become more powerful and influential, ethical leadership has emerged as a critical practice for sustainable success. Organisations leading in this area recognise that responsible AI isn’t just about risk management — it’s about competitive advantage and long-term viability.
Effective ethical leadership includes:
- Defining and enforcing clear ethical boundaries for AI use
- Ensuring diverse perspectives in AI development and governance
- Proactively addressing potential biases in data and algorithms
- Creating transparent processes for addressing ethical concerns
- Aligning AI initiatives with organisational values and societal expectations
Ethical leadership in AI isn’t just about doing the right thing — it’s increasingly essential for driving adoption, earning customer trust, and unlocking long-term value. Deloitte finds that organisations with clear AI governance structures are more likely to see real business impact, while the World Economic Forum warns that consumers are paying closer attention to how companies design and deploy AI.
As AI scales, so do the consequences of getting ethics wrong — making proactive, transparent leadership non-negotiable.
The business impact of effective AI leadership
Combine these practices and the numbers tell a compelling story: effective AI leadership isn’t a soft skill — it’s a measurable differentiator. Organisations that lead in this space grow revenue 1.5× faster, achieve higher ROI, and stay ahead of disruption.
The gap between leaders and laggards continues to widen, creating urgency for organisations to address leadership capabilities around AI implementation. As Dr. Sallai highlighted, leadership today demands not just strategy — but the willingness to stay ahead of fast-moving change.
Developing AI leadership capabilities: LSE AI Leadership Accelerator
Most AI leadership programmes still focus on coding or data science. The LSE AI Leadership Accelerator is different: it equips executives to apply these seven practices, turn strategic intent into business value, and join the elite 20% of transformations that succeed. Participants leave with:
- Practical tools to implement the seven leadership practices in their organisations
- A board-ready AI business case and implementation roadmap
- Toolkits for responsible AI governance and culture change
- Direct feedback from Deloitte and LSE faculty on live projects
- A peer network of leaders closing the AI value gap
Transforming organisational approaches to AI requires intentional development of leadership capabilities. Many professionals are now aiming to break through the next leadership ceiling — shifting from tactical roles into strategic influence by mastering AI implementation.
The LSE AI Leadership Accelerator helps leaders close the implementation gap by focusing on real-world business cases, responsible governance, and the human side of AI transformation — exactly where most initiatives fall short.
As the research clearly demonstrates, the gap between AI potential and business value is primarily a leadership challenge, not a technical one. Organisations that develop strong AI leadership capabilities position themselves to capture the substantial value that AI offers while avoiding the pitfalls that have derailed so many initiatives.
Ready to bridge the gap between AI potential and real business impact?
To learn more about the LSE AI Leadership Accelerator and how it can help your organisation bridge the AI implementation gap, download the programme brochure.