
The Long Road from AI Hype to Reality



Gartner’s 2023 Hype Cycle places transformational generative AI at the Peak of Inflated Expectations, with mainstream business impact expected around 2025–2028.

A BCG survey reveals a troubling adoption gap: while CEOs declare AI adoption a priority, only a small percentage of companies have actually begun training their workforce.

Many employees, for their part, recognise the need for AI training and upskilling, as suggested by Coursera’s statistics on course enrolment.

 

This disconnect between hype and the slower reality of adoption has historical parallels in other fields.

-      Dot-com bubble (late 1990s): Forecasts of rapid ecommerce transformation led to speculation and high valuations before many internet startups failed by 2000. Gradual growth ultimately returned.

-      Cleantech/biofuels (late 2000s): Forecasts that renewables would scale quickly were recalibrated once the longer timelines for commercially viable production became clear.

-      Human genome sequencing (early 2000s): Bold predictions that unlocking the genome would quickly revolutionise medicine gave way to a plunge in biotech stocks when translation into actual therapies proved challenging.


AI itself has peaked and troughed over the decades without matching its initially sky-high visions. For example:

-      Late 1980s: Hype surrounded parallel computing and neural networks, with claims that these breakthroughs would soon achieve advanced AI. An “AI winter” ensued when ambitious efforts like the Fifth Generation project failed to deliver on those predictions.


-      Early 2000s: New hype promised that statistical machine learning and big data would unlock transformative capabilities. In practice, translating lab breakthroughs into deployed business solutions took far longer than expected.


-      2010s: Deep learning advances sparked predictions that AI would rapidly match and exceed human performance. Yet robust general intelligence remains a formidable challenge.

 

From these historical and current examples, we can deduce that long-term progress relies not on hype-driven development but on gradual integration that empowers employees to leverage AI safely.

 

Hype risks distracting from operational consistency and undermines capability-building among workforces that are increasingly expected to use AI tools. An iterative approach focused on gathering user feedback suits the complexity of emergent technologies and builds the necessary trust.

 

BCG’s data shows that most employees feel unprepared and under-supported as organisations charge ahead with AI initiatives. But sustainable growth comes from people-centric progress along the technology adoption curve, not sudden transformation.

 

In conclusion...

AI hype cycles tend to peak with inflated expectations before plunging into “AI winters”, revealing that real impact takes far longer than promised.


My questions  

  • How can you, as a leader, find a balance between AI aspirations and responsible development timelines?

  • How can you, as a leader, integrate AI iteratively in ways that empower users and build trust and capability?

My two cents

I believe bridging the divide requires companies to align stakeholder capabilities with realistic integration roadmaps. Step-by-step immersion allows responsible adoption practices to take root alongside the technology. It is through this longer-term, focused trajectory that AI can eventually meet business reality.

Do you, as a leader, have other ideas to share?
