AI impacts on business: Interview with François Candelon, BCG.
I want to thank François Candelon for dedicating his time to this interview with "burning questions" about AI and its utilisation in business.
François Candelon is Managing Director & Senior Partner at BCG and Global Director at BCG Henderson Institute.
We have asked François five compelling questions about AI.
1. Is it correct to assume that AI will always be a Minimum Viable Product due to its ongoing development?
I would not consider AI as a Minimum Viable Product; instead, I would describe it as an “entity” that adapts and evolves continuously.
This progressive development requires companies to be equally adaptive in their approach to governance and to continuously ensure that their AI solutions remain aligned with the company’s values, culture, and goals.
This can be challenging, especially with black-box models where there is no clarity on how the model makes its decisions. Yet, it is imperative that companies are diligent in their approach to ensuring their AI products are fully aligned with the company’s expectations.
2. What is your perspective on AI’s potential as a support system for personnel development? Is it a realistic option?
My view about the impact of AI on personnel development is very positive. I believe it is a realistic option, especially with the rise of generative AI, which can create personalised content that matches an employee’s learning needs.
Consider, for instance, the possibility of creating an AI model that acts as a teacher with a Montessori approach to education, involving self-directed learning through activities, practice, and collaborative games. The AI model could assist personnel in identifying their unique areas for professional development and adaptively create content that matches their development needs and preferred learning style.
3. Are you confident that AGI (Artificial General Intelligence) is reachable?
I am not certain whether true AGI is feasible, but I am confident that it will not be achieved in the very short term. However, I believe it is very important that we actively investigate the potential implications of AGI well in advance of its realisation.
These implications are ethical but also practical. How will humans behave when they rely too heavily on AI? What happens when humans lose certain knowledge and capabilities because they have been outsourced to AI?
I’m currently involved in an experiment that tests how humans work with AI on work-related tasks. One concerning preliminary finding is that when people trust AI, they tend to lose their ability to think critically and scrutinise its output. The result is obvious errors in their work, even as they feel very confident in their solutions. In fact, participants who did not use AI produced higher-quality outputs than those who did.
This finding is very preliminary, but it was a big surprise, and it suggests that when people rely too much on AI, they can get lazy and lose their drive for critical thinking and cognitive effort. This has big implications and risks that need to be explored for the future with AGI.
4. How can AI provide valuable support for transformational activities before being integrated? Can it serve as an advisor or consultant?
The extent to which AI, before being incorporated, can act as an advisor or consultant for transformational activities depends heavily on the use case.
If external data can inform the initiatives, it may be possible to use AI trained on that data to generate insights and advise on how to implement the AI solution for internal use. However, this also depends heavily on the complexity of the use case, because the AI needs to be able to provide valuable insight into the transformation.
I think that the most important factor is defining how the collaboration between humans and AI could progress. Such collaboration will depend on the value the AI provides to the human and how the human can inform the AI in a real-world context. An effective and insightful model that can be adapted to the real context of the organisation could be powerful for accelerating transformational goals.
5. What are your views about integrating AI into a company’s governance structure?
Integrating AI into a company requires a centralised governance model with a Chief AI Officer or a similar position. This person should have an overarching view of all aspects of AI integration across the organisation and determine the necessary policies and guidelines for all implementations. They should also be responsible for ensuring that all AI implementations are aligned with the corporate culture and values.
It is crucial to remember that designing AI solutions requires a delicate balance of technical, ethical and change management trade-offs that need to be carefully evaluated for each implementation.
Digital transformations are not only about algorithms. In fact, only 30% of the value of AI is realised through effective algorithms and technology; 70% of the value generated from any digital or AI strategy comes from effective change management. This means that the Chief AI Officer needs to look beyond the technology itself and focus on cultural initiatives that build a responsive and fully engaged workforce.