What kind of manager would place complete confidence in an AI system? Which would rule out AI in favor of their own conclusions? When it comes to high-level strategic decisions, many leaders will continue to follow their gut rather than the machine. Is this a good thing?
AI is starting to play a key role in many areas: customer personalization, sales recommendations, financial portfolio recommendations, air collision avoidance, semi-autonomous vehicles, and medical screening. These applications require decisions on the spot, often made by low-level functions passing data from one system to another. It is worth noting that the main business cases promoted so far have been relatively tactical. "Rote automation is not the big opportunity here. It's better strategic thinking, innovation, decision making," remarks Dion Hinchcliffe, analyst at Constellation Research.
The higher-level, more strategic decisions that shape a company's direction represent the last great frontier of AI in business. And, to this day, there is no shortage of skepticism among decision-makers when it comes to strategic AI.
Faced with identical AI recommendations, many executives still reach their own decisions, a recent study concludes. The "human filter makes all the difference" in the decisions of AI-based organizations, according to Philip Meissner and Christoph Keding, both of ESCP, in a survey of 140 executives published in MIT Sloan Management Review.
The researchers presented participants with an ostensibly AI-generated recommendation for a new technology that would allow them to seize potential new business opportunities, and asked to what extent they trusted the AI recommendation. It turned out that many did not fully trust the result and went with their own choices instead. Other executives, by contrast, were all too willing to rely on AI. The researchers divided the respondents into three types of decision-makers: "skeptics," "interactors," and "delegators." Skeptics "seem reluctant to lose their autonomy in the process," while delegators, who tend to defer decisions, are happy to hand responsibility for decision-making over to AI.
Skeptics in the group "don't follow the AI-based recommendations, preferring to control the process themselves," Meissner and Keding say. "These managers do not want to make strategic decisions based on analysis carried out by what they perceive as a black box that they do not fully understand. Skeptics are themselves very analytical and need to understand the details before embarking on the decision-making process. When using AI, skeptics can fall prey to an illusion of control, causing them to over-trust their own judgment and underestimate that of the AI."
At the other end of the spectrum, delegators "largely shift their decision-making power to AI in order to reduce their perceived individual risk. For these executives, the use of AI dramatically increases the speed of the strategic decision-making process and can break a potential deadlock. However, delegators can also abuse AI to avoid personal liability; they could rely on its recommendations as a personal insurance policy in the event of a problem. This transfer of risk from the decision-maker to the machine could induce unjustified risk-taking for the company."
"These different decision-making archetypes show that the quality of the AI recommendation itself is only half of the equation for assessing the quality of AI-based decision-making in organizations," say Meissner and Keding.