AI Maximalists and AI Minimalists: Who Will Prevail, or Is Balance the Key?
- Feb 20
Written by Annette Densham, Chief Storyteller
Multi-award-winning PR specialist Annette Densham is considered the go-to for all things business storytelling, award submission writing, and assisting business leaders in establishing themselves as authorities in their field.
Decades ago, American sociologist and communications scholar Everett Rogers showed in Diffusion of Innovations that innovators and early adopters make up a small minority. They exist to test, break, and explore. The early and late majority are where technology becomes stable, repeatable, and commercially viable. Across the past century, from electricity and telephones to PCs, cloud, and smartphones, the same pattern repeats. The pioneers pay the chaos tax so everyone else can get the dependable upside.

Rogers noted the early majority are deliberate. “They adopt innovations only after observing that the innovation has been adopted by others in their system.”
Inbal Rodnay, a technology adoption expert, sees the same pattern playing out with AI. “Innovators are supposed to experiment. They try things early, they break things, and are very vocal about it. That’s their job, but it’s not yours.”
The danger of early adoption isn’t that it never works. It’s that the cost of learning is high, unpredictable, and usually underestimated. Early adopters absorb immature tooling, incomplete integrations, shifting pricing models, and fast-moving risk. In consumer tech, that’s annoying. In professional services, it’s dangerous.
Gartner has consistently found that most AI initiatives fail to deliver sustained value. By 2026, more than 80% of organisations will have used generative AI tools, but fewer than 30% will see measurable, ongoing business impact. Most spend is lost to duplicate tools, short-lived pilots, and solutions that never make it into real workflows.
What’s different this time is the environment around adoption: social media rewards speed and visibility over judgement.
“Now you can access powerful tools in minutes, so adoption feels harmless and reversible, even when the consequences aren’t,” Inbal says.
“Social media has changed the adoption curve. Early adopters used to experiment behind the scenes. Now experimentation is public and performative, and it makes sensible leaders feel like they’re falling behind.”
“AI looks cheap to try, but it’s expensive to clean up. The licence cost is small, but the risk, rework, and governance debt show up later. People think they’re buying efficiency, but if the tool doesn’t fit how work actually happens, you end up creating more work, not less.”
Early adoption focuses on what’s possible, not what’s sustainable. But the other extreme carries risk too. Firms that delay too long don’t stay neutral. They lose context, capability, and credibility.
McKinsey’s research shows generative AI has already reset expectations around speed and responsiveness. Clients may not articulate it, but they feel it. Faster responses, clearer summaries, and better first drafts are becoming the baseline.
Inbal is blunt about the cost of opting out. “If you completely opt out, you don’t stay safe. Your clients are using AI, your team is experimenting, and you lose visibility and control.”
When leadership says they’re not using AI, it doesn’t remove AI from the business; it removes oversight. Inbal says shadow AI fills the gap as people use whatever is easiest and fastest to get through their workload, often without telling anyone.
“Judgement calls get made without guidance because staff have no shared standard for what is acceptable. People have to guess what’s safe and what’s not. You end up with inconsistent decisions across the firm, and no one can confidently say what good looks like,” Inbal says.
This isn’t really a question of maximalists versus minimalists.
The early majority, what Inbal calls the confident majority, avoids both traps. They don’t rush, and they don’t freeze. They build literacy before scale. “Start with what you already have in your stack,” Inbal says. “Try something small, contained, and reversible, with a clear definition of success. Then you decide: this works, or not yet. And you move on.”
“You don’t need AI to be magic,” she adds. “You need it to be useful.” That mindset reframes progress away from speed and towards judgement. AI doesn’t remove responsibility, it redistributes it. Someone still owns the work. Someone still supervises the output. Someone still explains the decision years later if it’s challenged.
Most organisations don’t fail with AI because they chose the wrong tool. They fail because they chose the wrong posture: too eager or too fearful, too much faith or none at all.
“The winners are the confident majority. They don’t get distracted by hyperbole or paralysed by fear. They stay curious, set boundaries, invest in AI literacy, and move when the capability is stable enough to be useful and defensible,” Inbal says. “The confident majority are the key to widespread adoption.”
Annette has shared her insights into storytelling, media, and business across Australia, the UK, and the US, speaking for the Professional Speakers Association, the Stevie Awards, the Queensland Government, and many more. A three-time winner of the Grand Stevie Award for Women in Business, a gold Stevie International Business Award winner, and a finalist in the Australian Small Business Champion awards, Annette audaciously challenges anyone in small business to cast aside modesty, embrace their genius, and share their stories.