Designing AI for Different Brains and the Missing Layer in Ethical AI
Sarah McLoughlin is the creator of Strategic Self-Advocacy™, founder of EduLinked and EduPsyched, and developer of Microsoft-supported digital tools that turn burnout into strategy across disability, education, and mental health systems.
Big idea: If people cannot understand or use AI, it is not ethical. This article explores how AI systems are often designed around specific thinking patterns, overlooking neurodivergent users. It highlights the missing layer in AI ethics: accessibility, human oversight, and inclusive design. Truly ethical AI must prioritize comprehension, expression, and integrity for all users, so that systems are both effective and inclusive.

A familiar moment
The screen lights up, and the answer looks clear. You read it, and then read it again. Something feels off. It’s not wrong, just not quite right. You pause.
What is happening?
This is not a mistake. It’s a mismatch. The system wasn’t designed for how your brain works.
For neurodivergent people, this is a familiar experience:
the language might move too fast
the structure assumes straight-line thinking
too much is packed into one sentence.
The system is working, but it is not working for you.
Why this matters
AI is often called "ethical," but discussions around it tend to focus on issues such as:
bias
safety
rules
These are important, but they miss a key aspect: Can people actually use and understand the system?
If not, the system fails, regardless of how well it follows the rules.
How AI is designed today
AI is not neutral. It is designed for specific ways of thinking, such as:
fast reading
strong text skills
step-by-step thinking
confidence with language
If your processing matches these assumptions, AI tends to feel easy to use. But if not, you must adapt.
What gets left out
People think and communicate in different ways. Some people:
need more time to process
use visuals or symbols
find dense text difficult
think in non-linear ways
feel overwhelmed under pressure.
Yet, AI systems still expect people to read quickly, write clearly, and understand instantly.
This works for some users, but not for all. That’s not a small issue. It’s a design problem.
Understanding vs. Correctness
AI is often judged by the quality of its answers, but there’s another question: Can people actually understand those answers?
Tools like plain language, Easy Read, and symbol-supported communication can help. But there’s a challenge. When you simplify something, it becomes easier to understand but may lose part of its meaning. There’s a balance to be found between clarity and accuracy. If we ignore this balance, we may end up with answers that look clear but aren’t fully reliable.
When AI changes meaning
It’s easy to assume that if something sounds clear, it is correct. But that’s not always true, especially when context is reduced or removed during transformation.
AI can change meaning when it summarizes, simplifies, or rewrites content without preserving the context or the original source. This can:
remove important details
change tone
lose authorship
In important situations, this matters. AI should help people understand, not quietly change the story.
Who is doing the thinking?
As AI improves, more responsibility shifts to the user. You have to:
decide if the output is correct
interpret unclear parts
notice what’s missing
manage the risks
The system may look smart, but you’re doing the work.
Why human oversight matters
There’s pressure to automate everything, but full automation doesn’t work in high-stakes contexts where meaning, identity, or consequences are involved. Human oversight is essential. Without it:
mistakes stay hidden
harm increases
trust breaks down
What ethical AI should include
Ethical AI is not just about features; it’s about system design. Work in this space is already happening, as seen in ethical AI research and system design at EduLinked. Here’s a simple framework for ethical AI:
Comprehension: People must understand the output. This includes plain language, Easy Read, and clear definitions.
Expression: People must be able to express themselves in different communication styles and multiple formats.
Integrity: Meaning must stay visible. This includes tracking authorship and showing changes.
Consent: People must remain in control with clear permission and the ability to make changes.
Oversight: Human review and accountability are essential to ensure responsibility.
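The Integrity principle above asks that changes stay visible rather than silent. As one minimal sketch of what "showing changes" could look like in practice, a plain-language rewrite can be published alongside a diff against the original, using Python's standard `difflib` module. The sample sentences here are invented for illustration, not taken from any real system.

```python
import difflib

# Hypothetical example texts: an original sentence and a
# plain-language rewrite of it (both invented for this sketch).
original = (
    "Applicants must lodge Form ND-7 with supporting medical evidence "
    "no later than 28 days before the review date."
)
simplified = (
    "Send Form ND-7 and your medical papers at least 28 days "
    "before your review."
)

# Produce a unified diff so the rewrite is auditable: every removed or
# added phrase is marked, instead of the source being silently replaced.
diff = list(difflib.unified_diff(
    original.splitlines(),
    simplified.splitlines(),
    fromfile="original",
    tofile="plain-language",
    lineterm="",
))

for line in diff:
    print(line)
```

Keeping the diff attached to the simplified output is one way to preserve authorship and let a human reviewer check whether any detail, tone, or qualifier was lost in the rewrite.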
The real question
The real question is not whether AI is accurate or efficient, but: "Who can use it, and who cannot?"
Final thought
We don’t just need smarter AI. We need AI that:
Can be understood
Can be questioned
Works for different ways of thinking
Keeps meaning intact
If someone cannot understand an AI system, it is not ethical. It’s exclusion, at scale.
Read more from Sarah Ailish McLoughlin
Sarah Ailish McLoughlin, Neurodivergent and Disabled Founder
Sarah Ailish McLoughlin is the neurodivergent founder behind EduLinked and EduPsyched, and the creator of the Strategic Self-Advocacy™ framework. Her work transforms lived experience into trauma-informed, policy-smart tools that restore clarity and agency. Through digital apps, therapeutic messaging, and emotionally literate reform training, she helps carers, educators, and system-changemakers navigate complexity without self-erasure. Her Microsoft-backed NDIS Navigator app and emotional literacy campaigns are reshaping advocacy, access, and wellbeing across Australia.