
The True Problem With AI Lies in Human Choices, Not the Technology Itself

  • Writer: Brainz Magazine
  • Jun 29, 2025
  • 4 min read


Shardia O’Connor explores identity, power, leadership, and social conditioning through a values-led, critical lens.

Executive Contributor Shardia O’Connor

Artificial Intelligence (AI) often sparks fear: job losses, privacy breaches, and unfair decisions dominate the headlines. Yet the real problem with AI lies not in the technology itself but in how humans build, deploy, and govern it. AI is a mirror that reflects human values, biases, and power structures; the risk comes from our ethical failures, not from the machines themselves.



Fear of AI or fear of ourselves?


Public concern about AI is widespread. Surveys from institutions such as the Pew Research Center (2023) show that most adults in the US and UK worry more about AI’s risks than its benefits. These anxieties stem from deeper issues: fears about social inequality, loss of control, and shifting power dynamics. AI has no intent or consciousness. It operates through algorithms trained on human-generated data, inheriting both our knowledge and our biases (Bender et al., 2021).


Algorithmic bias and social inequality


Algorithmic bias is a major challenge. Buolamwini and Gebru’s (2018) landmark study showed that commercial facial analysis systems perform far worse on darker-skinned women than on lighter-skinned men, largely because of unrepresentative training data. These errors have real consequences, from wrongful arrests to exclusion from opportunities.


Similarly, AI-driven risk assessment tools in criminal justice have been criticised for racial biases, disproportionately penalising minority groups (Angwin et al., 2016). These cases expose how AI magnifies existing social inequalities rather than creating new ones.


Economic disruption and unequal impact


AI-driven automation threatens jobs globally. An OECD working paper by Arntz, Gregory and Zierahn (2016) estimated that around 9% of jobs in OECD countries face a high risk of automation, with low-skilled workers the most exposed. Acemoglu and Restrepo (2020) add that while automation can increase productivity, it may also deepen wage inequality and displace workers, underscoring the need for proactive social policy.


Addressing these challenges requires investment in education, retraining programmes, and robust social safety nets, not just technological innovation.


Transparency and inclusive governance


AI often functions as a “black box,” making decisions that users cannot easily understand or challenge (Burrell, 2016). A lack of transparency erodes trust and accountability.


Inclusive design and participatory development can reduce bias and improve fairness. Holstein et al. (2019) argue that involving diverse stakeholders throughout AI development leads to more ethical and effective systems.


Ethical AI requires sociopolitical commitment


Ethical AI is not just a technical issue but a societal one. Jobin, Ienca, and Vayena (2019) catalogued over 80 ethical AI guidelines worldwide, emphasising principles like fairness, accountability, and transparency. However, implementing these values requires interdisciplinary collaboration and governance frameworks that account for long-term impacts (Mittelstadt et al., 2016).


Regulatory efforts such as the EU’s AI Act show how legislation can categorise AI systems by level of risk and prohibit the most harmful uses, such as indiscriminate biometric surveillance (European Commission, 2021).


AI reflects society’s values


AI does not operate in a vacuum. Noble (2018) shows how search engine algorithms perpetuate racial and gender biases, reinforcing systemic inequality under the guise of neutrality. This kind of embedded bias calls for critical reflection on whose interests AI serves.


Therefore, the real question is not whether AI is dangerous but how humans choose to develop and govern it.


Conclusion: Responsibility lies with us


AI is a powerful tool shaped by human choices. Without ethical stewardship, it risks deepening social divides and reinforcing injustice. Yet, with a commitment to fairness, transparency, and inclusive governance, AI can advance societal good.


The future of AI is a human question about power, values, and justice.


Follow me on Facebook, Instagram, LinkedIn, and visit my website for more info!

Read more from Shardia O’Connor

Shardia O’Connor, Cultural Consultant

Shardia O'Connor is an expert in the field of mental wellbeing. Her passion for creative expression was shaped by her early childhood. Born and raised in Birmingham, West Midlands, and coming from a disadvantaged background, Shardia credits her early life experiences with building her character and teaching her empathy and compassion, which led her to a career in the social sciences. She is an award-winning columnist and the founder and host of her online media platform, Shades Of Reality. Shardia is on a global mission to empower, encourage, and educate the masses!

References:


  • Acemoglu, D. and Restrepo, P. (2020) ‘Robots and jobs: Evidence from US labor markets’, Journal of Political Economy, 128(6), pp. 2188–2244. https://doi.org/10.1086/705716

  • Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016) ‘Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks’, ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (Accessed: 27 June 2025).

  • Arntz, M., Gregory, T. and Zierahn, U. (2016) ‘The risk of automation for jobs in OECD countries: A comparative analysis’, OECD Social, Employment and Migration Working Papers, No. 189. Available at: https://doi.org/10.1787/5jlz9h56dvq7-en (Accessed: 27 June 2025).

  • Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) ‘On the dangers of stochastic parrots: Can language models be too big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623. https://doi.org/10.1145/3442188.3445922

  • Buolamwini, J. and Gebru, T. (2018) ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’, Proceedings of Machine Learning Research, 81, pp. 1–15. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 27 June 2025).

  • Burrell, J. (2016) ‘How the machine “thinks”: Understanding opacity in machine learning algorithms’, Big Data & Society, 3(1), pp. 1–12. https://doi.org/10.1177/2053951715622512

  • European Commission (2021) ‘Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, COM(2021) 206 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2021%3A206%3AFIN (Accessed: 27 June 2025).

  • Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudík, M. and Wallach, H. (2019) ‘Improving fairness in machine learning systems: What do industry practitioners need?’, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–16. https://doi.org/10.1145/3290605.3300831

  • Jobin, A., Ienca, M. and Vayena, E. (2019) ‘The global landscape of AI ethics guidelines’, Nature Machine Intelligence, 1(9), pp. 389–399. https://doi.org/10.1038/s42256-019-0088-2

  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. (2016) ‘The ethics of algorithms: Mapping the debate’, Big Data & Society, 3(2), pp. 1–21. https://doi.org/10.1177/2053951716679679

  • Noble, S. U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

