
The True Problem With AI Lies in Human Choices, Not the Technology Itself

  • Writer: Brainz Magazine
  • Jun 29
  • 4 min read

Shardia O'Connor is a mental well-being advocate and cultural consultant. She is best known for her hosting and writing skills, as well as her sense of fashion. Shardia is the founder of her online media platform, Shades Of Reality, and the owner of Thawadar Boutique LTD.

Executive Contributor Shardia O’Connor

Artificial Intelligence (AI) often sparks fear: job losses, privacy breaches, and unfair decisions dominate headlines. Yet, the real problem behind AI lies not within the technology itself but in how humans build, deploy, and govern it. AI is a mirror reflecting human values, biases, and power structures; our ethical failures, not machines, create risk.


[Image: A man in a suit faces a glowing blue hologram in a dimly lit room with a dark background, creating a futuristic and mysterious mood.]

Fear of AI or fear of ourselves?


Public concern about AI is widespread. Surveys such as those by the Pew Research Center (2023) show that many adults in the US and UK worry more about AI’s risks than its benefits. These anxieties stem from deeper issues: fears about social inequality, loss of control, and shifting power dynamics. AI does not have intent or consciousness. Instead, it operates through algorithms trained on human-generated data, inheriting both knowledge and bias (Bender et al., 2021).


Algorithmic bias and social inequality


Algorithmic bias is a major challenge. Buolamwini and Gebru’s (2018) landmark study showed that commercial facial analysis systems misclassify darker-skinned women far more often than lighter-skinned men, largely because of unrepresentative training data. These errors have real consequences, from wrongful arrests to exclusion from opportunities.
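To make the disparity concrete, here is a minimal illustrative sketch in Python, using invented numbers rather than the study's actual data, of the kind of disaggregated evaluation Buolamwini and Gebru performed: measuring a classifier's accuracy separately for each demographic group reveals gaps that a single overall figure hides.

  # Illustrative only: invented results for a hypothetical face classifier.
  # The point is the method (per-group accuracy), not the numbers.
  from collections import defaultdict

  results = [  # (demographic group, was the prediction correct?)
      ("lighter-skinned men", True), ("lighter-skinned men", True),
      ("lighter-skinned men", True), ("lighter-skinned men", True),
      ("darker-skinned women", True), ("darker-skinned women", False),
      ("darker-skinned women", False), ("darker-skinned women", False),
  ]

  totals, correct = defaultdict(int), defaultdict(int)
  for group, ok in results:
      totals[group] += 1
      correct[group] += ok  # True counts as 1, False as 0

  # The aggregate number looks tolerable (62%)...
  print(f"Overall accuracy: {sum(correct.values()) / len(results):.0%}")
  # ...but per-group accuracy exposes the gap the average hides (100% vs 25%).
  for group in totals:
      print(f"{group}: {correct[group] / totals[group]:.0%}")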


Similarly, AI-driven risk assessment tools in criminal justice have been criticised for racial biases, disproportionately penalising minority groups (Angwin et al., 2016). These cases expose how AI magnifies existing social inequalities rather than creating new ones.


Economic disruption and unequal impact


AI-driven automation threatens jobs globally. An OECD working paper by Arntz, Gregory, and Zierahn (2016) estimated that about 9% of jobs across OECD countries face a high risk of automation, with low-skilled workers affected the most. Acemoglu and Restrepo (2020) add that while automation can increase productivity, it may deepen wage inequality and displace workers, underscoring the need for proactive social policy.


Addressing these challenges requires investment in education, retraining programs, and robust social safety nets, not just technological innovation.


Transparency and inclusive governance


AI often functions as a “black box,” making decisions that users cannot easily understand or challenge (Burrell, 2016). A lack of transparency erodes trust and accountability.


Inclusive design and participatory development can reduce bias and improve fairness. Holstein et al. (2019) argue that involving diverse stakeholders throughout AI development leads to more ethical and effective systems.


Ethical AI requires sociopolitical commitment


Ethical AI is not just a technical issue but a societal one. Jobin, Ienca, and Vayena (2019) catalogued over 80 ethical AI guidelines worldwide, emphasising principles like fairness, accountability, and transparency. However, implementing these values requires interdisciplinary collaboration and governance frameworks that account for long-term impacts (Mittelstadt et al., 2016).


Regulatory proposals such as the EU’s AI Act show the potential for legislation that categorises AI risk and bans harmful uses, like mass biometric surveillance (European Commission, 2021).


AI reflects society’s values


AI does not work in a vacuum. Noble (2018) shows how search engine algorithms perpetuate racial and gender biases, reinforcing systemic inequality under the guise of neutrality. Such embedded bias demands critical reflection on whose interests AI serves.


Therefore, the real question is not whether AI is dangerous but how humans choose to develop and govern it.


Conclusion: Responsibility lies with us


AI is a powerful tool shaped by human choices. Without ethical stewardship, it risks deepening social divides and reinforcing injustice. Yet, with a commitment to fairness, transparency, and inclusive governance, AI can advance societal good.


The future of AI is a human question about power, values, and justice.


Follow me on Facebook, Instagram, LinkedIn, and visit my website for more info!

Read more from Shardia O’Connor

Shardia O’Connor, Cultural Consultant

Shardia O'Connor is an expert in her field of mental well-being. Her passion for creative expression was influenced by her early childhood. Born and raised in Birmingham, West Midlands, and coming from a disadvantaged background, Shardia built her character through early life experiences that taught her empathy and compassion and led her to a career in the social sciences. She is an award-winning columnist and the founder and host of her online media platform, Shades Of Reality. Shardia is on a global mission to empower, encourage, and educate the masses!

References:


  • Acemoglu, D. and Restrepo, P. (2020) ‘Robots and jobs: Evidence from US labor markets’, Journal of Political Economy, 128(6), pp. 2188–2244. https://doi.org/10.1086/705716

  • Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016) ‘Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks’, ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (Accessed: 27 June 2025).

  • Arntz, M., Gregory, T. and Zierahn, U. (2016) ‘The risk of automation for jobs in OECD countries: A comparative analysis’, OECD Social, Employment and Migration Working Papers, No. 189. Available at: https://doi.org/10.1787/5jlz9h56dvq7-en (Accessed: 27 June 2025).

  • Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) ‘On the dangers of stochastic parrots: Can language models be too big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623. https://doi.org/10.1145/3442188.3445922

  • Buolamwini, J. and Gebru, T. (2018) ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’, Proceedings of Machine Learning Research, 81, pp. 1–15. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 27 June 2025).

  • Burrell, J. (2016) ‘How the machine “thinks”: Understanding opacity in machine learning algorithms’, Big Data & Society, 3(1), pp. 1–12. https://doi.org/10.1177/2053951715622512

  • European Commission (2021) ‘Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, COM(2021) 206 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2021%3A206%3AFIN (Accessed: 27 June 2025).

  • Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudík, M. and Wallach, H. (2019) ‘Improving fairness in machine learning systems: What do industry practitioners need?’, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–16. https://doi.org/10.1145/3290605.3300831

  • Jobin, A., Ienca, M. and Vayena, E. (2019) ‘The global landscape of AI ethics guidelines’, Nature Machine Intelligence, 1(9), pp. 389–399. https://doi.org/10.1038/s42256-019-0088-2

  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. (2016) ‘The ethics of algorithms: Mapping the debate’, Big Data & Society, 3(2), pp. 1–21. https://doi.org/10.1177/2053951716679679

  • Noble, S. U. (2018) Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.
