
Cognitive Bias In Tech – Embracing Responsibility In Technological Development

Written by: Will Soprano, Executive Contributor


How the human condition is being programmatically hardwired into algorithms, AI, and big data.



Author’s Note: I originally wrote a paper on cognitive bias in tech in 2017, well before generative AI arrived in the world, focusing on the responsibility humans have in technology. I left the topic alone, and then ChatGPT and generative AI ripped through the fabric of society. The surge of generative AI over the last year is showing us that we have a big problem. And no, that problem isn’t AI taking over the world. Rather, we have a problem deeply rooted in the very software we work with (not just AI), in the learning models and datasets we use to train algorithms, and in the way we develop them.


When we created the internet so many years ago, we thought it would liberate us by democratizing information and connecting a global society, geography be damned. And in 2017 when I first put pen to paper about cognitive bias in tech, I focused on algorithms and globalization.


And then 2022 happened. OpenAI released ChatGPT, and a combination of fascination and fear ripped through this global society. But AI isn’t new. Quite the contrary – we began researching AI in the 1950s, decades after the formal study of algorithms began. From then until 2022, artificial intelligence, machine learning, and algorithms were tech’s hidden secret. Buried deep below the UI and UX – underneath the meticulously researched experiences designed to captivate us – sits the very heart and soul of these technologies.


What once seemed a problem only for researchers, as we unwittingly became points in datasets used against us (big data, learning models, facial recognition, approval algorithms, etc.), has quite seriously become an opportunity for humanity to grow together.


Humanizing technology: Navigating the intersection of cognitive bias and technological advancement


The relationship between humans and technology is more important than the applications themselves. Consider advancements in augmented reality, for example. With wearable tech that now lets us overlay information on the world around us, we could become a more human-centric workforce – or we could use that same innovative technology to become more disconnected and fear-driven.


Technology is created by humans, and humans have cognitive biases. Without cross-checking those biases, our technologies are programmatically hardwired with them. And that was before generative AI.


Artificial intelligence is, technically, just an algorithm. Algorithms are simply “if” statements of math. And these algorithms are the very backbone of every application we have ever invented.
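
To make that concrete, here is a minimal sketch of what an “if” statement of math looks like in software. Everything in it – the function name, the inputs, and especially the cutoffs – is hypothetical, and that is the point: every number was chosen by a person.

```python
# A toy approval rule, illustrating an algorithm as conditional math.
# The thresholds are hypothetical; in real systems, humans choose them too.

def approve_loan(credit_score: int, annual_income: float) -> bool:
    """Approve only if both human-chosen cutoffs are met."""
    if credit_score >= 680 and annual_income >= 45_000:
        return True
    return False

print(approve_loan(700, 50_000))  # True
print(approve_loan(660, 90_000))  # False – the cutoff, not the math, decides
```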


That’s the problem in a nutshell. But now we need to understand the parts of the whole. And the best way that I know to reach understanding is by asking questions. So I’ll be asking questions of the different parts, and I’d like to invite you to do the same as you read. Or if you’re looking for a quick solution, scroll to the bottom. Let’s get started…


Cognitive bias: A human condition


Our brains are eerily similar to computers. We have short- and long-term memory, functions, automations, and limitations – we can even overheat. One thing that separates our brains from computers is the human condition, for which there are no advanced analytics: we grow, advance, and otherwise rearrange the way we do things. And it’s the human condition that shapes our brain’s functions, both today and across our evolution.


Today we’re talking about just one aspect of the brain: cognitive bias. In theory, cognitive bias is an evolutionary advantage we developed to help us make quick decisions in survival situations – you might be familiar with this as “fight or flight”. Yet cognitive bias is present in all of our decisions, even though in the modern world we rarely face consequences as dire as death by bear, the way our ancestors once did.


So as we’ve evolved, cognitive bias has come to be applied more and more to things that are not life or death but merely feel that way in our current circumstances. What was once a way to stay alive in the face of imminent death has “evolved” into something that can take in information one day and, the next day, use that very same information to make a decision with only the bias for context. Racism is an outgrowth of cognitive bias. But so is that unsafe feeling you get when someone approaching you appears to be holding a weapon.


So cognitive bias isn’t bad, but it is often misused. Moreover, research shows that we tend to rely on our cognitive biases more when we are stressed, as the brain seeks a shortcut to quickly end the perceived “threat”. Can you imagine how stressed a programmer with little sleep, strict deadlines, and massive financial pressure might be?


Technology: Programmatically scaling the human condition


Algorithms are the backbone of technology – the programmatic approach we use to arrive at outcomes and decisions within software. These algorithms have become widely used by individuals (through generative AI) and are visible in our daily lives on social media. But they are not new, and some of the most harmful are the ones you do not see: credit card applications, healthcare, facial recognition, job applications, voice recognition, college applications, and more. These algorithms are gatekeeping the most important places we use to live, grow, and progress in society as it is today – and they’re rife with cognitive bias.


Algorithms: The role of programming and the impact of human bias


Algorithms themselves are not inherently biased – an algorithm is not some sort of brain that creates for itself. And that is the trap many people fall into: the thinking that if the system isn’t human, it can’t be biased – “it must just be what the data says.” In practice, though, the algorithm is biased, because it’s created by humans who have cognitive biases. We are programmatically replicating the human experience in our technologies.


That programmatic part is important. The human condition offers space for growth, change, and adaptation; software offers none. Unless someone actively changes the algorithm, it will stay as-is forever. Humans are flawed, and we’re programming systems that let us scale those flaws – flaws that are harming people as we speak. At scale.
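
Here is a toy sketch of what “scaling a flaw” can look like. The data is entirely made up, but the mechanism is real: a rule learned from biased historical decisions repeats that bias on every future case until a human edits the code.

```python
# Hypothetical example: past approvals skewed against one zip code.
from collections import defaultdict

historical_decisions = [
    ("10701", True), ("10701", True),    # past officers approved zip 10701
    ("10801", False), ("10801", False),  # and declined zip 10801
]

# "Training": memorize the past approval rate for each zip code.
totals, approvals = defaultdict(int), defaultdict(int)
for zip_code, approved in historical_decisions:
    totals[zip_code] += 1
    approvals[zip_code] += approved

def approve(zip_code: str) -> bool:
    # The learned rule: approve only where the past mostly approved.
    return approvals[zip_code] / totals[zip_code] > 0.5

print(approve("10701"))  # True, for every future applicant
print(approve("10801"))  # False, forever – until a human changes this code
```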


Cognitive biases are not bad – quite the opposite. They can be used for bad, but humans have something that machines do not: the ability to change and grow at our core. What we know to be true today can change tomorrow. Machines cannot do this. To be clear, I’m not suggesting that we abandon algorithms, but would it hurt to have cross-checkers?


Deciphering data: How does big data impact us?


A trendy term – but do we know what big data is? To understand big data, we should first know what small data is. Small data is a set of data small enough for a human to comprehend, with very detailed context, which is then used to extrapolate to a larger group. Big data is the opposite: limited context, but a huge set of data points collected, stored, and analyzed by algorithms to derive insights, trends, and patterns. In other words, big data is about machines and small data is about people.
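
In miniature, the difference looks something like this (the records below are made up). The moment we aggregate, the human context disappears and only a statistic remains.

```python
# Small data: few records, full context. Big data: one statistic, no context.

people = [
    {"name": "Ana",  "income": 30_000, "note": "supports two siblings"},
    {"name": "Bo",   "income": 90_000, "note": "just lost primary client"},
    {"name": "Cruz", "income": 60_000, "note": "stable salary for ten years"},
]

# Small data: a human can read every row, context included.
for person in people:
    print(person["name"], "-", person["note"])

# Big data (in miniature): the algorithm sees only the aggregate.
average_income = sum(p["income"] for p in people) / len(people)
print(f"average income: {average_income:.0f}")  # 60000 – the notes are gone
```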


And yet we use big data to make very personal and important decisions about people. Would you trust an inherently flawed process to make essential, legal, and life-altering decisions? Well, we currently do. These systems are being used for loan, college, job, and insurance applications – along with criminal and legal determinations.


Training algorithms and big data: How does human judgement impact technology?


While algorithms and big data can’t evolve toward new ideas the way humans can, they do require training to work well. That means they need huge volumes of data – datasets – to function. But where are we getting those datasets? What’s included in the data? And who determines what data is used in the learning process?


Since AI is not a new endeavor (it originally began in the 1950s), we can actually look back at the people who have been working with these concepts. Knowing they are human – meaning they operate out of cognitive bias – isn’t it possible that the very datasets behind our programs are flawed from the beginning?
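
One way to start answering that question is almost embarrassingly simple: count who is actually in the dataset before training on it. Below is a minimal sketch; the field names, rows, and the 30% review floor are all hypothetical.

```python
# Audit a (made-up) voice-recognition training set for representation skew.
from collections import Counter

training_rows = [
    {"speaker_accent": "US", "audio": "clip_1.wav"},
    {"speaker_accent": "US", "audio": "clip_2.wav"},
    {"speaker_accent": "US", "audio": "clip_3.wav"},
    {"speaker_accent": "UK", "audio": "clip_4.wav"},
]

counts = Counter(row["speaker_accent"] for row in training_rows)
for group, n in counts.items():
    share = n / len(training_rows)
    print(f"{group}: {share:.0%}")
    if share < 0.30:  # a review floor someone must choose and defend
        print(f"  warning: {group} may be under-represented")
```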


We’re training these machines to make very human – and vital – decisions: loan, job, college, and insurance applications, plus criminal investigations via facial and voice recognition. We’ve already gone over the flaws in algorithms, so what of the datasets training the programs?


Machines and software cannot be unbiased, because they are created by humans. And humans have biases. The problem here isn’t that humans have biases, but that we program those biases into our technologies.


Balancing innovation and responsibility: Navigating the duality of technology advancement


One of the greatest gifts that software development has given the world is its willingness to move fast, break things, and celebrate failure. After all, while these habits weren’t always celebrated, they have long been pillars of innovation.


But every great asset can also cast a dark shadow. How do we keep the asset and minimize the dark side? This is the opportunity we have before us – to ask this question of technology. All technology – not just AI, but application systems, facial recognition, and the rest. Ask the people behind our technologies whether they’re cross-checking for bias in development. Ask whether they’ve accounted for enough inclusion in the datasets used to train large learning models.
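
What might such a cross-check look like in practice? One simple version is to compare outcome rates across groups before shipping, and to stop for human review when the gap is large. The sketch below uses hypothetical data and a made-up 10% review threshold.

```python
# Compare approval rates across two (hypothetical) groups of applicants.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"approval gap between groups: {gap:.0%}")  # 33%
if gap > 0.10:  # a threshold a human must choose and defend
    print("flag for human review before release")
```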


I’d like to think that someone reading this isn’t just a user but someone working in product, testing, or development – someone who can ask these questions as the work is being done, as boundaries are being pushed and features are being released. As we build the future we desire with spirited and ferocious speed, will you invite people who are different from you to test and implement?


How about the software engineers building the algorithms that aggregate, sort, and otherwise deliver information on your favorite app, social network, or search engine? Are we considering their stress levels, so that they have a better chance of not operating out of their own cognitive biases while programming?


Cognitive bias is a very human, primal, and vital part of our existence. In its most basic function it keeps us alive in the face of danger, but in its most corrupted function it is the root of sinister things like racism. I hope that as we move fast and break things we can recognize our own differences – that no two people or groups are the same. To build the better future we all want, we must keep asking questions while building things that democratize and globalize information. If these technologies are to be sustainable tools for humanity, we have to build relationships between the user and the builder. We can see how technologies can be made better. And it all starts with us, humans – humanity.


Learn more about Will Soprano on LinkedIn and his personal blog.



Will Soprano, Executive Contributor, Brainz Magazine

From writer to all things dev & tech, Will has spent a lifetime trying, failing, learning, and growing. In nurturing his ability as a writer, he found he had a knack for supporting software developers and connecting organizations across functions. As his career arc was hitting its first peak, he found himself broken physically, emotionally, and professionally. That was the beginning of his personal growth. After years of trial and error, he finally realized that sobriety was the answer. Nearly four years sober, he’s not just a new person socially but professionally as well. The mental health community and his professional peers have responded to his authenticity and willingness to serve.
