
Societal AI

Written by: Salim Sheikh, Executive Contributor

Executive Contributors at Brainz Magazine are handpicked and invited to contribute because of their knowledge and valuable insight within their area of expertise.

 

This article seeks to promote an understanding of the potentially transformative impacts and consequences of Artificial Intelligence (AI) on people and society. AI — actually an “umbrella term” encompassing automation, machine learning, robotics, computer vision and natural language processing — touches every aspect of our lives, from customer service, retail, education and healthcare to autonomous cars, industrial automation, and more. It has become increasingly integrated into our society, automating tasks and accelerating computational and data analytics-based solutions whilst also assisting (and in some cases, displacing) humans in decision making.

For some, AI is synonymous with terms like ‘the fourth industrial revolution’ (or “4IR”). Interestingly, previous industrial revolutions have brought about huge social and economic change, giving rise to greater financial opportunity, more time for leisure activities, and so on.


At the same time, AI is also fuelling anxieties and ethical concerns. There are questions about the trustworthiness of AI systems, including the dangers of codifying and reinforcing existing biases, such as those related to gender and race, or of infringing on human rights and values, such as privacy. Concerns are growing about AI systems exacerbating inequality, climate change, market concentration and the digital divide.


AI is, in fact, a new technology that promises and delivers great benefit to portions of society while harming certain groups. This is often referred to as “unintended consequences”.


The potential benefits of AI to society will be blunted if human biases find their way into coding. Hence, engineers tasked with designing AI algorithms and developing “intelligent systems”, and the like, should accept more responsibility for considering potential unintended consequences of their work.


A start in this direction would be to integrate social sciences into engineering and computer science curricula. The key word here is “integration”. Students from both disciplines would greatly benefit by learning from each other, as would the faculty members who assemble the course syllabus and deliver lectures and seminars. Students majoring in the social sciences might discover interesting technological issues of societal importance while engaged in projects with engineering students. Likewise, engineering students would learn to value the social dimensions of innovation and gain a heightened awareness of potential unintended consequences of their work on both society and our environment.

We should be well past the days when the development of technology is separated from human needs, desires, and behaviour. This is why engineers should engage with the social sciences, and vice versa.


One important question we all need to seriously consider is “How can society regulate the way AI (and emerging technology) alters, augments, and enhances our lives – safely?”.

The response is not simple; it requires new paradigms, language, and regulatory frameworks that promote the idea of Artificial Social Intelligence. Hence, I have coined the term “Societal AI”, which represents AI as a domain underpinned by principles and laws that govern social interactions between humans and AI.


“Societal AI” is about incorporating human-centred perspectives and humane requirements (including constraints) when designing AI algorithms, agents, and systems. This concerns not only the capabilities of AI technology but, more importantly, what we are doing with it: potentially meshed with the behaviours, attitudes, intentions, feelings, personalities, and expectations of people.


At the same time, we cannot afford to leave important decisions and principles that affect fairness, accountability, transparency, and ethics (FATE) to businesses, governments, and policy writers. Instead, as citizens, we must influence how AI is leveraged to help shape and influence “AI for Social Good” – for the benefit of society.


Rise of AI and Digitalisation


Explainable AI


Research into Explainable Artificial Intelligence (or XAI) has been rising rapidly in direct response to calls for increased transparency and trust in AI. This is mainly due to AI being used in sensitive domains with societal, ethical, and safety implications. With this increased sensitivity and increased ubiquity come inevitable questions of trust, bias, accountability, and process — i.e., how did the AI system come to a certain conclusion?


This affects a diverse set of industries and domains including autonomous driving, weather simulations, medical diagnosis, conversational systems (chatbots and digital assistants), facial recognition (and, worryingly, “deep fakes”), business process optimisation (through AI-assisted Robotic Process Automation or RPA), and cybersecurity.


Only interpretable models can be explained, and explainability is paramount when decision-making in medicine (diagnosis, prognosis, etc.) must be conveyed to humans.


Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. While limitations still exist, research in XAI continues to evolve with techniques and methods trending in areas such as data visualisation, query-based explanations, policy summarisation, “human-in-the-loop” collaboration and verification.
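
To make the idea of an explainable model concrete, the sketch below trains a deliberately shallow decision tree and prints its learned rules as human-readable text. It is a minimal illustration only, assuming scikit-learn is available; the bundled breast-cancer dataset stands in for a medical decision-support problem and is not drawn from the research described above.

```python
# Minimal sketch: an inherently interpretable model whose decision logic can be
# rendered as readable if/then rules. Dataset and parameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0
)

# A deliberately shallow tree: it trades some accuracy for rules a clinician can read.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# export_text renders the learned if/then rules, i.e. a global explanation of the model.
print(export_text(model, feature_names=list(data.feature_names)))
```

The design point is the trade-off itself: constraining the model (here, a depth limit) keeps every prediction traceable to a short chain of conditions that a human can verify.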


A major limitation of the research is that many approaches were either not tested with users or, when they were, limited details of the testing were published, failing to describe where the participants were recruited from, how many were recruited, their ethnicity, gender, and so on.


Not only do we need to contend with specific issues affecting the use of AI such as fairness, privacy, and anonymity, explainability and interpretability, but also broader societal issues, such as ethics and legislation.


Ultimately, the true value of AI will be determined by the humans who design and use it — requiring new or extended legislation, policies, procedures, and use cases. Human action and innovation will determine how, how far, and for whom “Societal AI” is leveraged.


Responsible AI


The emergence of AI has been accompanied by rising public anxiety concerning its potentially damaging effects: for individuals, for vulnerable groups and, more generally, for society. If AI is to be a force for good which enables, rather than undermines, benefits for individuals and society, then it is imperative that we acquire a deeper understanding of these concerns as well as understand our responsibility for any adverse consequences.


This gives rise to three defining principles of “Responsible AI,” namely,


1. Accountability

  • Explaining and answering for one’s own actions.

  • Explanation and justification in terms of social values and norms.

  • Associated with liability.

2. Responsibility

  • More than a “tick box” exercise of ethical questions and considerations.

  • AI systems should be capable of answering for their decisions and identifying errors or unexpected results.

  • As the “chain of responsibility” grows, the actions of each stakeholder must be transparent, along with fair use of data (free from bias, manipulation, etc.).

3. Transparency

  • Methods are needed to inspect algorithms and their results to address “black box” issues (a minimal sketch of one such method follows this list).

  • Data governance mechanisms are required to ensure that data used to train algorithms and guide decision-making is collected, created, and managed in a fair and clear manner whilst enforcing privacy and security.

  • Not just limited to algorithms but also to applications, processes, and people.
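
As a minimal sketch of the inspection method mentioned in the Transparency bullet above, the example below probes an opaque model with permutation importance: each feature is shuffled in turn and the drop in held-out accuracy is recorded. It assumes scikit-learn; the dataset and model choice are illustrative stand-ins, not a prescribed audit procedure.

```python
# Minimal sketch: post-hoc inspection of a "black box" model via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train an opaque ensemble model as the "black box" under inspection.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Shuffle each feature in turn and record how much held-out accuracy drops.
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model leans on most heavily.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]:<25} mean accuracy drop: {result.importances_mean[i]:.3f}")
```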

Experts at Harvard University, writing on “Explanation and the Law”, identify three approaches to improving the transparency of AI systems, and note that each entails trade-offs. See the table below.


These approaches are:

  1. theoretical guarantees

  2. empirical evidence

  3. explanation

1. Theoretical guarantees

  • Description: In some situations, it is possible to give theoretical guarantees about an AI system, backed by proof.

  • Well-suited contexts: The environment is fully observable (e.g., the game of ‘Go’) and both the problem and solution can be formalised.

  • Poorly suited contexts: The situation cannot be clearly specified (most real-world settings).

2. Statistical evidence / probability (empirical evidence)

  • Description: Empirical evidence measures a system’s overall performance, demonstrating the value or harm of the system, without explaining specific decisions.

  • Well-suited contexts: Outcomes can be fully formalised; it is acceptable to wait to see negative outcomes in order to measure them; issues may only be visible in aggregate.

  • Poorly suited contexts: The objective cannot be fully formalised; blame or innocence can be assigned for a particular decision.

3. Explanation

  • Description: Humans can interpret information about the logic by which a system took a particular set of inputs and reached a particular conclusion.

  • Well-suited contexts: Problems are incompletely specified; objectives are not clear and inputs could be erroneous.

  • Poorly suited contexts: Other forms of accountability are possible.

Source: adapted from Doshi-Velez et al. (2017), “Accountability of AI under the law: The role of explanation”, https://arxiv.org/pdf/1711.01134.pdf.


Given how rapidly society is advancing, we require a range of legal ‘models of responsibility’ to help manage highly complex socio-technical systems that involve “many hands”, i.e., multiple organisations, individuals, and interacting software and hardware components.


These models will be based on:

  • intention/culpability

  • risk/negligence

  • strict responsibility

None of these models are self-evidently the ‘correct’ or ‘best’ model for allocating and distributing the various threats, risks and harms associated with the operation of advanced digital technologies.


Before we conclude on the topic of Responsible AI, it is important to also break down issues relating to bias – which takes several forms, as outlined below.


(AI) Bias


AI bias, also known as algorithmic bias, is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous or incorrect assumptions in the data analysis process or from the use of incomplete, faulty, or prejudicial data sets to train and/or validate AI systems.


AI bias often stems from problems introduced by the individuals who design and/or train AI systems and often leads to the creation of algorithms that reflect unintended cognitive or social biases or prejudices. AI systems are making their way into the military, banking, and bio-medical sectors, assisting humans on an ongoing basis.


We can classify the source of bias in AI systems in three ways.

  • Bias in the data

  • Bias in the humans

  • Bias in the process

This is expanded further in the sub-sections below.

Bias in the data


When the data sample does not represent all the dimensions of the real-world data, there is a strong chance that the algorithm will produce biased output, because it can only learn from the data it was trained on.
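
As a minimal, illustrative sketch of this point, the snippet below checks a training sample for two simple warning signs: under-representation of a group and markedly different outcome rates across groups. The field names, toy records and the 80% rule-of-thumb threshold are assumptions made for illustration, not a formal fairness standard.

```python
# Minimal sketch: two quick checks for representation bias in a training sample.
from collections import Counter

# A toy training sample with a protected "group" attribute and a binary label.
training_sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

# 1. Representation: how much of the sample does each group make up?
counts = Counter(row["group"] for row in training_sample)
total = sum(counts.values())
for group, n in counts.items():
    print(f"group {group}: {n / total:.0%} of the sample")

# 2. Outcome rates: does the positive label occur at very different rates per group?
rates = {
    g: sum(r["label"] for r in training_sample if r["group"] == g) / counts[g]
    for g in counts
}
print("positive-label rate per group:", rates)

# An illustrative 80% rule of thumb: flag the sample if one group's rate is far below another's.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: outcome rates differ markedly; a model trained on this sample may reproduce the disparity.")
```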


Bias in the humans


Individuals who train the algorithms have their own biases, which are often closely tied to their ethnic, cultural, and linguistic values. Many of these biases involuntarily enter into AI training, so these individuals can create algorithms that reflect unintended cognitive or social biases or prejudices, resulting in biased output.


Bias in the process


When the AI training process does not meet certain requirements or criteria, there is a significant chance that the algorithms will produce biased output. For example, an algorithm predicting weather conditions in the United Kingdom cannot be trained on weather data collected from South America. The AI training process should therefore be continuously monitored and follow specific protocols.
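
One process check implied by the weather example is to compare the training data against the conditions the model will actually face before training begins. The sketch below does this with a two-sample Kolmogorov–Smirnov test on synthetic temperature data; the figures and the decision threshold are purely illustrative assumptions.

```python
# Minimal sketch: detect a mismatch between training data and deployment conditions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-ins: temperatures the model was trained on (UK-like) versus
# temperatures in the region where it will actually be deployed.
uk_training_temps = rng.normal(loc=11.0, scale=5.0, size=1000)
deployment_temps = rng.normal(loc=24.0, scale=4.0, size=1000)

# Two-sample Kolmogorov–Smirnov test: how different are the two distributions?
result = ks_2samp(uk_training_temps, deployment_temps)
print(f"KS statistic: {result.statistic:.2f}, p-value: {result.pvalue:.3g}")

# An illustrative threshold: a large statistic means the training data does not
# reflect deployment conditions, so the model should be retrained or re-sampled.
if result.statistic > 0.1:
    print("Distribution shift detected: training data is unrepresentative of deployment.")
```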


Employment and Social Stability


The rise of AI, and in particular digitalisation, has major implications for labour markets. Assessing the impact of AI will be crucial for developing policies that promote efficient labour markets for the benefit of workers, employers, and societies as a whole.

In its Future of Jobs Report 2018, the World Economic Forum cited one set of estimates indicating that while 75 million jobs may be displaced, 133 million could be created to adapt to “the new division of labour between humans, machines and algorithms”.


AI algorithms and systems can affect employment in two main ways:

  1. by directly displacing workers from tasks they were previously performing (displacement effect)

  2. by increasing the demand for labour in industries or jobs that arise or develop due to technological progress (productivity effect).

As businesses’ reliance on AI increases, it is clear that a redistribution of labour is inevitable. To deal with the shift in skills that this implies, retraining the workforce is critical.


Presently, AI can replace human labour in routine tasks, whether manual or cognitive, but (as yet) cannot replace human labour in non-routine tasks. The media is correct in claiming that the next wave of AI will revolutionise medicine, law, finance, and transportation by processing data more efficiently than humans.


Demand for ‘middle management’ roles is falling as more businesses seek to shift to “flatter, faster, leaner” organisational structures by leveraging AI and automation for greater efficiency. The COVID-19 pandemic has further exposed the need for greater operational speed, requiring critical decision-making on the ground in real time. Thus, traditional middle management roles — required to communicate, direct, and control — have been increasingly displaced and made redundant.


This gives rise to questions about the future of leadership and the purpose of a “leader”.


One thing is clear: businesses can no longer afford to be bureaucratic, with hierarchies devoted to compliance and positional power. Instead, employees who are hands-on contributors will be capable of being fast-tracked to team leadership and manager roles — incentivising individuals to do more and thereby leading to a more dynamic and loyal workforce.


Conversely, the rise of AI could potentially lead to the development of new jobs. To enable this, we need to deepen our understanding by promoting further social dialogue among all involved parties (researchers, policymakers, industry representatives and trade unions, governments, politicians and so on). This is a vital first step to better grasp the challenges and opportunities of this new industrial revolution. We must act swiftly to fully assess and understand the implications of AI. After all, the speed with which AI technology advances may introduce disruptive forces in the market earlier than some people expect.


Privacy and Personal Liberty


AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools will have a negative impact on human rights.


As AI evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed. This technology also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data.


The discussion of AI in the context of the privacy debate often brings up the limitations and failures of AI systems, such as Amazon’s failed experiment with a hiring algorithm that replicated the company’s existing disproportionately male workforce.


Different applications and uses of AI can affect the right to privacy in different ways:

  • AI-driven consumer products and autonomous systems are frequently equipped with sensors that generate and collect vast amounts of data without the knowledge or consent of those in their proximity.

  • AI methods are being used to identify people who wish to remain anonymous.

  • AI methods are being used to infer and generate sensitive information about people from their non-sensitive data (a minimal sketch of this risk follows this list).

  • AI methods are being used to profile people based upon population-scale data.

  • AI methods are being used to make consequential decisions using this data, some of which profoundly affect people’s lives.
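
To illustrate the inference risk flagged in the list above, the sketch below trains a simple classifier to recover a withheld “sensitive” attribute from two seemingly innocuous proxy features. The data is synthetic and the feature names are hypothetical, chosen only to show how easily proxies can leak sensitive information.

```python
# Minimal sketch: inferring a sensitive attribute from "non-sensitive" proxy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# A "sensitive" attribute that individuals never disclosed directly.
sensitive = rng.integers(0, 2, size=n)

# Seemingly innocuous proxy features that happen to correlate with it
# (hypothetical names: coarse postcode area and typical hour of online activity).
postcode_area = sensitive * 3 + rng.integers(0, 4, size=n)
active_hour = 20 - sensitive * 4 + rng.normal(0, 2, size=n)
X = np.column_stack([postcode_area, active_hour])

# An "attacker" model trained only on the proxies.
X_tr, X_te, y_tr, y_te = train_test_split(X, sensitive, random_state=0)
attack = LogisticRegression().fit(X_tr, y_tr)

print(f"Sensitive attribute recovered with {attack.score(X_te, y_te):.0%} accuracy "
      "from 'non-sensitive' proxy data.")
```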

Each of the above impacts privacy in significant ways: privacy is indispensable for the exercise of a range of human rights, such as freedom of expression and freedom of association, and is fundamental to personal autonomy, freedom of choice, and broader societal norms.


Advocates and authorities using the international human rights framework are increasingly recognising and acknowledging the impact that new forms of data processing have on fundamental rights, including the right to privacy. With respect to profiling, for example, which may involve the use of AI methods to derive, infer or predict information about individuals for the purpose of evaluating or assessing some aspect about them, the United Nations Human Rights Council noted with concern in March 2017 that ‘automatic processing of personal data for individual profiling may lead to discrimination or decisions that otherwise have the potential to affect the enjoyment of human rights, including economic, social and cultural rights.’


International human rights authorities have also moved towards recognising a right to anonymity under the rights to privacy and freedom of opinion and expression. This has implications for AI used to identify individuals online, in their homes and in public spaces. The UN Special Rapporteur on Freedom of Expression, for instance, has repeatedly identified this relationship and emphasised that state interference with anonymity should be subject to the three-part test of legality, necessity, and proportionality, as is any other interference with these rights.


To protect the liberties of individual citizens and communities, we call upon civil society to:

  • Actively engage further: Ensure the mitigation of any potential negative impact on fundamental rights like freedom of expression and privacy. This will likely involve a detailed understanding of AI, the actors developing it, and the context in which it is deployed.

  • Collect case studies of ‘human rights critical’ AI: It is vital to collect case studies to truly comprehend the countless ways in which AI impacts human rights. These case studies should include examples from around the world.

  • Build civil society coalitions and expertise networks: The digital age and advances in AI have upended many social norms and structures that evolved over centuries. Principal among these are core values such as personal privacy, autonomy, and democracy. This presents new dangers to social values and constitutional rights. Therefore, it is important to develop knowledge-exchange initiatives and facilitate joint strategy development amongst organisations operating across society. So far, academia and industry have taken the lead in moving the debate on the societal impact of AI forward. While individuals and society play a crucial role in these debates, the voices of those working on AI also need to be heard to improve understanding of AI’s impact on civil liberties.

Societal AI


While AI holds great promise for society, the speed of its advancement has far outpaced the ability of businesses and governments to monitor and assess the outcomes properly.


AI technologies have the potential to do so much good in the world: identify disease in people and populations, discover new medications and treatments, make daily tasks like driving simpler and safer, monitor and distribute energy more efficiently, and so many other things we have not yet imagined or been able to realise. Yet replicating human capabilities with data and algorithms has ethical consequences. Algorithms are not neutral; they replicate and reinforce bias and misinformation. They can be opaque. And the technology and means to use them rest in the hands of a select few, at least today.


Conversely, autonomous AI is a form of power that can be abused by powerful people to control others, even to enslave them. Applying the principles of Marcus Fabius Quintilianus to the role of AI, we should propose a code of ethics for AI to evaluate whether each type of application is oriented toward the well-being of the user:

  1. do not harm the user,

  2. benefits go to the user,

  3. do not misuse her/his freedom, identity, and personal data, and

  4. decree as unfair any clauses alienating the user’s independence or weakening his/her rights of control over privacy in use of the application.

The sovereignty of the user of the system must remain total.


Without these types of principles, the relative ambiguity of regulatory oversight throughout the world prevents AI from directly reflecting society’s needs. It is important that organisations take steps to enable and highlight trustworthiness to all stakeholders and build the reputation of the organisation’s AI.


Broadly democratic societies with an emphasis on human rights might encourage regulations that push AI in directions that help all sectors of the nation. By contrast, authoritarian societies will set agendas for AI that further divide the elite from the rest of civil society and use autonomous AI to cultivate and reinforce divisions. We see both tendencies today; the dystopian one has the upper hand especially in places with the largest populations. It is critical that people who care about future generations speak out when authoritarian tendencies of AI appear.


We are already seeing a decline in democratic institutions and a rise in authoritarianism due to economic inequality and the changing nature of work. If we do not start planning now for the day when AI results in complete disruption of employment, the strain is likely to result in political instability, violence, and despair.


“Societal AI” posits that human-machine interaction will result in increasing precision and decreasing human relevance unless specific efforts are made to design in ‘humanness’.


For instance, AI in the medical field will aid more precise diagnosis, will increase surgical precision, and will increase evidence-based analytics. If designed correctly, these systems will allow humans to do what they do best – provide empathy, use experience-based intuition, and utilise touch and connection as a source of healing. If human needs are left out of the design process, we will see a world where humans are increasingly irrelevant and more easily manipulated.


We could see increasing under-employment leading to larger wage gaps, greater poverty and homelessness, and increasing political alienation. We will see fewer opportunities for meaningful work, which will result in increasing drug and mental health problems and the further erosion of the family support system. Without explicit efforts to humanise AI design, we will see a population that is needed for purchasing, but not for creating. This population will need to be controlled, and AI will provide the means for this control: law enforcement by drones, opinion manipulation by bots, homogeneous communities and cultures through synchronised messaging, election systems optimised from big data, and GPS systems dominated by corporations that have benefited from increasing efficiency and lower operating costs.


If we truly want AI to act as humane technology that enables and betters life for everyone in our society, a greater understanding of its uses and applications is necessary. Moreover, this will require contributions and participation from researchers and practitioners alike in a variety of fields including (and not restricted to) current and emerging information technologies, humanities, social sciences, arts, and sciences (i.e., physical sciences, earth sciences and life sciences).


Closing Thoughts


Artificial Intelligence (AI) is a permanent reality of our everyday lives; its application and use will undoubtedly further enrich the lives of all people across society. We must all seek ways to enforce transparency and accountability.


With constant change becoming the new normal, people across society need to better understand the potential of AI to continuously innovate and enable digital living.


Ultimately, “AI must serve people, and therefore, AI must always comply with people's rights” – as stated by Ursula von der Leyen in a speech about “Shaping Europe’s Digital Future”.


While veteran leaders and experts may have the benefit of experience, they are weighed down by legacy beliefs. Many of their assumptions about customers, technology, and the competitive environment were forged years or decades earlier, and reflect a world that no longer exists.


Conversely, while modern-day AI designers can upload all forms of information to AI agents and systems, AI is still a “machine” and a “tool”. That said, the evolution of AI must proceed with extreme caution.


But how? We need to adopt “pay it forward” thinking and also reframe the problem — the goal is to maximise contribution, not compliance. We also need to embed new human-centric principles in every structure, system, process, and practice. If we are truly serious about creating a future built on “Societal AI” that is fit for human beings and fit for the future, nothing less will do.


Before we wrap up, ask yourself the question: what more can (or should) you do to influence the role of AI and its impact on society? How can you help shape “Societal AI”?


Consider for a moment the implications of not democratising AI. Stay with me. It is analogous to voting, which is at the heart of democracy.


Imagine it is Election Day and you have not yet cast your vote. While you might think your vote will not affect the outcome of the election in any significant way, in reality, every vote is important.


As democracy means ‘government by the people’s consent’, every citizen has an opportunity to express their voice, to choose their representatives, and take a stance on social and political issues.


More than half of all citizens in the world are currently able to exercise the right to elect their leaders. Libertarian opponents of voting emphasise that ‘in a free society everyone has the right not to vote if they so choose’. However, not voting could ultimately lead to decisions and policies that potentially under-represent or, worse, misrepresent the rights and concerns of all citizens.


The bottom line? No single country or government has all the answers to these challenges. We therefore need international co-operation and multi-stakeholder responses to guide the development and use of AI for the wider good. Otherwise, AI may potentially be anything but responsible, ethical, or fair.


Follow me on LinkedIn, Twitter and visit my website for more info!

 

Salim Sheikh, Executive Contributor Brainz Magazine

Over the past 25 years, Salim has built a career in consulting, working both client- and supplier-side as an interim CIO/CTO and a Business Change / Transformation Consultant, facilitating digital and technology transformation programmes that have included rescue & recovery ("turnaround"), process optimisation & improvement, and organisational change – across diverse industries in the UK, Europe, the Nordics, Turkey, the UAE, the US, and Australia.


Salim is an Oxford University alumnus who also has strong academic roots in Artificial Intelligence (AI). He is a mentor in the “Responsible Tech Program” managed by “All Tech Is Human”, where he advocates “AI for Social Good” and “AI for All”.


He authored "Understanding the Role of Artificial Intelligence and Its Future Social Impact" which is available via IGI Global (https://bit.ly/34cfJVf) and Amazon (https://lnkd.in/gbk-zba).
