Research into artificial intelligence (AI) has splintered into many factions since the field took off in the 1950s. Perhaps the classical view of AI is that it mimics the cognitive functions of real-world problem-solving: systems that think, and even learn, like humans over time. We can therefore evaluate AI systems on how intelligent they are: whether they can solve narrowly defined or broad problems, and whether they work as independent machines or require human partners.
Nowadays, AI is at work in increasingly autonomous and inhuman ways. For instance, algorithmic systems make decisions in many areas of people’s daily lives, whether they know it or not. Algorithms are sets of rules or instructions designed to solve a specific problem or to perform a specific action or computation. They routinely shape human behaviour: they can determine what news we consume, which online content we view, what purchases we make, the jobs we take, the holiday destinations we visit, the friends we make and the relationships we forge. Streaming and social media platforms, such as Netflix, Facebook, and Spotify, arguably exert great power in shaping public opinion and therefore our societal trajectory. By mediating humanity in this way, algorithms are becoming a ubiquitous part of human discourse.
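To make the idea concrete, the kind of recommendation algorithm described above can be sketched in a few lines. The catalogue, tags and scoring rule here are entirely hypothetical; real platforms combine far richer signals, but the principle, a fixed rule ranking content for a user, is the same.

```python
# A minimal, hypothetical sketch of a content-recommendation algorithm:
# rank unseen items by how many tags they share with the user's history.

def recommend(watch_history, catalogue, top_n=2):
    """Return the top_n unseen items, ranked by tag overlap with history."""
    seen_tags = {tag for item in watch_history for tag in catalogue[item]}
    scores = {
        item: len(seen_tags & tags)
        for item, tags in catalogue.items()
        if item not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical catalogue: item name -> set of descriptive tags.
catalogue = {
    "drama_a":  {"drama", "romance"},
    "drama_b":  {"drama", "crime"},
    "doc_a":    {"documentary", "nature"},
    "comedy_a": {"comedy", "romance"},
}

print(recommend(["drama_a"], catalogue))  # → ['drama_b', 'comedy_a']
```

Even this toy rule illustrates the essay’s point: the user who watched one drama is steered toward more of the same, without ever being shown why.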
The kickback is that as algorithms become more automated and personalised, the user becomes increasingly unaware of what goes on inside the black box that is, for example, our smartphone. Surely, we need to explain to users how and why decisions are taken in the digital universe? Much of the research and development into AI seemingly takes place ‘behind closed doors’. To counter this, let us propose that AI is a servant of humanity and should therefore be fully accountable to it. So let us develop AIs ‘out there’, as an integral part of public discourse; that is, a discourse which carries with it our broadest societal values and goals.
We might also do well to make AIs explainable to users at a local level. Explainable AI (XAI) aims to make a system’s black box transparent and accountable to its users. Problematic algorithmic bias, such as bias toward age, race or gender, can be exposed, for example. XAI can also explain decision-making processes; this is critical for doctors and patients who need assurance about a medical diagnosis, for example. The specifics of an explanation are best defined by who will receive it, why it is required and the intended context of its delivery.
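One way to picture a local explanation of the kind XAI promises is a model whose per-feature contributions to a single decision can be read off directly. The sketch below assumes a hypothetical linear scorer for a loan decision; the feature names, weights and threshold are invented for illustration, and real explanation methods (for genuinely opaque models) are far more involved.

```python
# A minimal, hypothetical sketch of a local explanation for one decision:
# a linear scorer whose signed per-feature contributions are the explanation.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # scores at or above this are approved

def decide_and_explain(applicant):
    """Return a decision plus each feature's signed contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # Rank features so the most influential appear first in the explanation.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

decision, reasons = decide_and_explain(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0}
)
print(decision)  # score is 0.5*3 - 0.8*1 + 0.3*2 = 1.3, so "approve"
for feature, contribution in reasons:
    print(f"{feature}: {contribution:+.1f}")
```

The output tells the applicant not just the decision but which features drove it and in which direction, which is precisely the kind of account a doctor or patient would want from a diagnostic system.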
While this sounds promising, explaining algorithms in human terms remains hugely challenging. Yes, the AI community can provide explanations of black-box systems; the challenge is to make those explanations intelligible to users. Scientists argue that explainability must be traded off against, for example, a system’s accuracy and its ability to resist security attacks. XAI therefore remains ineffectual, for it fails to produce meaningful human explanations of complex machine operations.
Surely, we must remember the user, and the interaction which emerges between human and computer? Currently, very few research papers in AI mention ‘user’ in their titles. Fortunately, new research is urging system designers to consider causability: the extent to which a system’s explanation achieves an effective, efficient and satisfactory understanding in a specific context of use. Causability is a property of users: a measure of the quality of use. From this human-factors perspective, little is known about algorithmic explainability. The promise is that when explainability and causability are placed in a symbiotic relationship, human and computer are once again listening to each other.

Early AI pioneers like Alan Turing designed mechanical and electrical systems which helped humanity achieve its goals faster. Today, digitally mediated persons can unknowingly surrender their gender, ethnicity, physique, what they wear, what they do and where they go to corporations and governments; the face-recognition and GPS algorithms which collect this data are ubiquitous. AI must surely do much better at preserving a person’s interests rather than depriving them of their civil liberties. Our humanity is surely the most precious thing of all.