Can we ever achieve a shared understanding between humans and machines? Jonas Ivarsson* says that, depending on how we approach understanding, the short answer is both yes and no.
At the core of this issue lies humanity’s complicated relationship with technology, so a careful examination might teach us something about ourselves.
The nature of common understanding fundamentally connects to who we are as humans and how we create meaning in our lives.
If a lion could speak, we would not understand it. Such was the philosopher Ludwig Wittgenstein’s take on the disjuncture between life-worlds.
The everyday lives of our two species are so fundamentally different that very little meaning would carry across the divide.
Even if words were present, insurmountable obstacles would remain.
Today it is the inner lives of machines that are up for discussion.
Specialized technologies have been gifted with language—previously a distinctly human capacity.
These developments are currently making headlines far outside academic circles, bringing old questions from the philosophy of AI to the fore: Can machines be conscious or aware?
This proposition spurs the imagination and invites speculation on deep existential and ethical mysteries.
If we look past the buzz and the puzzle of sentience, there’s another exciting discussion to have here.
It begins by rephrasing Wittgenstein’s formulation about the lion: If a machine could speak, would we understand it?
Or rather, now that we have machines that talk with us, what forms of understanding can be had between our different modes of existence?
Sorting this out, of course, turns on the entire issue of what constitutes common understanding.
We can approach the problem in two ways, practically and conceptually.
Without giving it much thought, we do this daily, not as amateur philosophers but simply as active social beings.
According to one school of thought, our understandings are constantly displayed through the responses we furnish each other in interaction.
We show how we interpret one another in the replies that we offer.
Upon entering a hot room, I might say something like, “Is it warm in here?”
My comment may evoke various responses, closer to or further from my intentions.
Some could take it as a request for an opinion, while others might reply in terms of a thermometer reading.
However, even though my complaint about the temperature was formatted as a question, it could also be treated as a request, where the appropriate response is an action rather than a verbal reply.
If you turn to open the window to regulate the heat, as we still do here in the Nordic countries, my feeling would be that you understood me.
Your actions, then, are the only evidence I need to assure me of our shared understanding.
I don’t have to peek inside your skull to know that you have understood me, for all I would find there is a brain.
In this view, our common understanding is a social phenomenon that emerges in these fleeting moments of interaction.
It is constantly created, lost, and recreated as we move through time together, one action following another.
By this account, computers can, at times, exhibit the forms of interactional understanding that we expect of other social beings.
The machine’s responses, appropriately administered, can give me the same feelings of connectedness, similarity of perspective, and shared understanding that I have with a fellow human.
Such interactions can be practical, therapeutic, or joyful.
In other words, they may be meaningful to us.
So, would this settle the matter then? Am I saying that computers can understand us? Well, not quite.
There is still another school of thought voicing arguments from ordinary language philosophy.
When we deploy our concepts, they are usually restricted to certain types of subjects.
While a musical recording may sound much like an orchestra, we would never think to speak about the musical skills of such a recording.
Without poetic license, we don’t use language in that twisted way.
Skill is a form of attribute reserved for living beings; it is simply not meaningfully applicable to inanimate objects.
Similarly, this argument has been made in connection with intelligence or thinking.
Accordingly, a “thinking machine” is the oxymoron of our times.
Alan Turing famously opposed this reservation and argued that historical linguistic biases should not blind us.
The fact that we have not observed something in the past is no guarantee that we will never face it in the future.
The borderline cases of robots and talking machines now confound what used to be a simple separation.
Nevertheless, the conceptual check on the type of agent we want to attach our attributions to remains significant.
When we try to understand one another, categorizing our interactional partners is a valuable method to assess our situation.
Who the other party is, their age, cognitive capacity, motives, and interests, along with what activity we are engaged in, are all potential resources for making sense.
Categorizing an unknown person as either a con artist or a police officer will significantly impact my ability to understand whatever they are trying to communicate.
Failing to take such information into account could have dire consequences when I decide whether to accept or ignore an invitation from that individual.
The take-home message is that meaning is not confined merely to the words delivered; it must also feed on contextual information.
What, then, to do with these concerns when interacting with machines?
What is the appropriate categorization of something like LaMDA, GPT-3, or whatever comes around the next corner?
This is where we all struggle.
Some refuse to accept these systems as anything more than code executed on silicon chips.
Consequently, the ultimate question of shared understanding falls apart like a house of cards.
Others, admittedly fewer, embrace the idea of an extended sense of being.
In their view, the proof of sentience is sitting in plain sight.
Expressions about the fear of dying are to be respected, even if they come from a machine.
If the trickster is acting amicably, who am I to turn down their friendship?
*Jonas Ivarsson is a professor of informatics with a background in cognitive science, communication studies, and education.
This article first appeared at bdtechtalks.com