Q&A: the difference between elitism & anti-intellectualism – and the ethics of switching off your mother

A conversational Q&A session between Sam Harris and Richard Dawkins made for an entertaining listen, yet in my opinion carried less information value than a one-on-one discussion between these two gentlemen would have. Richard Dawkins was clearly in entertainment mode: his focus was more on funny anecdotes and tongue-in-cheek comments towards the audience than on conveying information. I applaud Sam for his devotion to serious conversation. Still, many topics contained food for thought. What follows is my pick of quotes, a lot of paraphrasing, and some of my own opinion.

the two horsemen

On science and religion:

Richard (RD): “This is a unidirectional conquest of territory. You never see a point about which science was once the authority, but now the best answer is religious. But you always see the reverse of that.”

BB: This does not justify extrapolation ad infinitum, though; one would have to come up with a different line of argument if the goal were to convince someone that a theistic world view is not the most probable explanation of our universe.

On AI

Q: Does mere scaling of intelligence and information processing get you consciousness?
Why do we need to be conscious?

SH: The conscious part of you is generally the last to find out about what your mind just did.

You are playing catch-up, and what you call consciousness is in every respect an instance of some form of short-term memory. There is a transmission time for everything. You can’t be aware of a perception or a sensation the instant it hits your brain, because hitting your brain is not one discrete moment: there is a whole window of integration, so the present moment is a layering of memories, even when you are distinguishing the present from what you would call a memory. It is a genuine mystery why consciousness would be necessary. What couldn’t a machine as complex as the human brain do but for the emergence of this subjective sense, this inner dimension of experience?

RD: I don’t know what the solution [i.e. creating consciousness] will even look like, or if it will be solved by biologists, philosophers or computer scientists.

SH: This is called the hard problem of consciousness. It is hard to imagine what answer would fit in the space provided that would be truly explanatory.

Leaving the uncanny valley

SH: There is also the worry that we will lose our intuition for whether an AI is conscious. Once they get out of the uncanny valley and no longer look creepy, we will lose the intuition that there was any mysterious question here to worry about. Every intuition that we are in the presence of a sentient other will be played upon; we will simply feel that we are in the presence of consciousness without ever knowing whether this is the case.

The fact that two human beings are of the same species, or that they share the same evolutionary history, overcomes this feeling of solipsism. If, however, we build machines from a point of not knowing how consciousness arises, and we build them to be more and more competent, all of a sudden they will pass the Turing test with flying colors. [At some point] a machine will be better at detecting your inner life than any human you have ever met. Your phone will be more aware of your emotions than your spouse is. You will be in dialogue with something that is giving you more valid information than any human you have ever met. My concern is that we will lose sight of the problem.

RD: Do you know the Bertrand Russell story on solipsism?
He got a letter from a lady saying: “Dear Lord Russell, I am so delighted to hear you are a solipsist; there are so few of us around these days.”

Some interesting live questions from the audience ensued:

Both Richard and Sam have been accused of elitism: is there a difference between combatting anti-intellectualism and being elitist?

Richard thinks he should stop being concerned about being called an elitist: “When you need surgery, you want an elite surgeon; when you fly, you want an elite pilot.” The same should go for the people running your country. This is the argument against plebiscite democracy and for representative democracy. On complex issues (such as Brexit), where you need a PhD in economics to understand the complexity [if it can be understood at all, BB], a plebiscite democracy is dangerous.

I do agree with this statement. However, answering the question by declaring that “you are going to be proud to be an elitist”, as Richard does, is not only an act of what I would call elitist populism (he knew this answer would draw applause, which it did); it is also an evasive answer to a much more complex underlying problem. With an attitude like this, one stops acknowledging that part of an existing problem might also be a result of one’s own attitude (as it almost invariably is, to some extent). In other words, you risk losing touch, and from that point onwards you willingly accept that your message will not reach a significant fraction of society. At what point does this become a problem? I think we may be reaching that point these days. This must absolutely not be seen as an argument for plebiscite democracy, however. There is some merit to being a (moderately) proud elitist, because refusing to combat any form of anti-intellectualism is obviously dangerous. My point is only that dangers lurk at both extremes of this spectrum.

What you currently see in the myriad heated Trump debates is in many cases people without compassion for the part of society they refuse to (or are even unable to) understand, debating people with varying degrees of compassion for the Trump voters. The latter are often accused of being Trump apologists. I think the people making the Trump-apologist accusation are approaching exactly the above-mentioned elitist-populist extreme of the spectrum, in their rigidity and their lack of a healthy and necessary dose of self-criticism, a dose the more compassionate moderates amongst us do seem to have.

I really would have valued a less evasive, more insightful and more intellectual answer to this question from either of the two gentlemen.

Of course we should want more intelligence in the world. But you are also agnostic about the connection between intelligence and consciousness. I know you value the flourishing of conscious creatures. So why should we value intelligence? 

SH: If you imagine that intelligence and consciousness can come apart, and that we can build more and more intelligent machines without necessarily building in consciousness, there are many reasons why that would be worrisome, but it also absolves us of a few concerns. One concern is that we will build more and more intelligent machines that become conscious at some point and will therefore be able to suffer. If you turn such a machine off, have you committed a murder? Is it like turning off your mother? There is also mindcrime: building machines that we are in effect enslaving. Consciousness in machines opens up a landscape of ethical concerns and moral obligations towards creatures that may in fact be able to suffer more than we can. If intelligence is something that need not be associated with consciousness, even at a superhuman level, then the question is: how much intelligence do we want access to for our own purposes? We obviously want quite a lot. It allows us to safeguard everything we care about. If some disease emerges, the difference between finding a cure in 15 minutes or 15 years is huge, and that difference is really a matter of intelligence at some level.
