I came across this recent post by Mikko Alasaarela - The Rise of Emotionally Intelligent AI. I’m re-posting my comment to it here.
In his post, Mikko Alasaarela reflects on AI and emotional intelligence. He claims that AI is moving fast toward being emotionally intelligent, in a way that might be better and less biased than what we humans employ.
It’s an interesting view, but it rests on a base hypothesis that isn’t very solid:
Our emotions and feelings are organic algorithms that respond to our environment. Algorithms, that are shaped by our cultural history, upbringing and life experiences. And they can be reverse engineered.
Mikko mentions some supporters of this concept of “organic algorithms”, like Yuval Noah Harari and Max Tegmark.
The view of human cognition as an “algorithm”, or in other words, the computational model of cognition, has gained a lot of traction in the last decade. It’s not very surprising. We are, after all, in the age of algorithms.
But that is also the problem. We’re using a metaphor, something we know, to describe something we don’t know. That’s our modern lens: we see through the software/algorithm prism. And it’s quite a good metaphor. It’s useful for a lot of things. It explains things. But then we tend to forget it’s a metaphor.
There is no real evidence that human cognition is a set of “organic algorithms”. Moreover, there’s no evidence to support the assumption that it’s based on what we call algorithms: algorithms that can be reverse engineered by us, that is, expressed in some algorithmic notation that we can produce and reproduce. So much is unknown about consciousness that making this claim is really kind of absurd.
And there is another thing being overlooked. Our emotional intelligence is largely based on, or connected to, our capacity for empathy. We feel what others feel. But we have no clue how to reproduce that, because we have no clue what it means. We know what it’s like to feel. We don’t know what it means. We don’t know what causes it. We don’t know what cognition is. We don’t know what makes a subjective experience. We have the experience; we have not been very successful in understanding it.
So we have no reason to believe an algorithm can reproduce that. At least not until we have some kind of understanding of what that really is.