Emotion In Artificial Intelligence

This was originally a chapter from the book ‘28 Thoughts On Digital Revolution’ by Jonathan MacDonald, Founder of SELF. The first part of the piece was written in 2007, the second in 2011.

---

In 1992, Gerald Tesauro created a programme called TD-Gammon at IBM’s Thomas J. Watson Research Center. The TD part stands for Temporal Difference (a type of learning system), and the Gammon part comes from the game of backgammon.

TD-Gammon quickly became as competent as the world’s best human players, eventually beating them and revealing unforeseen strategies that, to this day, are used by humans in backgammon tournaments. The real innovation, though, lay in the evaluation process the programme used.

In essence, the algorithm became more consistent with every move, improving its view of the board and the probabilities by updating each prediction against the one that followed it (hence temporal-difference learning). This capability to learn dynamically got the Artificial Intelligence (AI) community pretty excited.
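For the technically curious, here’s a minimal sketch of that idea in Python – my own toy illustration, not TD-Gammon’s actual code. It runs tabular TD(0) on a simple random walk rather than backgammon; TD-Gammon paired this same update rule with a neural network trained through self-play.

```python
import random

# Tabular TD(0) on a toy random walk: states 1..5, terminal walls at 0 and 6.
# Reaching 6 pays a reward of 1; falling off at 0 pays nothing.
ALPHA = 0.1   # learning rate
GAMMA = 1.0   # discount factor

values = {s: 0.5 for s in range(1, 6)}  # one value estimate per state

def run_episode():
    state = 3  # always start in the middle
    while True:
        next_state = state + random.choice([-1, 1])
        reward = 1.0 if next_state == 6 else 0.0
        done = next_state in (0, 6)
        # The temporal-difference update: nudge the estimate for the
        # current state toward the reward plus the estimate of the
        # state that immediately follows it.
        target = reward if done else reward + GAMMA * values[next_state]
        values[state] += ALPHA * (target - values[state])
        if done:
            break
        state = next_state

for _ in range(5000):
    run_episode()

# The estimates settle near the true win probabilities 1/6 .. 5/6.
print({s: round(v, 2) for s, v in values.items()})
```

The point to notice is that each update leans on the very next prediction rather than waiting for the final result – which is exactly what let TD-Gammon refine its view of a position move by move.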

A year later, in 1993, a guy called Vernor Vinge, a mathematics professor, computer scientist and science fiction writer, wrote an essay called ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’.

This is how the essay starts:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.
What is The Singularity?
The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
• The development of computers that are “awake” and superhumanly intelligent. (To date, most controversy in the area of AI relates to whether we can create human equivalence in a machine. But if the answer is “yes, we can”, then there is little doubt that beings more intelligent can be constructed shortly thereafter)
• Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity
• Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent
• Biological science may find ways to improve upon the natural human intellect”

Vinge later writes:

“I’ll be surprised if this event occurs before 2005 or after 2030.”

It may come as a surprise that Vinge wasn’t the first writer to go to these apparent extremes.

In 1847, R. Thornton, the editor of the Primitive Expounder, wrote (more than half in jest) about the recent invention of a four-function mechanical calculator:

“…such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!”

And a hero of mine, Alan Turing, stated in 1951:

“Once the machine thinking method has started, it would not take long to outstrip our feeble powers … At some stage therefore we should have to expect the machines to take control.”

More recently (and more fashionably), the first chapter of Ray Kurzweil’s 2005 book ‘The Singularity Is Near’ discusses what Kurzweil calls The Six Epochs. The penultimate epoch is called ‘The Merger of Human Technology with Human Intelligence’.

This epoch, echoing Vinge, is where technology reaches a level of sophistication and fine-structuring comparable with that of biology, allowing the two to merge and create higher forms of life and intelligence. Kurzweil claims that we are now in the process of entering this epoch, which is what justifies his claim that The Singularity is near.

So far, so good.

The Singularity is a very popular topic now. Those who are really into proper geek technology have a field day imagining what life may look like when computers outstrip human capabilities.

There are detractors, of course, but the challenges I find most interesting are rarely found in common reviews and posts about The Singularity or AI in general. The challenge I’m fascinated with is:

Can non-benevolent (i.e. not well-meaning) super-intelligence persist?

On this point, there was poignant commentary in a 2011 piece by Mark Waser. Here’s an excerpt:

“Artificial intelligence (AI) researchers generally define intelligence as the ability to achieve widely-varied goals under widely-varied circumstances. It should be obvious, if an intelligence has (or is given) a malevolent goal or goals, that every increase in its intelligence will only make it more capable of succeeding in that malevolence and equally obvious therefore that mere super-intelligence does not ensure benevolence.”

My favourite quote from Mark is at the end of his post:

“A path of non-benevolence is likely to come back and haunt any entity who is not or does not wish to be forever alone in the universe.”

And this brings me to a point of view still under development in my mind… and to be honest, I’m shocked there is such a low volume of writing and apparent thought in this area.

I’m concerned that the people most involved with AI are primarily technologists.

In the same way that Mark Zuckerberg defines privacy, identity and human rights totally differently than I do, I’m concerned that the proponents of AI are working from a different definition of benevolence.

The intelligence spoken of is the type necessary to win at backgammon or chess – activities with definitive outcomes and finite, fully visible variables. The machine intelligence involved in developments of The Singularity and AI is contextually logical and mathematical.

There is little talk of the illogical and the emotional, because the machinery being developed, albeit of exponential capability, is fundamentally hierarchical, not democratic like the human brain. We seem to overlook that there is a reason why even the smartest computers cannot beat the best players at poker: unlike chess, poker turns on hidden information and on reading people.
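To make that difference concrete, here’s a toy sketch (all numbers invented purely for illustration) of the kind of decision a poker machine faces. With the opponent’s cards hidden, the best a programme can do is weight each possibility by a belief and average the payoffs:

```python
# Toy poker decision under hidden information (illustrative numbers only).
# The machine cannot see the opponent's cards, so it weights each hidden
# possibility by a belief and averages the payoffs.
beliefs = {"we_are_ahead": 0.6, "we_are_beaten": 0.4}
pot, cost_to_call = 100.0, 20.0

def payoff(hidden_state):
    # Win the pot if ahead; lose the call amount if beaten.
    return pot if hidden_state == "we_are_ahead" else -cost_to_call

expected = sum(p * payoff(h) for h, p in beliefs.items())
print(expected)  # 0.6 * 100 - 0.4 * 20 = 52.0, so calling looks profitable
```

The arithmetic is trivial; the hard part is the beliefs themselves. In chess, the equivalent numbers can be computed from the visible board. In poker, they come from modelling another mind – which is precisely where the emotional and the illogical enter.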

I fear that this intelligence is only one part of the intelligence that powers humankind. I struggle to believe that emotional intelligence has featured strongly enough in AI computations.

Whilst my personal hope is for benevolent super-intelligence, I’m hard-pressed to find enough evidence that AI developments are treating the soft-science elements as an equal priority.

And let’s not forget, our grasp of our own emotional intelligence is at best embryonic. It is only in recent times that we have started to realise the deep cognitive patterns that power our thoughts, decisions and behaviours.

Ultimately, I’m concerned that we haven’t even scratched the surface of our own emotional intelligence. So how prepared are we to ensure optimal artificial emotional intelligence, if indeed that is even a priority?