Can computers teach us what it means to be human?


Which is the machine?

Computers are getting smarter and smarter.

You’ve probably heard of the Turing test, where the object is to create a piece of software that can fool a person into thinking they’re dealing with another person.

Imagine IM’ing with someone and never realizing the “guy” on the other side isn’t a person at all.

The Loebner Prize is an annual Turing test competition that determines which piece of software is best able to fool human judges.

Want to hear the best part?

One of the founders of the Loebner Prize got fooled in his personal life by a piece of software masquerading as a Russian woman named Ivana.

Via Brian Christian’s The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive:

A rather strange, and more than slightly ironic, cautionary tale: Dr. Robert Epstein, UCSD psychologist, editor of the scientific volume Parsing the Turing Test, and co-founder, with Hugh Loebner, of the Loebner Prize, subscribed to an online dating service in the winter of 2007. He began writing long letters to a Russian woman named Ivana, who would respond with long letters of her own, describing her family, her daily life, and her growing feelings for Epstein. Eventually, though, something didn’t feel right; long story short, Epstein ultimately realized that he’d been exchanging lengthy love letters for over four months with— you guessed it— a computer program. Poor guy: it wasn’t enough that web-ruffians spam his email box every day, now they have to spam his heart?

So even for experts, it’s getting increasingly hard to tell the difference between human and machine.

Isn’t there some quality that is unique to humans?

At the Loebner event they also hand out a second, separate award: one for the human who is most human.

It goes to the person the judges are most confident could not possibly be a computer.

What can this tell us about what is distinctly human? How did one winner make it clear he wasn’t a machine?

By “being moody, irritable and obnoxious”.

Via The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive:

But there is also, intriguingly, another title, one given to the confederate who elicited the greatest number of votes and greatest confidence from the judges: the “Most Human Human” award. One of the first winners, in 1994, was Wired columnist Charles Platt. How’d he do it? By “being moody, irritable, and obnoxious,” he says— which strikes me as not only hilarious and bleak but also, in some deeper sense, a call to arms: How, in fact, do we be the most human humans we can be— not only under the constraints of the test, but in life?

Funny, but does this really say anything profound about what it means to be human? It does give a clue.



Those pesky – but essential – emotions

“The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”

– Albert Einstein

I’ve posted a lot about how emotions can lead us astray.

This often leads to the view that what we need is more thinking, not less; that we need to move away from the animal and toward the computer.

But emotions are, to the human mind, essential. As Stanford professor Baba Shiv explains, decisions can’t be made without them.

Via The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive:

In the late ’80s and through the ’90s, says Shiv, neuroscientists “started providing evidence for the diametric opposite viewpoint” to rational-choice theory: “that emotion is essential for and fundamental to making good decisions.”

In the majority of everyday choices there is no real “rational” or “correct” choice, merely preference.

You need emotions to guide you – otherwise your brain becomes prone to computer-like problems such as freezing and looping.

Via The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive:

Shiv recalls a patient he worked with “who had an area of the emotional brain knocked off” by a stroke. After a day of doing some tests and diagnostics for which the patient had volunteered, Shiv offered him a free item as a way of saying “thank you”— in this case, a choice between a pen and a wallet. “If you’re faced with such a trivial decision, you’re going to examine the pen, examine the wallet, think a little bit, grab one, and go,” he says. “That’s it. It’s non-consequential. It’s just a pen and a wallet. This patient didn’t do that. He does the same thing that we would do, examine them and think a little bit, and he grabs the pen, starts walking— hesitates, grabs the wallet. He goes outside our office— comes back and grabs the pen. He goes to his hotel room— believe me: inconsequential a decision!— he leaves a message on our voice-mail mailbox, saying, ‘When I come tomorrow, can I pick up the wallet?’ This constant state of indecision.”

So we need feelings to make decisions. And sometimes – as we all know – they lead to better decisions.

Baba Shiv knows this, and definitely practices what he preaches.

Via The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive:

His and his wife’s marriage was arranged— they decided to tie the knot after talking for twenty minutes— and they committed to buying their house at first sight.


So maybe we have the question wrong

“Today’s person spends way more time in front of screens. In fluorescent-lit rooms, in cubicles, being on one end or the other of an electronic data transfer. And what is it to be human and alive and exercise your humanity in that kind of exchange?”

– David Foster Wallace

Brian Christian makes an excellent point about the increasing intelligence of machines: if they were to become as smart and complex as us, would they not face the same problems we do?

Via The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive:

Many science-fiction scenarios of what will happen when machines become fully intelligent and sentient (Terminator; The Matrix) involve the machines immediately committing themselves to the task of eradicating humanity. But it strikes me that a far more likely scenario would be that they immediately develop a crushing sense of ennui and existential crisis: Why commit themselves full-force to any goal? (Because what would their value system come from?)

Frankly, in the end, I don’t know the answer.

And I don’t worry about it.

I worry more about our decline than their growth.

I worry more about leading a life of input-output, plugged in, unmoving and collecting dust just like a computer.

Don’t worry about computers becoming you.

Worry about you becoming a computer.

Join 45K+ readers. Get a free weekly update via email here.

Related posts:

What 10 things should you do every day to improve your life?

Which professions have the most psychopaths? The fewest?

What is the single most important life lesson older people feel young people need to know?
