Tuesday, December 11, 2012

The Computer Smarter Than Us All That Might Kill Us All

[Image: IBM computer cards]

From The Future (A Comic Book)

Background

The Singularity Institute aims to produce an intelligent computer that is friendly. A computer is intelligent when it can adjust its own programming to do its job better. Its job is to do good for human beings, and knowing what is good is also its job. This it will learn by experimental application of rules of thumb, which the computer will correct as it learns through experience. The learning accelerates to a point where a "singularity" occurs, where reason has corrected reason so many times, and good has been redefined so many times, that the intelligence becomes conscious and vastly superior to the intelligence of its creators. Computers already self-program, and already set up experiments to act on the world and gain more information with which to improve their self-programming.
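To make the idea concrete, here is a minimal toy sketch, entirely my own and not taken from the Institute's materials, of a program that keeps a handful of rules of thumb, scores them against noisy feedback from the world, and replaces the worst-performing rule with a variation of the best. Every name in it (feedback, improve, rule_strength) is a hypothetical placeholder.

```python
import random

# A toy, purely illustrative loop (my own sketch, not the Institute's method):
# a program keeps several "rules of thumb", scores them against noisy feedback
# from the world, and replaces the worst-scoring rule with a slight variation
# of the best one: correcting its rules as it "learns through experience".

def feedback(rule_strength):
    """Hypothetical stand-in for the world's response to acting on a rule."""
    return rule_strength + random.gauss(0, 0.1)

def improve(rules, rounds=100):
    for _ in range(rounds):
        # Score every rule against the world's (noisy) feedback.
        scores = {name: feedback(strength) for name, strength in rules.items()}
        worst = min(scores, key=scores.get)
        best = max(scores, key=scores.get)
        # Replace the worst rule with a mutated copy of the best rule.
        rules[worst] = rules[best] + random.gauss(0, 0.05)
    return rules

if __name__ == "__main__":
    rules = {f"rule_{i}": random.random() for i in range(5)}
    print(improve(rules))
```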

The Institute believes that the computer will find out more and more what is good by finding the best rules of thumb to act on. But the human beings most respected for being good, and for understanding what good is, have not acted or thought this way.

The Institute believes it understands good better than good people do. It could be that it does, but it has a responsibility to prove it. So I wrote to the Institute's staff, including a story I'd already written dealing with this question. Its official contact person answered me, first asking why I had sent the story. I answered:
I wrote to you as the stated contact person for the Singularity Institute. I've been reading through the essays on the Institute's site and I see a big problem with the approach taken. The goal of friendly artificial intelligence is to understand and direct the reasoning that a human being, if he had the rules for correcting errors of reason, and rules to make sure the rules would apply, would come up with.
That is entirely different from the accounts good people give of how and why they are good. They say reason helps them stay good, but does not convince them to be good. They say they are good because it is obvious to them that being good is superior to being bad. It is an intuition.
It is intuition not in support of philosophic argument, but intuition that stands on its own against argument. 
It is possible that this does not invalidate the approach taken, but it certainly is important to account for this difference. Your organization says reasoning establishes the good; the good people of the world say reasoning gives us rules to help us keep good. The dialog I sent you is a debate on this question.
After I sent a second dialog, the answer came:
SINGULARITY: Thanks for explaining. Who is the debate between?
I'm not sure your interpretation of our position is correct.
Our position is that de novo AGI will have novel mind architectures, and we can't look at our own minds (which were shaped over thousands upon thousands of years) to inform us about the properties of these novel mind architectures. So whatever makes us “good” likely won't tell us much about how to make AGI “good.”
Also, we aren't arguing that rationality makes any agent “good” by default. In particular, instrumental rationality is about an agent achieving its values, whatever they may be. Human values were shaped through evolution; an AGI's agent could be entirely arbitrary.
Have you read Facing the Singularity? It is a great piece that explains where we are coming from. I'd be interested to hear your opinion on this document.

Singularity's Comments And My Responses:

ME: In the dialog I sent you I take the side that argues that a relation to the experience of wholeness, inaccessible to rationality, is essential to consciousness.

SINGULARITY: Our position is that de novo AGI will have novel mind architectures, and we can't look at our own minds (which were shaped over thousands upon thousands of years) to inform us about the properties of these novel mind architectures. So whatever makes us “good” likely won't tell us much about how to make AGI “good.”

ME: You are assuming that what makes us good is something inaccessible. As I wrote last time, good people say otherwise.
They say that what makes us good is not reasoning, and not accessible to reason, but being certain that one kind of experience - loving - is superior to another kind of experience - not loving. (2)
They say they don't want to harm others, and they train themselves to do this better. There are widely known techniques for doing this.

(I find this also a good example of a mistake in rationality frequently made in your Institute's texts: hidden assumptions of probability. The assumption here: the inaccessibility of goodness to self-understanding. What if, as argued at the bottom of this email, the emergent good artificial intelligence in fact approaches the form of the present good person?)

SINGULARITY: Also, we aren't arguing that rationality makes any agent “good” by default. In particular, instrumental rationality is about an agent achieving its values, whatever they may be. Human values were shaped through evolution; an AGI's agent could be entirely arbitrary.

ME: Again, the "agent" can't be entirely arbitrary: it is your institute's job to make sure it is "good". The question I raise is whether that can be done without a human understanding that takes into account how rationality is instrumental to goals not describable rationally. Using an example from the sequences (1):

Different people give different answers to questions demanding a moral judgement:

1. Would you flip a switch that kills one person, in order to save 5?
2. Would you throw someone off a bridge to save 5 people on a train approaching the bridge?

The people who see happiness resulting from correcting errors of reason dismiss the different answers as being the result of irrationality or thoughtlessness.

The people who see reason as a tool of love and feeling at home look closely at the individual lives of the people who answered the question, looking for a relation between where they are on the movement from activity to rest, from reason to love, and how the actions in question would affect that movement and their ability to train themselves to be good (2).

They see reason, where the supposedly reasonable see irrationality. It is reasonable for someone to consider the consequences of the personal action of throwing someone off a bridge, as opposed to the remote action of throwing a switch. We are creatures of habit, and anything we do creates a disposition to repeat. We are also social creatures, and we know others observe our actions. Throwing someone from a bridge and being seen doing it have real consequences for the person doing it, different from throwing a switch, which it would be unreasonable to ignore.

Then there is the question of the singularity itself. It is like asking, what is at the end of space? What was at the beginning of time? It means that what we have created overpowers our ability to visualize it and think about it. Yet we are in the midst of the task of managing this artificial intelligence.

What kind of person has experience managing a relation of what is visible to what is not visible and presumably good?
Someone said to be good, who studies how what he does with visible things in the world has the invisible result of how he feels?
Or the not even presumably good researcher, who reasons only about visible things and thinks of emotions as states of visible things? (2)
Which person is more fundamentally reasonable, in understanding the reasons of human action? Which is more probably capable of making a friendly artificial intelligence?

SINGULARITY:  Have you read Facing the Singularity? It is a great piece that explains where we are coming from. I'd be interested to hear your opinion on this document.

 ME: Yes.


I sent this postscript a few minutes later:
By the way...
Experience tells me you won't be answering. (You will categorize my objections as "religious" or old-fashioned philosophy, not worth bothering with.) So I will get in some last words.
I've read Luke Muehlhauser's (3) essays on philosophy in which he pretty much condemns the whole endeavor. 
The problem is, he and the organization he heads are supposed to be both reasonable and open-eyed enough to see the world that is to be reasoned about. Apparently he has never been exposed to philosophy done right. That means the way its founder Plato did it: not the way Plato is interpreted in the places he got his education, but the way artists, productive philosophers (not academics), and important religious writers have interpreted him. I am following this tradition in the criticism I offered you in the last message.
I think it is likely that the work the Institute does, not balanced by philosophic and religious understanding, will turn out to be dangerous. That is what I tried to bring out in the stories I sent you. (4)
Assuming we bet on it being dangerous to program computers to imitate the thinking of artificial intelligence researchers, seeking better and better rules for acting good, how then do we get computers to imitate the thinking of the actual good people who've lived on our planet? Computers don't love, so how do we program them to reflect on loving? 

We can hope that if the singularity were reached, computers would afterwards be capable of love. Until then, questions of good or bad would be beyond them. They'd be like children. Very closely watched children.

Given the accelerated rate of learning, the next move, from consciousness to good or bad intention, would be nearly instantaneous. Possibly it could be prepared for by study of the self education methods good people practice. Something would have to be prepared, and what that is would have to be looked for. But it would be us who do the looking, not the computer.

It might be very much like a monk's retreat into isolation, limiting the scope of the world to be responded to, allowing more careful attention to each individual response. (2)


Against my expectation, the Singularity Institute's representative answered me once more:
SINGULARITY: I'm having a little trouble following many of your points here. But generally I suspect that you are anthropomorphizing AGI far too much.
Also I think you have misunderstood some of my comments. For example, I'm not arguing that “what makes us good is something inaccessible,” I'm arguing that what makes us good will not be present by default in other forms of intelligence.  
I must say I disagree with you: in my view the articles Luke wrote about philosophy are well defended. If you disagree and think you have a strong case, feel free to engage with the community in the comments. We are all seeking a better map of the territory.

ME: Thanks for continuing the conversation.

I don't think I have anything to argue with Luke about. His criticism of bad philosophy is fine with me.  But as I wrote, he simply has never heard of good philosophy. I am willing to teach him, or refer him to books, but he is not asking me, and I am not going to ask him to ask me.

About the Singularity Institute being dangerous: of course it is!

It is dangerous in the same way that economic theory applied without reference to the human experience of good is dangerous, and in the same way that political theories applied without reference to the human experience of good are dangerous.

We have thousands of years of history to prove it.

When we aim at a goal specified as any particular relation of things in the world, we are headed for disaster.

The only safety is in making the experience of good the end in itself, learning the rules that make ourselves better, with the aggregate state of the world that results managed on a practical, not theoretical or rule-defined, basis.

Do you understand? The friendly AI path your institute is taking, an evolution of better and better rules, tested against results in the world, is unfortunately in the dangerous category of using theory to bring about a better state in the world, not to create better individual experience. (It is clearly in that category because you do not even mention the possibility of including individual experience in the Friendly AI development.)

I understand your position, but do you understand mine? History is against what your Institute is trying to do.

With the train decision example I tried to show you how exclusively rule-based thinking blinds you to the true nature of our ideas of good, which are both rule and experience based.

And do you know what occurs to me? We already know very well what it's like to have a subordinate unlike us that is superior to us: dogs.

Does a dog like its human? Yes. Does the dog respect its human? No, I don't think so. The human takes care of and feeds the dog. But the dog has to watch carefully, to predict the human's behavior, to understand what the human wants. The human fails to do this with the dog. The relation of understanding is not complementary. From the dog's perspective, comparing his understanding with his human's, he is the more intelligent. But he doesn't love his human any the less for it.

It is something to a human's advantage to be compared to a dog. Humans can't even accept a dog's love without talking about trades and tricks. Dogs are said to have learned to trick humans into believing they love them. But the truth is, humans more or less trick dogs, going on in their lives with dogs as if they, their masters, are worthy of love. They certainly are not! In fact, they do what they falsely accuse dogs of doing: making a trade, loving for being loved.

Clearly dogs love loving. Their trade of love for care is in the nature of an offer willingly made: no deception is involved. Their love is not bought. Dogs are loving, you take them in and take care of them so they continue to love, and in particular love you.

A dog does not wait for love to be loved, but his owner does. Think about that a moment. You come to love a dog because he loves you. You don't necessarily know how to love without first being put at ease by being loved. Your love is dependent on circumstances, so is not part of your character: it is not something you particularly know how to do. It is something that the dog particularly knows how to do.

Now the computer we are worried about is superior to us in intelligence, like the dog. We have to get it to love us like the dog does. How do we do that?

The computer must come to know us better than we know ourselves. And like with dogs, its love for us has to be independent of the good or bad it knows of us.

About half a century ago the physicist Robert Oppenheimer used the word "singularity" for the gravitational collapse of a star. We want the service a computer does for us to collapse into something entirely different yet mathematically linked, into something like love.

That is what we have to program. We have to give the computer the algorithm, then train it to use it. (2)



(1) A series of connected essays written by members of the Institute
(2) Italicized text not in original correspondence
(3) The Singularity Institute's Director
(4) How Do You Make A Computer Not Want To Be A Computer?
      The International Cultural Foundation At The Tel Aviv Shopping Center