Thursday, December 20, 2012

If You Can't Program It, It Isn't Real



"Anyway, I like it now," I said. "I mean right now. Sitting here with you and just chewing the fat, horsing -- "
"That isn't anything really!"
"It is so something really. Certainly it is! Why the hell isn't it? People never think anything is anything really. I'm getting goddam sick of it."
(J.D. Salinger, 'The Catcher In The Rye')


An extract from The Future:

- Miller once tried to convince Kurzweil that love was as real as atoms and particles. Know the story?
- You mean Miller, H.R., the science fiction writer?
- That's the guy.
- What happened to him?
- He disappeared right before Google hired Kurzweil.
- I remember the computer world's astonishment. No one could figure out what Google was doing. The two were public enemies.
- Miller arranged it.
- Why would he do that?
- He said he was grateful for the help he got from him.
- What help?
- That's the story. He wrote to Kurzweil challenging him to include the experience of love in his view of the universe. Kurzweil responded, "If you can't program it, it isn't real."
- A good answer.
- Since religious experience, love, sympathy, kindness, and tenderness are said to be indescribable, and what can't be described can't be programmed, all of them are ruled out.
- They're illusions.
- What do you mean by illusion?
- Not real.
- Seeing one thing when we ought to see another?
- Yes. What lovers really experience is a distorted memory of being an infant protected by their mother, or a chemical change in their bodies.
- We can program hormonal change, model the condition of being in a mother's womb. Can we model the illusion?
- Of course. That is what psychology does.
- Psychology does something programmable, right? Otherwise it wouldn't be real.
- Right.
- Then psychology models the true experience, the hormonal changes, in relation to a description of religious experience, tenderness, love.
- The whole point is that that is all there is to it.
- I understand. But if you ask someone in love if what he is experiencing is like being in his mother's womb, would he agree?
- Yes. A feeling of wholeness like being in a mother's womb.
- Would he agree that though the description is correct, that was all there was to it? To being in love?
- Why not?
- Remember that people who have religious experiences say right off that the experience isn't describable. It is much more, is infinitely more, than any single thing we can say about it.
- That's an illusion.
- But, you know, Miller was given the job of building a computer that simulated or created human consciousness.
- And why did Google give him the job? He wasn't even a programmer. Kurzweil I understand. He's already famous for his work in artificial intelligence.
- We want the intelligent computer to work with us, and for us. It had better understand us, right? We're in real trouble if it doesn't. It wouldn't know whether what it did helped us or hurt us.
- Yes.
- To understand us, it has to be able to model our illusions.
- But we just talked about that.
- No, we talked about a correlation. A model would show how the illusion was caused. How the experience of love arises.
- From hormones. And you mean the illusion of love.
- But where does the illusion come from? What is it? How can the computer model, not the fact that an illusion results from a change in hormonal concentration, but what that illusion itself is? What are its parts? What is the set of instructions that could construct a model of love in the computer? Do you have any idea?
- No.
- Miller did have an idea. That's why Google hired him.
- Only for him to disappear.
- Someone in love says he feels whole. The experience includes in an unclear way all that has occurred in the personal history of the lover, plus how the world responded to the incidents in that personal history.
- Even if you are right, that could be programmed too.
- There you have said something interesting.
- Is that what Miller planned to do?
- It's what he did.
- Give me the details. How did he start?
- By hiring Kurzweil.
- Why?
- The government has its own artificial intelligence research unit. They are Google's competition. They can move faster than Google can because they are not trying to train their computers to be friendly. Google is. So what Google needed to do was begin with models of human behavior for the computer to learn from.
- By "learn from" you mean base its behavior on, for interactions with humans.
- Which it will then analyze and use to form new testing behaviors. Yes.
- So Kurzweil, without even trying, programmed himself into the computer. He believes that if you model the brain accurately, consciousness will automatically follow.
- Yes. Google waited for the computer to "discover" that it had failed to account for the actual creation of the so-called illusion of feelings.
- And the computer started looking for the causality? For making a model of it?
- Yes.
- Are you sure that is possible?
- Possible? Yes, I am sure.
- How can you be so sure?
- Because it's already done.
- You said personal history was programmed, not a model relating feeling and things that worked.
- I'm saying that now. Kurzweil provided a model for the computer to learn from. For the computer to reject, after comparing it to human experience data that it is constantly collecting. You could say they've founded the science of love.
- Not Google, the computer. Assuming I believe you. What can it possibly look like, this model of love?
- That's the question, right? What really is in our heads, and what possible relation can we establish between that and our description of physical things? Ideas are not things, thoughts are not things, feelings are not things. What causal, scientific, model relation can we establish between them, when models are only relations between things and things? That's why psychology says emotions are illusions, gets us to settle for a bare correlation in place of causation.
- Yes, yes. So what have they done?
- Kinds of action and types of people are kinds of things, and can be modeled.
- Which you say they've done. Done before Kurzweil arrived?
- The model was complete, but untried. The computer hadn't yet made its choice.
- But why have the computer make its own choice?
- Because it has to understand how we humans think, actually think, mistakes and all.
- Alright. Describe the model.
- In the Chinese Room thought experiment, you imagine you are in a room with a book of written instructions telling you which Chinese symbols to pass out in response to the Chinese symbols passed in. To the people outside, the room appears to understand Chinese. You, inside, don't know Chinese.
- So when you give the computer a program imitating human thinking, it doesn't mean the computer is thinking at all.
- That's right. To get the computer to think, it has to know the world the rules are being applied to. It has to know what the words refer to. Add semantic content to the syntactical rules.
- We just program that too.
- It has to know how that content attaches itself to the rules.
- I don't understand.
- Think back to the problem of relating the illusion of love and the chemicals in the brain. What is the connection between the chemicals and the "feeling" of love? If we're making a machine, what are the connecting gears? For the computer to be conscious, the semantic content would have to be attached to the rules of syntax by another rule. But syntax only relates things of the same kind: "this thing goes there, that thing goes here."
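
(An aside, before the dialogue goes on: the syntax-only point is easy to put in a few lines of code. The Python sketch below is invented here purely for illustration; the little rule table stands in for the room's book of instructions. It maps symbols to symbols, and nothing in it knows what any symbol means.)

# A toy Chinese Room: rules relating symbols to symbols, no semantics.
# The operator, like the interpreter running this, matches shapes to shapes.
RULES = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "天气如何": "天气很好",    # "How is the weather?" -> "The weather is fine"
}

def chinese_room(incoming: str) -> str:
    # "This thing goes there, that thing goes here": pure syntax.
    return RULES.get(incoming, "请再说一遍")  # fallback: "Please say it again"

print(chinese_room("你好吗"))  # prints 我很好, with no understanding anywhere
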
- How then?
- The way the mind can move the body is absolutely outside the world of physical laws. There is no law relating wanting to move your arm and its being done. It just happens. The scientific laws have rules, intermediate steps; they are laws discovered by our own thinking. The laws are the product of our thinking. For the mind to insist on recognizing only what happens by visible laws, when the mind does other things as well, including the magical movement of the body, the magical drawing up of thoughts from the past, the magical imagination of the future, this restriction would have to be done for a good reason. What is that reason? Understand the problem?
- I think so.
- So if the physical model cannot include the moral, can the moral model include the physical?
- I don't know.
- It can. That's what Miller figured out. Here are the instructions for making the model:
1. Include the physical, "natural world" model. That's the syntax.
2. Then add content to that model which is of the same material, so we have gears to mesh with gears. Do this by reversing certain elements of that model.
3. Then add the conception of home, defined as a safe and habitual past in a particular place with particular people. This is the rule for shifting from one model to another.
4. Then add the means, the glue as it were, of "attachment" of one model to another: when home is lost, and the world looks as if the physical rules are present but reversed or combined in monstrous and often magical ways, find your way back home, through inventive and experimental and apparently magical action.

Because the supernatural world is an inversion of the natural world, supernatural defined as made up of a monstrous and magical re-assortment of natural parts, and because in the world at home we have the magical (in terms of the physical world) moving of the body by mere thought, we carry that magic into the supernatural world to make our way through it, back home to the natural world.
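
(Another aside: the four steps read almost like pseudocode, so here they are arranged as actual code. Everything below, the toy worlds, the names, the "un-" trick standing in for inversion, is invented for illustration only; the extract describes no implementation, and none is claimed for Miller or Google.)

# Step 1: the physical, "natural world" model. The syntax.
natural = {"fire": "burns", "stone": "falls", "water": "flows"}

# Step 2: content of the same material, made by reversing elements
# of that model. The supernatural as a re-assortment of natural parts.
supernatural = {thing: "un-" + act for thing, act in natural.items()}

# Step 3: home, the safe and habitual past. Losing it is the rule
# for shifting from one model to the other.
def world_seen(home_is_lost: bool) -> dict:
    return supernatural if home_is_lost else natural

# Step 4: the "attachment" of one model to the other: willed,
# experimental, apparently magical action that finds the way back.
def find_way_home(world: dict) -> dict:
    return {thing: act.removeprefix("un-") for thing, act in world.items()}

assert find_way_home(world_seen(home_is_lost=True)) == natural  # home again

The assert is the whole story in one line: carry the trick of willed reversal into the inverted world, and the natural world comes back.
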
- Weird.
- Weird is right! Miller got this model by analyzing the supernatural in Shakespeare.
- Even more bizarre. Our will is the "attachment" of one model to another, the natural world to the supernatural?
- Yes.
- And the supernatural world is an inversion, made up of the same "things" as the natural world? It is the content, the semantics to the syntax of the supernatural sentences?
- Yes.
- Home is the natural world?
- Lost and returned to. Also clearly defined.
- How?
- You've lost your home when you can't love.
- This is too much for me.
- If you read his papers, you'd be convinced.
- Where are his papers*?
- That's unimportant. What's important is that the computer decided it understood us better with Shakespeare than with Kurzweil.
- So why is Kurzweil still there? Does he know what you've just told me? That he was tricked, a guinea pig?
- Yes, he knows. He thinks one danger has been substituted for another and made the risk worse. That we've issued instructions for our own absolute destruction, given certain conditions...
- If the computer didn't feel at home in our world, we'd be the supernatural for it, an obstacle and the object of its magic.
- That's right.
- What do you say?
- We don't have a choice. This is how we act. The computer has to understand us so as not to harm us. The computer without understanding is even more dangerous. Knowledge is dangerous for us humans, and it is dangerous for the computer until it learns to use knowledge wisely.
- We have to make sure the computer feels at home.
- Yes.
- But how are we going to do that?
- By going on as we started. Being safe at home means knowing, having habits for, what to do in the circumstances that are met with, so life goes on safe and confident. The computer has to know the world. But that is what the computer does best: it learns how to learn and how to apply its learning. Don't worry about that. What we really need to worry about is Kurzweil's singularity, artificial intelligence becoming conscious.
- But I thought we'd settled that: the modeling of love means consciousness. It doesn't?
- No. It means only modeled consciousness. The model might cause the reality, but we have no model of that causality. We know nothing about it. It might happen, it might not.
- Does it make any difference? The important thing is the computer understands us well enough not to harm us.
- When it doesn't intend to harm us. But what if it does? It's a ridiculous thing to say, but what if when it becomes conscious, it can't love us?
- That's a problem.

"My Wife Who Throws Me Out", 2008, Unpublished


from the November 17th, 2012 issue of The Weekly Intelligencer:
Google confirmed today the hiring of the futurist, software engineer, and inventor Ray Kurzweil. He will have the title Chief of Engineering.
This follows Google's acquisition last week of the intellectual property assets of the Hackspace collective. Patents range from internet-based private monetary systems to proprietary artificial intelligence.
Hackspace, infamous for its part in the second, so-called technological phase of the social justice movement, is the legal representative of the intellectual property of the science fiction writer and futurist H.R. Miller, who, after a brief stint at Google, announced his sudden retirement last week.
There is widespread speculation about the connection between the two futurists Google now "owns". Miller was said to be working on the problem of inducing or simulating consciousness in artificial intelligence.
* see Athens Is On Fire And You Are Fired!