Saturday 2 August 2014

Machines Cannot Think or Feel

     "I hear you've been bothered by bad dreams," says the nurse to the patient.
     Does that sound right? Would you use the word "bothered" under such circumstances? Wouldn't "troubled" or "disturbed" be more appropriate? In any case, can you explain exactly the shades of meaning of each of these words which may make one more appropriate than the others? Now try to explain that to a computer, which will never experience any such emotion.
     Let me make a bold statement: it will never be possible to build a machine which reproduces human language 100% correctly. 99.9% perhaps, but not 100%. Indeed, I will predict that, some day, the mistakes made by computers will be used by linguists to better elucidate the subtle meaning of language. The problem lies not in the hardware itself, but in the programming. We speak our own language automatically, but no-one is able to consciously analyse it in such precise detail as to be able to fully commit it to a program. Our own programming is too detailed and subtle for that. (And let's not forget who our Programmer is.) Remember: the meanings of words change with context. "Rough" has a slightly different meaning when applied to skin, the sea, or a translation. A dictionary definition is not enough; it provides only a broad overview, while the reader is expected to fill in the details with his own experience and the context. But a computer would have to be programmed with every subtle shade of meaning in every possible context.
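     To see how quickly the dictionary approach runs aground, here is a toy sketch - purely my own illustration, not any real language system - of a program that stores one pre-programmed shade of meaning of "rough" per context. Every context the programmer failed to anticipate simply falls through.

```python
# A toy illustration (not any real NLP system) of the "dictionary" approach:
# one hand-programmed shade of meaning per context.
ROUGH = {
    "skin": "uneven or coarse to the touch",
    "sea": "violently agitated, with large waves",
    "translation": "approximate, not precise in detail",
}

def meaning_of_rough(context: str) -> str:
    # Any context not explicitly listed falls straight through the cracks.
    return ROUGH.get(context, "no shade of meaning programmed for this context")

print(meaning_of_rough("sea"))       # violently agitated, with large waves
print(meaning_of_rough("voice"))     # no shade of meaning programmed...
print(meaning_of_rough("estimate"))  # ...and so on, without end
```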
     The alternative solution is to allow a computer to learn the language the way we did: picking it up as we go along, conversing with others, continually refreshing and revising the subtleties of meaning as we hear and use it. In the 2014 TV series, Extant, a robot "child" is being taught language - and other human knowledge - in that very manner. What an ambitious project - especially for a prototype! The problem, of course, is that nobody fully understands the methods by which humans learn. Our programming is too complex. And even if we did, there would still be an overwhelming obstacle: any brain made of nuts-and-bolts, silicon chips, laser circuits, positrons, or the like must, at bottom, learn differently from one made of flesh and blood.
     There are three other obstacles to machines conversing like humans. Firstly, if you are still reading this, then it means I have adopted a style which is at least mildly interesting. This is the result of conscious and unconscious decisions about the length of words and sentences, and a careful choice of vocabulary. It is possible to convey the same information in a manner which is bland and boring or, alternatively, so complex as to be abstruse. A computer must be programmed to know the difference.
     Secondly, it must be programmed with the rules of conversation: what to say, when it is appropriate, and when not. We tend to forget that nine tenths of our speech is not for the purpose of imparting information, but for social facilitation - hence the unnecessary comments about the weather, last night's TV programs, or sport, and the avoidance of subjects which might cause contention, such as religion and politics. Scifi filmmakers tend to overlook this, and have their "droids" make the same sort of irrelevant, offhand comments which only a human would make under the same circumstances.
    Finally, if a robot is going to discuss with you anything other than the task at hand, then it has to be provided with the full range of knowledge which humans absorb automatically throughout life. And I mean everything! That includes such commonplace things as the fact that porcelain is fragile, that knives and forks are made of metal, that tablecloths are made of textiles (but not wool), and that most stones sink when placed in water. How are you going to program all that into its positronic brain without missing something obvious? A solution would be to get the robot to learn about the world the same way we do. In that case, go back to the paragraph about learning language by the same method.
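     For the flavour of what "programming in everything" would look like, here is another toy sketch of my own - not a description of any real system - in which common knowledge is reduced to a hand-typed table of facts. The point is not the code but what is missing from it.

```python
# A toy illustration of common knowledge reduced to a hand-typed table of facts.
COMMON_KNOWLEDGE = {
    ("porcelain", "fragile"): True,
    ("knives and forks", "made of metal"): True,
    ("tablecloths", "made of textiles"): True,
    ("most stones", "sink in water"): True,
    # ...and the millions of entries nobody thought to add.
}

def robot_knows(thing: str, fact: str) -> bool:
    # Anything not explicitly entered simply does not exist for the robot.
    return COMMON_KNOWLEDGE.get((thing, fact), False)

print(robot_knows("porcelain", "fragile"))  # True
print(robot_knows("eggshells", "fragile"))  # False - nobody thought to say so
```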

Thinking
     A machine will never be able to think. The fact that it may reach the same conclusion as a human being does not mean it does so by thinking, any more than reaching the same destination means it necessarily got there by walking. Consider what thinking is. Except during sleep, it normally goes on all the time. You can consciously decide to concentrate on a particular issue, in which case you start dredging up memories and scanning the neurons for connections. Otherwise, your thoughts move constantly from one topic to another in a chain of associations. Even then, something more systematic is probably taking place in the subconscious, because often the solution will arrive unexpectedly a long time after you have stopped thinking about it - even "slept on it". Often a forgotten fact will unexpectedly appear after you have given up trying consciously to remember it. If we don't understand how this happens, we obviously can't program a computer to do likewise. And why would we want to, when the same results could be obtained by a different route? And again, there is the fundamental detail that an artificial brain, by its very nature, cannot behave like a flesh and blood one.

Emotions
     The silliest idea in imaginative fiction is a robot with emotions. Yet it turns up all the time, often in places where it is ostensibly rejected. Take the android Data in Star Trek: The Next Generation. His character was written to illustrate the difference between the behaviour of a logic-driven, unemotional robot and emotional human beings. However, every time a human did something which did not fit Data's outlook, he would signal his puzzlement by flicking up his eyebrows momentarily. (I bet a lot of people never even noticed.) That was a typically human action, which he would not have performed had he not been programmed to do so. Indeed, the producer actually stated that Data was intended to be a Pinocchio character: a machine that wanted to be human. But he wouldn't have wanted to be human unless he had been programmed to do so.
     As I explained in the last article, emotions are part of our programming: the subjective experience of the physiological and chemical effects of our fundamental drives. They are not an epiphenomenon of intelligence; intelligence evolved later, and is an independent phenomenon. Of course, it is theoretically possible for a machine to mimic the effects of emotion. In one episode of The X-Files, an inventor is experimenting with robots shaped like arthropods. As Mulder and Scully enter a room, a robot like a big metal centipede stops in its tracks, fixes its mechanical eyes on them, and scurries out the door. In comes the inventor and jocularly asks, "What are you doing scaring my robots?" Of course, the robot hadn't been scared; it had just been programmed to flee from any large, unfamiliar moving object. However, successfully mimicking the responses of other emotions in full detail might be a difficult task to program. And, once again, a mechanical brain, by definition, cannot perform exactly the same way as an organic one.
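     The centipede's "fear" could be captured in a few lines. Here is a toy sketch of my own, purely to illustrate the point: a rule about large, unfamiliar moving objects, with no feeling anywhere in it.

```python
# A toy illustration of the centipede robot's "fear": a programmed rule, not an emotion.
KNOWN_OBJECTS = {"inventor", "workbench", "charging station"}

def react(object_name: str, apparent_height_cm: float) -> str:
    # The robot "flees" whenever the rule fires; it feels nothing either way.
    if object_name not in KNOWN_OBJECTS and apparent_height_cm > 100:
        return "scurry out the door"
    return "carry on with the current task"

print(react("Mulder", 180.0))            # scurry out the door
print(react("charging station", 120.0))  # carry on with the current task
```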
     One also has to wonder what the point would be. If a robot really could act out of fear, anger, disgust, or one of the proactive drives such as pride and avarice, it would get out of its owner's control and follow its own agenda.

Motives
     This brings us to a final issue: recently there has been a lot of debate in technological circles about the possible dangers of artificial intelligence: whether it will get out of our control and take over. It is not a new debate. Arthur C. Clarke warned about it, and pointed out that, even now, there are machines which can learn simple tasks by trial and error, such as tying a knot. So where will it all end?
     What none of these people ever consider is the obvious: motive. Ask yourself, as a human being, why would you want to tie a knot? Well, you might want to attach a ribbon to a present. But why would you want to give anyone a present? Well, there are specific periods in which gift giving is regarded as appropriate, even expected. You might have genuine affection for the individual, or you might feel it your duty because that person gave a present to you, or you might want to impress. But for each of these motives, there is a more fundamental motive. Why do you wish to follow society's expectations? Why do you want to impress? Why do you like that person anyway? What is affection?
     Or perhaps you want to tie a boat to a pier. But why do you want to do that? Perhaps you want to use the boat for fishing, or to get to the other side of the river, or for any number of other reasons. And why do you want to go fishing? etc. etc.
     You get the message: even for something as simple as tying a knot there are an almost infinite number of motives, and each of these motives can be traced back farther and farther to some more fundamental, more primeval motive: our instincts, our programming.
     Now, why would a machine "want" to tie a knot? The only possible reason would be to fulfill a task which a human set. Again, it all comes back to programming. Naturally, if you program a machine to learn by trial and error whatever skills it needs to fulfill a specific goal, at some point it will make a mistake and do something you never predicted. You will then have to tweak the program until it does it right. But there is no chance of the machines ever "taking over".
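     To make the point concrete, here is a final toy sketch of my own: a machine "learning" to tie a knot by trial and error. Notice that the scoring of success - the only thing the machine ever pursues - is written by the human.

```python
import random

# A toy illustration of "learning a knot by trial and error": the machine tries
# random variations and keeps the best one, but what counts as "best" is
# defined entirely by the human who wrote the goal function.
def human_set_goal(attempt):
    ideal = ["loop", "cross", "tuck", "pull"]  # the human's idea of a knot
    return sum(1 for a, b in zip(attempt, ideal) if a == b)

best_attempt, best_score = None, -1
for _ in range(1000):
    attempt = random.choices(["loop", "cross", "tuck", "pull"], k=4)
    score = human_set_goal(attempt)
    if score > best_score:          # keep whatever scores highest so far
        best_attempt, best_score = attempt, score

print(best_attempt, best_score)  # the machine "wants" nothing beyond this score
```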

     In any case, that is not the future of robotics, as I shall explain in my next article - provided you don't want to return to the Index.