AIM-291-Buchanan


 

Review of Joseph Weizenbaum’s Computer Power and Human Reason

by

Bruce G. Buchanan
Adjunct Professor of Computer Science
Stanford University


The following review is to appear in PHAROS, Fall 1976.

 

Weizenbaum's Computer Power and Human Reason is a book about the immorality of applying computers to tasks that ought to be done only by humans. The author speaks out against improper and immoral uses of the tools available to us, especially computers. Some readers will find his description of computers illuminating and alarming; the rest of us can still benefit from the challenge to sharpen our moral criteria.

The author is a leading computer scientist teaching at M.I.T. who is disturbed by both the strongly pro-technology and the anti-humanist statements of some computer scientists. He is also concerned about the public's passive acceptance of technologists' definitions of social problems and their subsequent technological solutions. Examples supporting both concerns are numerous, and readers need to remind themselves often that there is a point to the examples, many of which are very pointed criticisms of individuals. Much (perhaps too much) of the book is devoted to reasons why the book was written.

The last chapter (10) contains the material most worth reading, and the first chapter provides a very readable introduction. With some misgivings at trying to collapse 280 pages into a few lines, we can summarize the main theme of the book in an informal argument:

(P1) It is wrong for us to cause people to be treated as less than “whole persons”. (E.g., “An individual is dehumanized whenever he is treated as less than a whole person.” p. 266).

(P2) Computers can never fully understand human problems [because they must ignore aspects of human experience that are not describable in a language].

(P3) Therefore, it is wrong for us to command computers to deal with human affairs, i.e., to perform tasks requiring understanding of and empathy for human problems [essentially because computers lack a person's complete view of other persons].

Proposition P1 is a presupposition for the entire book. But the concepts of the whole man or man as object are not as simple and obvious as the author would have us believe, and the moral dictum in proposition P1 is not easy to apply, for in some circumstances it is desirable to treat persons as anonymous entities, as in voting. Much of the material in Chapters 4-9 is an exposition of a non-mechanistic view of man, in support of proposition P2. Chapters 2-3 discuss the abilities of computers and how they work. Intertwined throughout all the chapters is evidence that people want to and can apply computers in human affairs, as well as evidence that doing so leads to evils. Now the problem is understanding these propositions and deciding whether one believes them to be true. In the book, this is largely left as an exercise for the reader.

Any useful tool can always be misused; the particular dangers of computers stem from their versatility and complexity. Because computers are general symbol manipulation devices, they are versatile enough to be used, and misused, in applications that affect life or that have serious, irreversible social consequences. And computer programs are often so complex that no person understands the basis for the program's decisions. As a result, we are told, managers, physicians, and decision makers of all sorts do not feel responsible for the consequences of decisions made by computer programs. A large portion of the book is an elaboration on these dangers. These dangers are also used to support proposition P3, above: that there are decision-making tasks we ought not turn over to computers.

The computer programs that most concern the author are the complex problem solving programs developed under the general name of artificial intelligence. These are classified as (a) simulations of human problem solving (cognitive simulations), (b) programs that solve difficult problems without trying to duplicate human methods (performance programs), and (c) programs, or other work, that explore theoretical issues in computing (these he ignores). The claims made about cognitive simulations and performance programs seem to disturb Weizenbaum more than the work itself. Perhaps those claims, made ten and twenty years ago, should have been more cautiously phrased to say, for example, that computers will successfully simulate some [not all] aspects of human thought and complement [not replace] human problem solvers. Several examples are taken from psychiatry, partly because others read too much into a simple program he wrote ten years ago that mimics some conversational aspects of a therapist, a fact that profoundly disturbs him.

In any case, Weizenbaum's arguments are aimed at showing that computer capabilities are not coextensive with human capabilities. If the arguments are correct, then computer programs cannot successfully duplicate all aspects of human intelligence (proposition P2, above). Whether they could sometimes be used, morally and profitably, in place of humans still depends on understanding the concept of dehumanization in proposition P1.

This viewpoint can be pushed to extremes to argue against any use of computers, in which case even the worst human decision makers are seen as better qualified to make social decisions than the best computer programs ever will be. What seems more credible is that computer programs are more dangerous in the hands of poor managers than in competent hands. And Weizenbaum obliquely gives us a criterion for competence: the competent manager understands the basis for the program's decisions and maintains the ability and willingness to override the program. On the other side of the coin, a program becomes less dangerous (in any hands) if it can demonstrate its line of reasoning from problem description to problem solution, can be queried about its assumptions and methods, and otherwise opens itself to understanding.

Admittedly, computer programs are not easily understood (the main point of chapter 9). This is a great shortcoming, but not one that has escaped notice. Also, managers (of many sorts) have admittedly abdicated some responsibility to technology. In just this sense, physicians are chided for becoming "mere conduits between their patients and the major drug manufacturers" (p. 259). But competent managers and physicians will first understand the scope and limitations of their tools before using them.

Weizenbaum is asking us not to do research on programs, methods, or tools with obvious potential gross misuses: he finds no benefits that are worth the price of meddling with tools "that represent an attack on life itself" or that substitute for "interpersonal respect, understanding, and love" (p. 269). Incidentally, this is the same theme expressed in his letter to Science attacking recombinant DNA research (1). He also advocates renunciation of projects with "irreversible and not entirely foreseeable side effects."

(1) Science, July 2, 1976, p. 6.

The guidelines that he gives are certainly incomplete, for research on energy, communication, transportation -- and almost anything interesting enough to be applied in the next century -- would have unforeseeable side effects or could be used to assault life. They are offered as expressions of his own subjective criteria, and perhaps because they are subjective they cannot be expressed adequately in the language of the brain's left hemisphere (as he reminds us in another context). Such guidelines, even when precise, also fail to admit the value of research aimed at defining the limits of what computers can do by working on programs at the boundaries between men and machines.

He says he is not asking others to adopt his own criteria, but rather that the book advocates exercising our own courage to say NO when our "inner voice" tells us an act is wrong (p. 276). Since we do not know what another person's conscience tells him, however, we do not necessarily feel safer knowing it has been consulted.

Weizenbaum's attacks on his colleagues seem to reduce to the question whether or not they have listened to their own inner voices.

The same concerns were raised in less inflammatory books and articles by Norbert Wiener (2). Wiener's solution, if it may be called that, is not to stop work on some research but to

“render unto man the things which are man’s and unto the computer the things which are the computer’s. This would seem the intelligent policy to adopt when we employ men and computers together in common undertakings. It is a policy as far removed from that of the gadget worshiper as it is from the man who sees only blasphemy and the degradation of man in the use of any mechanical adjuvants whatever to thoughts.”

(2) E.g., Wiener, God & Golem, Inc. (M.I.T. Press, Cambridge, 1964).

This view of the symbiotic relationship of men and machines is a much more constructive one than Weizenbaum's. It places the computer in the hands of problem solvers, and not the other way around. In this view, there are still interesting questions for computer specialists: how can a program provide a manager (problem solver, decision maker, etc.) with enough information that he can accept responsibility for its output? How can the program convey its scope and limitations to the manager? How can we design programs that are more useful for managers -- i.e., easier and more pleasant to use, easier to understand, more knowledgeable, and more flexible?

Another recent book dealing with the relationship of man and machine is Zen and the Art of Motorcycle Maintenance (3). It too builds on the premise that scientific, logical inquiry leads to only one kind of truth, and that the subjective, intuitive, emotional side is necessary for human interaction and is equally legitimate. There is a similarity in the message, but a world of difference in style: while Pirsig's novel evokes emotions, Weizenbaum prescribes them.

(3) R. Pirsig, Zen and the Art of Motorcycle Maintenance (Morrow, New York, 1974).

A distressing undercurrent through the whole book is its anti-rationalism. In discussing dangerous research, with recombinant DNA used as an example, he questions the need to give any justification for stopping the research. For example,

"Is not the overriding obligation on men ... to exempt life itself from the madness of treating everything as an object, a sufficient reason, and one that does not even have to be spoken? Why does it have to be explained? It would appear that even the noblest acts of the most well-meaning people are poisoned by the corrosive climate of values of our time." (pp. 260-61).

But this irrationalistic sentiment ignores the value of giving reasons for halting research on projects with great potential benefits for human health as well as dangers. If scientists fail to provide reasons for their decisions, either to halt or to continue lines of research, how can we ever expect informed decisions from the public and legislative representatives? If there is madness in treating people "as objects," there is just as much madness in assuming that one's own research decisions require no justification.

In summary, the main issues of the book are important for everyone, and the book is especially directed at persons working with computers. In spite of the backbiting, the digressions, and the vague language in which the perceived evils are (and perhaps must be) described, the book deserves discussion and contemplation. It is not the kind of book that can be taken literally, but it does raise questions that all scientists need to answer for themselves.

 
