This is the full text of John McCarthy's "AN UNREASONABLE BOOK", collected in "Three Reviews of J. Weizenbaum's COMPUTER POWER AND HUMAN REASON".
http://www.dtic.mil/dtic/tr/fulltext/u2/a044713.pdf
The essay is also posted on his home page,
jmc.stanford.edu
though that version appears to have been revised.
McCarthy, who coined the name "artificial intelligence" and invented LISP, was a strong proponent of the field. Accordingly, he is scathing about Weizenbaum's book, which urges caution toward artificial intelligence research.
The following review appeared in Creative Computing and in the SIGART Newsletter.
AN UNREASONABLE BOOK
Joseph Weizenbaum, Computer Power and Human Reason, W.H. Freeman Co., San Francisco 1976
This moralistic and incoherent book uses computer science and technology as an illustration to support the view promoted by Lewis Mumford, Theodore Roszak, and Jacques Ellul, that science has led to an immoral view of man and the world. I am frightened by its arguments that certain research should not be done if it is based on or might result in an “obscene” picture of the world and man. Worse yet, the book’s notion of “obscenity” is vague enough to admit arbitrary interpretations by activist bureaucrats.
1. IT’S HARD TO FIGURE OUT WHAT HE REALLY BELIEVES
Weizenbaum’s style involves making extreme statements which are later qualified by contradictory statements. Therefore, almost any quotation is out of context, making it difficult to summarize his contentions accurately.
The following passages illustrate the difficulty:
“In 1935, Michael Polanyi”, [British chemist and philosopher of science, was told by] “Nicolai Bukharin, one of the leading theoreticians of the Russian Communist party, ... [that] ‘under socialism the conception of science pursued for its own sake would disappear, for the interests of scientists would spontaneously turn to the problems of the current Five Year Plan.’ Polanyi sensed then that ‘the scientific outlook appeared to have produced a mechanical conception of man and history in which there was no place for science itself.’ And further that ‘this conception denied altogether any intrinsic power to thought and thus denied any grounds for claiming freedom of thought!’” -- from page 1. Well, that’s clear enough; Weizenbaum favors freedom of thought and science and is worried about threats to them. But on page 265, we have
“Scientists who continue to prattle on about ‘knowledge for its own sake’ in order to exploit that slogan for their self-serving ends have detached science and knowledge from any contact with the real world”. Here Weizenbaum seems to be against pure science, i.e. research motivated solely by curiosity. We also have
“With few exceptions, there have been no results, from over twenty years of artificial intelligence research, that have found their way into industry generally or into the computer industry in particular.” - page 229. This again suggests that industrial results are necessary to validate science.
“Science promised man power. But as so often happens when people are seduced by promises of power -- the price actually paid is servitude and impotence”. This is from the book jacket. Presumably the publisher regards it as a good summary of the book’s main point.
“I will, in what follows, try to maintain the position that there is nothing wrong with viewing man as an information processor (or indeed as anything else) nor with attempting to understand him from that perspective, providing, however, that we never act as though any single perspective can comprehend the whole man.” - page 140. We can certainly live with that, but
“Not only has our unbounded feeding on science caused us to become dependent on it, but, as happens with many other drugs taken in increasing dosages, science has been gradually converted into a slow acting poison”. - page 13. These are qualified by
“I argue for the rational use of science and technology, not for its mystification, let alone its abandonment”, - page 256
In reference to the proposal for a moratorium on certain experiments with recombinant DNA because they might be dangerous, we have “Theirs is certainly a step in the right direction, and their initiative is to be applauded. Still, one may ask, why do they feel they have to give a reason for what they recommend at all? Is not the overriding obligation on men, including men of science, to exempt life itself from the madness of treating everything as an object, a sufficient reason, and one that does not even have to be spoken? Why does it have to be explained? It would appear that even the noblest acts of the most well-meaning people are poisoned by the corrosive climate of values of our time.” Is Weizenbaum against all experimental biology or even all experiments with DNA? I would hesitate to conclude so from this quote; he may say the direct opposite somewhere else. Weizenbaum’s goal of getting lines of research abandoned without even having to give a reason seems unlikely to be achieved except in an atmosphere that combines public hysteria and bureaucratic power. This has happened under conditions of religious enthusiasm and in Nazi Germany, in Stalinist Russia and in the China of the “Cultural Revolution”. Most likely it won’t happen in America.
“Those who know who and what they are do not need to ask what they should do.” - page 273. Let me assure the reader that there is nothing in the book that offers any way to interpret this pomposity. I take it as another plea to be free of the bondage of having to give reasons for his denunciations.
The menace of such grandiloquent precepts is that they require a priesthood to apply them to particular cases, and would-be priests quickly crystallize around any potential center of power. A corollary of this is that people can be attacked for what they are rather than for anything specific they have done. The April 1976 issue of Ms. has a poignant illustration of this in an article about “trashing”.
“An individual is dehumanized whenever he is treated as less than a whole person”. - page 266. This is also subject to priestly interpretation as in the encounter group movement.
“The first kind [of computer application] I would call simply obscene. These are ones whose very contemplation ought to give rise to feelings of disgust in every civilized person. The proposal I have mentioned, that an animal’s visual system and brain be coupled to computers, is an example. It represents an attack on life itself. One must wonder what must have happened to the proposers’ perception of life, hence to their perceptions of themselves as part of the continuum of life, that they can even think of such a thing, let alone advocate it”. No argument is offered that might be answered, and no attempt is made to define criteria of acceptability. I think Weizenbaum and the scientists who have praised the book may be surprised at some of the repressive uses to which the book will be put. However, they will be able to point to passages in the book with quite contrary sentiments, so the repression won’t be their fault.
2. BUT HERE’S A TRY AT SUMMARIZING
As these inconsistent passages show, it isn’t easy to determine Weizenbaum’s position, but the following seem to be the book’s main points:
1. Computers cannot be made to reason usefully about human affairs.
This is supported by quoting overoptimistic predictions by computer scientists and giving examples of non-verbal human communication. However, Weizenbaum doesn’t name any specific task that computers cannot carry out, because he wishes “to avoid the unnecessary, interminable, and ultimately sterile exercise of making a catalogue of what computers will and will not be able to do, either here and now or ever”. It is also stated that human and machine reasoning are incomparable and that the sensory experience of a human is essential for human reasoning.
2. There are tasks that computers should not be programmed to do.
Some are tasks Weizenbaum thinks shouldn’t be done at all - mostly for new left reasons. One may quarrel with his politics, and I do, but obviously computers shouldn’t do what shouldn’t be done. However, Weizenbaum also objects to computer hookups to animal brains and computer conducted psychiatric interviews. As to the former, I couldn’t tell whether he is an antivivisectionist, but he seems to have additional reasons for calling them “obscene”. The objection to computers doing psychiatric interviews also has a component beyond the conviction that they would necessarily do it badly. Thus he says, “What can the psychiatrist’s image of his patient be when he sees himself, as a therapist, not as an engaged human being acting as a healer, but as an information processor following rules, etc.?” This seems like the Renaissance-era religious objections to dissecting the human body that came up when science revived. Even the Popes eventually convinced themselves that regarding the body as a machine for scientific or medical purposes was quite compatible with regarding it as the temple of the soul. Recently they have taken the same view of studying mental mechanisms for scientific or psychiatric purposes.
3. Science has led people to a wrong view of the world and of life.
The view is characterized as mechanistic, and the example of clockwork is given. (It seems strange for a computer scientist to give this example, because the advance of the computer model over older mechanistic models is that computers can and clockwork can’t make decisions.) Apparently analysis of a living system as composed of interacting parts rather than treating it as an unanalyzed whole is bad.
4. Science is not the sole or even main source of reliable general knowledge.
However, he doesn't propose any other sources of knowledge or say what the limits of scientific knowledge are, except to characterize certain thoughts as “obscene”.
5. Certain people and institutions are attacked.
These include the Department of “Defense” (sic), Psychology Today, the New York Times Data Bank, compulsive computer programmers, Kenneth Colby, Marvin Minsky, Roger Schank, Allen Newell, Herbert Simon, J.W. Forrester, Edward Fredkin, B.F. Skinner, Warren McCulloch (until he was old), Laplace and Leibniz.
6. Certain political and social views are taken for granted.
The view that U.S. policy in Vietnam was “murderous” is used to support an attack on “logicality” (as opposed to “rationality”) and the view of science as a “slow acting poison”. The phrase “It may be that the people’s cultivated and finally addictive hunger for private automobiles..." (p.30) makes psychological, sociological, political, and technological presumptions all in one phrase. Similarly, “Men could instead choose to have truly safe automobiles, decent television, decent housing for everyone, or comfortable, safe, and widely distributed mass transportation.” presumes wide agreement about what these things are, what is technologically feasible, what the effects of changed policies would be, and what activities aimed at changing people’s taste are permissible for governments.
3. THE ELIZA EXAMPLE
Perhaps the most interesting part of the book is the account of his own program ELIZA that parodies Rogerian non-directive psychotherapy and his anecdotal account of how some people ascribe intelligence and personality to it. In my opinion, it is quite natural for people who don’t understand the notion of algorithm to imagine that a computer computes analogously to the way a human reasons. This leads to the idea that accurate computation entails correct reasoning and even to the idea that computer malfunctions are analogous to human neuroses and psychoses. Actually, programming a computer to draw interesting conclusions from premises is very difficult and only limited success has been attained. However, the effect of these natural misconceptions shouldn't be exaggerated; people readily understand the truth when it is explained, especially when it applies to a matter that concerns them. In particular, when an executive excuses a mistake by saying that he placed excessive faith in a computer, a certain skepticism is called for.
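For readers who have never seen ELIZA, its mechanism can be suggested in a few lines of code. This is a minimal sketch in the spirit of the program, not Weizenbaum's actual script; the keywords, templates, and function names below are invented for illustration:

```python
import re

# A minimal ELIZA-style sketch: the "therapist" matches a keyword pattern,
# reflects pronouns in the captured phrase, and otherwise falls back to a
# stock non-directive prompt. That is the whole of its "understanding".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's response, or a neutral fallback."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # non-directive fallback

print(respond("I am desperate for love"))
# -> How long have you been desperate for love?
```

The point of the sketch is how little is there: a keyword match, pronoun reflection, and a canned fallback suffice to sustain the illusion of an attentive listener.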
Colby’s (1973) study is interesting in this connection, but the interpretation below is mine. Colby had psychiatrists interview patients over a teletype line and also had them interview his PARRY program that simulates a paranoid. Other psychiatrists were asked to decide from the transcripts whether the interview was with a man or with a program, and they did no better than chance. However, since PARRY is incapable of the simplest causal reasoning, if you ask, “How do you know the people following you are Mafia” and get a reply that they look like Italians, this must be a man, not PARRY. Curiously, it is easier to imitate (well enough to fool a psychiatrist) the emotional side of a man than his intellectual side. Probably the subjects expected the machine to have more logical ability, and this expectation contributed to their mistakes. Alas, random selection from the directory of the Association for Computing Machinery did no better.
It seems to me that ELIZA and PARRY show only that people, including psychiatrists, often have to draw conclusions on slight evidence, and are therefore easily fooled. If I am right, two sentences of instruction would allow them to do better.
In his 1966 paper on ELIZA (cited as 1965), Weizenbaum writes,
“One goal for an augmented ELIZA program is thus a system which already has access to a store of information about some aspect of the real world and which, by means of conversational interaction with people, can reveal both what it knows, i.e. behave as an information retrieval system, and where its knowledge ends and needs to be augmented. Hopefully the augmentation of its knowledge will also be a direct consequence of its conversational experience. It is precisely the prospect that such a program will converse with many people and learn something from each of them which leads to the hope that it will prove an interesting and even useful conversational partner.” Too bad he didn’t successfully pursue this goal; no-one else has. I think success would have required a better understanding of formalization than is exhibited in the book.
4. WHAT DOES HE SAY ABOUT COMPUTERS?
While Weizenbaum’s main conclusions concern science in general and are moralistic in character, some of his remarks about computer science and AI are worthy of comment.
1. He concludes that since a computer cannot have the experience of a man, it cannot understand a man. There are three points to be made in reply. First, humans share each other’s experiences and those of machines or animals only to a limited extent. In particular, men and women have different experiences. Nevertheless, it is common in literature for a good writer to show greater understanding of the experience of the opposite sex than a poorer writer of that sex. Second, the notion of experience is poorly understood; if we understood it better, we could reason about whether a machine could have a simulated or vicarious experience normally confined to humans. Third, what we mean by understanding is poorly understood, so we don’t yet know how to define whether a machine understands something or not.
2. Like his predecessor critics of artificial intelligence, Taube, Dreyfus and Lighthill, Weizenbaum is impatient, implying that if the problem hasn’t been solved in twenty years, it is time to give up. Genetics took about a century to go from Mendel to the genetic code for proteins, and still has a long way to go before we will fully understand the genetics and evolution of intelligence and behavior. Artificial intelligence may be just as difficult. My current answer to the question of when machines will reach human-level intelligence is that a precise calculation shows that we are between 1.7 and 3.1 Einsteins and .3 Manhattan Projects away from the goal. However, the current research is producing the information on which the Einstein will base himself and is producing useful capabilities all the time.
3. The book confuses computer simulation of a phenomenon with its formalization in logic. A simulation is only one kind of formalization and not often the most useful, even to a computer. In the first place, logical and mathematical formalizations can use partial information about a system insufficient for a simulation. Thus the law of conservation of energy tells us much about possible energy conversion systems before we define even one of them. Even when a simulation program is available, other formalizations are necessary even to make good use of the simulation. This review isn’t the place for a full explanation of the relations between these concepts.
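The conservation-of-energy example can be made concrete. Without specifying any particular device, the law already constrains all of them; the symbols below are mine, purely for illustration. For any converter absorbing energy $E_{\mathrm{in}}$ and delivering useful energy $E_{\mathrm{out}}$,

```latex
E_{\mathrm{out}} \;=\; E_{\mathrm{in}} - E_{\mathrm{lost}} \;\le\; E_{\mathrm{in}},
\qquad
\eta \;=\; \frac{E_{\mathrm{out}}}{E_{\mathrm{in}}} \;\le\; 1 ,
```

a conclusion available to reasoning long before anyone could write a simulation of any particular mechanism.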
Like Punch’s famous curate’s egg, the book is good in parts. Thus it raises the following interesting issues:
1. What would it mean for a computer to hope or be desperate for love?
Answers to these questions depend on being able to formalize (not simulate) the phenomena in question. My guess is that adding a notion of hope to an axiomatization of belief and wanting might not be difficult. The study of propositional attitudes in philosophical logic points in that direction.
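The kind of axiomatization meant here can be sketched in the notation of philosophical logic. The following definition is my own illustration, not anything proposed in the review or in any standard system: writing $\mathrm{Want}_a$ and $\mathrm{Bel}_a$ for the wanting and belief operators of an agent $a$, and $\Diamond$ for possibility, one might try

```latex
\mathrm{Hope}_a(p) \;\equiv\;
\mathrm{Want}_a(p) \;\wedge\; \mathrm{Bel}_a(\Diamond p) \;\wedge\; \neg\,\mathrm{Bel}_a(p)
```

that is, an agent hopes for $p$ when it wants $p$, believes $p$ possible, and does not yet believe $p$ holds. Desperation might then be rendered as hope whose possibility conjunct is weakening.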
2. Do differences in experience make human and machine intelligence necessarily so different that it is meaningless to ask whether a machine can be more intelligent than a man?
My opinion is that comparison will turn out to be meaningful. After all, most people have no doubt that humans are more intelligent than turkeys. Weizenbaum’s examples of the dependence of human intelligence on sensory abilities seem even refutable, because we recognize no fundamental difference in humanness in people who are severely handicapped sensorily, e.g. the deaf, dumb and blind or paraplegics.
5. IN DEFENSE OF THE UNJUSTLY ATTACKED - SOME OF WHOM ARE INNOCENT
Here are defenses of Weizenbaum’s targets. They are not guaranteed to entirely suit the defendees.
Weizenbaum’s conjecture that the Defense Department supports speech recognition research in order to be able to snoop on telephone conversations is biased, baseless, false, and seems motivated by political malice. The committee of scientists that proposed the project advanced quite different considerations, and the high officials who made the final decisions are not ogres. Anyway, their other responsibilities leave them no time for complicated and devious considerations. I put this one first, because I think the failure of many scientists to defend the Defense Department against attacks they know are unjustified is unjust in itself, and furthermore has harmed the country.
Weizenbaum doubts that computer speech recognition will have cost-effective applications beyond snooping on phone conversations. He also says, “There is no question in my mind that there is no pressing human problem that will be more easily solved because such machines exist”. I worry more about whether the programs can be made to work before the sponsor loses patience. Once they work, costs will come down. Winograd pointed out to me that many possible household applications of computers may not be feasible without some computer speech recognition. One needs to think both about how to solve recognized problems and about opportunities to put new technological possibilities to good use. The telephone was not invented by a committee considering already identified problems of communication.
Referring to Psychology Today as a cafeteria simply excites the snobbery of those who would like to consider their psychological knowledge to be above the popular level. So far as I know, professional and academic psychologists welcome the opportunity offered by Psychology Today to explain their ideas to a wide public. They might even buy a cut-down version of Weizenbaum’s book if he asks them nicely. Hmm, they might even buy this review.
Weizenbaum has invented a New York Times Data Bank different from the one operated by the New York Times - and possibly better. The real one stores abstracts written by humans and doesn't use the tapes intended for typesetting machines. As a result the user has access only to abstracts and cannot search on features of the stories themselves, i.e. he is at the mercy of what the abstractors thought was important at the time.
Using computer programs as psychotherapists, as Colby proposed, would be moral if it would cure people. Unfortunately, computer science isn’t up to it, and maybe the psychiatrists aren’t either.
I agree with Minsky in criticizing the reluctance of art theorists to develop formal theories. George Birkhoff’s formal theory was probably wrong, but he shouldn’t have been criticized for trying. The problem seems very difficult to me, and I have made no significant progress in responding to a challenge from Arthur Koestler to tell how a computer program might make or even recognize jokes. Perhaps some reader of this review might have more success.
There is a whole chapter attacking “compulsive computer programmers” or “hackers”. This mythical beast lives in the computer laboratory, is an expert on all the ins and outs of the time-sharing system, elaborates the time-sharing system with arcane features that he never documents, and is always changing the system before he even fixes the bugs in the previous version. All these vices exist, but I can’t think of any individual who combines them, and people generally outgrow them. As a laboratory director, I have to protect the interests of people who program only part time against tendencies to over-complicate the facilities. People who spend all their time programming and who exchange information by word of mouth sometimes have to be pressed to make proper writeups. The other side of the issue is that we professors of computer science sometimes lose our ability to write actual computer programs through lack of practice and envy younger people who can spend full time in the laboratory. The phenomenon is well known in other sciences and in other human activities.
Weizenbaum attacks the Yale computer linguist, Roger Schank, as follows -- the inner quotes are from Schank: “What is contributed when it is asserted that ‘there exists a conceptual base that is interlingual, onto which linguistic structures in a given language map during the understanding process and out of which such structures are created during generation [of linguistic utterances)’? Nothing at all . For the term ‘conceptual base’ could perfectly well be replaced by the word ‘something’. And who could argue with that so—transformed statement?” Weizenbaum goes on to say that the real scientific problem “remains as untouched as ever”. On the next page he says that unless the “Schank -- like scheme” understood the sentence “Will you come to dinner with me this evening ?” to mean “a shy young man’s desperate longing for love", then the sense in which the system “understands” is “about as weak as the sense in which ELIZA “understood". This good example raises interesting issues and seems to call for some distinctions. Full understanding of the sentence indeed results in knowing about the young man’s desire for love, but it would seem that there is a useful lesser level of understanding in which the machine would know only that he would like her to come to dinner.
Weizenbaumは、イェール大学のコンピュータ言語学者 Roger Schank を次のように攻撃します -- 内側の引用はSchankからのものです: 「『理解過程において所与の言語の言語構造がそこへ写像され、(言語的発話の)生成過程においてそこからそうした構造が作り出される、言語間共通の概念的基盤が存在する』と主張することで何が寄与されるのか?全く何も寄与されない。『概念的基盤』という語は『何か』という語で完全に置き換えられるからである。そのように変換された主張に誰が異論を唱えられようか?」Weizenbaumはさらに、現実の科学的問題は「今までと変わらず手つかずのままだ」と述べます。次のページで彼は、「Schank風のスキーム」が「今晩私と一緒に夕食に来ませんか?」という文を「内気な若者の愛への絶望的な渇望」を意味すると理解するのでない限り、システムが「理解する」という意味は ELIZA が「理解した」という意味と同じくらい弱いものだと述べています。この良い例は興味深い問題を提起しており、いくつかの区別が必要であるように思われます。文を完全に理解すれば確かに若者の愛への欲求を知ることになりますが、機械が「彼は彼女に夕食に来てほしいと思っている」ということだけを知るという、有用なより低いレベルの理解もあるように思われます。
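The distinction between ELIZA's "weak" understanding and the useful lesser level McCarthy describes can be made concrete. The following is a minimal illustrative sketch in the style of ELIZA's keyword-and-template matching -- the rules and replies here are invented for illustration, not taken from Weizenbaum's actual script. The program reacts only to a surface pattern and echoes back the literal fragment it captured; nothing about motives or longing is represented anywhere.

```python
import re

# Invented ELIZA-style rules: each pairs a surface pattern with a canned
# reassembly template.  The program "understands" only the captured text.
RULES = [
    (re.compile(r"will you (.+?) with me\b.*\?", re.IGNORECASE),
     "Why do you want me to {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
]

def respond(sentence: str) -> str:
    """Return a canned reply keyed off the first matching surface pattern."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no keyword is recognized

print(respond("Will you come to dinner with me this evening?"))
# prints "Why do you want me to come to dinner?"
```

Note that the captured fragment "come to dinner" is exactly the lesser level of understanding McCarthy concedes: the machine knows only that he would like her to come to dinner, and the shy young man's desperate longing for love is invisible to it.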
Contrast Weizenbaum’s demanding, more-human-than-thou attitude to Schank and Winograd with his respectful and even obsequious attitude to Chomsky. We have “The linguist’s first task is therefore to write grammars, that is, sets of rules, of particular languages, grammars capable of characterizing all and only the grammatically admissible sentences of those languages, and then to postulate principles from which crucial features of all such grammars can be deduced. That set of principles would then constitute a universal grammar. Chomsky’s hypothesis is, to put it another way, that the rules of such a universal grammar would constitute a kind of projective description of important aspects of the human mind.” There is nothing here demanding that the universal grammar take into account the young man’s desire for love. As far as I can see, Chomsky is just as much a rationalist as we artificial intelligentsia.
Chomsky’s goal of a universal grammar and Schank’s goal of a conceptual base are similar, except that Schank’s ideas are further developed, and the performance of his students’ programs can be compared with reality. I think they will require drastic revision and may not be on the right track at all, but then I am pursuing a rather different line of research concerning how to represent the basic facts that an intelligent being must know about the world. My idea is to start from epistemology rather than from language, regarding their linguistic representation as secondary. This approach has proved difficult, has attracted few practitioners, and has led to few computer programs, but I still think it’s right.
Weizenbaum approves of the Chomsky school’s haughty attitude towards Schank, Winograd and other AI based language researchers. On page 184, he states, “many linguists, for example, Noam Chomsky, believe that enough thinking about language remains to be done to occupy them usefully for yet a little while, and that any effort to convert their present theories into computer models would, if attempted by the people best qualified, be a diversion from the main task. And they rightly see no point to spending any of their energies studying the work of the hackers.”
This brings the chapter on “compulsive computer programmers” alias “hackers” into a sharper focus. Chomsky’s latest book Reflections on Language makes no reference to the work of Winograd, Schank, Charniak, Wilks, Bobrow or William Woods to name only a few of those who have developed large computer systems that work with natural language and who write papers on the semantics of natural language. The actual young computer programmers who call themselves hackers and who come closest to meeting Weizenbaum’s description don’t write papers on natural language. So it seems that the hackers whose work need not be studied are Winograd, Schank, et al., who are professors and senior scientists. The Chomsky school may be embarrassed by the fact that it has only recently arrived at the conclusion that the semantics of natural language is more fundamental than its syntax, while AI based researchers have been pursuing this line for fifteen years.
The outside observer should be aware that to some extent this is a pillow fight within M.I.T. Chomsky and Halle are not to be dislodged from M.I.T. and neither is Minsky - whose students have pioneered the AI approach to natural language. Schank is quite secure at Yale. Weizenbaum also has tenure. However, some assistant professorships in linguistics may be at stake, especially at M.I.T.
Allen Newell and Herbert Simon are criticized for being overoptimistic and are considered morally defective for attempting to describe humans as difference-reducing machines. Simon’s view that the human is a simple system in a complex environment is singled out for attack. In my opinion, they were overoptimistic, because their GPS model on which they put their bets wasn’t good enough. Maybe Newell’s current production system models will work out better. As to whether human mental structure will eventually turn out to be simple, I vacillate but incline to the view that it will turn out to be one of the most complex biological phenomena.
I regard Forrester’s models as incapable of taking into account qualitative changes, and the world models they have built as defective even in their own terms, because they leave out saturation-of-demand effects that cannot be discovered by curve-fitting as long as a system is rate-of-expansion limited. Moreover, I don’t accept his claim that his models are better suited than the unaided mind in “interpreting how social systems behave”, but Weizenbaum’s sarcasm on page 246 is unconvincing. He quotes Forrester, “[desirable modes of behavior of the social system] seem to be possible only if we have a good understanding of the system dynamics and are willing to endure the self-discipline and pressures that must accompany the desirable mode”. Weizenbaum comments, “There is undoubtedly some interpretation of the words ‘system’ and ‘dynamics’ which would lend a benign meaning to this observation”. Sorry, but it looks ok to me provided one is suitably critical of Forrester’s proposed social goals and the possibility of making the necessary assumptions and putting them into his models.
Skinner’s behaviorism that refuses to assign reality to people’s internal state seems wrong to me, but we can’t call him immoral for trying to convince us of what he thinks is true.
Weizenbaum quotes Edward Fredkin, former director of Project MAC, and the late Warren McCulloch of M.I.T. without giving their names (pp. 241 and 240). Perhaps he thinks a few puzzles will make the book more interesting, and this is so. Fredkin’s plea for research in automatic programming seems to overestimate the extent to which our society currently relies on computers for decisions. It also overestimates the ability of the faculty of a particular university to control the uses to which technology will be put, and it underestimates the difficulty of making knowledge based systems of practical use. Weizenbaum is correct in pointing out that Fredkin doesn’t mention the existence of genuine conflicts in society, but only the new left sloganeering elsewhere in the book gives a hint as to what he thinks they are and how he proposes to resolve them.
As for the quotation from (McCulloch 1956), Minsky tells me “this is a brave attempt to find a dignified sense of freedom within the psychological determinism morass”. Probably this can be done better now, but Weizenbaum wrongly implies that McCulloch’s 1956 effort is to his moral discredit.
Finally, Weizenbaum attributes to me two statements -- both from oral presentations which I cannot verify. One of them is “The only reason we have not yet succeeded in simulating every aspect of the real world is that we have been lacking a sufficiently powerful logical calculus. I am working on that problem”. This statement doesn’t express my present opinion or my opinion in 1973 when I am alleged to have expressed it in a debate, and no-one has been able to find it in the video-tape of the debate.
We can’t simulate “every aspect of the real world”, because the initial state information is unavailable, the laws of motion are imperfectly known, and the calculations for a simulation are too extensive. Moreover, simulation wouldn’t necessarily answer our questions. Instead, we must find out how to represent in the memory of a computer the information about the real world that is actually available to a machine or organism with given sensory capability, and also how to represent a means of drawing those useful conclusions about the effects of courses of action that can be correctly inferred from the attainable information. Having a sufficiently powerful logical calculus is an important part of this problem -- but one of the easier parts.
[Note added September 1976 -- This statement has been quoted in a large fraction of the reviews of Weizenbaum’s book (e.g. in Datamation and Nature) as an example of the arrogance of the “artificial intelligentsia”. Weizenbaum firmly insisted that he heard it in the Lighthill debate and cited his notes as corroboration, but later admitted (in Datamation) after reviewing the tape that he didn’t, but claimed I must have said it in some other debate. I am confident I didn’t say it, because it contradicts views I have held and repeatedly stated since 1959. My present conjecture is that Weizenbaum heard me say something on the importance of formalization, couldn’t quite remember what, and quoted “what McCarthy must have said” based on his own misunderstanding of the relation between computer modeling and formalization. (His two chapters on computers show no awareness of the difference between declarative and procedural knowledge or of the discussions in the AI literature of their respective roles). Needless to say, the repeated citation by reviewers of a pompous statement that I never made and which is in opposition to the view that I think represents my major contribution to AI is very offensive].
The second quotation from me is the rhetorical question, “What do judges know that we cannot tell a computer?” I’ll stand on that if we make it “eventually tell” and especially if we require that it be something that one human can reliably teach another.
6. A SUMMARY OF POLEMICAL SINS
6. 論争上の罪の概要
The speculative sections of the book contain numerous dubious little theories, such as this one about the dehumanizing effect of the invention of the clock: “The clock had created literally a new reality; and that is what I meant when I said earlier that the trick man turned that prepared the scene for the rise of modern science was nothing less than the transformation of nature and of his perception of reality. It is important to realize that this newly created reality was and remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted the old reality. The feeling of hunger was rejected as a stimulus for eating; instead one ate when an abstract model had achieved a certain state, i.e. when the hand of a clock pointed to certain marks on the clock’s face (the anthropomorphism here is highly significant too), and similarly for signals for sleep and rising, and so on.”
This idealization of primitive life is simply thoughtless. Like modern man, primitive man ate when the food was ready, and primitive man probably had to start preparing it even further in advance. Like modern man, primitive man lived in families whose members are no more likely to become hungry all at once than are the members of a present family.
I get the feeling that in toppling this microtheory I am not playing the game; the theory is intended only to provide an atmosphere, and like the reader of a novel, I am supposed to suspend disbelief. But the contention that science has driven us from a psychological Garden of Eden depends heavily on such word pictures.
By the way, I recall from my last sabbatical at M.I.T. that the feeling of hunger is more often the direct social stimulus for eating for the “hackers” deplored in Chapter 4 than it could have been for primitive man. Often on a crisp New England night, even as the clock strikes three, I hear them call to one another, messages flash on the screens, a flock of hackers magically gathers, and the whole picturesque assembly rushes chattering off to Chinatown.
I find the book substandard as a piece of polemical writing in the following respects:
1. The author has failed to work out his own positions on the issues he discusses. Making an extreme statement in one place and a contradictory statement in another is no substitute for trying to take all the factors into account and reach a considered position. Unsuspicious readers can come away with a great variety of views, and the book can be used to support contradictory positions.
2. The computer linguists -- Winograd, Schank, et al. -- are denigrated as hackers and compulsive computer programmers by innuendo.
3. One would like to know more precisely what biological and psychological experiments and computer applications he finds acceptable. Reviewers have already drawn a variety of conclusions on this point.
4. The terms “authentic”, “obscene”, and “dehumanization” are used as clubs. This is what mathematicians call “proof by intimidation”.
5. The book encourages a snobbery that has no need to argue for its point of view but merely utters code words, on hearing which the audience is supposed to applaud or hiss as the case may be. The New Scientist reviewer certainly salivates in most of the intended places.
6. Finally, when moralizing is both vehement and vague, it invites authoritarian abuse either by existing authority or by new political movements. Imagine, if you can, that this book were the bible of some bureaucracy, e.g. an Office of Technology Assessment, that acquired power over the computing or scientific activities of a university, state, or country. Suppose Weizenbaum’s slogans were combined with the bureaucratic ethic that holds that any problem can be solved by a law forbidding something and a bureaucracy of eager young lawyers to enforce it. Postulate further a vague Humane Research Act and a “public interest” organization with more eager young lawyers suing to get judges to legislate new interpretations of the Act. One can see a laboratory needing more lawyers than scientists and a Humane Research Administrator capable of forbidding or requiring almost anything.
I see no evidence that Weizenbaum foresees his work being used in this way; he doesn’t use the phrase laissez innover, which is the would-be science bureaucrat’s analogue of the economist’s laissez faire, and he never uses the indefinite phrase “it should be decided”, which is a common expression of the bureaucratic ethic. However, he has certainly given his fellow computer scientists at least some reason to worry about potential tyranny.
Let me conclude this section with a quotation from Andrew D. White, the first president of Cornell University, that seems applicable to the present situation - not only in computer science, but also in biology: “In all modern history, interference with science in the supposed interest of religion, no matter how conscientious such interference may have been, has resulted in the direst evils both to religion and to science, and invariably; and, on the other hand, all untrammelled scientific investigation, no matter how dangerous to religion some of its stages may have seemed for the time to be, has invariably resulted in the highest good both of religion and of science”. Substitute morality for religion and the parallel is clear. Frankly, the feebleness of the reaction to attacks on scientific freedom worries me more than the strength of the attacks.
7. WHAT WORRIES ABOUT COMPUTERS ARE WARRANTED?
7. コンピュータについてのどのような懸念が正当なものか?
Grumbling about Weizenbaum’s mistakes and moralizing is not enough. Genuine worries prompted the book, and many people share them. Here are the genuine concerns that I can identify and the opinions of one computer scientist about their resolution: What is the danger that the computer will lead to a false model of man? What is the danger that computers will be misused? Can human-level artificial intelligence be achieved? What, if any, motivational characteristics will it have? Would the achievement of artificial intelligence be good or bad for humanity?
1. Does the computer model lead to a false model of man?
1. コンピュータモデルは、人間の誤ったモデルにつながるか?
Historically, the mechanistic model of life and the world followed animistic models, in accordance with which priests and medicine men tried to correct malfunctions of the environment and man by inducing spirits to behave better. Replacing them by mechanistic models replaced shamanism by medicine. Roszak explicitly would like to bring these models back, because he finds them more “human”, but he ignores the sad fact that they don’t work, because the world isn’t constructed that way. The pre-computer mechanistic models of the mind were, in my opinion, unsuccessful, but I think the psychologists pursuing computational models of mental processes may eventually develop a really beneficial psychiatry.
Philosophical and moral thinking hasn’t yet found a model of man that relates human beliefs and purposes to the physical world in a plausible way. Some of the unsuccessful attempts have been more mechanistic than others. Both mechanistic and non-mechanistic models have led to great harm when made the basis of political ideology, because they have allowed tortuous reasoning to justify actions that simple human intuition regards as immoral. In my opinion, the relation of beliefs, purposes and wants to the physical world is a complicated but ultimately solvable problem. Computer models can help solve it, and can provide criteria that will enable us to reject false solutions. The latter is more important for now, and computer models are already hastening the decay of dialectical materialism in the Soviet Union.
2. What is the danger that computers will be misused?
2.コンピュータが悪用される危険性とは何か?
Up to now, computers have been just another labor-saving technology. I don’t agree with Weizenbaum’s acceptance of the claim that our society would have been inundated by paper work without computers. Without computers, people would work a little harder and get a little less for their work. However, when home terminals become available, social changes of the magnitude of those produced by the telephone and automobile will occur. I have discussed them elsewhere, and I think they will be good - as were the changes produced by the automobile and the telephone. Tyranny comes from control of the police coupled with a tyrannical ideology; data banks will be a minor convenience. No dictatorship yet has been overthrown for lack of a data bank.
One’s estimate of whether technology will work out well in the future is correlated with one’s view of how it worked out in the past. I think it has worked out well - e.g. cars were not a mistake - and am optimistic about the future. I feel that much current ideology is a combination of older anti-scientific and anti-technological views with new developments in the political technology of instigating and manipulating fears and guilt feelings.
3. What motivations will artificial intelligence have?
3.人工知能にはどんな動機があるか?
It will have what motivations we choose to give it. Those who finally create it should start by motivating it only to answer questions and should have the sense to ask for full pictures of the consequences of alternate actions rather than simply how to achieve a fixed goal, ignoring possible side-effects. Giving it human motivational structure with its shifting goals sensitive to physical state would require a deliberate effort beyond that required to make it behave intelligently.
4. Will artificial intelligence be good or bad?
4.人工知能は良いのだろうか?悪いのだろうか?
Here we are talking about machines with the same range of intellectual abilities as are possessed by humans. However, the science fiction vision of robots with almost precisely the ability of a human is quite unlikely, because the next generation of computers or even hooking computers together would produce an intelligence that might be qualitatively like that of a human, but thousands of times faster. What would it be like to be able to put a hundred years’ thought into every decision? I think it is impossible to say whether qualitatively better answers would be obtained; we will have to try it and see.
The achievement of above-human-level artificial intelligence will open to humanity an incredible variety of options. We cannot now fully envisage what these options will be, but it seems apparent that one of the first uses of high-level artificial intelligence will be to determine the consequences of alternate policies governing its use. I think the most likely variant is that man will use artificial intelligence to transform himself, but once its properties and the consequences of its use are known, we may decide not to use it. Science would then be a sport like mountain climbing; the point would be to discover the facts about the world using some stylized limited means. I wouldn’t like that, but once man is confronted by the actuality of full AI, they may find our opinion as relevant to them as we would find the opinion of Pithecanthropus about whether subsequent evolution took the right course.
5. What shouldn’t computers be programmed to do?
5. コンピュータにさせるようプログラムすべきでないこととは何か。
Obviously one shouldn’t program computers to do things that shouldn’t be done. Moreover, we shouldn’t use programs to mislead ourselves or other people. Apart from that, I find none of Weizenbaum’s examples convincing. However, I doubt the advisability of making robots with human-like motivational and emotional structures that might have rights and duties independently of humans. Moreover, I think it might be dangerous to make a machine that evolved intelligence by responding to a program of rewards and punishments unless its trainers understand the intellectual and motivational structure being evolved.
All these questions merit and have received more extensive discussion, but I think the only rational policy now is to expect the people confronted by the problem to understand their best interests better than we now can. Even if full AI were to arrive next year, this would be right. Correct decisions will require an intense effort that cannot be mobilized to consider an eventuality that is still remote. Imagine asking the presidential candidates to debate on TV what each of them would do about each of the forms that full AI might take.
References:
McCulloch, W.S. (1956). Toward some circuitry of ethical robots or an observational science of the genesis of social evaluation in the mind-like behavior of artifacts. Acta Biotheoretica, XI, parts 3/4, 147-156.
This review is filed as WEIZEN.REV[PUB,JMC] at SU-AI on the ARPA net.
Any comments sent to JMC@SU-AI will be stored in the directory PUB,JMC, also known as McCarthy’s Electric Magazine.
The comment files will be designated WEIZEN.1, WEIZEN.2, etc.
--
John McCarthy
Artificial Intelligence Laboratory
Stanford, California 94305
September 16, 1976
---------+---------+---------+---------+---------+---------+---------+---------+