ELIZA -- 人間と機械の自然言語コミュニケーション研究のためのコンピュータプログラム

ジョゼフ・ワイゼンバウムの ELIZA の論文です。原文は

http://web.stanford.edu/class/cs124/p36-weizenabaum.pdf

から閲覧できます。まだテキスト化が不十分ですが、まずは公開しておきます。

どうやらこの論文にはロング・バージョンとショート・バージョンがあるようです。こちらはロング・バージョンで、1960年代のソフトウェア論文らしく、プログラムの仕様がかなり詳しく記述されています。末尾には有名なELIZAスクリプト DOCTOR のソースも掲載されています。(このソースでは正しく動作しないという噂もありますが)

ちなみに、ソフトウェアをソースコードやバイナリコードの形で広く配布することが一般的になったのは、もっと後の時代のようです。例えばELIZAのような研究目的のソフトウェア実装の場合、その仕様を論文やテクニカルノートとして記述し、第三者はそのコピーを入手して、手元のコンピュータ環境で等価な機能を持つソフトウェアを自ら実装する、といったことが行われていました。というのも、当時のコンピュータはメーカー毎はもちろんのこと、同一メーカーの製品であってもシリーズが異なると互換性が全くない有様で、他者からソースをもらっても、それを自らの環境で動かすためには「移植」という厄介な作業を行わなければならなかったからです。

当時、比較的小さなコンピュータリソースで動作し「コンピュータと対話できる」ELIZAは人気のソフトウェアであり、様々なコンピュータ環境に移植されたようです。その結果、様々な派生バージョンが登場することになりました。

2018/07/06 論文のテキストの入力が終わりました。

 

ELIZA -- A Computer Program For the Study of Natural Language Communication Between Man And Machine

Joseph Weizenbaum
Massachusetts Institute of Technology Department of Electrical Engineering
Cambridge, Mass.


Communications of the ACM Volume 9, Number 1 (January 1966): 36-45.


Abstract(要約)

ELIZA is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules. The fundamental technical problems with which ELIZA is concerned are:

ELIZAは、MITのMACタイムシェアリングシステム上で動作するプログラムで、人間とコンピュータの間のある種の自然言語による会話を可能にします。入力文は、入力テキストに現れるキーワードによって起動される分解ルールに基づいて分析されます。応答は、選択された分解ルールに関連付けられた再構築ルールによって生成されます。ELIZAが関わる基本的な技術的課題は次のとおりです。

 

  1. the identification of key words,
  2. the discovery of minimal context,
  3. the choice of appropriate transformations,
  4. generation of responses in the absence of keywords, and
  5. the provision of an editing capability for ELIZA "scripts".

 

  1. キーワードの識別
  2. 最小文脈の発見
  3. 適切な変換の選択
  4. キーワードがない場合の応答の生成、そして
  5. ELIZA"スクリプト"を編集する機能の提供

 

A discussion of some psychological issues relevant to the ELIZA approach as well as of future developments concludes the paper.
この論文の最後で、ELIZAのアプローチに関連したいくつかの心理学的問題と今後の展開について議論します。


Introduction(はじめに)

It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself "I could have written that". With that thought he moves the program in question from the shelf marked "intelligent" to that reserved for curios, fit to be discussed only with people less enlightened than he.
説明することは説明し去る(魔法を解いてしまう)ことだと言われています。この格言がこれほどよく当てはまる分野は、コンピュータ・プログラミング、特に発見的プログラミングや人工知能と呼ばれる領域をおいて他にありません。これらの領域では、マシンは驚異的な振る舞いをするように作られており、しばしば最も経験豊富な観察者さえも驚嘆させます。しかし、ひとたび特定のプログラムの仮面が剥がされ、その内部の仕組みが理解を促すのに十分平易な言葉で説明されると、その魔法は崩れ去ります。それは、それぞれは十分に理解可能な手続きの単なる寄せ集めにすぎないことが明らかになるのです。観察者は「私にも書けたはずだ」と心の中で呟きます。そう思った途端、彼は問題のプログラムを「知的」と記された棚から骨董品用の棚へと移し、自分より物を知らない人々とだけ議論するにふさわしいものとしてしまいます。


The object of this paper is to cause just such a reevaluation of the program about to be "explained". Few programs ever needed it more.
この論文の目的は、これから「説明」されるプログラムについて、まさにそのような再評価を引き起こすことです。これほどそれを必要としたプログラムはほとんどありません。


ELIZA Program(ELIZAプログラム)

ELIZA is a program which makes natural language conversation with a computer possible. Its present implementation is on the MAC time-sharing system at MIT. It is written in MAD-SLIP[4] for the IBM 7094. Its name was chosen to emphasize that it may be incrementally improved by its users, since its language abilities may be continually improved by a "teacher". Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.
ELIZAは、コンピュータとの自然な会話を可能にするプログラムです。現在の実装は、MITのMACタイムシェアリングシステム上で稼働しています。これは、IBM 7094 用の MAD-SLIP[4] で書かれています。その言語能力は「教師」によって絶え間なく改善されうるので、ユーザーによって段階的に改善されていく可能性を強調するためにこの名前が選ばれました。有名なピグマリオンのエリザのように、より上品に見えるようにすることができますが、その外見と現実の関係は劇作家の領域に止まっています。

For the present purpose it is sufficient to characterize the MAC system as one which permits an individual to operate a full scale computer from a remotely located typewriter. The individual operator has the illusion that he is the sole user of the computer complex, while in fact others may be "time-sharing" the system with him. What is important here is that the computer can read messages typed on the typewriter and respond by writing on the same instrument. The time between the computer's receipt of a message and the appearance of its response is a function of the program controlling the dialog and of such MAC system parameters as the number of users currently corresponding with the system. These latter parameters generally contribute so little to the overall response time that conversational interaction with the computer need never involve truly intolerable delays.
現在の目的においては、MACシステムを、個人が遠隔地にあるタイプライターからフルスケールのコンピュータを操作することを可能にする手段とみなせば十分です。個々のオペレータは、彼がコンピュータ複合体の唯一のユーザであると錯覚していますが、実際には他の人とシステムを「タイムシェアリング」しているかもしれません。ここで重要なことは、コンピュータがタイプライターで入力されたメッセージを読み込み、同じ装置に書くことで応答できることです。コンピュータがメッセージを受信してからその応答が現れるまでの時間は、対話を制御するプログラムと、その時点でシステムとやり取りしているユーザ数のようなMACシステムのパラメータに依存します。一般的にこれらの後者のパラメータは応答時間全体にはほとんど影響しないため、コンピュータとの対話のやり取りが、本当に耐え難いほどの遅れを伴うことは決してありません。

When in conversation with ELIZA, the user types in some statement or set of statements in natural language using normal punctuation and sentence structures. Only the question mark may not be used, because it is interpreted as a line delete character by the MAC system. The user's statement is terminated by a double carriage return which serves to turn control over to ELIZA. ELIZA then analyzes the user's statement and generates some response which it types out. Control is then again with the user.
ELIZAと会話するとき、ユーザは、通常の句読点や文章構造を使用して、自然言語による文章を入力します。疑問符だけは、MACシステムによって行削除文字として解釈されるため、使用できません。ユーザーの文章は、ELIZAに制御を移すためのダブルキャリッジリターンで終了します。次にELIZAはユーザーの文章を分析し、出力する応答を生成します。その後、制御は再度ユーザに返されます。

A typical conversation is the following:
典型的な会話は次のとおりです:

Men are all alike.
男性はすべて似ています。
IN WHAT WAY?
どのような点でですか?
They're always bugging us about something or other.
彼らはいつも何かについて私たちを悩ませています。
CAN YOU THINK OF A SPECIFIC EXAMPLE?
具体的な例を考えられますか?
Well, my boyfriend made me come here.
うーん、私のボーイフレンドは私をここに来させました。
YOUR BOYFRIEND MADE YOU COME HERE
あなたのボーイフレンドがあなたをここに来させたのですね
He says I'm depressed much of the time.
彼は、私がほとんどの時間落ち込んでいると言います。
I AM SORRY TO HEAR YOU ARE DEPRESSED
あなたが落ち込んでいると聞いて残念です。
It's true. I am unhappy
それは本当です。私は幸せではありません。
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
ここに来ることが、不幸でなくなる助けになると思いますか?
I need some help, that much seems certain.
私は助けが必要です。それだけは確かなようです。
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
助けが得られたとしたら、それはあなたにとって何を意味しますか?
Perhaps I could learn to get along with my mother.
おそらく、母とうまくやっていけるようになるかもしれません。
TELL ME MORE ABOUT YOUR FAMILY
あなたの家族についてもっと教えてください
My mother takes care of me.
私の母は私の世話をしてくれます。
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
あなたの家族の中で、他に誰があなたの世話をしてくれますか?
My father.
私のお父さん。
YOUR FATHER
あなたのお父さんですか?
You are like my father in some ways.
ある意味ではあなたは私の父のようです。
WHAT RESEMBLANCE DO YOU SEE
どのようなところが似ていますか?
You are not very aggressive but I think you don't want me to notice that.
あなたはあまり積極的ではありませんが、私にそれを気づかれたくないのだと思います。
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE
何があなたに私があまり積極的でないと考えさせるのですか?
You don't argue with me.
あなたは私と議論しないから。
WHY DO YOU THINK I DON'T ARGUE WITH YOU
なぜ私があなたと議論しないと思うのですか?
You are afraid of me.
あなたは私を恐れている。
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU
私があなたを恐れていると信じることは、あなたを満足させますか?
My father is afraid of everybody.
私の父はみんなを恐れている。
WHAT ELSE COMES TO YOUR MIND WHEN YOU THINK OF YOUR FATHER
あなたが父親を考えたときに他に何を思い出しますか?
Bullies.
いじめっ子たち。
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE
あなたのボーイフレンドがあなたをここに来させたという事実と何か関係がありますか?

The capitalized lines are the machine responses.
大文字の行はマシンからの応答です。

The gross procedure of the program is quite simple; the input is read and inspected for the presence of a keyword. When such a word is found, the sentence is transformed according to a rule associated with the keyword; if not, a content-free remark or, under certain conditions, an earlier transformation is retrieved. The text so computed or retrieved is then printed out.
プログラムの全体的なプロシージャは非常に簡単です。入力が読み取られ、キーワードの存在が検査されます。該当する単語が見つかると、入力文はそのキーワードに関連付けられたルールに従って変換されます。見つからない場合は、内容のない発言、あるいは特定の条件下では以前の変換結果が取り出されます。変換された、あるいは取り出されたテキストは、その後プリントアウトされます。

In detail, of course, the procedure sketched above is considerably more complex. Keywords, for example, may have a RANK or precedence number. The procedure is sensitive to such numbers in that it will abandon a keyword already found in the left-to-right scan of the text in favor of one having a higher rank. Also, the procedure recognizes a comma or period as a delimiter. Whenever either one is encountered and a keyword has already been found, all subsequent text is deleted from the input message. If no key has yet been found, the phrase or sentence to the left of the delimiter (as well as the delimiter itself) is deleted. As a result, only single phrases or sentences are ever transformed.
もちろん、詳細に見ると上に素描したプロシージャはかなり複雑です。例えば、キーワードには RANK すなわち優先順位番号を付けることができます。このプロシージャはこの数値に敏感で、テキストを左から右へ走査する中で既に見つけたキーワードを、より高いランクを持つキーワードのために破棄します。また、プロシージャはコンマとピリオドを区切り文字として認識します。いずれかに遭遇した時点でキーワードが既に見つかっていれば、それ以降のテキストはすべて入力メッセージから削除されます。キーワードがまだ見つかっていなければ、区切り文字の左側のフレーズまたは文(および区切り文字自体)が削除されます。結果として、変換されるのは常に単一のフレーズまたは文だけです。
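このランク付きの走査と区切り文字の処理は、例えば次のような Python のスケッチとして書けます。関数名 scan とランク表 RANKS は説明のための仮のもので、論文の実装そのものではありません。

```python
# ランク付きキーワード走査の最小スケッチ(仮の実装)。
# RANKS はこの例のための架空のランク表です(ランクなしは 0 とみなす)。
RANKS = {"everybody": 5, "i": 0, "you": 0}

def scan(text):
    """左から右へ走査し、最上位ランクのキーワードと変換対象の単語列を返す。"""
    keyword = None
    kept = []  # 変換対象として残す単語列
    for word in text.lower().replace(",", " , ").replace(".", " . ").split():
        if word in (",", "."):
            if keyword is not None:
                break          # キーワード発見後の区切り文字: 以降のテキストを捨てる
            kept = []          # 未発見なら区切り文字の左側(と区切り文字)を捨てる
            continue
        kept.append(word)
        if word in RANKS and (keyword is None or RANKS[word] > RANKS[keyword]):
            keyword = word     # より高いランクのキーワードだけを新たに採用する
    return keyword, kept
```

例えば "Well. Everybody laughed at me" ではピリオドの左側が捨てられ、"everybody" がキーワードになります。逆にキーワード発見後にコンマが現れると、それ以降は変換対象から外れます。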

Keywords and their associated transformation*1 rules constitute the SCRIPT for a particular class of conversation. An important property of ELIZA is that a script is data; i.e., it is not part of the program itself. Hence, ELIZA is not restricted to a particular set of recognition patterns or responses, indeed not even to any specific language. ELIZA scripts exist (at this writing) in Welsh and German as well as in English.
キーワードとそれに関連する変換*1ルールは、特定のクラスの会話のためのSCRIPTを構成します。 ELIZAの重要な特性はスクリプトがデータであること、すなわち、プログラム自体の一部ではありません。したがって、ELIZA は特定の認識パターンや応答のセットに限定されず、実際には特定の言語に限定されません。(本稿を書いている時点で)英語だけでなく、ウェールズ語、ドイツ語のELIZAスクリプトが存在します。

 

*1 The word "transformation" is used in its generic sense rather than that given it by Harris and Chomsky in linguistic contexts.
*1 「変換」という用語は、ハリスとチョムスキーが言語学的な文脈で与えた意味ではなく、一般的な意味で使われています。


The fundamental technical problems with which ELIZA must be preoccupied are the following:
ELIZAが取り組まなければならない基本的な技術的課題は次のとおりです。

 

  1. The identification of the "most important" keyword occurring in the input message.
  2. The identification of some minimal context within which the chosen keyword appears; e.g., if the keyword is "you", is it followed by the word "are" (in which case an assertion is probably being made).
  3. The choice of an appropriate transformation rule, and, of course, the making of the transformation itself.
  4. The provision of a mechanism that will permit ELIZA to respond "intelligently" when the input text contained no keywords.
  5. The provision of machinery that facilitates editing, particularly extension, of the script on the script writing level
  1. 入力メッセージに現れる「最も重要な」キーワードを識別する。
  2. 選択されたキーワードが現れる最小限の文脈を識別する。例えば、キーワードが "you" の場合、その直後に "are" が続いているか(その場合はおそらく何らかの断定がなされている)。
  3. 適切な変換規則を選択する。そしてもちろん変換そのものを作成する。
  4. 入力テキストにキーワードが含まれていない場合にELIZAが「賢く」応答する仕組みを提供する。
  5. スクリプト作成レベルでのスクリプトの編集、特に拡張を容易にする機構を提供する。


There are, of course, the usual constraints dictated by the need to be economical in the use of computer time and storage space.
もちろん、コンピュータ時間と記憶領域の使用を節約する必要から来る通常の制約もあります。
The central issue is clearly one of text manipulation, and at the heart of that issue is the concept of the transformation rule which has been said to be associated with certain keywords. The mechanisms subsumed under the slogan "transformation rule" are a number of Slip functions which serve to (1) decompose a data string according to certain criteria, hence to test the string as to whether it satisfies these criteria or not, and (2) to reassemble a decomposed string according to certain assembly specifications.

中心的な課題が明らかにテキスト操作の問題であり、その核心には、特定のキーワードに関連付けられると先に述べた変換ルールの概念があります。「変換ルール」というスローガンの下に包含されるメカニズムは、(1) 特定の基準に従ってデータ文字列を分解し、それによって文字列がその基準を満たすかどうかを検査し、(2) 分解された文字列を特定の組み立て仕様に従って再構築する、という役割を果たす幾つもの SLIP 関数です。

While this is not the place to discuss these functions in all their detail (or even to reveal their full power and generality), it is important to the understanding of the operation of ELIZA to describe them in some detail.

ここはこれらの関数のすべての詳細を論じる場(ましてその能力と一般性を余すところなく示す場)ではありませんが、ELIZAの動作を理解するためには、それらをある程度詳しく記述しておくことが重要です。

Consider the sentence "I am very unhappy these days". Suppose a foreigner with only a limited knowledge of English but with a very good ear heard that sentence spoken but understood only the first two words "I am". Wishing to appear interested, perhaps even sympathetic, he may reply "How long have you been very unhappy these days?" What he must have done is to apply a kind of template to the original sentence, one part of which matched the two words "I am" and the remainder isolated the words "very unhappy these days". He must also have a reassembly kit specifically associated with that template, one that specifies that any sentence of the form "I am BLAH" can be transformed to "How long have you been BLAH", independently of the meaning of BLAH. A somewhat more complicated example is given by the sentence "It seems that you hate me". Here the foreigner understands only the words "you" and "me"; i.e., he applies a template that decomposes the sentence into the four parts:

"I am very unhappy these days"(私は最近とても不幸です)という文を考えてみましょう。英語の知識は限られているが耳はとても良い外国人が、この文が話されるのを聞き、最初の2つの単語 "I am" だけを理解したとします。関心を持っている、おそらくは同情さえしているように見せたい彼は、"How long have you been very unhappy these days?" と返すかもしれません。彼がしたに違いないことは、元の文にある種のテンプレートを適用することです。その一部は "I am" という2つの単語に一致し、残りの部分が "very unhappy these days" という語群を切り出します。また、彼はそのテンプレートに特に結び付いた再組み立てのための道具一式を持っているはずです。それは "I am BLAH" という形式の任意の文を、BLAH の意味とは無関係に "How long have you been BLAH" に変換できることを指定するものです。もう少し複雑な例として "It seems that you hate me"(あなたは私を嫌っているようです)という文を挙げましょう。ここでは外国人は "you" と "me" という単語だけを理解します。すなわち、彼は文を次の4つの部分に分解するテンプレートを適用します。

(1) It seems that    (2) you    (3) hate    (4) me

of which only the second and fourth parts are understood. The reassembly rule might then be "What makes you think I hate you"; i.e., it might throw away the first component, translate the two known words ("you" to "I" and "me" to "you") and tack on a stock phrase (What makes you think) to the front of the reconstruction. A formal notation in which to represent the decomposition template is

そのうち2番目と4番目の部分だけが理解されます。この場合、再構築ルールは "What makes you think I hate you"(なぜ私があなたを憎むと思うのですか)となるかもしれません。すなわち、最初の構成要素を捨て、既知の2つの単語を変換し("you" を "I" に、"me" を "you" に)、決まり文句(What makes you think)を再構築した文の先頭に付け加えるのです。分解テンプレートを表す形式的な記法は次のように

(0 YOU 0 ME)

and the reassembly rule

そして再構築ルールは次のようになります。

(WHAT MAKES YOU THINK I 3 YOU).

The "0" in the decomposition rule stands for "an indefinite number of words" (analogous to the indefinite dollar sign of COMIT) [6] while the "3" in the reassembly rule indicates that the third component of the subject decomposition is to be inserted in its place. The decomposition rule

分解ルールの "0" は "an indefinite number of words"(不定個数の単語)を表し(COMITの不定のドル記号に類似 [6])、再構築ルールの "3" は、対象の分解結果の第3成分をその場所に挿入することを示します。分解ルール

(0 YOU 1 ME)

would have worked just as well in this specific example. A nonzero integer "n" appearing in a decomposition rule indicates that the component in question should consist of exactly "n" words. However, of the two rules shown, only the first would have matched the sentence "It seems you love and hate me," the second failing because there is more than one word between "you" and "me".

は、この具体例でも同様に機能したでしょう。分解ルールに現れる非ゼロの整数 "n" は、該当する構成要素がちょうど "n" 個の単語から成るべきことを示します。しかし、示した2つのルールのうち、"It seems you love and hate me," という文にマッチするのは最初のルールだけです。"you" と "me" の間に複数の単語があるため、2番目のルールは失敗します。
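この分解テンプレート(0 は任意個、正整数 n はちょうど n 個の単語)と再構築ルールの挙動は、例えば次の Python のスケッチで確かめられます。decompose や reassemble といった名前は説明用の仮のもので、論文の SLIP 関数そのものではありません。

```python
def decompose(template, words):
    """分解テンプレートを単語列に適用する。0 は任意個、正整数 n はちょうど n 個の単語、
    文字列はその単語自体(大文字小文字は無視)にマッチする。失敗なら None を返す。"""
    def match(ti, wi):
        if ti == len(template):
            return [] if wi == len(words) else None
        part = template[ti]
        if part == 0:                          # 任意個の単語にマッチ
            for end in range(wi, len(words) + 1):
                rest = match(ti + 1, end)
                if rest is not None:
                    return [words[wi:end]] + rest
            return None
        if isinstance(part, int):              # ちょうど part 個の単語にマッチ
            if wi + part <= len(words):
                rest = match(ti + 1, wi + part)
                if rest is not None:
                    return [words[wi:wi + part]] + rest
            return None
        if wi < len(words) and words[wi].lower() == part.lower():
            rest = match(ti + 1, wi + 1)
            if rest is not None:
                return [[words[wi]]] + rest
        return None
    return match(0, 0)

def reassemble(rule, parts):
    """再構築ルール中の整数 n を、分解結果の第 n 成分で置き換えて文を組み立てる。"""
    out = []
    for item in rule:
        out.extend(parts[item - 1] if isinstance(item, int) else [item])
    return " ".join(out)
```

(0 YOU 0 ME) を "It seems that you hate me" に適用すると4つの成分が得られ、(WHAT MAKES YOU THINK I 3 YOU) で応答が組み立てられます。一方 (0 YOU 1 ME) は "It seems you love and hate me" には失敗します。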

 

Fig. 1. Keyword and rule list structure

図1.キーワードとルールリストの構造

 

In ELIZA the question of which decomposition rules to apply to an input text is of course a crucial one. The input sentence might have been, for example, "It seems that you hate," in which case the decomposition rule (0 YOU 0 ME) would have failed in that the word "ME" would not have been found at all, let alone in its assigned place. Some other decomposition rule would then have to be tried and, failing that, still another until a match could be made or a total failure reported. ELIZA must therefore have a mechanism to sharply limit the set of decomposition rules which are potentially applicable to a currently active input sentence. This is the keyword mechanism.

もちろん、ELIZAにおいて、入力テキストにどの分解ルールを適用するかは極めて重要な問題です。例えば、入力文が "It seems that you hate" だったとしましょう。この場合、分解ルール (0 YOU 0 ME) は、"ME" という単語が割り当てられた位置はおろか、どこにも見つからないので失敗します。そうなると別の分解ルールを試さなければならず、それも失敗すればさらに別のルールを、マッチするか完全な失敗が報告されるまで試し続けることになります。したがって、ELIZAは、現在アクティブな入力文に潜在的に適用可能な分解ルールのセットを大幅に制限するメカニズムを持たなければなりません。これがキーワード・メカニズムです。

An input sentence is scanned from left to right. Each word is looked up in a dictionary of keywords. If a word is identified as a keyword, then (apart from the issue of precedence of keywords) only decomposition rules containing that keyword need to be tried. The trial sequence can even be partially ordered. For example, the decomposition rule (0 YOU 0) associated with the keyword "YOU" (and decomposing the sentence into (1) all the words in front of "YOU", (2) the word "YOU", and (3) all the words following "YOU") should be the last one tried since it is bound to succeed.

入力文は左から右へ走査されます。各単語はキーワード辞書で検索されます。ある単語がキーワードとして識別されたら(キーワードの優先順位の問題は別として)、そのキーワードを含む分解ルールだけを試せばよいことになります。試行の順序は部分的に順序付けることさえできます。例えば、キーワード "YOU" に関連付けられた分解ルール (0 YOU 0)(文を (1) "YOU" の前のすべての単語、(2) "YOU" という単語、(3) "YOU" の後のすべての単語に分解する)は、必ず成功するため、最後に試されるべきです。

Two problems now arise. One stems from the fact that almost none of the words in any given sentence are represented in the keyword dictionary. The other is that of "associating" both decomposition and reassembly rules with keywords. The first is serious in that the determination that a word is not in a dictionary may well require more computation (i.e., time) than the location of a word which is represented. The attack on both problems begins by placing both a keyword and its associated rules on a list. The basic format of a typical key list is the following:

ここで2つの問題が生じます。1つは、任意の文に含まれる単語のほとんどすべてがキーワード辞書には載っていないという事実に由来します。もう1つは、分解ルールと再構築ルールの両方をキーワードに「関連付ける」方法の問題です。前者が深刻なのは、ある単語が辞書にないと判定することが、辞書に載っている単語の位置を特定することよりも多くの計算(すなわち時間)を要しかねないからです。両方の問題への取り組みは、キーワードとそれに関連するルールを1つのリストに載せることから始まります。典型的なキーリストの基本形式は次のとおりです。

(K ((D1) (R1, 1) (R1, 2) ... (R1, m1))
    ((D2) (R2, 1) (R2, 2) ... (R2, m2))
      .                     .
      .                     .
      .                     .
    ((Dn) (Rn, 1) (Rn, 2) ... (Rn, mn)))

where K is the keyword, Di the ith decomposition rule associated with K and Ri,j the jth reassembly rule associated with the ith decomposition rule.

ここで、K はキーワード、Di は K に関連付けられた i 番目の分解ルール、Ri,j は i 番目の分解ルールに関連付けられた j 番目の再構築ルールです。

A common pictorial representation of such a structure is the tree diagram shown in Figure 1. The top level of this structure contains the keyword followed by the names of lists; each one of which is again a list structure beginning with a decomposition rule and followed by reassembly rules. Since list structures of this type have no predetermined dimensionality limitations, any number of decomposition rules may be associated with a given keyword and any number of reassembly rules with any specific decomposition rule. SLIP is rich in functions that sequence over structures of this type efficiently. Hence programming problems are minimized.

An ELIZA script consists mainly of a set of list structures of the type shown. The actual keyword dictionary is constructed when such a script is first read into the hitherto empty program. The basic structural component of the keyword dictionary is a vector KEY of (currently) 128 contiguous computer words. As a particular key list structure is read the keyword K at its top is randomized (hashed) by a procedure that produces (currently) a 7 bit integer "i". The word "always", for example, yields the integer 14. KEY(i), i.e., the ith word of the vector KEY, is then examined to determine whether it contains a list name. If it does not, then an empty list is created, its name placed in KEY(i), and the key list structure in question placed on that list. If KEY(i) already contains a list name, then the name of the key list structure is placed on the bottom of the list named in KEY(i). The largest dictionary so far attempted contains about 50 keywords. No list named in any of the words of the KEY vector contains more than two key list structures.

Every word encountered in the scan of an input text, i.e., during the actual operations of ELIZA, is randomized by the same hashing algorithm as was originally applied to the incoming keywords, hence yields an integer which points to the only possible list structure which could potentially contain that word as a keyword. Even then, only the tops of any key list structures that may be found there need be interrogated to determine whether or not a keyword has been found. By virtue of the various list sequencing operations that SLIP makes available, the actual identification of a keyword leaves as its principal product a pointer to the list of decomposition (and hence reassembly) rules associated with the identified keyword. One result of this strategy is that often less time is required to discover that a given word is not in the keyword dictionary than to locate it if it is there. However, the location of a keyword yields pointers to all information associated with that word.
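ここで述べられている 128 語のベクタ KEY とハッシュによる辞書検索の考え方は、例えば次のような Python のスケッチで表せます。hash7 は論文の実際のハッシュ・アルゴリズムではなく、「7 ビットの整数を返す」という性質だけを真似た仮のものです。

```python
# 128 要素のベクタ KEY による辞書検索のスケッチ(仮の実装)。
KEY_SIZE = 128

def hash7(word):
    """単語から 7 ビットの整数を作る(論文のアルゴリズムそのものではない仮のハッシュ)。"""
    return sum(ord(c) for c in word.lower()) % KEY_SIZE

class KeywordDict:
    def __init__(self):
        self.key = [None] * KEY_SIZE   # ベクタ KEY: 各要素はリスト名(ここではリスト)か空
    def add(self, keylist):
        """キーリスト構造(先頭がキーワード)を KEY(i) のリストの末尾に置く。"""
        i = hash7(keylist[0])
        if self.key[i] is None:
            self.key[i] = []
        self.key[i].append(keylist)
    def lookup(self, word):
        """入力走査中の単語を同じハッシュで引く。大半の単語はバケットが空で即座に棄却される。"""
        bucket = self.key[hash7(word)]
        if bucket is None:
            return None
        for keylist in bucket:          # 各キーリスト構造の先頭だけを調べればよい
            if keylist[0] == word.lower():
                return keylist
        return None
```

辞書にない単語の方が、ある単語よりも速く判定できることが多い、という本文の指摘は、空のバケットに当たった時点で検索が終わることに対応します。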

Some conversational protocols require that certain transformations be made on certain words of the input text independently of any contextual considerations. The first conversation displayed in this paper, for example, requires that first person pronouns be exchanged for second person pronouns and vice versa throughout the input text. There may be further transformations but these minimal substitutions are unconditional. Simple substitution rules ought not to be elevated to the level of transformations, nor should the words involved be forced to carry with them all the structure required for the fully complex case. Furthermore, unconditional substitutions of single words for single words can be accomplished during the text scan itself, not as a transformation of the entire text subsequent to scanning. To facilitate the realization of these desiderata, any word in the key dictionary, i.e., at the top of a key list structure, may be followed by an equal sign followed by whatever word is to be its substitute. Transformation rules may, but need not, follow. If none do follow such a substitution rule, then the substitution is made on the fly, i.e., during text scanning, but the word in question is not identified as a keyword for subsequent purposes. Of course, a word may be both substituted for and be a keyword as well. An example of a simple substitution is

(YOURSELF = MYSELF).

Neither "yourself" nor "myself" are keywords in the particular script from which this example was chosen. The fact that keywords can have ranks or precedences has already been mentioned. The need of a ranking mechanism may be established by an example. Suppose an input sentence is "I know everybody laughed at me." A script may tag the word "I" as well as the word "everybody" as a keyword. Without differential ranking, "I" occurring first would determine the transformation to be applied. A typical response might be "You say you know everybody laughed at you." But the important message in the input sentence begins with the word "everybody". It is very often true that when a person speaks in terms of universals such as "everybody", "always" and "nobody" he is really referring to some quite specific event or person. By giving "everybody" a higher rank than "I", the response "Who in particular are you thinking of" may be generated.

 

FIG. 2. Basic flow diagram of keyword detection

 

The specific mechanism employed in ranking is that the rank of every keyword encountered (absence of rank implies rank equals 0) is compared with the rank of the highest ranked keyword already seen. If the rank of the new word is higher than that of any previously encountered word, the pointer to the transformation rules associated with the new word is placed on top of a list called the keystack, otherwise it is placed on the bottom of the keystack. When the text scan terminates, the keystack has at its top a pointer associated with the highest ranked keyword encountered in the scan. The remaining pointers in the stack may not be monotonically ordered with respect to the ranks of the words from which they were derived, but they are nearly so -- in any event they are in a useful and interesting order. Figure 2 is a simplified flow diagram of keyword detection. The rank of a keyword must, of course, also be associated with the keyword. Therefore it must appear on the keyword list structure. It may be found, if at all, just in front of the list of transformation rules associated with the keyword. As an example consider the word "MY" in a particular script. Its keyword list may be as follows:

(MY = YOUR 5 (transformation rules)).

Such a list would mean that whenever the word "MY" is encountered in any text, it would be replaced by the word "YOUR". Its rank would be 5.
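このキースタックへの積み方(新しいキーワードのランクがそれまでの最高ランクより高ければ先頭へ、そうでなければ末尾へ)は、次のような Python のスケッチで表せます。build_keystack という関数名はこの説明のための仮のものです。

```python
from collections import deque

def build_keystack(keywords, ranks):
    """走査中に出会ったキーワードを、ランクに従ってキースタックに積むスケッチ。
    ranks に載っていないキーワードはランク 0 とみなす。"""
    stack = deque()
    top_rank = None
    for w in keywords:
        r = ranks.get(w, 0)
        if top_rank is None or r > top_rank:
            stack.appendleft(w)   # これまでの最高ランクより高ければスタックの先頭へ
            top_rank = r
        else:
            stack.append(w)       # そうでなければスタックの末尾へ
    return list(stack)
```

走査終了時には、先頭に最高ランクのキーワードが来ます。残りは本文にある通り、ランクについて厳密には単調に並ばないものの、ほぼそれに近い順序になります。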

Upon completion of a given text scan, the keystack is either empty or contains pointers derived from the keywords found in the text. Each of such pointers is actually a sequence reader -- a SLIP mechanism which facilitates scanning of lists -- pointing into its particular key list in such a way that one sequencing operation to the right (SEQLR) will sequence it to the first set of transformation rules associated with its keyword, i.e., to the list

((D1) (R1,1) (R1,2) ... (R1, m1)).

The top of that list, of course, is a list which serves as a decomposition rule for the subject text. The top of the keystack contains the first pointer to be activated.

The decomposition rule D1 associated with the keyword K, i.e., {(D1), K}, is now tried. It may fail, however. For example, suppose the input text was:

You are very helpful.

The keyword, say, is "you", and {(D1), you} is

(0 I remind you of 0)

(Recall that the "you" in the original sentence has already been replaced by "I" in the text now analyzed.) This decomposition rule obviously fails to match the input sentence. Should {(D1), K} fail to find a match, then {(D2), K} is tried. Should that too fail, {(D3), K} is attempted, and so on. Of course, the set of transformation rules can be guaranteed to terminate with a decomposition rule which must match. The decomposition rule

(0 K 0)

will match any text in which the word K appears while

(0)

will match any text whatever. However, there are other ways to leave a particular set of transformation rules, as will be shown below. For the present, suppose that some particular decomposition rule (Di) has matched the input text. (Di), of course, was found on a list of the form

((Di) (Ri,1) (Ri,2) ... (Ri, mi)).

Sequencing the reader which is presently pointing at (Di) will retrieve the reassembly rule (Ri, 1) which may then be applied to the decomposed input text to yield the output message.

Consider again the input text

You are very helpful

in which "you" is the only key word. The sentence is transformed during scanning to

I are very helpful

{(D1), you} is "(0 I remind you of 0)" and fails to match as already discussed. However, {(D2), you} is "(0 I are 0)" and obviously matches the text, decomposing it into the constituents

(1) empty    (2) I    (3) are    (4) very helpful.

{(R2, 1), you} is

(What makes you think I am 4)

Hence it produces the output text

What makes you think I am very helpful.

Having produced it, the integer 1 is put in front of (R2, 1) so that the transformation rule list in question now appears as

((D2) 1 (R2, 1) (R2, 2) ... (R2, m2)).

Next time {(D2), K} matches an input text, the reassembly rule (R2, 2) will be applied and the integer 2 will replace the 1. After (R2, m2) has been exercised, (R2, 1) will again be invoked. Thus, after the system has been in use for a time, every decomposition rule which has matched some input text has associated with it an integer which corresponds to the last reassembly rule used in connection with that decomposition rule. This mechanism insures that the complete set of reassembly rules associated with a given decomposition rule is cycled through before any repetitions occur.
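この「整数をルールリストの前に置いて再構築ルールを一巡させる」仕組みは、次のような Python のスケッチに相当します。RuleSet というクラス名は説明用の仮のものです。

```python
class RuleSet:
    """1つの分解ルールに付随する再構築ルール群を、繰り返しなしで一巡させるスケッチ。"""
    def __init__(self, reassembly_rules):
        self.rules = reassembly_rules
        self.next_index = 0   # 論文でルールリストの前に置かれる整数に相当する

    def next_rule(self):
        """次に使う再構築ルールを返し、カウンタを進める。末尾の次は先頭に戻る。"""
        rule = self.rules[self.next_index]
        self.next_index = (self.next_index + 1) % len(self.rules)
        return rule
```

こうすることで、同じ分解ルールが何度マッチしても、付随する再構築ルールをすべて使い切るまで同じ応答は繰り返されません。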

The system described so far is essentially one which selects a decomposition rule for the highest ranking keyword found in an input text, attempts to match that text according to that decomposition rule and, having made a match, selects the next reassembly rule associated with the matching decomposition rule and applies it to generate an output text. It is, in other words, a system which, for the highest ranking keyword of a text, selects a specific decomposition and reassembly rule to be used in forming the output message.

Were the system to remain that simple, then keywords that required identical sets of transformation rules would each have to have those rules associated with them. This would be logically sound but would complicate the task of script writing and would also make unnecessary storage demands. There are therefore special types of decomposition and reassembly rules characterized by the appearance of "=" at the top of the rule list. The word following the equal sign indicates which new set of transformation rules is to be applied. For example, the keyword "what" may have associated with it a transformation rule set of the form

((0) (Why do you ask) (Is that an important question)・・・)

which would apply equally well to the keywords "how" and "when". The entire keyword list for "how" may therefore be

(How (= What))

The keywords "how", "what" and "when" may thus be made to form an equivalence class with respect to the transformation rules which are to apply to them.
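この "(= what)" によるキーワードの等価クラスの仕組みは、次のような Python のスケッチで表せます。SCRIPT の中身と rules_for という関数名は説明用の仮のもので、論文のスクリプト表現そのものではありません。

```python
# "(= what)" による等価クラスのスケッチ(仮のスクリプト表現)。
# 各キーワードのエントリは ("rules", ルール列) または ("=", 別のキーワード)。
SCRIPT = {
    "what": [("rules", ["WHY DO YOU ASK", "IS THAT AN IMPORTANT QUESTION"])],
    "how":  [("=", "what")],
    "when": [("=", "what")],
}

def rules_for(keyword, script):
    """キーワードの変換ルールを取り出す。"= K" なら K のルール集合へ飛ぶ。"""
    entry = script[keyword][0]
    if entry[0] == "=":
        return rules_for(entry[1], script)   # 別のキーワードのルール集合を適用する
    return entry[1]
```

こうして "how" や "when" は、自分用のルールを持たずに "what" のルール集合を共有でき、スクリプトの記述量と記憶領域の両方を節約できます。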

In the above example the rule "(= what)" is in the place of a decomposition rule, although it causes no decomposition of the relevant text. It may also appear, however, in the place of a reassembly rule. For example, the keyword "am" may have among others the following transformation rule set associated with it:

((0 are you 0) (Do you believe you are 4)・・・(= what)・・・)

(It is here assumed that "are" has been substituted for "am" and "you" for "I" in the initial text scan.) Then, the input text

Am I sick

would elicit either

Do you believe you are sick

or

Why do you ask

depending on how many times the general form had already occurred.

Under still other conditions it may be desirable to perform a preliminary transformation on the input text before subjecting it to the decompositions and reassemblies which finally yield the output text. For example, the keyword "you're" should lead to the transformation rules associated with "you" but should first be replaced by a word pair. The dictionary entry for "you're" is therefore:

(you're = I'm ((0 I'm 0) (PRE (I AM 3) (=YOU))))

which has the following effect:

  1. Wherever "you're" is found in the input text, it is replaced by "I'm".
  2. If "you're" is actually selected as the regnant keyword, then the input text is decomposed into three constituent parts, namely, all text in front of the first occurrence of "I'm", the word "I'm" itself, and all text following the first occurrence of "I'm".
  3. The reassembly rule beginning with the code "PRE" is encountered and the decomposed text reassembled such that the words "I AM" appear in front of the third constituent, determined by the earlier decomposition.
  4. Control is transferred, so to speak, to the transformation rules associated with the keyword "you", where further decompositions etc. are attempted.

It is to be noted that the set

(PRE (I AM 3) (=YOU))

is logically in the place of a reassembly rule and may therefore be one of many reassembly rules associated with the given decomposition.
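The four numbered steps above can be sketched in a few lines of Python. The helper `apply_pre` is a hypothetical illustration of a PRE reassembly code such as (PRE (I AM 3) (=YOU)): it rebuilds the text from the numbered constituents and names the keyword that receives control.

```python
# Hypothetical sketch of a PRE reassembly rule, e.g. (PRE (I AM 3) (=YOU)).
def apply_pre(parts, template, next_key):
    """parts: decomposition constituents, 1-indexed as in the paper.
    template: reassembly text whose digits refer to constituents."""
    rebuilt = " ".join(
        parts[int(tok) - 1] if tok.isdigit() else tok
        for tok in template.split()
    )
    return rebuilt, next_key

# (0 I'm 0) applied to "why I'm unhappy" gives three constituents:
text, key = apply_pre(["why", "I'm", "unhappy"], "I AM 3", "YOU")
```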

Another form of reassembly rule is

(NEWKEY)

which serves the case in which attempts to match on the currently regnant keyword are to be given up and the entire decomposition and reassembly process is to start again on the basis of the keyword to be found in the keystack. Whenever this rule is invoked, the top of the keystack is "popped up" once, i.e., the new regnant keyword is recovered and removed from the keystack, and the entire process is reinitiated as if the initial text scan had just terminated. This mechanism makes it possible, in effect, to test on key phrases as opposed to single key words.
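A sketch of the NEWKEY mechanism, assuming a `match` callback that either produces a reply for a keyword or returns the sentinel string "NEWKEY" (all names here are illustrative, not from the paper):

```python
# Hypothetical sketch: NEWKEY pops the keystack and retries matching
# with the next regnant keyword, as if the text scan had just ended.
def next_response(keystack, match):
    while keystack:
        keyword = keystack.pop()       # top of the keystack is "popped up"
        reply = match(keyword)
        if reply != "NEWKEY":
            return reply               # this keyword's rules produced a reply
    return None                        # no keywords left: the NONE fallback applies
```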

A serious problem which remains to be discussed is the reaction of the system in case no keywords remain to serve as transformation triggers. This can arise either in case the keystack is empty when NEWKEY is invoked or when the input text contained no keywords initially.

The simplest mechanism supplied is in the form of the special reserved keyword "NONE" which must be part of any script. The script writer must associate the universally matching decomposition rule (0) with it and follow this by as many content-free remarks in the form of transformation rules as he pleases. (Examples are: "Please go on", "That's very interesting" and "I see".)
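A minimal sketch of the NONE fallback, pairing the universally matching rule (0) with a rotation of the content-free remarks quoted above (`none_reply` and `NONE_REMARKS` are illustrative names):

```python
# Hypothetical sketch of the reserved keyword NONE: the rule (0)
# matches anything, and content-free remarks are used in rotation.
import itertools

NONE_REMARKS = itertools.cycle(
    ["Please go on", "That's very interesting", "I see"]
)

def none_reply():
    return next(NONE_REMARKS)
```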

There is, however, another mechanism which causes the system to respond more spectacularly in the absence of a key. The word "MEMORY" is another reserved pseudo-keyword. The key list structure associated with it differs from the ordinary one in some respects. An example illuminates this point.

Consider the following structure:

(MEMORY MY
  (0 YOUR 0 = LETS DISCUSS FURTHER WHY YOUR 3)
  (0 YOUR 0 = EARLIER YOU SAID YOUR 3)
  ・
  ・
  ・

The word "MY" (which must be an ordinary keyword as well) has been selected to serve a special function. Whenever it is the highest ranking keyword of a text one of the transformations on the MEMORY list is randomly selected, and a copy of the text is transformed accordingly. This transformation is stored on a first-in-first-out stack for later use. The ordinary processes already described are then carried out. When a text without keywords is encountered later and a certain counting mechanism is in a particular state and the stack in question is not empty, then the transformed text is printed out as the reply. It is, of course, also deleted from the stack of such transformations.
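The MEMORY behaviour described above amounts to a first-in-first-out queue of deferred replies. A Python sketch under that reading (the helpers `remember` and `recall` are hypothetical names, and the counting mechanism mentioned in the text is omitted):

```python
# Hypothetical sketch of MEMORY: when MY is the top-ranked keyword,
# a randomly chosen transformation of the text is queued; it is
# emitted later, first in first out, when no keyword matches.
import random
from collections import deque

MEMORY_TEMPLATES = [
    "LETS DISCUSS FURTHER WHY YOUR {0}",
    "EARLIER YOU SAID YOUR {0}",
]
memory = deque()

def remember(constituent_3):
    """Queue a transformed copy of the text (constituent 3 after YOUR)."""
    memory.append(random.choice(MEMORY_TEMPLATES).format(constituent_3))

def recall():
    """Reply with the oldest stored transformation, if any."""
    return memory.popleft() if memory else None
```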

The current version of ELIZA requires that one keyword be associated with MEMORY and that exactly four transformations accompany that word in that context. (An application of a transformation rule of the form

(LEFT HAND SIDE = RIGHT HAND SIDE)

is equivalent to the successive application of the two forms

(LEFT HAND SIDE), (RIGHT HAND SIDE).)

Three more details will complete the formal description of the ELIZA program.

The transformation rule mechanism of SLIP is such that it permits tagging of words in a text and their subsequent recovery on the basis of one of their tags. The keyword "MOTHER" in ELIZA, for example, may be identified as a noun and as a member of the class "family" as follows:

(MOTHER DLIST (/NOUN FAMILY)).

Such tagging in no way interferes with other information (e.g., rank or transformation rules) which may be associated with the given tag word. A decomposition rule may contain a matching constituent of the form (/TAG1 TAG2・・・) which will match and isolate a word in the subject text having any one of the mentioned tags. If, for example, "MOTHER" is tagged as indicated and the input text

"CONSIDER MY AGED MOTHER AS WELL AS ME"

subjected to the decomposition rule

(0 YOUR 0 (/FAMILY) 0)

(remembering that "MY" has been replaced by "YOUR"), then the decomposition would be

(1) CONSIDER    (2) YOUR    (3) AGED    (4) MOTHER    (5) AS WELL AS ME.

Another flexibility inherent in the SLIP text manipulation mechanism underlying ELIZA is that or-ing of matching criteria is permitted in decomposition rules. The above input text would have been decomposed precisely as stated above by the decomposition rule:

(0 YOUR 0 (*FATHER MOTHER) 0)

which, by virtue of the presence of "*" in the sublist structure seen above, would have isolated either the word "FATHER" or "MOTHER" (in that order) in the input text, whichever occurred first after the first appearance of the word "YOUR".
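One way to realize such a decomposition rule, with 0 matching any run of words, a literal, and a tag class or or-list over tagged words, is as a regular expression. The Python sketch below illustrates only the matching behaviour, not the SLIP mechanism itself; `TAGS` and `decompose` are hypothetical names.

```python
# Hypothetical sketch of matching (0 YOUR 0 (/FAMILY) 0) with a regex.
import re

TAGS = {"MOTHER": {"NOUN", "FAMILY"}, "FATHER": {"NOUN", "FAMILY"}}

def decompose(text):
    family = "|".join(w for w, t in TAGS.items() if "FAMILY" in t)
    # Non-greedy 0-slots, the literal YOUR, then the first tagged word.
    pattern = re.compile(
        r"^(.*?)\b(YOUR)\b(.*?)\b(" + family + r")\b(.*)$"
    )
    m = pattern.match(text)
    return [g.strip() for g in m.groups()] if m else None
```

On "CONSIDER YOUR AGED MOTHER AS WELL AS ME" this yields the five constituents listed in the text.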

Finally, the script writer must begin his script with a list, i.e., a message enclosed in parentheses, which contains the statement he wishes ELIZA to type when the system is first loaded. This list may be empty.

Editing of an ELIZA script is achieved via appeal to a contextual editing program (ED) which is part of the MAC library. This program is called whenever the input text to ELIZA consists of the single word "EDIT". ELIZA then puts itself in a so-called dormant state and presents the then stored script for editing. Detailed description of ED is out of place here. Suffice it to say that changes, additions and deletions of the script may be made with considerable efficiency and on the basis of entirely contextual cues, i.e., without resort to line numbers or any other artificial devices. When editing is completed, ED is given the command to FILE the revised script. The new script is then stored on the disk and read into ELIZA. ELIZA then types the word "START" to signal that the conversation may resume under control of the new script.

An important consequence of the editing facility built into ELIZA is that a given ELIZA script need not start out to be a large, full-blown scenario. On the contrary, it should begin as a quite modest set of keywords and transformation rules and be permitted to grow and be molded as experience with it builds up. This appears to be the best way to use a truly interactive man-machine facility -- i.e., not as a device for rapidly debugging a code representing a fully thought out solution to a problem, but rather as an aid for the exploration of problem solving strategies.


Discussion

At this writing, the only serious ELIZA scripts which exist are some which cause ELIZA to respond roughly as would certain psychotherapists (Rogerians). ELIZA performs best when its human correspondent is initially instructed to "talk" to it, via the typewriter of course, just as one would to a psychiatrist. This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world. If, for example, one were to tell a psychiatrist "I went for a long boat ride" and he responded "Tell me about boats", one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation. It is important to note that this assumption is one made by the speaker. Whether it is realistic or not is an altogether separate question. In any case, it has a crucial psychological utility in that it serves the speaker to maintain his sense of being heard and understood. The speaker further defends his impression (which even in real life may be illusory) by attributing to his conversation partner all sorts of background knowledge, insights and reasoning ability. But again, these are the speaker's contribution to the conversation. They manifest themselves inferentially in the interpretations he makes of the offered responses. From the purely technical programming point of view then, the psychiatric interview form of an ELIZA script has the advantage that it eliminates the need of storing explicit information about the real world.

本稿の執筆時点で、存在する唯一の真面目なELIZAスクリプトは、ELIZAをある種の精神療法医(ロジャース派)にほぼ沿って応答させるものです。ELIZAは、対話する人間が最初に、精神科医に話すのとまったく同じように(もちろんタイプライターを介して)ELIZAに「話しかける」よう指示されたときに最高のパフォーマンスを発揮します。この会話形式が選ばれたのは、精神医学的インタビューが、参加する2者のうち一方が現実世界についてほとんど何も知らないという姿勢を取ることを許される、類型化された2者間の自然言語コミュニケーションの数少ない例の1つだからです。たとえば、ある人が精神科医に「長いボート旅行に出かけた」と言い、精神科医が「ボートについて話してください」と答えたとしても、その人は精神科医がボートについて何も知らないとは思わず、その後の会話をそのように導く何らかの目的があるのだと考えるでしょう。重要なのは、この仮定が話し手の側でなされるものだという点です。それが現実的かどうかは全く別の問題です。いずれにせよ、この仮定には、話し手が「聞いてもらえている、理解されている」という感覚を維持する上で決定的な心理的有用性があります。話し手はさらに、会話相手にあらゆる種類の背景知識、洞察力、推論能力を帰属させることによって、(実生活においてさえ幻想かも知れない)その印象を守ります。しかし、これらもまた、話し手の側からの会話への貢献です。それらは、提示された応答に対して話し手が行う解釈の中に、推論として現れます。したがって、純粋に技術的なプログラミングの観点からは、ELIZAスクリプトの精神医学的インタビュー形式には、現実世界に関する明示的な情報を保存する必要性を排除できるという利点があります。

The human speaker will, as has been said, contribute much to clothe ELIZA's responses in vestments of plausibility. But he will not defend his illusion (that he is being understood) against all odds. In human conversation a speaker will make certain (perhaps generous) assumptions about his conversational partner. As long as it remains possible to interpret the latter's responses consistently with those assumptions, the speaker's image of his partner remains unchanged, in particular, undamaged. Responses which are difficult to so interpret may well result in an enhancement of the image of the partner, in additional rationalizations which then make more complicated interpretations of his responses reasonable. When, however, such rationalizations become too massive and even self-contradictory, the entire image may crumble and be replaced by another ("He is not, after all, as smart as I thought he was"). When the conversational partner is a machine (the distinction between machine and program is here not useful) then the idea of credibility may well be substituted for that of plausibility in the above.

人間の話し手は、既に述べたように、ELIZAの応答にもっともらしさの衣をまとわせることに大きく貢献します。しかし、彼は(理解されているという)自分の錯覚を、どんな犠牲を払ってでも守り抜くわけではありません。人間同士の会話では、話し手は会話相手について一定の(おそらく寛大な)仮定を行います。相手の応答をそれらの仮定と矛盾なく解釈することが可能である限り、相手に対するイメージは変わらず、特に傷つくこともありません。そのように解釈することが困難な応答は、むしろ相手のイメージを強化し、その応答についてのより複雑な解釈を合理的に見せる追加の合理化をもたらすことさえあります。しかし、そのような合理化があまりにも膨大になり、自己矛盾さえするようになると、イメージ全体が崩壊し、別のものに置き換えられることがあります(「結局、彼は自分が思ったほど賢くなかった」)。会話相手が機械である場合(機械とプログラムの区別はここでは有用ではありません)、上記の「もっともらしさ」の考え方は「信憑性」の考え方に置き換えてよいでしょう。

With ELIZA as the basic vehicle, experiments may be set up in which the subjects find it credible to believe that the responses which appear on his typewriter are generated by a human sitting at a similar instrument in another room. How must the script be written in order to maintain the credibility of this idea over a long period of time? How can the performance of ELIZA be systematically degraded in order to achieve controlled and predictable thresholds of credibility in the subject? What, in all this, is the role of the initial instruction to the subject? On the other hand, suppose the subject is told he is communicating with a machine. What is he led to believe about the machine as a result of his conversational experience with it? Some subjects have been very hard to convince that ELIZA (with its present script) is not human. This is a striking form of Turing's test. What experimental design would make it more nearly rigorous and airtight?

基本的な手段としてELIZAを使えば、タイプライターに現れる応答が、別の部屋で同様の機器に向かっている人間によって生成されていると被験者が信じてしまうような実験を設定することができます。このアイデアの信憑性を長期間維持するためには、スクリプトをどのように書けばよいのでしょうか?被験者の信憑性について制御可能で予測可能な閾値を達成するために、ELIZAのパフォーマンスを体系的に低下させるにはどうすればよいでしょうか?これらすべてにおいて、被験者への最初の指示はどのような役割を果たすのでしょうか?他方で、被験者に機械と通信していると伝えたとします。その会話経験の結果として、彼は機械について何を信じるようになるのでしょうか?被験者の中には、(現在のスクリプトで動く)ELIZAが人間ではないと納得させるのが非常に困難だった者もいました。これはチューリング・テストの印象的な一形態です。これをより厳密で隙のないものにするには、どのような実験設計が必要でしょうか?

The whole issue of the credibility (to humans) of machine output demands investigation. Important decisions increasingly tend to be made in response to computer output. The ultimately responsible human interpreter of "What the machine says" is, not unlike the correspondent with ELIZA, constantly faced with the need to make credibility judgments. ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.

機械の出力が(人間にとって)どれほど信頼に値するかという問題全体が、研究を必要としています。重要な決定は、ますますコンピュータの出力に基づいて行われる傾向にあります。「機械の言うこと」を最終的な責任をもって解釈する人間は、ELIZAの対話相手とよく似て、常に信憑性の判断を迫られています。ELIZAは、少なくとも、理解しているという錯覚、したがっておそらくは信頼に値する判断をしているという錯覚を作り出し維持することがいかに容易であるかを示しています。そこにはある種の危険が潜んでいます。

The idea that the present ELIZA script contains no information about the real world is not entirely true. For example, the transformation rules which cause the input

現在のELIZAスクリプトには現実世界に関する情報が全く含まれていないという考え方は、完全には正しくありません。たとえば、入力

Everybody hates me(誰もが私を嫌ってる)

to be transformed to

を次の文に変換する変換ルール、

Can you think of anyone in particular(特定の誰かを思い出せますか?)

and other such are based on quite specific hypotheses about the world. The whole script constitutes, in a loose way, a model of certain aspects of the world. The act of writing a script is a kind of programming act and has all the advantages of programming, most particularly that it clearly shows where the programmer's understanding and command of his subject leaves off.

およびその類いのルールは、世の中についてのかなり特殊な仮説に基づいています。スクリプト全体は、緩やかな形で、世界の特定の側面のモデルを構成しています。スクリプトを書く行為は一種のプログラミング行為であり、プログラミングのあらゆる利点を備えています。特に顕著なのは、プログラマーの対象に対する理解と把握がどこで尽きるのかを明確に示してくれることです。

A large part of whatever elegance may be credited to ELIZA lies in the fact that ELIZA maintains the illusion of understanding with so little machinery. But there are bounds on the extendability of ELIZA's "understanding" power, which are a function of the ELIZA program itself and not a function of any script it may be given. The crucial test of understanding, as every teacher should know, is not the subject's ability to continue a conversation, but to draw valid conclusions from what he is being told. In order for a computer program to be able to do that, it must at least have the capacity to store selected parts of its inputs. ELIZA throws away each of its inputs, except for those few transformed by means of the MEMORY machinery. Of course, the problem is more than one of storage. A great part of it is, in fact, subsumed under the word "selected" used just above. ELIZA in its use so far has had as one of its principal objectives the concealment of its lack of understanding. But to encourage its conversational partner to offer inputs from which it can select remedial information, it must reveal its misunderstanding. A switch of objectives from the concealment to the revelation of misunderstanding is seen as a precondition to making an ELIZA-like program the basis for an effective natural language man-machine communication system.

ELIZAに帰せられるエレガンスの大部分は、ELIZAがごくわずかな仕掛けで理解しているという錯覚を維持しているという事実にあります。しかし、ELIZAの「理解」力の拡張可能性には限界があり、それは与えられる個々のスクリプトではなく、ELIZAプログラム自体に起因するものです。すべての教師が知っているべきことですが、理解の決定的なテストは、被験者が会話を続けられるかどうかではなく、言われたことから妥当な結論を引き出せるかどうかです。コンピュータプログラムがそれを行うためには、少なくとも入力の選択された部分を保存する能力を持っていなければなりません。ELIZAは、MEMORYの仕掛けによって変換されるわずかなものを除いて、各入力を捨ててしまいます。もちろん、問題は保存だけの問題ではありません。その大部分は、実際には、すぐ上で使った「選択された」という言葉に包含されています。これまでのELIZAの使用では、理解の欠如を隠蔽することが主要な目的の1つでした。しかし、修正のための情報を選び出せるような入力を会話相手に促すためには、ELIZAは自らの誤解を明らかにしなければなりません。誤解の隠蔽から開示への目的の転換は、ELIZAのようなプログラムを効果的な自然言語マン・マシン・コミュニケーション・システムの基礎とするための前提条件と考えられます。

One goal for an augmented ELIZA program is thus a system which already has access to a store of information about some aspects of the real world and which, by means of conversational interaction with people, can reveal both what it knows, i.e., behave as an information retrieval system, and where its knowledge ends and needs to be augmented. Hopefully the augmentation of its knowledge will also be a direct consequence of its conversational experience. It is precisely the prospect that such a program will converse with many people and learn something from each of them, which leads to the hope that it will prove an interesting and even useful conversational partner.

したがって、拡張されたELIZAプログラムの1つの目標は、現実世界のいくつかの側面に関する情報の蓄積に既にアクセスでき、人々との会話のやりとりを通じて、自分が何を知っているか(すなわち情報検索システムとして振る舞うこと)と、自分の知識がどこで尽き、補強される必要があるかの両方を明らかにできるシステムです。うまくいけば、その知識の補強もまた、会話経験の直接的な結果として得られるでしょう。そのようなプログラムが多くの人と会話し、それぞれから何かを学ぶというまさにその見通しこそが、それが興味深く、さらには有用な会話のパートナーになるという希望につながるのです。

One way to state a slightly different intermediate goal is to say that ELIZA should be given the power to slowly build a model of the subject conversing with it. If the subject mentions that he is not married, for example, and later speaks of his wife, then ELIZA should be able to make the tentative inference that he is either a widower or divorced. Of course, he could simply be confused. In the long run, ELIZA should be able to build up a belief structure (to use Abelson's phrase) of the subject and on that basis detect the subject's rationalizations, contradictions, etc. Conversations with such an ELIZA would often turn into arguments. Important steps in the realization of these goals have already been taken. Most notable among these is Abelson's and Carroll's work on simulation of belief structures [1].

やや異なる中間目標の1つの表現は、ELIZAに、対話している相手のモデルをゆっくりと構築する能力を与えるべきだ、というものです。たとえば、被験者が結婚していないと述べ、後になって妻について話したなら、ELIZAは彼が妻と死別したか離婚したかのどちらかであるという暫定的な推論を行えるはずです。もちろん、単に混乱しているだけかも知れませんが。長期的には、ELIZAは被験者の信念構造(Abelsonの言い回しを借りれば)を構築し、それに基づいて被験者の合理化や矛盾などを検出できるようになるはずです。そのようなELIZAとの会話は、しばしば議論に変わることでしょう。これらの目標の実現に向けた重要なステップは既に踏み出されています。その中で最も注目すべきものは、AbelsonとCarrollによる信念構造のシミュレーションに関する研究です[1]。

The script that has formed the basis for most of this discussion happens to be one with an overwhelming psychological orientation. The reason for this has already been discussed. There is a danger, however, that the example will run away with what it is supposed to illustrate. It is useful to remember that the ELIZA program itself is merely a translating processor in the technical programming sense. Gorn[2] in a paper on language systems says:

この議論の大部分の基礎となったスクリプトは、たまたま圧倒的に心理学的な方向性を持つものでした。その理由は既に述べたとおりです。しかし、この例が本来説明すべきことを覆い隠してしまう危険があります。技術的プログラミングの意味では、ELIZAプログラム自体は単なる翻訳プロセッサであることを思い出すのが有益です。Gorn[2]は言語システムに関する論文で次のように述べています:

Given a language which already possesses semantic content, then a translating processor, even if it operates only syntactically, generates corresponding expressions of another language to which we can attribute as "meanings" (possibly multiple -- the translator may not be one to one) the "semantic intents" of the generating source expressions; whether we find the result consistent or useful or both is, of course, another problem. It is quite possible that by this method the same syntactic object language can be usefully assigned multiple meanings for each expression...
すでに意味内容を持つ言語が与えられたとき、翻訳プロセッサは、たとえ構文的にしか動作しなくても、別の言語の対応する表現を生成します。その表現には、生成元のソース表現の「意味的意図」を「意味」として帰属させることができます(翻訳は1対1ではない可能性があるので、複数になることもあります)。その結果が一貫しているか、有用であるか、あるいはその両方であるかは、もちろん別の問題です。この方法によって、同じ構文的対象言語の各表現に複数の意味を有効に割り当てることも十分に可能です...

It is striking to note how well his words fit ELIZA. The "given language" is English as is the "other language", expressions of which are generated. In principle, the given language could as well be the kind of English in which "word problems" in algebra are given to high school students and the other language, a machine code allowing a particular computer to "solve" the stated problems. (See Bobrow's program STUDENT[3].)

彼の言葉がELIZAにどれほどよく当てはまるかは印象的です。「与えられた言語」は英語であり、表現が生成される「別の言語」もまた英語です。原則として、与えられた言語は、代数の「文章題」が高校生に与えられる際の英語のようなものであってもよく、別の言語は、特定のコンピュータにその問題を「解かせる」ことのできるマシンコードであってもかまいません。(BobrowのプログラムSTUDENT[3]を参照)

The intent of the above remarks is to further rob ELIZA of the aura of magic to which its application to psychological subject matter has to some extent contributed. Seen in the coldest possible light, ELIZA is a translating processor in Gorn's sense; however, it is one which has been especially constructed to work well with natural language text.

上記の発言の意図は、心理学的な主題への適用によってある程度生じている魔法のオーラを、ELIZAからさらに剥ぎ取ることです。最も冷徹に見れば、ELIZAはGornの言う意味での翻訳プロセッサです。ただし、自然言語テキストに対してうまく機能するように特別に構築されたものではあります。

 


REFERENCES

1. ABELSON, R. P., AND CARROLL, J. D.
Computer simulation of individual belief systems.
Amer. Behav. Sci. 9 (May 1965), 24-30.

2. GORN, S.
Semiotic relationships in ambiguously stratified language systems.
Paper presented at Int. Colloq. Algebraic Linguistics and Automatic Theory, Hebrew U. of Jerusalem, Aug. 1964.

3. BOBROW, D. G.
Natural language input for a computer problem solving system.
Doctoral thesis, Math. Dept., MIT, Cambridge, Mass., 1964.

4. WEIZENBAUM, J.
Symmetric list processor.
Comm. ACM 6 (Sept. 1963), 524-544.

5. ROGERS, C.
Client Centered Therapy: Current Practice, Implications and Theory.
Houghton Mifflin, Boston, 1951.

6. YNGVE, V.
COMIT Programming Manual.
MIT Press, Cambridge, Mass., 1961. 


APPENDIX. An ELIZA Script

-------------------------------------------------------
(HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM)
START
(SORRY ((0) (PLEASE DON'T APOLOGIZE)
(APOLOGIES ARE NOT NECESSARY) (WHAT FEELINGS
DO YOU HAVE WHEN YOU APOLOGIZE) (I'VE TOLD YOU
THAT APOLOGIES ARE NOT REQUIRED)))
(DONT = DON'T)
(CANT = CAN'T)
(WONT = WON'T)
(REMEMBER 5
((0 YOU REMEMBER 0) (DO YOU OFTEN THINK OF 4)
(DOES THINKING OF 4 BRING ANYTHING ELSE TO MIND)
(WHAT ELSE DO YOU REMEMBER)
(WHY DO YOU REMEMBER 4 JUST NOW)
(WHAT IN THE PRESENT SITUATION REMINDS YOU OF 4)
(WHAT IS THE CONNECTION BETWEEN ME AND 4))
((0 DO I REMEMBER 0) (DID YOU THINK I WOULD FORGET 5)
(WHY DO YOU THINK I SHOULD RECALL 5 NOW)
(WHAT ABOUT 5) (=WHAT) (YOU MENTIONED 5))
((0) (NEWKEY)))
(IF 3 ((0 IF 0) (DO YOU THINK ITS LIKELY THAT 3) (DO YOU WISH THAT 3)
(WHAT DO YOU THINK ABOUT 3) (REALLY, 2 3)))
(DREAMT 4 ((0 YOU DREAMT 0)
(REALLY, 4) (HAVE YOU EVER FANTASIED 4 WHILE YOU WERE AWAKE)
(HAVE YOU DREAMT 4 BEFORE) (=DREAM) (NEWKEY)))
(DREAMED = DREAMT 4 (=DREAMT))
(DREAM 3 ((0) (WHAT DOES THAT DREAM SUGGEST TO YOU)
(DO YOU DREAM OFTEN) (WHAT PERSONS APPEAR IN YOUR DREAMS)
(DON'T YOU BELIEVE THAT DREAM HAS SOMETHING TO DO WITH
YOUR PROBLEM) (NEWKEY)))
(DREAMS = DREAM 3 (=DREAM))
(HOW (=WHAT))
(WHEN (=WHAT))
(ALIKE 10 (=DIT))
(SAME 10 (=DIT))
(CERTAINLY (=YES))
(FEEL DLIST(/BELIEF))
(THINK DLIST(/BELIEF))
(BELIEVE DLIST(/BELIEF))
(WISH DLIST(/BELIEF))
(MEMORY MY (0 YOUR 0 = LETS DISCUSS FURTHER WHY YOUR 3)
(0 YOUR 0 = EARLIER YOU SAID YOUR 3)
(0 YOUR 0 = BUT YOUR 3)
(0 YOUR 0 = DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR 3))
(NONE ((0) (I AM NOT SURE I UNDERSTAND YOU FULLY)
(PLEASE GO ON)
(WHAT DOES THAT SUGGEST TO YOU)
(DO YOU FEEL STRONGLY ABOUT DISCUSSING SUCH THINGS)))
(PERHAPS ((0) (YOU DON'T SEEM QUITE CERTAIN)
(WHY THE UNCERTAIN TONE)
(CAN'T YOU BE MORE POSITIVE)
(YOU AREN'T SURE) (DON'T YOU KNOW)))
(MAYBE (=PERHAPS))
(NAME 15 ((0) (I AM NOT INTERESTED IN NAMES)
(I'VE TOLD YOU BEFORE, I DON'T CARE ABOUT NAMES -
PLEASE CONTINUE)) )
(DEUTSCH (=XFREMD))
(FRANCAIS (=XFREMD))
(ITALIANO (=XFREMD))
(ESPANOL (=XFREMD))
(XFREMD ((0) (I AM SORRY, I SPEAK ONLY ENGLISH)))
(HELLO ((0) (HOW DO YOU DO. PLEASE STATE YOUR PROBLEM)))
(COMPUTER 50 ((0) (DO COMPUTERS WORRY YOU)
(WHY DO YOU MENTION COMPUTERS) (WHAT DO YOU THINK MACHINES
HAVE TO DO WITH YOUR PROBLEM) (DON'T YOU THINK COMPUTERS CAN
HELP PEOPLE) (WHAT ABOUT MACHINES WORRIES YOU) (WHAT
DO YOU THINK ABOUT MACHINES)))
(MACHINE 50 (=COMPUTER))
(MACHINES 50 (=COMPUTER))
(COMPUTERS 50 (=COMPUTER))
(AM = ARE ((0 ARE YOU 0) (DO YOU BELIEVE YOU ARE 4)
(WOULD YOU WANT TO BE 4) (YOU WISH I WOULD TELL YOU YOU ARE 4)
(WHAT WOULD IT MEAN IF YOU WERE 4) (=WHAT))
((0) (WHY DO YOU SAY 'AM') (I DON'T UNDERSTAND THAT)))
(ARE ((0 ARE I 0)
(WHY ARE YOU INTERESTED IN WHETHER I AM 4 OR NOT)
(WOULD YOU PREFER IF I WEREN'T 4) (PERHAPS I AM 4 IN YOUR
FANTASIES) (DO YOU SOMETIMES THINK I AM 4) (=WHAT))
((0 ARE 0) (DID YOU THINK THEY MIGHT NOT BE 3)
(WOULD YOU LIKE IT IF THEY WERE NOT 3) (WHAT IF THEY WERE NOT 3)
(POSSIBLY THEY ARE 3)) )
(YOUR = MY ((0 MY 0) (WHY ARE YOU CONCERNED OVER MY 3)
(WHAT ABOUT YOUR OWN 3) (ARE YOU WORRIED ABOUT SOMEONE ELSES 3)
(REALLY, MY 3)))
(WAS 2 ((0 WAS YOU 0)
(WHAT IF YOU WERE 4) (DO YOU THINK YOU WERE 4)
(WERE YOU 4) (WHAT WOULD IT MEAN IF YOU WERE 4)
(WHAT DOES ' 4 ' SUGGEST TO YOU) (=WHAT))
((0 YOU WAS 0)
(WERE YOU REALLY) (WHY DO YOU TELL ME YOU WERE 4 NOW)
(PERHAPS I ALREADY KNEW YOU WERE 4) )
((0 WAS I 0) (WOULD YOU LIKE TO BELIEVE I WAS 4)
(WHAT SUGGESTS THAT I WAS 4)
(WHAT DO YOU THINK) (PERHAPS I WAS 4)
(WHAT IF I HAD BEEN 4))
((0) (NEWKEY)) )
(WERE = WAS (=WAS))
(ME = YOU)
(YOU'RE = I'M ((0 I'M 0) (PRE (I ARE 3) (=YOU))))
(I'M = YOU'RE ((0 YOU'RE 0) (PRE (YOU ARE 3) (=I))))
(MYSELF = YOURSELF)
(YOURSELF = MYSELF)
(MOTHER DLIST(/NOUN FAMILY))
(MOM = MOTHER DLIST(/ FAMILY))
(DAD = FATHER DLIST(/ FAMILY))
(SISTER DLIST(/FAMILY))
(BROTHER DLIST(/FAMILY))
(WIFE DLIST(/FAMILY))
(CHILDREN DLIST(/FAMILY))
(I = YOU
((0 YOU (* WANT NEED) 0) (WHAT WOULD IT MEAN TO YOU IF YOU GOT 4)
(WHY DO YOU WANT 4) (SUPPOSE YOU GOT 4 SOON) (WHAT
IF YOU NEVER GOT 4) (WHAT WOULD GETTING 4 MEAN TO
YOU) (WHAT DOES WANTING 4 HAVE TO DO WITH THIS DISCUSSION))
((0 YOU ARE 0 (*SAD UNHAPPY DEPRESSED SICK) 0)
(I AM SORRY TO HEAR YOU ARE 5) (DO YOU THINK COMING HERE
WILL HELP YOU NOT TO BE 5) (I'M SURE ITS NOT PLEASANT TO
BE 5) (CAN YOU EXPLAIN WHAT MADE YOU 5))
((0 YOU ARE 0 (*HAPPY ELATED GLAD BETTER) 0)
(HOW HAVE I HELPED YOU TO BE 5)
(HAS YOUR TREATMENT MADE YOU 5) (WHAT MAKES YOU 5 JUST
NOW) (CAN YOU EXPLAIN WHY YOU ARE SUDDENLY 5))
((0 YOU WAS 0) (=WAS))
((0 YOU (/BELIEF) YOU 0) (DO YOU REALLY THINK SO) (BUT YOU ARE
NOT SURE YOU 5) (DO YOU REALLY DOUBT YOU 5))
((0 YOU 0 (/BELIEF) 0 I 0) (=YOU))
((0 YOU ARE 0)
(IS IT BECAUSE YOU ARE 4 THAT YOU CAME TO ME)
(HOW LONG HAVE YOU BEEN 4)
(DO YOU BELIEVE IT NORMAL TO BE 4)
(DO YOU ENJOY BEING 4))
((0 YOU (* CAN'T CANNOT) 0) (HOW DO YOU KNOW YOU CAN'T 4)
(HAVE YOU TRIED)
(PERHAPS YOU COULD 4 NOW)
(DO YOU REALLY WANT TO BE ABLE TO 4))
((0 YOU DON'T 0) (DON'T YOU REALLY 4)(WHY DON'T YOU 4)
(DO YOU WISH TO BE ABLE TO 4) (DOES THAT TROUBLE YOU))
((0 YOU FEEL 0) (TELL ME MORE ABOUT SUCH FEELINGS)
(DO YOU OFTEN FEEL 4)
(DO YOU ENJOY FEELING 4)
(OF WHAT DOES FEELING 4 REMIND YOU))
((0 YOU 0 I 0) (PERHAPS IN YOUR FANTASY WE 3 EACH OTHER)
(DO YOU WISH TO 3 ME)
(YOU SEEM TO NEED TO 3 ME)
(DO YOU 3 ANYONE ELSE))
((0) (YOU SAY 1)
(CAN YOU ELABORATE ON THAT) (DO YOU SAY 1 FOR SOME SPECIAL REASON)
(THAT'S QUITE INTERESTING)))
(YOU = I ((0 I REMIND YOU OF 0) (=DIT))
((0 I ARE 0) (WHAT MAKES YOU THINK I AM 4)
(DOES IT PLEASE YOU TO BELIEVE I AM 4)
(DO YOU SOMETIMES WISH YOU WERE 4)
(PERHAPS YOU WOULD LIKE TO BE 4))
((0 I 0 YOU) (WHY DO YOU THINK I 3 YOU)
(YOU LIKE TO THINK I 3 YOU - DON'T YOU)
(WHAT MAKES YOU THINK I 3 YOU)
(REALLY, I 3 YOU) (DO YOU WISH TO BELIEVE I 3 YOU)
(SUPPOSE I DID 3 YOU - WHAT WOULD THAT MEAN)
(DOES SOMEONE ELSE BELIEVE I 3 YOU))
((0 I 0) (WE WERE DISCUSSING YOU - NOT ME)
(OH, I 3) (YOU'RE NOT REALLY TALKING ABOUT ME - ARE YOU)
(WHAT ARE YOUR FEELINGS NOW)))
(YES ((0) (YOU SEEM QUITE POSITIVE) (YOU ARE SURE)
(I SEE) (I UNDERSTAND)))
(NO ((0) (ARE YOU SAYING 'NO' JUST TO BE NEGATIVE)
(YOU ARE BEING A BIT NEGATIVE) (WHY NOT) (WHY ' NO')))
(MY = YOUR 2 ((0 YOUR 0 (/FAMILY) 0)
(TELL ME MORE ABOUT YOUR FAMILY) ( WHO ELSE IN YOUR FAMILY
5) (YOUR 4) (WHAT ELSE COMES TO MIND WHEN YOU THINK OF
YOUR 4))
((0 YOUR 0) (YOUR 3) (WHY DO YOU SAY YOUR 3) (DOES THAT
SUGGEST ANYTHING ELSE WHICH BELONGS TO YOU) (IS IT
IMPORTANT TO YOU THAT 2 3)))
(CAN ((0 CAN I 0) (YOU BELIEVE I CAN 4 DON'T YOU) (=WHAT)
(YOU WANT ME TO BE ABLE TO 4) (PERHAPS YOU WOULD LIKE TO BE
ABLE TO 4 YOURSELF)) ((0 CAN YOU 0) (WHETHER OR NOT YOU CAN 4
DEPENDS ON YOU MORE THAN ON ME) (DO YOU WANT TO BE ABLE TO 4)
(PERHAPS YOU DON'T WANT TO 4) (=WHAT)))
(WHAT ((0) (WHY DO YOU ASK) (DOES THAT QUESTION INTEREST YOU)
(WHAT IS IT YOU REALLY WANT TO KNOW)
(ARE SUCH QUESTIONS MUCH ON YOUR MIND)
(WHAT ANSWER WOULD PLEASE YOU MOST)
(WHAT DO YOU THINK) (WHAT COMES TO YOUR MIND WHEN YOU ASK THAT)
(HAVE YOU ASKED SUCH QUESTIONS BEFORE)
(HAVE YOU ASKED ANYONE ELSE)))
(BECAUSE ((0) (IS THAT THE REAL REASON) (DON'T ANY OTHER REASONS
COME TO MIND) (DOES THAT REASON SEEM TO EXPLAIN ANYTHING ELSE)
(WHAT OTHER REASONS MIGHT THERE BE)))
(WHY ((0 WHY DON'T I 0) (DO YOU BELIEVE I DON'T 5) (PERHAPS I
WILL 5 IN GOOD TIME) (SHOULD YOU 5 YOURSELF) (YOU WANT ME TO 5)
(=WHAT))
((0 WHY CAN'T YOU O) (DO YOU THINK YOU SHOULD BE ABLE TO 5)
(DO YOU WANT TO BE ABLE TO 5) (DO YOU BELIEVE THIS WILL HELP YOU
TO 5) (HAVE YOU ANY IDEA WHY YOU CAN'T 5) (=WHAT))
(=WHAT))
(EVERYONE 2 ((0 (* EVERYONE EVERYBODY NOBODY NOONE) 0)
(REALLY, 2) (SURELY NOT 2) (CAN YOU THINK OF
ANYONE IN PARTICULAR) (WHO, FOR EXAMPLE) (YOU ARE THINKING OF
A VERY SPECIAL PERSON)
(WHO, MAY I ASK) (SOMEONE SPECIAL PERHAPS)
(YOU HAVE A PARTICULAR PERSON IN MIND, DON'T YOU) (WHO DO YOU
THINK YOU'RE TALKING ABOUT)))
(EVERYBODY 2 (= EVERYONE))
(NOBODY 2 (=EVERYONE))
(NOONE 2 (=EVERYONE))
(ALWAYS 1 ((0) (CAN YOU THINK OF A SPECIFIC EXAMPLE) (WHEN)
(WHAT INCIDENT ARE YOU THINKING OF) (REALLY, ALWAYS)))
(LIKE 10 ((0 (*AM IS ARE WAS) 0 LIKE 0) (=DIT))
((0) (NEWKEY)) )
(DIT ((0) (IN WHAT WAY) (WHAT RESEMBLANCE DO YOU SEE)
(WHAT DOES THAT SIMILARITY SUGGEST TO YOU)
(WHAT OTHER CONNECTIONS DO YOU SEE)
(WHAT DO YOU SUPPOSE THAT RESEMBLANCE MEANS)
(WHAT IS THE CONNECTION, DO YOU SUPPOSE)
(COULD THERE REALLY BE SOME CONNECTION)
(HOW)))
()
-------------------------------------------------------

RECEIVED SEPTEMBER, 1965
