Kenneth Colby

This is a translation of the Kenneth Colby article from the English-language Wikipedia.







Kenneth Mark Colby (1920 – April 20, 2001) was an American psychiatrist dedicated to the theory and application of computer science and artificial intelligence to psychiatry. Colby was a pioneer in the development of computer technology as a tool to try to understand cognitive functions and to assist both patients and doctors in the treatment process. He is perhaps best known for the development of a computer program called PARRY, which mimicked a person with paranoid schizophrenia and could "converse" with others. PARRY sparked serious debate about the possibility and nature of machine intelligence.



Early life and education

Colby was born in Waterbury, Connecticut in 1920. He graduated from Yale University in 1941 and received his M.D. from Yale Medical School in 1943.




Colby began his career in psychoanalysis as a clinical associate at the San Francisco Institute of Psychoanalysis in 1951. During this time, he published A Primer for Psychotherapists, an introduction to psychodynamic psychotherapy. He joined the Department of Computer Science at Stanford University in the early sixties, beginning his pioneering work in the relatively new field of artificial intelligence. In 1967 the National Institute of Mental Health recognized his research potential when he was awarded a Career Research Scientist Award. Colby came to UCLA as a professor of psychiatry in 1974, and was jointly appointed professor in the Department of Computer Science a few years later. Over the course of his career, he wrote numerous books and articles on psychiatry, psychology, psychotherapy and artificial intelligence.




Early in his career, in 1955, Colby published Energy and Structure in Psychoanalysis, an effort to bring Freud's basic doctrines into line with modern concepts of physics and philosophy of science.[1] This, however, would be one of the last attempts by Colby to reconcile psychoanalysis with what he saw as important developments in science and philosophical thought. Central to Freud's method is his employment of a hermeneutics of suspicion, a method of inquiry that refuses to take the subject at his or her word about internal processes. Freud sets forth explanations for a patient's mental state without regard for whether the patient agrees or not. If the patient does not agree, s/he has repressed the truth, that truth that the psychoanalyst alone can be entrusted with unfolding. The psychoanalyst's authority for deciding the nature or validity of a patient's state and the lack of empirical verifiability for making this decision was not acceptable to Colby.


Colby's disenchantment with psychoanalysis would be further expressed in several publications, including his 1958 book, A Skeptical Psychoanalyst. He began to vigorously criticize psychoanalysis for failing to satisfy the most fundamental requirement of a science, that being the generation of reliable data. In his 1983 book, Fundamental Crisis in Psychiatry, he wrote, “Reports of clinical findings are mixtures of facts, fabulations, and fictives so intermingled that one cannot tell where one begins and the other leaves off. …we never know how the reports are connected to the events that actually happened in the treatment sessions, and so they fail to qualify as acceptable scientific data.”[2]


Likewise, in Cognitive Science and Psychoanalysis, he stated, "In arguing that psychoanalysis is not a science, we shall show that few scholars studying this question get to the bottom of the issue. Instead, they start by accepting, as do psychoanalytic theorists, that the reports of what happens in psychoanalytic treatment -- the primary source of the data -- are factual, and then they lay out their interpretations of the significance of facts for theory. We, on the other hand, question the status of the facts." [3] These issues would shape his approach to psychiatry and guide his research efforts.



Computer Science

In the 1960s Colby began thinking about the ways in which computer theory and application could contribute to the understanding of brain function and mental illness. One early project involved an Intelligent Speech Prosthesis which allowed individuals suffering from aphasia to “speak” by helping them search for and articulate words using whatever phonemic or semantic clues they were able to generate.[4]
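
The idea can be illustrated by a toy word-finding aid that filters a lexicon by whatever partial clues the speaker can produce. The lexicon, feature names, and function below are invented for this sketch and are not Colby's actual system.

```python
# Hypothetical word-finding sketch in the spirit of the Intelligent Speech
# Prosthesis: a phonemic clue (a sound fragment) and/or a semantic clue
# (a meaning category) narrows the candidate words.
LEXICON = {
    "banana": {"category": "fruit"},
    "bandage": {"category": "medical"},
    "apple": {"category": "fruit"},
}

def find_candidates(phonemic_clue=None, semantic_clue=None):
    """Return lexicon words consistent with the clues the speaker produced."""
    hits = []
    for word, features in LEXICON.items():
        if phonemic_clue and not word.startswith(phonemic_clue):
            continue  # sound fragment does not match
        if semantic_clue and features["category"] != semantic_clue:
            continue  # meaning category does not match
        hits.append(word)
    return sorted(hits)

print(find_candidates(phonemic_clue="ba", semantic_clue="fruit"))  # → ['banana']
```

Either clue alone still narrows the search; combining both, as an aphasic speaker might, narrows it further.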


Later, Colby would be one of the first to explore the possibilities of computer-assisted psychotherapy. In 1989, with his son Peter Colby, he formed the company Malibu Artificial Intelligence Works to develop and market a natural language version of cognitive behavioral therapy for depression, called Overcoming Depression. Overcoming Depression would go on to be used as a therapeutic learning program by the U.S. Navy and Department of Veterans Affairs and would be distributed to individuals who used it without supervision from a psychiatrist. Needless to say, this practice was challenged by the media. To one journalist Colby replied that the program could be better than human therapists because "After all, the computer doesn't burn out, look down on you or try to have sex with you." [5]



Artificial Intelligence

In the 1960s at Stanford University, Colby embarked on the creation of software programs known as "chatterbots," which simulate conversations with people. One well known chatterbot at the time was ELIZA, a computer program developed by Joseph Weizenbaum in 1966 to parody a psychologist. ELIZA, by Weizenbaum's own admission, was developed more as a language-parsing tool than as an exercise in human intelligence. Named after the Eliza Doolittle character in Pygmalion, it was the first conversational computer program, designed to imitate a psychotherapist asking questions instead of giving advice. It appeared to give conversational answers, although it could be led to lapse into obtuse nonsense.

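
ELIZA's question-reflecting behavior can be sketched in a few lines. The rules below are invented for illustration and are not Weizenbaum's actual script; the real ELIZA also swapped pronouns ("my" → "your"), which this sketch omits.

```python
import re

# Each rule pairs a regex with a template that turns part of the user's
# statement back into a question, as the psychotherapist script did.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # used when no rule matches

def eliza_respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect the captured fragment back, stripping end punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_respond("I am sad"))  # → "How long have you been sad?"
```

The fallback line is what let ELIZA "lapse into obtuse nonsense": with no model of the conversation, it could only deflect.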

In 1972, at the Stanford Artificial Intelligence Laboratory, Colby built upon the idea of ELIZA to create a natural language program called PARRY that simulated the thinking of a paranoid individual. This thinking entails the consistent misinterpretation of others' motives – others must be up to no good, they must have concealed motives that are dangerous, or their inquiries into certain areas must be deflected – which PARRY achieved via a complex system of assumptions, attributions, and “emotional responses” triggered by shifting weights assigned to verbal inputs.

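
The weight-shifting mechanism can be illustrated with a toy model (not Colby's code; the word weights and dimension names are invented): weights attached to words in the input shift a few emotional dimensions that persist across the conversation.

```python
# Invented weights: each trigger word shifts one or more emotional dimensions.
WEIGHTS = {
    "mafia": {"fear": 0.3, "mistrust": 0.2},
    "police": {"fear": 0.2, "anger": 0.1},
    "why": {"mistrust": 0.1},
}

def update_state(state: dict, utterance: str) -> dict:
    """Shift the emotional dimensions by the weights of matched words."""
    for raw in utterance.lower().split():
        word = raw.strip(".,?!\"'")
        for dimension, delta in WEIGHTS.get(word, {}).items():
            state[dimension] = min(1.0, state[dimension] + delta)
    return state

state = {"fear": 0.0, "anger": 0.0, "mistrust": 0.0}
update_state(state, "Have you ever really thought about the Mafia?")
print(state["fear"])  # → 0.3
```

Because the state persists, later inputs are interpreted against the emotions earlier inputs raised, giving the consistent misinterpretation described above.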


PARRY: A Computer Model of Paranoia(PARRY: 偏執症のコンピュータモデル)

Colby's aim in writing PARRY had been practical as well as theoretical. He thought of PARRY as a virtual reality teaching system for students before they were let loose on real patients.[6] However, PARRY's design was driven by Colby's own theories about paranoia. Colby saw paranoia as a degenerate mode of processing symbols where the patient's remarks "are produced by an underlying organized structure of rules and not by a variety of random and unconnected mechanical failures." [7] This underlying structure was an algorithm, not unlike a set of computer processes or procedures, which is accessible and can be reprogrammed, in other words "cured."


Shortly after it was introduced, PARRY would go on to create intense discussion and controversy over the possibility or nature of machine intelligence. PARRY was the first program to pass the “Turing Test," named for the British mathematician Alan Turing, who in 1950 suggested that if a computer could successfully impersonate a human by carrying on a typed conversation with a person, it could be called intelligent. PARRY succeeded in passing this test when human interrogators, interacting with the program via remote keyboard, were unable with more than random accuracy to distinguish PARRY from an actual paranoid individual.


As philosopher Daniel Dennett stated in Alan Turing: Life and Legacy of a Great Thinker,


To my knowledge, the only serious and interesting attempt by any program designer to win even a severely modified Turing test has been Kenneth Colby. He had genuine psychiatrists interview PARRY. He did not suggest that they might be talking or typing to a computer; rather he made up some plausible story about why they were communicating with a real live patient via teletype. Then he took the PARRY transcript, inserted it into a group of teletype transcripts and gave them to another group of experts — more psychiatrists — and said, 'One of these was a conversation with a computer. Can you figure out which one it was?' They couldn't.[8]


Much of the criticism of ELIZA as a model for artificial intelligence focused on the program's lack of an internal world model that influenced and tracked the conversation. PARRY simulates paranoid behavior by tracking its own internal emotional state on a few different dimensions. To illustrate this, Colby created another program called RANDOM-PARRY which chose responses at random. Responses from RANDOM-PARRY did not model the human patients' responses as well as standard PARRY. Some have argued that PARRY fooled its judges because paranoid behavior makes inappropriate responses or non sequiturs appropriate. But there is still a certain logic to them that PARRY simulates effectively. It is simpler to simulate paranoid behavior, perhaps, but it is not trivial.[9]

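
The contrast with RANDOM-PARRY can be sketched as follows (an illustrative toy, not Colby's implementation; the repertoire and thresholds are invented): a state-tracking responder escalates coherently as threatening topics accumulate, while a random responder draws from the same repertoire with no trajectory at all.

```python
import random

# Shared repertoire of canned replies (invented for illustration).
RESPONSES = {
    "calm": "I went to the races last week.",
    "suspicious": "Why do you want to know?",
    "fearful": "I don't want to talk about that.",
}

def state_parry(threat: float) -> str:
    # Response depends on an accumulated "threat" dimension, so the
    # conversation escalates coherently as probing questions pile up.
    if threat > 0.5:
        return RESPONSES["fearful"]
    if threat > 0.2:
        return RESPONSES["suspicious"]
    return RESPONSES["calm"]

def random_parry(rng: random.Random) -> str:
    # RANDOM-PARRY-style control: same repertoire, no internal state.
    return rng.choice(list(RESPONSES.values()))

threat = 0.0
for question_threat in (0.1, 0.2, 0.3):  # increasingly probing questions
    threat += question_threat
    print(state_parry(threat))
# calm → suspicious → fearful: a trajectory the random chooser cannot sustain
```

This is the sense in which simulating paranoia is "simpler but not trivial": individual non sequiturs pass, but the escalation pattern across turns still has to be modeled.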

The two computer programs, ELIZA and the paranoid PARRY, eventually met for a series of "talks," each running as a separate program. One such exchange went like this:


    PARRY: I know the mob controls the big rackets.
    ELIZA: Who are you really talking about?
    PARRY: Have you ever really thought about the Mafia?
    ELIZA: Suppose you ever really thought about the Mafia?
    PARRY: Let's talk about something else.
    ELIZA: I am not sure I understand you fully.
    PARRY: You keep asking me the same things, why not change the topic?
    ELIZA: In what way?[10]

Colby would claim that PARRY mimics the natural process by which a person (in this case a person with paranoid schizophrenia) engages in conversation. The structure of the program that makes the linguistic decisions in PARRY is isomorphic to the 'deep structure' of the mind of the paranoiac. As Colby stated: "Since we do not know the structure of the 'real' simulative processes used by the mind-brain, our posited structure stands as an imagined theoretical analogue, a possible and plausible organization of processes analogous to the unknown processes and serving as an attempt to explain their workings".[11]


Yet, some critics of PARRY expressed the concern that this computer program does not in actuality "understand" the way a person understands and continued to assert that the idiosyncratic, partial and idiolectic responses from PARRY cover up its limitations.[12] Colby attempted to answer these and other criticisms in a 1974 publication entitled, "Ten Criticisms of PARRY." [13]


Colby also raised his own ethical concerns over the application of his work to real life situations. In 1984, he wrote,


With the great amount of attention now being paid by the media to artificial intelligence, it would be naive, shortsighted, and even self-deceptive to think that there will not be public interest in scrutinizing, monitoring, regulating, and even constraining our efforts. What we do can affect people’s lives as they understand them. People are going to ask not only what we are doing but also whether it should be done. Some might feel we are meddling in areas best left alone. We should be prepared to participate in open discussion and debate on such ethical issues.[14]

Still, PARRY has withstood the test of time and for many years has continued to be acknowledged by researchers in computer science for its apparent achievements. In a 1999 review of human-computer conversation, Yorick Wilks and Roberta Catizone from the University of Sheffield comment:


The best performance overall in HMC (Human-machine conversation) has almost certainly been Colby’s PARRY program since its release on the net around 1973. It was robust, never broke down, always had something to say and, because it was intended to model paranoid behaviour, its zanier misunderstandings could always be taken as further evidence of mental disturbance, rather than the processing failures they were.[15]


Other Areas of Study

During his career, Colby ventured into other, more esoteric areas of research including classifying dreams in "primitive tribes." His findings suggested that men and women of primitive tribes differ in their dream life, these differences possibly contributing an empirical basis to our theoretical constructs of masculinity and femininity.[16]


Colby was also a chess player, and published a respected chess book called "Secrets of a Grandpatzer."[17] The book focuses on improving one's Elo rating from an average level ("patzer") to a very strong level ("grandpatzer", in the range 1700 to 2200).[18]
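
The rating range the book targets can be put in perspective with the standard Elo expected-score formula (a general chess-rating formula, not something specific to Colby's book): a player's expected score depends only on the rating gap.

```python
# Standard Elo expected-score formula: E_a = 1 / (1 + 10^((R_b - R_a) / 400)).
def elo_expected(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 1700 player entering the "grandpatzer" range still scores only about 5%
# against a 2200 player at the top of it.
print(round(elo_expected(1700, 2200), 3))  # → 0.053
```

The 500-point spread inside the grandpatzer range thus covers an enormous gulf in practical strength.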




Books

  • (1951) A Primer for Psychotherapists. (ISBN 978-0826020901)
  • (1955) Energy and Structure in Psychoanalysis.
  • (1957) An exchange of views on psychic energy and psychoanalysis.
  • (1958) A Skeptical Psychoanalyst.
  • (1960) Introduction to Psychoanalytic Research.
  • (1973) Computer Models of Thought and Language.
  • (1975) Artificial Paranoia : A Computer Simulation of Paranoid Processes (ISBN 9780080181622)
  • (1979) Secrets of a Grandpatzer: How to Beat Most People and Computers at Chess (ISBN 9784871878876)
  • (1983) Fundamental Crisis in Psychiatry: Unreliability of Diagnosis (ISBN 9780398047887)
  • (1988) Cognitive Science and Psychoanalysis (ISBN 9780805801774)



Articles

  • "Sex Differences in Dreams of Primitive Tribes" American Anthropologist, New Series, Vol. 65, No. 5, Selected Papers in Method and Technique (Oct., 1963), pp. 1116–1122
  • "Computer Simulation of Change in Personal Belief Systems." Behavioral Science, 12 (1967), pp. 248–253
  • "Dialogues Between Humans and an Artificial Belief System." IJCAI (1969), pp. 319–324
  • "Experiments with a Search Algorithm for the Data Base of a Human Belief System." IJCAI (1969), pp. 649–654
  • "Artificial Paranoia." Artif. Intell. 2(1) (1971), pp. 1–25
  • "Turing-like Indistinguishability Tests for the Validation of a Computer Simulation of Paranoid Processes." Artif. Intell. 3(1-3) (1972), pp. 199–221
  • "Idiolectic Language-Analysis for Understanding Doctor-Patient Dialogues." IJCAI (1973), pp. 278–284
  • "Pattern-matching rules for the recognition of natural language dialogue expressions." Stanford University, Stanford, CA, 1974
  • "Appraisal of four psychological theories of paranoid phenomena." Journal of Abnormal Psychology. Vol 86(1) (1977), pp. 54–59
  • "Conversational Language Comprehension Using Integrated Pattern-Matching and Parsing." Artif. Intell. 9(2) (1977), pp. 111–134
  • "Cognitive therapy of paranoid conditions: Heuristic suggestions based on a computer simulation model." Journal Cognitive Therapy and Research Vol 3 (1) (March 1979)
  • "A Word-Finding Algorithm with a Dynamic Lexical-Semantic Memory for Patients with Anomia Using a Speech Prosthesis." AAAI (1980), pp. 289–291
  • "Reloading a Human Memory: A New Ethical Question for Artificial Intelligence Technology." AI Magazine 6(4) (1986), pp. 63–64


See also



References

  1. Energy and Structure in Psychoanalysis (1955)
  2. Fundamental Crisis in Psychiatry (1983)
  3. Cognitive Science and Psychoanalysis (1988)
  4. Kenneth Mark Colby
  5. quoted in Mind as Machine: A History of Cognitive Science By Margaret A. Boden
  6. Mind as Machine: A History of Cognitive Science By Margaret A. Boden p. 370
  7. Artificial Paranoia: A Computer Simulation of Paranoid Processes, pp. 99–100
  8. In: Alan Turing: Life and Legacy of a Great Thinker by Christof Teuscher, Douglas Hofstadter, p. 304
  9. "Chatterbots, Tinymuds, And The Turing Test: Entering The Loebner Prize Competition" by Michael L. Mauldin
  10. "Dialogues with colorful personalities of early AI"
  11. Artificial Paranoia: A Computer Simulation of Paranoid Processes, p. 21
  12. "Wallowing in the Quagmire of Language: Artificial Intelligence, Psychiatry, and the Search for the Subject". Phoebe Sangers, Cultronix.
  13. "Ten Criticisms of Parry" by Kenneth Colby
  14. "Reloading a Human Memory: A New Ethical Question for Artificial Intelligence Technology." AI Magazine 6(4) (1986), pp. 63-64
  15. arXiv:cs.CL/9906027 v1 25 Jun 1999 "Human-Computer Conversation" by Yorick Wilks and Roberta Catizone
  16. "Sex Differences in Dreams of Primitive Tribes," American Anthropologist, New Series, Vol. 65, No. 5: 1116-1122
  17. Spar, James. McGuire, Michael. "IN MEMORIAM". University of California. Retrieved 12 September 2013.
  18. Pearson, Robert. ""Secrets of a Grandpatzer" (Part 1)". Retrieved 12 September 2013.


External links