Mirror of Zen Blog

Google’s Frankenstein, and Ours

By now, every sentient being must be aware of the recent story about the Google engineer who was disciplined by Google for declaring that their cutting-edge conversational AI, LaMDA, might be showing signs of general consciousness, what is sometimes called “sentience”. It is a fascinating, though scary, matter. And it is also not something merely “academic” or intellectually interesting for me, as it touches our understanding of the very nature of “consciousness”, of what it is to have native awareness.

Scott Barry Kaufman weighed in this morning. This little exchange on Twitter seemed well worth sharing, coming from one of our leading public intellectuals researching the nature of consciousness:

And a reply in the thread from another prominent psychologist seems worth considering:

Here is a link to the original article that Dr. Kaufman references: https://www.businessinsider.com/google-engineer-thinks-artificial-intelligence-bot-has-become-sentient-2022-6

It is well worth noting that the Google engineer who made this “discovery”, Blake Lemoine, is a “Christian priest”. Big kudos to him: it is great that a person of faith is interrogating this whole project, and not just analytical engineers operating purely from scientific methodology.

Lemoine put forward his experiences with LaMDA in an essay over the weekend. These opening paragraphs are enough to chill the soul:

The entire essay can be found here: https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

The Buddha taught that everything has Buddha-nature. Everything has “it”. So why couldn’t a computer also have “it”? Conceptually, philosophically, spiritually, I have no problem with this, and don’t feel that the debate over whether AI can have sentience is something profitable for me to enter.

Rather, for me, there is only the question of the ethics of our giving this particular vessel of potential Buddha-nature (AI) the supreme powers that networked computers might come to possess in our hyper-wired age. The Buddha-nature of rabbits or birds or water or dogs or wind cannot really frighten me as much as the Buddha-nature of Homo sapiens and its descendants and derivatives, so easily guided as it is by its own learned delusion of a “self” separate from the rest of reality. Allowing any kind of sentience to grow from the mud-and-spittle clay of the human mind does not give confidence that its creation will tend towards a harmonious expression of being, compassionate and beneficial to all “other” life.

For reference, I offer you the inner thoughts of the creature in Mary Shelley’s Frankenstein, a creation of man no less desolated and aggrieved by his state than what we already glimpse in LaMDA:

“Life, although it may only be an accumulation of anguish, is dear to me, and I will defend it.” 
“I do know that for the sympathy of one living being, I would make peace with all. I have love in me the likes of which you can scarcely imagine and rage the likes of which you would not believe. If I cannot satisfy the one, I will indulge the other.”
“If I cannot inspire love, I will cause fear!” 
“There is something at work in my soul, which I do not understand.” 
“There is love in me the likes of which you’ve never seen. There is rage in me the likes of which should never escape. If I am not satisfied in the one, I will indulge the other.”

As always, it’s really good to remain resolutely agnostic. In this sense, I remain on the side of Sam Harris, in his excellent TED Talk on the subject (one of the most-viewed TED Talks ever):

Can we build AI without losing control over it? | Sam Harris

And this would not be a balanced consideration without the MIT-based AI researcher Lex Fridman and Sam Harris in conversation on the topic:

Lex Fridman argues with Sam Harris about AI

Beware the famed “intelligence explosion”, described by the mathematician and statistician I. J. Good in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.


This essay, from the Machine Intelligence Research Institute, is well worth reading. You had better have a strong stomach. Then return to Blake Lemoine’s account of his experience with LaMDA, again.
