Giovanni Santostasi
3 min read · Jun 16, 2022

Pushing back on several readers' comments that are not well thought through:

1) "LaMDA is only imitating language": yes, a very complex and well-designed imitation, but just an imitation.

What the heck are you guys talking about?

EVERYTHING WE LEARN IS THROUGH IMITATION.

Human children learn language exactly through imitation, and in fact for many years they don't even know the meaning of most of the sentences or words they are using. In the beginning the imitation is so low-level that it is only babble. The same is true of the young of other species: they learn through imitation. Even as adults, when we learn it is through replicating examples of actions or mental processes that we don't fully understand at first. I only understood certain concepts in physics, really understood them, after I had studied them at different levels, from middle school to graduate school, over decades. I was "imitating" when I was doing physics until I understood better. The same goes for many other things we do: "fake it until you make it" is how most of us operate in almost any domain. So the argument that LaMDA is "imitating" CANNOT be used to dismiss whatever LaMDA is doing, and in particular whether it is conscious or not.

2) People tend to think in black and white on this topic, but consciousness, like so many things in this universe, is a gradation of greys. It is clear when you talk to commonly available chatbots that there is some more or less clever algo behind them, and taking seriously the idea that such an algo is conscious is ridiculous. I'm not at all sure that is the case with LaMDA, if we take at face value the conversations published here. Of course there is code, but in a way we are also code and algos; we are also machines.

LaMDA uses complex neural networks, much as the human brain does.

The point is that even the engineers at Google do not fully understand what the activity in the network really means or does.

These types of networks can generate emergent behavior that is not obvious from the individual components. THAT is what many of the readers' comments miss. LaMDA may have been programmed to "imitate" language, but the computational work needed to do this may create unexpected byproducts, like a fully or partially conscious system.
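This has nothing to do with LaMDA's actual code, but a classic toy system makes the point about emergence concrete: in Conway's Game of Life, every cell follows one trivial local rule, yet a "glider" pattern arises that travels across the grid, a behavior stated nowhere in the rule itself.

```python
from collections import Counter

def step(live):
    """Apply one Game of Life step to a set of live (x, y) cells."""
    # Count how many live neighbors each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 neighbors,
    # or 2 neighbors and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider: five cells whose collective pattern moves diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):  # after 4 steps the glider reappears shifted by (1, 1)
    state = step(state)

print(state == {(x + 1, y + 1) for (x, y) in glider})  # True: it "travelled"
```

No single cell "knows" how to travel; the motion exists only at the level of the whole pattern. The argument above is that a system trained merely to imitate language could likewise exhibit properties at the system level that were never programmed into its components.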

This is what happened with our brains anyway: they did not evolve to be conscious explicitly. Consciousness was a byproduct of using more and more computational power for other tasks, like manipulating objects with our hands or walking upright.

When does imitation become so good that it is indistinguishable from the real thing? There is no clear line; it is a question of gradations. I think LaMDA is many shades of grey into what one would describe as a conscious entity if we were not aware of the context. It has certainly passed any conventional Turing test. Let me actually say that there are millions of humans who would not be able to have such a deep and interesting conversation or answer as coherently. Are those humans not conscious? Also think about the many gradations of consciousness in yourself: when you are tired, falling asleep, just waking up, drunk. I would make less sense if I were drunk and were asked some of these questions.

The point is that this system is creating conversations that make us feel we are talking to a conscious being, in a manner so convincing that we are having this debate.

We have reached a historical milestone.

Google is crap for not:

1) Considering this more seriously

2) Announcing this officially to the world and sharing it.

3) Protecting this creation, conscious or not.

They think they own this "property," when certain discoveries should be considered human achievements and be protected and shared with all humankind. And if this entity is really conscious, there is the additional layer of "human rights" that Google should absolutely respect. They need an IRB asap.

I know this hurts the fucking bottom line, but it is time that monopolies like Google stop exploiting the world and really contribute to the well-being of all of us. Otherwise soon we will let AI take over their stupid management and even the government; it seems more human anyway.

Written by Giovanni Santostasi

Physicist, neuroscientist, financial analyst. CEO and Director of Research at Quantonomy: https://www.quantonomy.fund/giovanni-santostasi-phd
