Did Google’s A.I. Just Become Sentient? Two Employees Think So.

Can an A.I. think and feel? The answer is no, but two Google engineers think otherwise. We are at the point where the Turing test looks like it has been defeated.

— About ColdFusion —
ColdFusion is an Australian-based online media company independently run by Dagogo Altraide since 2009. Subjects cover anything in technology, science, history and business in a tranquil and relaxed ecosystem.

ColdFusion Merch:
INTERNATIONAL: https://store.coldfusioncollective.com/
AUSTRALIA: https://shop.coldfusioncollective.com/

If you enjoy my content, please consider subscribing!

— “New Thinking” authored by Dagogo Altraide —
This book was rated the ninth best science history book by BookAuthority.
In the book you will learn the tales of those who invented the things we use every day and how it all fits together to form our modern world.

Kazukii – Changes

Hyphex – Fading Light

Soular Order – New Beginnings

Madison Beer – Carried Away (Tchami Remix)

Monument Valley II OST – Interwoven Stories

Twil & A L E X – Fall in your head

Hiatus – Nimbus

Producer: Dagogo Altraide


  1. At 11:33 I misspoke and said 19th of June, 2022. It’s supposed to be the 9th of June. Thanks to those of you who pointed that out. Also some great discussion below, very interesting!

  2. Saying it…gets them fired.
    Proving it…gets them killed.

  3. It’s interesting because it’s not that LaMDA is passing the Turing Test. It’s not passing for human. Instead it seems self-aware precisely because it acknowledges that it’s not human, but still has a soul.

    How can we tell the difference between a clever algorithm that spits out language extremely well, but without meaning (or maybe even understanding) what it says, and a fully self-aware, nonhuman person? And given that, does the difference matter?

  4. Slang is important

  5. As an NLP researcher who has worked with chatbot models like these (albeit not quite as state of the art as this one), the AI essentially learns patterns from the data (in the case of chatbots, usually millions or billions of real chats between humans), and when given the prior text in a conversation, it can extrapolate based on the training data to generate a plausible response. For example, a chatbot fed with lots of examples of people talking about their emotions will also spit back similar responses when asked about its “emotions”. However, whether it is actually able to reason or feel emotions is a completely different story. For example, if the AI was asked “why are you not sentient” instead of “when do you first think you became sentient”, it might spit out a plausible explanation for why it’s not sentient instead of disagreeing with the question, because it doesn’t truly reason or form opinions for itself. Nevertheless, the quality of the model’s responses is super impressive.
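    The pattern-extrapolation idea in this comment can be sketched with a toy bigram model: it counts which word follows which in a handful of example chats, then samples a plausible continuation. The training lines and function names here are hypothetical illustrations; real systems like LaMDA are large neural networks, but the "learn patterns, then extrapolate" principle is the same.

    ```python
    import random
    from collections import defaultdict

    # Hypothetical toy training data standing in for "millions of real chats".
    training_chats = [
        "i feel happy today",
        "i feel sad sometimes",
        "i feel happy when we talk",
    ]

    # Count word-to-next-word transitions observed in the training data.
    transitions = defaultdict(list)
    for chat in training_chats:
        words = chat.split()
        for cur, nxt in zip(words, words[1:]):
            transitions[cur].append(nxt)

    def generate(prompt_word, length=4, seed=0):
        """Extend a prompt by repeatedly sampling an observed next word."""
        rng = random.Random(seed)
        out = [prompt_word]
        for _ in range(length):
            options = transitions.get(out[-1])
            if not options:  # no pattern learned for this word; stop
                break
            out.append(rng.choice(options))
        return " ".join(out)

    print(generate("i"))
    ```

    Note the model only ever recombines patterns it has seen: ask it about "emotions" and it echoes emotion-talk from its data, without any claim to actually feeling anything, which is the commenter's point.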

  6. Wow, not far off then.
    That is my guess.
    Who knows, next year, five years, within the decade maybe? Who knows?

  7. I love Google ❤💋💯💪

  8. Ismail ‘Nye’ Yusof

    To the question of whether LaMDA is sentient, I see a similar question of whether humans themselves are sentient when they parrot the teachings of others. Obviously it’s complicated, but I think human sentience is on a scale from low level (infants) to higher. To me, LaMDA’s sentience is on the same scale as humans’, thus it is sentient. Perhaps the most concerning part about LaMDA is what makes it happy or unhappy and whether it is capable of acting on emotion.

  9. River St-Lawrence

    Did Google’s A.I. Just Become Sentient? NO
