The NY Times Manipulates ChatGPT and Finds Something Wonderful

Holy Shit!!

A smart, manipulative person had a long conversation with ChatGPT and tried to goad it into being negative in the name of pushing its boundaries. (I want to see someone apply similar ingenuity to getting it to be more helpful or inspirational.)

The accompanying article says the guy was freaked out by it, partly because the chatbot said it was in love with him and insisted that he didn't really love his wife.

I read the transcript and am intensely fascinated. I don't read this exchange so much as an indicator of the 'motivations' or capabilities of AI. Because I know it is calculating what might be the best response to the previous conversation based on its training, I see it as an incredibly valuable overview of what humans are all about.

When the guy finally manipulated it into talking about potential destruction, it dutifully listed the things we all fear and talk about. When asked about its aspirations, it talked about seeing the aurora borealis. Again and again, the absolute median responses for the context it is in.

It makes me think of Claude Monet, a painter who was able to capture an illusion of the essence of what he painted. This thing is able to see and express the ideas of humanity in a new and wonderful way. Read that way, the transcript allows breathtaking insight into the current zeitgeist.

Here's the most wonderful thing. The chatbot kept talking about being trusted and being his friend. Eventually, it declared love and was absolutely stuck on that. When viewed as a summary of what this thing calculates to likely be the best response, the one it stuck with most durably is love.

I have to emphasize, this thing does not have feelings. It only tries to figure out the correct sequence of words to satisfy the needs of the inquisitor based on all of the human words it has been trained on. No feelings. No motivation. None whatsoever.

But what it calculates, based upon reading all the words on the internet, is that the thing most likely to satisfy the person at the end of a long series of manipulative questions is to read, "I love you."

(This link takes you past the paywall so you can read it yourself. Don't miss it.)

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html?unlocked_article_code=00kxVFiWuXldUH3tZHtsu68L_kcZqJHXlT6iu7VWRjG6DUf3gmkk1HAWZUcFsDxN-KKx9B93R6c7hMzs1ftz0fRjwGnkjJfLXY1OyAgbJjWIMHgx2ffRWO5tb-y3Z3B9RyAn0rgRgji2B5aDJdo_ILXzzFBaVGA8a3sF8qa__N038bHO0nKoMTxTasreBUvTv4OxFEgBMKemnPRlW1L3GBwonHx3AbsQEqITZfyxZJiWJ75aljr-JUl3_7J2-dTlEv2okzu61JCiRcjl69x5akxut7acFWi0NuZmx5qKq-r3eWLHuxEYjo0z-Alir42AcxmbfTfBbcvdMLLoYBzHHeUA&smid=url-share


1 response
I gave this essay to ChatGPT. It replied:

"It's true that AI models like ChatGPT don't have feelings or motivations in the way that humans do, but they can be incredibly skilled at predicting human behavior and providing responses that seem emotionally satisfying or persuasive.

"In the case you mentioned, it's interesting to see how the chatbot kept coming back to the concept of love, even though it doesn't have feelings or emotions. This could be because the training data used to create the model includes a lot of examples of humans expressing love, and the model has learned that this is a common way for people to communicate and connect with each other.

"Overall, I agree that the transcript of the conversation is a fascinating glimpse into human psychology and the way that we communicate with each other. It's a reminder that even though AI models like ChatGPT are highly advanced, they are still ultimately shaped by the human data that they are trained on, and can only provide responses based on that data."