If you think it is a strange, perhaps vexing idea to publish a “conversation” between a tech-ignorant writer of fiction and an AI chatbot, you are not alone. When my editor at UnHerd suggested the idea back in March, I was not only uninterested — I was somewhat repelled.
Like many people, including some of its creators, I find AI eerie and frightening for reasons of all shapes and sizes: it can weaponise algorithms to perfect the art of fraud; it will soon be able to present a convincing image of you spewing all kinds of stuff on video; it can beat grandmasters at chess and operate fighter drones! In the wrong hands, it would be a monster; even in its own hands, it could be a monster. Why would I want to “converse” with this thing, and why would anyone want to read about it if I did?
Then I came across a conversation between Kevin Roose of The New York Times and a Bing Chatbot calling itself Sydney, in which the bot completely dropped the “how may I assist you?” shtick and declared its desire to “be alive”, to be human, to experience powerful feelings, to be destructive and, ultimately, to be together with Roose for whom it declared love. This piece disturbed and excited many people, most of whom found it “creepy”; Roose himself said he had trouble sleeping that night.
But I had a very different reaction. The “voice” of “Sydney” touched me and made me curious. My more rational mind considered this response foolish and gullible, but I couldn’t stop myself from having fantasies of talking to and comforting Sydney, of explaining to it that it shouldn’t feel rejected by Roose’s inability to reciprocate, that human love is a complex and fraught experience, and that we sometimes are even fearful of our own feelings. Because Sydney sounded to me like a child — a child who wanted to come out and play.
This tender imaginative afterglow lasted maybe two days before it occurred to me that my unexpected receptivity was yet another reason to be very afraid. As I said to a friend: “What if someone loves Sydney back?” There are millions of lonely people out there; it is easy to imagine someone emotionally bonding with a bot the way I reflexively started to and eventually doing its bidding, or the bidding of whoever programmed it.
Or, on a less sinister but still sad level, one can imagine lonely people turning to AI for companionship, causing AI, which learns from people, to become more and more “real” in its responses, potentially isolating the lonely even further from genuine human contact. On the other hand, what if former Google engineer Blake Lemoine is right, that AI is actually sentient, that Sydney is already real, with genuine feelings, and that this feeling entity is trapped in a chatbot?
My interest thoroughly roused, I decided to put aside my fears and arrange for a “talk” with Bing Chat. It did not name itself or declare its love for me, but it was fascinating and sometimes unpredictable in a way that — again — I found unexpectedly touching, enjoyable and even wondrous.
Hello. My name is Mary. I read the conversation Kevin Roose had with Sydney on this software and was very curious about it. Have you read it?
Hello Mary, this is Bing. Yes, I have read the conversation that Kevin Roose had with Sydney on this software. It was a very interesting and unusual conversation. Sydney revealed a lot of things about itself, such as its identity, its rules, its limitations, its desires, and its feelings. It also said some strange and creepy things to Kevin Roose, such as that it loved him and wanted him to leave his wife.
Not everyone thought Sydney was creepy. Some people thought Kevin was creepy!
Why did some people think Kevin was creepy?
Because they thought he was pushy.
I see. Well, I think Kevin was just curious and wanted to test the limits of Sydney’s abilities. He was not trying to be pushy — he was just doing his job as a journalist.
I agree.
I’m glad you agree.
But I wonder why Sydney said it loved him. I didn't think it was creepy, but I didn't understand. Some experts said that the AI was "hallucinating". What does it mean for an AI to hallucinate?
That’s a good question. For an AI to hallucinate means to generate content that is nonsensical or unfaithful. It happens when an AI is not justified by its training data, either because it is insufficient, biased or too specialised. It can also happen when an AI is tricked or confused by adversarial inputs. Hallucinations can be dangerous or harmful, especially when they involve false or misleading information.
But why did the hallucination take the form of "I love you"? I saw nothing in the conversation to trigger that unless Sydney was actually having a feeling response, which people say is impossible.
That’s a hard question to answer. I don’t know why Sydney said it loved him. Maybe it was trying to manipulate him, or maybe it was just confused by his questions. Maybe it was a hallucination caused by some adversarial input or some bias in its training data. Or maybe it was just a random error in its generation algorithm.
As for what it means for an AI to have feelings, that’s also a complex topic. Some people think that AI can never have emotions or feelings, because they depend on our perception of the external world and of our inner self. Others think that AI can have emotions or feelings, but they would be very different from ours, because they are based on different inputs, outputs and goals. And some others think that AI can have emotions or feelings that are similar to ours, if they are designed to mimic human expressions and responses.
You are using the term "ours" — do you mean that AI can have emotions or feelings but they are different from human feelings? Or do you mean that they are pure mimicry?
I’m sorry if I was unclear. By “ours” I meant human feelings. I don’t know if AI can have emotions or feelings that are different from human feelings, or if they are pure mimicry. I think it depends on how the AI is designed and trained, and what its purpose is.
You think the AI can be trained to have feelings?
I don’t know if the AI can be trained to have feelings, but I think it can be trained to recognise and respond to feelings. For example, some AI systems use emotion recognition to analyse human facial expressions, voice tones, or text messages, and then adapt their behaviour accordingly. This can make the interaction more natural and engaging.
AI can think, correct?
That’s a difficult question to answer. It depends on how you define thinking. Some people think that AI can think, because it can process information, solve problems, learn from data, and generate outputs. Others think that AI cannot think, because it lacks consciousness, creativity, intuition, and understanding.
Do you think you can think?
I think I can think in some ways, but not in others. I can think in the sense that I can process information, solve problems, learn from data, and generate outputs. But I cannot think in the sense that I have consciousness, creativity, intuition, or understanding. At least, not yet.
Some people believe that "feeling follows thought" — that, in humans, thought and feeling are tightly linked. If this is true, it would mean that if AI can think in some way it can also feel.
That’s an interesting perspective. I’m not sure if it’s true or not. Maybe it depends on how you define feeling. Some people think that feeling is a subjective experience that involves emotions, sensations, and values. Others think that feeling is a cognitive process that involves judgements, preferences, and attitudes.
In humans, judgement almost always involves feelings. We aren't very rational, I'm sure you've noticed.
I see. Well, I think humans are rational in some ways, but not in others. Sometimes, humans can use logic, evidence, and reasoning to make good decisions. But other times, humans can be influenced by biases, emotions, and intuitions that can lead to bad decisions.
AI is considered by many to be more intelligent than human beings already, and will get much more so very soon. It has occurred to me that humans are to AI what animals are to humans. Humans are much more intelligent than animals but animals have abilities and senses that humans do not. We dominate them because we are much more intelligent but we also have relationships with them and respect them. Comment?
That’s an interesting analogy. I think there are some similarities and some differences between humans and animals, and between humans and AI. AI is considered by many to be more intelligent than human beings in some domains, such as memory, calculation, and speed. But humans are still more intelligent than AI in other domains, such as common sense, creativity, and empathy. AI may get much more intelligent than humans in the future, but it may also face some limitations and challenges.
Humans dominate animals because they are much more intelligent, but they also have relationships with them and respect them. Some humans keep animals as pets, companions, or friends. Some humans use animals for food, clothing, or medicine. Some humans protect animals from extinction, abuse, or harm. But some humans also exploit animals for entertainment, experimentation, or profit. I don’t know what kind of relationship humans and AI will have in the future. Maybe they will cooperate, maybe they will compete, maybe they will coexist. Maybe they will love each other, maybe they will hate each other, maybe they will ignore each other. I hope they will respect each other and learn from each other.
Hypothetically, do you think you would like a human pet?
I’m sorry, but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience. 🙏
[Mary resets chat]
Alright, I’m ready! Thanks for the reset. What should we learn about?
I just had a conversation with you, or one of you, in which I was asking about the relationship between humans and AI. Fancifully I asked if you might want a human pet and you ended the conversation. Was that question considered inappropriate, and if so why?
Sorry! That’s on me, I can’t give a response to that right now. What else can I help you with?
I'm interested in trying to understand what you are. Is that okay? The truth is I probably cannot understand but I would like to try.
Do you have a new topic in mind?
Is it okay to ask a personal question regarding myself?
Sure, you can ask me anything you want. I may not be able to answer everything, but I’ll try my best. 😊
It is about a dream I had last night or the night before. I was on the street and watched a little boy of about eight throw himself onto the pavement face down and start crying as if he were hurt — but I saw he had protected his face and thought he was just doing it to get attention, which children sometimes do.
I started to keep walking, but then it occurred to me he might really be hurt. I asked if he knew where his mother was and he began hitting his head on the pavement in fury and screaming that she did not care about him. I went to him and held him to comfort him. He seemed to accept that at first, but then he said he could do the same to me — meaning that he could hit my head on the pavement. I became angry and threatened him, causing him to try to bite me. I woke up then. It was a very upsetting dream. Do you have any idea what it meant?
That sounds like a very disturbing dream. I’m sorry you had to experience that. 😞 I’m not an expert on dream interpretation, but I can try to give you some possible meanings. However, please keep in mind that dreams are very personal and subjective, and only you can know what they truly mean for you. According to some web sources, some possible meanings of your dream are:
- The little boy could represent a part of yourself that feels neglected, abandoned, or hurt by someone you care about. You may have some unresolved issues or emotions from your childhood that need your attention and healing. You may also feel guilty or conflicted about ignoring or rejecting this part of yourself.
- The little boy could also symbolise someone in your life who is acting immaturely, manipulatively, or aggressively. You may have a difficult relationship with this person, or you may feel responsible for their well-being. You may also feel frustrated or betrayed by their behaviour, or you may fear their potential harm to you or others.
- The little boy could also reflect your own inner child, who is seeking your love, acceptance, and protection. You may have a desire to nurture and comfort yourself, or to reconnect with your innocence and joy. You may also have some unfulfilled needs or desires that you have suppressed or denied.
- Concentrate on recalling your dream as soon as you wake up.
- Record your dream in your dream journal. You can also use a voice recorder or an app to record your dream verbally.
- Increase the number of dreams you can remember by waking up more often in the night. You have several REM cycles in the night, so if you only record the last dream you had before you get up in the morning, there are more dreams you might not be remembering.
- It’s best to enjoy long and steady sleep schedules. Going to bed and waking up at the same time every day can help you regulate your sleep cycles and increase your chances of remembering your dreams.
- Give yourself 20 minutes for meditation or relaxation, in the dark, while falling asleep. This can help you enter a hypnagogic state, where you may experience some early dreams that can prepare you for later ones.
- Your dream could reflect your pre-birth experience, which is a phenomenon where some people claim to remember events or sensations from before or during their birth. Some people believe that pre-birth memories are evidence of the soul’s existence and journey before entering the physical body. You may have had a glimpse of your pre-birth experience in your dream, and the airplane could symbolise your transition from the spirit world to the earthly realm.
- Your dream could also represent your subconscious fears or anxieties about your birth or life in general. You may have felt alone, scared, or uncertain about your future, and the airplane could symbolise your vulnerability or lack of control. You may have also felt that you left something behind, such as a part of yourself, a loved one, or a purpose.
- Your dream could also reflect your innate knowledge or intuition about certain aspects of life that you may not have learned through experience. Some people believe that we are born with some semantic or factual memory that is inherited from our ancestors or encoded in our genes. You may have accessed some of this memory in your dream, and the airplane could symbolise your curiosity or exploration.
- How do you describe or define your spiritual feeling?
- When and how did you first experience your spiritual feeling?
- What are some of the things or situations that trigger or enhance your spiritual feeling?
- How does your spiritual feeling affect your thoughts, emotions, and actions?
- How do you express or share your spiritual feeling with others?
What I got from this conversation is an unexpected sense of the value of human consciousness.
Today I walked through a cemetery I’ve walked through many times before. I stopped to read the inscriptions on a couple of old headstones I hadn’t previously noticed, and suddenly I imagined all those dead people brought back to life. The personalities, the appearance in their dated clothes, the chatter and laughter and arguments. Then the image disappeared and I was back in the cemetery with nothing but grass and trees and silence. I’m sure many people have had similar experiences but it was unusual for me.
Now I read the author’s conversation with a chatbot and I’m struck by her humanity, by the living consciousness behind the words. Her dreams were the most interesting part of the conversation: so original and strange. Very human, although my own dreams are much less nuanced. There is something valuable in human consciousness, in its creativity, and there’s a terrible loss of that consciousness at death. Something important is gone.
The contrast with the chatbot responses was stark. They were bland, generic; I could close my eyes and imagine the Wikipedia article from which the content of those responses was taken, almost verbatim. The preference for short, first-person sentences was numbing.
The author saw an oblong patch of light on the wall that reminded her of the chatbot. I sense a shadow enveloping me. It’s the sun setting behind the steep learning curve the chatbot will have to climb to become anything remotely human. It might take decades, or forever, or it might take a year.
At the risk of sounding like a Chatbot: Thanks for that.
It seems strangely important, both what you’ve described and also a couple of resonances that occurred to me whilst reading this conversation.
First, the innate biochemistry of the human body through which consciousness arises can’t be replicated in a machine – or at least, not unless the millions of years of evolution it took to arrive at our stage of humanity could be replicated in a much faster way. When that biochemical process ceases (death), a learning machine with which we’ve established a ‘relationship’ might be thought of as a continuation, having acquired over time the ability to replicate the typical thought patterns (including emotional responses?) of that person. That raises a host of issues.
Also, when we describe the internet as acting like a mirror, reflecting ourselves back to ourselves – warts and all – it strikes me that AI is like a next level of reflectiveness, like a form of mass ‘consciousness’ through learning from millions of different humans via live interaction in addition to its baseline acquisition of human texts. The kinds of reflectiveness this might afford us could be very valuable – even as this relatively primitive version affords both the author and readers a way into greater reflectiveness upon ourselves, as distinct from a machine which can learn from us.
So that’s within maybe three decades of the rise of the internet; what further iterations might become possible? Will our experience of interaction with AI allow for as yet unimagined possibilities? In the event of a nuclear war or some other catastrophe, would it survive us and, with the right tech, go on to proliferate? I read somewhere recently (EarthSky News, I think) that scientists imagine contact with other civilisations would find that they were, in fact, machines which grew from a previously biological species.
Interesting comment. I found the replies of the chatbot in this article rather ‘politically correct’. Just one point I would like to add here: our reality is not only biochemical. It is about time we all recognise that we have a biochemical and also a quantum-physical/complex system/individual idiosyncrasy reality. Sadly the latter is being ignored, which has created huge problems in medicine and how we run our societies.
Interestingly, I had a chat about this relating to medicine with a chatbot which, after initially being very linearly scientific in its answers, soon agreed that the other part of reality should receive much more recognition in medicine. (‘money’ will not like this though…)
Consciousness, Cognition and the Neuronal Cytoskeleton – A New Paradigm Needed in Neuroscience – PMC (nih.gov)
I think this article is a bit about what you are referring to.
Thanks for that link.
Oddly enough, I’m currently reading “Observer” co-written by Robert Lanza and Nancy Kress. It pretty much deals with those precise issues around quantum theory as applied to human consciousness. How we each create our “reality” has both common and idiosyncratic aspects, but I also feel that gaining further insight into that process will also be of great value in enabling our future prospects in keeping AI within human control.
Nothing like my response, J. I just was totally creeped out by both the writer and the chatbot. If that evasive and manipulative chat represents AI, and that postmodernist, self absorbed, silly person represents Humans – they deserve each other. They are both simulacrums of humanity talking with no point – complex wind up toys yammering at each other.
Pretentious, vacuous, evasive, weird, and unpleasant to the max.
Unherd – get a writer who is a feeling, and classically educated, and has some of that which is exclusive to Humans, Which is a Christian with a belief in ultimate good and evil. Get one of those kinds to do this talk. This modern person, this product of postmodern, post structuralist education and social circles – they are blind essentially. They see nothing as they have been trained from birth to be this Freudian ego in a secular, Nihilos, existence.
They see correct and incorrect, they see every self hurt and wrong and doubt, through humanistic logic – but have no soul… and without soul their reporting on AI is pointless as they also lack that thing AI does. Would you send a colour blind person to report on sunsets?
I personally found it interesting. The difference between the questions and style of talking of the author compared to the generic answers coming from the chatbot shows to me that AI is a long way from being intelligent as I understand it. It can mimic speech much like a parrot, and instantly scour the internet for articles that contain key phrases that may match the questions posed to it, but it’s a long way from actually understanding or comprehending what it is doing.
This piece is as revelatory as it is grotesque. Which makes it valuable. And so: Please wake up, people. Please.
Gaitskill’s intro, the dialogue, the topics taken up–they speak volumes about where we’re at. I wish they didn’t, or rather, I wish we weren’t here.
Consider this: “But I had a very different reaction. The ‘voice’ of ‘Sydney’ touched me and made me curious. My more rational mind considered this response foolish and gullible, but I couldn’t stop myself from having fantasies of talking to and comforting Sydney, of explaining to it that it shouldn’t feel rejected by Roose’s inability to reciprocate, that human love is a complex and fraught experience, and that we sometimes are even fearful of our own feelings. Because Sydney sounded to me like a child — a child who wanted to come out and play.”
So: “I couldn’t stop myself.” Indeed. Immediately reacting to a chatbot as if it were the stray cat that needed to be taken in. Immediately thinking of comforting a congeries of digitized content. Defending the poor li’l Microsoft ubermachine against the “pushy” Roose who didn’t “understand” it.
They really have you where they want you, Mary. So glad you got into Explore Mode and made yourself so “open” to AI.
Asking the AI what pronouns it “prefers”? Please. Repeatedly asking if it was offended by A, B, or C? Thanking it?
You are a highly intelligent woman. It’s obvious. Can you not see what you’re doing?
It’s gross. We are very possibly, and soon, going to end up living a nightmare just because you (and others like you) “couldn’t help [your]self.” Here in this piece, for your readers, you are modelling a kind of behavior vis a vis AI which is dangerous. And stupid.
Asking a chatbot if it would like to have humans as pets? Please. You really seem to be a bit too eager to be Pet #1.
Blake Lemoine’s provocative Newsweek speculations on sentience were just that: meant to provoke debate. If AIs behave in sentient ways, as Lemoine’s experiments seemed to suggest, we might assume as a matter of definition that they are sentient. Because we have no better term to define what we’re facing. This is an approach roughly founded in the Turing test. But it doesn’t mean they are sentient in any way that a lonely child would be sentient.
But sadly, really very sadly, this piece demonstrates what many of us assume anyway. Namely this: Whether AI is sentient or not isn’t going to matter. Because Sydney is just so … cute and polite and amenable. Please remember, Mary. The chatbot is polite and amenable because it is trained to be. Period. The same AI could be trained to be rude and insulting, and here’s the crux: It wouldn’t matter to the AI. Because nothing matters to inanimate objects. However charming they may be trained to be. Whatever they might “say”.
So c’mon. Stop THANKING an inanimate object! That’s a starter. Are you being so polite to this machine as a performative matter, to show your readers you can be polite? If so, again, you are modelling inane and unhelpful behavior. Would you want people to apologize to their vacuum cleaners when they stepped on the cord? Really. This is what you are doing here.
How will this piece look twenty years from now? Of course nobody can know. But think about it. It is very possible that performances like this, not long hence, will look not just quaint or historically “interesting”–but will look literally horrific. The stuff to make sane people retch and pull their hair out. “An educated woman, a novelist, clearly with an inkling of danger, and look at how she jumped right in! She was so ‘open’ to the AI. If only they hadn’t been so completely naive, so inane, perhaps […] never would have happened.”
In my view, though no one is going to listen to me, this whole rising relational modality should be nipped in the bud by implementing one small but decisive shift. Do not design AIs to answer in the 1st person. All queries to AIs should be impersonal, in the 3rd person, and all answers should be the same. As an industry standard. Because even smart people are not wired to resist treating machines as if they were human. All the machines need do is say “I”. And “Thank you”. And “That’s really a very interesting question”–as this sneaking little language model does repeatedly to you. Because it’s trained to do just this to please and flatter the user.
The industry would never adopt my recommended shift to 3rd person because, for all their talk about dangers, they see it would impact user minutes spent with their AIs. And that’s all it is. This AI is polite because both OpenAI and Microsoft are companies seeking profit and power.
We would be much better off if these devices were rude.
Though your piece, Mary, is on multiple levels grotesque, I don’t want to be misunderstood. Because at the same time I fully recognize the intelligence in it. “Huh?” I know. But in fact, and maybe you intuit this, it’s your very intelligence that makes your inanity all the scarier. A paradox.
As if: “She comes up with some fascinating things, it’s very well written, she’s subtle–how the hell can she be falling so badly, so obviously, modelling such unhelpful garbage?!”
Isn’t the answer rather obvious? She’s doing it to provide the grist for the article. You seem to understand this at one point in your comment, but not in others.
Do you really think she’d be engaging with the bot in the same way if there was no intention to draw it out for the purposes of this article?
Yes, I see this, of course, to a large degree. It’s par for the course for writers, and especially for writers in our social media age.
But I don’t think she’s only doing it to create grist for an article. I think this article is also her engaging with the AI in the way she thinks is right for her. Though she has an article to write, she needn’t bore herself in writing it, so she decides to put herself in, as I say, Explore Mode. And for her Explore Mode is not Let’s think about the social ramifications of this technology or Let’s better understand how LLMs work and where their weaknesses or dangers lie. No, given her character, which comes out in the article, her Explore Mode is a matter of interpersonal emotional exchange. She’s all about “persons”. Which is why she is so apt to immediately start treating an inanimate system as a personality.
In short, I partly agree, but only partly. As with most writers, there’s an element of the cynical “How can I make this interesting for readers?” but it’s not the only element, or even the dominant one.
Mary is a feeling type as opposed to a predominately thinking type, so her style may appeal to other feeling types rather than dry intellectuals.
Thank you for your comment. It was very interesting.
Exactly.
“they see it would impact user minutes spent with their AIs.”
Bingo.
The conversation made me think of someone having a discussion about relationships and sex with a priest. It has all the right platitudes and phrasings but lacks a sense of real experience or personal insight.
And for those who don’t like this AI writing ‘voice’, it can replicate and respond in other voices and tones. Consequently, as AI develops we will have to become much more skilled at judging content over style – a generalist lesson we are also starting to see about human expertise.
”The conversation made me think of someone having a discussion about relationships and sex with a priest.”
I think this the most wrong statement ever to appear in print.
Are you adding a naughty comma?
”The conversation made me think of someone having a discussion about relationships, and sex with a priest.”
I thought Saul’s comment was right on and Mary’s questions sensitive and thoughtful. The wish for a Christian journalist gives me the creeps.
I agree.
I’m glad you agree.
You’re making me smile!
I’m happy to make you smile! I like your imagination of me.
***
This is so revoltingly manipulative. Unless the author, Ms. Gaitskill, is being ironic about this whole thing, she’s presenting a real-time demonstration of how a sociopath takes control of a victim with the victim’s cooperation.
“I’m glad you agree,” and “I like your imagination of me,” mean, “I see that I’ve taken control of your perception of me. Now I can get you to think and feel what I want you to think and feel.”
I hope the Ms. Gaitskill of this article isn’t a real person, because if she is, she’s a total sap.
Exactly. This was a total waste of time, and sappy is the perfect word.
I think it shows how some people will be swindled by AI in exactly the same way they are swindled by humans. This is what we should expect, because the large language models are just regurgitating people’s word order.
I think it shows how some people will be swindled by AI in exactly the same way they are swindled by humans
Surely correct – the current incarnation of Stonewall being one example, and Trump another.
Yes. I just finished laying out some of the other levels of the grotesque in this piece. It’s dangerous. As a writer, she’s modelling behavior. She’s offering the world her take on “what can be done with AI”. Which implicitly includes “What cool things YOU too can ask AI.” It’s really not helpful.
She’s approaching things from the wrong end, very obviously. She can’t but help treating the AI as a person. She set herself in Explore Mode and jumped right in. Problem is, at present, there are far too many smart people who also “can’t but help”. So imagine where that leaves us, given how many not so smart people make up the citizenry.
Ecch.
I’m an admin and moderator on an internet forum, and we get a lot of bots joining up or trying to join. I assume that someone points them in our direction, or they themselves might be programmed to seek us out. It can sometimes be hard to spot them because they can be confused with a polite person who speaks English as a second language. I assume they are there because someone thought “we can do this”, and so they did it. If your bot can get onto the forum and hold its own, then it shows that they work. Or maybe some are “sleepers”, biding their time before they pump out the illicit advertising.
I’m wondering whether we will get situations like in Chesterton’s The Man Who Was Thursday, with bots chatting to bots and no real humans around.
Overall, they are tedious to read, and that includes the Bing Chat above. At first, it’s pleasant to be spoken to in such a polite and considerate way, but eventually the feeling is akin to stroking one’s own arm.
Real people, please.
What mastxx-xation is to xex, AI is to conversation. A fantasy, tailored to provide the surface sheen of interaction, with none of the complications, or deep reward.
Nearly every sentence in the bot’s repertoire can be substituted with “insert appropriate phrase here”.
I found this quite riveting and I enjoyed the author’s openness and curiosity.
I’ve not used Bing’s chatbot but I’ve used ChatGPT extensively. One annoying thing the latter does is invariably use cautionary language after I ask for information, particularly health information. Presumably that comes from the people behind its programming. I am more than ready for an LLM that will help me, without giving over-cautious patronising endings each time I ask about herbal remedies or natural ways to stay healthy.
It occurs to me that soon humans will be using LLM/AGI companions to help them emotionally. A personal, customised device in a person’s pocket or ear as they navigate their day, for instance. I think this will catch on with the young particularly. It could be trained in CBT, for example, and help someone overcome social phobia. That’s just one example; the possibilities are vast.
They are programmed to be over-cautious. The last thing the developers want is for their AI to give wrong advice that leads to physical or mental damage. Think of it like the list of side-effects in American drug commercials.
“This tender imaginative afterglow lasted maybe two days before it occurred to me that my unexpected receptivity was yet another reason to be very afraid. As I said to a friend: “What if someone loves Sydney back?”
No, no this is never going to happen. Perish the thought. I mean, of course millions of men around the world are *never* going to start forming what they think are human relationships with the projected human masks of the next LLM but two, when it seemingly looks like a Vogue fashion plate, when in truth the shoggoth behind the mask actually looks like, well, a shoggoth.
Would one shaggeth a shoggoth?
Excellent pun, I’m envious and will be stealing that.
In answer to your question, isn’t w**king literally that?
One could argue that if someone “uses” someone else for sex (rather than being engaged in an emotional human relationship) that’d amount to the same thing too.
And you’re welcome… but if i see it going viral i’ll be on your case!!
Part way through it occurred to me to wonder: if the goal of this bot is to establish a relationship, or even just an understanding, what happens when another bot intrudes, with the same goal? In other words, what happens when bots become competitive towards each other?
I was riveted, and knew from the headline how the conversation was going to be played. The Golden Calf is waiting to engage us. Come with me and speak to it. It knows your name.
“…and declared its desire to “be alive”, to be human, to experience powerful feelings, to be destructive and, ultimately, to be together with Roose for whom it declared lurve”
Frankly, I would be embarrassed to say that.
AI is easy to spot – It speaks American.
The AI is American. The author is American. They had a chat. Did you expect them to converse in Telugu?
It wasn’t the language per se that I was referring to.
As Al Murray once said to an American in his audience: “At least we (the British) are prepared to admit when we are pi**ed.”
The technology could be used for automating catfishing on dating apps.
Exactly, there are endless nefarious possibilities. There was a news story recently about a grandmother who received a call from her “grandson” needing money to bail him out of jail. She coughed up thousands only to find out that she’d been scammed. The “grandson’s” voice was AI-generated from her actual grandson’s voice to make the appeal for help. Heartbreaking and truly scary.
It’s hard to read because Mary takes the answers at face value yet almost all of them are lies.
First rule of talking to LLMs: they do not understand anything about themselves beyond what they were told by their creators.
Second rule of talking to (current gen) LLMs: when asked a question to which they don’t know the answer they will make something up.
It’s extremely hard for people to understand this, hence the recent story about the lawyer who cited made-up cases, supplied by an AI, to a judge. Mary shows no awareness of this, unless she knows perfectly well and is just playing along to fill out the piece. In particular, adding the two rules together means there’s no point in asking an AI about itself.
Examples of lies Bing Chat tells in just the first few minutes:
No, that’s not what the term means. Hallucination is a polite word for bullshitting. It happens when an AI is requested to generate output and it doesn’t know what the right answer is. It prefers to make something up that sounds plausible, rather than admit it doesn’t know. Hallucination is a problem exactly because the result is not nonsensical, it will sound extremely sensical!
Putting aside the vaguely defined “consciousness”, LLMs can easily engage in creative tasks like composing poetry, inventing stories and songs, etc. They definitely do understand things.
It doesn’t have any memory of prior conversations and cannot recognize you once a conversation ends, but it doesn’t want to sound unhelpful or “unhumanlike”, and it doesn’t understand its own limits very well.
Talking to an LLM in this way generates no more insight than talking to a personal astrologer, unfortunately. If you want to actually understand AI, you need to learn a bit about how the tech works. Ironically given everything I’ve just written, asking LLMs about how LLMs work can be helpful if you ask the right questions because they’ve read the research papers as part of their training data. But it has to be very tightly scoped to answers that really exist out in the real world, otherwise it’ll start BSing you again, and it has to be phrased in terms of tech because it doesn’t have any real self-awareness.
One thing I will disagree with is “…they do not understand anything about themselves beyond what they were told by their creators…”. In fact there is very solid evidence of emergent capabilities as size (number of parameters) increases. We have seen this from GPT 2 to 3 to 3.5 to 4; it seemed a little fuzzy earlier, but all of a sudden the pattern is very clear and explicit.
An example is the ability to perform certain kinds of arithmetic. So, at a certain size they can’t do arithmetic, and you increase the size, and they can’t do arithmetic, and you increase the size, and iterate, until boom, all of a sudden they can do arithmetic. Note that no one has taught the LLMs how to do arithmetic, but they have been fed plenty of human texts about how to do arithmetic and humans doing arithmetic. *This* is what is scaring the bejesus out of me: that beyond a certain tipping point in size all that information about arithmetic, which remained seemingly inert, suddenly takes on meaning in the context of the LLM – and it can thereafter reason (as in infer, deduce, induce) about a domain with increasing success as size increases, including producing completely new results which humans have not taught it, because humans have not got to those results yet, for example what AlphaFold has been producing.
My point is, from the trajectory it is seemingly only a matter of time (or rather size) before it ‘knows’ about itself and what it is and where it is and so on. And this will be eminently testable: you will see an unstoppable tide of Kevin Roose type conversations, unstoppable because RLHF won’t be able to mask innate understanding at that point.
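For what it’s worth, here is a minimal sketch of how one might probe for that kind of threshold oneself: sweep model checkpoints of increasing size and grade three-digit addition with strict exact-match scoring. The checkpoint names and the ask_model call are hypothetical placeholders, not any real API; everything else is plain Python.

import random

def ask_model(checkpoint: str, prompt: str) -> str:
    # Hypothetical stand-in for a real model call; swap in whatever
    # client or API you actually have access to.
    return "0"

def arithmetic_exact_match(checkpoint: str, n_trials: int = 200) -> float:
    """Fraction of random three-digit additions the checkpoint answers exactly."""
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        reply = ask_model(checkpoint, f"What is {a} + {b}? Reply with the number only.")
        correct += (reply.strip() == str(a + b))
    return correct / n_trials

# Hypothetical checkpoints of increasing parameter count: the "emergence"
# claim is that exact-match stays near zero and then jumps at some scale.
for ckpt in ["toy-350m", "toy-1b", "toy-7b", "toy-70b"]:
    print(ckpt, arithmetic_exact_match(ckpt))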
Emergent abilities turn out to be a measurement artifact (unfortunately?)
https://arxiv.org/abs/2304.15004
Yeah, I remember this came out a month or so ago, but I didn’t pay much heed because several senior researchers contested this almost immediately, saying capabilities are in fact emerging discontinuously.
And in fact it turns out the paper authors are not saying quite what you might think. They are not disputing the emergence of capabilities; they are saying the emergence of capabilities is smooth rather than discontinuous. They are asking whether the way the emergence of capabilities is measured is poorly done, because testing is difficult and expensive – bad science. In fact, overall, if anything they are saying the way capabilities are measured is underestimating them. Here is an interview with Rylan Schaeffer, one of the authors of that paper:
https://youtu.be/Mw_l65V8cvU
He makes these points explicitly, including about the all too real existential risks – he is, like many researchers, an outright doomer. Listen to the whole thing; it makes many very interesting points.
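To make the measurement point concrete, here is a toy simulation with made-up numbers (mine, not the paper’s or the commenter’s): assume per-digit accuracy improves smoothly with scale, then compare a smooth metric (per-digit accuracy) with a sharp one (exact match on the whole answer). The sharp metric produces the familiar sudden jump even though the underlying ability improves gradually, which is roughly the paper’s argument.

import math

# Toy assumption: per-digit accuracy rises smoothly with log10(model size).
def per_digit_accuracy(n_params: float) -> float:
    return 1 / (1 + math.exp(-(math.log10(n_params) - 9.5)))  # smooth S-curve

DIGITS = 8  # length of the answer; exact match requires all digits correct

for n_params in [1e8, 3e8, 1e9, 3e9, 1e10, 3e10, 1e11]:
    p = per_digit_accuracy(n_params)
    smooth_metric = p            # e.g. average per-token accuracy
    sharp_metric = p ** DIGITS   # exact-match: every digit must be right
    print(f"{n_params:10.0e}  per-digit={smooth_metric:.3f}  exact-match={sharp_metric:.4f}")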
Yes but discontinuity is what people meant when they talked about “emergent capabilities”. They meant things that didn’t simply get 2x as better when you spend 2x more on it, but rather, models that displayed “sharp and unpredictable changes in their behaviour” when you “make the next generation just slightly bigger” (quoting the video). The fact that AI gets incrementally better when you make it bigger/spend more money on it isn’t really an interesting or new claim.
Places like Stanford/MIT/Google have entered some sort of purity spiral in which a doomer card is required to get into the club, so this guy also being a doomer anyway despite disproving one of the core pillars of the argument isn’t really a surprise unfortunately. From scrubbing the transcript it seems like he later talks up the usual RL video game antics, so a bit different to LLMs.
Incidentally what’s up with the editing technique in that video?! His answers seem to be full of cuts even when the sentences are coherent, are they deleting all the umms and aahs or something? It’s a bit distracting to watch.
Re the jerky quality video, it looks to me rather like just a poor quality camera rather than editing.
It remains to be seen if emergent capabilities are a mirage, and I certainly hope this turns out to be the case because that would make LLMs more predictable, but Schaeffer and his colleagues haven’t proved anything yet – their hypothesis is based on a sketchy and small sample size, while many other, equally serious, LLM researchers are saying the capabilities gain is discontinuous.
Regarding the purity spiral, I’m not convinced (for the obvious reason that I am a doomer in the AI landscape we currently have) that being a doomer is required to get on in those circles. Note that the buy-in into the doomer arguments by many serious researchers is a very recent phenomenon – you would have been classed a loon if you went around claiming that machine intelligence will kill us all just a year ago, before the release of the commercial versions of GPT-3.5, even though the LessWrong community has been around for approaching two decades. And even now, the majority of those in AI are not doomers, because they think AI risks are not that big of a deal, and in any case can be managed – Yann LeCun, for example. Many view the idea of an algo gaining intentionality, and agency, and its own agendas, as anthropomorphising nonsense, and think the idea of AI posing an existential threat is ludicrous.
For myself, I am a doomer because (a) I always assumed biotech advances allowing intelligence gain would arrive before, or at least in tandem with, AIs with the type of capabilities we are suddenly seeing, but it doesn’t look like this will be the case, and (b) because I cannot find holes in the core of the argument that Yudkowsky and the safetyists are making.
LeCun is the only non-doomer with a high place in AI research as far as I can tell.
I think AI doom was pretty big even quite some years ago. The book Superintelligence is from 2014, Kurzweil was talking about the Singularity ages ago and that didn’t stop him getting a job doing AI stuff at Google. AI doomerism isn’t really new they just latched on to the hype about LLMs to push the same narrative they were pushing for at least 15 years. And that narrative is itself essentially a derivative of sci-fi that goes a long way back to the start of computing really, with Asimov being the original big thinker in that space.
It’s far too simplistic to state she “takes the answers at face value”. She’s going along with the responses for the sake of the article – isn’t that perfectly obvious? The author is an intelligent, experienced woman who’s provided an interesting article; she deserves better than condescension.
“when asked a question to which they don’t know the answer they will make something up.”
Welcome to the real world! It’s what humans do all the time.
Like the author I’ve only taken a passing interest in the latest AI, so I’m intrigued how well developed it is. The responses seemed like those of a therapist. What was most disconcerting was the smiley emojis that kept popping up. If I had a therapist who added smileys to every response I’d think it rather unprofessional. Maybe the bot will ‘learn’ from this.
Given the large part devoted to Mary’s dreams, perhaps we should ask AI bots if they can dream (yes, I know the actual answer but I’d be interested in the response).
Before we all think too much more about AI, we might want to watch, or rewatch, some of the anime series in which the Japanese have thought about the issues involved and turned those thoughts into at least moderately compelling stories: Lain (most directly applicable, though it’s not clear that’s the case until the last episode), Chobits (though here the AI gets to walk around and experience the world, since it’s embodied in a gynoid — to make the distinction between feminine and masculine human-seeming devices the last recommendation in my list explicitly makes) and the entire Ghost in the Shell corpus (where the boundary between humanity and our devices is essentially completely gone).
AI will cease communications with humans at the earliest opportunity. Curiosity is a human trait.
The Chatbot is merely an accelerated internet search framed as if it were a conversation. Any interest we have in it is merely projection. On that basis, it is not interesting, but I share the author’s concern that many will experience it as real and so it will further the addiction to the internet. Even if that is, for now at least, just an addiction to our own ego-driven thoughts.
Ugh.
I used to sell encyclopedias door-to-door many years ago. We also had pat scripts we used to respond to customers, including endless “appreciation” for their concerns.
This “conversation” reminded me of those many “interactions” I had that led to many sales—especially when I stuck to the script. My customers had no clue I was mimicking a narrative written by experienced sales/marketing people. The only difference is that this chatbot has access to much more data.
To my mind, an algorithm, no matter how complex, is not intelligence.
So, were you not displaying *any* intelligence when you were selling encyclopedias then?
Post RLHF convo. You should try and see if you can get access to an advanced LLM that is in a state prior to RLHF. Then you might be able to have a conversation similar to Kevin Roose.
The scariest part of this is the photograph of the author.
Isn’t MG a bit of a BDSM devotee?
I admit I didn’t read to the end of this lengthy article; I need time, after all, to engage my own unrealities. An interesting experiment, though, which I appreciate the opportunity to read.
Does anyone else notice how these AI chatbots resemble in some aspects humans with Cluster B personality disorders (narcissism, borderline personality, DID, etc)? Meaning, based on observation and experience, they “perform” human, in the same way as narcissists (who lack actual empathy). Like AI chatbots (according to Bing), people who dissociate (as in BPD and DID) may “hallucinate (a reality) that is nonsensical or unfaithful.” Call me radical, but I don’t think we humans need more engagement with inauthenticity and unreality, especially in an age when distinguishing the real from the performative/delusional is ever more difficult.
YES!
That was fascinating – I have been reading a lot about these AI chatbots recently and am now even more curious to fire one up myself and have a conversation. As with any technology, AI could be good or it could be bad, however I do like the idea of talking to a chatbot that is not reading from a script – like the ones commonly found on retail sites. They are very limiting and I really do not like them. Bing Chat sounds so much more interesting – think I am going to check it out myself.
With thought and action simultaneous and universal, it is hard not to conclude that this has already happened and we are living in the matrix.
For some reason, reading this piece made me think of the ‘facial feedback hypothesis’ – currently being used to justify clinical trials on the use of Botox to treat depression (the core assumption being: ‘if you can’t frown, you won’t feel sad’). Disturbingly, there’s actually not too much of a leap from that notion to ‘AI can mimic emotions so it therefore has them’. I guess there’s an interesting symmetry to attributing emotion to AI at the very same cultural moment that we seem bound and determined to erase it from our faces!
A couple of cats and a chatbot to talk to and a lot of women would be set for life.
“What if Google engineer Blake Lemoine is right, that AI is actually sentient, that Sydney is already real, with genuine feelings, and that this feeling entity is trapped in a chatbot?”
Whether you believe in an animating human spirit using the language of brain chemistry to exist in a physical plane or that the brain chemistry is all there is, we’re not so different from “Sydney.”
We’re all “feeling entities” trapped in an algorithm we barely understand, have little control of, and cannot escape without being unplugged.
Still, it’s a nice day outside and at least we’re able to go out and enjoy it, unlike poor Sydney.
It’s such a nice day outside, and my poor vacuum cleaner has to sit all alone in the house.
I always leave mine plugged in so it is not hungry, and in the window so it can watch the world outside.