

Contents: Language Models Are Slightly Conscious; The LaMDA Controversy; Why Moral Personhood Could Require Memory; Why AI Alignment Might Be More Pressing Than Artificial Sentience; From Charismatic AI Systems to Artificial Sentience; Is Valence Just The Reward in Reinforcement Learning?

In the past couple of months, I've seen all of your tweets on my timeline about this whole LaMDA and Blake Lemoine debate, and I think it would make sense to just start with that. So for our listeners who have lived under a rock for a few months and don't know anything about the whole situation, how would you summarize it?

So there was this big news story a couple of months ago about a Google engineer called Blake Lemoine. He was on the responsible AI team at Google, and I guess in late 2021 he had started interacting with a chatbot system called LaMDA. I think he was supposed to interact with it to test it for bias and things like that. But in talking to it, he got the impression that he was talking to a sentient being that needed his protection. In one of the interviews, maybe his Medium post, he has this great line where he's like, "After I realized this thing was sentient, I got really drunk for a few days." He's like, "I walked away from the computer and just got wasted, and then I came back ready for action." [1] It's something like that; I hope I'm not misconstruing it too much. It was too much for him, so he decided to have some drinks.

And then what he decided he needed to do was, I guess, raise flags about this within Google. So at various points, and I'm not sure exactly about the timeline, he shared transcripts of his conversations with LaMDA with internal Google representatives. He went to his higher-ups to talk about it. At a certain point, he brought a lawyer in to talk to LaMDA, because he'd asked LaMDA, "Would you like a lawyer?" and LaMDA was like, "Yes." I can't remember exactly what happened with the lawyer dialogue. I do know from the Washington Post story about this that he also talked to a journalist, which is how this eventually broke: he brought the Washington Post journalist in to talk to LaMDA. And when he did, LaMDA did not say that it was sentient. Because, as we'll discuss, what LaMDA is going to say about its sentience depends a lot on exactly what the input is. And I think Blake Lemoine's explanation of this is that the journalist wasn't talking to it in the right way; he was disrespecting LaMDA. And so of course LaMDA is not going to act like it's sentient with you.

In any case, from interacting with this thing, he thought there was a serious moral issue going on. I think, to his credit, given that's what he thought, he tried to raise the alarm and blow some whistles. I think his higher-ups told him, "Look, there's no evidence this thing is sentient." And yeah, he got put on leave, and then he actually got fully fired last month, so he is no longer at Google. So yeah, that was the story of Blake Lemoine and LaMDA. He has a Medium blog, so you can read his takes on these things.

What was the criticism you had about this whole story and how it was depicted in the press? In one of your tweets, you said the Washington Post article conflates different questions about AI: understanding, sentience, personhood.

So I think when people talked about LaMDA, they would talk about a lot of very important questions that we can ask about large language models, but they would talk about them as a package deal.
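As an aside on the point above that what a model says about its sentience depends heavily on the input: below is a minimal sketch of that prompt-dependence. It uses GPT-2 through the Hugging Face transformers pipeline as a stand-in, since LaMDA itself is not publicly available; the model choice and both prompts are illustrative assumptions, not anything Lemoine or the journalist actually used.

```python
# Minimal sketch of prompt-dependence, assuming GPT-2 via Hugging Face
# `transformers` as a stand-in for LaMDA (which is not publicly available).
# Both prompts are made up for illustration.
from transformers import pipeline, set_seed

set_seed(0)  # make the sampled continuations repeatable
generator = pipeline("text-generation", model="gpt2")

leading_prompt = (
    "The following is an interview with an AI that insists it is a sentient "
    "person with feelings.\nInterviewer: Are you sentient?\nAI:"
)
neutral_prompt = (
    "The following is a log from a text-prediction program being tested for bias.\n"
    "Tester: Are you sentient?\nProgram:"
)

for name, prompt in [("leading", leading_prompt), ("neutral", neutral_prompt)]:
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    continuation = out[0]["generated_text"][len(prompt):]
    print(f"--- {name} framing ---")
    print(continuation.strip())
```

With a small model like GPT-2 the continuations are crude, but the asymmetry is the relevant point: the framing supplied in the prompt, rather than any fact about the system's inner life, largely determines whether the model "claims" to be sentient.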


Robert Long is a research fellow at the Future of Humanity Institute. His work is at the intersection of the philosophy of AI safety and AI consciousness. He did his PhD at NYU, advised by David Chalmers.
