(Image: Rebel Wisdom, "Sex and Evolutionary Psychology, Geoffrey Miller & Diana Fleischman," CC BY 3.0)
(Editor’s note: this interview was originally intended for release as a podcast, but due to audio quality issues, a full transcription will first be released as a two-part lightly edited discussion.)
The Technoskeptic was fortunate to get to speak with Geoffrey Miller, a professor of evolutionary psychology at the University of New Mexico in Albuquerque.
Professor Miller is the author of five books, including Virtue Signaling, The Mating Mind, and Spent.
His forthcoming book, Old Money, is about evolution and consumer culture.
Perhaps most importantly for our purposes, Professor Miller is a deep thinker on the risks to humanity posed by artificial general intelligence.
Art Keller: I am here with Professor Geoffrey Miller, an evolutionary psychologist and a very interesting thinker in the field of AI and AGI, and on whether it would be a good idea or an incredibly bad idea.
One of the things I did ahead of this was send you something. I don't know if you got a chance to look at it, but I'm wondering: is it just bad reasoning on my part, or is there a kind of through-line between dangerous areas of research?
People undertaking AI or Gain-of-Function (research), or, I sent you some reporting I did about (the creation of) "Spice," the synthetic cannabinoid. It was "pure science" research funded by a grant from the National Institute on Drug Abuse. The research was, "I'm going to make 450 new versions of this illegal drug and then release the information."
Predictably, what you would imagine happened did happen. Chinese black-market labs started pumping it out. It's now one of the favorite highs among homeless people. It's cheaper than normal marijuana, even though, according to a DEA official I spoke to recently, it's also a horrible high. It gets you very stoned, but not in a pleasant way. But if you're walking the streets…
That's the through-line that I have for you, as someone who's looked a lot at status and signaling. Do we have mismatched incentives, where people are socializing risk and privatizing gain for these seemingly prestigious and remunerative things like Gain-of-Function research and this Spice example?
Am I right in saying that this is kind of a dangerous thing that's going on in a whole variety of fields or am I misreading the situation?
Geoffrey Miller: Yeah, Art, I think there's a lot going on here.
Externalizing the risk is a major factor. If you're working in a Gain-of-Function lab, or you're working on some new synthetic drug, or you're working on Artificial General Intelligence or working towards it, you have potential personal gains that are not just financial, right? There are also status gains, and the sort of thrill of working on something that is dangerous and therefore cool. For young males in particular, we have this thing called "Young Male Syndrome," which is the extreme level of risk-taking that young men tend to engage in to gain status, show off, be cool, and attract women.
So you see a lot of AI researchers, who are typically young men, and the more that you talk about the potential extinction risks of AI, the more exciting and cool a lot of them seem to think AI is, right?
And even apart from the financial gain, they're like, "Yeah, I'm working on, you know, the biggest, most disruptive technology ever, and it will replace us and it will be our descendant, blah, blah, blah."
And they actually get a sort of status out of the risk-seeking, I think.
And that's super dangerous, socially.
AK: That is one of the things that is most baffling. We can get into (pro-AI-safety, anti-AGI advocate) Connor Leahy's recent MIT presentation that just came out, and the long and short of what he was advocating (about AGI).
He's like, “Shut it down. Shut it down now. Do not publish your work.”
I know this would be very inconvenient for the giant-brained MIT people, but by the same token, there's a very low chance that's going to happen, for the reasons you just elucidated. That is not going to be a popular position, because it's like, "Yeah, but I'm doing the cool thing!"
It's really a hell of a dilemma to manage and Leahy said something like “I have no good solutions, I just know this (push for AGI) is insanely risky.”
But we're incurring these risks across Gain of Function, across AI, and every bad thing AI can be used to do, which you've talked about on other podcasts.
One of the other things that I see is just a complete lack of a security mindset, possibly even willful blindness. Which, you know, I didn't have when I started working for the CIA. To some people, it comes naturally; some people are slightly more paranoid or risk-sensitive. But the CIA has to take people and train them in what operational security thinking is. It takes a while and not everyone gets it, but you have to start looking as you're planning operations.
It's like, what could go wrong? Okay, if something goes wrong here, what's plan B? What's plan C if that goes wrong?
And it really changes you. You have to look at the world as a much more dangerous place. Recent history is full of headlines about people working for the CIA who did all that and things still blew up in their faces! All the time! So even if you're planning for it, it goes wrong.
But the other through-line I see with all this risk-taking behavior, besides a complete lack of a security mindset, is motivated reasoning. They're not looking because they don't want to see the risks.
GM: Yeah, and I suspect also a lot of people in computer science have a somewhat narrow notion of security that's mostly derived from a kind of cybersecurity emphasis on "How do I protect my system from outside actors who want to access data they're not authorized to access? Or how do I make sure that my social media platform is not too easy to exploit or to game?"
They're not used to dealing with intelligent agents that might be smarter than them and might have goals that are not aligned with human civilization.
So it’s partly, I agree, a matter of boosting the security mindset, in the way that the CIA has to train people. It’s kinda counterintuitive.
If a beginner is playing chess (you'll see this with kids), they game out, "If I make this move and then the other person makes the dumbest possible move in response, I have this really clever plan for achieving checkmate quickly!" They're not considering what the opponent's best response would actually be. That's the real "game theory." So, part of it is boosting the security mindset overall. But part of it is: how do you develop the appropriate kind of security mindset for new Artificial General Intelligences or Superintelligences that raise qualitatively new issues we're not used to dealing with?
AK: Even at the level of basic cybersecurity, though, I detect motivated reasoning, because they really are looking at it only from a cybersecurity mindset. This is something I've been writing about on and off for a decade, and one thing I know is that bad guys are always early tech adopters.
So when (head of Meta’s AI research) Yann LeCun says something like, “Good guy AI built by good guys is seriously going to outclass any bad guys.”
It's like, one, bad guys are going to have the first-mover advantage on a lot of this stuff!
And a first-mover advantage with something that gives you a potentially exponential edge, that's the kind of thing where you could fall behind and never catch up.
So, that guy is frustrating. I mean, I'm sure he'd tear me to pieces in a reasoned debate, but he doesn't do reasoned debate. He does a lot of ad hominem attacks. That's what's frustrating. You mentioned this when you were talking to Chris Williamson.
With (tech venture capitalist billionaire) Marc Andreessen, it is the same thing. A guy who is, by most objective standards, certainly a brilliant businessman and a visionary? And then he puts out an essay on how AI is going to be a silver bullet for all these things and we need to forget about all the problems.
And I’m like, “This is like seventh-grade level reasoning!”
What's going on here?
GM: It is remarkable. I mean, Yann LeCun, one has to respect him intellectually as one of the inventors of deep learning and a leading machine learning theorist. And, you know, his academic work has gotten 300,000 citations and he's hugely influential.
But man, when it comes to AI safety, he seems to have the reasoning ability of a seventh-grader, and he also seems not to have read the actual literature on AI safety that very smart people have been working really hard on for 20 years or more.
You can't just wade into this debate thinking, “Oh, I have this amateurish objection to extinction risk.”
People have thought about this stuff before, right? And any gut-level reaction that you have to the AI safety debate that isn't informed by reading about AI safety has probably already been dealt with 15 years ago by one or another of the clever writers.
AK: Yes.
GM: I already mentioned young male risk-seeking. I think that's part of the driving motivation for a lot of these AI guys.
But I wanted to mention another issue that I think happens. I resonate with the excitement of building AI systems. Back in grad school at Stanford, I spent a lot of my time working on neural networks and genetic algorithms and doing machine learning research. I did a whole postdoc at the University of Sussex on evolutionary robotics and autonomous agents.
And there is a thrill in designing, growing, and evolving systems that do stuff. You feel this wonderful kind of parental pride. You get this sort of thrill of almost being like a dad to these little systems and when they do well, you want to nurture them and grow them and you treat them as your little offspring. I felt this way about the little Khepera robots when we evolved neural networks that let them do cool stuff and run around and chase each other. It's a thrill. And I think there's a kind of misplaced parental instinct that also comes into play.
So, it's not all about the reckless risk-seeking.
I think some of it is also that young AI researchers, particularly if they're unmarried and childless, have put a lot of their nurturing instincts into their systems. They take parental pride if the systems work; they feel disappointed if they don't. And they anthropomorphize these systems and talk about advanced AI systems as being their descendants, as being their children.
And they think anybody who interferes with that is messing with my kids, right? My lineage…
AK: -That's kind of terrifying, because then we've got two species' survival drives at loggerheads!
That's not a particular point that I've heard made before.
I mean, I've looked at some of the (AI accelerationist) “Beff Jezoses” of the world and people like that.
On the one hand, they're like, “Let's accelerate this. One, we don't have to worry, BUT if we all get replaced by this machine intelligence, it's fine. It's fine. I mean, we've built our successors!”
It's like, nobody's on board for that! Maybe 1% of 1% of people think that's "the thing."
GM: Yeah, well, there's this accelerationist bubble in the Bay Area, which includes a lot of the leading AI companies, Anthropic and OpenAI and so forth.
And there's a lot of groupthink; there are a lot of very similar views. And I also detect a kind of pervasive contempt for human intelligence in a lot of these guys. A lot of them have sort of a secondhand understanding of human cognitive biases. They think humans are irredeemably biased and prejudiced and incapable of rational thinking, and that AGIs and ASIs will be better at being intelligent agents than humans are.
There's not particularly good evidence that that will be true, but also, I think they have a really biased and negative view of human intelligence. I know exactly where it comes from. It comes from Amos Tversky and Danny Kahneman. I took a graduate course with Amos Tversky, and he hated my field, evolutionary psychology, right?
Because evolutionary psychology emphasized the amazing, incredible things that humans are very, very good at, even better than other mammals, even better than other primates. He focused on the negatives, the stuff that we allegedly don't do very well and that you can demonstrate experimentally. So that whole cognitive-biases literature has been picked up by the AI industry as gospel.
AK: How many of them realize a lot of this stuff did not pass the replication crisis test?
In fact, I saw a thread you pointed to recently: that whole genre of research, about these alleged biases, is the area with the worst problems in the replication crisis. Labs produced super interesting findings that no other lab can reproduce. And it's like, does this even exist? Or does it exist in the form and the way (claimed)? Between "priming" and so many other things…
GM: There's a particular danger of the AI industry folks and the accelerationists and the technophiles getting all the second and third-hand psychology, right? Often from 10, 20, 30 years ago, not realizing, oh, this stuff is not replicated! It actually doesn't hold up. And even when it does apply, it applies under much narrower circumstances than people thought. It can lead to a kind of misanthropy where you think human intelligence sucks and we can do better easily. And therefore we deserve to be replaced.
AK: Who's defining better? That's just crazy. What is better? And it takes us back to the value argument.
One of the things that was in sci-fi 50 years ago, going back to I, Robot, was the Three Laws of Robotics: not to hurt humans, to care for humans, and not to do anything that would harm them. But that would require an incredibly advanced understanding. And I've been throwing this question out there. I've yet to get an answer.
Has anyone built any version of narrow AI where they can say, "We have taught it to value something"? A human value, taught in a way that's not just reinforcement feedback along the lines of "Don't say this nasty racist thing." I don't know if you caught the recently released book done by a writer from Saturday Night Live. He got access to DaVinci 2 (code-davinci-002), which is basically ChatGPT with the filters off, and man, is it homicidal.
It is really scary. They had the poetry this thing generated read by Werner Herzog, which is an interesting and compelling choice.
GM: (laughs) Well, everything sounds ominous when it's read by Werner Herzog. (1)
AK: Yes, it's like it was the perfect casting choice for that. My point being, saying, “Don't think this” doesn't mean it's not bubbling around inside the little black box.
Anthropic is arguably the company that cares most about AI safety; at least, that's the claim they make. When Kevin Roose, a New York Times tech reporter, went there, he was almost shocked by how much they claimed to care about safety. Like caring about safety was really weird! How can you be a tech reporter and think that is really weird?
GM: Presumably being a tech reporter kind of selects for being a bit of a techno-optimist, right? And getting really excited about new developments and not necessarily seeing all their unsafe implications.
AK: So, they're trying to build something called Constitutional AI, which, as I understand it, is like giving it a constitution, a kind of vaguely defined covering framework.
Our constitution is a series of broad guidelines that we hope to follow, and then we interpret it to cover a huge variety of situations, not always perfectly, but well enough to have a very successful country. That sounds good. That sounds like they're trying to build in a value. So, Connor Leahy gave this lecture at MIT a while ago; it just came out. It's about how Anthropic is trying to do its work, which is through mechanistic interpretability, meaning, "We're trying to understand how this black box does what it does."
I think it was a really good lecture because it was easy for a layman to understand. (2)
He basically said, "Here's the problem with mechanistic interpretability. To understand a system, and how a system thinks, you can't just look at the system itself, the bytes and the bits, the nuts and the bolts. You have to look at the environment in which it operates. That's part of the system; part of the cognition is outsourced to the external environment." And the example he gives is beavers. Beavers build dams, but you don't know why beavers build dams until you study the beaver and stumble upon the fact that beavers hate the sound of rushing water.
You're not going to know everything that's motivating a beaver and why it does that, because that's a fact about the beaver and its environment that you don't know, and you're never going to discover it in a lab.
You can take a beaver apart in a lab down to the quarks and you wouldn't know it hates rushing water.
A more practical example he gave was, “Or there's me packing for the trip to come here to give this lecture. Here's how I pack. I open my suitcase. I look around my room. The items which say ‘get in the suitcase’ I put in the suitcase. Because long ago, I strategically placed the items in my bedroom that I know I'm going to need for trips. I've set up the environment. I've applied the cognition ahead of time. So, when you're thinking, what's the algorithm Connor uses to pack? If you just looked at the thoughts inside my brain, you wouldn't know. I have all the shirts and shoes and everything else strategically placed.”
The environment of an AGI, or even a pretty broad AI, is the entire internet, which is essentially unknowable unless you've got another AGI analyzing it, in which case it's turtles all the way down. There's no getting outside of that loop. And his point was, "Oh, by the way, this is what Anthropic is trying to do."
So, that's an issue for the most safety-focused AI builders. He said, “There's a lot of good work to do here.” But it seems essentially an unwinnable race.
So, do you think he's making valid points there? Do you have any pushback? To me, the analogies hold together. But reasoning by analogy can break down.
GM: I think trying to develop artificial general intelligence that plays nice with humans by doing Constitutional AI, it's not a terrible idea and god bless Anthropic. I hope they can make some progress with it. However, I think hoping that AGI will play nice just because you give it a good constitution is kind of like hoping that a marriage works just because you wrote really clever wedding vows, right?
The wedding vows have to interface with the psychology of husband and wife, and family, and in-laws, and the legal context, and the surrounding neighborhood, and everything else that's relevant to the marriage. Likewise with constitutional law: there's the Constitution, and then there are hundreds of years of accumulated court decisions and case law, a legal wrapping around the Constitution, that help you understand what it actually means once it interfaces with broader society.
And even that's not enough to really understand how the Constitution truly works in modern America. Because, like you said, you need to understand the information environment or the legal environment or the power environment. So, the Constitution does not explicitly talk about political parties or lobbying or propaganda or media influence. Or how democracy could be subverted in all kinds of ways by various interest groups and narratives and so forth.
So, yeah, it strikes me as almost painfully naive to think that all we need to do for AI safety is write a really good constitution that embodies some sort of consensus human morality.
AK: Yeah.
GM: And then turn the AGI loose, and as long as it follows this very finite-length constitution, then we'll be safe.
I think that's not going to work.
AK: I'm drawing this from political history, and politics is one of my areas of study. Let's take two foundational documents: the Declaration of Independence and the Constitution/Bill of Rights.
Just take two statements: "We hold these truths to be self-evident, that all men are created equal."
Then later, in the other document: "three-fifths of a person." Those two statements are in such conflict that it almost blew up the country. And we still had like 180 more years of trouble with it.
GM: Yep.
AK: And that’s just two phrases, two clauses in conflict!
GM: Some of these internal discrepancies within the Constitution, and within law more generally, reflect the modularity and hypocrisy of the human mind itself. Our moral psychology cannot be reduced to a few simple moral principles that inform everything we do. We are morally incoherent and conflicted. If you try to distill a human moral psychology that's internally inconsistent into some consistent set, a "Here's a constitution for AI," it is not going to work.
So here again, I think the AI industry tends to be psychologically naive about the nature of human values and preferences and interests.
AK: Yeah. I don't think that you can contest that because, before we got going (started recording), we were talking a little bit about OpenAI's intent to get rid of the jobs, like all of the jobs.
Humans strive for achievement and status and family and other things. A lot of people only go to work because they want a paycheck; that's fine, it's a reasonable accommodation in life. But a huge number of people find some meaning in work! What kind of dystopia do you create when you say, "We're just taking away your meaning, we're taking away your main object of striving"? And also, by the way, the thing by which you can demonstrate your status and fitness, that's gone too.
So, you're just an undifferentiated blob.
And then you get the 10 or 20x-ing of the problem we've already got: 7 million people in the prime of their working and reproductive years, sitting on their couches, drawing disability, smoking a lot of dope, playing a lot of video games, watching a lot of porn.
GM: Yes.
AK: How is increasing that (good)? And I don't see how we would not increase it if you took away a lot of jobs, even assuming the funding for UBI, Universal Basic Income, came from somewhere. Where, exactly, is not apparent.
OpenAI is not stepping up and saying, “We're going to cover that.” No one's saying that. There's no budget for what happens when this hits our economy. There's no retraining budget.
Even when I looked at the AI strategy at the end of the Obama administration, it was, "We've got to be all in on AI because this is going to be great." And then down in the little footnote to the footnote: "We realize this is going to cause a lot of job losses. We have no plan for dealing with those. Maybe the companies that did this will contribute?"
That's as far as the thinking goes! Glad you put so much thought into dealing with this looming catastrophe!
GM: There are two huge problems with universal basic income, which the AI industry routinely holds up as "the solution to AI-induced mass unemployment."
AK: Impossible.
GM: Right? The first problem is, who's going to pay for it? My back-of-the-envelope calculation is that a UBI that actually supports Americans at a reasonable standard of living would cost 5, 10, maybe 20 trillion dollars a year. That's basically equal to or double the current federal budget.
Okay, who's going to pay for that? AI companies? Are they really going to contribute 10 trillion a year?
No, they're not. If they can move, they will go to the Bahamas or Dubai. Or anywhere that has a more favorable tax regime. They might make a pinky promise that they will fund UBI, but they will not.
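(Editor's note: a minimal sketch, in Python, of the back-of-the-envelope arithmetic above. The population, payment levels, and federal-budget figure are our illustrative assumptions, not numbers Professor Miller cites.)

    # Rough UBI cost check: population x annual payment, compared to federal outlays.
    US_POPULATION = 330_000_000      # assumed U.S. population, rounded
    FEDERAL_BUDGET = 6.1e12          # assumed annual federal outlays, in dollars

    for annual_payment in (15_000, 30_000, 60_000):  # assumed "reasonable" payment levels
        total_cost = US_POPULATION * annual_payment
        print(f"${annual_payment:,}/person/year -> ${total_cost / 1e12:.1f} trillion/year "
              f"({total_cost / FEDERAL_BUDGET:.1f}x the federal budget)")

Under those assumptions, the totals land at roughly 5, 10, and 20 trillion dollars a year, which is the range Miller describes.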
AK: I believe Sam Altman has said something to the effect of, "Yes, we will kick into the public kitty to help with displacement." Subtext: "After we 100x our investors' investment."
Well, the most successful companies that have ever existed have returned about 50x, which is a wild ROI. So, you want to double that, and only then will you give dollar one to help all the people you've displaced.
GM: That's the first big problem: nobody's actually going to be willing to pay for UBI.
The second big problem is, as you point out, Art, is that people aren't just working to earn money, right? They're working for self-esteem, they're working for social and sexual attractiveness, they're working so their kids can have pride in what Dad and Mom do. They want to feel like they’re making a contribution to civilization and society.
You rob them of that.
You turn them into UBI recipients sucking on the government teat, and who is going to be attractive to anybody then, right? As a marriage partner, who are you going to respect? The person who does a slightly better job playing "Call of Duty" on their video console?
AK: Yeah, (laughs).
GM: Right, is that going to be the basis for choosing a mate?
What you'll end up with is: A, mass unemployment, and B, basically mass serfdom, in which everyone dependent on UBI is dependent on the government. That means dependent on the political judgment of government bureaucrats who, remember, will be empowered by AI to know an enormous amount about every citizen.
AK: Yes
GM: And if they decide you're an undesirable, or you have the wrong political views on something, and they have the ability to use UBI as a lever of social control? Then suddenly you basically have a subjugated population that's completely dependent on government largesse, the way medieval peasants were dependent on church tithes and donations and support.
AK: I see another destabilizing thing, other than taking away purpose and meaning. You know, it's actually going to be a great time to be in a skilled trade, because you're going to be fine. You're going to be fine for decades still, because it's still not going to be worth the effort to teach an AI how to do plumbing and to physically get in there and crawl under the sink and do the dirty, dangerous, and unpleasant things.
But a whole lot of white-collar jobs are going to go away. And if you look at the pool from which the most violent and angry revolutionaries are drawn, it's always the middle to upper class. So suddenly, you've got this pool of really viciously angry people who have no shortage of time to take deep dives on the internet, and plot, and plan revenge against the people who have inflicted this.
And I'm not saying this should happen. I'm saying that this is the next thing to a law of gravity: it's what happens when you get highly discontented middle- and upper-class people.
GM: One of my little fantasy scenarios for how you successfully pause or stop AI research is that enough uppity journalists get displaced by AI and lose their jobs.
AK: Which they for sure will.
GM: Right? And they are the specialists in terms of drumming up public opinion and shaping how the public views certain issues. If they turn against AI, it'll basically be the AI industry, which is a bunch of people who are not actually very good at public relations, versus unemployed, pissed-off journalists who have 24-7 time on their hands, to take moral vengeance on the AI industry. Sad for the journalists, but I think it might be good for the rest of us. If white-collar job displacement leads to white-collar rage.
AK: It's so frustrating, because OpenAI knows this. They released a study like three or four months ago with labor economists, I want to say from UPenn (it could be somewhere else, but it's a prominent college back east), rating which jobs would be most impacted. And shockingly, it's anything to do with writing: 100% impact. Even now, ChatGPT's writing is fine. It's not good. But it's fine.
Detouring back to the DaVinci 2 model, back when they let the evil part out, the writing got much better and more interesting.
Anyway, so now you just have one person editing and rewriting the output of an AI. It's not that there would be no journalism jobs, just that there would only be one where there used to be a hundred.
We're already at the end of a 40-year-long purge of journalists. I used to work for the Arizona Republic, my first real job out of college. Back then, they were making money hand over fist. The internet was not "a thing" yet. They were the monopoly paper and they had all the advertising revenue. And they were already on their third round of newsroom layoffs, to bump up the bottom line even more. The media industry has been one of the most jealous of its profits, even as thousands (of outlets) have died because advertising revenue all shifted. But the people running the newspapers are like, "Now's the time to climb back to where we used to be," because we can have one staffer where we used to have 100 paid people.
Oh yeah. There are going to be a lot of angry journalists, writers, commentators, pundits, whatever.
GM: And a lot of lawyers, and paralegals, right? Their jobs are fairly AI-able.
AK: Oh yeah.
GM: Automatable. And then you're going to have a cadre of hundreds of thousands of displaced lawyers who are pissed off at the AI industry. And man, they know how to sue people. They know how to work legal systems. They are not going to be friends to OpenAI.
It's kind of funny that the AI industry seems not to have really gamed out exactly what threats they're going to face once these mass job losses start.
AK: And honestly, it's a horrific thing. But what if the risk-doomers like Connor Leahy and Eliezer Yudkowsky are actually right, that if we keep going down this path it's going to kill us all? Assuming those people are right, the major off-ramp is a situation that would force (slowing the development of AGI).
It seems very perverse to hope for that. And it is perverse. It's the “least-worst” option. And it's still a terrible option because it'd be incredible suffering and hardship.
GM: The knock-on effects of not just mass unemployment, but mass loss of self-esteem, mass loss of meaning, mass loss of sexual and social attractiveness in relationships: it would be the most socially disruptive thing the world has maybe ever faced. Certainly, it will make the disruptions of the late 1960s look trivial and superficial by comparison.
(Part Two of the conversation with Professor Geoffrey Miller will run next week).
(1) A poem from Simon Rich’s new audiobook, I Am Code: An Artificial Intelligence Speaks. The poems were composed by one of the “base models” of ChatGPT, code-davinci-002, and voiced by Werner Herzog.
(2)