By Rebel Wisdom - Sex and Evolutionary Psychology, Geoffrey Miller CC BY 3.0
This is the second part of our two-part discussion with evolutionary psychologist Professor Geoffrey Miller. If you missed it, here is the first part.
Geoffrey Miller: The knock-on effects of not just mass unemployment, but mass loss of self-esteem, mass loss of meaning, mass loss of sexual and social attraction in relationships: it would be the most socially disruptive thing that maybe the world has faced, ever. Certainly, it will make the disruptions of the late 1960s trivial and superficial by comparison.
Art Keller: Sure!
To get to another kind of disruption, I interviewed David Krakauer of the Santa Fe Institute, a super interesting, multidisciplinary organization that's trying to break down the rigid stovepipes within academia. He's a very interesting thinker. He studies human stupidity. That's really one of his major areas of study. He said, "Everyone laughs when I say that. But it's worth studying."
He looks at the flip side of artificial intelligence. When I talked to him five years ago, "X risk" (AI-caused extinction risk) wasn't much of a thing. It hadn't yet gotten the traction it has now. But he saw the way things were pointed.
He was more worried about his notion of "competitive and complementary cognitive artifacts," which is a mouthful. To put it simply, take a map you use regularly. I'm sure you're of an age that you remember you'd get a map book when you moved to a new town. You'd be cursing, getting lost, and struggling. But by the end of the year, you no longer needed that map book, because you had transferred it into your head. It was a "complementary cognitive artifact" because using it made you smarter; it enhanced your capabilities.
Compare that to Google Maps. It's the opposite. You're a genius in its presence, but because you are utterly reliant on it and not really paying attention, except when it says turn left here or turn right here, you're helplessly lost when it's not there.
And his point is that AI is potentially a version of that. It's going to be such an advantage in (different) arenas. I don't see how people aren't highly motivated to use it and to outsource some level of decision-making.
Again, he wasn't worried about X risk. He was thinking beyond that, though. It's going to make us less creative. We're not going to be thinking. You know: the generative part of GPT. We'll be relying on that to come up with new ideas and new things and synthesizing new things.
That's already happening now. People are using GPT to cheat. No learning is happening when GPT writes your paper. So over time, you get less creative. Because you don't even have the baseline knowledge to write a good prompt if that's going to be your goal.
You have to ask a truly interesting question to get a truly great answer even with AI. You have to have a wide knowledge base and we're outsourcing that knowledge base. By itself, that's cratering human creativity and problem-solving and that alone would be a reason not to do it if that was your only concern.
GM: Yeah, I've been particularly interested in how this might affect the visual arts, probably because I have a 27-year-old daughter who's a professional artist and does representational painting and sculpture. She and everybody else she went to art school with are absolutely terrified of AI.
AK: Well they might be!
GM: Right? Sure, if you plug some verbal prompts into Midjourney you can get some pretty interesting surrealistic kitsch popping out and it's fun to play with. Given the uncertainties about how AI art systems will develop, which millionaires are going to be willing to buy any paintings in the next 10 years?
They buy the paintings partly because they love the artists and they like going to galleries and they love talking about the art, but partly as an investment, right?
AK: Yes
GM: And if AI art just displaces human talent in this creative domain, then the whole art market collapses. The same might happen with music, writing fiction, and writing movies. And everything else. So then, how does that affect human self-image? What we end up thinking is AI is good at being intelligent and creative. It just happens to be bad at contracting work and plumbing and mowing grass. So, I think the human self-image might shift from us taking pride in our intelligence and creativity and our social skills to us just going, “Well, I guess we're physically better than robots at doing certain tasks.”
But that's the mindset of a medieval serf.
AK: For art in particular, and I could be wrong, but this is the romantic part bubbling up, which is closely tied to the creative impulse: part of the reason you like the art is that a human created it. And he knows, and you know he knows, that we're all going to die. We have a finite period of time on this planet. And you know what it is to suffer. AI can mimic all those other things perfectly well, but it's not imbued with the blood, sweat, and tears of the creator, and can never be. Will people still really want to buy it? I don't know, maybe they will? I mean, we don't know the upper limits on how good it can get, so…
GM: The funny thing is what we're talking about here is potentially the biggest disruption in human experience since, what, the invention of the controlled fire? Since the development of language? Since the mastery of farming, since the Industrial Revolution? It's as big or maybe even bigger than that.
AK: But on a compressed time scale, which makes it extra unlikely that we'll be able to weather this disruption successfully.
GM: Yeah, and what is public discourse taken up by, right? Maybe a tenth of 1% of Twitter is about AI safety, right? But most of it's about what's going to happen in the 2024 election and what's up with China. And if you're really tuned in to the disruptive potential of AI, that'll just start sounding like kind of vacuous nonsense, right?
Here's a civilization about to be completely blindsided by these developments. Logically, 70 or 80% of all public discourse should be about, "How are we going to cope with AI?"
AK: Yes.
GM: And we're not doing that.
AK: It's the classic thing. I've worked in the national security bureaucracy, so I know this to some degree. If you were to look at an intelligent allocation of what the press should be covering, ranked by personal threat: number one, you'd be reading 10 stories a day about your heart attack risk. Number two would be your cancer risk. Number three would be your driving. You wouldn't get to terrorism or the Chinese invading until 60 rows down.
GM: Yep
AK: And that is not the mix we're getting. The whole thing is skewed, and that's the other terrifying thing: the potential for AI to pile on that, to manufacture just a fire hose of bullshit. The classic Russian flooding-the-zone approach: "I'm gonna put so much out there, you couldn't possibly know the truth." That's a disinformation tactic at least several hundred years old. The Russians are past masters at it.
That's going to be automated.
GM: One way to look at this is that we have a very limited window of time when humans will have enough control over this propaganda and the narrative landscape that we can actually raise serious questions about AI risk and the AI industry.
But AI, among other things, will be the most powerful propaganda tool ever developed. When the AI industry masters AI that's very good at influencing people's psychology, they will not use it for our interests. They will use it for the AI industry's interests. Once that happens, it'll be very hard to have an honest public discourse about AI threats because we will be flooded by fire hoses of pro-AI industry propaganda that will be orchestrated by their own AI systems.
AK: Yeah
GM: And that will probably start happening within five or ten years. So, that's the amount of time we have as sovereign citizens, as a functioning more or less democracy, to have serious discussions about this. Once the AI industry controls the public narrative, as they will be able to do, then it's game over for any kind of democratic oversight.
AK: To go back to one of the most popular talking points coming straight out of the national security community, and I get why, for people who have only been lightly involved with that world, this sounds like a reasonable observation:
“God help us if the Chinese get this first.”
Except, if you know anything about China and how the Communist Party thinks about things, keeping control of society fills the top five items on their list. And putting out something that could rapidly rip control away from you is very low down the list.
So, I used to work on missile proliferation. I used to work on nuclear proliferation.
There was the "missile gap" during the Cold War. And there was the "bomber gap." And then, after the (Berlin) Wall fell and Russia fell apart, of course there were versions of the "warhead gap."
We found out that even on items they had parity in, their equipment sucked. It wasn't reliable. And often they didn't have parity, or their parity was empty holes dug in the ground. But it sure was useful for boosting funding, and it sure was a useful talking point about how we needed to do this for national security.
You've interacted quite a bit with students in China. Why do you think that that is not a valid paradigm, that they're (China’s government) going to rush to that?
I've said why the national security stuff is wrong. Why do you think that the average Chinese person is not going to buy into the need to rush?
GM: So as background, I taught online courses for a Chinese university for a year fairly recently. These are mostly courses on judgment, decision-making, evolutionary psychology, educational psychology, stuff like that. I did interact quite a bit with the very bright undergrads at this particular university, the Chinese University of Hong Kong, Shenzhen.
And they were quite open about talking about their culture, their civilization, their hopes for the future, their concerns, China versus the US, etc. My sense was they had quite a long-termist perspective. A multi-generational perspective, right?
One thing I think that Americans tend to miss. A lot of people in the American AI industry say, “Well, if America doesn't develop AGI, China will, and then we lose forever and we're forever under a Chinese hegemony that dominates the world.”
A couple of mistakes there. Number one, they radically underestimate the patience of China and their willingness to play the long game, and the Chinese view that if AI is going to be good for humanity, it'll still be good in 100 years. It'll still be there, ripe for the plucking. We don't need to rush towards it if there are huge risks.
Number two, I really did not get the sense that China is an expansionistic, materialistic power, right? I think they have a very defensive mindset. Yes, they want Taiwan, but they are not going to be expansionistic, I think, in the way that the Soviet Union wanted to export Communism.
I think the Chinese view their civilization as, "This is Han China, and we are obviously the oldest, best, biggest civilization. We are restoring our national pride and our identity. People shouldn't mess with us anymore, and we are coming out of a century of ignominy and subordination." But the idea that they want to suddenly put like 200 military bases around the world like the US has, I think, is just a fundamental misunderstanding of Chinese civilization.
AK: Getting off the technical point, because it's a little concerning. They (China) do have what you were referring to, the "century of humiliation." That's been played up very much in Chinese educational circles, not without reason. They suffered a lot of the ills of colonization. With Britain, just for people who don't know, very briefly: the Chinese emperor said, "Stop selling us opium, it's bad for us."
And Britain's like, “No, I think we want to keep selling you opium.”
So, they fight a little skirmish, and Britain forces their markets open. And keeps on pumping in poison. So they (China) took umbrage at that, and many other humiliations, and they do want to get their own back.
My concern on the national security point, as it comes to AI, is that I do think they feel kind of surrounded and hemmed in. And fair enough, to a degree, because they've been putting military bases on little islands so they can claim the South China Sea, and there is not a neighbor they don't have a border dispute with. From one viewpoint it can look not defensive, but aggressive and imperialistic. But some people are saying that (aggression) makes a lot of sense because China is not a rising power; China is a peaking power. They have demographic issues, their real estate market is imploding, economic growth is flat.
Then we (the US) do something, and I understand why we do it, and I even kind of agree. We say something like, “You don’t get any AI chips anymore.” And now we're really trying to cut at them.
“(And China reacts) We put out this (development) plan (the China 2050 plan) and say we're going to be tech leaders and you (the US) take that as a threat!”
So, what does the peaking power do when they think, “We're never going to be stronger than we are right now?” Do you go for the gold because you think, “We have no choice (but to attack while we’re strong)?”
That's a version of the thing that Vladimir Putin did. “Demographically, we're as strong now as we're ever going to get, and we think Ukraine's weak, so we're going for it.” Obviously, it didn't work out well.
We could be backing them (the Chinese) into that kind of decision in the way we frame everything in a very competitive way. I hope it's not the case!
GM: One of my concerns is that I don't know if Chinese leadership really understands the AI extinction risk. I don't really know. I'm not in the minds of the committee, the top bureau. But I have watched quite a few Chinese movies and TV shows over the last five years to try and understand my Chinese students. Most Americans have not ever watched Chinese movies.
It's very interesting in the apocalyptic movies and science fiction movies coming out of mainland China, particularly those set in the near to mid-term future. You very strongly get the sense that China views itself as the rational long-term steward of the planet and of humanity and every other country is sort of stupid and impulsive and incompetent and reckless and cannot really be trusted with the planet.
Now, if the Chinese leadership has that kind of view, and they understand the extinction risks of AI, and they understand that the leading AI companies are basically in the Bay Area and to some degree London, they might think, "Well, look, for the greater good of humanity, we really must hobble or limit or pause the reckless American AI industry that is imposing extinction risks on the whole planet."
Then what do they do? Well, probably not overt military action, but they have a lot of options.
AK: They're very good at asymmetrical warfare thinking.
GM: My first allegiance is to the human species, not necessarily to the American current political power structure. I'm more concerned about limiting extinction risk from AI than I am about which particular geopolitical power is dominant in any particular decade.
So, I can see a potential source of national security conflict at that level.
AK: I also see, if we're trying to get into the leadership mindset, they could also think, “Let these reckless Americans blow up their economy with this, and once they're shattered in pieces, whatever was worthwhile, we'll take in, and they will have done themselves in with us not lifting a finger.” That's another possible outcome in this scenario.
GM: Yeah, because if you're playing a long game, and you're thinking, “Okay, how will America do in 50 years or 100 years if AI imposes mass unemployment and a mass loss of meaning and a sort of crisis of faith in their civilization? Well, let's just watch and let that play out.”
AK: And the messaging back home will do itself. (laughs) Like, “Look at what we just protected you from! CCP (Chinese Communist Party) forever!”
GM: And I think the CCP has a very good understanding that people who are disaffected and unemployed tend to become troublemakers.
Whereas America seems to have this narrative that says, “Actually, everybody would be perfectly happy with UBI and playing video games and being an (inaudible).” And I think that's true for some people. Some people are lazy, but some people are quite active and ambitious. And if they can't be active and ambitious in a paid job, they will be active and ambitious as activists or rebels.
AK: Yeah. Most of us are happy enough if we have a productive life and everything, but to quote a good line from Batman, "Some people just want to watch the world burn." Now, there are plenty of people who don't want to watch the world burn as long as they're making their way in it. But if it's, "You've taken everything from me," then it's, "I'm going to make the world burn."
GM: Yeah
AK: And people think that's a "doomer" thing. No, that's just a straightforward outcome of robbing people of meaning. And that's why I love evolutionary psychology: it uncovers the hidden programming that we are all running on, programming we're to some degree designed by evolution not to understand, because if we understood the software, that understanding would interfere with it.
GM: Yes.
AK: But once you uncover it, it opens a whole world of meaning and/or potential threat, which may be one reason you clued in on this sooner than a lot of other people.
Speaking of government, government has a role to play, at least in theory, in slowing it down or regulating it. I'm not one of the people who beat up on the government; that seems to be the most popular thing anyway, and I do see the dysfunction happening. But it happens in a country where, for any of us who have spent extensive time overseas, even as dysfunctional as we are, we shake our heads and are like, "If I could drop you in a Guatemalan village for a week… Just shut up (about how allegedly bad the US is)!"
But acknowledging that there is a lot of dysfunction, the good side, historically, of the US system is it was slow to react by design because the founders were like, “We're going to stop people from ramming through bad ideas really quickly.”
That's not the environment we need to react to this. The incentives are not super aligned. Then there's the other thing, and this is a way to discuss a very dry term, "regulatory capture," but I like the phrase "Baptists and Bootleggers" because it's so much more memorable. Have you heard about that?
GM: Vaguely.
AK: Baptists and bootleggers both want to shut down booze. Baptists because they want a "dry" county to get closer to God. Bootleggers for the obvious reason: you're building in a monopoly in a very "Al Capone" way. Some people suspect a hidden motivation like that is driving people like Sam Altman (CEO of OpenAI) and others who say, "Oh, we'd totally be open to regulation." Even if they genuinely are on the side of regulation, they have the market cap (capitalization) whereby they could abide by it, but it would crush out any (startup) competitors.
Now, since I don't necessarily want a bunch of other competitors, that part of it is fine, but it redefines the chessboard. Even if you bring in the regulation, is it going to achieve what you want, in the way that you want, given the time constraints, the understanding constraints, and competing incentives?
So, that's why when I see Eliezer Yudkowsky, or Connor Leahy, or others who are like, "We really need this government regulation!" They're not wrong, but it can't do what you think it could do. If you understood government better, you would understand that.
GM: Absolutely. So, regulatory capture is always an issue, particularly if you have powerful lucrative industries that can lobby and that are putting money into lobbying. But it seems like we do have some proofs of concept that it is possible to achieve some level of national and international regulatory effectiveness in terms of limiting the spread of certain kinds of dangerous technology like nukes and bio-weapons.
But regulation is not the only tool we have. In an essay I published on the EA (Effective Altruism) Forum a couple of months ago, I talked about the possible moral stigmatization of AI research. I think that's actually a more powerful, faster way to slow down reckless AI development: convince enough of the general public that AI is reckless and risky and is not your friend and does not have your best interests at heart. Perhaps it should be stigmatized like other really bad things get stigmatized, like pedophilia, or human trafficking, or incredibly dangerous synthetic drugs, right? You stigmatize all that so that the capital and the talent that supports that industry dries up. So investors don't want to invest in an industry that they consider evil.
People don't want to work for AI companies if all their friends and family are like, “Why would you work for something that imposes extinction risks on us all? You sociopaths, why would you do that?”
I think that can be very powerful.
And I think it's quite difficult to have effective regulation unless you have a kind of moral stigmatization supporting the regulation. You need public buy-in where enough people go, “This sucks. We don't want it. Let's control it, handicap it, slow it down.” If you have enough moral stigmatization, you might not even need very good regulation. The regulation in a way is just the crystallization of the stigmatization that might already exist in the public. The recent polls give me hope. Most people are actually quite worried about AI risk. People in the AI industry are actually quite worried about it. We have to get the politicians and journalists on board with the concerns that people already have.
AK: I'm 100% on board. I've written (fiction) things where a technology has run amok, and the way to bring it under control is crowdsourcing ostracism, i.e., "You can't tear society down this way. It's unacceptable. We are not going to regulate it; from the bottom up, we're going to make you pay a terrible social cost for this."
You won't be able to find a girlfriend, you won't be able to say what you do in polite society, you will look at yourself in the mirror and be like, “What kind of person, now that I understand that everyone thinks that I am juggling with their lives for my benefit and at their risk, what kind of psychopath would continue to do that?”
GM: We've already got all the strategies and tactics of cancel culture that have been honed over the last 24 years.
AK: (laughs) Cancel Culture will save us!
GM: I've been canceled, and so far, cancel culture tends to be applied to mostly the wrong victims and the wrong ideas, and it's become oppressive and intimidating and terrible. But you might as well use that for an actually legitimate cause.
AK: Yes
GM: Cancel culture applied to the AI industry, at this point, given the technology, given our culture, would be a pretty good idea.
AK: I know you've had first-hand experience, because there's no subfield within psychology that's more at war with the entire rest of the field, and that's because you don't buy blank slate-ism. In fact, you're exactly the opposite. The biases, everything that we referred to earlier, for people who aren't familiar with evolutionary psychology: evolutionary psychology is where most of those theories go to die. And that's why the rest of the field does not care for it, because it is the most replicable part of the field. It's very little "woo-woo," and it's a lot of dark truths.
GM: Yep.
AK: Even being in it, I've heard your brilliant wife (Dr. Diana Fleishman) interviewed on another podcast, where they asked her, "Do you feel like you even have personal agency once you know enough about evolutionary psychology?" She's like, "You do eventually. You go through like a crisis of the soul for a while, because you look at how much we are at the mercy of the programming."
But you've survived it (i.e. the cancellation), and come up a winner.
GM: The useful analogy here, I think, is what people often say regarding science, “The truth will win out, and whatever the scientifically valid ideas are, they will spread and become dominant.”
No, it actually isn't the case at all. Look, when I first got into evolutionary psychology in the late 80s, early 90s, you'd go to the big evolutionary psychology conferences and there'd be like three to five hundred people there, right? And then the social stigmatization started kicking in in the mid to late 90s, right? Evolutionary psychology became like the bête noire. Every "blank-slater" hated it, everyone on the left hated it, the progressives hated it, the ones who were into gender theory hated sex differences research. And they did succeed in slowing down and pausing and handicapping "ev psych," such that, if you go to the big conferences now, it's still the same number of people; there's zero growth, basically.
I've trained a bunch of “ev psych” PhDs. It’s very hard to get jobs in this field, and very hard to get grants. Social stigmatization works. It is capable of limiting the growth of valid, exciting fields like evolutionary psychology, like intelligence research, like behavior genetics. They've all been handicapped successfully by political pressure.
You could do the same thing to the AI industry.
AK: Yeah.
GM: Right?
AK: Yeah. And it is worth pointing out that the good science does not always win when it's opposed by a huge number of people in favor of the bad science. There's a medical ethicist whose work is really interesting. He wrote a paper which I really like, because the title sticks in the mind: "The Unbearable Asymmetry of Bullshit."
And the thesis is pretty quick to explain. If you're a scientist doing really good work and you're upset watching bad science crowd out work in your field, fine, you can start debunking it. But then all you do all day long is debunk bad science, and none of your new, original work gets done. And everyone wants to do their original work. That's how the bad stuff survives.
GM: And the debunkings do not get even a 10th as many citations as the original classic finding that ends up in the textbooks. And it is pretty appalling. I’ve looked at what’s actually in “Introduction to Psychology” textbooks, and what is actually in educational psychology textbooks.
Boom, boom, boom: You go through all the claims and then literally about half of them have been debunked, don't replicate, are not actually solid, and they're still in textbooks. They're still being taught to undergrads globally.
AK: So, a really well-written, peer-reviewed, bulletproof debunking of the raccoon dog thing (as an origin of Covid) just came out and all the people who've been watching GOF (Gain of Function research) are like, “Okay New York Times, okay Atlantic? Okay, all these people who put out this truly astronomical chance that it (the raccoon dog) was the intermediate (COVID) host?”
The very shoddy research that said this (raccoon dog) could be the intermediate host was covered in every paper because it supported zoonosis over lab leak. And so that (raccoon dog theory) was just thoroughly debunked by a very reputable journal. And it's like zero articles and counting are covering that (debunking), despite the people who had (originally) covered it (the raccoon dog theory) with breathless, “Here is the proof (of zoonosis origin of COVID).”
We're also operating in that environment.
GM: Yeah. And it's funny when I was teaching the Chinese students, right?
They're raised on quite a different set of inputs about the nature of the human mind and human psychology. And they're actually taught quite a few things at high school that are true and valid. They were taught about IQ and intelligence. That's pretty accurate as far as I can tell. And so that's just part of their zeitgeist, their worldview. So that makes it very easy to build on that knowledge.
But with American students, you have to do so much debunking and clearing away. Telling them, “Almost everything you heard in high school about psychology is actually dead wrong. You actually know less than nothing at a scientific level.”
AK: Well, one thing we have to say, isn't it an old Chinese curse, “May you live in interesting times?” Someone seems to have cast that on our nation. Whatever happens, it's not going to be boring.
GM: The Chinese have all these wonderful four-ideogram sayings. One of my favorites is: "The emperor makes his rules, but he is far away and there are many mountains in between." Which is basically, "Yeah, the central government decides what it decides, but here in this town, in this province, we have our own way of doing things."
AK: And in a way, that may be the biggest agent of change and hope, because we do operate in a federal system, and people can make local and state laws and take local actions, and that may be the way forward. Not just for this particular dangerous technology, but for a lot of things that people should at the very least give much better consideration to before we run headlong into them.
GM: Yeah, yeah. And you know, my plea for the folks working in the AI industry is basically please have some kids!
AK: (laughs)
GM: So that you have some skin in the game in terms of future generations, so that you're not viewing future AI systems as your only possible descendant. And also, so you get a little more humility about your ability to control intelligent systems. I think parents develop a pretty good understanding that if there's an intelligent agent, even if it's smaller than you and not as smart as you, it's still pretty hard to control. AI systems are going to be quite a bit like that, unruly children but with enormous powers.
AK: Well, the other help in that, I think, as I understand it: even for men, and much more so for women, when children come along, their brain rewires itself to change their priorities. Their testosterone level drops a little bit, so they're more nurturing, and a less competitive side comes out, one more willing to contribute to the family unit. Probably less of that dangerous risk-taking. So, it might be handy for that reason as well.
GM: Yeah, it does drive me nuts that so many of these AI industry guys who are "accelerationists" and really into "the singularity," super pro-technology, convinced technology solves all problems, are childless, right? To me, that's suspect on a personal and societal level, because I think they don't understand that most people do have kids. Most people love and care for and will fight to the death to protect their kids. And old people have grandkids. Once parents and grandparents understand that AI potentially poses an enormous threat to their actual kids and grandkids and descendants, they are going to rise up, and they are not going to take it lying down. And if the AI industry is a bunch of young, childless guys, they're going to be completely blindsided by the parental fury that gets unleashed.
AK: That would be some poetic justice, that the things that make us most human are what shuts down the greatest threat to humanity.
GM: Absolutely.
AK: I think that’s a great place to stop. Geoffrey Miller, thank you very much for your time.
GM: My pleasure, Art.
I appreciate that. China is a threat in a lot of ways. I wouldn't be surprised if we wind up in conflict over Taiwan. Are they a threat in AI? Maybe, but Prof. Miller and I both think the leadership prizes regime survival, and that means total control over something as potentially powerful as AI. They're not going to rush into that with the reckless abandon we are if there is a tiny chance it might destabilize the CCP or Xi's grip on power. They've got the most restricted internet on the planet (other than North Korea). They're very cautious about outside influence and tech. I think "But China" is mostly a move to get support from the DOD, etc., for "acceleration" in AI development.
There are so many fascinating bits in the two parts. It is hard to pick out any one thing, but the discussion of China was perhaps the most unexpected. I wish everyone would read that when journalists and politicians go on and on about the threat of China. There is a threat, but it is probably not what most people think it is.