Daniel Schmachtenberger: AI Risks & The Metacrisis
As of today, we are in a war that has moved the Doomsday Clock closer to midnight than it has ever been. We're dealing with nukes and AI and things like that. We could easily have the last chapter in that book if we are not more careful about confident, wrong ideas. This is a different sort of podcast.
Not only because it's Daniel Schmachtenberger, one of the most requested guests, who by the way, I'll give an introduction to shortly, but also because today marks season 3 of the Theories of Everything podcast. Each episode will be far more in-depth, more challenging, more engaging, have more energy, more effort, and more thought placed into it than any single one of the previous episodes. Welcome to the season premiere of season 3 of the Theories of Everything podcast with myself, Curt Jaimungal.
This will be a journey of a podcast with several moments of pause, of tutelage, of reflection, of surprise appearances, even personal confessions. This is meant for you to be able to watch and re-watch or listen and re-listen. As with every TOE podcast, there are timestamps in the description, and you can just scroll through to see the different headings, the chapter marks. I say this phrase frequently on the Theories of Everything podcast: just get wet. It comes from Wheeler, and it's about how there are these abstruse concepts in mathematics and you're mainly supposed to get used to them, rather than attempt to bang your head against the wall to understand them the first time through. It's generally in the re-watching that many of the lessons are acquired, absorbed, and understood.
While you may be listening to this, either walking around with it on YouTube or listening on Spotify or iTunes (by the way, if you're watching on YouTube, this is also on Spotify and iTunes, links in the description), I recommend that you at least watch it once on YouTube, or periodically check in, because occasionally there are equations and visuals being referenced. I don't know about you, but with much, in fact most, of the podcasts that I watch, I walk away with the feeling that I've learned something, but I actually haven't, and if you asked me the next day to recall it, I wouldn't be able to recall much. That means they're great for being entertaining and for the feeling of productivity, but if I actually want to deep dive into a subject matter, they seem to fail at that, at least for myself. Therefore, I'm attempting to solve that by working with the interviewee (for instance, we worked with Daniel) to make this episode, and every episode from season 3 onward, not only a fantastic podcast, but perhaps, in this small, humble way, a step toward evolving what a podcast could be. You may not know this, but in addition to math and physics, my background is in filmmaking, so I know how powerful certain techniques can be with regards to elucidation: how the difference between making a cut here or making a cut there can be the difference between you absorbing a lesson or it being forgotten. By the way, my name is Curt Jaimungal, and this is a podcast called Theories of Everything, dedicated to investigating the versicolored terrain of theories of everything, primarily from a theoretical physics perspective, but also venturing beyond that to hopefully understand what the heck fundamental reality is, and to get closer to it: can you do so, is there a fundamental reality, is it fundamental? Because even the word fundamental has certain presumptions in it.
I'm going to use almost everything from my filmmaking background and my mathematical background to make TOE the deepest dive, not only with the guest, but we'd like it to be the deepest dive on the subject matter that the guest is speaking about. It's so supplementary that it's best to call it complementary, as the aim is to achieve so much that there's no fat, there's just meat. It's all substantive, that's the goal. Now, there's some necessary infrastructure of concepts to be explicated prior in order to gain the most from this conversation with Daniel, so I'll attempt to outline when needed.
Again, timestamps are in the description, so you can go at your own pace and revisit sections. There will also be announcements throughout and especially at the end of this video, so stay tuned. Now, Daniel Schmachtenberger is a systems thinker, which is different from reductionism primarily in its focus. Systems thinkers think about the interactions: the second-order, third-order, and higher interactions. And Daniel, in this conversation, is constantly referring to the interconnectivity of systems and the potential for unintended consequences.
We also talk about the risks associated with AI. We also talk about its boons, because those are often overlooked; there's plenty of alarmist talk on this subject. When talking about the risks, we're mainly talking about AI's alignment or misalignment with human values.
We also talk about why each route, even if it's aligned, isn't exactly salutary. About a third of the way through, Daniel begins to advocate for a cooperative orientation in AI development, where the focus is on ensuring that AI systems are designed to be beneficial and that there are safeguards in place, much like with any other technology. You can think about this in terms of a recent tweet by Rob Miles, which says: it's not that hard to go to the moon, but in worlds that manage it, saying that these astronauts will probably die is responded to with a detailed technical plan showing all the fail-safes, tests, and procedures that are in place. It's not met with, hey, wow, what an extraordinarily speculative claim. Now, this cooperative orientation resonates with the concept of a Nash equilibrium.
A Nash equilibrium occurs when all players choose their optimal strategy given their beliefs about other people's strategies, such that no one player can benefit from altering their strategy. Now, that was fairly abstract, so let me give an instance. There's rock, paper, scissors, and you may think, hey, how the heck can you choose an optimal strategy in this random game? Well, that's the answer. It's actually to be random, so a one-third chance of being rock or paper or scissors. And you can see this because if you were to choose, let's say, one-half chance of being rock, well, then a player can beat you one-half of the time by choosing their strategy to be paper, and then that means that you can improve your strategy by choosing something else. In game theory, a move is something that you do at a particular point in the game or it's a decision that you make.
For instance, in this game, you can reveal a card, you can draw a card, you can relocate a chip from one place to another. Moves are the building blocks of games, and each player makes a move individually in response to what you do or what you don't do or in response to something that they're thinking, a strategy, for instance. A strategy is a complete plan of action that you employ throughout the game. A strategy is your response to all possible situations, all situations that can be thrown your way.
And by the way, that's what this upside-down, funny-looking symbol is: it means "for all" in math and in logic. A strategy is a comprehensive guide that dictates the actions you take in response to the players you cooperate with and also the players you don't. A common misconception about Nash equilibria is that they result in the best possible outcome for all players. Actually, most often they're suboptimal for each player, and they also have social inefficiencies.
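The rock-paper-scissors example from a moment ago can be checked numerically. This is a minimal sketch (the +1/0/-1 payoff convention is the standard one, not something specified in the episode): against a uniformly random opponent, every pure move has the same expected payoff, so no deviation from uniform play helps.

```python
# Sketch: verify that playing rock/paper/scissors uniformly at random
# is a Nash equilibrium. Every pure response earns the same expected
# payoff against it, so no unilateral deviation is profitable.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

uniform = {m: 1 / 3 for m in MOVES}  # the one-third-each mixed strategy

# Expected payoff of each pure move against the uniform mixed strategy.
expected = {m: sum(uniform[t] * payoff(m, t) for t in MOVES) for m in MOVES}
print(expected)  # every move yields expected payoff 0
```

Because every deviation earns the same zero payoff, no player can improve by changing strategy, which is exactly the equilibrium condition described above.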
For instance, the infamous prisoner's dilemma. Now, this relates to AI systems, and as Daniel discusses, it has significant implications for AI risk. Do we know whether AI systems will adopt cooperative or uncooperative strategies? How desirable or undesirable will those outcomes be? What about the nation states that possess them? Will it be ordered and positive, or will it be chaotic and ataxic, like the intersection behind me? (Although it's fairly ordered right now, it's usually not like this.) The stability of a Nash equilibrium refers to its robustness in the face of small perturbations in payoffs or strategies. An unstable Nash equilibrium can collapse under slight perturbations, leading to shifts in player strategies and, consequently, a new Nash equilibrium. In the case of AI risk, an unstable Nash equilibrium could result in rapid, extreme, and harmful oscillations in AI behavior as systems compete for dominance.
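A minimal sketch of that prisoner's dilemma point, using the standard textbook payoffs (years in prison, so lower is better; the numbers are illustrative, not from the episode):

```python
# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my years
# in prison. Lower is better.
PAYOFF = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"): 3,
    ("defect", "cooperate"): 0,
    ("defect", "defect"): 2,
}

def best_response(their_move):
    """The move that minimizes my prison years given the opponent's move."""
    return min(["cooperate", "defect"], key=lambda m: PAYOFF[(m, their_move)])

# Defection is the best response to either move, so (defect, defect) is
# the unique Nash equilibrium. Yet both players would be better off at
# (cooperate, cooperate): an equilibrium that is socially inefficient.
print(best_response("cooperate"), best_response("defect"))  # defect defect
```

This is the sense in which a Nash equilibrium can be suboptimal for every player at once: neither prisoner can improve by deviating alone, even though both deviating together would help.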
And by the way, this isn't including that an AI itself may be fractionated in the way that we are as people, with several selves inside us vying for control in a Jungian manner. Generalizations also have a huge role in understanding complex systems. So what occurs is you take some concept, and then you list out some conditions, and then you relax some of those conditions.
You abstract away. Through the recognition of certain recurring patterns, we can construct frameworks and hypothesize, such that hopefully they capture not only this phenomenon but a diverse array of phenomena. The theme of this channel, Theories of Everything, is: what is fundamental reality? Like I mentioned, we generally explore that from a theoretical physics perspective, but we also abstract out and think: well, what is consciousness? Does it arise from material? Does it have a relationship to fundamental reality? What about philosophy? What does that have to say in metaphysics? That is, generalizations empower prognostication and the discerning of patterns, and they streamline our examination of the environment that we seem to be embedded in. Now, in the realm of quantum mechanics, generalizations take on a specific significance. Given that we talk about probability and uncertainty, both in these videos, which you're seeing on screen now, and in this conversation with Daniel, it's fruitful to explore one powerful generalization of probabilities that bridges classical mechanics with quantum theory, called quasi-probability distributions.
Born in the early days of quantum mechanics, a quasi-probability distribution, also known as a QPD, bridges between classical and quantum theories. Around 1932, Eugene Wigner published his paper on the quantum correction for thermodynamic equilibrium, which introduced the Wigner function. What's notable here is that both position and momentum appear in this analog of the wave function, when ordinarily you choose to work in so-called momentum space or position space, but not both. To better grasp the concept, think of quasi-probability distributions as maps that encode quantum features into classical-like probability distributions. Whenever you hear the suffix "-like", you should immediately be skeptical, as space-like isn't space, and time-like isn't the same thing as time.
In this instance, classical-like isn't classical. There are the Kolmogorov axioms of probability, and some of them are relaxed in these quasi-probability distributions. For instance, you're allowed negative probabilities, and the distributions don't have to integrate to one; relaxing these with the Wigner function reveals some of the more peculiar aspects of quantum theory, like superposition and entanglement. The development of QPDs expanded with the Glauber-Sudarshan P-representation, introduced by Sudarshan and Glauber in 1963, and the Husimi Q-representation, introduced by Husimi in 1940. QPDs play a crucial role in quantum tomography, which allows us to reconstruct and characterize unknown quantum states.
They also maintain their invariance under symplectic transformations, preserving the structure of phase-space dynamics. You can think of this as preserving the areas of parallelograms formed by vectors in phase space. Nowadays, QPDs have ventured beyond the quantum realm, inspiring advancements in machine learning and artificial intelligence. This is called quantum machine learning, and while it's in its infancy, it may be that the next breakthrough in lowering compute lies with these kernel methods and quantum variational autoencoders. By leveraging QPDs in place of density matrices, researchers gain the ability to study quantum processes with reduced computational complexity.
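That area-preservation remark can be made concrete with a tiny sketch. The matrix and vectors below are made-up illustrative numbers, not anything from the episode: in two dimensions, any matrix with determinant 1 is symplectic, and it leaves the area of every phase-space parallelogram unchanged.

```python
# Sketch: a 2x2 map with determinant 1 (symplectic in one degree of
# freedom) preserves the area of any parallelogram in phase space.
def area(u, v):
    """Signed area of the parallelogram spanned by vectors u and v."""
    return u[0] * v[1] - u[1] * v[0]

def apply(M, w):
    """Multiply the 2x2 matrix M by the vector w."""
    return (M[0][0] * w[0] + M[0][1] * w[1],
            M[1][0] * w[0] + M[1][1] * w[1])

# A shear in phase space: determinant 1, hence area-preserving.
S = [[1.0, 0.7],
     [0.0, 1.0]]

u, v = (1.0, 0.0), (0.3, 2.0)
print(area(u, v))                      # area before the shear
print(area(apply(S, u), apply(S, v)))  # same area after the shear
```

The shear badly distorts the parallelogram's shape, yet the area is untouched, which is the phase-space structure that symplectic transformations preserve.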
For instance, QPDs have been employed to create quantum-inspired optimization algorithms, like the quantum-inspired genetic algorithm (QGA), which incorporates quantum superposition to enhance search and optimization processes. Quantum variational autoencoders can be used for tasks such as quantum state compression and quantum generative models, as well as quantum error mitigation. The whole point of this is that there are new techniques being developed daily, and unlike the incremental change of the past, there's a probability, a low one but non-zero, that one of these will remarkably and irrevocably change the landscape of technology. So, generalizations are important.
For instance, spin and GR: general relativity is known to be the only theory that's consistent with being Lorentz invariant, having an interaction, and being spin-2. This means that if you have a field that's spin-2 and not free, so there are interactions, and it's Lorentz invariant, then general relativity pops out; you get it as a result. Now, this interacting aspect is important, because if you have a scalar, a spin-0 field, then it couples to the trace of the energy-momentum tensor, because there's nothing else for it to couple to, and it turns out that does reproduce Newton's law of gravity.
However, as soon as you add interacting relativistic matter, you don't get the bending of light. So then you think, well, let's generalize it to spin-1, and there are some problems there; then you think, well, let's generalize it to spin-3 and above, and there are some no-go theorems by Weinberg there. By the way, the problem with spin-1 is that masses would repel, for the same reason that in electromagnetism, like charges repel. Okay, other than just a handful of papers, it seems like we've covered all the necessary ground, and when there's more room to be covered, I'll cover it spasmodically throughout the podcast.
There'll be links in the description to the papers and to the other concepts that are explored. Most of the prep work for this conversation seems to be out of the way, so now, let's introduce Daniel Schmachtenberger. Welcome, valued listeners and watchers. Today, we're honored to introduce this remarkable guest, an extraordinary thinker who transcends conventional boundaries: Daniel Schmachtenberger.
So, what are the underlying causes of everything from nuclear war to environmental degradation, to animal rights issues, to class issues? What do these things have in common? As a multidisciplinary aficionado, Daniel's expertise spans complex systems theory, evolutionary dynamics, and existential risk, topics that push the forefront of academic exploration. Seamlessly melding fields such as philosophy, neuroscience, and sustainability, he offers a comprehensive understanding of our world's most pressing challenges. Really, the thing we have to shift is the economy, because perverse economic incentive is under the whole thing. There's no way, as long as you have a for-profit military-industrial complex as the largest block of the global economy, that you could ever have peace.
There's an anti-incentive on it as long as there's so much money to be made with mining, etc. Like, we have to fix the nature of economic incentives. In 2018, Daniel co-founded the Consilience Project, a groundbreaking initiative that aims to foster society-wide transformation via the synthesis of disparate domains, promoting collaboration, innovation, and something we used to call wisdom. Today's conversation delves into AI, consciousness, and morality, aligning with the themes of the TOE podcast. It may challenge your beliefs. It'll present alternative perspectives on the AI risk scenarios, while also outlining the positive cases, which are often overlooked.
Ultimately, Daniel offers a fresh outlook on the interconnectedness of reality. Say, let's get the decentralized collective intelligence of the world, having the best frameworks for understanding the most fundamental problems, as the center of the innovative focus of the creativity of the world. So, you TOE watchers: my name is Curt Jaimungal. Prepare for a captivating journey as we explore the peerless, enthralling world of Daniel Schmachtenberger. Enjoy. I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.
All right, Daniel, what have you been up to in the past few years? Past few years? Trying to understand the unfolding global situation and the trajectories towards existential and global catastrophic risk; in particular, the solutions to those that involve control mechanisms that create trajectories towards dystopias, and the consideration of what a world that is neither in the attractor basin of catastrophe nor dystopia looks like, a kind of third attractor. What would it take to have a civilization that could steward the power of exponential technology much better than we have stewarded all of our previous technological power? What would that mean in terms of culture and in terms of political economies and governance and things like that? So, thinking about those things, and acting on specific cases of near-term catastrophic risks that we were hoping to ameliorate, and helping with various projects on how to transition institutions to be more intelligent, and things like that. What are some of these near-term catastrophic risks? Well, as of today, we are in a war that has moved the Doomsday Clock closer to midnight than it has ever been. And that's a pretty obvious one. And if we were to write a book about the folly of the history of human hubris, we would get very concerned about where we are confident that we're right, where we might actually be wrong, and the consequences of it.
And as we're dealing with nukes and AI and things like that, we could easily have the last chapter in that book, if we are not more careful about confident, wrong ideas. So, what are all the assumptions in the way we're navigating that particular conflict that might not be right? What are the ways we are modeling the various sides? And what would an end state look like that is viable for the world and that, at minimum, doesn't go to a global catastrophic risk? That's an example. If we look at the domain of synthetic biology as a different kind of advanced technology, exponential tech, we see that the costs of things like gene sequencing, and then the ability to synthesize genomes, gene printing, are dropping faster than Moore's law. If open science means that the most virulent viruses possible, studied in contexts that have ethical review boards, are getting openly published, then that knowledge, combined with near-term decentralized gene printers, means decentralized catastrophe weapons, on purpose or even accidentally. There are heaps of examples in the environmental space if we look at our planetary boundaries.
Climate change is the one people have the most public awareness of. But if you look at the other planetary boundaries, like mining pollution or chemical pollution or nitrogen dead zones in oceans or biodiversity loss or species extinction, we've already passed certain tipping points. The question is how runaway those effects are. There was an article published a few months ago on PFOS and PFAS, the fluorinated surfactants, forever chemicals as they're popularly called, that found higher than EPA-allowable levels of them in rainwater all around the world, including in snowfall in Antarctica, because they actually evaporate. And we're not slowing down on the production of those, and they're endocrine disruptors and carcinogens. And that doesn't just affect humans; it affects things like the entirety of ecology and soil microorganisms. It's kind of a humongous effect.
So those are all examples. And I would say right now I know the topic of our conversation today is AI. AI is both a novel example of a possible catastrophic risk through certain types of utilization. It is also an accelerant to every category of catastrophic risk potentially.
So that one has a lot of attention at the moment. So that makes AI different than the rest that you've mentioned? Definitely. And are you focused primarily on avoiding disaster or moving towards something that's much more heavenly or positive, like a Shangri-La? So we have an assessment called the metacrisis. There's a more popular term out there right now, the polycrisis. We've been calling this the metacrisis since before coming across that term. Polycrisis is the idea that the global catastrophic risk that we all need to focus on and coordinate on is not just climate change and is not just wealth inequality and is not just kind of the breakdown of Pax Americana and the possibility of war or these species extinction issues, but it's lots of things.
There are lots of different global catastrophic risks. And they interact with each other, they're complicated, and there can even be cascades between them, right? We don't have to have climate change produce total Venusification of the earth to produce a global catastrophic risk. It just has to increase the likelihood of extreme weather events in an area. And we've already seen that happening.
Statistics on that seem quite clear. And it's not just total climate change: deforestation affecting local transpiration and heat in an area, and the total amount of pavement laid, and so on, can have an effect on extreme weather events. But extreme weather events, I mean, we saw what happened to Australia a couple years ago, when a significant percentage of a whole continent burned in a way that we don't have near-term historical precedent for. We saw the way that droughts affected the migration that led to the whole Syrian conflict, which got very close to a much larger-scale conflict. The Australia situation happened to hit a low population density area, but there are plenty of high population density areas that are getting very near the temperatures that create total crop failures, whether we're talking about India, Pakistan, Bangladesh, Nigeria, or Iran. And so if you have massive human migration (the UN currently predicts hundreds of millions of climate-mediated migrants in the next decade and a half), then it's pretty easy under those situations to have resource wars. And those can hit existing political fault lines, and then technological amplification.
And so in the past, we obviously had a lot fewer people. We only had half a billion people for the entirety of the history of the world until the Industrial Revolution. And then, with the Green Revolution and nitrogen fertilizer and oil and things like that, we went from half a billion people to 8 billion people overnight in historical timelines.
And we went from those people mostly living on local subsistence to almost all being dependent upon very complicated, six-continent-mediated supply chains. So that means there's radically more fragility in the life support systems, so that local catastrophes can turn into breakdowns of supply chains, economic effects, et cetera, that affect people very widely. So polycrisis is kind of looking at all that; metacrisis adds looking at the underlying drivers of all of them.
Why do we have all of these issues? And what would it take to solve them, not just on a point-by-point basis, but at the underlying basis? So we can see that all of these have to do with coordination failures. We can see that underneath all of them, there are things like perverse economic interests. If, in the process of the corporation selling the PFAS as a surfactant for waterproofing clothes or whatever, it also had to pay the cost to clean up its effect in the environment, or the oil company had to pay to clean up its effect on the environment, so you didn't have the perverse incentive to externalize costs onto nature's balance sheet, which nobody enforces, obviously we'd have none of those environmental issues, right? That would be a totally different situation. So can we address perverse incentive writ large? That would require fundamental changes in what we think of as economy and how we enact that, so political economy. So we think about those things.
So I would say with the metacrisis assessment, we would say that we're in a very novel position with regard to catastrophic risk, global catastrophic risk, because until World War II, there was no technology big enough to cause a global catastrophic risk as a result of dumb human choices or human failure quickly. And then with the bomb, there was. It was the beginning.
And that's a moment ago in evolutionary time, right? And if we rewind a little bit before the bomb: until the Industrial Revolution, we didn't have any technology that could have caused global catastrophic risk even cumulatively. Industrial technology, extracting stuff from nature and turning it into human stuff for a little while before turning it into pollution and trash, extracting from nature in ways that destroy the environment faster than nature can replenish it, turning it into trash and pollution faster than it can be processed, and doing exponentially more of that because it's coupled to an economy that requires exponential growth to keep up with interest: that creates an existential risk. It creates a catastrophic risk within about a few centuries of cumulative effects. And we're basically at that few-century point.
And so that's very new. Our historical systems for thinking about governance in the world didn't have to deal with those effects. We could just kind of think of the world as inexhaustible. And then, of course, when we got the bomb, we were like, all right, this is the first technology that, rather than racing to implement, we have to ensure that no one ever uses.
With all previous technologies, there was a race to implement them. It was a very different situation. But since that time, a lot more catastrophic technologies have emerged, catastrophic technologies in terms of applications of AI and synthetic biology and cyber and various things, that are way easier to build than nukes and way harder to control. When you have many actors that have access to many different types of catastrophic technology that can't be monitored, you don't get mutually assured destruction and those types of safeties. So we'd say that we're in a situation where the catastrophic risk landscape is novel. Nothing in history has been anything like it.
And the current trajectory doesn't look awesome for making it through. What it would take to make it through actually requires changing those underlying coordination structures of humanity very deeply. So I don't see a model where we do make it through those where it doesn't also become a whole lot more awesome.
And that's why we say the only other option is, to control for catastrophes, you can try to put in very strong control provisions. Okay, so now, unlike in the past, people could, or pretty soon will be able to, have gene drives where they could build pandemic weapons in their basement, or drone weapons where they could take out infrastructure targets, or now AI weapons, even easier. We can't let that happen. So we need ubiquitous surveillance to know what everybody's doing in their basement, because if we don't, then the world is unacceptably fragile. So we can see catastrophes or dystopias, right? Because most versions of ubiquitous surveillance are pretty terrible. And so, if you don't control decentralized action, the current decentralized action is moving towards planetary boundaries and conflict and so on.
If you control it, then who, and what, are the checks and balances on that control? Sorry, what do you mean by control decentralized actions? So when we look at what causes catastrophe: when we're talking about environmental issues, there's not one group that is taking all the fish out of the ocean, or causing species extinction, or doing all the pollution; there's a decentralized incentive that lots of companies share to do those things. So nobody's intentionally trying to remove all the fish from the ocean; they're trying to meet an economic incentive that they have that's associated with fishing, but the cumulative effect of that is overfishing the ocean, right? So if there's a decentralized set of activity where the lack of coordination, everybody pursuing their own near-term optimum, creates a shitty long-term global minimum for everybody, a long-term bad outcome for everybody, and you try to create some centralized control against that, that's a lot of centralized power. And where are the checks and balances on that power? Otherwise, how do you create decentralized coordination? And similarly, in an age where terrorism can get exponential technologies, and you don't want exponentially empowered terrorism with catastrophe weapons for everyone, being able to see what's being developed ahead of time, does that require a degree of surveillance that nobody wants, in order to ensure those things don't happen? Do you know what I mean? So, how do you prevent the catastrophes, if the catastrophes are currently the result of the human motivational landscape in a decentralized way, and if the solution is a centralized method powerful enough to do it, where are the checks and balances on that power? So a future that is neither cascading catastrophes nor controlled dystopias is the one that we're interested in.
And so, yes, I would say the whole focus is that. This is where AI comes back into the topic, because a lot of people see possibilities for a very protopian future with AI, where it can help solve coordination issues and solve lots of resource allocation issues. It can also make the catastrophes worse and the dystopias worse. It's actually kind of unique in being able to make both of those things more powerful. Can you explain what you mean when you say that the negative externalities are coupled to an economy that depends on exponential growth? Yeah.
So, if you think about it in just a first-principles way, the idea is supposed to be something like: there are real goods and services that people want, that improve their life, that we care about. The services might not be physical goods directly; they might be things humans are doing, but they still depend upon lots of goods, right? If you are going to provide a consultation over a Zoom meeting, you have to have laptops and satellites and power lines and mining and all those things. So you can't separate the service industry from the goods industry.
So there's physical stuff that we want. And to mediate the access to it and the exchange of it, we think about it through a currency. So it's supposed to be that there's this physical stuff, and the currency is a way of mediating the incentives and exchange of it. But the currency starts to gain its own physics, right? We make a currency that has no intrinsic value, that is just representative of any kind of value we could want. But the moment we do something like interest, where we're now exponentiating the monetary supply independent of an actual growth of goods or services, then to not debase the value of the currency, you have to also exponentiate the total amount of goods and services. And everybody's seen how compounding interest works, right? You have a particular amount of interest, and then you have interest on that amount of interest, so you do get an exponential curve.
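That compounding is easy to see numerically. This is a toy sketch with made-up figures (a 5% rate over 50 periods), not anything quoted in the conversation:

```python
# Toy sketch of compound interest: a fixed percentage rule produces
# exponential growth, because interest accrues on prior interest.
principal = 100.0
rate = 0.05  # assumed illustrative interest rate per period

balance = principal
for _ in range(50):
    balance *= 1 + rate  # interest on the interest, every period

# After 50 periods the balance is principal * 1.05**50, more than
# eleven times the starting amount, from a modest-looking 5% rule.
print(round(balance, 2))
```

This is the core of the point above: if the money supply compounds like this while real goods and services don't, the currency debases, so the system pressures the real economy to grow exponentially as well.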
Obviously that's just the beginning. Financial services as a whole, and all of the dynamics where you have money making money on money, mean that you expand the monetary supply on an exponential curve. That was based on the idea that there is a natural exponential curve of population anyway, and a correlated natural growth of goods and services. But that was true at an early part of a curve that was supposed to be an S curve, right? So we have an exponential curve that should inflect and go into an S curve, but we don't have the S curve part of the financial system planned. The financial system has to keep doing exponential growth or it breaks. And not only is that key to the financial system, because what does it mean to have a financial system without interest? It's a very deeply different system.
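The divergence between an exponential curve and an S curve can be made concrete with a quick comparison (a sketch with parameters I chose for illustration, not anything from the transcript): the two are nearly indistinguishable early on, but the logistic one saturates at a carrying capacity while the exponential one runs away.

```python
import math

def exponential(t: float, r: float = 0.1) -> float:
    # Pure exponential growth starting at 1.
    return math.exp(r * t)

def logistic(t: float, r: float = 0.1, K: float = 100.0) -> float:
    # Logistic (S-curve) growth starting at 1, saturating at capacity K.
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

for t in (0, 20, 60, 100):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Early on (t = 0 or 20) the two curves track each other closely; by t = 100 the exponential is in the tens of thousands while the logistic has flattened just under its capacity of 100. A financial system calibrated to the early part of the curve, with no plan for the flat part, is the mismatch being described.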
Formalizing that was also key to our solution for not having World War III, right? The history of the world in terms of war does not look great: the major empires and major nations don't stay out of violent conflict with each other very long. World War I was supposed to be the war that ended all wars, but it wasn't. We had World War II. Now this one really has to be the war that ends all major superpower wars, because of the bomb. We can't do that again. And the primary basis of wars, or one of the primary bases, had been resources: a particular empire wanted to grow and get more stuff.
And that meant taking it from somebody else. So the idea was: if we could exponentially grow global GDP, everybody could have more without taking each other's stuff. It's so highly positive-sum that we don't have to go zero-sum into war. So the whole post-World War II banking system, the Bretton Woods monetary system, et cetera, was part of the "how do we not have world war" answer, along with mutually assured destruction, the UN, and other international intergovernmental organizations. But that "let's exponentially grow the monetary system" also meant that if you have a whole bunch more dollars and you don't have more goods and services, the dollars become worth less, and it's just inflation and debasing the currency.
So now you have an artificial incentive to keep growing the physical economy, which also means the materials economy has to turn an exponential amount of nature into stuff, and then into trash and pollution, in a linear materials economy. And you don't get to do that exponentially on a finite biosphere forever. So the economy is tied to interest, and that's at the root of what you just explained, though not at the root of every catastrophe. Interest is the beginning of what all of the financial services do, but there's an embedded growth obligation, of which interest is the first thing you can see in the economic system. The embedded growth obligation that creates exponentiation, tied to a physical world where exponential curves don't get to run forever, is one of the problems. There are a handful.
This is where we're thinking about the metacrisis. What are the underlying issues? This is one of them, and there are quite a few others we can look at to say: if we really want to address the issues, we have to address them at this level. What's the issue with transitioning from something that's exponential to sub-exponential when it comes to the economy? Well, I mean, there's a bunch of ways we could go. There is an old refrain from the hippie days that seems very obvious as soon as anyone thinks about it, which is that you can't run an exponential growth system on a finite planet forever. That seems kind of obvious and intuitive.
Because it's so obvious and intuitive, there are a lot of counters to it. One counter is: we're not going to run it on a finite planet forever. We're going to become an interplanetary species, mine asteroids, Dyson-sphere the sun, blah, blah, blah. I don't think that we are anywhere near close, independent of the ethical or aesthetic argument about whether us obliterating our planet's carrying capacity and then exporting that to the rest of the universe is a good or lovely idea or not.
The timelines by which that could actually meet the growing needs of the humanity superorganism, relative to the timelines on which this thing starts failing, don't work. So that's not an answer. And the attempt to even try to get there quicker is a utilization of resources here that is speeding up the breakdown here faster than it is providing alternatives. The other answer people have for why there could be exponential growth forever is "because digital," right? That more and more money is a result of software being created, of digital entertainment being created, and that there's a lot less physical impact from that. So we can keep growing digital goods because it doesn't affect the physical planet and physical supply chain.
So we can keep the exponential growth up forever. That's very much the kind of Silicon Valley take on it. Of course, that has an effect.
It does not solve the problem. And it's pretty straightforward to see why. Let's take software in particular. Does software have to run on hardware, where the computer systems and server banks and satellites and so on require massive mining, which in turn requires a financial system and police and courts to maintain the entire cybernetic system that runs all that? Yes, it does. Does a lot more compute require more of that: more atoms, adjacent services, energy? Yes.
But also, for us to consider software valuable, it's either because we're engaging with what it's doing directly, as in entertainment or education or something; but then it is interfacing with the finite resource called human attention, of which there is only so much. Or it's because, while we're not being entertained or educated by it directly, it's doing something for us that we consider valuable, which means it is doing something to the physical world. So the software is doing, say, supply chain optimization, or new modeling for how to make better transistors, or something like that.
But then it's still moving atoms around, using energy and physical space, which is a finite resource. If it is not either affecting the physical world or affecting our attention, why would we value it? We don't. So it still bottoms out on finite resources. So you can't just keep producing an infinite amount of software, where you get more and more content that nobody has time to watch, and more and more designs for physical things that we don't have the physical atoms or energy for.
You get a diminishing return on the value of it, because its value is in modulating things that are themselves finite. So there's a coupling coefficient there.
You still don't get an exponential curve. So what we just did is take the old hippie refrain, that you can't run an exponential economy on a finite planet forever, and show that the counters to it don't hold. What about mind uploading, or some computer-brain interface that allows us to have exponentially more attention? Yeah, so that's almost like the hybrid of the other two, right? Get beyond this planet and do it more digitally: get beyond this brain and become digital gods in the singularity universe.
Again, I think there are pretty interesting arguments we can have ethically, aesthetically, and epistemically about why that is neither possible nor desirable. But independent of those, I don't think it's anywhere close. And the same as with the multi-planetary species: it is nowhere near close enough to address any of the timelines we have by which the economy has to change, because the growth imperative on the economy, as it is, is moving us towards catastrophic tipping points.
So if it were close, would that change your assessment, or do you still have other issues? If it were close, then we would first have to say that we have a good reason to think it's possible, right? And that means all the axioms: that consciousness is substrate-independent, that consciousness is purely a function of compute, that strong computationalism holds, that we could map the states of the brain (and, if we believe in embodied cognition, the physiology) adequately enough to represent that informational system on some other substrate, that it could operate with an amount of energy and a substrate that's possible, blah, blah, blah. So first we have to believe that's possible. I would question literally every one of the axioms or assumptions I just listed.
We're going to get to that. Then we would ask: is it desirable, and how do we know that ahead of time? And now you get something very much like the question of how I know that the AI is sentient, which, for the most part on all AI risk topics, is irrelevant; whether it does stuff is all that matters. But how do you tell if it's sentient? All of the Chalmers, p-zombie questions are actually really hard, because what we're asking is how we can use third-person observation to infer something about the nature of first-person experience, given the ontological difference between them. So how would we know that that future is desirable? Are there safe-to-fail tests, and what would we have to test to know, before starting to make that conversion? But I don't think we have to answer any of those questions, because I don't think anybody working on whole brain emulation thinks we are close enough for it to address the timeline of the economy issues that you're raising.
Let's attempt to address one of the questions about substrate independence. What are your views? Is consciousness something that our biological brains do, something that requires development from an embryonic stage, whatever it is that produced us, such that there's something special about us or animals? Or is it something that can be transferred, or booted up from scratch, into what's decidedly not us, like a computer? Okay. So this is now much more a proper theory-of-everything conversation than the topic we intended for the day, which is AI risk. So what I will do is briefly state the conclusion of my thoughts on this without actually going into it in depth, but I would be happy to explore it at some point.
I think that how I come to my position on it, to try to do a kind of proper construction, takes a while. So briefly I'll say: I'm not a strong computationalist, meaning I don't believe that mind, universe, sentience, qualia are purely a function of computation.
I am also not an emergent physicalist who believes that consciousness is an epiphenomenon of non-conscious physics. To distinguish: we have weak emergence, getting more of a particular property through certain kinds of combinatorics, and strong emergence, new properties emerging out of some type of interaction where they hadn't occurred before, like a cell respirating when none of the molecules that make it up respirate. I believe in weak emergence. That happens all the time. You get more of certain qualities.
It happens in metallurgy when you combine metals, where the combined tensile strength or shear strength or whatever is more than you would expect, as a result of how the molecular lattices form. You get more of a thing of the same type. I also believe in strong emergence, where you get new types of things you didn't have before, like respiration and replication, out of parts, none of which do that. But those are all still in the domain of third-person-accessible things.
The idea of radical emergence, that you get the emergence of first person out of third person, or of third person out of first person, which is physicalism on one side and idealism on the other, I don't buy either of. I think idealism and physicalism are similar types of reductionism: they both take certain ontological assumptions to bootload their epistemology, and then get self-referential dynamics. So I don't think that if a computational system gets advanced enough, consciousness automatically pops out of it. That's one. Two, I do think that the process of a system self-organizing, where the boundary between the system and its environment, the exchange of energy and information and matter across that boundary, is an autopoietic process, is fundamentally connected to the nature of the experience of selfness, as opposed to things that are being designed and are not self-organizing.
I do believe that's fundamental to the nature of things that have self-other recognition. And on substrate independence, I do believe that carbon and silicon are different in pretty fundamental ways that don't orient to the same types of possibilities. And I think that that's actually pretty important to the AI risk argument.
So I'll just go ahead and say those things. I also believe that embodied cognition, in the Damasio sense, is important, and that a scan of purely brain states is insufficient. Beyond that, I don't think a scan of brain states is possible even in theory. And... Sorry to interrupt. I know you said you don't believe it's possible.
What if it is, and you're able to scan your brain state and body state, so we take into account the embodied cognition? Sure. So I think that, okay, it's not simply a matter of scanning the brain state. We need to scan the rest of the central nervous system.
No, we also have to get the peripheral nervous system. No, we have to get the endocrine system. No, all of the cells have the production of and reception of neuroendocrine-type things. We have to scan the whole thing. Does that then extend to the microbiome, virome, et cetera? I would argue yes. Does it then extend to the environment? I would argue yes.
Where that extension stops is actually a very important question. So I would take embodied cognition a step further. The other thing is Stuart Kauffman's arguments about quantum amplification to the mesoscopic level: that quantum mechanical events don't just fully cancel themselves out.
Between the subatomic level and the level of brains, not everything that is happening is straightforwardly classical; there are quantum mechanical phenomena, i.e., some fundamental kind of indeterminism built in, and they end up affecting what happens at the level of molecules. Now, one can say, well, does that just mean we have to add a certain amount of a random function in, or is there something else? This is a big rabbit hole, I would say, for another time, because then you get into quantum entanglement and coherence. So you get something that is neither perfectly random, meaning without pattern (you get a Born distribution even on a single one), but it's also not deterministic or with hidden variables.
So do I think that what's happening in the brain-body system is not purely deterministic, and as a result could not be measured or scanned even in principle, in a kind of Heisenberg sense? Yes, I think that. Have you heard of David Wolpert and his limits on inference machines? I have not studied his work. Okay. Well, anyway, he echoes something similar, which is that you can't have Laplace's demon even in a classical world. So let me talk about the economy.
Which could only happen on your podcast. Why is it that if somehow this exponential curve starts to get to the top of the S, the halting or the slowing down of the economy is something so catastrophic and calamitous, rather than something that would mutate? Why couldn't we, at that point, as it starts to slow down, just make minor changes here and there? Is this something that's entirely new? Like, will it all come crashing down? Um, okay. So... Let me make the question clear. It sounds like: look, the economy is tied to exponential growth.
We can't grow exponentially forever; virtually no one believes we can. So at some point, and let's just imagine it's three decades from now, just to give some numbers, this exponential curve for the whole economy will start to weaken, and we'll see that it's nearing the S part.
So what? Does that mean that there's fire in the streets, that the buildings don't work, that the water doesn't run anymore? Like, what will happen? Okay. So people often make jokes about physicists in particular starting to look at biology and language and society, and modeling them in particularly funny, reductionist ways, because they try to map the entire economy through the second law of thermodynamics or something like that. And that's because what we're really talking about here is the most complex, most deeply embedded complexity we can talk about, because we're talking about all of human motives.
And how do humans respond to the idea that there are fundamental limits on the growth possible to them, or that there's less stuff possible for them, whether or not it's framed as issues associated with environmental extraction? So here's one of the classic challenges: the problems, the catastrophic risks, many of them in the environmental category, are the result of long-term cumulative action, while the upsides are the result of short-term individual action. And the asymmetry between those is particularly problematic. That's why you get this collective choice-making challenge. Meaning, if I cut down a tree for timber, I don't perceive any obvious change to the atmosphere or to the climate or to watersheds or to anything. But my bank account goes up immediately through being able to sell that lumber.
And the same is true if I fish or do anything like that. But when you run the Kantian categorical imperative across it, and you have the movement from half a billion people doing it pre-Industrial Revolution to 8 billion, and you have something like 100x per-capita resource consumption in the industrial world, just calorically measured, compared to the beginning of the Industrial Revolution, then you start realizing, okay, the cumulative effects of that don't work. They break the planet, and they start creating tipping points that auto-propagate in the wrong direction.
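The arithmetic behind that claim is simple to check (the population and per-capita figures are the ones cited in the conversation; the multiplication is mine):

```python
# Rough arithmetic on total resource throughput, pre-industrial vs. today.
pre_industrial_pop = 0.5e9   # ~half a billion people
current_pop = 8e9            # ~8 billion people
per_capita_multiplier = 100  # ~100x per-capita consumption (as cited)

total_multiplier = (current_pop / pre_industrial_pop) * per_capita_multiplier
print(total_multiplier)  # -> 1600.0
```

Sixteen times the people, each consuming on the order of a hundred times more, is roughly a 1600-fold increase in total throughput on the same finite biosphere.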
But no individual person, or even local area, doing the thing recognizes their action as driving that downside. And how do you get global enforcement of the thing? If you don't get global enforcement, why should anyone let themselves be curtailed when other people aren't being curtailed, which would give those others a game-theoretic advantage? So there's actually a handful of asymmetries that are important to understand with regard to risk. All right. We've covered plenty so far, so it's fruitful to have a brief summary. We've talked about the faulty foundation of our monetary system.
Daniel argues that, especially post-World War II, our economic system has not only encouraged but depended on exponential monetary growth, and that this can't continue indefinitely. We've also talked about the digital escape plan and how it is an illusion, at least in Daniel's eyes. He believes that digital growth has physical costs, because its hardware and the human attention it consumes are finite resources, or linear resources as he calls them, though I have my issues with the term linear resource, because technically anything is linear when measured against itself.
We've also talked about how moving to Mars won't save us, us being civilization. Daniel believes that the idea of becoming an interplanetary species to escape resource limitations is unrealistic, perhaps even ethically questionable. We've also talked about how mind uploading is not all it's cracked up to be. It may never occur; and even if it's feasible, Daniel believes it to be undesirable.
Another resource under pressure as we expand our digital footprint is privacy. You can see this being recognized even by OpenAI, as they recently announced an incognito mode. And this is where our sponsor comes in. Do you ever get the feeling that your internet provider knows more about you than your own mother? It's like they're in your head; they can predict your next move. When I'm researching complicated physics topics, or checking the latest news, or just anything in general that I want privacy on, I don't want to have to go and research which VPN is best.
I don't want to be bothered by that. Well, I, and you, can put those fears to rest with Private Internet Access, a VPN provider that's got your back. With over 30 million downloads, they're the real deal when it comes to keeping your online activity private. And they've got apps for every operating system.
You can protect 10 of your devices at once, even if you're unfortunate enough, like me, to love Windows. And if you're worried about strange items popping up in your search history, don't worry, I'm not judging. Private Internet Access comes in here: they encrypt your connection and mask your IP address, so your ISP doesn't have access to those strange items in your history.
They make you a ghost online. It's like Batman's cave for your browsing history. With Private Internet Access, you can keep your odd internet searches, let's say, on the down-low. It's like having your own personal confessional booth, except you never need to talk to a priest. So why wait? Head over to piavpn.com slash TOE, T-O-E, and get yourself an 82% discount. That's less than the price of a coffee per month.
And let's face it, your online privacy is worth way more than a latte. That's piavpn.com slash T-O-E. Go now and get the protection you deserve. Brilliant is a place where there are bite-sized interactive learning experiences for science, engineering, and mathematics.
Artificial intelligence in its current form uses machine learning, which often uses neural nets. And there are several courses on Brilliant's website teaching you the concepts underlying neural nets and computation in an extremely intuitive, interactive manner, which is unlike almost any of the tutorials out there. They quiz you. I personally took the courses on random variable distributions and on knowledge and uncertainty, because I wanted to learn more about entropy, especially as there may be a video coming out on entropy. You can also learn group theory on their website, which underlies physics; that is, SU(3) cross SU(2) cross U(1) is the Standard Model gauge group. Visit brilliant.org slash TOE, T-O-E, to get 20% off your annual premium subscription. As usual, I recommend you don't stop before four lessons.
You have to just get wet. You have to try it out. I think you'll be greatly surprised at the ease at which you can now comprehend subjects you previously had a difficult time grokking. The bad is the material from which the good may learn.
So there's actually a handful of asymmetries that are important to understand with regard to risk. One is the one I'm describing: you have risks that are the result of long-term cumulative action, so you actually have to change individual action to address them, but the upside, the benefit, is realized directly by the individual making that action.
And so this is a classic tragedy-of-the-commons type issue, right? The tragedy of the commons at not just local scales, but at global scales. Another of the asymmetries that is particularly important: people who focus on the upside, who focus on opportunity, do better game-theoretically, for the most part, than people who focus on risk, when it comes to new technologies and advancement and progress in general. Because some people say, hey, we thought Vioxx or DDT or leaded gasoline or any number of things were good ideas, and they ended up being really bad later.
Those people want to do really good long-term safety testing regarding the first-, second-, and third-order effects of a thing. They're going to spend a lot of money, not get to market first, and then probably decide the whole thing wasn't a good idea at all. Or, if they do figure out how to do a safe version, it takes them a very long time. Whereas another person says: no, the risks aren't that bad, let me show you.
That person does a bullshit job of risk analysis as a box-checking process, really emphasizes the upsides, and gets first-mover advantage and makes all the money. They will privatize the gains and socialize the losses. Then, when the problems get revealed a long time later and are unfixable, it will have already happened. So these are just examples of some of the choice-making asymmetries that are significant for understanding the situation. I only partly answered your question.
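The asymmetry can be put as a toy payoff comparison (all numbers here are hypothetical, chosen only to illustrate the privatize-gains, socialize-losses structure):

```python
# Toy payoff sketch of the first-mover asymmetry. The careless actor
# captures large gains while most of the harm lands on society, so its
# private payoff beats the careful actor's even though total welfare
# is lower.

def private_payoff(gain: float, private_share_of_harm: float) -> float:
    return gain - private_share_of_harm

# Careful actor: thorough safety testing, late to market, modest gains,
# and negligible harm caused.
careful = private_payoff(gain=10.0, private_share_of_harm=0.0)

# Careless actor: first to market, large gains; bears only 5% of a
# much larger harm, with the rest socialized.
total_harm = 120.0
careless = private_payoff(gain=100.0, private_share_of_harm=0.05 * total_harm)

print(careful, careless)  # the careless actor wins on private payoff
```

Under these made-up numbers the careless actor nets 94 against the careful actor's 10, even though the careless strategy destroys net value overall (100 gained against 120 in total harm); that is the game-theoretic gradient being described.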
Sure. Do you have in mind a particular corporation currently? Totally. Not a particular corporation, but a particularly important consideration in the entire topic. One view is that Google is not coming out with something that's competitive; like, Bard is not competitive.
I think even Google would admit that. And so one view is that, well, they're testing heavily. Another one, which I've heard from some people behind the scenes, is that Google doesn't have anything.
They don't have anything like ChatGPT, and it's BS when they say they do. Even OpenAI doesn't know why ChatGPT, why GPT-4, works as well as it does. They just threw so much data at it, and it was a surprise to them.
And in some ways they got lucky. So do you see what's happening right now between Microsoft and Google as Google actually being the more cautious one and Microsoft being the more brazen one, and perha