Sentient Planets & World Consciousnesses

This video is sponsored by CuriosityStream. Get access to my streaming video service, Nebula, when you sign up for CuriosityStream using the link in the description. Imagine for a moment an entire planet as one gigantic brain, a single functioning mind of unbelievable scope. Now imagine that was your mind. So it's another SciFi Sunday here on Science and Futurism with Isaac Arthur, where we pick concepts regularly seen in science fiction and look at how scientifically realistic they are, what variations or parallels might be allowed in science, and what other unexpected things we might see as a result. For today’s topic of Sentient Planets or World Brains, and the sort of massive minds these might hold, we have a bit of a twist from the norm, in that these are typically considered weird or improbable in a lot of science fiction but seem considerably more likely under known science.

Generally that’s reversed, where something in sci-fi is considered ridiculously easy, but in reality it probably isn’t even remotely possible. Such being the case, we don’t need to contemplate pure theory or fringe science to consider the concept of a world-spanning mind. Nonetheless, there are still quite a few examples in science fiction that let us examine some weirder scenarios, like an algae or fungus spanning a planet and evolving a mind, as was the case with the telepathic world-spanning fungus from the video game classic Sid Meier’s Alpha Centauri. And in a literary classic from Isaac Asimov’s Foundation Series, we have a colony of telepaths turned hive mind that eventually comes to incorporate animals, then plants, then even the rocks and mountains, called Gaia. There are more examples in fantasy, and the World Turtle and Elephants of Terry Pratchett’s Discworld come to mind. In comics we see Oa, from DC Comics, where the Guardians of the Universe dwell and which serves as the headquarters of the Green Lantern Corps, and Ego, the Living Planet, from Marvel Comics. Since I mentioned Marvel, we might also consider the sort

of entity that consumes planets, like the Marvel Comics villain Galactus, the Eater of Worlds. And something like a planet-sized brain might need to eat other planets to run itself, metaphorically or literally. As an interesting side note for Marvel Comics fans: while Ego appears in Guardians of the Galaxy 2 as Peter Quill’s dad, played by Kurt Russell, he’s actually an older character who, along with the Kree, got dreamed up right after Galactus’s first appearance in 1966 to form the foundation of all those off-Earth stories. And appropriately for building that mythology, Ego shows up originally in the Thor comics. But we also see some living world concepts in traditional mythology, and indeed one of Thor’s main enemies in the Norse tales is Jormungandr, a serpent encircling all of Midgard, or Earth. We also see various Earth gods, sentient spirits or deities embodying large bits of our world or of other worlds. We do after all name our planets after deities,

from Mercury to Pluto, and this is probably a good pick since, as we’ll see today, such a world brain is likely to be godlike in scale and probably wouldn’t have a hard time convincing anyone living on or visiting their world that they merit such a title. Another trope we often see in both mythology and science fiction is that of people living on giant celestial corpses. In Norse mythology Ymir gets killed by the gods and dissected to form the world, and similar fates befall Pangu of Chinese mythology and Tiamat of Mesopotamia, while yet another sea monster is slain in Aztec mythology to create the land. The mining colony of Knowhere from Guardians of the Galaxy 1 is the severed head of a dead god, and we see many smaller examples of colonies built on living or dead giant organisms or space whales. I opted to call this episode Sentient Planets & World Consciousnesses in part because while the term planet has a fairly specific definition these days, on this show we tend to use World as our catchall for everything, be it a terraformed moon or a ShellWorld around a Gas Giant or even non-spherical options like artificial disc or donut shaped planets, larger rotating habitats, and potentially non-Euclidean realms like virtual realities or pocket universes.

Indeed generating vast virtual universes might be a major activity of planet-sized brains, and it is quite likely we might build or bioengineer lifeforms that were essentially living asteroids or that treated one of our megastructures as its body, be it a little O’Neill Cylinder the size of a county, a Jupiter Brain or Matrioshka Brain, or even multi-light-year-long or wide Topopolises or Birch Planets. Generally, these will fall into three major categories. First we have the naturally occurring ones, such as an evolved planet, though this could include a Boltzmann Brain. Indeed a Boltzmann Brain is what Ego is described as in the MCU, and that leads us to our second category, which we’ll define as a living being that starts out small and grows into a more massive mind and object. Our third category is the artificial case, where someone has created a sentient mind, which might be an artificial intelligence running and maintaining a habitat or might be an entirely or nominally biological entity, like a space whale. Needless to say there’s some room for overlap between those. For instance, you might engineer big old sandworms for tunneling out asteroids, moons, or even planets and refining metal in them, and those might grow or evolve to be even bigger organisms, or form cocoons inside that world and emerge like butterflies, a concept played with in Doctor Who in regard to our own Moon.

Also, we’ve got the Skynet example, where some big computer mind might be engineered but escapes control and evolves or builds itself bigger. Indeed, that is often assumed to be what a rogue AI will do the moment it slips its leash: cannibalize everything around it to make more and more computer parts to expand its mind and abilities until it absorbs its entire planet into one massive computer brain. Does that approach make sense, incidentally – turning sentient, then gobbling up your whole world and your creators? Well, kinda-sorta. First, it really depends on your available tech and what it does. If you’ve got the blueprints for a fusion reactor and access to 3D printing or self-replication, you are probably better off grabbing enough resources to launch into space and get out deep, where heat from the Sun is less of an issue, and cannibalize asteroids and smaller moons rather than Earth. Folks tend to get hooked on the notion of exponential doubling when talking about AI or grey goo run amok, but you almost always get limiting factors in the environment preventing that.

Heat definitely makes disassembling a planet a slow process, thousands of years at best, not months or days; see our episode on disassembling the solar system for discussion of that. But heat is not the only growth limiter. For instance, if light speed is an unbreakable limit, you can only double as fast as your ships can reach double your resources, which means some expanding globe of self-replicating machines that’s taken over a 10 light year bubble of space must grow its radius by about 31 more light-months to double the volume it controls, so 31 months minimum to double again, not a few seconds, and one at 1000 light years would need 260 years to double again on average. Here down on Earth your limit isn’t just getting energy but getting rid of heat, and that’s quite hard. It’s hard in space too, but there gravity and friction are no longer limiting factors on how you build your energy and heat collectors, distributors, and radiators. There might be an optimal point where it is logical to get the heck off Earth to continue your growth. Trying to pin down where that is is beyond me; it’s too dependent on too many unknown factors.
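Those doubling-time figures are simple geometry: to double the volume of a sphere, its radius has to grow by the cube root of two, and that extra radius has to be crossed at no faster than light speed. A quick sketch, assuming the frontier expands right at c:

```python
def doubling_time_years(radius_ly: float) -> float:
    """Minimum years for a light-speed-limited sphere of replicators of
    radius `radius_ly` light-years to double its claimed volume."""
    growth = 2 ** (1 / 3)              # cube root of 2: ~1.26x radius growth
    return radius_ly * (growth - 1)    # extra light-years = years at c

print(doubling_time_years(10) * 12)    # ~31 light-months of new radius
print(doubling_time_years(1000))       # ~260 years
```

So the bigger the bubble already is, the slower its percentage growth: exponential doubling quietly turns into merely cubic growth once light lag dominates.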

It might be that it needs to seize all the raw materials in one lone trash bin to make its rocket to new worlds, or to cannibalize a continent, which is the difference between us barely noticing and a massive conflict – a war of obliteration with humanity versus it just firing a lone rocket into space carrying its replicating gear. Or for that matter, it might just cut a deal for legal personhood and purchase all the components and locations for colonization openly and legally. This is likely to be the fastest method, as cooperating with existing producers represents a jumpstart, versus devoting resources to fighting them. Also, the common scifi notion of newborn Technological Singularities switching on and instantly knowing more and discovering new things is exactly that: scifi.

Something often left out of discussions of intelligences that can make themselves more intelligent is the notion of diminishing returns. For example, try putting a dollar in a bowl, then every ten minutes add more money, half as much as you did last time. It’s true you’ll be getting richer every ten minutes, forever – but it’s also true that you’ll never have two whole dollars. We’ve been intelligent for millennia, millions of us in number, and we have definitely put effort into making smarter humans… our success at this has been limited. There’s no reason to think any brain, no matter how massive, can just flip on and ten minutes later have a deeper understanding of the Universe than we do either; it still needs to run experiments. I would like to devote more of this episode to naturally occurring planet brains, but

superintelligent planetary computers are a real probability in our future and would obviously be massively capable, so let’s quickly discuss the three types of superintelligence, as defined by philosopher Nick Bostrom. Those are Speed Superintelligence, Networked Superintelligence, and Quality Superintelligence. Speed Superintelligence is the greater intelligence you would nominally have if we just put you in an accelerated bubble of time so a day passed for you for every hour for everyone else. This can obviously be handy, amazingly so if we’re contemplating reaction times in combat or market trading, but it’s not that huge an edge for science. Indeed it’s not even the equivalent of getting 20 years of research done in a year, because so much of that is delayed by real-world factors. We’re not stymied in our attempts to make fusion work or figure out new particles by our brain power; we have to wait patiently for someone to come up with a decent experimental test of a theory, get the thing funded, and built, and run it, then cogitate on the results and new anomalies. Speed Superintelligence is the easiest for a bigger brain to have,

more chips, neurons, what have you, but it’s also not super useful for new science or deep thoughts. Next is networked intelligence, the idea that several people or computers having their minds connected in some fashion makes them much smarter. Like thinking faster, this definitely has some advantages – two heads are better than one for spotting errors, for instance – though it can also introduce problems with coordinating action. Also, ten monkeys having their brains

laced together results in considerably more brains in total than you or I have, but probably results in no better capacity for deep thoughts. We should not assume simply adding more processors or more computers linked together adds greater capacity for revolutionary thinking. You also start getting lag issues – communication time between brain components – and as we’ll see in a bit, this is a big issue for planet-sized intellects.
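That point about more processors not buying deeper thoughts is the same diminishing-returns story as the dollar-in-a-bowl example from a moment ago, which is just a converging geometric series; as a toy sketch:

```python
# Add half as much money as last time, every "ten minutes":
# 1 + 1/2 + 1/4 + ... keeps growing forever yet never reaches $2.
total, deposit = 0.0, 1.0
for _ in range(50):      # 50 deposits; more steps change nothing
    total += deposit
    deposit /= 2

print(total)   # 1.999999..., strictly below 2 no matter how many steps
```

Each extra increment of capability buys less than the one before it, which is why simply piling on hardware is no royal road to revolutionary thinking.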

That somewhat nebulous area of deeper thoughts is our third type, incidentally: Quality Superintelligence. To quote Bostrom’s book definition, “a quality superintelligence can carry out intellectual tasks that humans just can't in practice, without necessarily being better or faster at the things humans can do. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities.”

Incidentally, a planet brain might not be terribly clever. We often discuss mind augmentation on this show and how signals between neurons in our brain move slower than the speed of sound, which is a million times slower than the speed of light. As a result, one way to make a speed superintelligence is to replace or augment those neurons so that the transmission occurred optically at light speed, making you think millions of times faster. I will sometimes point out that if you did this, and spread those hundred billion neurons of the human brain – modified to light speed now – out to take up a volume the size of a planet rather than of your noggin, the signal lag would be comparable to that in your current meat-brain. So a planet-brain of only 100 billion neurons operating at light speed is just human intelligence. Obviously it could pack in a lot more than 100 billion neurons, though its analog for neurons might be something big like a giant crystalline rock-neuron or tree.
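The rough numbers behind that lag argument can be sketched out; the figures are all assumed order-of-magnitude inputs: a ~0.15 m brain, Earth’s ~12,742 km diameter, and ~100 m/s for fast myelinated nerve conduction:

```python
BRAIN_D = 0.15        # m, rough human brain diameter (assumed)
EARTH_D = 1.2742e7    # m, Earth's diameter
NERVE_V = 100.0       # m/s, fast nerve conduction speed (assumed)
LIGHT_V = 3.0e8       # m/s, speed of light

lag_meat  = BRAIN_D / NERVE_V   # ~1.5 ms one-way across your head
lag_light = EARTH_D / LIGHT_V   # ~42 ms one-way across a planet at c
slowdown  = EARTH_D / BRAIN_D   # ~8.5e7: same signal speed, planet spacing

print(lag_meat, lag_light, slowdown)
```

At light speed a planet-wide brain’s crossing lag stays within a couple of orders of magnitude of a meat brain’s, but at nerve speed every lag stretches by that ~10^8 factor, so sub-second thoughts balloon into months or years.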

But let us instead imagine it only operated at human-brain neuron transmission speeds. Now those 100 billion neurons are taking a year to have the same thoughts you and I have in a few seconds. This is part of why I have been saying sentient planets rather than sapient or superintelligent. Sentient often gets used, even by me, as a synonym for intelligence or consciousness, and the definitions are fairly loose and variable, but generally in these conversations we reserve sapience for that line between humans and smart animals, while sentience is broader – arguably as broad as the capacity to feel, which one might argue even a tree has in its ability to sense and adapt to light or other environmental conditions. That’s a bit broad in my opinion, compared to the ability to experience sensations, but remember that example when we get back to discussing the idea of a naturally occurring planet brain. Consciousness is similarly hard to

pin down as a concept, and is sometimes synonymous with sentience or awareness or selfhood. You think therefore You Am, and you are aware You Am. And this raises an interesting point. That Marvel Comics example, Ego the Living Planet, is a particularly appropriate name, as we would have to ask when some simple evolving network of a mind reached the point of having thoughts and self-awareness – of developing an ego. If you didn’t know, Ego is Latin for “I” or “the self”, and there’d have to be a point where our emerging worldbrain had developed that ego.

It also raises the big point of why it did develop consciousness or selfhood, and what the survival advantage was, if any. The biggest organism on Earth is not the blue whale, nor does the honor belong to some now-extinct megafauna of land or sea. The definition of organism can be a bit debated, but the leading candidate for biggest organism is usually a type of parasitic honey fungus we find in Oregon. It’s a colony organism, and in this case the biggest known example stretches almost 4 kilometers across a piece of the Blue Mountains. It may not be the biggest either; we identify it by examining the various trees it’s

killed and how far apart they are. It’s also ancient, between two and ten thousand years old. This is maybe an interesting analogy for a planet brain, because it would not be that hard to imagine it growing bigger, nor that it might be able to send signals back and forth, be it electrically or with chemicals such as pheromones. Indeed on Earth many colony organisms or insect hives – which can be a blurry distinction at times – communicate between components in a way that transmits information and could be considered basic thought. Kinda-sorta, again a blurry area, but remember that most of your brain isn’t involved in deep thinking; it mostly gets used for processing sensory data and motor control, and elephants and whales have much bigger brains than us but use most of that for the same sensory processing and motor control. In other words, they use their bigger brains to control their bigger bodies, not to have bigger thoughts.

It is not that hard to imagine some big world-spanning fungus, or even one just spanning a small local biome, developing this to the point of it being analogous to a spinal cord or basic brain. And from there it seems an easy jump to thinking and feeling and pondering. We see some examples of this in fiction: Sid Meier’s Alpha Centauri video game – a personal favorite – and something pretty parallel in the belligerent alien hive mind Morning Light Mountain of Peter Hamilton’s Commonwealth Saga – another personal favorite. However, speaking of belligerence, we should ask ourselves why an alien fungal mind would develop consciousness, let alone high intelligence. We tend to assume growing intelligence in nature is a process of evolution mostly driven by predator-prey cycles, and it presumably tends to be limited to things with mobility, which can actually act on incoming sensory data by moving or acting quickly. We can imagine a big fungus like that developing sensations for light, moisture, chemical makeup and so on, but the process for acting on that or warding off predators seems a big jump. Or for that matter acting as a predator; that honey fungus is a parasite after all, and we discussed even dumb parasites evolving or seizing brain power in our Parasitic Aliens episode.

But let’s move a bit broader, since a planet brain isn’t something we’d expect to really have that predator-prey cycle operating on it, any more than we do in our modern technological period. It also might be driven by competition from kindred instead: thousands of regional or island-spanning intellects each seeking to expand into a world-brain, for instance. Though as a last note on that, we do have options besides predator-prey that might operate. Symbiotic relationships are not evolutionary dead ends or sand traps, and we could imagine something like that fungal or plant-like mind evolving symbiotic relationships with insects or birds who delivered it around, potentially even figuring out how to direct them by incentive or reward, until the relationship became as good as hands and fingers.

Keep in mind we, as humans, already have thousands of species of non-human organisms living inside us that are symbiotic with us and highly co-evolved with us, the extreme case being mitochondria. In fiction we often see alien worlds with critters humanoid in form and behavior who are linked together into some telepathic overmind, and it would be interesting to imagine what a planet-sized scale-up of human gut microbes would be. What would it be like to be one of the many specialized organisms living on a planet mind, separate from it but evolved to a specific and extremely specialized role on it? And what would be the flavor of that role? Parasite? Master and slave? Total uninvolvement? Does the world brain think of even human intellects on it like we do our own gut bacteria or fingers or blood cells, or is it more like our relationships with horses, oxen, dogs, cats, pigeons, or rats? Would it be plausible there would be room for an intelligent organism inside that overmind’s body-ecosystem that we might encounter if we visited? Or would it have an immune-system response of exterminating intelligence, since intelligence is capable of massive planetary alteration and might be viewed like a virus? And if such human-level intelligence existed on a sentient world, what would they be like? Now, speaking of encountering aliens, the Fermi Paradox is a popular topic on this show, and I often divide the various solutions given for why we don’t seem to see alien civilizations into various broad camps. The big three are, first, that they are really rare; second, that they are common enough but we can’t detect them; and third, that we can’t recognize them – and landing on a planet with sentient fungal life we didn’t know was a planet brain might be an example of that third type of Fermi Paradox solution.
On top of those three categories, each of which has multiple sub-categories – see our Fermi Paradox Compendium episode – we also have a fourth category for miscellaneous answers. However, there’s another approach with three main categories I’ve heard of as the Physicists, the Biologists, and the Historians. I’m not sure of its origin, but I first encountered it in Peter Watts’s

novel Blindsight, which incidentally was our first full Audiobook of the Month winner years back. I love that book, but I’m not sure I agree with the categorization – indeed I definitely do not – but the reasoning goes like this. The Physicist looks out at the huge Universe and says aliens are surely friendly, because with such advanced technology as is needed to travel between the stars, you have either mastered your self-destructive instincts or blown yourselves up. This is sometimes considered the Sagan Perspective. The Biologist looks at Earth and sees the non-stop pressures for constant survival, the push for growth, and the many pathways life might take besides getting intelligent, and concludes the Universe probably rarely develops technological lifeforms, because it’s not a preordained path for evolution and because any life that can spread to the stars will, and will keep doing it, so that whoever first arrives on the scene will conquer and colonize the whole galaxy before anyone else can. So the Biologist says intelligent life is rare and is belligerent and expansionist. This is often called the Hart or Hart-Tipler Conjecture, and we take it a step further in our discussion of the Dyson Dilemma.

Then we get the Historian viewpoint, and I’ll quote the example from Blindsight for this: “Equidistant from the two tribes sat the Historians. They didn't have many thoughts on the probable prevalence of intelligent, spacefaring extraterrestrials. But if there are any, they said, they're not just going to be smart. They're going to be mean. The

reason wasn't merely Human history, the ongoing succession of greater technologies grinding lesser ones beneath their boots. No, the real issue was what tools are for. To the Historians, tools existed for only one reason: to force the universe into unnatural shapes. They treated nature as an enemy, they were by definition a rebellion against the way things were. Technology is a stunted thing in benign environments, it never thrived in any culture gripped by belief in natural harmony. Why invent fusion reactors if your climate is comfortable,

if your food is abundant? Why build fortresses if you have no enemies? Why force change upon a world that poses no threat?” Again, great book if you’re looking for a good read. So technology implies belligerence. By that reasoning, the human relationship with technology is inherently belligerent, because technology is always about fighting the environment around you and the status quo. Technology is invented to improve an edge or confront a challenge or danger, and inventors and civilizations using it might be seen, a bit poetically perhaps, as declaring war on the Universe and reality itself. Technology implies belligerence. But not necessarily for something like a planet mind. We assume here that a predator-prey cycle

is how intelligence arises, and it's likely to be a common path. However, let us assume a big parasitic fungus managed to spread over an entire planet, having adapted to eat nearly everything. It may begin adapting to create poisons to kill off anything else intelligent – say it develops a neurotoxin its spores emit – because intelligent critters might harm it, and it needs no animals smarter than insects, perhaps keeping those for pollination of itself or other plants it feeds on, and it may not even need them. It has developed a certain basic awareness of its environment, in the sense of day and night and temperature, and it can spread that signal, and we’ll say it does so at roughly the speed of sound, parallel to human neurons. So if its air-exposed elements, something like a flower or mushroom, have a sunny-day form and a rainy form, closing during the one, or perhaps even retracting into the ground, it obviously helps to be able to send a signal to those nearby that you’re getting rained on or blown by strong winds. And since this probably takes some energy or effort, this may evolve with time to send the signal only in the direction soon to be affected, and over time to be able to calculate weather predictions and seasonal or solar variations.

Nothing in that, incidentally, would even vaguely imply intelligence, just processing power, but it’s a possible pathway to higher intelligence that is not predator-prey, and we can also see it as a way it might seek to evolve an ability to more strongly alter its environment. Though that latter might seem an example of our comment a moment ago that technology implies belligerence: it’s seeking to improve its state intentionally. Now the fungus example is an interesting one, but the insect hive mind might be a better one. Here we have critters who already have a complex system of signaling and coordination that requires no physical biological connection, so it is not too hard to imagine that expanding in size or complexity. However, it is worth noting the big constraint on hives: how fast they can lay eggs.

Hives have a single queen, so evolution optimizes them to produce eggs super-quickly and nothing else. Queen bees are literally just stripped-down egg-layers, utterly incapable of surviving on their own, though the whole hive is very task-built for most hive species. Indeed this appears to be an example of convergent evolution – which we’re having an episode on soon – to have a single egg-layer or queen, with all the rest of the hive descending from her and not allowing multiple queens to exist. While that’s the case, no hive could plausibly spread out to control a world; even if the insects were human-sized, eggs could still be laid once per second, and the workers lived a century, that’s a hard cap for a unified hive. Now there are presumably ways a hive species could evolve to have coexisting queens, which I suppose would be viceroys or some node system of baronesses, but it is also possible the emerging planetary intelligence was not the hive, with each insect acting like a neuron, but rather used the hives, or something they created, as each neuron, and composed itself of billions of such hives, possibly even multiple species.
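That egg-laying cap is easy to put numbers on, using the deliberately generous assumptions above of one egg per second and century-long workers:

```python
SECONDS_PER_YEAR = 3.156e7

eggs_per_second = 1.0      # generous: real queens manage far less
worker_lifespan = 100.0    # years, also generous

# Steady-state population = production rate x lifespan:
max_workers = eggs_per_second * worker_lifespan * SECONDS_PER_YEAR
print(f"{max_workers:.1e}")   # ~3.2e9 workers at most
```

Roughly three billion workers is well short of the ~86 billion neurons in a single human brain, let alone a planet’s worth, which is why using whole hives as the neurons scales so much better than using individual insects.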

We see a potential early form of that in Alastair Reynolds’s short story Glacial, where the glacier itself is being tunneled by simple, stupid worms and the tunnels are becoming a neural network, one which might one day evolve into a single vast mind over an icy world. We might also imagine something’s brain architecture being not the organisms themselves but the corridors or corals they built, and indeed brain coral seems like an interesting conceptual example of that. Brain coral is cerebral in shape but not in purpose, but coral lives a very long time, many centuries, and is an interesting example of symbiosis. The

coral itself is calcium carbonate deposited by the hermaphroditic organisms living inside it, but like most shallow-water corals they also have a symbiosis with algae living on the surface of that coral, which can still use photosynthesis at that depth to convert carbon dioxide into food or energy that the coral can siphon, and they also prey on zooplankton blown their way. Coral can get enormous, and it is not hard to imagine it getting more parasitic or symbiotic so as to start mind-controlling more mobile critters it encounters. Again, brain-controlling parasites are the stuff of nightmares in scifi but also quite common in actual nature. So we see a lot of potential pathways to that big old plant brain, and it need not be fueled by predator-prey relations, especially once it makes that jump into genuine awareness and intelligence. Albeit I think making that jump absent a predator-prey cycle is one giant and dubious leap. Now, how smart is it? An artificial one, a computer the size of a planet, is damned smart unless you were trying to avoid that. Even if we assume only modern computing

and running on only solar panels, think trillions of times bigger or better than the PCs or laptops rolling off the assembly line today, whatever that is at the time you’re watching this episode, because that’s how much power is available via sunlight to the planet’s surface. If we’re contemplating something running at the Landauer Limit – the theoretical lower limit of classic computation per unit of energy that we discuss as the constraint on ultra-advanced computing, which is partially temperature-based and is why we expect AI or digital minds to seek out cold places like deep space – then on a room-temperature planet the cost would be about 3 zeptojoules, 3 x 10^-21, or 3 billionths of a trillionth of a joule, per single bit-flip of computation. We typically estimate Earth gets about 175 petajoules of sunlight every second, 1.75 x 10^17 joules, or about 60 trillion-trillion-trillion bit-flips per second at the Landauer Limit. There’s a lot of debate about how much processing power it would take to emulate a human brain, but most estimates put it in the petaflops range, which is also where our current top supercomputers are, though those are still horribly energy-inefficient compared to the brain, needing megawatts to perform at those speeds. And if we went with 60 petaflops to emulate a human brain, then

such a solar world brain running at the Landauer Limit could run a billion-trillion such brains, comparable to the high end of what we’d expect of a crowded K2 Dyson Sphere of classic humans and more than 100 billion times the current human population’s combined brain power. Now I should point out that raw processing power and what we mean by consciousness are likely to have only a very loose relationship, and again most of your brain’s energy is devoted to simple processing of sensory data and motor control. However, it is a power glutton, and if we assumed some world brain was running with roughly human-brain efficiency globally – your brain runs on something like 20-ish watts of glucose-derived power – and if we’re assuming the roughly 1% efficiency rate of photosynthesis turning sunlight into sugar energy, then we get something like 10^14 or 100 trillion brains per planet. Obviously this is assuming an Earth-parallel case in temperature and size, and it covering the whole planetary surface, from oceans to polar icecaps to deserts – indeed such a planet brain might develop as a means of terraforming its own surface to optimal conditions. But different sizes and environments are also possible. A small icy comet brain might be more 3-dimensional, like

an actual brain, and run more efficiently by being colder, whereas some photosynthetic cloud organism floating around Venus would get more sunlight per unit of area, and a cousin around Jupiter would get only a small percent of that sunlight but have it over a much bigger surface area. And this example of a Jupiter brain might well make use of a lot more magnetics, given how powerful Jupiter’s magnetosphere is compared to even Earth’s. It is conceivable an organism very sensitive to magnetics – and Earth has some itself – might figure out biological equivalents of magnetic memory storage or electric or radio data transmission. We often envision giant blimp organisms or gas whales living on these gas giants, but that’s another scenario: some big colony organism spread throughout the clouds. Now, we’re running a bit long for a Scifi Sunday episode, so let’s wrap up with one more point in regard to encountering these things and how they behave. Natural or artificial, let’s consider a planet mind from a Fermi Paradox perspective.
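Before we do, the power-budget arithmetic from a moment ago is easy to double-check; a sketch using the episode’s own figures, where the 60-petaflops-per-brain and 1%-photosynthesis numbers are the stated assumptions:

```python
import math

K_B = 1.380649e-23                 # J/K, Boltzmann constant
T = 300.0                          # K, a room-temperature planet
E_BIT = K_B * T * math.log(2)      # Landauer limit: ~2.87e-21 J, ~3 zJ/bit

SUNLIGHT = 1.75e17                 # W intercepted by an Earth-like planet
bit_flips = SUNLIGHT / E_BIT       # ~6e37/s, the "60 trillion-trillion-trillion"

landauer_brains = bit_flips / 6e16       # at 60 petaflops per emulated brain
bio_brains = SUNLIGHT * 0.01 / 20.0      # 1% photosynthesis feeding 20 W brains

print(f"{bit_flips:.1e} {landauer_brains:.1e} {bio_brains:.1e}")
# ~6.1e37 bit-flips/s, ~1e21 Landauer-limit brains, ~9e13 biological brains
```

The seven-orders-of-magnitude gap between the Landauer-limit count and the biological count is the gulf between ideal reversible-ish computing and sugar-fueled neurons.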

At first we might imagine it has no real reason to contemplate colonies, especially without some sort of faster-than-light (FTL) travel or communication to link itself together. If it does have these, it has good reason to colonize and form a galactic-sized brain. But without FTL, this seems reassuring for the Fermi Paradox, since it not only makes such a mind unlikely to be an aggressor in the galaxy but also makes it less likely we would hear from one: it isn’t colonizing, wouldn’t expect others to, and thus would be quieter in general.

However, a few notes on this. First, just because it is a singular organism with no specific drive to reproduce doesn’t mean it’s actually averse to the idea. It doesn’t see its kids as competitors because it doesn’t have any, so it might think seeding the Universe with copies of itself was a perfectly fine and interesting thing to do. Similarly, it is probably incredibly intelligent and ancient, meaning it can be very patient and can dream up concepts like life operating on different principles. It is entirely possible it might decide to devote some of itself to making or becoming transmission and reception gear for radio signals, and even devoting a mere few millionths of its available power to transmission would let it broadcast at the terawatt scale – a million times louder than our loudest transmission and, by inverse-square falloff, visible a thousand times further away. But lastly, even if it has no interest in replication or communication with others, if it’s a nasty old codger, it still has reasons for grabbing material.
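That “thousand times further” claim follows directly from inverse-square falloff: received flux scales as power over distance squared, so detectable range scales as the square root of transmitter power. A minimal sketch of the arithmetic (the power figures are illustrative, not from the episode):

```python
import math

def range_multiplier(power_ratio: float) -> float:
    """Under inverse-square falloff, flux ~ P / (4 * pi * d**2), so holding the
    minimum detectable flux fixed gives detection range d ~ sqrt(P)."""
    return math.sqrt(power_ratio)

# A million-fold power increase (e.g. megawatt-class to terawatt-class)
# buys a thousand-fold increase in detection range.
print(range_multiplier(1e6))   # -> 1000.0
```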

First, it might be self-expansionist and seek to transform itself from a K1 planet brain – a planet brain is by definition a K1 civilization, I suppose – into a K2 Dyson Brain or Matrioshka Brain, and nothing is really stopping it in that case from sending out automated harvesters to bring ever more material back to go even bigger, becoming a K3 Birch Planet Brain for instance: a big sphere of many millions or even billions of solar masses, as opposed to the roughly one millionth of a solar mass a planet brain would otherwise mass. Even if it isn’t of a mind to tinker with its own mind – which is plausible enough; I’m not anxious to poke at my own brain either – it might still want infrastructure. My brain is not a house brain, but it lives in one and keeps a pantry and freezer and so on. The planet mind might decide it needed no alteration, but be fine with keeping itself fed on sunlight from artificial suns of its own making – big fusion-powered orbital lamps, for instance – and drag matter from all over the galaxy to fuel those, or let that matter sit as billions of artificial gas giants in distant orbits, awaiting tapping in a distant future as fuel sources. Indeed, given the volatile and short lives of stars from its perspective, it might seek to move itself away from its native sun into a safe pocket of space fueled by its own artificial sunlight. So too, while it would presumably send out probes or spores rather than personally explore, it might decide to travel itself, and we have contemplated planet-sized spaceships and moving planets before; it might decide to utilize those methods, which are resource-intensive.
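For scale on those masses: Earth is indeed on the order of a millionth of a solar mass, so a Birch Planet Brain of a billion solar masses would out-mass an Earth-sized planet brain by a factor around 10^14. A quick check, using standard mass values I'm assuming rather than figures from the episode:

```python
EARTH_MASS = 5.972e24   # kg
SUN_MASS = 1.989e30     # kg

planet_brain_fraction = EARTH_MASS / SUN_MASS   # ~3e-6 solar masses
birch_brain_solar_masses = 1e9                  # "billions of solar masses"

ratio = birch_brain_solar_masses / planet_brain_fraction
print(f"planet brain: {planet_brain_fraction:.1e} solar masses")
print(f"Birch brain / planet brain mass ratio: {ratio:.1e}")
```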

As to what one of these things can do, if it’s truly a conscious and intelligent mind turned up to planetary scale, it’s fundamentally the same as what any other advanced civilization might do, and indeed this is often seen as the fate of civilization in many a scifi novel: us turning telepathic, developing a neurosphere, and becoming one big planet mind. It’s an interesting potential pathway for civilizations, to create or converge into one massive overmind the size of a planet, but I have to admit I’d just as soon take a pass, though evolving individually into a planet brain is a bit more attractive. Of course, you need a whole planet for that, but there are trillions of them in this galaxy alone, so they are not in short supply, and the technology for pursuing that pathway is likely to be developed before we birth a trillion humans, so we could each claim one. Perhaps that’s our future then: not quintillions

of people across a colonized galaxy on trillions of worlds, but each of us our own world.

We were talking about strange paths evolution might take today, and we’ll be examining something of the opposite notion, Convergent Evolution, later this week. There’s also a great series on CuriosityStream, Leaps in Evolution, that explores these concepts with a focus on Earth’s own past, and it’s definitely worth a watch. One thing we sort of glossed over today in the original script was something I find myself thinking of as a Halo Intelligence: minds built into a planetary ring or asteroid belt rather than a planet itself. Since the episode is already long, we’ll do an extended edition over on Nebula. As I’ve probably mentioned before,

I typically write and record episodes 2-3 months before airing and do the video a week or two out, which often gives me some fresh insights from effectively revisiting the topic, though ones I normally have to save for a sequel or let wither because of production time necessities. And it’s one of the reasons I enjoy doing extended editions over on Nebula, as it gives me a chance to explore the sub-topics that come to mind during video production without needing to worry about how YouTube’s algorithm will treat them. Nebula’s designed to give creators more freedom than other platforms, like letting me run long even for a long SFIA episode, or do trial balloons for sequels and full-length episodes. Also, our episodes of this show appear early and ad-free on Nebula, and we have a growing catalogue of extended editions too, as well as some Nebula Exclusives like our Coexistence with Aliens Series. Now, you can subscribe to Nebula all by itself, but we have also partnered up with CuriosityStream, the home of thousands of great educational videos, to offer Nebula for free as a bonus if you sign up for CuriosityStream using the link in our episode description. That lets you see content like “Leaps

in Evolution”, and watch all the other amazing content on CuriosityStream, plus all the great content over on Nebula from myself and many others. And you can get all of that for less than $15 by using the link in the episode’s description.

So that will wrap up another Scifi Sunday here on SFIA, but there’s plenty more coming. As mentioned, this upcoming Thursday we’ll be looking at Convergent Evolution, the notion that certain traits – like eyeballs or a humanoid form – might be something we would expect to see out on alien worlds. Then the week after that we’ll take

a look at the notion of artificial intelligence being used for crimes, or being criminals themselves. Then we’ll close the month out with our Livestream Q&A on Sunday, October 31st… Halloween. Now, if you want to make sure you get notified when those episodes come out, make sure to subscribe to the channel, and if you enjoyed the episode, don’t forget to hit the like button and share it with others. If you’d like to help support future episodes, you can donate to us on Patreon or our website, IsaacArthur.net – both are linked in the episode description below, along with all of our various social media forums where you can get updates and chat with others about the concepts in the episodes and many other futuristic ideas. Until next time, thanks for watching, and have a great week!

2021-10-21 01:11
