Riskgaming

The Orthogonal Bet: Exploring Alternate Biological Trajectories

Welcome to The Orthogonal Bet, an ongoing mini-series that explores the unconventional ideas and delightful patterns that shape our world. Hosted by Samuel Arbesman.

In this episode, Sam speaks with Adrian Tchaikovsky, the celebrated novelist of numerous science fiction and fantasy books, including his Children of Time series, Final Architecture series, and The Doors of Eden. Among many other topics, Adrian's novels often explore evolutionary history, combining "what-if" questions with an expansive view of the possible directions biology can take, with implications for both Earth and alien life. This is particularly evident in The Doors of Eden, which examines alternate potential paths for evolution and intelligence on Earth.

Sam was interested in speaking with Adrian to learn how he thinks about evolution, how he builds the worlds in his stories, and how he envisions the far future of human civilization. They discussed a wide range of topics, including short-term versus long-term thinking, terraforming planets versus altering human biology for space, the Fermi Paradox and SETI, the logic of evolution, world-building, and even how advances in AI relate to science fiction depictions of artificial intelligence.

Produced by Christopher Gates

Music by George Ko & Suno

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Hey, it's Danny Crichton here. We take a break from our usual Riskgaming programming to bring you another episode from our ongoing mini-series, The Orthogonal Bet.
Hosted by Lux scientist in residence Samuel Arbesman, The Orthogonal Bet is an exploration of unconventional ideas and delightful patterns that shape our world. Take it away, Sam.

Samuel Arbesman:
Hello, and welcome to The Orthogonal Bet. I'm your host, Samuel Arbesman. In this episode I speak with Adrian Tchaikovsky, the celebrated novelist of numerous science fiction and fantasy books, from his Children of Time series, Final Architecture series, and The Doors of Eden.
Among many other topics, Adrian's novels often include an exploration of evolutionary history that combines asking what-if questions with spooling out the space of possible directions that biology can take, with implications for Earth and for aliens. This is seen particularly clearly in The Doors of Eden, which examines other potential paths for evolution and intelligence on Earth.
I was interested in speaking with Adrian to learn about how he thinks about evolution, how he builds the worlds of his stories, and even how he thinks about the far future of human civilization.
We had a chance to discuss everything from short-term versus long-term thinking, terraforming planets versus altering human biology for space, the Fermi paradox and SETI, the logic of evolution, and world-building, to how advances in AI are related to science-fictional depictions of artificial intelligence.
Let's dive in. I guess we'll just jump in. Adrian, so good to be chatting with you. I really appreciate you taking the time. Thanks so much.

Adrian Tchaikovsky:
Thanks for having me on the show.

Samuel Arbesman:
Yeah, there are so many ideas in your novels, they're just chock-full of so many things, but maybe the best place to start is with the idea of evolution. Your books sometimes have different evolutionary trajectories within them, and your novel The Doors of Eden has even been described, I think, as "evolution SF."
I happen to have a background in evolutionary biology, so this is the kind of thing that is my catnip. How would you describe this evolutionary biological approach, this almost alternate biology approach, and how did you get interested in this kind of thing?

Adrian Tchaikovsky:
I mean, I think the field I'm working in with this is speculative evolution, as espoused by writers like Dougal Dixon, often regarded as the father of the genre.
Although I was aware of Dixon's work from decades ago, I was put onto speculative evolution myself by Gould's book Wonderful Life, where, for those who don't know it, he describes the Burgess Shale fauna. This is a very well-preserved set of fossils from the very early Cambrian, about 500 million years ago, which basically doesn't look a great deal like anything that later evolved, but which does seem to include the roots of every modern lineage, including vertebrates, and a whole load of other stuff as well.
His point was that the way things went isn't the way things necessarily were going to go, or even were likely to go. He's really making the case for there being a huge amount of chance in evolution, rather than it being this manifest-destiny, progress-towards-humanity thing that we're encouraged to think of it as.
As someone who is extremely fond of invertebrates anyway, which frequently get the short end of the stick in any evolutionary narrative, I was very interested to look at, well, what if we had a dominant invertebrate species? The work on Portia spiders by Fiona Cross spurred Children of Time out of that. But also, as you say, Doors of Eden is very much the book where I go completely mad for this stuff, and it has at least a dozen separate potential evolutionary pathways through Earth's history, each of which ends with an entirely different species as the dominant sapient species on the planet.
Some of these do better than us and some do worse, and all of them stand, hopefully, as an interesting contrast to the way we do it. Of course, at the same time there is also a plot going on with characters, where these various evolutionary timelines start to bleed into our world, and we start getting things from other evolutions.

Samuel Arbesman:
Yeah, you mentioned Stephen Jay Gould. His whole thing is rerunning the tape of life: if we rerun it, is everything overdetermined, or is there a lot of chance, and can you have lots of different paths?

Adrian Tchaikovsky:
There's a slightly separate but linked spec evo discipline, which is of course alien evolution. What would aliens look like? What on Earth is expressed as a rule, which we might expect to see on another world, and what is unique to Earth and wouldn't necessarily be repeated? Those are the two things.
When you get spec evo conferences and things like that, those two things often get mixed together, because the same thought processes apply.

Samuel Arbesman:
That is a really good point. Trying to understand the what-ifs of evolutionary trajectories also involves asking: if there were different conditions on Earth, or on different planets, what would that result in?
Related to that, within the realm of alternate history, separate from evolution: alternate histories often have the feel of, here are things that are very, very different, but in certain ways they rhyme with traditional historical trajectories.
It might be, okay, these empires are entirely different, or certain people were assassinated in different ways, but maybe technology developed in certain similar ways. The same question applies to speculative evolution, or evolutionary what-ifs within Earth's trajectories.
Do you feel there's the same thing, that it requires having at least some similarities to make it understandable for the reader, where you have to have it rhyme with certain familiar things? Or, okay, it's very, very different, but there are certain social or communal structures that are going to be the same? Which goes back to, there are certain things that are not necessarily overdetermined, but there are certain invariants.
How do you think about what the balance is between radically weird, versus things that at least the reader is somewhat able to understand?

Adrian Tchaikovsky:
The way I work, certainly with this sort of thing, both with histories and creating alternate societies, as I do a lot in fantasy work, and with the evolutionary logic, is that you strip it back to what the principles are. Because there's a certain logic to evolution, and as long as you're following that logic, I think it remains fairly comprehensible.
The other thing you tend to get, and this is something I play with a lot in Children of Time, is that you can have a completely different setup, but you're going to run into kindred problems. You're going to run into things like resource shortages, problems with epidemics and disease, and potentially factious, feuding polities, if that's the way the setup goes.
Every time you do that, the reader is able to relate it back to human interest and say, "All right, we ran into a similar problem, and I know how we solved it or how we reacted to it." Obviously, because solving is not always what happens. How are the spiders, or whatever they are, going to deal with it, given that they've got quite a different toolkit, a different set of senses, and a different kind of worldview?
That is often a lot of fun, and that I think also helps build empathy when you see non-human societies and creatures dealing with those familiar problems, being in the shadow of those familiar threats.

Samuel Arbesman:
I guess that also speaks to, how broad the space of possibilities truly is. Because on the one hand, right, exactly what you're saying, there are many, many different paths, but there are still certain invariant issues, cooperation, competition, managing information. How do you think about the balance there?

Adrian Tchaikovsky:
One of the problems that science fiction and fantasy both have had in some iterations, is in their conception of aliens or non-human creatures as basically humans minus.

Samuel Arbesman:
Or some subset of humanity.

Adrian Tchaikovsky:
Well, not even that. If you say, "Well, these aliens, they're basically human except that they've got two thirds of the emotional range."

Samuel Arbesman:
These are the angry aliens and these are the emotional aliens.

Adrian Tchaikovsky:
These ways of defining aliens or non-human creatures, or whatever you've got, are generally ways of keeping the human at the center. The idea is that, well, humans have all of this stuff and we are the most adaptable, and even if we are not hugely, superpoweredly strong, or even if we don't have weird mental powers, we are somehow the glue that holds everything together, because we can do all of these things and we are central.
Obviously, these are generally stories about humans, so obviously it's not necessarily that surprising. One of the things I like to do when I'm building alien creatures or evolving creatures from Earth's past is, well actually, what if they're actually just better than us at quite a lot of things?
What if the problem that completely stumped us is one that they breeze past, because they are uniquely set up to deal with it? Then, potentially there are other problems that we don't have a huge problem with that they get absolutely stymied by, because of their particular setup.
That I think is a lot of the interest in exploring that space, is the idea that actually maybe the human solution for any of these things is not necessarily the best way forward, maybe indomitable human spirit isn't the galaxy's trump card.

Samuel Arbesman:
Yeah, and it's also, it's both incredibly humbling and also horrifying. Certainly as the characters in the novel just figure out that this is the solution that evolution has reached for this very competent alien species.
Yeah, and so when you're doing world-building for these things, does it proceed from an initial kind of what-if? Is there a set of what-ifs? How do you think about all these different pieces moving? Obviously you also need to think about more than world-building; you also need to figure out, okay, what is the story going to be? Is the story co-evolving with the world-building? How do you think about all this?

Adrian Tchaikovsky:
What I create at the start is usually more than will find its way into the book. There will still be details that will be fleshed out as the book goes on. I feel the need as a writer, to have this very solid foundation, I need to believe in the world.
You can do that, and obviously all writers have different processes. You can basically come to the same outward show whether you do all of this in enormous detail, as if you were creating it for a role-playing game, which is certainly where my own world-building started, and which tends to lead you to do fairly thorough things, because you don't know where the players are going to go. Or you can basically just put up frontages.

Samuel Arbesman:
It's much more like a Potemkin-village kind of world-building.

Adrian Tchaikovsky:
Exactly that. Yeah, I mean I was on a panel with John Scalzi a couple of weeks ago, and that was exactly the metaphor he was using.

Samuel Arbesman:
Oh really, okay.

Adrian Tchaikovsky:
Yes, and as long as you do your job, all of these things look just as good to the reader. For me, purely for the creative process of writing the book, I need to feel that I've got this detailed world, so that I know where all the people are coming from, what they think of each other, what they believe, what they eat, what they wear. All of these details really help, I think, give that world a texture, a living texture.

Samuel Arbesman:
Certain readers not only love having that texture, but love seeing it very explicitly. For example, and maybe I'm in the minority, I'm a fan of a really good info dump in a novel, where it's just, okay, huge amounts of stuff.
When well done it's delightful. Not all readers are fans of that kind of thing. Is there a need for more info-dumpy novels, with more of that almost sourcebook material? Or is that the kind of thing where I am very much in the minority?

Adrian Tchaikovsky:
I mean, it's certainly something I do. These days I have to prune it back in my own editing, because otherwise my editors will take more out. I think a straight-up piece of exposition is sometimes the most efficient tool in your toolkit to get something over.
You can basically say, "Right, I'm going to spend a page and a half explaining how this works," rather than a chapter and a half of someone bumbling about and finding out through inference. It's certainly much better to do it just you-as-the-author to the reader, than to do an "As you know, Bob" thing between two people who should already know all of the details.

Samuel Arbesman:
Which is a very unnatural way to do it, yeah.

Adrian Tchaikovsky:
Yeah.

Samuel Arbesman:
Earlier you said "science fiction and fantasy" and referred to it singularly, as a single genre, and many people view them as different things. I'd love to hear your thoughts on how you would distinguish them, if they're even distinguishable, and then also, are there differences in the info dumps for those different genres or subgenres?

Adrian Tchaikovsky:
Yes. I mean, for me, the science fiction-fantasy thing is a continuum. You have two poles. One pole is the very, very hard sci-fi, where you are absolutely not going to stray from understood physics, biology, or whatever science is relevant.
At the other end you have a full on pure secondary world fantasy, which has no connection to the real world and where you set the rules entirely, and then basically you need to play within the rules that you set.
There's actually an evolutionary term for this, the "left wall," and the further you go towards that hard science fiction pole, the more of a left wall you are setting up. The left wall is a barrier beyond which your creation can't go.
In a hard sci-fi book, that barrier is, well, this is how the science works to the best of our knowledge. If you are writing a historical book, your left wall is, well, this is what we know about this period, and therefore you've got to stick to that in order to be faithful to the historical setting.
There are various other less visible left walls. If you are writing, let's say police procedural, that form carries with it a certain set of expectations from the readers which become your left wall. Certainly, history or science, I think are the two most obvious ones, where it's that obvious, that visible, and where you can adjust to it most easily ahead of time, because you do your research on the period, on the science or whatever.
Then, the more you move away from hard science fiction, the less you have to worry about that. For example, my Final Architecture books are what I would call space opera. Which means that they're not particularly concerned with science, they're concerned with being consistent within the setting, and because it's a science fiction thing rather than a fantasy thing, they're concerned with having a veneer of science, as to why these things are going on.
Basically it's hyperspace, it's faster than light travel, it's weird cosmic horror, all of this stuff with science fiction trappings. Then, going beyond that to, if I'm writing full on fantasy, obviously none of that particularly matters, unless I specifically want it to.

Samuel Arbesman:
I also find it very interesting that you put space opera almost in the fantasy genre, with this veneer of science fiction. Which, when it comes down to it, is like that: there are empires and things happening, but at the level of the galaxy or larger, as opposed to a medieval setting or whatever.

Adrian Tchaikovsky:
I mean, essentially, in order to have that story, you tend either to insert magic levels of technology, like faster-than-light travel, or to play fast and loose with certain other scientific logics. For example, it's usual in space opera that if you have aliens, you tend to have a lot of aliens that are on a human cognitive and technological level and can interact with us on a roughly even footing.
Certainly I've got that in The Final Architecture, as well as having greater, more powerful aliens as well, because that human-alien community is a staple, and it's a lot of fun to write about or to read.
Obviously if I'm doing a hard science fiction book about alien life, you end up with something like Alien Clay, where the alien stuff is very, very alien indeed, because anything that has evolved on an alien world is by default going to be less human than the least human thing you can think of that's evolved on Earth.

Samuel Arbesman:
Yeah, and this actually, I was with my son at the zoo yesterday, and in addition to seeing all these animals, they happened to have a display, I think for some festival, with a whole bunch of mythological creatures. We were noticing how all these mythological creatures are basically just Earth creatures mashed up, taking two or three and combining them.
Exactly what you're saying: these alien creatures are not going to be a griffin or a chimera or whatever, they're going to be incredibly alien. Are there almost exercises to help stretch your imagination for this kind of thing? Is it basically using a first-principles approach to evolution? How do you think about this speculative evolution?

Adrian Tchaikovsky:
Yeah, so it comes back to that idea of, well, what are the things that will carry over? What is the logic that would still be in operation? Obviously, if you have any alien species that has a sort of generational aspect to it, it needs the ability to reproduce itself, and it needs the desire to survive so that it gets to reproduce.
All of that evolutionary logic works within almost a certain radius of Earth-like life. Then, if you go beyond that radius, you can end up with something like, say, Solaris, where it's a sentient planet, and a lot of what we would think of as evolutionary logic needn't in any way apply to that. Or if you have creatures that don't need to reproduce, that basically just go on forever, that would also change a great deal of things.
If you start off with the Earth logic, you can ask, "Well, how far am I going from that, and what are the parameters I'm working with where these things wouldn't apply?" There's a very, very good book by Ian Stewart and Jack Cohen called Evolving the Alien, also out under the title What Does a Martian Look Like?, which is an extremely good set of thought experiments about what we would expect. They look at Earth evolution: what has evolved multiple times on Earth, for example, and what has only evolved once.
For example, wings have evolved about four different times, so if you've got a world where flight is in any way possible, probably you'll have something like wings. Eyes have evolved at least a dozen different times, and therefore if you've got light, then probably you're going to have eyes.
Then of course, what you do as a science fiction writer, if you want to play with aliens, is you say, "All right, I'm going to have a world which is very non-Earth-like." I've got a book coming out next year called Shroud, which has a world with a very dense, high-pressure atmosphere that is also very murky, so there is zero light on the surface.
Obviously nothing has evolved eyes there, but more than that, what they have evolved is the ability to generate electromagnetic signals. Everything on this world senses and communicates by various electromagnetic bursts, which means that if you crash on this planet, you cannot signal your ship to let them know you are still alive, which is what happens in the book.

Samuel Arbesman:
There's a scientific field, artificial life. One of its definitions, back from the late '80s or '90s, was the idea of "life as it could be," and I think that's exactly what you're saying: okay, let's actually figure out, here are the rules, here's the algorithm of evolution, and then, based on that, explore that space.
It also reminds me, especially in the earlier days of SETI, when people were trying to figure out, "Okay, how would we search for signals from an extraterrestrial intelligence?" They had to say, "Okay, what are the invariants?"
This was also when, I guess, searching incurred more costs, and so they would have to say, "Okay, let's try to figure out what specific radio frequencies would be more likely, for some alien intelligence that we know nothing about and can't even imagine. How would we find some Schelling point or whatever?"
Yeah, I feel these thought experiments are really interesting, to say, "Okay, how would we relax certain constraints and figure out what things are the same, what are not?"

Adrian Tchaikovsky:
Yeah, and it is fascinating. Even now, with exoplanet searching, we are looking for planets in the Goldilocks zone, let's say, because that's where Earth is, and that is a place where you're naturally going to get liquid water.
Except that we know within our own solar system there are plenty of places where there is liquid water that are not in the Goldilocks zone. There are moons of the gas giants and things like that where we believe, from all evidence, that there is a sub-surface ocean of liquid water. These are actually very good places where we might even find non-Earth life within our solar system.
This is the reason that one of the answers to the Fermi paradox is literally that the alien life is sufficiently alien that we don't know what we're looking for. The other fascinating way the SETI search-for-radio-waves thing has dated is, of course, that we as a planet are no longer producing the volume of radio waves we used to, because of the different ways we are conveying signals, and so-

Samuel Arbesman:
A lot of things are on cables as opposed to-

Adrian Tchaikovsky:
Yeah, so if there's an alien civilization out there listening to us, they're going to think something absolutely catastrophic has happened to us, because relatively suddenly, our volume of signal has absolutely died a death. They're going to think we've had some sort of incredible catastrophe.

Samuel Arbesman:
That's interesting. Then, related to that, one of the search techniques that people have talked a lot about more recently is this idea of technosignatures. Saying, okay, we're agnostic as to what form aliens are going to take, but presumably, at some high level of technological achievement, they're going to be building certain things, or communicating with lasers, or traveling in certain ways, and so there's going to be some way of detecting these things.
You've thought a lot about these long-term futures of various societies. The technosignature stuff is certainly provocative and thought-provoking, but do you feel it is also the wrong direction, given what you're saying in terms of the Fermi paradox? I guess, is everything just too alien, so we're just not going to be able to know it when we see it?

Adrian Tchaikovsky:
Well, this is the difficult thing, because obviously we have one data point as far as a civilization goes, and that's us. It's natural to assume that a lot of the things that we have done, and a lot of the ways that we do things, are going to be universal, and they may well not be; there may well be different ways of doing things that we have no concept of. Or they may be very like us, and the reason we're not seeing them is that they've come and gone, because we've had a technological civilization for, I mean, what, about 200 to 300 years? I don't think we've got another 200 or 300 years in it.
Given the size of the galaxy, let alone the universe, that's a very, very brief window in which to detect another civilization. There could have been hundreds of civilizations that have been and gone, whose outward-expanding wave of signal passed us by before we were ever able to detect it.
In a way, I hope that it is that they are less like us out there, than they are more like us, because if they're more like us they're probably dead.

Samuel Arbesman:
Yeah, I was going to ask you about your personal view of the far future, and I think I just got it.

Adrian Tchaikovsky:
Well, I mean, it's very weird. One of the common comments that you hear between science fiction writers these days is, "Oh, I've written this book, where it's in space and it's dreadful and everything's horrible and everyone's being exploited and living conditions are horrible, and it's an enormously optimistic book, because we're in space."

Samuel Arbesman:
Because we're in space.

Adrian Tchaikovsky:
We actually got to space. I genuinely believe we can get out of the spiral we're in, as far as what we're doing to the planet. I genuinely believe that some fairly serious upheavals are going to happen, and a lot of the status quo is going to have to be broken. Because at the moment we're in this mad situation where everything is being driven by people who are motivated almost entirely by a love of money, and they will drive us, for a short-term money gain, to the point where money is literally worth nothing.

Samuel Arbesman:
On that despairing note.

Adrian Tchaikovsky:
Yeah, sorry, sorry.

Samuel Arbesman:
No, I understand what you're saying. So, from your perspective, obviously there's this short-term/long-term dichotomy in how we think about the world, and I guess if you have a longer-term approach, you're going to be maybe more altruistic, or thinking about things on a larger scale.
Do you think there are ways of getting us out of the short-termism, back to a longer-term time horizon, or is this just the inherent human nature kind of thing?

Adrian Tchaikovsky:
I mean, I don't think it's inherent human nature, because for a large amount of human history we've been much more long-term thinkers. I mean, just purely looking at Western Europe, people built cathedrals that took many generations to put up.
You would work on a cathedral all your life; it was started before you came, and it isn't finished after you've gone. That is long-term thinking.
We are entirely capable of long-term thinking, but at the moment the dominant societal paradigm is incredibly short-term profiteering, where the golden goose is basically being served up on every boardroom table. Even in very small things: the entertainment industry would rather have a tax write-off than release a-

Samuel Arbesman:
Yeah, actually publicly release the movie, yeah, that's good.

Adrian Tchaikovsky:
If we're running to that extreme of short-term thinking, even where people have all of the answers in front of them, if people would rather make those choices, it does rather make you despair about long-term, multi-generational projects like saving the planet.
Speaking as a science fiction writer, I absolutely need to stress, we do need to save this planet, we are nowhere near just all decamping to Mars or anything like that.

Samuel Arbesman:
This is definitely the best planet we have.

Adrian Tchaikovsky:
Yeah, this is the planet, the only planet in the entirety of the universe that actually suits us. Mars is dreadful. I mean a lot of writers have written about Mars. You can probably do Mars with a technology not too far beyond what we've got, but it will take hundreds of years, it will take generations.
It's not the sort of thing of, "Oh, we've used this one up, well, let's just go on to the next." We can't be that intergalactic-locust thing that some movie bad-guy aliens are.

Samuel Arbesman:
Related to the long-term thing, you mentioned cathedral building. Obviously, what undergirded cathedral building was a set of organizing ideas and some worldview, like Christianity or whatever it is, that thinks on a very long time scale, maybe on the scale of eternity. Are those the organizing principles that we need? Do we need a new set of these as a society, or do we already have them, and we just need to get more people to buy into them?

Adrian Tchaikovsky:
I mean, I think we have them, they are out there. It's not as though everyone has forgotten every idea except rampant capitalism. I personally don't feel that religiously mandated ideas are a terribly good idea, because I don't think history suggests that works terribly well.
I think that faith-based systems are inherently prone to both abuse and inflexibility, but we have a lot of philosophy, we have a lot of sociology, we have an awful lot of people far, far smarter than me who've put a lot of thought into, "Well, how could we live and how could things work?" Those ideas are all out there; we just need to flip the script.

Samuel Arbesman:
Going back to Mars and other things, switching to another planet: what are your thoughts on even the feasibility or technological approach of terraforming and things like that?
Obviously, in the short and medium and long term, we need to take care of our own planet, but in terms of either storytelling or just how to think about humanity very, very long-term, are these things that should be in our civilizational toolkit? Or is it the case that we just need to take better care of our own solar system?

Adrian Tchaikovsky:
I mean, terraforming is very attractive, and if you have considerably better technology than us, you can do a lot of stuff with terraforming. We would need to massively level up our biotech, I suspect; I think the biological side of terraforming is greatly underappreciated.
By stage-managing it, I mean, a book I'm working on at the moment has an almost algorithmically controlled microbial evolution to get you the planet you want, so that it just bootstraps itself into more and more complex ecologies.
I think those things are entirely possible. I also think that we could end up, for example, adapting ourselves to living just in space. Because frankly, there's a lot of space; it's wherever you happen to go.
There is a moral dimension to terraforming which is seldom explored, which is: do we necessarily have the right to turn everything into a carbon copy of Earth, in the same way that every UK city center has all the same shops wherever you go? It becomes a little uninspiring if, wherever you go in the galaxy, you just find another mega mall, basically.
The other option, which I think is inherently distasteful to a lot of people, would be the idea that it's probably a lot easier to Martioform humans than to terraform Mars. This is something I look at in Bear Head, for example. The idea is that there are a lot of useful biological things in various Earth genomes that, if we could get them working in a human being, would make us much, much more suited to going to various alien worlds.
This opens a variety of cans of worms as far as who is having it done to them, and just how much volition is going in, and all of that, but on a technological level, it's probably easier to make a human who can halfway survive on Mars than a Mars that humans can just walk about on unassisted.

Samuel Arbesman:
I love the biological approach, also just because that's basically how it happened. The way we have Earth as it is now is because of oxygenation, all these things that were done by previous generations of biological creatures. Making humans more easily able to live in these other environments is also interesting.
Related to the moral question of whether or not we should just be terraforming willy-nilly, do you feel it should be somewhere in between, where there can be some nice Earth-like planets, but most of it should be us just living in space? Or maybe you don't even have a position, and it's more just a fun thought experiment.

Adrian Tchaikovsky:
I guess, philosophically, the first key question is, is there any life there already? Because once you get anything that you think might be life, I would come down very heavily on, "Well, no, we shouldn't be terraforming anywhere like that, because at that point you are getting rid of something completely unique in the universe."
Once you've got the run of the universe, those unique things are basically the only thing of interest in it, because everything else, there is zero scarcity for anything else. It's only those freak accidents of life that are worth going out there to see, honestly.
The difficulty there being, of course: do we even recognize alien life if we see it? Going back to what we were talking about earlier, about how distant alien life might be.
If you have limitless technology, then creating great big ark worlds, ark ships, fleets, ring worlds, that kind of thing, becomes a way of installing ourselves around the galaxy without necessarily messing up whatever's already there.

Samuel Arbesman:
Which is going back to the Culture. That is a lot of what the Culture does. Yeah, you avoid living on planets.

Adrian Tchaikovsky:
Again, I mean, you need a fairly high technological base to make all of that work. I mean, I almost wonder whether living in space, because it is a much more constant environment, if you can get your self-sustaining space station going to a rather more reliable level than our current orbital efforts, that gives you a great deal more flexibility. Because otherwise, every planet you come to is going to be different, and you'll basically have to start at square one on the terraforming, and the tools you used on Mars will not work on Venus and so forth and so on.

Samuel Arbesman:
Yeah, and this is, I think, one of the ideas behind Gerard O'Neill's cylinders, and that's just great. Which is, "Okay, you have this specific position, and you're just getting huge amounts of energy from the sun, and it's entirely consistent and you don't have to worry."
Versus actual planets, which are not the science fiction planets, like, "Okay, the desert planet, the ice planet, whatever." They're all incredibly diverse and varied from location to location, and it's a very complicated thing. So no, that's interesting.
Related also, going back, you were talking about preserving the uniqueness of life. Obviously that goes even more so when it comes to intelligence and consciousness. You talked a little bit before about Blindsight and intelligence without consciousness, and when I think about that, I think a lot about the current crop of large language models and AI, and how provocative they are for thinking about that. Of course, he came up with these things well in advance of our current AI crop.
Given the current advances in generative AI and large language models, has that changed how you think about this space of potential intelligence or consciousness, or has it given you ideas for certain storytelling ideas or world-building? Or is it the kind of thing where science fiction as a genre has basically come up with all of this before, and this is still very much a small subset of what's possible?

Adrian Tchaikovsky:
I mean, the problem you have with a lot of this is that it's generative models. It's an algorithmic prediction routine that, because of its enormous computational power and great big data set, can turn out things that look like intelligent responses.
I think we have a bit of a problem at the moment, in that this is being sold as something it is not, to wit, artificial intelligence. What it really is, is a smoke screen; it's muddying the waters when one would like to talk about actual strong AI.
It's probably also set the field of strong AI back by 10 to 15 years, because at some point there's an investment bubble that is going to burst, when people realize they're not getting that out of this field, because it's-

Samuel Arbesman:
Do you think it'll set them back in terms of investment, not necessarily because this is some technologically advanced cul-de-sac?

Adrian Tchaikovsky:
Well, I mean, I think it is a technologically advanced cul-de-sac, but I think it'll specifically set them back because we will need an entire new language to describe what they're doing, because the word AI will be tainted by the circus we've got going on.

Samuel Arbesman:
I feel like there was someone, even early on in the AI movement (I think the term artificial intelligence was coined in the mid-1950s or something like that), one of the people involved in this space who was complaining that basically anytime AI solves a problem, whether it's playing a certain game or whatever, people move the goalposts and say, "Okay, this didn't actually require intelligence, or it's not really AI."
On the one hand, I'm sympathetic to that, only because I remember back in my computer science undergrad education, when most of the things were good old-fashioned AI, as opposed to the more neural and statistical approaches. When I learned how these systems did what they did, I was almost disabused of the notion that there was intelligence under the hood.
On the other hand, though, they're still able to do sophisticated things. Is this a similar moving of the goalposts, where it's very much this pattern recognition, a very sophisticated, powerful technique, but because it's not manipulating concepts or doing the things that we associate with intelligence or consciousness, it's not really that? Is that your view?

Adrian Tchaikovsky:
I think the problem is, one of the other things that earlier AI researchers have been saying is, "Basically, we have massively underestimated how complicated it is to make something that thinks." All the various models and attempts and so forth have never really come anywhere near.
You do get some interesting problem-solving algorithms that can find solutions that humans wouldn't have thought of, which I've always felt is one of the more interesting areas where we have had some success. Because, if you get a sufficiently sophisticated system, we are direly in need of solutions to problems that humans haven't thought of.
The problem, I think, with the current generative models is that a bunch of people looked at how hard it was proving to be and said, "Well, let's not, and say that we did." You get these algorithms, these facilities, which have this human-facing part that is very good at couching things in human-like terms, and we are extremely prone, we are pre-hacked, to respond to things that pretend to be human.
I mean, ELIZA, from decades ago, is a perfect example of this. Like you say, under the hood there isn't actually anything going on that equates with thought. It's not this emergent thinking thing that people seem to believe it is. It is merely that it's got a very, very good output interface.

Samuel Arbesman:
Do you think, if it were not outputting all of its content in text, which we are prone to think of in terms of, "Okay, text and language is part of how we think," we would not necessarily ... Which is why, interestingly enough, when people talk about a ChatGPT-type thing, which outputs language, versus a DALL-E or Midjourney image generator, people don't necessarily think about the image generation ones in terms of intelligence or consciousness. They think of them as very sophisticated, interesting tools.

Adrian Tchaikovsky:
Well, there is quite a push. We are encouraged to think of the weird image glitches as hallucinations, and hallucination is specifically a term that relates to thought. Again, I think that is because it is more palatable to the people marketing this to describe it in that way, rather than saying, "Yeah, we got it wrong; it can't count fingers; it has no real concept of what it's trying to draw."
I mean, I think the key thing is, the big difference between what we are seeing and how science fiction writers have always envisaged AI, is that we have always assumed an AI is going to be a bit like Commander Data from Star Trek. Very logical, very rational, very big on facts and accuracy, bad on vibes.
What we have got is literally the opposite; we've got something that is 100% vibes. It appears to be walking the walk and talking the talk as far as human interaction goes, but the moment you start trying to quiz it on any logical thing, it will give you made-up recipes for poisonous mushrooms.
It will not be able to work out that if so-and-so is someone's child, then that person must be their parent. A basic level of logical coherence, which I think we can observe quite deep into the animal kingdom, does not exist in these systems.

Samuel Arbesman:
It's not a matter of us wanting these more logical approaches and being disappointed, or I guess wanting that.

Adrian Tchaikovsky:
I think it's-

Samuel Arbesman:
It's more, they're required, as opposed to just like, "We've got this weird vibe-based thing. Maybe humans are actually much more vibe-based than we realized."

Adrian Tchaikovsky:
Because otherwise what you've got is basically an enormously expensive, environment-destroying parrot. You will ask it a question, and it will basically find the thing that people say in response to that question and tell you that. And if that thing is wrong, it doesn't know; and if that thing contradicts the thing it said a moment ago, it doesn't know.

Samuel Arbesman:
Yeah, I guess, reaching into science fiction or fantasy as genres, what would be the closest equivalent for the things that we have now? Are there interesting parallels? I mean, you mentioned the anti-Commander Data; are there actually examples of these weird, hallucinatory, vibes-based, confusing things?

Adrian Tchaikovsky:
The one I basically had to sit down and think about is in William Gibson. I had to work out, "Well, look, all right, if I am basically saying that what we've got isn't AI, what do I think about ..."
There is an AI entity in Count Zero, the second book, the one that comes after Neuromancer (I think so, or it's in the third book, I can't remember, because I read them all together), that makes art, and the art is transformative and amazing, and no one knows at the start where this art is coming from.
It's just these amazing, weirdly abstract pieces of physical art: I think it's boxes of weirdly placed bits and pieces and objects that seem to be saying something. Then you find out it's being made by a computer.
In that case, A, you know a fair amount about the backstory of that computer and how it got there, and that there was a genuine superhuman AI involved, and this is the weird, almost after-effect of what happened when it achieved its full-on freedom and sentience.
The art is described in a way that there is genuinely something speaking to the human soul in it, which is not a thing you get with AI stuff. AI stuff is incredibly humdrum, honestly, both the art and the text; it is a vast triumph of quantity over any quality, and so it doesn't feel like that.
That's the only example I could think of in science fiction where you had this, and in that case it's a bit more like the Solaris thing: it's this entity that has surpassed all human understanding.

Samuel Arbesman:
To push back, I wonder, in the same way you mentioned how we as humans are really good at imputing intelligence and consciousness to these ELIZAs you mentioned, or some of these chat programs: when it comes to this art, is the implication that we are not just imputing a great deal of meaning and humanity to the art? That it's more that it is truly there under the hood?

Adrian Tchaikovsky:
Gibson being Gibson, absolutely leaves it open to the reader. This may just be my personal response to the story. It is absolutely true that we as humans will personify anything. We personify the weather, we personify rocks.
Human history, certainly, and human religion are 100% us seeing things in the world and giving them personifications, and then building on those personifications with stories and familial relationships and all manner of crazy stuff.
We don't even need a thing that actually interacts like a person, we just need a thing in the world, and eventually someone will give it a name and possibly throw someone off a cliff in their honor, who knows?

Samuel Arbesman:
On that human, albeit somewhat pessimistic note, I think that might be a great place to end. Thank you so much for chatting. This was a fantastic conversation, I really appreciate it.

Adrian Tchaikovsky:
No, thanks for having me on the show.
