Protect The Machine
Correct Excess Deviation
It would be an exaggeration to call the great Jack Kirby’s The Eternals an inarguable classic. To most, it’s a curiosity: an abandoned story that introduces some genuinely brilliant concepts into the Marvel Universe (not the least of which are the Celestials). But ultimately the series pales in comparison to its sister story, the DC-published Fourth World saga.
The Eternals have returned numerous times, perhaps most notably in 2006, when Neil Gaiman and John Romita, Jr. relaunched The Eternals with a miniseries which, while much praised, did little to position the Eternals as consistent members of the Marvel Universe. Even as recently as 2018, Jason Aaron, Paco Medina and Ed McGuinness’ Avengers #4 made the Eternals the ultimate footnote to their more successful Celestial counterparts, when a change in the status quo of the Celestials caused the Eternals to commit mass suicide, taking them entirely off the board for Aaron’s Celestial-centric story.
It’s easy to see why the Eternals had long been relegated to the (metaphorical) ashcan: partly, it’s because Kirby didn’t intend for them to be part of the Marvel universe and resisted editorial attempts to bring them in. The Eternals were meant to be Erich von Däniken-inspired superbeings mistaken for gods by ancient humans, which contradicts so many parts of the Marvel Universe, where actual gods exist (not to mention that the idea of ancient astronauts is a highly problematic “theory” on its face). How does the Marvel Universe square the circle of having an Eternal named Zuras, whose name the ancient Greeks corrupted into Zeus, existing alongside real, authentic, canonical Zeus, who is an actual god of Olympus?
Even when Kirby gave in to editorial pressure and had the Eternals cross over with the rest of the MU, it was in truly bizarre fashion. For instance, in Eternals #14-17, Ikaris squares off against The Incredible Hulk. But this isn’t the Hulk per se; it’s a robotic Hulk made by college students, accidentally given cosmic strength through radiation. Are the students meant to be fans of a real Hulk whom they know as an Avenger and a Defender, or are they fans of Marvel comics?
Ultimately, though, comics continuity issues like this are often (constantly) brushed aside with a hand-waving explanation: Wolverine can appear in so many titles each month because who cares, Wolverine is cool.
When Kieron Gillen and Esad Ribić relaunched The Eternals for Marvel early this year, it was with the intention of recontextualizing the property for the 2020s. Where Gillen ties the Eternals into the rest of the Marvel Universe is not in a prosaic sense, but in an emotional one: he gives them what he calls their “Uncle Ben” moment of great tragedy. It’s revealed in issue #6 of Gillen and Ribić’s run that the Eternals’ immortality comes at a price: every time an Eternal is brought back from death, the spark of life is stolen from a human being (or a Deviant) somewhere on Earth, seemingly at random. This causes a crisis among certain Eternals who view human life as precious, while others see it as a regrettable cost of doing business (though in other cases the regrettability is, shall we say, debatable).
Gillen and Ribić also take advantage of the current Marvel zeitgeist of using data pages (here designed by Clayton Cowles, though the current craze for data pages in Marvel comics comes from Hickman, Larraz, Silva and Muller’s House of X/Powers of X) in order to introduce us to another new concept in this revised and expanded world of the Eternals: The Machine. The Machine is, for all intents and purposes, the living embodiment of the planet Earth, as set in motion by The Celestials millions of years ago. The comic is narrated by The Machine as it malfunctions across the first six-issue arc of the series, spitting out charts, maps, lists, emoticons (you see, The Machine predates the invention of emojis by aeons) and caption boxes.
The first arc is presented as something of a whodunit: within the first few pages, Zuras, ruler of the Eternals, is murdered, and he’s been murdered by an Eternal. But The Machine is far from the last new wrinkle in the story of the Eternals: Gillen and Ribić have also codified three laws for the Eternals, referred to as Principles: 1) Protect Celestials, 2) Protect the Machine, and 3) Correct excess deviation.
The mystery gets resolved in issue #6, when we discover that Phastos, the Eternal mechanic in charge of maintaining The Machine, has learned about the price taken for Eternal resurrection, and has resurrected Thanos in order to distract the other Eternals while he disassembles The Machine. His argument is that the Eternals are no longer needed, as the Earth is well-protected by the vast array of superheroes who populate the MU, and that they should be allowed to die out in order to prevent human lives from being taken to fuel their resurrection machine. His plan fails as Thanos escapes his control and his near-destruction of The Machine leads to the near-destruction of Earth. Disaster is narrowly averted, but at a cost: Ikaris (the Eternal whose archetype can most easily be summed up as, “he is a superhero”) dies and is returned to life at the expense of the life of Toby Robson, an average teen boy from Queens (shades of Peter Parker, no doubt) whom Ikaris has vowed to protect after seeing a future apparition of the boy’s grave.
The Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In interviews, Gillen readily refers to The Principles as “Asimov style.” He is, of course, referring to Isaac Asimov’s Three Laws of Robotics, as popularized in the prolific sci-fi author’s Robot series of novels and short stories. Asimov’s Robot stories span five decades of his career, beginning with the short story “Robbie,” published in the magazine Super Science Stories in 1940, all the way through to the novel Robots and Empire, published in 1985.
Asimov began writing stories about robots as a response to the then-popular Frankenstein mode of robot storytelling, where robots go berserk, harm people and must be stopped. He wanted to tell stories about robots who were helpful and sympathetic. In Asimov’s estimation, robots would be complex tools, programmed with safeguards against harming people. And so, in meetings with his editor, John W. Campbell, Asimov devised the three laws of robotics.
Though the laws started out in Asimov’s stories, they have become a fundamental part of the cultural conception of robots throughout science fiction and real-world robotics. For instance, despite revolving around killer robot assassins from the future, James Cameron’s film series The Terminator exists inherently in conversation with Asimov’s laws. In Terminator 2: Judgment Day especially, without explicitly mentioning The Laws as such, the T-800 (the Arnold Schwarzenegger one) is programmed to let no harm come to young John Connor, to obey Connor’s commands (including the command that he may not kill any humans), and to not, as he puts it, “self-terminate.” These programmed rules sound familiar, but the Three Laws are so ingrained into the culture that they needn’t be mentioned by name.
The Laws became the focal point of Asimov’s early stories, presented as edge-case scenarios where robots seemingly disobey them. For example, in “Liar!” a robot named Herbie accidentally develops mind reading powers due to a manufacturing error (it was the 40s!), and begins to lie to a group of researchers tasked with discovering how those powers came to be. Herbie tells the robot psychologist Susan Calvin (a human psychologist whose subjects are robots) that one of her colleagues is in love with her. He tells mathematician Peter Bogert that his senior colleague has resigned and named him as his successor. These are clear lies, which would seem to contravene the Second Law, but since Herbie is a mind reader, he must consider the harm in telling the truth to the humans, in the form of hurt feelings, and therefore can only tell the humans what they want to hear. Of course, in finding out that his lies have caused the humans emotional distress, Herbie glitches out and becomes insane, due to the insoluble dilemma of navigating the three laws while possessing a psychic understanding of human emotional fragility.
After collecting his early robot stories in the fixup novel I, Robot in 1950, Asimov wrote the first novel-length robot story in 1954, The Caves of Steel. While the I, Robot stories are conceptual mysteries exploring the ways in which robot laws cause robots to malfunction, The Caves of Steel, along with its direct sequel The Naked Sun, is a genre-mixing detective novel. It features homicide detective Elijah Baley and the humanoid robot R. Daneel Olivaw (the R stands for Robot, dontchaknow), who become partners in solving a murder mystery in a world where humanity largely lives in giant, overcrowded underground cities on Earth, with a few interplanetary colonies grown rich on robot labor.
While the whodunit in The Caves of Steel does not pivot on the creative contravention of the Laws as in the early robot stories, the laws themselves remain a fundamental part of the novel. This is even more true in The Naked Sun, which focuses on the space colony of Solaria, where robots outnumber humans 10,000 to 1, and every human lives in isolation, only ever interacting with one another over holographic video chat (who could imagine such a world!). Because of this human isolation, the murder must have been committed by a robot. While the mystery’s solution is rather mundane (the murderer tricked a robot into poisoning their master), the motivations of the murderer, roboticist Jothan Leebig, center around his attempts to circumvent the Three Laws.
Asimov’s stories become logic puzzles, mysteries hinging on a nuanced interplay of the laws, developed in an era when computer programming and debugging were far from commonplace. They’re thought experiments, involving the creation of programmable beings that remain to this day a nigh-unattainable goal, living in societies that remain far-fetched dreams. They’re fun what-ifs, interested in exploring esoteric counterfactuals that may someday come to pass.
Gillen’s Principles, on the other hand, differ from Asimov’s laws in a few important and subtle ways. First, as befits a race of ancient space gods who handed down the rules millions of years ago, they’re vague. How exactly does one “protect the machine?” Are human beings part of the machine? What is “excess” deviation, exactly, especially amongst the Deviants, whose entire purpose is to deviate (it’s right there in the name!)? Are the principles in order of importance? If they are, how can the Eternals uphold their most important principle, to protect Celestials, when the Celestials have abandoned Earth entirely? Should they go off into the depths of the cosmos to find the Celestials? Wouldn’t that be an abandonment of the second and third principles?
The other major difference is that the Eternals aren’t robots. Each Eternal is their own individual with their own moral center and their own interpretation of the principles. Gillen and Ribić spend many pages in the first arc presenting flashbacks to different points in the lifetimes of the Eternals that provide insight into their morality and ethics. The flashbacks are presented as digressions in the tale being told by The Machine, while also subtly preparing the reading audience for the twist that comes in issue #6.
Issue #2’s flashback to the Bronze Age involves Ikaris looking through a time distortion and seeing a monster arising from the sea near a boy. When he visits the beach where he saw the boy, he asks the boy to notify him by lighting a pyre if and when the monster arrives. Ikaris returns many years later to discover that the boy he met has grown old and died and the pyre that has summoned him is his funeral pyre. Ikaris is horrified to learn that the boy had wasted his entire life, sitting watch in what he believed was a commandment from a god. The boy’s descendants greet Ikaris with hostility: because of Ikaris, the boy neglected his children and his grandchildren, ignored every fulfilling part of his life because of one moment spent in the miraculous presence of Ikaris.
Besides the lesson Ikaris learns about his great responsibility to the humans he meets, there is a parallel to the Eternals’ own abandonment by their own gods, the Celestials. In the Eternals: Celestia one-shot, Ajak and Makkari, the two Eternal high priestesses, contend with the knowledge that the Celestials have abandoned them, leaving them rudderless regarding the first Principle of the Eternals. Ajak, the more zealous of the two, considers their new status quo to be a test of faith. Makkari, more prone to lateral thinking, sees the lack of Celestials to protect as more of an opportunity: why not just create a new Celestial for themselves to worship/serve/protect? Are these not, to a certain extent, different reactions to the plight of the boy on the beach?
The flashback in issue #3, set 100,000 years in the past, features a meeting between Thena and Sersi. Sersi has made a discovery about Thena’s latest lover among the Deviants, a scientist named Tzaigo. Tzaigo, it turns out, has been conducting Frankenstein-style experiments on humans and Neanderthals in order to extend his own life, harvesting their organs to replace his own, in order to emulate his beloved Eternals. Upon searching his lab and finding the grisly evidence of his experiments, Thena kills Tzaigo in a rage.
First, this vignette serves to highlight the difference in outlook between the two Eternals. Thena engages in long affairs with Deviants, investing herself emotionally in the relationships, admiring their ability to do the one thing Eternals are incapable of: change. Sersi, on the other hand, limits her love to “the act, not the heart.” Thena sees her relationships with Deviants as being partnerships of equals (she uses the Eternal pronoun which means “one as real as an Eternal” when referring to Tzaigo). In Sersi’s case, she views her affairs as a “weakness of character.” Her hedonism is akin to a person taking pleasure in a good steak, or a trip to the zoo. We can enjoy the steak while holding concerns about the underlying environmental and ethical effects of consuming (non-human) animals, justifying our own pleasure in their suffering however we can (or not even bothering to justify it at all). She views the human capacity to change as a weakness. Their ultimate mutability is growing old and dying, after all, while Eternals do neither. Little wonder, then, that Sersi is among those Eternals who know that The Machine kills people, but can live with the knowledge. We even see her, in issue #5, casually paralyze Tony Stark in order to draw out Gilgamesh, a renegade Eternal. Even as she apologizes to Stark for manipulating him, she edits his memory of the events for a purpose yet to be fully revealed.
Tzaigo, on the other hand, views Thena as something greater than himself, a perfection to be emulated, an end-goal for his deviance: a stasis to be achieved. Conversely, when Sersi refers to Tzaigo as considering Thena “just another specimen,” he replies, “Not just another specimen. A prime specimen.” His non-denial of her specimen-hood reveals that he views Thena as beneath him, in a certain sense, even while he holds her perfection up as a desired goal. To Tzaigo, Thena is a goddess, but also an object.
Second, this flashback foreshadows Thena’s reaction to the dark truth about Eternal longevity in issue #6. Thena reacts to Tzaigo with righteous fury (which The Machine labels “love”), and much later with shock and horror to the revelation that Eternal immortality comes at the expense of human lives. While the particulars (and, possibly as importantly, the aesthetics) of Tzaigo’s experiments differ from the more sterile Eternal resurrection achieved by The Machine, the results are more or less the same: extended life for some, death for others.
At this point, we need to discuss two conflicting ethical frameworks: virtue ethics and utilitarianism. Utilitarianism concerns itself first and foremost with the consequences of actions, while virtue ethics is concerned chiefly with intentions. In utilitarian terms, the consequences of Tzaigo’s experiments and Eternal resurrection are, if not identical, largely similar: one dies, another lives.
“But surely,” you’re thinking, “there’s a big difference in extending the life of a superhero like Ikaris, who is likely to save hundreds, perhaps billions of lives and a Deviant who views humans as fodder for his experiments?” Here’s where we come to one of the classic arguments against utilitarianism: we are terrible at judging consequences. Ikaris might go on to save more lives, but he might not. He might, upon learning that his death will mean the death of an innocent, just decide to mope around. He might, at a crucial moment, not risk his own life and in so doing get many people killed. There’s also the consideration that The Machine doesn’t discriminate between good and evil Eternals: all are equally given resurrection.
Even Tzaigo might have gone on to use his research to benefit all Deviants and even humans, but was never given the chance. He’s a Deviant after all, and capable of profound change. (NB that I am not arguing that Tzaigo was either ethically or morally just, simply that we don’t know, in factual terms, what the ultimate consequences of his work would have been had he been able to continue it).
And this is not to say that virtue ethics doesn’t concern itself with consequences. It does, but it foregrounds the intention behind the act, acknowledging that consequences are a tricky thing to predict. So, a utilitarian might look at the project of the Eternals, and weigh the costs and benefits, and say, “sure, Ikaris’ resurrection has killed a random boy in Queens, but Ikaris has just saved the lives of nearly 8 billion people, and as an unchanging Eternal, is likely to do so again.” A virtue ethicist might look at the situation and ask, “is it ethical now for Ikaris to risk his own life, knowing that to do so would also endanger an innocent human being, even if it weren’t the intended consequence?” They might look at the entire project of the Eternals, look at their principles and say, “Protect Celestials? Is that necessarily a good thing? Celestials famously sit in judgment of the Earth and may well annihilate us all. Protect the machine? What, the machine that uses human lives as its fuel? Correct excess deviation? Isn’t that just code for maintaining the status quo?” And as to the intentions behind the Celestials themselves: who knows? Who could possibly know the mind of unknowable Kirbyan space gods?
This debate becomes even more salient in the flashback in issue #4, set in 1241, during the Mongol invasion of Europe under General Subutai. Kingo is horrified by the death and destruction in the army’s wake, and finds his fellow Eternal Druig among the Mongol soldiers. At first he assumes that Druig is behind the invasion, but Druig assures him that he is merely there to observe, to learn from the invading army. He even gives Kingo the means of ending the invasion: the location of the sleeping general. Kingo sneaks into the general’s tent, but does not kill him.
It’s not that he won’t murder a sleeping man, but rather “He would protect the humans, but if he protected them from themselves, it was something else. If he did that, he may as well crown himself king.” And, just to emphasize the unknowableness of consequences, the army retreats east regardless, as the Khan has died and thus the army must return to Mongolia to choose their new ruler. Druig even points out to Kingo that had he murdered Subutai, the army might not have returned east at all, but rather continued their slaughter across Europe. He luxuriates in the opacity of causes and effects, seeking only to increase his knowledge through experience.
So what are we to make of Druig and Kingo’s choices here? A utilitarian view would be that Kingo’s choice not to kill Subutai likely saved millions of European lives. The fact that he didn’t know it would, that he spared Subutai, believing that he was condemning the millions in Europe, wouldn’t enter into the equation at all. A virtue ethical view of Kingo’s choice would be happy that those lives were saved, but also would look at the reasons Kingo made his choice. If he had chosen to murder Subutai with the sole consideration of saving people in Europe, it would be a virtuous thing. But that was not Kingo’s sole consideration in this decision: he examined his own reasoning and found his motivations wanting. He wanted to save those people, of course, but to murder Subutai in his bed would be a statement that Eternals are above humanity. That they are shepherds of an entire species, deciding which humans would die for the greater good. He may as well, as the Machine puts it, “crown himself king.” He acknowledges that he may be an Eternal, but that does not make him a god to sit in judgment. His principle is to protect the machine, not to protect the humans from themselves, no matter how distasteful.
To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.
We’ve all heard of the Trolley Problem by now. You’ve probably seen it acted out on The Good Place. It’s a thought experiment where you’re given the option to divert a runaway trolley that is about to kill five people and instead send it down another bit of track and only kill one person. All other things being equal, the obvious moral choice is to kill the fewest people, even though we are killing a person by diverting the trolley. But here’s the thing: this is not the be-all rule to all things in life. In order to understand why that’s the case, let’s look at the original use of the Trolley Problem in the philosopher Philippa Foot’s essay “The Problem of Abortion and the Doctrine of the Double Effect.”
In this essay, Foot, a proponent of virtue ethics, poses the trolley problem as a very simple moral dilemma, with an intuitive solution. Why then, Foot argues, do we find it repulsive to, say, murder an innocent man to harvest his organs (shades of our old friend Tzaigo there) in order to save five people who need them? In a strictly utilitarian universe, the two dilemmas are the same, as the choice between consequences in both cases is the same: one life versus five lives.
For Foot, it comes down to a distinction between positive and negative duties. What she means by positive duties are duties that compel us to do some good in the world. It could be curing a sick person or feeding our child. Where things get tricky is when we have negative duties, which is to say, our obligation to not do harmful things. In the case of the runaway trolley, we’re faced with a choice between two negative duties: we shouldn’t kill one person, but we also shouldn’t kill five. When faced with two negative duties, we choose the one which causes the least harm. In the case of the doctor and the organ harvesting, the doctor has a positive duty to help people, and a negative duty not to kill. Intuitively, the negative duty not to kill an innocent trumps the positive duty to help others. In cases where we are faced with two conflicting positive duties (Foot uses the example of having enough drugs to treat five sick people or one really sick person who would need five doses), one must go with the positive duty which helps the most people.
To reject this conclusion, Foot argues, is to put us at the mercy of evil people. Suppose Thanos, for example, asks you to kill one innocent person, lest he kill five innocent people. A utilitarian answer would be to kill the innocent person, thus saving the five. Congratulations, you now kill people for Thanos. Nobody wants that. It would also be a different dilemma to have Thanos make you choose between his killing five people or one.
I also mentioned in my description of the trolley problem “all other things being equal,” which, of course, they very seldom are. The problem itself begins with the assumption that people are as interchangeable as protons. For example, we would understand the decision to pull the trolley lever to save one’s own child rather than the lives of five strangers. We might judge the person harshly who makes this choice, valuing the five lives less than one, but we could certainly understand their decision. But make no mistake: we would regard with the utmost suspicion a person who’d choose five strangers over their own child.
Which is something Gillen and Ribić understand completely. They devote pages and pages of the first arc to Toby Robson, the boy whom The Machine eventually kills in order to bring Ikaris back. We see his family; he hangs out with the teenaged Eternal Sprite, and they get to know each other. We don’t learn that he’s special or remarkable in any way. He’s decidedly average, but also unquestionably human. All of this attention paid to Toby serves to remind us that even though Toby is chosen at random from amongst the vast sea of humanity, he is still a full person, deserving of life and dignity. It’s one thing to handwave away a random stranger somewhere off in the world in a cold utilitarian equation; it’s another to know that The Machine will kill off Toby Robson, person.
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
So where does that leave Asimov’s robots? Late in Asimov’s life, he set about merging his three major storytelling universes into one cohesive whole. These three universes were the Galactic Empire universe, where mankind rules over a vast empire and Earth is radioactive; the Foundation, where a small foundation of academics keeps the flame of civilization alive long after a central galactic empire has risen and fallen; and our old pals, the Robot series. Much like The Eternals and the MU, the Robot series was not originally intended to fit in with the other two series. This connection only came about in the 1980s, when Asimov returned to writing about his beloved Foundation. Over the course of the novels Foundation’s Edge, The Robots of Dawn, Robots and Empire, Foundation and Earth, and the posthumous Forward The Foundation, Asimov crafted a metanarrative that conclusively tied Robots, Empire, and Foundation together, by positioning R. Daneel Olivaw (yes, the one who teamed up with a homicide tec to solve robot murders) as the puller of secret strings over the course of millennia, guiding the development of the human race through its empire phase and finally becoming the basis of a group consciousness for the entire galaxy, called Galaxia.
One of the chief creations of this phase of Asimov’s career is a new Law of Robotics, which he dubbed The Zeroth Law: A robot may not injure humanity, or through inaction, allow humanity to come to harm. Asimov writes about the development of the Zeroth Law in Robots and Empire (the novel that serves as the bridge between the Robot and Empire stories, natch). In the novel, R. Daneel and his companion R. Giskard Reventlov, an older robot with unique telepathic powers (in the mold of Herbie, but with the added power of being able to manipulate the minds of humans as well), discover a plot to irradiate the planet Earth in the mind of the scientist Kelden Amadiro. Amadiro’s motives are genocidal, and so R. Daneel and R. Giskard circumvent their programming directives not to harm human beings by inferring a Law that overrides the First Law, since all of humanity is at stake.
So, in this case, we have a pretty clear ethical directive, from both a utilitarian and a virtue ethics perspective: it’s evident that R. Giskard is right in erasing the plot from the mind of Amadiro. Heck, they don’t even need to kill or incarcerate him, only erase the bad plan! But then things get a bit iffy: Amadiro has a co-conspirator, Levular Mandamus, who had intended to draw out the irradiation over the course of decades. The idea is that humanity is stagnant on Earth, and needs a shove to begin colonizing space in earnest. R. Daneel and R. Giskard decide to allow Amadiro to go forward with the plan of irradiating the Earth, ultimately making it uninhabitable. I’ll repeat that: in order to allow humanity to flourish, these two robots decide to allow the Earth to become a radioactive wasteland. How do their positronic brains allow them to reach such a conclusion? Well, for one thing, they’ve developed a primordial version of psychohistory, which readers (and now viewers) of the Foundation series will recognize as the mathematical study of statistics in order to predict the future behavior of large groups of people. With Asimov’s invention of psychohistory, Daneel and Giskard have circumvented one of the major shortcomings of utilitarianism: an accurate prediction of the outcomes of actions.
This doesn’t, however, make their (in)action any less horrific. It’s unclear if Asimov condones the logic of the robots here, or if he only does what is necessary to tie the Galactic Empire books and the Robot books together, but even at the time of the book’s publication, reviewers found the application of the Zeroth Law to be ill thought through. Dave Langford, writing for White Dwarf, opines that the Zeroth Law “works out approximately as ‘the end justifies the means’. For some reason the author doesn’t even seem mildly worried by the implications.” Indeed, the robots here fall into the trap that Kingo avoids: they set themselves up as the benevolent rulers over humanity, allowing an atrocity to happen for a theoretical greater good.
Asimov, for all his early desire to depict robots as helpful companions for humanity, has his creations subvert the intentions behind his rules. He renders his robots into grotesque manipulators, engineering the deaths of millions, the collapse of empires, for the projected good of humanity as a whole. The programmers of the Three Laws had some notion of Philippa Foot’s positive and negative duties, with the use of the phrase “through inaction,” but the robots lack the fundamental hierarchies that govern how positive and negative duties interact when they come into conflict with one another. Asimov’s robots are the result of a rudimentary attempt to distill into a few simple lines of code the entirety of a system of ethics, which, when applied without constant guidance, turns them into a twisted mockery of morality.
Gillen and Ribić’s Eternals, on the other hand, for all their protestations that they are separate and distinct from humanity, show the same range of virtues, vices, intentions and motivations that are to be found in humankind. While they have their Principles, they remain just that: a set of guiding truths to help them to make decisions as fully sentient beings. In the end, they are not laws to be strictly observed, but rather the basis for an ethical framework that allows for different interpretations, different behaviors, and ultimately different values. What remains to be seen is how a change in their understanding will affect their supposedly unchanging natures. Will this new knowledge provide a jolt to their static natures, which have caused their society to stagnate over the course of millennia, or will it serve as the crisis that ultimately brings it all crashing down? Perhaps a more useful comparison would be to Isaac Asimov’s masterpiece, the Foundation series, based loosely on Edward Gibbon’s Decline and Fall of the Roman Empire. It tells the story of an ancient empire, grown decadent and moribund, under the leadership of an inept emperor, collapsing as crisis after crisis brings it crashing down. Will the empire of the Eternals suffer a similar fate? And if so, upon what foundation will they build whatever is to follow?