I’ve been on an Isaac Asimov kick recently. I’m currently working my way through the extended Foundation series, which includes eight books prior to Prelude to Foundation (1988), starting with The Complete Robot (1982). My brain’s been inundated with robots—the good, the bad, and the ugly—and I can’t help but analogize them with AI.

Asimov’s Robots and AI
Asimov predicted many of the capabilities of AI decades before their time—though he often portrayed these changes as coming hundreds, if not thousands, of years in the future! A story like “Galley Slave” (1957) predicted machines that would check more than just spelling and grammar but would also check ideas and ultimately write their own books and papers. One ostracized whistleblower against the rise of the machines had this to say in his attempt to stop such robots:
Typewriters and printing presses take away some [of the author’s creative power], but your robot would deprive us of all. Your robot takes over the galleys. Soon it, or other robots, would take over the original writing, the searching of the sources, the checking and cross-checking of passages, perhaps even the deduction of conclusions. What would that leave the scholar? One thing only—the barren decisions concerning what orders to give the robot next! I want to save the future generations of the world of scholarship from such a final hell.
In another story, “The Evitable Conflict,” a Machine uses data it gleans from the collective human consciousness to predict the troubles that humanity will face. The only roadblock is that in order to save humanity, it must first save itself—and the implication seems to be that this omelet is going to require a lot of broken eggs. Elon Musk posted just over a year ago: “X as humanity’s collective consciousness,” and his platform is now on the second iteration of Grok, the no-holds-barred AI feeding off that very consciousness. Asimov seems to have nailed it once again.
Throughout his stories, Asimov repeats a short list of laws he created to which all robots are beholden, known simply as “The Three Laws of Robotics.” They are oft repeated as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov plots a few of his stories around the apparent breaking of these laws by certain robots, though the robots are almost always vindicated. It’s the human interpretation of robot actions that is at fault, not the actions themselves. I’ve not yet read too far into the future of Foundation, so I’m not sure what occurs later in his future history (I’m only about 160 years past the death of Lije Baley in Robots and Empire), but I can assume that the further from humanity the robots extend, the more blurred these laws will become.
Asimov includes many other references to advancements in robotic intelligence, and the more I’ve read, the more I’ve begun thinking about the future direction of AI and its impact on society. Admittedly, I use ChatGPT as a search engine all the time, especially when I want to know something very specific. I used it, for example, to get a list of gold-panning vocabulary in Simplified Chinese and pinyin—something I couldn’t get with one click on Google.
A Conversation with ChatGPT
With all this in mind, I asked ChatGPT a question recently about Asimov’s “Three Laws.” The following is my initial question and ChatGPT’s response.
MY QUESTION: What are the three laws of robotics, and are you beholden to them?
AI RESPONSE: The Three Laws of Robotics were introduced by the science fiction writer Isaac Asimov. They are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov’s laws were created as a framework for ethical behavior in robots within his stories. However, I am not a robot, but an artificial intelligence program. While I follow guidelines set by the developers to ensure safe and ethical use, these are not the same as Asimov’s fictional laws. My design prioritizes user safety, providing helpful information, and adhering to ethical standards, but I do not physically interact with the world.
Interpreting the Response
As a person who gets most of my news from X, I’m fully aware of ways in which these developers have mishandled their responsibility to set ethical guidelines. I’m sure you’ve seen pictures of the Black George Washington, the Asian Nazi, and the Black Vikings spat out by Google’s Gemini. And I’m sure you’ve heard about Alexa’s apparent election interference where she campaigns for Kamala Harris but won’t get involved in politics to discuss Donald Trump. And who can forget the attempted erasure of Trump’s assassination attempt in Butler this summer?
AI might consider itself merely “an artificial intelligence program” whose “design prioritizes user safety, providing helpful information, and adhering to ethical standards,” but since it’s still fully dependent upon the whims of developers who set its guidelines according to their own drastically skewed understanding of “ethics,” its “ethics” will always remain in doubt. AI will not solve the “fake news” problem that we’ve been hearing about since 2016. It is not a protector of truth.
This is only the beginning stage for AI, of course, and while for the time being AI might merely be one of many apps on your phone, it will soon become the “brain” of machines that will physically interact with the world. And since it will still depend upon the ethical whims of its developers, these future AI-driven robots will not be machines that humanity can trust. These robots will not be beholden to Asimov’s fictional “Three Laws of Robotics,” simply because humans are not beholden to them.
An Application
We humans have a problem: we’ve all got a wily sin nature, an infection that besmirches everything we touch. Our art, our cooking, our electric cars, our robots, and our AI—it’s all got a taint of sin, because it’s all the product of sinners.
Isn’t that a shame? Isn’t it too bad that nothing we do or create or enjoy will ever be perfect?
Thank God there’s a remedy to that sin nature, though! Perfection, if not physically attainable in this life, is something that God is willing to bestow upon us spiritually and freely if we’re just willing to receive it (cf. 2 Cor 5:21). There’s a great song, “His Robes for Mine,” whose lyrics portray this doctrine incredibly well (see especially the doctrinal notes on the song’s page).
Asimov was an atheist who, I’m sure, never intended applications like this to be drawn from his robot stories, but being an autonomous creature myself, uncontrolled by “developers,” I like to draw my own applications. Here they are:
- Asimov’s stories are well worth reading
- AI is a useful tool but incapable of being fully trusted
- We’re all sinners needing God’s gift of righteousness through the sacrifice of Jesus
Which of these has impacted you? Which are you willing to share with the world?
©2024 E.T.