The Square Inch

The Robot Read My Books

No.253: April 4, 2025

Brian Mattson
Apr 04, 2025

Welcome to The Square Inch, a Friday newsletter on Christianity, culture, and all of the many-varied “square inches” of God’s domain. This is a paid subscription feature with a preview before the paywall, so please consider subscribing to enjoy this weekly missive along with an occasional “Off The Shelf” feature about books, a frequent Pipe & Dram feature of little monologues/conversations in my study, and Wednesday’s “The Quarter Inch,” a quick(er) commentary on current events.

Dear Friends,

I’ve got some thoughts about “Liberation Day” and tariffs all the way at the end if you want to skip down there or at least stick around.

But let’s discuss something else. The other day I ran across this Substack essay by Katelyn Beaty about how Meta (Facebook’s parent company) “stole” or “pirated” a whole bunch of books to train their Artificial Intelligence platform. It strikes me as a very good example of how AI is confusing to people—it appears, with all due respect, to have confused Ms. Beaty.

Last weekend in Florida an attendee asked a question about Artificial Intelligence during the Q&A, and all I could really tell them is that AI is here and there is no putting the genie back in the bottle, and that AI is way scarier than they imagine. I didn’t say that to scare them, actually. What I meant—and I explained this—is that AI’s capabilities are far beyond what the average person thinks they are. Even one of my co-speakers analogized AI to a “really good golden retriever.” That is, it is good at finding things and organizing them for you. It fell to me to demur:

AI is not a golden retriever. That sort of thing has been around since America Online or Netscape or even Prodigy (anyone remember that?). A search engine is a “golden retriever.” AI is something else entirely.

I shall tell you what concerns me. There is a really deep-seated need or desire, especially for Christians, to minimize AI and its capabilities. I get it; I really and truly do. We believe that human beings are imago Dei and there is something about a machine or digital entity exhibiting “rational” thought that seems to threaten our very anthropology.

Here’s a perfectly conventional example, from Beaty:

We’ve all come across AI writing that sounds so close to human writing, but it’s missing something — the mysterious something that separates (hu)man from machine. It’s worth remembering that AI can’t actually create; it can only mimic. We, though, can create — not ex nihilo as only God can, but by taking the raw elements of the world of objects and ideas and adding our own inimitable voice and imagination. The world needs more of our creative work.

This dismissiveness of AI works for her and for other casual readers. But it is extremely short-sighted, not to mention wrong in important ways. What is she going to do when she encounters AI writing that doesn’t just sound close to human writing, but is actually indistinguishable from the real thing? What will she do when she encounters AI that “creates” things in such a way that the product is indistinguishable from a human work-product? What will she do when she encounters AI that manifestly is not “mimicking”? The criteria she has set for herself mean that she is setting herself up for an epistemic crisis. I hate to break it to you, but AI has blown through every single one of these barriers.

I wholeheartedly admire and agree with Katelyn Beaty about the importance of the imago Dei, but she seems to be operating with a weak, and therefore fragile, version of it. The image of God is not found in “creativity,” per se, as the paragraph above suggests. The squirrel in my backyard exhibited incredible ingenuity and, well, creativity last week when he figured out how to crack open the bird suet container hanging out of reach from a branch. Nature is full of creative creatures. Look at how some birds craft their nests; watch a pod of orcas hunt. Watch a documentary about an octopus sometime. We need a more holistic doctrine of the imago Dei than just creativity. But even so, here’s something evangelicals would do well to internalize: AI is itself a monument of human creativity. We “created” it. The knee-jerk need to imagine an antagonism or competition between AI and “humans” overlooks that monumental fact. Artificial Intelligence is not sui generis.

We are in a brave new world, no doubt. This is uncharted territory. It is, in many ways, scary! And it seems to me Christians can do one of two things:

1) Pooh-pooh AI, call its work-product “schlock,” insist that it is “missing that mysterious something,” claim that it “can’t” do this or that, and then get embarrassed when it turns out it can. I am old enough to remember people insisting that computers would never beat a Grandmaster at chess (haha!) and laughing uproariously at the notion that automobile factories would someday build cars with robots. People thought that ludicrous. And, well…

2) Squarely face reality, recognize the technological advances, be amazed at them, and get to work at the cutting edge of these breakthroughs, exploring their potential and finding edifying and God-glorifying uses for them.

Option 2 is far better than Option 1.

That said, this wasn’t really what Ms. Beaty’s essay was about. She’s upset at Meta “stealing” or “pirating” her books. You see, apparently Meta accessed millions of books through an illegitimate database of stolen or “pirated” books called LibGen, and used them to “train” their AI program. I am sure this is probably true—my own books have, in fact, been stolen by LibGen. But before we jump on the outrage bandwagon, it might be helpful to ask some definitional questions. What do we mean by “pirated” or “stolen”?

LibGen stole and pirated books. What I mean by that is that they obtained digital copies of books without paying for them and are distributing them. This is clearly theft of intellectual property. LibGen did not pay publishers or authors for the books, and is yet (presumably) financially profiting from the stolen assets. Clear enough.

Meta is a slightly different question. If they just obtained all the books from LibGen, then, yes, they too obtained a “stolen” or “pirated” copy of the books. This is a no-no, of course. But unlike LibGen, they are not re-selling or distributing the books. They simply had their AI robot read the books. More on that in a minute. It is worth noting that other AI companies spent a lot of time and resources negotiating with book publishers to get their training datasets. They paid for the access to the books, in other words. And that has caused a different sort of conflict between publishers and their authors; book contracts did not anticipate revenue from AI training, and now that publishers are getting paid, authors have to somehow negotiate with their publishers to get a share of something that isn’t in their contracts. That’s a separate problem.

But Ms. Beaty’s ire seems to be up for a different reason. She seems to think an AI robot reading her book is itself “stealing” or “piracy.” She calls this akin to “plagiarism.” Honestly, I fail to understand this. Someone who (or something that) reads your book is not “stealing” it nor “pirating” it. If they reproduce it, distribute it, or quote it without attribution, they might be. But reading and learning from and being influenced by a book is not “stealing.” It is what the author had hoped for and intended, one would think. So, as far as Meta is concerned, I guess their crime is obtaining this book without paying the publisher and author. Fair enough. But something tells me that if Meta had paid her publisher the $19.95 or whatever, she would still be unsatisfied because she thinks having a robot read the book is somehow nefarious in and of itself.

This is a category mistake. AI is akin to a student—an unbelievably brilliant student—who was assigned a reading list of millions of books. In that scenario, I want AI to be influenced by my scholarship and writing. I want the program to know what it is talking about when someone asks it a question about, say, Herman Bavinck’s doctrine of the imago Dei. I don’t know if Ms. Beaty is a teacher or not, but supposing she is, if one of her students isn’t “stealing” by reading a required book, then neither is Meta AI “stealing” a book by reading it.

So, yeah, I am bummed I didn’t get a few bucks from Meta for having their robot read my books, and I wish all the luck in the world for those who are taking legal action on that front.

But it doesn’t bother me in the slightest that the robot read my books.


For those who, like Ms. Beaty, are skeptical of AI, think its product is “schlock,” and believe it can only “mimic,” I will include here a conversation with Claude. It is on a topic I bet would be of great interest to her: C.S. Lewis.

© 2025 Brian Mattson