Photo Credit: ChatGPT. Editor's note: Yes, we do recognize the irony in using an AI-generated image.


Rabbi Yaakov Emden relates (in his Shu”t Yavetz 2:82) a story his father told about Rabbi Eliyahu Ba'al Shem of Chelm: he created a golem, which kept growing and growing, and, fearing it would consume the whole world, he sought to destroy it. R. Eliyahu succeeded, but the attempt left him injured and disfigured. A similar fear reportedly led the Maharal (Rabbi Yehuda Loew) to nullify the famous Golem of Prague, ostensibly on Lag Ba'omer in 1590 (some sources name the concern that the Golem would desecrate the Shabbat). These parables are not subtle; they demonstrate emphatically that Jewish tradition recognizes the dangers of engaging with the chimerical, those none-too-human intelligences crafted by human hands.





We now confront this very phenomenon in the form of artificial intelligence. AI, specifically what is called Generative AI, is a collection of computer algorithms trained on massive amounts of data that use mathematical probabilities to produce new content by predicting likely outcomes. In simpler terms, when asked something, it draws on all the information it has been fed, predicts the most likely next piece, and does that billions of times a second. In even simpler terms, it is a fancy random number generator (a modern Purim lottery). AI mimics human intelligence rather than thinking on its own. But it is remarkably good at doing so. Programs like Claude and ChatGPT can write essays, create artwork, compose songs, and respond in ways that make the average person feel like they are speaking to a real person. They do so with startling accuracy, and they are getting better with each software update. If the computer, as Gershom Scholem once called it, is Golem Alpha, then Generative AI is nearly Golem Omega.
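For readers who want to see the "fancy random number generator" idea concretely, here is a toy sketch in Python. The words and counts are invented for illustration; real systems learn billions of such statistics over text fragments rather than whole words, but the principle of weighted random prediction, repeated step after step, is the same.

```python
import random

# Toy next-word model. These counts are invented for illustration:
# they record how often each word followed another in some "training text".
COUNTS = {
    "the": {"golem": 3, "rabbi": 2, "city": 1},
    "golem": {"grew": 4, "slept": 1},
    "grew": {"larger": 5, "restless": 2},
}

def next_word(word):
    """Pick a successor with probability proportional to its observed count."""
    followers = COUNTS[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

def generate(start, max_len):
    """Repeat the prediction step, feeding each choice back in as new context."""
    words = [start]
    while len(words) < max_len and words[-1] in COUNTS:
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the", 4))  # e.g. "the golem grew larger"
```

Nothing in this sketch "understands" golems or rabbis; it only replays statistical patterns, which is why the output can sound fluent while being entirely ungrounded.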

This sort of technology unlocks immense potential in many fields: it can increase efficiency, handle menial tasks, and even shortcut the difficult work of creative synthesis, what we would call chiddush. It is no wonder, then, that the public has engaged in a frenzied adoption of AI in nearly every field of activity.

The Modern Orthodox community is not excepted from this frenzy. The past year has seen a veritable barrage of enthusiastic endorsements of AI from religious and institutional leaders, in everything from the mundane to the most sacred, even as study and scandal have begun to illuminate the many dangers of its use. We have seen teachers using AI to create pictures of themselves learning with past sages; rabbis proudly and loudly using AI to translate great sefarim and prepare shiurim; a slew of heavily advertised AI apps for curated “Torah learning experiences.” It has become a phenomenon that bears the unquestioned imprimatur of the roshei kehilla. And therein lies the rub. It is not that all of these are problematic uses in and of themselves, but that they are being done without any ex ante reflection on whether they might be actively harmful, or whether they lock us into negative paradigms, or whether they simply do not align with Jewish ethics – as, to some, was true of the Maharal’s Golem when it threatened to be mechalel Shabbos.

And there are ethical implications to consider, complex ones that should be defined and worked through before we gleefully fling ourselves into the singularity. The most apparent are issues about the use of AI in general: whether we undermine amelut baTorah, the toil of study, by having a chatbot do the work of interpretation; whether we are sacrificing the last vestiges of the Oral Law when we make everything explainable by machine; whether the role of posek is being undercut by an oracle that delivers prophecies from the cloud; whether there is any value, or ethical danger, in growing parasocial relationships with objects that mimic human speech patterns. But there are also many questions about AI as it is now, things that may caution against its use: whether we can accept the risks of a tool that is often wrong; whether we can disregard the enormous environmental damage done in building and sustaining AI systems; whether we should rely on a tool whose existence depends on the wholesale theft of others’ intellectual property. These are but a selection of the things we need to grapple with in defining a Jewish Ethic of AI.

Regrettably – truly regrettably – Modern Orthodoxy has not yet risen to the occasion. Our communal and institutional leadership has been almost universally unwilling to grapple with the major, and majorly complex, issues that confront users of AI (with some notable exceptions; Sara Wolkenfeld and Rabbi Jonathan Ziring come to mind). By ignoring it, they fail to provide one of the basic benefits of subscribing to an institutional faith: the definition of ethical norms around major contemporary matters. And it is not enough that some of these leaders can perhaps defend their own AI use. Without a clear and public treatment of the ethics of AI, the many in our community who are not deeply practiced in these finer distinctions will simply see role models giving their full endorsement to the use of chatbots in whatever guise they may appear. It becomes a true case of mar’is ayin, of giving others the impression that something verboten is in fact perfectly fine.

AI is not a value-neutral tool; it represents a fundamental shift from what came before in the way humans relate to technology. These are not the statements of a Luddite. In some sense, both statements are true of every new technology: each leap forward unlocks untold potential for good, but also for ill, “like clay in the hands of the potter.” But AI also creates novel questions that demand nuanced answers; its potential is exactly why we need to define a clear and detailed ethic, one that can be encouraged in our communities and implemented in our schools. Fortunately, we have the wisdom of Rav Emden, of the Maharal, and of more than two millennia of others to draw on. We need to do so before the AI golem grows too big to be contained; at that point, it will take a great and painful effort to put up guardrails, if such a thing can even be done.

Last month, the charedi community announced a public fast day as well as a complete ban on the use of AI applications. It is a decidedly charedi answer to the dangers of AI, one that, as with the Internet before it, will not work within our modern halachic community. But at least they are asking the questions; we fail to do so at our own (and our children’s) peril.


Joshua J. Freundel is an attorney whose work focuses on religious discrimination and free exercise, including recent Title VI suits against universities. A graduate of Harvard Law School, he lives in Cambridge, Mass.