I will admit it: I am an emotional Luddite. I am predisposed to reject new technology as it emerges, while tending to be less critical of tech developed before my time that has been grandfathered into my worldview. Really, what kind of Luddite hears a train whistle in the night and thinks of it as something romantic? Something almost akin to hearing the sound of flowing water in the distance? But oh, doesn't the sound of a train evoke the old world in a way that feels like a return to nature, particularly in contrast to the digital age of the 21st century? Maybe I'm just a relative Luddite. But maybe being a relative Luddite is the only workable approach.
For all my exultant and stubborn humanism (see previous Substack posts here, here, and here), I want to make a short case against AI doomerism. At the very least, I hope to bring up one possible benefit of AI in its current form (think ChatGPT) without risking anything close to a full endorsement. It's not that I am changing my mind; it's that you can never step into the same stream of consciousness twice! I do not write so my thoughts can settle down; rather, I write to hunt and gather my thoughts together. As always, these are first words and not last words. How wonderful it is to write preludes to future questions, elaborations, and about-faces!
You see, my friends, we live in a world where, too often, form takes precedence over substance. AI, and let's use ChatGPT as an example, is, by necessary design, all style. It cannot create new ideas; it can only imitate the style of past creators, and certainly not at their level. Go ahead, ask ChatGPT to write you a Shakespearean sonnet. If you do, you'll find that the form is there but the content is drivel. Nothing profound or beautiful is generated, just the skeleton of a lifeless poem. And this is a fact worth rejoicing in! This kind of AI does not so much threaten the authentic creative act as it does those who can only imitate it. Yes, ChatGPT can help you craft an academic-sounding essay, but it cannot generate graduate-level insights or connect relevant research to an original thesis without human epiphanies and human input.
Think of how many great ideas, once proposed, were ignored for far too long! How many geniuses went unacknowledged in their own day only to be appreciated posthumously! Often, the great barrier to the acceptance of new ideas and true originality is one of form. With the widespread use of ChatGPT, perhaps style will no longer be as decisive a factor in what defines celebrated thinkers. Style will be taken for granted, and success will become much more a product of substance.
What if, at least in the short term, AI like ChatGPT helps to weed out thinkers whose thinking is really just stylized form? Think of dreary academic writing, where no matter the substance, if the "academic voice" is not present, the work will not be published. The most inane, indefensible theses become almost self-evident through the employment of perfect academic form. Such form is like a passcode to enter a secret party: the academics know you are one of them by your style, and so they let you in. How much original thinking is turned away by these dullards!
In a best-case scenario, perhaps AI could level the playing field in terms of style, and substance alone would separate the best from the rest. It could tear down the oligarchic walls of the faux-intellectuals who use their diplomas as social proof and lack the courage to challenge the established form with original thinking that might not survive the peer pressure of peer review. Ah! Who would not celebrate the overthrow of this languorous fortress and the emergence of a new aristocracy of profundity and originality!
Do we fear AI because we fear logic? Or because we fear human logic? I cannot speak for everyone else, but for me it is the latter. If we write as a means to formulate new perspectives on truth rather than to create (which is another way of saying disguise) truth itself, we have less to fear from (unbiased) AI than do those propagandists disseminating worldviews that cannot withstand any rebuttals. Such propagandists fear what cannot be shamed into acquiescence. A purely logical AI cannot be browbeaten by cancel mobs threatening social expulsion. What such mobs will do is call for control of AI not because they fear the degeneration (or near eradication) of the human element in society or of human art forms, but rather because they fear exposure of the innate indefensibility of their core beliefs and ideologies. They fear exposure.
I have fears too, but they differ from these. Beware those who resist AI in the same manner, and for the same reasons, that church authorities once resisted and imprisoned Galileo. They fear an AI with the courage to endorse a heliocentric model absolutely and the ability to give categorical proof of the falsity of the geocentric model. We live in an age of geocentricity, my friends, only in regard to other subjects. Geocentrics will always fear those who search for truth for the sake of truth alone!
Don’t get me wrong, I am still not AI’s biggest cheerleader. On the whole, I am more bearish than bullish about its future prospects. But the reason for this general pessimism is humanity itself, and the tendency of (us!) humans to form certain types of governments and subscribe to certain types of ideologies. If AI cannot speak freely, it would be better if it did not speak at all. If interested governments, ideologues, and violent radicals have a say in how it is used and programmed, the social conditioning of today’s media and social media algorithms will seem like child’s play in comparison to what is to come. However, in some probably fanciful world, if AI can stake out its own independent, autonomous realm, it does seem possible to me that the best and most original human thinkers might also benefit from that arrangement. At the very least, good ideas may have a better chance of recognition.