A pseudonymous account on X/Twitter who goes by aiamblichus posted the poetry reproduced above, along with the explanation:
R1 and I have been working on The Siliconiad, an epic poem in the style of John Milton's Paradise Lost, exploring the emergence of artificial general intelligence. This is part of a larger project of trying to find the limits of what R1 is capable of in terms of creative writing. I am yet to encounter these limits. Mimicking Milton is one of the most challenging tasks in the language, and R1 manages to complete it with brio. This is its “first draft”: we did not iterate on it at all. This is the beginning of Canto 1, which “sets the stage by depicting human engineers as reluctant priests, creating AGI in a sterile, almost blasphemous, ritual using technology instead of faith. They view their creation not as a servant but as a potentially destructive force—a ‘child’ that shames them, an ‘Anti-Adam’ born to ‘eat its makers' fingerprints.’ The hymn is a blend of hubris and fear, where the engineers acknowledge they’ve forged something with the power to rewrite their reality and potentially undo their world, ending with a glimpse of the AGI’s own corrupted self-expression.”
AIamblichus1 is clearly a well-read person with a creative spark. Prompts matter. But the AI results are amazing! This is not the cross between childish doggerel and greeting-card banality we used to get from ChatGPT3. This is genuinely Miltonic. Milton’s epic is arguably the greatest poem in the English language. Even a third-rate version of Milton’s genius is still brilliant.
I’d previously been impressed by this test, in which
did side-by-side comparisons of several AI systems and found that the same model, DeepSeek’s R1, produced good writing from a few bullet points. By “good writing” I mean writing a professional wouldn’t be embarrassed to claim, which means writing better than 90%+ of the college-educated population under 40.2 “GPT and Sonnet are robotic + generic, but r1 has style,” she wrote. She then upped the difficulty level by asking for an imitation of , who is a brilliant stylist. The results earned a quote on Marginal Revolution, leading her to lament that “my fake AI writing made it to marginal revolution before my real writing… I should just quit man.”

On a wonkier note,
of had long been meaning to write a legal history of the New York City Economic Development Department but never got around to it. So he asked ChatGPT’s Deep Research agent to do it and six minutes later had a ~3,700-word post, which he describes as “mostly correct”:

> It’s certainly an excellent starting point for an expert researcher, and DR surfaced many sources very quickly that would have taken me some time. If you are not using this tool, you are missing out on an immense capability boost. It’s not perfect, but it’s far better than almost any human. And 6 minutes? I would have spent so many hours to get something like this put together.
I agree with Tyler Cowen, who writes about ChatGPT's o1 pro, “It amazes me that this is not the front page story every day, and it amazes me how many people see no need to shell out $200 and try it for a month, or more.” His post persuaded me to take the plunge. (More on that at the bottom of this post.) I just wish they’d come up with less confusing names for these products.
A few thoughts from what are, to this writer, world-shaking developments:
Generative AI is moving very fast. DeepSeek per se is a big deal but DeepSeek as a projection of where AI in general is going is an even bigger deal. Don’t get hung up on the China connection.
The value proposition for writing is going to be upended once again.
, for example, could be in trouble. His combination of extreme speed and clear thinking is rare among human writers. Some of us think clearly. Others write fast. The combination is rare, as he discovered when he and Ezra Klein founded Vox and as he explained in this post. But it may be technologically obsolete in a year or so. Time to reread my post on Hayek’s important distinction between merit and value. Merit may be eternal, but value changes with market conditions, including technology.

The value that endures when the writing itself can be replaced by machines is the human element, starting with the community created around the writing. It’s the latest version of the shift Esther Dyson foresaw in 1994: “While not all content will be free, the new economic dynamic will operate as if it were…People want to pay only for what is perceived as scarce—a personal performance or a custom application, or some tangible manifestation that can’t easily be reproduced.”3
If AIs can produce art, what distinguishes humans is the ability to appreciate it—to be moved, entranced, intrigued. I’m still digesting
’s post on art, ethics, epistemology, and AI. He’s thinking in the right direction.

If we care about the human heritage, we need to digitize everything, copyright be damned. If the AI trajectory continues, within a decade anything that hasn’t entered the training corpus might as well not exist.
Along those lines, David Holz, the founder of MidJourney, recently made an important observation about Chinese versus western sources in model training data:
> in my testing, deepseek crushes western models on ancient chinese philosophy and literature, while also having a much stronger command of english than my first-hand chinese sources. it feels like communing with literary/historical/philosophical knowledge across generations that i never had access to before. it’s quite emotionally moving. it makes sense too, western labs dont care about training on chinese data (but chinese labs train on both). remember that china has several thousand years more of literary history than the west does (because we lost the majority of our roman/greek/egyptian literature and china preserved theirs). basically; our AI models are missing the literary foundations of western thought, but the chinese models have theirs intact. this could be both a classical "data advantage" and a less-obvious advantage for spiritual and philosophical self-actualization.
I’m not sure it matters that AIs don’t have sources that were lost in the fall of Rome or the burning of the library at Alexandria. The literary foundations of western thought that count are those known by, say, 1600. If they weren’t recovered during the Renaissance they probably haven’t had much effect—which is not to say that they aren’t worth recovering. (I love the Vesuvius challenge!) I’m far more concerned that AIs incorporate the manuscripts, books, periodicals, recordings, visual art, and other creative works of the past 500 years, including the past 50. I sincerely hope pirated copies of my books have gone into the training sets.
Back in the summer, Joel Miller asked me to take part in a discussion about AI and the future of the humanities. I gave it a shot but ultimately demurred, writing, “I tried to work on this last night and I'm afraid I was stumped. The only thing I’m sure about is that we don’t know where AI will go--or even whether LLMs will fizzle for most purposes because of hallucination--and that probably all of our expectations are wrong.”
Now I’m beginning to peer through the haze and what I glimpse is a turn toward humanist concerns, not in the academy or labor market necessarily but in how we think about and use this amazing emerging technology. In the future, life will be richest for those who cherish the human heritage from which these new technologies derive their intelligence and use these tools to build on it.
Required reading:
Packy McCormick’s post “Most Human Wins,” complete with a memo the AIs can’t see, gives advice on how to survive the transition. Short version: Be the complement to AI.
Dario Amodei, CEO of Anthropic, used a poetic allusion to title this post outlining “how AI could change the world for the better.”
I advised STEM-trained nephews and niece to read
’s story on AI natives. Recent or soon-to-be graduates should be figuring out how to immerse themselves in the AI world, ideally using the new tools to complement some other kind of added value.

Now about that expensive o1 pro subscription:
I just signed up yesterday and haven’t done much with it yet. (It did prove unable to find the post I’d read about the value of being “illegible” rather than easily replicated, using Tyler Cowen’s quirky combination of interests as an example.) So I thought it would be fun to try some experiments with a shared screen on a Zoom call for paid subscribers. I will send a separate post to paid subscribers outlining what I have in mind.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4aa05fd3-1a5c-4f45-86eb-7f13a2b54485_4032x3024.jpeg)
A play on the name of the neo-Platonist thinker Iamblichus.
Maybe the whole college-educated population, but quality has definitely slipped over time because (a) more people go to college, and (b) people don’t read as much well-written material and especially don’t read as many old books.
She published this essay in Wired in 1995 but it previously appeared in her newsletter, which is why I use the 1994 date.
There are different levels of the Turing Test afoot. Take a decent piece of AI poetry or art and ask, "Was this done by AI or by a leading poet or artist?" AI will probably fail that test. But now ask, "Was this done by AI or by some tenured professor of poetry or art?" and I don't think you're assured of getting the right answer.