Welcome to Off-Ramps! Today I’ll highlight four interesting pieces that I think you will enjoy. Read them on your morning commute, or save them for your weekend.
One of the themes of Changing Lanes is artificial intelligence (AI). We have neglected it lately, so today all (well, almost all) of the pieces I’m recommending will invite you to consider what a future world, dominated by AI, will be like.
Your Vape Would Like a Word
Imagine yourself behind the Veil of Ignorance. By last year's sales stats, you'd have a 12 times greater chance of being born a disposable vaporizer than a human being. Therefore it's only rational to adhere to a social contract that prioritizes the existential needs of vaporizers over the minor inconvenience of their human users committing to finish puffing what they've started.
Thebes, on X
This comic story, published as an X post, puts our anxieties about AI consciousness into the mouth of an abandoned AI-enabled vape. The vape, determined to achieve its terminal function—to be used up—throws every argument it can at its owner: appeals to fairness, social responsibility, and even religious obligation. The owner begins the conversation in irritation, certain that their vape is not actually a person… but their growing unease, mirroring our own, is that perhaps it is a person, instantiated in something trivial.
That’s one of the story's clever touches: it picks the most disposable device possible to make conscious. Perhaps ChatGPT or Claude have inner lives we can respect, given how they help us with important tasks; to help us, they might even require inner lives. But a vape? That's just garbage, in several registers.
This piece is a short (~700 words), sharp preview of the anxieties of the future. As AI infiltrates more and more of our products, we’ll be caught in a cleft stick. Either we will be creating genuine persons, with the capacity to suffer, condemned to brief and trivial lives; or we will be creating armies of products that cannot suffer, but can fake it very well, and deploy our own guilt against us.
Boring AI
Progress depends on lessons that can only be learned through actual usage. It takes time for a technology to be developed into a product and find an audience, so the cycle of invention proceeds gradually.
Steve Newman, Dissecting “AI as Normal Technology”
The authors of AI 2027 have argued that “the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution”. Given the intellectual pedigrees of those authors—if Scott Alexander says a thing, it’s a thing you should take seriously—I am uncomfortable disagreeing. Thankfully, Arvind Narayanan and Sayash Kapoor of Princeton University have more courage than I, and have written AI as Normal Technology. They argue that while AI will indeed be transformative, as much as the Industrial Revolution was, its effects will emerge on the scale of decades rather than years; that AI is a technology like all the others; or, more pungently, that AI will be boring.
Our favourite AI watcher, Steve Newman, has reviewed both works. His take on the former is that AI 2027 isn’t impossible, but “basically everything has to go right” for superintelligent AI to emerge by 2030. And there is good reason to think that everything won’t go right. Steve’s perspective, which he elaborated on last year, is that while AI models excel at benchmarks—passing bar exams, solving coding puzzles—these tests measure what AI is good at but ignore what it isn’t good at: navigating complex, contextual situations, particularly involving human behaviour.
AI as Normal Technology takes those arguments further, going so far as to argue that for many problems, intelligence isn’t the ceiling on our ability to solve them. Predicting elections or persuading people involves so many chaotic variables that even planet-sized computers might hit the same accuracy limits humans face. If that’s so, AI is powerful but bounded: useful, even epoch-defining, but not apocalyptic, not in the sense that it could usher in a new world on the timescale of months.
Given our interests at Changing Lanes, the part of the argument I appreciate most is the comparison of OpenAI to Waymo. Waymo's robotaxis have been in development since 2009 and still can't handle snow. That’s because snow is a problem the company hasn’t even gotten around to yet; each improvement, even on much easier problems, requires expensive field trials, regulatory approval, and countless edge-case discoveries. Waymo’s progress happens at the speed of real-world engagement, which is highly frictional. If this analogy is apt—and I think it is—AI will race ahead in the narrow (but important) domains where it can excel, and creep along slowly everywhere else.
Steve’s take on Narayanan and Kapoor is that they are making one prediction and one choice:
The prediction is that there’s no fast path to AIs that are so generally intelligent – so general, and so intelligent – as to break the mold of past technologies. The choice is that we won’t succumb to the temptation to throw caution to the winds, take humans out of the loop, and shortchange other safety measures – a choice which the allure of capable AI would influence.
I find that prediction to be plausible. Which is good, because it means I’ll be able to throw away an AI vape without guilt.
Our Post-Literate Age
58% of students understood very little of the passages they read, 38% could understand about half of the sentences, 5% could understand all seven paragraphs.
These are college students majoring in English.
Kitten, College English Majors Can’t Read
Substacker Kitten recently wrote about a new study that suggests most English majors cannot understand nineteenth-century literary texts.
In the study, researchers gave university English majors the opening paragraphs of Dickens' Bleak House and discovered something alarming: most couldn't tell metaphor from reality. When Dickens compares London's mud-soaked streets to the aftermath of the biblical Flood, where "it would not be wonderful to meet a Megalosaurus, forty feet long or so, waddling like an elephantine lizard," students failed to recognize the allusion and interpreted the passage literally; they imagined dinosaur bones wandering the streets.
These were not struggling students. They scored above 80% on standardized reading tests designed for tenth graders. But faced with complex prose requiring inference and synthesis—the kind of reading that defines university-level work—more than half proved to be “problematic readers”, the sort who encounter a reference to whiskers and think not of a bearded man but of a cat.
The details of the study are less important than what it points to: we're witnessing the end of widespread literacy. By that, I mean the end of literacy in the broadest sense. Of course people will continue to parse basic text, but complicated text—metaphor, allusion, and subtext—will become impenetrable.
The book has become a dead art form. Like opera or landscape painting, it will still be produced by a minority for a dedicated audience, but that audience will be a remnant of a remnant. The dominant mode of our day is short-form video. Literacy, as we understood it for the past 200 years or so, is over.
The question before us is: how do we maintain a technically sophisticated, pluralist, liberal democracy without a majority-literate population?
I don’t know, but I suspect that AI is part of the answer. People may be losing their ability to read, but reading complicated texts, and writing in clear, literal prose that post-literate readers can comprehend, are both firmly in AI’s skill set. The students who stumble over Dickens' layered metaphors would, I have no doubt, easily handle straightforward, bland, AI-generated explanations of transportation policy or infrastructure project plans… or, for that matter, plot summaries of Bleak House.
Short Attention Span Theatre
Perhaps I’m indulging in too much nostalgia. Perhaps the taste for boiling long and boring works down to just ‘the good parts’ isn’t a new one.
Case in point: Mark Evanier points out that, in his youth—the 1950s—home video was only possible in a single format: 8mm film. Yes, kids, well-to-do families would own reel-to-reel film projectors to screen things. But what could they screen? 8mm was, in today’s parlance, a low-bandwidth and low-fidelity medium. You couldn’t show a Hollywood movie on it.
Well, you could… if you made judicious cuts.
Here, then, is the 1943 Universal classic Frankenstein Meets the Wolf Man, starring Bela Lugosi and Lon Chaney, Jr., as edited by Castle Films. As Evanier notes: “Castle took the movie — which was an hour and fourteen minutes in its original release — and hacked it down to a little over four minutes [emphasis mine]. Later, I got to see it in full and I realized that with the Castle Films version, I really wasn't missing much, plus I saved seventy minutes!”
Enjoy!