Welcome to what has become a recurring segment of Off-Ramps! Today I’ll highlight four interesting pieces that I think you will enjoy reading. Please enjoy all of these on your morning commute, or save them for your weekend.
And a brief programming note: as of next week, our publication schedule, which has been twice-weekly since August 2024, will revert to once a week for the summer months.
Tesla Fumbles Its Cybercab Robotaxi Launch
Long-time readers will recall that the inaugural issue of this newsletter argued that Tesla Isn't Going to Succeed in Robotaxis, a thesis we have since recommitted to twice.
Earlier this week, on 22 June, Tesla finally launched its robotaxi service in Austin. So how have I updated on my predictions?
So far, not at all.
To be clear: as I opened my very first piece, “Elon Musk [has] claimed that Tesla Motors is going to enter the robotaxi business, and thereby increase the firm’s stock market valuation from around $740 billion to several trillions of dollars”.
Nothing we saw this week gives us any reason to believe this will happen.
Wind the clock back to as long ago as earlier this year, when CEO Elon Musk said:
So, we're going to be launching unsupervised full self-driving as a paid service in Austin in June… We feel confident in being able to do an initial launch of unsupervised, no one in the car, full self-driving in Austin in June… It's pretty cool. Like I said, these Teslas will be in the wild with no one in them in June in Austin… our sort of solution is a generalized AI solution. It does not require high precision maps of locality. So, we just want to be cautious. It's not that it doesn't work beyond Austin.
We didn’t actually get these things.
First, we did not get a Cybercab, a dedicated robotaxi vehicle. Instead, we got regular 2025 Model Y SUVs outfitted with Tesla’s automated-driving system.
Second, we didn’t get “unsupervised Full Self-Driving”; we got company employees riding shotgun as safety chaperones, and invisible tele-operators monitoring behind the scenes.
Third, we didn’t get go-anywhere vehicles, but carefully circumscribed, geofenced service areas.
To be clear: these are all compromises that Musk promised Tesla would never make, that he mocked other robotaxi companies for making, and that he said didn’t count as “real self-driving”.
That’s bad enough, but what’s worse is the quality of the Tesla system. This video is not for the faint of heart. If that’s you, I’ll describe it: the Tesla is in the left-hand turn lane, signaling that it will turn left. It enters the intersection but does not make its turn. Instead, it jitters back and forth between making its turn and going straight; then it enters the oncoming traffic lane and proceeds; then it briefly inches back towards where it should be, but ultimately decides to continue against traffic until it reaches a belled-out turn lane up ahead.
That’s a ghastly series of errors. And they happened on the very first day of service; on a roadway with clear markings; in broad daylight; with no inclement weather.
I will keep an eye on the rollout, but so far, I stand by my prediction that the firm will run out of road before it can get robotaxis right.
Waymos Living Down to Human Driving
I don’t want to make too much of this, as it’s based on anecdata collected by one observer. Still, the claim here is troubling enough to be worth examining.
William Riggs, a University of San Francisco engineering professor, claims that he has observed “a lot more anticipation and assertiveness” from Waymo robotaxis recently, including at least one instance of a Waymo creeping forward at a crosswalk even though there were pedestrians using it at the time.
The San Francisco Chronicle reporter interviews Waymo’s director of product management, who doesn’t address that particular alleged incident, but does state that Waymos have become more “assertive” recently. He frames this as progress: by operating as humans operate, Waymo robotaxis become more predictable for other road users, and as such there is less chance of a road incident.
I have sympathy with this argument. I do agree that when driving, one’s behaviour should be maximally legible; every other road user should be able to make clear predictions about how you will behave, and govern themselves accordingly. So by all means, Waymos should claim the right of way when they have it, and they shouldn’t brake suddenly just because a pedestrian on the sidewalk looks like she might enter traffic. This is as it should be.
But what is not as it should be is intimidating pedestrians in crosswalks. The entire point of automated vehicles, as I see it, is that they are legible to all other road users as rule-followers. It’s that combination—obeying the rules of the road, and doing so predictably—that makes them so safe, which is their principal value-add in a world where they are not the majority.
I’ve written before of the 50% problem, the dangerous transition period when neither human nor automated vehicles dominate our roads. I had thought it would be the bad behaviour of humans that made it a problem, not bad behaviour on both sides. I don’t know whether this kind of activity—stipulating that it is real—stems from a new emphasis on assertive driving, or whether it reflects machine-learning automated-driving systems overfitting to human misbehaviour in their training data.
Whichever it is, I hope Waymo squelches it soon.
AI Training Doesn’t Violate Copyright
Three authors sued Anthropic last August, arguing the company infringed copyright by training Claude on their books without permission. Their class-action lawsuit aims to represent thousands of authors whose works ended up in Anthropic's training datasets. This week, California federal judge William Alsup delivered a ruling on the case that, if it survives appeal, will stand as an important precedent for future AI development.
As one analysis puts it, the decision is a mixed bag for Anthropic but good for the AI industry as a whole.

It’s bad news for Anthropic because Judge Alsup found the company clearly infringed copyright by downloading millions of pirated books from black-market repositories. The Anthropic team knew these were unauthorized copies, yet downloaded them anyway. That decision could cost Anthropic hundreds of millions in statutory damages. And it’s hard to disagree with the finding: they knew they were stealing material and proceeded anyway. As I said up top, this is something I cannot condone.
But the good news I can condone: Alsup also ruled that training AI models on legally obtained copyrighted material constitutes fair use.
As a reminder, in copyright law, fair use of copyrighted material is permitted without license or permission. And what constitutes fair use? To put it very simply, it is taking elements of the work and ‘transforming’ them into something new, i.e., something that does not replace the original in the marketplace, but stands next to it. So, for example, The Tao of Pooh may fairly use The House at Pooh Corner as source material without permission, because the former is not a competing book of children’s stories, but a philosophical treatise; a documentary film about the Beatles may use brief clips of the Beatles’ music, even without permission, because the film isn’t competing with the music. It’s transforming it into something new, in a new medium.
In Judge Alsup’s view, creating a large language model is “spectacularly” transformative.
Presuming it stands, this decision draws a bright line that every AI company will now adhere to, namely that AI models may be trained legally on any text that is legally acquired. Anthropic ultimately switched from bootlegging books digitally to buying hard copies legally, scanning them, and then throwing the paper away. This creates a clear compliance pathway for all companies in the sector: pay market prices for datasets, filter outputs to prevent copyright violation, and everything is above board.
However, AI companies shouldn't celebrate prematurely. This case will spend years winding through the appeals courts, and other judges may take dimmer views. Still, this ruling is a relief to me, and to anyone who remembers this story from the Atlantic about a previous attempt by Google to build a universal library, an attempt destroyed by a clumsy interpretation of copyright law. I had feared the courts would make it de facto impossible for an American company to build an LLM, and I am heartened that they seem unwilling to repeat past mistakes.
Your Friends Spread Climate Misinformation Too
Between now and 2050 'output declines by 19%' [doesn't] mean that it literally goes down. Based on underlying growth trends, world economic output is likely to double by 2050. What the authors are saying is that, rather than just the expected increase of 100% between now and 2050, the world economy could be increasing by 119% if there were no climate change.
A new piece by Joe Heath, Changing Lanes’ favourite living philosopher, is always worth reading, and his latest, a takedown of “highbrow climate misinformation”, certainly is. He’s not going after climate-change deniers, but rather the other side: his target is the systematic distortion peddled by sources with impeccable progressive values, like The Guardian.
Put bluntly, these are Blue Tribe people spreading misinformation, which Heath argues is, in its own way, just as destructive.
The deception in question has to do with baselines. When researchers say climate change will reduce GDP by 23% by 2100 “relative to scenarios without climate change”, it’s easy to read that as a severe economic blow… and too many commentators encourage that misreading. But it is a misreading, because that reduction means being 23% less rich than we might have been, not 23% less rich than we are today. Future generations will still be several times wealthier than we are… just not as wealthy as they would have been if climate change had been averted.
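To make the baseline point concrete, here is a back-of-the-envelope sketch in Python. The 2% annual growth rate and the 75-year horizon are round numbers I have chosen for illustration, not figures from Heath or the underlying research; only the 23% damage estimate comes from the studies discussed above.

```python
# Back-of-the-envelope: what "23% less GDP by 2100" actually means.
# The growth rate and horizon below are illustrative assumptions.

GROWTH_RATE = 0.02      # hypothetical annual real GDP growth
YEARS = 75              # roughly now to 2100
CLIMATE_DAMAGE = 0.23   # loss relative to the no-climate-change baseline

baseline = (1 + GROWTH_RATE) ** YEARS          # world without climate change
with_damage = baseline * (1 - CLIMATE_DAMAGE)  # world with climate change

print(f"Without climate change: {baseline:.1f}x today's GDP")
print(f"With climate change:    {with_damage:.1f}x today's GDP")
# Without climate change: 4.4x today's GDP
# With climate change:    3.4x today's GDP
```

On these (made-up) growth assumptions, the people of 2100 are still more than three times richer than we are today; they are poorer only relative to a counterfactual world they will never see.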
Heath, who specializes in public governance, is concerned because distortions like these encourage us to support policy that won’t actually help. As he notes:
“Just 100 companies responsible for 71% of global emissions, study says” [reports The Guardian]. If one reads the article carefully, one will discover that investor-owned corporations are responsible for less than half of these emissions, and that of the top 10 emitters, only two of them are private corporations. And yet the average person who reads the word “companies” will assume that it refers to private corporations (not states or state-owned enterprises). As a result, I often run into people who think that large corporations are predominantly responsible for climate change. (I have even seen this claim repeated in peer-reviewed books and articles.)
For myself, I worry less about bad policy and more about bad vibes. Insistence on the destructiveness of climate change does not, in my experience, lead people to work toward solutions. Instead it leads them, younger people especially, to collapse into doomerism.
If you don’t believe me, do a search on your social-media app of choice for the phrase “the planet is on fire”.
At Changing Lanes, we think the greatest advocate for progress was Orwell. Like him, we believe that the first job is to insist on the truth, and the first insight is that those who think they can build a better world by lying are the most dangerous friends one can have.
In Austin, where we've been using Waymos pretty regularly for a few months now, we've also noticed the more assertive "behavior"—though I haven't witnessed anything like the pedestrian incident. Still, the goal should not be to have a fleet of asshole robotaxis on the road!
So why can't an AI company just "go to the library" and check out all the available books to scan, rather than buying them outright? It seems they are allowed to use written material; the lawsuit is about authors' compensation. Or is it about payment to the publishers, since presumably the authors have already been paid by their publishers? That said, as an indie musician (i.e., one without a record company), I would be a little upset if my music were used by someone else without any reference or compensation.