Job Security
The philosopher Bertrand Russell was once accosted after a lecture on astronomy by a crazy woman, who informed him that the real truth of the Earth’s place in the cosmos was that it was resting on the back of a giant turtle. When Russell, logically enough, asked what the turtle was standing on, the exasperated woman said that it was turtles all the way down.
I’m pretty sure that the turtle theory doesn’t pan out, but sometimes it’s hard not to feel like society is resting on the backs of a group of stupid assholes, and when we wonder what is supporting them, we realise that it is in fact stupid assholes all the way down.
Consider Michael Cohen. I know, I don’t want to, either, but consider him anyway. The former lawyer and fixer for Donald Trump, Cohen suffered a sudden bout of morality and denounced his former boss as a crooked bully and a dangerous idiot. Like many people who undergo such Damascene conversions, Mr. Cohen realised that his former employer was a bad guy all along at around the same time that he was arrested in connection with all of the crimes his former employer had him commit. Now, having been convicted of said crimes and (lightly) punished, Mr. Cohen is trying to argue that he’s paid his debt to society and should no longer be under “supervised release.”
More specifically, Cohen’s lawyers are arguing this for him, and the way they’ve gone about it is somewhat unusual - a judge has just noticed that Cohen’s lawyers have been arguing his case by citing other cases that don’t exist.
To recap: The idiot, crooked President’s idiot, crooked lawyer has hired crooked idiot lawyers of his own to represent him, and they’re doing it badly. Like I say, sometimes it’s just stupid assholes all the way down.
Whilst nobody has (at the time of writing) accused Cohen’s lawyers of using ChatGPT or other, similar A.I. programs to write their argument for them, that’s exactly what happened in June, when two lawyers were fined $5,000 for filing a legal brief that was written by ChatGPT and likewise referenced imaginary cases. I’d be willing to bet Cohen’s guys tried the same trick, although how transparent it was remains to be seen. Maybe it happens a lot and only sharp-eyed judges can spot the ruse, or maybe there are only a handful of lawyers stupid or sloppy enough to think that a judge won’t notice that Kramer vs. Kramer is a Meryl Streep movie, or that the landmark ruling of Satan vs. Pikachu is imaginary.
As yet, the legal system doesn’t seem to be panicking, but maybe it should…
It’s Christmas, for those who observe, and I’ve been working as a postman. This would seem, on the surface, to be the sort of job that the 21st century should have automated a long time ago.
As early as 1810, a German author by the name of Heinrich von Kleist was suggesting having mail delivered by rocket, and this seems to have become a weird obsession for the Germans¹, whilst America flirted with the same idea in the 1950s. More recently, Amazon has considered delivering parcels by drone. Whatever the time period, it seems that there are always attempts to find a more technologically advanced way of delivering things to people’s doors.
The best method so far discovered remains a person in a van.
This isn’t just because a postman is highly unlikely to come hurtling out of the sky and explode on your lawn, or because you can’t crash a postman by throwing a rock at him and make off with whatever he was delivering, which is the major flaw with drones. Rather, a postal worker - even the most painfully dense of them - can do something that is vital to the role: they can think.
This is the problem that A.I. hasn’t solved, and probably won’t any time soon. Tech nerds differentiate Artificial Intelligence from Artificial General Intelligence, and Artificial General Intelligence is a very long way off.
Basically, Artificial General Intelligence would replicate what sentient beings already have, in that it would have stored within its “brain” the concepts of various things. Currently, A.I. doesn’t do so well with concepts.
Delivering parcels is a prime example. If I use the word “parcel,” you might picture a roughly rectangular item wrapped in brown paper, about the size of a laptop computer. That’s what I’m picturing, anyway. The point is that when a human hears the word “parcel,” they have a concept of what a parcel is, in the same way we understand “delivering” and “house.” We can understand the rough process of delivering a parcel to a house, and if I said “I have to deliver a fish to a house,” we understand that this is slightly odd and also which component has changed.
An A.I. in the ChatGPT model, meanwhile, doesn’t understand any of this. It doesn’t know what a parcel is, it doesn’t understand what delivery is, and it doesn’t know what a house is. In as much as it “knows” anything, it only knows that the most logical conclusion to “deliver a parcel to…” is “a house.”
Once you understand that this is how the system works - it just predicts the most likely word without ever knowing what the words mean - then you start to see how hollow all of the A.I. predictions are. Artificial Intelligences aren’t going to take over the world. They aren’t even going to take over my job.
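To make that concrete, here’s a toy sketch of the idea (my own illustration, in Python, with made-up data - nothing remotely like how a real language model is built): count which word tends to follow which phrase, then always pick the most common one. The program has no concept of parcels or houses, only statistics.

```python
# A toy "predict the next word" program. This is a hypothetical
# illustration of the statistical idea, not how ChatGPT actually works.
from collections import Counter

# Made-up "training data": a handful of sentences, split into words.
words = (
    "deliver a parcel to a house . "
    "deliver a parcel to a house . "
    "deliver a parcel to a flat . "
    "deliver a letter to a house . "
).split()

# Count which word follows each three-word phrase.
follows = {}
for i in range(len(words) - 3):
    context = tuple(words[i:i + 3])
    follows.setdefault(context, Counter())[words[i + 3]] += 1

def predict_next(context):
    """Return the word most often seen after this phrase."""
    return follows[context].most_common(1)[0][0]

# The "most logical conclusion" to "...parcel to a" is "house",
# purely because that's what the counts say.
print(predict_next(("parcel", "to", "a")))  # prints: house
```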
They’re also probably not going to take over the book industry - not knowing what words actually represent doesn’t usually make for solid writing. In fact, the only thing that is really in danger from the A.I. revolution is a specific subset of jobs. For once, it’s the white collar ones.
It might surprise you to learn that I’ve never had to file a legal brief. My employment history, as alluded to above, is usually in the “carry heavy things” category, and robots and A.I. aren’t up to the task of, say, carrying a big box along an uneven pavement to the most likely house based on a partially obscured address. Every part of that task is beyond the capabilities of current tech. However, I am under the impression that legal briefs would have to follow set patterns. They would need to include certain key phrases. They would need to be structured in a predictable way.
All of this is where A.I. excels. In fact, the only place that artificial intelligence is noticeably failing is in the moments where it has to do things like citing precedent. It knows that a legal brief should make reference to previous legal cases, but as it doesn’t understand what a legal case actually is, it just knows that it should refer to [Something] vs. [Something Else] and therefore fills in the blanks at random.
Maybe I’m wrong about legal filings - again, much like ChatGPT, I’m not a lawyer - but what I can say for certain is that I work for the Post Office. Except that I don’t work for them directly. As a Christmas temp, I was hired through an agency. The agency is sub-contracted through the Post Office’s own in-house agency. This means that my shifts are organised (and that’s really not an accurate term) through an office somewhere which sends me five separate emails, one for each day of the week, telling me that I have been booked onto a shift. I have to click on each one and agree to the shift. The hours I’m contracted for by the Post Office agency are different to the hours I’m contracted for by my personal agency, but this doesn’t matter, as nobody at the Post Office depot is sure when I’m meant to be there anyway. I just sign a register when I turn up.
At the end of the week, I’m meant to send an email to someone at the Post Office agency telling them what hours I worked. Or else I have to text an unrelated guy at my agency. Or, what I’ve been doing in practice, just give someone at the depot my hours on a piece of paper. (This week, the woman I give them to finally figured out she could just take my hours from the register I sign every morning.)
I hope it goes without saying that the whole process could be a lot more streamlined. In fact, this seems like a situation that is crying out for an efficient computer program to step in and organise things.
But as I’ve said, a computer program couldn’t actually deliver a parcel in the way that I can. It couldn’t put me out of a job, but it could absolutely do away with a few of the people who manage the shift patterns.
I’m not saying that it’s a good thing that A.I. is going to make people redundant, by the way. And in fact, as an aside, we really need to watch our language with this in the coming years, because computers don’t decide to make people unemployed. Humans do. We see a lot of headlines already about “A.I. deciding to eliminate jobs,” but the A.I. doesn’t (and can’t) decide anything about anyone’s employment. “CEO Decides To Eliminate Jobs” remains the real story. But I digress.
I’m not saying it’s a good thing that A.I. is making people redundant. What I’m saying is that, as a card-carrying member of the working class, I’m constantly being threatened with the spectre of robots or computers or some sort of technology taking my job. And it can’t. It literally can’t.
What A.I. is now becoming capable of doing is writing coherent (if factually inaccurate) legal briefs. It’s the office workers and the lawyers and the people with sitting-down-indoors jobs who need to be worried.
So far, they don’t seem to be, but once again we must look at Michael Cohen. The sort of people who get hired by someone so stupid that he spent long periods of time with Donald Trump and was still surprised when Trump stiffed him are the exact sort of people who are too dumb to notice that, unlike the bricklayers and the bartenders, the postmen and the porters, artificial intelligence really might be coming for their jobs some time soon.
¹ Hermann Oberth, an Austro-Hungarian rocket pioneer, brought the idea up again in 1927, and Friedrich Schmiedl launched several postal rockets throughout the 1930s. Gerhard Zucker, another rocket mail proponent, emigrated to the UK and gave a failed demonstration to the Royal Mail in 1934, at which point he was unceremoniously deported back to Germany and promptly arrested on suspicion of being a British spy. He ultimately joined the Luftwaffe, where Wikipedia does not record whether he killed anyone, and then resumed his rocket experiments, which did kill three people in 1964.