In a new paper, several Stanford economists studied payroll data from the private company ADP, which covers millions of workers, through mid-2025. They found that young workers aged 22–25 in “highly AI-exposed” jobs, such as software developers and customer service agents, experienced a 13 percent decline in employment since the advent of ChatGPT. Notably, the economists found that older workers and less-exposed jobs, such as home health aides, saw steady or rising employment. “There’s a clear, evident change when you specifically look at young workers who are highly exposed to AI,” Stanford economist Erik Brynjolfsson, who wrote the paper with Bharat Chandar and Ruyu Chen, told the Wall Street Journal.
In five months, the question of “Is AI reducing work for young Americans?” has its fourth answer: from possibly, to definitely, to almost certainly no, to plausibly yes. You might find this back-and-forth annoying. I think it’s fantastic. This is a model for what I want from public commentary on social and economic trends: Smart, quantitatively rich, and good-faith debate of issues of seismic consequence to American society.
How can AI be destroying jobs? I haven't seen a single good implementation, except maybe dev work, and even then it's not speeding anything up.
As a consultant/freelance dev whose entire workload for the past year has been cleaning up AI slop: no, even in dev it hasn't been what I'd call a smooth or even good implementation. For my wallet? A fantastic implementation. For everyone else? Not so much.
The thing is, as a TOOL it's great, depending on the model. As a rubber duck? Fantastic. As something the majority of companies have used for vibe coding to build something end to end? No, it's horrible. It can't scale anything, implements exploits left, right, and center, and, unlike a junior dev, never learns anything. If you don't hold its hand during a build, it'll quickly go off the rails. It'll pull in old APIs or libraries simply because those are the versions with the most documentation attached to them.
An example: a few weeks ago a client wanted to set up a private git instance with Forgejo, and they had Claude Code set it up for them. The problem? Claude went with Forgejo 1.20. Forgejo is currently on 12.0. That's a MASSIVE security hole right there. Why did Claude do that? Because 1.20 had more documentation than 12.0. And when I say "documentation" I mean blog posts, articles, whatever talked about it more than the latest version, because LLMs lean on that material when making decisions for builds. You see the same thing if you want something in Rust+Smithy: the majority of the time the AI will reach for a very outdated version of Smithy, because that's what a lot of people talked about at one point. So you're generating massive tech debt before even throwing something into production.
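Part of why this failure mode slips through review is that version strings don't compare the way they look: "1.20" versus "12.0" is easy to misread, and naive string comparison of versions is wrong in general. A minimal sketch of the kind of sanity check you could run over an AI-generated config (the function names and the minimum-version value here are mine, for illustration, not from any tool mentioned above):

```python
# Sketch: sanity-check version pins in an AI-generated config
# against a known-good minimum version. Illustrative only.
import re

def parse_version(v):
    """Split a version string like '12.0.1' into a tuple of ints,
    so comparison is numeric per component, not lexicographic."""
    return tuple(int(p) for p in re.findall(r"\d+", v))

def is_outdated(pinned, minimum):
    """True if the pinned version is older than the minimum we accept."""
    return parse_version(pinned) < parse_version(minimum)

# The Forgejo case above: Claude pinned 1.20, the current line is 12.x.
assert is_outdated("1.20", "12.0")        # long-unsupported pin flagged
assert not is_outdated("12.0.1", "12.0")  # current patch release passes

# Why not just compare strings? Lexicographic order gets versions wrong:
assert "9.0" > "12.0"                               # string compare: wrong
assert parse_version("9.0") < parse_version("12.0")  # numeric: correct
```

Nothing fancy, but even a check this crude would have caught a two-major-versions-stale pin before it reached production.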
Now, like I said, as a tool? A problem solver for a function you can't figure out? It's great. The issue is that companies aren't seeing it as a tool; they're seeing it as a cost-saving replacement for a living human being, which it is not. It's like replacing a construction worker with a hammer strapped to a drone and then wondering why your house frame keeps falling over.
I’ve seen beginners try to use AI to generate simple features for visual novels in SugarCube (Twine). The code was awful, they didn’t understand it, and even people with some experience needed time to work out what the fuck the code was even trying to do.
AI is this decade’s Rational Rose.
It doesn’t have to be a good implementation; it only has to be good enough for a demo to get the C-suite saying “Oh, slap a chatbot on there and then fire half the department that handles this now.”
The company I work for has replaced a lot of its employees with AI. It’s absolutely useless and we have to cover the loss, but the fact that it doesn’t work very well doesn’t help the fired employees.
It doesn’t matter if the implementation is good. All that matters is that middle management gets more stuff “done” with fewer people, where the definition of “done” is just that it hasn’t exploded in their face… yet…
Junior devs and sysadmins, who don’t do much useful stuff yet but are picking up basic experience. And people whose main required traits are a human voice and the ability to follow a script.
Transient processes are a thing: right now the economy has plenty of mid-level and senior devs and sysadmins while no longer producing new ones. So employers are hiring those and replacing juniors with AI. Whether that works, I’m not sure.
So at some point the AI bubble will be over (at least in dev and sysadmin work), but by then there will be fewer developers, and eventually there may be fewer qualified developers in the economy overall. That would give centralized corporate operations a market advantage over smaller non-corporate ones, because the cost of development will grow after the crash that’s happening now.
Meanwhile, for some less-qualified jobs, humans won’t be needed anymore. Even though that “AI” is expensive, it may genuinely be, even after the bubble crashes, more affordable than hiring a human (in a Western country) for a bullshit job. Except that in everything I’ve read, those bullshit jobs were treated as a social responsibility: a way to teach work ethic to the rising generations, that weird mix of individualist and working-class themes in books describing pre-Depression USA. Yes, individualism is important and self-reliance is important, but even that Protestant ethic was less about capitalism than about dignity and hard work.
I think Silicon Valley is consciously playing Asimov’s Foundation with our planet (seeding technologies that shape humanity’s development on some schedule, with expected global results), except where Asimov’s Foundation was about preserving knowledge and civilization, they are moving in the opposite direction. That is, they may not realize it. They may think they are building the sci-fi empire the Foundation begins with. But in actuality they are breaking concrete-and-steel things that work and replacing them with paper huts that merely resemble something that would work better. Metaphorically.
They don’t understand what an empire is, neither the “mandate of heaven” kind nor the “unity of civilization” kind (heck, even the Soviet, covertly Christian, “building the city of the sun” kind, as in Vysotsky’s song: “…but the orchards are guarded, and they shoot you square in the forehead without missing”). You don’t build an empire by burning libraries and poisoning discourses, and you don’t build an empire by making every one of its citizens uncertain whether they are a free man or a slave (it’s a common misconception to start an attempt at an empire from the points where previous empires failed; that state usually fails again for the same reasons).
Customer service: AI voice assistants now interact with customers, work that used to be done by people.
It doesn’t have to work; it just has to be convincing enough to get the bean counters and/or incompetent/sociopathic upper management to buy into the idea that they can save money.
Same as always: if the shitstorm created by a decision isn’t immediately devastating, or can’t be incontrovertibly tied to said decision, then it’s just BAU.
And by the time the shitshow starts playing the preroll trailers, the golden parachutes and bonuses have already been claimed.
For them, this isn’t broken, this is how the game works.
It being shit does not stop corporations from using it, especially for stuff like customer service.