Hacker News


Labor market impacts of AI: A new measure and early evidence

I was at a big tech company for the last 10 years and quit my job last month - I feel 50x more productive outside than inside.

Here is my take on AI's impact on productivity:

First let's review what LLMs are objectively good at: 1. Writing boilerplate code 2. Translating between two coding languages (migration) 3. Learning new things: summarizing knowledge, explaining concepts 4. Documentation and menial tasks

At a big tech product company, #1, #2, and #3 are not as frequent as one would think - most of the time is spent in meetings and meetings about meetings. Things move slowly - it's designed to be like that. The majority of devs are working on integrating systems - whatever their manager sold to their manager, and so on. The only time AI really helped me at my job was when I did a one-week hackathon. Outside of that, integrating AI felt like more work rather than less - without much of a productivity boost.

Outside, it has proven to be a real productivity boost for me. It checks all four boxes. Plus, I don't have to worry about legal, integrations, or production bugs (eventually those will come).

So, it depends who you're asking - it is a huge game changer (or not).

by vb7132
People who are saying they're not seeing a productivity boost, can you please share where it is failing?

Because I am terrified by the output I am getting while working on huge legacy codebases - it works. I described one of my workflow changes here: https://news.ycombinator.com/item?id=47271168 but in general, compared to the old way of working, I am consistently saving half of the steps, whether it's researching the codebase, integrating new things, or even making fixes. I have stopped writing code. Occasionally I jump into the changes proposed by the LLM and make manual edits if that's feasible; otherwise I revert the changes and ask it to generate again, based on what I learned from the rejected output.

I am terrified about what's coming

by throwaw12
I don't write code for a living but I administer and maintain it.

Every time I say this people get really angry, but: so far AI has had almost no impact on my job. Neither my dev team nor my vendors are getting me software faster than they were two years ago. Docker had a bigger impact on my pipeline than AI has.

Maybe this will change, but until it does I'm mostly watching bemusedly.

by bandrami
From my experience as a software engineer, doubling my productivity hasn’t reduced my workload. My output per hour has gone up, but expectations and requirements have gone up just as fast. Software development is effectively endless work, and AI has mostly compressed timelines rather than reduced total demand.
by tl2do
I'm working on a project right now, that is heavily informed by AI. I wouldn't even try it, if I didn't have the help. It's a big job.

However, I can't imagine vibe-coders actually shipping anything.

I really have to ride herd on the output from the LLM. Sometimes the error is PEBCAK, because I erred when I prompted, and that can lead to very subtle issues.

I no longer review every line, but I also haven't yet gotten to the point where I can just "trust" the LLM. I assume there are going to be problems, and I haven't been disappointed yet. The good news is, the LLM is pretty good at figuring out where we messed up.

I'm afraid to turn on SwiftLint. The LLM code is ... prolix ...

All that said, it has enormously accelerated the project. I've been working on a rewrite (server and native client) that took a couple of years to write, the first time, and it's only been a month. I'm more than half done, already.

To be fair, the slow part is still ahead. I can work alone (at high speed) on the backend and communication stuff, but once the rest of the team (especially shudder the graphic designer) gets on board, things are going to slow to a crawl.

by ChrisMarshallNY
Based on my experience using AI for development work, it feels like you really need to work with it instead of expecting it to do the work for you. Rather than typing the code yourself by hand, you now need to explain the task very clearly, review or test the generated code, and then ask it to refactor and fix the issues you identify. This is itself work that needs to be done - a different way of working compared to manual coding - and it doesn't mean significant overall productivity gains are guaranteed.
by r3ndr
I don't think there's been much of an impact, really. Those who know how to use AI just got marginally more productive (because why would you reveal your fake 10x productivity boost so your boss hands you 10x more tasks to finish?), and those without AI knowledge stayed the way they were.

The real impact is for indie devs and freelancers, but that usually doesn't account for much of the GDP.

by behnamoh
I am not going to trust a single word from a company whose business is selling you AI products.
by g947o
One of the more interesting takes I heard from a colleague, who’s in the marketing department, is that he uses the corporate approved LLM (Gemini) for “pretend work” or very basic tasks. At the same time he uses Claude on his personal account to seriously augment his job.

His rationale is that he won't let the company log his prompts and responses so they can't build an agentic replacement for him. Corporate rules about shadow IT be damned.

Only the paranoid survive I guess

by holografix
We are rewriting our entire frontend from Webpack + Gatsby to Vite + React; we converted all the static pages in one day using Claude Code.

We basically have ~40 components and 6 pages to go until the rewrite is complete. I am sure we will run into bumps in the road, but it's been crazy to watch.

We also added i18n (English + Spanish), a ThemeProvider for our white-labeling solution, and WCAG 2A compliance, all in one shot.

If I went to a third party and asked them to rewrite just the static pages it would have been $200k and 3 months of work.
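For anyone unfamiliar with the white-labeling pattern mentioned above, here is a minimal TypeScript sketch of the idea - a per-brand token map that a ThemeProvider-style wrapper would pass down to components. All names and values here are invented for illustration, not the commenter's actual code:

```typescript
// Each brand (tenant) gets a set of design tokens.
type ThemeTokens = { primary: string; fontFamily: string; logoUrl: string };

// Tokens for the default (unbranded) experience.
const defaultTheme: ThemeTokens = {
  primary: "#0055ff",
  fontFamily: "Inter",
  logoUrl: "/logo.svg",
};

// Custom themes keyed by brand identifier.
const themes: Record<string, ThemeTokens> = {
  acme: { primary: "#c21807", fontFamily: "Roboto", logoUrl: "/acme.svg" },
};

// Resolve a brand to its tokens, falling back to the default brand
// when a tenant has no custom theme configured.
function themeFor(brand: string): ThemeTokens {
  return themes[brand] ?? defaultTheme;
}

console.log(themeFor("acme").primary); // "#c21807"
console.log(themeFor("unknown").primary); // falls back to "#0055ff"
```

In a React app, `themeFor(brand)` would typically feed a context provider so every component reads the same token set; the lookup itself stays framework-agnostic.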

by bearjaws
The numbers they show are barely distinguishable from noise, as far as I can interpret them.

For me, the impact is absolutely in hiring juniors. We basically just stopped considering it. There's almost no work a junior can do that I wouldn't now look at and think it's easier to hand off in some form (possibly different from what the junior would do) to an AI.

It's a bit illusory, though. It was always the case that handing off work to a junior person was often more work than doing it yourself. Hiring someone and getting their productivity up to the point of net gain is an investment in the future. As much as anything, it's a pause while we reassess what the shape of expertise now looks like. I know that what juniors did before is now less valuable than it used to be, but I don't know what the value proposition of the future looks like. So until we know, we pause and hold - and the efficiency gains from using AI are currently mostly being invested in that "hold": they are keeping us viable from a workload perspective long enough to restructure work around AI. Once we do that, I think there will be a reset and hiring of juniors will kick back in.

by zmmmmm
Anthropic should be outsourcing this kind of study by providing data to non-affiliated researchers instead of doing the analysis themselves.
by macleginn
I know kids avoiding many high-paying careers because of AI right now, and artists just giving up everywhere I look. Thanks, AI
by gentleman11
I know multiple devs who could have a very large productivity increase but instead choose to slow down their output on purpose and play video games. I get it.
by sanex
Productivity up by 10%. Happiness, life satisfaction and feeling of self-worth down by 20%.
by amelius
If people think Elite Overproduction (https://en.wikipedia.org/wiki/Elite_overproduction) is causing strife now, wait until tens of thousands of people with degrees get thrown out of work.
by gadders
I think it really depends what you're working on. I do some consulting and have found it's not helping the C++ devs as much as it's helping the HTML/JS devs.
by boxedemp
There seem to be some errors in the PDF -> web conversion of this report. For example, the web version of Figure 7 has the legend colours reversed.
by dannyboland
My day-to-day is even busier now, with agents all over the place making code changes. The security landscape got even more complex overnight. The other negative impact I see is that there's not much need for junior devs right now - the agent fills that role, in a way. But we'll have to backfill some way or another.
by zthrowaway
A possible outcome of AI: domestic technical employment goes up because the economics of outsourcing change. Domestic technical workers working with AI tools can replace outsourcing shops, eliminating time-shift issues, etc at similar or lower costs.
by recursivedoubts
The problem with using unemployment as a metric is that hiring is driven by perception. You're making an educated guess as to how many people you'll need in the future.

Anthropic can cause layoffs through pure marketing. People were crediting an Anthropic statement with causing a drop in IBM's stock value, which may genuinely lead to layoffs: https://finance.yahoo.com/news/ibm-stock-plunges-ai-threat-1...

We'll probably have to wait for the hype to wear off to get a better idea, but that might take a long while.

by nitwit005
My speed at shipping software increased, but so did my company's demands for features.
by sp4cec0wb0y
this keeps me up at night. i'm in a role that is essentially deployment management for LLMs at a faang-esque company. very little coding or need to code, mostly navigating guis, pipelines, and docker to get deployments updated with a new venting or model version or some patch
by ausbah
> There's suggestive evidence that hiring of young workers (ages 22–25) into exposed occupations has slowed — roughly a 14% drop in the job-finding rate

There goes my excuse of not finding a job in this market.

by rishabhaiover
I'm an SDE with 1 YOE using AI tools heavily (doing "day's work" in ~2 hrs, perfect reviews). Spending most time on specs/review vs. raw coding. Worried I'm optimising short-term output over long-term skill development. Should I consider pivoting to AI/ML roles? Would love advice from anyone who's hired juniors in the current era.
by synelabs
Has this been peer-reviewed?
by rando1234
Never trust a statistic you haven't forged yourself.
by zombot
How is Anthropic getting this data? Are they running science experiments on people's chat history? (In the app, API or both?)
by andai
I call BS on this, as the ones displaced aren't in the workforce anymore. I haven't been able to find work in over a year, despite applying to over 200 jobs a month.
by reactordev
I'm not really concerned about the availability of SW dev jobs, but I am concerned about the quality of them. For many companies the velocity (and quality, much to my chagrin) of the code you can produce doesn't really matter. What matters more is whether or not you're building the right thing, and too often you're not. These companies also tend to keep more headcount than seems justified, I think because they are gambling that a few employees are going to do something awesome but they don't know which ones. As AI gets better what will these companies do? I don't think they will fire a bunch of SW devs. I think instead they will embrace the slop and just take more shots, and crazier shots. It doesn't just give us something to do, it also gives a bunch of PHBs something to do.
by default-kramer
> Claude is extensively used for coding, Computer Programmers are at the top, with 75% coverage

I think there are some advantages to being first.

It's time to re-evaluate strategies if we've been operating under the assumption that this is going to be a bubble, or otherwise largely bullshit. It definitely works. Not everywhere all the time, but often enough to be "scary" now. Some of my prior dismissals like "text 2 sql will never work" are looking pale in the face today.

by bob1029
You know you're having a real impact when you have to self-report on the impact you're having.
by nickphx
This is a pretty interesting report.

The TL;DR is that there is little measurable impact (and I'd personally add "yet").

To quote:

"We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations"

My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.

I have doubts that this impact is measurable yet - there is a lag between hiring intentions and the impact on jobs, and outside Silicon Valley, large-scale hiring decisions are rarely made in a 3-month timeframe.

The most interesting part is the radar plot showing the lack of usage of AI in many industries where the capability is there!

by nl
What's interesting from a practical standpoint: the paper confirms what we're seeing in SME deployments – AI augments, not replaces. But the real productivity gain only kicks in when you redesign the process around the AI, not just bolt it on. Most small businesses skip that step entirely and then wonder why their 'AI tool' isn't delivering. The organizational restructuring is the hard part, not the technology. Anyone here seen teams actually get this right systematically?
by Noyra-X
This rhymes with another recent study from the Dallas Fed: https://www.dallasfed.org/research/economics/2026/0224 - it suggests AI is displacing younger workers but boosting experienced ones. This matches what we see discussed here, as well as the couple of other similar studies we've seen discussed here.

Also, it seems to me the concept of "observed exposure" is analogous to OpenAI's concept of "capability overhang" - https://cdn.openai.com/pdf/openai-ending-the-capability-over...

I think the underlying reason is simply because companies are "shaped wrong" to absorb AI fully. I always harp on how there's a learning curve (and significant self-adaptation) to really use AI well. Companies face the same challenge.

Let's focus on software. By many estimates, code-related activities are only 20-60%, maybe even as low as 11%, of software engineers' time (e.g. https://medium.com/@vikpoca/developers-spend-only-11-of-thei...). But consider where the rest of the time goes: largely coordination overhead. Meetings etc. drain a lot of time (and more the more senior you get), and those are mostly about getting a bunch of people across the company, along the dependency web, to align on technical directions and roadmaps.

I call this "Conway Overhead."

This is inevitable because the only way to scale cognitive work was to distribute it across a lot of people with narrow, specialized knowledge and domain ownership. It's effectively the overhead of distributed systems applied to organizations. Hence each team owned a couple of products / services / platforms / projects, with each member working on an even smaller part of it at a time. Coordination happened along the hierarchy of the org chart because that is most efficient.

Now imagine, a single AI-assisted person competently owns everything a team used to own.

Suddenly the team at the leaf layer is reduced to 1 from about... 5? This instantly gets rid of a lot of overhead like daily standups, regular 1:1s and intra-team blockers. And inter-team coordination is reduced to a couple of devs hashing it out over Slack instead of meetings and tickets and timelines and backlog grooming and blockers.

So not only has the speed of coding increased, the amount of time spent coding has also gone up. The acceleration is super-linear.

But, this headcount reduction ripples up the org tree. This means the middle management layers, and the total headcount, are thinned out by the same factor that the bottom-most layer is!

And this focused only on the engineering aspect. Imagine the same dynamic playing out across departments when all kinds of adjacent roles are rolled up into the same person: product, design, reliability...

These are radical changes to workflows and organizations. However, at this stage we're simply shoe-horning AI into the old, now-obsolete ticket-driven way of doing things.

So of course AI has a "capability overhang" and is going to take time to have broad impact... but when it does, it's not going to be pretty.
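The scaling claim above can be made concrete with a toy span-of-control model (my own illustration, not from the report or the comment; all numbers are made up): with a span of control s, an org with n leaf-level ICs needs roughly n/s first-line managers, n/s^2 managers above them, and so on. Shrink every 5-person team to one AI-assisted owner and every management layer above thins by roughly the same factor:

```typescript
// Total headcount of an org with `ics` individual contributors and a
// fixed span of control `span`: each management layer up is ~1/span
// the size of the layer below it.
function orgSize(ics: number, span: number): number {
  let total = 0;
  let layer = ics;
  while (layer >= 1) {
    total += Math.ceil(layer); // headcount at this layer
    layer = layer / span;      // next management layer up
  }
  return total;
}

// 625 ICs in 5-person teams, span of control 5:
// layers of 625 + 125 + 25 + 5 + 1 = 781 people.
const before = orgSize(625, 5);
// Each 5-person team collapsed to a single owner:
// layers of 125 + 25 + 5 + 1 = 156 people.
const after = orgSize(125, 5);
console.log({ before, after }); // { before: 781, after: 156 }
```

The whole org shrinks by roughly the same 5x factor as the leaf teams, which is the "ripples up the org tree" effect the comment describes.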

by keeda
I really hate to say it, but this article in particular needs a TL;DR. The author takes the web-recipe approach: don't put the actual factual info up front, and make readers parse through everything to find anything important.

Kinda done with this.

If you have something important to say, say it up front and back it up with literature later.

by geuis
Did you all read about the 13-hour AWS outage caused by their autonomous AI agent deciding to delete everything and rewrite from scratch?
by thewhitetulip
cigarettes don't cause cancer! -cigarette companies
by thatmf