Here is my take on AI's impact on productivity:
First, let's review what LLMs are objectively good at:
1. Writing boilerplate code
2. Translating between programming languages (migrations)
3. Helping you learn new things: summarizing knowledge, explaining concepts
4. Documentation and other menial tasks
At a big tech product company, #1, #2, and #3 are not as frequent as one would think: most of the time is spent in meetings and meetings about meetings. Things move slowly; it's designed to be that way. The majority of devs are working on integrating systems, whatever their manager sold to their manager and so on. The only time AI really helped me at my job was during a one-week hackathon. Outside of that, integrating AI into my work felt like more work rather than less, without much of a productivity boost.
Outside of work, it has proven to be a real productivity boost for me. It checks all four boxes. Plus, I don't have to worry about legal, integrations, or production bugs (eventually those will come).
So, it depends on who you ask -- it is a huge game changer (or not).
Because I am terrified by the output I am getting while working on huge legacy codebases: it works. I described one of my workflow changes here: https://news.ycombinator.com/item?id=47271168 but in general, compared to the old way of working, I am consistently saving half of the steps, whether it's researching the codebase, integrating new things, or even making fixes. I have stopped writing code. Occasionally I jump into the changes proposed by the LLM and make manual edits where feasible; otherwise I revert the changes and ask it to generate again, incorporating what I learned from the rejected output.
I am terrified about what's coming
Every time I say this people get really angry, but: so far AI has had almost no impact on my job. Neither my dev team nor my vendors are getting me software faster than they were two years ago. Docker had a bigger impact on my pipeline than AI has.
Maybe this will change, but until it does I'm mostly watching bemusedly.
However, I can't imagine vibe-coders actually shipping anything.
I really have to ride herd on the output from the LLM. Sometimes the error is PEBCAK, because I erred when I prompted, and that can lead to very subtle issues.
I no longer review every line, but I also haven't yet gotten to the point where I can just "trust" the LLM. I assume there are going to be problems, and I haven't been disappointed yet. The good news is that the LLM is pretty good at figuring out where we messed up.
I'm afraid to turn on SwiftLint. The LLM code is ... prolix ...
All that said, it has enormously accelerated the project. I've been working on a rewrite (server and native client) of something that took a couple of years to write the first time, and it's only been a month. I'm already more than half done.
To be fair, the slow part is still ahead. I can work alone (at high speed) on the backend and communication stuff, but once the rest of the team (especially, *shudder*, the graphic designer) gets on board, things are going to slow to a crawl.
The real impact is for indie devs and freelancers, but that usually doesn't account for much of GDP.
His rationale is that he won't let the company log his prompts and responses, so they can't build an agentic replacement for him. Corporate rules about shadow IT be damned.
Only the paranoid survive I guess
We basically have ~40 components and 6 pages to go until the rewrite is complete. I'm sure we will run into bumps in the road, but it's been crazy to watch.
We also added i18n (English + Spanish), a ThemeProvider for our white-labeling solution, and WCAG 2A compliance, all in one shot.
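For anyone curious what that wiring looks like, here's a minimal sketch of i18n plus provider-based theming in a React app. I'm assuming react-i18next and styled-components here; the theme shape, tenant values, and strings below are illustrative, not the actual codebase:

```tsx
// Sketch only: assumes react-i18next and styled-components.
// Theme shape, tenant values, and strings are illustrative.
import React from "react";
import i18n from "i18next";
import { initReactI18next, useTranslation } from "react-i18next";
import { ThemeProvider } from "styled-components";

// Two locales, as in the rewrite described above.
i18n.use(initReactI18next).init({
  lng: "en",
  fallbackLng: "en",
  resources: {
    en: { translation: { greeting: "Welcome" } },
    es: { translation: { greeting: "Bienvenido" } },
  },
});

// White labeling boils down to swapping this per-tenant object.
const tenantTheme = {
  brandColor: "#0055aa",
  logoUrl: "/tenants/acme/logo.svg",
};

function Greeting() {
  const { t } = useTranslation();
  // The WCAG work (labels, contrast, focus order) hangs off markup like this.
  return <h1>{t("greeting")}</h1>;
}

export function App() {
  return (
    <ThemeProvider theme={tenantTheme}>
      <Greeting />
    </ThemeProvider>
  );
}
```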
If I had gone to a third party and asked them to rewrite just the static pages, it would have been $200k and 3 months of work.
For me, the impact is absolutely in hiring juniors. We basically just stopped considering it. There's almost no work a junior can do that I wouldn't look at and think it's easier to hand off, in some form (possibly different from what the junior would do), to an AI.
It's a bit illusory, though. It was always the case that handing off work to a junior was often more work than doing it yourself; hiring someone and raising their productivity to the point of net gain is an investment in the future. As much as anything, this is a pause while we reassess what the shape of expertise now looks like. I know that what juniors did before is less valuable than it used to be, but I don't know what the value proposition of the future looks like. So until we know, we pause and hold, and the efficiency gains from using AI are mostly being invested in that "hold": they are keeping us viable from a workload perspective long enough to restructure work around AI. Once we do that, I think there will be a reset and hiring of juniors will kick back in.
Anthropic can cause layoffs through pure marketing. People were crediting an Anthropic statement with causing a drop in IBM's stock value, which may genuinely lead to layoffs: https://finance.yahoo.com/news/ibm-stock-plunges-ai-threat-1...
We'll probably have to wait for the hype to wear off to get a better idea, but that might take a long while.
There goes my excuse for not finding a job in this market.
I think there are some advantages to being first.
It's time to re-evaluate our strategies if we've been operating under the assumption that this is going to be a bubble, or otherwise largely bullshit. It definitely works. Not everywhere, all the time, but often enough to be "scary" now. Some of my prior dismissals, like "text-to-SQL will never work," are looking pale today.
The TL;DR is that there is little measurable impact (and I'd personally add "yet").
To quote:
"We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations"
My belief, based on personal experience, is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.
I doubt this impact is measurable yet: there is a lag between hiring intentions and their impact on jobs, and outside Silicon Valley, large-scale hiring decisions are rarely made on a 3-month timescale.
The most interesting part is the radar plot showing the lack of AI usage in many industries where the capability is already there!
Also, it seems to me the concept of "observed exposure" is analogous to OpenAI's concept of "capability overhang" - https://cdn.openai.com/pdf/openai-ending-the-capability-over...
I think the underlying reason is simply that companies are "shaped wrong" to fully absorb AI. I always harp on the fact that there's a learning curve (and significant self-adaptation) to really using AI well. Companies face the same challenge.
Let's focus on software. By many estimates, code-related activities take up only 20-60%, maybe even as little as 11%, of software engineers' time (e.g. https://medium.com/@vikpoca/developers-spend-only-11-of-thei...). But consider where the rest of the time goes: largely coordination overhead. Meetings and the like drain a lot of time (more so the more senior you get), and those are mostly about getting a bunch of people across the company's dependency web to align on technical directions and roadmaps.
I call this "Conway Overhead."
This is inevitable, because the only way to scale cognitive work was to distribute it across many people with narrow, specialized knowledge and domain ownership. It's effectively the overhead of distributed systems applied to organizations. Hence each team owned a couple of products / services / platforms / projects, with each member working on an even smaller piece at a time. Coordination happened along the hierarchy of the org chart, because that is most efficient.
Now imagine a single AI-assisted person competently owning everything a team used to own.
Suddenly the team at the leaf layer shrinks from about five people to one. This instantly eliminates a lot of overhead: daily standups, regular 1:1s, and intra-team blockers. And inter-team coordination is reduced to a couple of devs hashing it out over Slack instead of meetings, tickets, timelines, backlog grooming, and blockers.
So not only has the speed of coding increased; the share of time spent coding has also gone up. The acceleration is super-linear.
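To put rough numbers on that, here's a back-of-the-envelope calculation. The figures are illustrative assumptions loosely anchored to the 20-60% time-split estimates cited above, not measurements:

```ts
// Throughput = (share of time spent coding) x (coding speed).
// All numbers below are illustrative assumptions, not measurements.
const before = { codingShare: 0.3, codingSpeed: 1.0 }; // ~30% of the week coding
const after = { codingShare: 0.7, codingSpeed: 3.0 };  // overhead collapsed, AI-assisted

const throughputBefore = before.codingShare * before.codingSpeed; // 0.3
const throughputAfter = after.codingShare * after.codingSpeed;    // 2.1

// A 3x coding speedup compounds into a ~7x throughput gain, because the
// time freed from coordination is reinvested in actual coding.
console.log(throughputAfter / throughputBefore); // 7
```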
But this headcount reduction ripples up the org tree. The middle-management layers, and with them total headcount, are thinned out by the same factor as the bottom-most layer!
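A toy model of that ripple, assuming a uniform span of control of 5 (the numbers are purely hypothetical):

```ts
// Toy org model: ICs at the bottom, managers with a span of 5 above them.
// Shrinking each 5-person leaf team to 1 thins every layer by the same factor.
function totalHeadcount(ics: number, span: number): number {
  let total = ics;
  let layer = ics;
  while (layer > 1) {
    layer = Math.ceil(layer / span); // managers needed for the layer below
    total += layer;
  }
  return total;
}

console.log(totalHeadcount(125, 5)); // 156 (125 ICs + 25 + 5 + 1 managers)
console.log(totalHeadcount(25, 5));  // 31  (25 ICs + 5 + 1): a ~5x smaller org
```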
And this considers only the engineering side. Imagine the same dynamic playing out across departments as all kinds of adjacent roles are rolled up into the same person: product, design, reliability...
These are radical changes to workflows and organizations. At this stage, however, we're simply shoehorning AI into the old, now-obsolete, ticket-driven way of doing things.
So of course AI has a "capability overhang" and is going to take time to have broad impact... but when it does, it's not going to be pretty.
Kinda done with this.
If you have something important to say, say it up front and back it up with literature later.