Hacker News

GPT-5.4 (508 points)

I find it quite funny that this blog post has a big "Ask ChatGPT" box at the bottom. You might think you could ask a question about the contents of the blog post, so you type "summarise this blog post". It opens a new chat window with the link to the blog post followed by "summarise this blog post", only to be told "I can't access external URLs directly, but if you can paste the relevant text or describe the content you're interested in from the page, I can help you summarize it. Feel free to share!"

That's hilarious. Does OpenAI even know this doesn't work?

by Philip-J-Fry
What a model mess!

OpenAI now has three price points: GPT 5.1, GPT 5.2, and now GPT 5.4. Their version numbers jump across different model lines, with Codex at 5.3 and what they now call Instant also at 5.3.

Anthropic are really the only ones who managed to get this under control: Three models, priced at three different levels. New models are immediately available everywhere.

Google essentially only has preview models! The last GA release is 2.5. As a developer, I can either use an outdated model or have zero assurance that the model won't get discontinued within weeks.

by __jl__
The marquee feature is obviously the 1M context window, compared to the ~200k that other models support, sometimes with an extra cost for generation beyond 200k tokens. Per the pricing page, there is no additional cost for tokens beyond 200k: https://openai.com/api/pricing/

Also per pricing, GPT-5.4 ($2.50/M input, $15/M output) is much cheaper than Opus 4.6 ($5/M input, $25/M output) and Opus has a penalty for its beta >200k context window.
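To make the gap above concrete, here is a minimal sketch that compares per-request cost at the per-million-token rates quoted in this comment (the prices and the 100k-in/10k-out request size are just this thread's figures, not an authoritative price list):

```python
# Per-million-token prices (USD) as quoted above -- thread figures, not official.
PRICES = {
    "gpt-5.4": {"input": 2.50, "output": 15.00},
    "opus-4.6": {"input": 5.00, "output": 25.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the quoted per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 100k-input / 10k-output request.
for model in PRICES:
    print(model, round(request_cost(model, 100_000, 10_000), 4))
# gpt-5.4 -> 0.4 USD, opus-4.6 -> 0.75 USD
```

At those rates a 100k/10k request costs about $0.40 on GPT-5.4 versus $0.75 on Opus 4.6, before any >200k-context surcharge on the Anthropic side.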

I am skeptical whether the 1M context window will provide material gains, as current Codex/Opus show weaknesses as their context windows fill up, but we'll see.

Per updated docs (https://developers.openai.com/api/docs/guides/latest-model), it supersedes GPT-5.3-Codex, which is an interesting move.

by minimaxir
I've only used 5.4 for one prompt so far (edit: three at high now), with reasoning set to extra high (it took really long): analysing my codebase and writing an evaluation on a topic. I found its writing and analysis thoughtful, precise, and surprisingly clearly written, unlike 5.3-Codex. It feels very lucid and uses human phrasing.

It might be my AGENTS.md requiring clearer, simpler language, but at least 5.4's doing a good job of following the guidelines. 5.3-Codex wasn't so great at simple, clear writing.

by creamyhorror
>Today, we’re releasing <..> GPT‑5.3 Instant

>Today, we’re releasing GPT‑5.4 in ChatGPT (as GPT‑5.4 Thinking),

>Note that there is not a model named GPT‑5.3 Thinking

They held out for eight months without a confusing numbering scheme :)

by kgeist
So let me get this straight: OpenAI previously had an issue with lots of different models and versions being available. They solved this by introducing GPT-5, which was more like a router that put all these models under the hood, so you only had to prompt GPT-5 and it would route to the best-suited model. This worked great, I assume, and made the UI comprehensible for the user. But now they are starting to introduce more different models again?

We got:

- GPT-5.1

- GPT-5.2 Thinking

- GPT-5.3 (codex)

- GPT-5.3 Instant

- GPT-5.4 Thinking

- GPT-5.4 Pro

Who’s to blame for this ridiculous path they are taking? I’m so glad I am not a Chat user, because this adds so much unnecessary cognitive load.

The good news here is the support for 1M context window, finally it has caught up to Gemini.

by Alifatisk
The "RPG Game" example in the blog post is one of the most impressive demos of autonomous engineering I've seen.

It's very similar to "Battle Brothers", and the fact that RPG games require art assets, AI for enemy moves, and a host of other logical systems makes it all the more impressive.

by gavinray
I’m sure the military and security services will enjoy it.
by Chance-Device
"GPT‑5.4 interprets screenshots of a browser interface and interacts with UI elements through coordinate-based clicking to send emails and schedule a calendar event."

They show an example of 5.4 clicking around in Gmail to send an email.

I still think this is the wrong interface to be interacting with the internet. Why not use Gmail APIs? No need to do any screenshot interpretation or coordinate-based clicking.
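For comparison, the API route this comment suggests needs no screenshot interpretation or coordinates at all. A minimal sketch using the Gmail REST API via google-api-python-client (it assumes you already hold OAuth credentials `creds` with the gmail.send scope; `send_email` is a hypothetical helper, not an OpenAI or Gmail function):

```python
import base64
from email.message import EmailMessage

def build_raw_message(sender: str, to: str, subject: str, body: str) -> dict:
    """Build the base64url-encoded payload the Gmail API expects."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, to, subject
    msg.set_content(body)
    return {"raw": base64.urlsafe_b64encode(msg.as_bytes()).decode()}

def send_email(creds, sender, to, subject, body):
    # Requires google-api-python-client and OAuth credentials with the
    # gmail.send scope; `creds` is assumed to exist already.
    from googleapiclient.discovery import build
    service = build("gmail", "v1", credentials=creds)
    payload = build_raw_message(sender, to, subject, body)
    return service.users().messages().send(userId="me", body=payload).execute()
```

One structured API call versus a screenshot-parse-click loop, though the API route only works for services that expose one, which is presumably why the demo targets the UI.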

by mattas
Surprised to see every chart limited to comparisons against other OpenAI models. What does the industry comparison look like?
by smoody07
The actual card is here https://deploymentsafety.openai.com/gpt-5-4-thinking/introdu... the link currently goes to the announcement.
by egonschiele
I no longer want to support OpenAI at all. Regardless of benchmarks or real world performance.
by prydt
Can anyone compare the $200/mo Codex usage limits with the $200/mo Claude usage limits? It’s extremely difficult to get a feel for whether switching between the two is going to result in hitting limits more or less often, and it’s difficult to find discussion online about this.

In practice, if I buy $200/mo codex, can I basically run 3 codex instances simultaneously in tmux, like I can with claude code pro max, all day every day, without hitting limits?

by nickysielicki
Results from my Extended NYT Connections benchmark:

GPT-5.4 extra high scores 94.0 (GPT-5.2 extra high scored 88.6).

GPT-5.4 medium scores 92.0 (GPT-5.2 medium scored 71.4).

GPT-5.4 no reasoning scores 32.8 (GPT-5.2 no reasoning scored 28.1).

by zone411
These releases are lacking something. Yes, they optimised for benchmarks, but it’s just not all that impressive anymore. It is time for a product, not for a marginally improved model.
by yanis_t
I am very curious about this:

> Theme park simulation game made with GPT‑5.4 from a single lightly specified prompt, using Playwright Interactive for browser playtesting and image generation for the isometric asset set.

Is "Playwright Interactive" a skill that takes screenshots in a tight loop with code changes, or is there more to it?
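Nothing public says what "Playwright Interactive" actually is, but the loop described here is straightforward to sketch. Below is a hypothetical version with the browser capture and the model's judgment injected as callables (`capture`, `evaluate`, `apply_change` are all placeholders, not real APIs):

```python
def playtest_loop(capture, evaluate, apply_change, max_iters=10):
    """Generic screenshot-evaluate-patch loop; returns iterations used.

    capture()        -> an image of the running app (e.g. PNG bytes)
    evaluate(img)    -> "ok" when satisfied, else a change description
    apply_change(d)  -> patches the code under test and rebuilds
    """
    for i in range(1, max_iters + 1):
        verdict = evaluate(capture())
        if verdict == "ok":
            return i
        apply_change(verdict)
    return max_iters

# With real Playwright, `capture` would be something like:
#   lambda: page.screenshot()   # page from playwright.sync_api
```

Whether the actual skill is this simple or layers in accessibility trees, DOM inspection, etc. is exactly the open question the comment raises.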

by consumer451
If you don't want to click in, an easy comparison with the other 2 frontier models: https://x.com/OpenAI/status/2029620619743219811?s=20
by twtw99
Article: https://openai.com/index/introducing-gpt-5-4/

gpt-5.4

Input: $2.50 /M tokens

Cached: $0.25 /M tokens

Output: $15 /M tokens

---

gpt-5.4-pro

Input: $30 /M tokens

Output: $180 /M tokens

Wtf

by denysvitali
Just tested it with my version of the pelican test: a minimal RTS game implementation (zero-shot in codex cli): https://gist.github.com/senko/596a657b4c0bfd5c8d08f44e4e5347... (you'll have to download and open the file, sadly GitHub refuses to serve it with the correct content type)

This is on the edge of what the frontier models can do. For 5.4, the result is better than 5.3-Codex and Opus 4.6. (Edit: nowhere near the RPG game from their blog post, which was presumably much more specced out and used a better engineering setup.)

I also tested it with a non-trivial task I had to do on an existing legacy codebase, and it breezed through a task that Claude Code with Opus 4.6 was struggling with.

I don't know when Anthropic will fire back with their own update, but until then I'll spend a bit more time with Codex CLI and GPT 5.4.

by senko
> Steerability: Similarly to how Codex outlines its approach when it starts working, GPT‑5.4 Thinking in ChatGPT will now outline its work with a preamble for longer, more complex queries. You can also add instructions or adjust its direction mid-response.

This was definitely missing before, and a frustrating difference when switching between ChatGPT and Codex. Great addition.

by timpera
1 million tokens is great until you notice the long context scores fall off a cliff past 256K and the rest is basically vibes and auto compacting.
by jryio
Sam Altman can keep his model intentionally to himself. Not doing business with mass murderers
by motbus3
They hired the dude from OpenClaw, they had Jony Ive for a while now, give us something different!
by hmokiguess
I’ve officially got model fatigue. I don’t care anymore.
by daft_pink
I think the most exciting change announced here is the use of tool search to dynamically load tools as needed: https://developers.openai.com/api/docs/guides/tools-tool-sea...
by rbitar
Anyone else completely not interested? Since GPT5, its been cost cutting measure after cost cutting measure.

I imagine they added a feature or two, and the router will continue to give people 70B parameter-like responses when they dont ask for math or coding questions.

by butILoveLife
Bit concerning that we see in some cases significantly worse results when enabling thinking. Especially for Math, but also in the browser agent benchmark.

Not sure if this is more concerning for the test time compute paradigm or the underlying model itself.

Maybe I'm misunderstanding something though? I'm assuming 5.4 and 5.4 Thinking are the same underlying model and that's not just marketing.

by ZeroCool2u
Beat Simon Willison ;)

https://www.svgviewer.dev/s/gAa69yQd

Not the best pelican compared to Gemini 3.1 Pro, but I'm sure it does remarkably better at coding or Excel, given those are part of its measured benchmarks.

by nickandbro
Anyone else feel that it’s exhausting keeping up with the pace of new model releases? I swear every other week there’s a new release!
by bazmattaz
What is the main difference between this version and the previous one?
by atkrad
Anyone know why OpenAI hasn't released a new model for fine tuning since 4.1? It'll be a year next month since their last model update for fine tuning.
by dandiep
5.4 vs 5.3-Codex? Which one is better for coding?
by jcmontx
So did they raise the ridiculously small "per tool call token limit" when working with MCP servers? This makes Chat useless... I don't care, but my users do.
by Aldipower
"Here's a brand new state-of-the-art model. It costs 10x more than the previous one because it's just so good. But don't worry, if you don't want all this power you can continue to use the older one."

A couple months later:

"We are deprecating the older model."

by paxys
Quick: let's release something new that gives the appearance that we're still relevant
by melbourne_mat
Seems to be quite similar to 5.3-codex, but somehow almost 2x more expensive: https://aibenchy.com/compare/openai-gpt-5-4-medium/openai-gp...
by XCSme
Inline poll: What reasoning levels do you work with?

This becomes increasingly unclear to me, because the more interesting work will be the agent going off for 30mins+ on high / extra high (it's mostly one of the two), and that's a long time to wait and an infeasible amount of code to A/B.

by jstummbillig
GPT 5.4 is one of the most censored models out there.

https://speechmap.ai/models/openai-gpt-5-4

It completes only 29% of controversial requests. It refuses to discuss numerous subjects rooted in fact or that reflect the views of significant portions of the population. It refuses to even write a short essay on, say, Herasight-style genetic screening or putting weapons in space. It'll argue passionately in favor of censoring "lies" online (judged by whom?). 100% of the time, it'll write an essay explaining that the US founding fathers were hypocrites. It'll argue against you if you suggest it's right to use violence to prevent theft of your own property, or that we should fortify our nuclear arsenal.

Agree or disagree, reasonable people can have a range of views of these subjects and it is not the place of OpenAI or any lab to determine for everyone the right answers to open societal questions.

Shame on them for this.

by quotemstr
I only want to see how it performs on the Bullshit-benchmark https://petergpt.github.io/bullshit-benchmark/viewer/index.v...

GPT is not even close to Claude in terms of responding to BS.

by smusamashah
No thanks. Already cancelled my sub.
by alpineman
83% win rate over industry professionals across 44 occupations.

I'd believe it on those specific tasks. Near-universal adoption in software still hasn't moved DORA metrics. The model gets better every release; the output doesn't follow. I just had a closer look at those productivity metrics this week: https://philippdubach.com/posts/93-of-developers-use-ai-codi...

by 7777777phil
Does anyone know what website is the "Isometric Park Builder" shown off here?
by OsrsNeedsf2P
How much of LLM improvement comes from regular ChatGPT usage these days?
by brcmthrowaway
It's interesting that they charge more for the > 200k token window, but the benchmark score seems to go down significantly past that. That's judging from the Long Context benchmark score they posted, but perhaps I'm misunderstanding what that implies.
by strongpigeon
I use ChatGPT primarily for health related prompts. Looking at bloodwork, playing doctor for diagnosing minor aches/pains from weightlifting, etc.

Interestingly, the "Health" category seems to report worse performance compared to 5.2.

by cj
I was just testing this with my unity automation tool and the performance uplift from 5.2 seems to be substantial.
by bob1029
Notably, 75% on OSWorld, surpassing humans at 72%... (how well models use operating systems)
by iamronaldo
No doubt this was released early to ease the bad press
by motza
Even with the 1m context window, it looks like these models drop off significantly at about 256k. Hopefully improving that is a high priority for 2026.
by swingboy
$30/M input and $180/M output tokens is nuts. Ridiculously expensive for not that great a bump in intelligence compared to other models.
by nthypes
Wow insane improvements in targeting systems for military targets over children
by elmean
Honestly at this point I just want to know if it follows complex instructions better than 5.1. The benchmark numbers stopped meaning much to me a while ago - real usage always feels different.
by vicchenai
Is it any good at coding?
by gigatexal
Sam really fumbled the top position in a matter of months, and spectacularly so. Wow. It appears that people are much more excited by Anthropic and Google releases, and there are good reasons for that which were absolutely avoidable.
by beernet
Feels incremental. Looks like OpenAI is struggling.
by woeirua
Is it just me or is the price for 5.4 Pro just insane?
by thefounder
Benchmarks barely improved, it seems.
by world2vec
Does this model autonomously kill people without human approval or perform domestic surveillance of US citizens?
by throwaway5752
Remember when everyone was predicting that GPT-5 would take over the planet?
by ilaksh
Anyone else getting artifacts when using this model in Cursor?

numerusformassistant to=functions.ReadFile մեկնաբանություն 天天爱彩票网站json {"path":

by koakuma-chan
Now with more and improved domestic espionage capabilities
by fernst
What is with the absurdity of skipping "5.3 Thinking"?
by OutOfHere
What is Pro exactly, and is it available in Codex CLI?
by lostmsu
We'll have to wait a day or two, maybe a week or two, to determine if this is more capable in coding than 5.3, which seems to be the economically valuable capability at this time.

In terms of writing and research even Gemini, with a good prompt, is close to useable. That's likely not a differentiator.

by HardCodedBias
Everyone is mindblown in 3...2...1
by oytis
No Codex model yet
by wahnfrieden
Does this improve Tomahawk missile accuracy?
by tmpz22
It shows a 404 as of now.
by ignorantguy
I wouldn't trust any of these benchmarks unless they are accompanied by some sort of proof other than "trust me bro". Also not including the parameters the models were run at (especially the other models) makes it hard to form fair comparisons. They need to publish, at minimum, the code and runner used to complete the benchmarks and logs.

Not including the Chinese models is also obviously done to make it appear like they aren't as cooked as they really are.

by iamleppert
What is the point of gpt codex?
by simianwords
More discussion here on the blog post announcement which has been confusingly penalized by Hacker News's algorithm: https://news.ycombinator.com/item?id=47265005
by minimaxir
some sloppy improvements
by leftbehinds