Hacker News

347 points

iPhone 17 Pro Demonstrated Running a 400B LLM

by anemll | 198 comments
> SSD streaming to GPU

Is this solution based on what Apple describes in their 2023 paper 'LLM in a flash' [1]?

1: https://arxiv.org/abs/2312.11514

by firstbabylonian
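
For anyone who hasn't read it: the paper's core trick is to leave the weights in flash and page in only what each forward pass actually needs. A minimal sketch of that access pattern using numpy's memmap; the file name, shapes, and toy forward pass below are made up for illustration, not the paper's (or this demo's) actual format:

    import numpy as np

    # Hypothetical layout: N square layer matrices packed back to back in
    # one file standing in for a checkpoint sitting on flash/SSD.
    N_LAYERS, D = 4, 64
    LAYER_ELEMS = D * D

    rng = np.random.default_rng(0)
    rng.standard_normal(N_LAYERS * LAYER_ELEMS).astype(np.float16).tofile("weights.bin")

    # mode="r" maps the file without reading it; pages are faulted in from
    # disk only when a slice is actually touched, so DRAM holds roughly one
    # layer at a time instead of the whole model.
    mm = np.memmap("weights.bin", dtype=np.float16, mode="r")

    x = np.ones(D, dtype=np.float16)
    for i in range(N_LAYERS):
        start = i * LAYER_ELEMS
        w = np.asarray(mm[start:start + LAYER_ELEMS]).reshape(D, D)  # streamed on demand
        x = np.tanh(x @ w)
    print(x[:4])
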
My iPad Air with M2 can run local LLMs rather well. But it gets ridiculously hot within seconds and starts throttling.
by andix
I had a dream that everyone had super intelligent AIs in their pockets, and yet all they did was doomscroll and catfish...shortly before everything was destroyed.
by CrzyLngPwd
This reminds me of how excited people were to get models running locally when llama.cpp first hit.
by lainproliant
It's like the sloth from Zootopia
by gnarlouse
It's 400B, but it's a mixture of experts, so how many parameters are active at any one time?
by cj00
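
The expert split isn't given in the thread, so here's the general arithmetic with placeholder numbers; shared_frac, n_experts, and top_k are all hypothetical stand-ins that would need swapping for the real architecture:

    def active_params_b(total_b=400, n_experts=64, top_k=4, shared_frac=0.1):
        # All four values are illustrative guesses, not this model's specs.
        shared = total_b * shared_frac        # attention, embeddings, router
        experts = total_b - shared            # parameters divided among experts
        return shared + experts * top_k / n_experts

    a = active_params_b()
    print(f"~{a:.1f}B of 400B parameters touched per token")
    print(f"~{a * 0.5:.0f} GB read per token at 4-bit weights")

Whatever the real split is, those cold routed experts are what make SSD streaming viable at all: for any given token, most of the 400B can stay on flash.
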
I installed Termux on an old Android phone last week (running LineageOS), and then using Termux installed Ollama and a small model. It ran terribly, but it did run.
by illwrks
For small values of "running".

Don't get me wrong, it's an awesome achievement, but 0.6 tokens/s at presumably fairly heavy compute (and battery drain), on a mobile device? There aren't too many use cases for that :)

by groby_b
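
For a sense of scale, the wait at 0.6 tokens/s for a few typical reply lengths (assuming decode speed is the only bottleneck and ignoring prompt processing):

    RATE = 0.6  # tokens/s, as reported in the demo

    for label, tokens in [("one-liner", 20), ("short answer", 150), ("long answer", 600)]:
        print(f"{label:>12}: {tokens:4d} tokens -> {tokens / RATE / 60:5.1f} min")
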
Apple's unified memory architecture plays a huge part in this. It will trigger a large-scale rearchitecting of mobile hardware across the board; I'm sure those efforts are already underway.

I understand this is for a demo, but do we really need a 400B model on a mobile device? A 10B model would do fine, right? What do we miss with a pared-down one?

by yalogin
Run an incredible 400B parameters on a handheld device.

0.6 t/s, wait 30 seconds to see what these billions of calculations get us:

"That is a profound observation, and you are absolutely right ..."

by causal
This is awesome! How far away are we from a model of this capability level running at 100 t/s? It's unclear to me whether we'll get there first through model miniaturization or through hardware gains.
by _air
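
One way to frame that question: decode is mostly memory-bound, so tokens/s is roughly bandwidth divided by bytes of weights touched per token. The sketch below assumes (hypothetically) ~60B active parameters at 4-bit, and the bandwidth figures are ballpark classes rather than specs:

    def tokens_per_sec(bandwidth_gb_s, active_b=60, bits=4):
        # Bandwidth-bound estimate: every active weight is read once per token.
        bytes_per_token = active_b * 1e9 * bits / 8
        return bandwidth_gb_s * 1e9 / bytes_per_token

    for name, bw in [("NVMe-class flash", 3), ("phone LPDDR5-class DRAM", 60),
                     ("needed for 100 t/s", 3000)]:
        print(f"{name:>24}: {bw:5d} GB/s -> {tokens_per_sec(bw):6.2f} t/s")

By that estimate, 100 t/s on this class of model takes either datacenter-grade bandwidth or a much smaller (or more aggressively quantized) active set, so the answer is probably both at once: miniaturization and hardware.
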
I read this title as: "iPhone 17 Pro demonstrated being an overpriced phone".
by einpoklum
Impressive. Running a 400B model on-device, even at low throughput, is pretty wild.
by r4m18612
It will be funny if we go back to lugging brick-size batteries around with us everywhere!
by redwood
The classic CPU, memory, storage, time tradeoffs, rediscovered by AI model developers. There is something new here, though: add the GPU to the trade space.
by dv_dt
I can't understand why this is a surprise to anyone. An iPhone is still a computer; of course it can run any model that fits in storage, albeit very slowly. The implementation is impressive, I guess, but I don't see how this is a novel capability. And at 0.6 t/s, it's not cost-efficient hardware for the job. An iPhone can also render Pixar movies if you let it run long enough, mine bitcoin at a pathetic hashrate, and run weather simulations, just not in time for the forecast to be relevant.
by skiing_crawling
I have some macro opinions about Apple - not sure if I'm correct, but tell me what you think.

Apple has always seen RAM as an economic advantage for their platform: make the development effort to ensure the OS and apps work well with minimal memory, and save billions every year in hardware costs. In 2026, iPhones still come with 8 GB of RAM; the Pro/Max models come with 12 GB.

The problem is that AI (ML/LLM training and inference) is an area where you can't get around the need for copious amounts of fast working memory. (Hence the critical shortage of RAM at the moment, as AI data centers consume as many memory chips as possible.)

Unless there's something I don't know (which is more than possible) Apple can't code their way around this problem, nor create specialized SoCs with ML cores that obviate the need for lots and lots of RAM.

So it's going to be interesting to see whether they accept this reality and we start seeing future iPhones with 16 GB, 32 GB, or more as standard in order to make AI performant. And whether they give up on adding AI to the billions of iPhones with minimal RAM already out there.

As a side note, 8 GB of RAM hasn't been enough for a decade. It prevents basic tasks like keeping web tabs live in the background. My pet peeve is having just a few websites open and having the page refresh when I swap between them because of aggressive memory management.

To me, Apple's obvious strength is pushing AI to the edge as much as possible. While other companies are investing in massive data centers which will have millions of chips that will be outdated within the next couple years, Apple will be able to incrementally improve their ML/AI features by running on the latest and greatest chips every year. Apple has a huge advantage in that they can design their chips with a mega high speed bus, which is just as important as the quantity of RAM.

But all that depends on Apple's willingness to accept that RAM isn't an area they can skimp on any more, and I'm not sure they will.

Sorry for the brain dump. I'd love to be educated on this in case I'm totally off base.

by russellbeattie
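
To put numbers on the RAM point: weight memory is roughly params x bits / 8, before the KV cache and the OS's own needs. A quick illustration across common quantization levels:

    def weight_gb(params_b, bits):
        # params in billions * bits per weight / 8 = gigabytes of weights
        return params_b * bits / 8

    for params in (3, 10, 70, 400):
        row = "  ".join(f"{bits:2d}-bit: {weight_gb(params, bits):6.1f} GB"
                        for bits in (16, 8, 4))
        print(f"{params:4d}B -> {row}")

Against an 8 GB or 12 GB phone, only single-digit-billion models fit in RAM with headroom, which is exactly why this demo has to stream from flash, and why the bandwidth point above matters as much as capacity.
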
A year ago this would have been considered impossible. The hardware is moving faster than anyone's software assumptions.
by ashwinnair99
The power draw is going to be crazy (today).

Practical LLMs on mobile devices are at least a few years away.

by HardCodedBias
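
Some rough numbers on the power point, with every figure assumed rather than measured: if sustained decode draws on the order of 8 W at the demo's 0.6 tokens/s, then:

    POWER_W = 8.0      # assumed sustained SoC + flash draw, not a measured figure
    RATE_TPS = 0.6     # tokens/s from the demo
    BATTERY_WH = 12.0  # roughly a Pro-class iPhone battery

    j_per_token = POWER_W / RATE_TPS
    tokens_per_charge = BATTERY_WH * 3600 / j_per_token
    print(f"{j_per_token:.1f} J/token, ~{tokens_per_charge:,.0f} tokens on a full charge")

A few thousand tokens per full charge is the kind of budget that makes this a demo rather than a daily driver, at least today.
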
"400 bytes should be enough for anybody"
by 1970-01-01
Apple might just win the AI race without even running in it. It's all about the distribution.
by rwaksmunski
It's crazy to see a 400B model running on an iPhone. But moving forward, as the information density and architectural efficiency of smaller models continue to increase, getting high-quality, real-time inference on mobile is going to become trivial.
by simopa
The heat problem is going to be the real constraint here. I've been running smaller models locally for some internal tooling at work and even those make my MacBook sound like a jet engine after twenty minutes. A 400B model on a phone seems like a great way to turn your pocket into a hand warmer, even with MoE routing. The unified memory is clever but physics still applies.
by johnwhitman