https://pcper.com/2017/06/how-3d-xpoint-phase-change-memory-...
In an era of RAM shortages and quarterly price increases, Optane remains viable for swap and CPU/GPU cache.
They suck for large sequential file access, but they're incredible for small random access: databases.
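A quick way to see the workload split the parent comment describes: compare a sequential pass over a file against the same number of 4 KiB reads at shuffled offsets, which is roughly the pattern a database index lookup generates. This is only an illustrative sketch using Python's `os.pread` (file size and block size are arbitrary), not a substitute for a real benchmark tool like fio, and the page cache will hide most of the gap on a warm file.

```python
import os
import random
import tempfile
import time

SIZE = 64 * 1024 * 1024   # 64 MiB scratch file
BLOCK = 4096               # 4 KiB, a typical database page-ish read size

# Create a scratch file to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)

# Sequential: one pass, block by block, in file order.
t0 = time.perf_counter()
for off in range(0, SIZE, BLOCK):
    os.pread(fd, BLOCK, off)
seq = time.perf_counter() - t0

# Random: same number of 4 KiB reads, shuffled offsets.
offsets = list(range(0, SIZE, BLOCK))
random.shuffle(offsets)
t0 = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)
rnd = time.perf_counter() - t0

os.close(fd)
os.remove(path)
print(f"sequential: {seq:.3f}s  random 4K: {rnd:.3f}s")
```

On a real device with the cache bypassed (`O_DIRECT`), the random case falls far behind sequential on flash; shrinking that gap was exactly Optane's pitch.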
Looking at those charts, besides the DWPD it feels like normal NVMe has mostly caught up. I occasionally wonder where a gen 7/8(?) Optane would be today if it had caught on; it'd probably be nuts.
It seems like there's a very small window, commercially, for new persistent memories. Flash throughput scales really cost-efficiently, and a lot is already built around dealing with the tens-of-microseconds latencies (or worse--networked block storage!). Read latencies you can cache your way out of, and writers can either accept commit latency or play it a little fast and loose (count a replicated write as safe enough or...just not be safe). You have to improve on Flash by enough to make it worth the leap while remaining cheaper than other approaches to the same problem, and you have to be confident enough in pulling it off to invest a ton up front. Not easy!
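The commit-latency tradeoff above can be sketched in a few lines: a "safe" writer fsyncs every commit and pays the device's write latency each time, while a "fast and loose" writer acknowledges immediately and fsyncs once per batch, accepting that a crash loses the un-synced tail. Record count, payload, and file handling here are made up for illustration.

```python
import os
import tempfile
import time

RECORDS = 200
payload = b"x" * 128

def safe_writer(path):
    """Durable per commit: fsync after every record."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    t0 = time.perf_counter()
    for _ in range(RECORDS):
        os.write(fd, payload)
        os.fsync(fd)  # pay the device's write latency on every commit
    elapsed = time.perf_counter() - t0
    os.close(fd)
    return elapsed

def loose_writer(path):
    """Group commit: acknowledge immediately, fsync once at the end.
    A crash loses the un-synced tail -- 'a little fast and loose'."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    t0 = time.perf_counter()
    for _ in range(RECORDS):
        os.write(fd, payload)
    os.fsync(fd)
    elapsed = time.perf_counter() - t0
    os.close(fd)
    return elapsed

fd0, safe_path = tempfile.mkstemp()
os.close(fd0)
fd1, loose_path = tempfile.mkstemp()
os.close(fd1)

t_safe = safe_writer(safe_path)
t_loose = loose_writer(loose_path)
print(f"fsync-per-commit: {t_safe:.3f}s  batched fsync: {t_loose:.3f}s")

os.remove(safe_path)
os.remove(loose_path)
```

The point of a lower-latency persistent medium is to shrink the fsync-per-commit number until the loose tricks stop being worth their risk, and that improvement has to be big enough to justify the leap.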
And if no shrink happened, was that because (a) it was possible but too hard; (b) there were known blockers to a die shrink; or (c) execs didn't want to pay to find out?
https://goughlui.com/2024/07/28/tech-flashback-intel-optane-...