Bizarro-Moore’s Law for SSDs?

It looks like SSDs are going to get worse over time, unlike hard drives:

SSDs are seemingly doomed. Why? Because as circuitry of NAND flash-based SSDs shrinks, densities increase. But that also means issues relating to read and write latency and data errors will increase as well.
[…]
The group discovered that write speed for pages in a flash block suffered “dramatic and predictable variations” in latency. Even more, the tests showed that as the NAND flash wore out, error rates varied widely between devices. Single-level cell NAND produced the best test results whereas multi-level cell and triple-level cell NAND produced less than spectacular results.

This suggests to me that SSDs are never going to break out of the boot-disk niche for hardware builds. For that matter, I get equivalent read speeds from my striped hard drives and from my boot SSD.
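
For what it’s worth, here is roughly how one can check a claim like that: a minimal Python sketch of a sequential-read benchmark. The file paths are placeholders for my setup, and the OS page cache will inflate the numbers unless you drop it first (or use a file much larger than RAM):

```python
import time

def sequential_read_mbps(path, block_size=1024 * 1024):
    """Read an existing large file front to back; return MB/s."""
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    return (total / (1024 * 1024)) / (time.time() - start)

# Placeholder paths: one large test file on the striped array, one on the SSD.
for name, path in [("striped HDDs", "/mnt/stripe/testfile"),
                   ("boot SSD", "/testfile")]:
    print(f"{name}: {sequential_read_mbps(path):.0f} MB/s")
```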

3 thoughts on “Bizarro-Moore’s Law for SSDs?”

  1. This is well known at my company (Fusion-IO). We have IP involved in maximizing the life of flash devices, or rather of storage units composed of many such flash devices (a toy sketch of the general idea is below).

    These are engineering problems with rich solution spaces. In point of fact, SSDs (and related products) have already broken out of said niche and are establishing themselves in the enterprise world.
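
    To give a flavor of the engineering involved, here is a toy wear-leveling sketch in Python (purely illustrative, not our actual technique): routing each write to the least-erased block spreads program/erase cycles evenly instead of burning out hot blocks early.

    ```python
    import heapq

    class ToyWearLeveler:
        """Toy wear leveling: route each write to the least-erased block."""
        def __init__(self, num_blocks):
            # Min-heap of (erase_count, block_id); least-worn block on top.
            self.blocks = [(0, b) for b in range(num_blocks)]
            heapq.heapify(self.blocks)

        def write(self, data):
            erases, block = heapq.heappop(self.blocks)
            # ...erase `block` and program `data` into it here...
            heapq.heappush(self.blocks, (erases + 1, block))
            return block
    ```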

  2. First, the article is based on the weird assumption that there will be no research dedicated to figuring out ways to deal with this issue. Remember when 8x CD-ROM burners seemed like they could never get faster?

    Second, there are alternatives being developed right now: dense persistent memory of various types, such as PRAM. One of those is very likely to blow past flash in the next few decades.

    Third, hard drives already combine multiple platters and striping internally, and SSDs already use RAID-like ideas across their internal flash chips. I don’t see why this graduate student thinks this is a serious limitation, or why we can’t make bigger SSDs for larger storage needs.

    Fourth, I am not sure what she thinks future storage needs are going to be like. Growth in demand is already dropping off: we have reached the point of near-infinite document capacity on current drives, and only special-purpose storage really requires more than a few terabytes. The only consumer-level need that involves that kind of storage is video, and that need is shifting to streaming. People are perfectly happy with 32 *GB* iPads as their primary computers now, and how we use computers is changing all the time.

    Really, it’s a horrible article.

  3. Jeffrey, the issue here seems to be a physics-based limitation. 8x CD-ROM burners were slow because they were first-generation, and the speed increases came from *engineering* improvements. You can’t engineer around physics limitations, though you’re absolutely right that a new technology can circumvent a problem.

    I hadn’t heard of PRAM, but from the Wikipedia article it looks like PRAM will have its own limitations, notably temperature dependence. There doesn’t seem to be a magic bullet.

    It’s true that you can stack SSDs; in fact, many SSDs billed as 256 GB are already internal RAID 0 arrays of two 128 GB units. There’s a limit to how far you can go that way, because each layer of RAID adds controller overhead, so it’s not a straight doubling of performance (see the back-of-the-envelope sketch at the end of this comment). I’m sure we can get to 1 TB SSDs eventually, but their reliability and lifetime will always be inferior to old-fashioned hard disks.

    As for my anticipation of future storage needs, I think that consumer music and photos drove the explosion to terabytes. But video will soon reach the same critical mass (see my article at Forbes). When that happens, even 10 TB will start to seem constrained.
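
    Back-of-the-envelope on the striping point above, in Python; the single-device speed and the 10% per-layer overhead are made-up numbers, only there to show why nested striping scales sub-linearly:

    ```python
    import math

    def striped_throughput(single_mbps, num_devices, overhead_per_layer=0.10):
        """Illustrative model: raw bandwidth scales with device count, but each
        layer of RAID 0 pairing loses a fixed controller-overhead fraction."""
        layers = math.log2(num_devices)  # 2 devices = 1 layer, 4 = 2 layers, ...
        return single_mbps * num_devices * (1 - overhead_per_layer) ** layers

    for n in (1, 2, 4, 8):
        print(f"{n} device(s): {striped_throughput(200, n):.0f} MB/s")
    ```

    With those assumptions, two devices deliver about 360 MB/s instead of an ideal 400, and the shortfall compounds with each additional layer.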
