
SanDisk claims SSD innovation improves performance and reliability

by Scott Bicheno on 5 November 2008, 14:02

Tags: SanDisk (NASDAQ:SNDK)

Quick Link: HEXUS.net/qap26


FFS!

Flash storage specialist SanDisk has come out with a file management system for its SSDs (solid state drives), which it claims: "...yields dramatic improvement in performance and reliability for computing applications."

The new technology is called ExtremeFFS (I thought FFS stood for something else - Ed) and SanDisk reckons it: "...has the potential to accelerate random write speeds by up to 100 times over existing systems." It will make an appearance in SanDisk products next year.

Additionally, SanDisk reckons it's time to change how we measure the performance of SSDs. The senior VP and GM of SanDisk's SSD unit, Rich Heye, says end-users need more help with comparing the performance of SSDs against HDDs (hard disk drives) and with calculating the lifespan of an SSD.

Consequently, Heye proposes vRPM (virtual revs per minute) and LDE (long-term data endurance) as new metrics. "SSDs will revolutionise client storage, but we need new benchmarks that allow them to be treated differently than HDDs," he said.

Here's what SanDisk has to say about ExtremeFFS:

To maximize random write performance, SanDisk developed the ExtremeFFS flash file management system. This operates on a page-based algorithm, which means there is no fixed coupling between physical and logical location. When a sector of data is written, the SSD puts it where it is most convenient and efficient. The result is an improvement in random write performance - by up to 100 times - as well as in overall endurance.
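The decoupling SanDisk describes can be illustrated with a toy model. This is a hypothetical sketch of page-based flash mapping in general, not SanDisk's actual ExtremeFFS algorithm: each write simply lands on the next free physical page and a mapping table is updated, so a random overwrite costs no more than a fresh sequential write.

```python
# Toy page-mapped flash translation layer (illustrative only).
# Logical pages have no fixed physical home: writes are append-only
# and the old copy is just marked stale for later garbage collection.

class PageMappedFlash:
    def __init__(self, num_pages):
        self.mapping = {}          # logical page -> physical page
        self.stale = set()         # physical pages holding obsolete data
        self.next_free = 0         # next unwritten physical page
        self.num_pages = num_pages

    def write(self, logical_page, data):
        old = self.mapping.get(logical_page)
        if old is not None:
            self.stale.add(old)    # old copy reclaimed by GC later
        self.mapping[logical_page] = self.next_free
        self.next_free += 1        # append-only: no in-place erase needed

flash = PageMappedFlash(num_pages=1024)
for lp in (7, 3, 7):               # random writes, including an overwrite
    flash.write(lp, b"data")
print(flash.mapping)               # {7: 2, 3: 1} -- page 7 was remapped
print(flash.stale)                 # {0} -- first copy of page 7 is stale
```

The point of the sketch is that the overwrite of logical page 7 never triggered an erase; it just consumed a fresh physical page, which is where the claimed random-write speed-up comes from.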

ExtremeFFS incorporates a fully non-blocking architecture in which all of the NAND channels can behave independently, with some reading while others are writing and garbage collecting. Another key element of ExtremeFFS is usage-based content localization, which allows the advanced flash management system to "learn" user patterns and over time localize data to maximize the product's performance and endurance.  "This feature might not show up in benchmarks, but we believe it is the right thing to do for end-users," Heye said.    

There's more on the subject here. Additionally Microsoft has said it's going to optimise Windows 7 for SSDs, something that isn't the case with Vista.

Does this look like a critical evolution in SSD technology to you? Do you think we need these new metrics? Let us know in the HEXUS.community.

HEXUS Forums :: 15 Comments

From my perspective (which needn't be at all correct) the SSD performance issue is this:

SSDs use the same interfaces and logical block addressing that we've been using on hard disks for years. A hard disk breaks down into cylinders, heads and sectors. The addressing exploits this known arrangement with things such as tagged command queuing and native command queuing.

With flash technology - unless somebody changed the way it works and failed to tell me - you can't just write to a ‘sector’ (or block). A chunk of the flash must be erased before it can be reprogrammed - this is often larger than the block being written. If the chunk already has data in it, the stuff that isn't being overwritten will have to be read out before the erase, then written back in.
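The read-erase-reprogram cycle the poster describes can be made concrete with a naive in-place update. The block and page sizes below are assumed for illustration; real NAND geometries vary.

```python
# Sketch of the erase-before-write penalty (assumed 4 KiB pages,
# 64-page / 256 KiB erase blocks). Overwriting one page in place forces
# the valid neighbours to be read out, the whole block erased, and
# everything reprogrammed.

PAGE_SIZE = 4 * 1024
PAGES_PER_BLOCK = 64               # 256 KiB erase block

def rewrite_page(block, page_index, new_data):
    """Naive in-place update: read, modify, erase, rewrite the block."""
    assert len(new_data) == PAGE_SIZE
    saved = list(block)            # 1. read out the data being kept
    saved[page_index] = new_data   # 2. splice in the new page
    block[:] = [b"\xff" * PAGE_SIZE] * PAGES_PER_BLOCK   # 3. erase
    block[:] = saved               # 4. reprogram all 64 pages
    return PAGES_PER_BLOCK         # pages physically written for 1 logical page

block = [bytes([i]) * PAGE_SIZE for i in range(PAGES_PER_BLOCK)]
written = rewrite_page(block, 5, b"\x00" * PAGE_SIZE)
print(f"write amplification: {written}x")   # 64 pages written for one update
```

One 4 KiB logical write cost 64 pages of physical programming plus an erase, which is exactly why small random writes hurt both performance and endurance.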

So you can probably see why writing little bits of data to random blocks on an SSD isn't great. SSDs can have incredible sequential throughput, so data transactions (writes in particular) should be twiddled with to make the most of that. The same is true for hard disks, of course, but they have mechanical limitations, meaning SSDs can create new ways of dealing with 'the problem'.
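One hypothetical form of the "twiddling" the poster suggests is to buffer small scattered writes and flush them as one sequential burst, the way a log-structured layer would. The class name and threshold below are illustrative assumptions, not any vendor's actual scheme.

```python
# Illustrative write coalescer: absorb small random writes in RAM and
# emit them as one sorted, sequential batch.

class WriteCoalescer:
    def __init__(self, flush_threshold=8):
        self.pending = {}                  # logical block -> data
        self.flush_threshold = flush_threshold
        self.sequential_flushes = 0

    def write(self, logical_block, data):
        self.pending[logical_block] = data # repeated writes merge in RAM
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One sequential append of all buffered blocks, sorted for locality.
        batch = sorted(self.pending.items())
        self.pending.clear()
        self.sequential_flushes += 1
        return batch

coalescer = WriteCoalescer(flush_threshold=4)
for lb in (90, 3, 42, 7):                  # four scattered writes...
    coalescer.write(lb, b"x")
print(coalescer.sequential_flushes)        # ...become 1 sequential flush
```

Four random writes reach the medium as a single sequential burst, trading a little RAM and flush latency for the drive's strong sequential throughput.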

I love SSD, and folks like Intel and SanDisk are playing a big part in bringing it to the mainstream.

In a couple of years I expect we'll see crazy storage subsystems that seamlessly combine HDDs and SSDs to achieve optimal power consumption and gigantic capacities.
Virtual RPM as a speed measurement sounds absolutely rubbish - are we also going to get virtual platter size/density?

A better longevity measure would be very welcome though - how about a simple ‘Best before’?
Steve
In a couple of years I expect we'll see crazy storage subsystems that seamlessly combine HDDs and SSDs to achieve optimal power consumption and gigantic capacities.

In a couple of years I would love to see SSD completely replace HDD in the same way as LCD has replaced CRT.

It may take a bit longer but I think it will happen sooner than a lot of people expect. The new technology has so many plus points over the old that replacement will happen at a phenomenal rate.
Optimisation for Windows 7, if it brings real benefits, will be one thing encouraging me to make the switch from Vista.
Also, if the essential Vista architecture remains in Windows 7 (despite being refined somewhat), is this likely to mean most Vista programs will work on 7?
miniyazz
Optimisation for Windows 7, if it brings real benefits, will be one thing encouraging me to make the switch from Vista.
Also, if the essential Vista architecture remains in Windows 7 (despite being refined somewhat), is this likely to mean most Vista programs will work on 7?

Yes, the driver architecture and API are much the same. From initial tests all my code works without (any) changes. TBH problematic programs from the XP era remain problematic by design, rather than due to OS changes (i.e. bad code = problems regardless).