Fusion-io ioDrive GA

April 13, 2008

Fusion-io says the drive has been generally available since April 7, 2008.

I look forward to initial use scenarios and field reports. As a hardware junkie, I’d like to get the smallest model, but I don’t know what I’d do with it (and no, I do not think using an ioDrive for the Firefox or MSIE cache could justify the investment).

Related to press coverage of the product – storage investors, remember where you first heard the prediction: “A new flash storage card from Fusion io could make huge storage area networks go the way of the dinosaur and DoDo bird.” (Source: http://www.tgdaily.com/content/view/34065/118/). It’s that simple.

P.S. Off-topic: I wonder how many companies actually postpone announcements to skip April 1st? I ask because the other day I saw one storage vendor announce restated Q3/Q4 2007 results on April 2nd.


Is iSCSI NAS SAN?

May 9, 2007

If you think the title makes no sense, read this post and the other posts related to the original one.

Although this will automatically put me into the mind-losing group, from the sales and non-technical marketing perspective – note the emphasis – I tend to agree with his “politically incorrect” view of iSCSI. I doubt that any of these critics have ever spent hours – I have, and it wasn’t fun – pitching their solution to the customer only to be asked “Is this NAS thing of yours that SAN thing?”

From the NetApp perspective, I guess that iSCSI indeed is Network-Attached Storage Area Network-kinda Storage. Seriously, in simple terms, it’s just another way to allocate NetApp storage to clients/hosts. Because NetApp filers are versatile (NAS, iSCSI, SAN), they don’t really need to care or argue about this with their prospects and/or customers. (Personally I think there’s a better way to provide a single solution for integrated block and file storage – more about this some other time).

Says Marc Farley:

It does not matter if the network is Fibre Channel or Ethernet (or carrier pigeons), the network is simply a way to transmit information for a storage application.

Well, this is exactly why I do not equate iSCSI with SAN. It does matter. I don’t have anything against accessing my database over FC (as long as I can afford it). And yes, in some cases I would definitely consider recommending Ethernet instead, but I would never recommend or even consider using PTP (Pigeon Transport Protocol). (Not yet, anyway. Maybe one day we’ll have storage-enabled pigeons that will be able to use spooky action at a distance to overcome the limitations of PTP v1. Even with PTP v2 we’d still need a good MPIO mechanism for pigeons to prevent a Single Point Of Flying, or SPOF – it’s gonna take time.)

When people hear “SAN”, many of them will – maybe mistakenly, as this SAN could be Ethernet-based – visualize thin orange cables, low (lower than GbE) latencies, dedicated FC storage switches and the rest of the FC-SAN h/w and s/w. I am not saying that Gigabit Ethernet and iSCSI won’t or can’t do the job, but it’s just not the same. What I am saying, though, is that this isn’t any more PC and/or any less confusing than Dave’s opinion.

iSCSI vendors would like you to think iSCSI is SAN and Ethernet is as good as FC, versatile storage vendors say “whatever you want, pal”, and so on and so forth. It’s (not) that simple! The Devil said (not actually, but in a movie): “Consider the source, son!”

If we wanted to be technically correct – or very PC from the technical point of view – we couldn’t easily communicate with decision makers and other stakeholders (assuming that everything offered satisfies the requirements from the customer’s RFP, why would the application owner have to care what’s happening behind that mount point?), which is why occasional technical heresy can be a skill.

P.S. By the way, whatever happened to IP-SAN? It’s been a while since I heard iSCSI vendors using this term. Did it have to go because it had a low-end ring to it (as in “Netgear launches dirt-cheap IP SAN” – ouch!)? That’s too bad, because I kind of like the term, and it means exactly what it says.


Good Riddance, Filesystem Consistency Check

May 5, 2007

An old ZFS hand shared with us two simple ways to improve the performance of your (production) ZFS filesystem, one of which is to “disable ZIL”. And what is the ZIL?

The ZIL is the way ZFS maintains consistency until it can get the blocks written to their final place on the disk.

As we all know, disabling stuff that has to do with maintaining consistency is not a good idea.

Hopefully no reader left the page without reading the comments, because one of them contains life-saving advice:

Also, you should not turn off the zil. If your storage device has a non-volatile order-preserving cache, then you can safely turn off the flush write cache command by setting zfs_nocacheflush=1 in /etc/system.

Oh, o-kay!
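
For reference, tunables like this one go into /etc/system on Solaris. A minimal sketch of what the quoted advice amounts to (parameter name taken from the comment above; only even theoretically safe under the stated condition of a non-volatile, order-preserving cache):

    * /etc/system fragment (Solaris) - comments start with an asterisk
    * Tell ZFS to stop issuing cache-flush commands to the device.
    * Per the comment above, this is only safe when the storage has a
    * non-volatile, order-preserving write cache. Reboot to take effect.
    set zfs:zfs_nocacheflush = 1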

Obviously this kind of “tuning” is quite popular – why spend money on crappy (insert your most hated storage vendor of the day) storage when you can tune your filesystem instead?

> There’s actually a tunable to disable cache flushes:
> zfs_nocacheflush and in older code (like S10U3) it’s zil_noflush.

Yes, but we didn’t want to publicise this internal switch. (I would not call it a tunable). We (or at least I) are regretting publicising zil_disable, but using zfs_nocacheflush is worse. If the device is volatile then we can get pool corruption. An uberblock could get written before all of its tree.

It seems there are still dream jobs out there, and I was badly mistaken when I thought “get paid to play around with company data” would be false advertising.