Storage

Louis Mamakos louie at transsys.com
Wed Mar 6 15:04:53 CST 2013


On Mar 6, 2013, at 3:39 PM, Rob Seastrom <rs at seastrom.com> wrote:
> 
> How important is plug-and-chug vs. building it yourself to you?  How
> good are you with Unix?  I am super happy with a SmartOS (illumos
> under the hood, which is son-of-OpenSolaris) based NAS that I put
> together myself last fall in an HP N40L chassis.  Best of all it has
> ZFS which is a nice guard against silent data corruption.
> 
> [root at a0-b3-cc-e8-95-9a ~]# zpool list
> NAME    SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> zones  5.44T  3.34T  2.09T         -    61%  1.00x  ONLINE  -
> [root at a0-b3-cc-e8-95-9a ~]# 
> 
> -r

+1 for ZFS.  I'm using it on a FreeBSD 8 system with great success.

louie at ringworld[2] $ zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
boot  9.94G  6.35G  3.59G    63%  1.00x  ONLINE  -
z     5.38T   143G  5.24T     2%  1.00x  ONLINE  -

I have 5x1T drives.  Each drive has two partitions: 10G, and everything else.

Two of the 10G partitions are organized into a ZFS mirror volume "boot" that
I boot from and that contains most of the OS stuff.  The other three 10G
partitions are used as swap.
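
If you're curious how a layout like that gets built, here's a rough sketch
(the ada* device names are placeholders for whatever your drives show up
as, not my actual commands):

# mirror two of the 10G partitions into the "boot" pool
zpool create boot mirror /dev/ada0p2 /dev/ada1p2

# the remaining three 10G partitions become plain swap devices
swapon /dev/ada2p2 /dev/ada3p2 /dev/ada4p2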

The 5 large partitions are organized into a ZFS raidz2 volume.  Not exactly
like RAID5 (double parity makes it closer to RAID6) because the details are
a bit different, and ZFS doesn't suffer from a certain class of failures
like the RAID5 write hole.  This volume can lose 2 of the 5 drives without
losing any data.  There's lots of good stuff out there on ZFS, and I sure
like it quite a bit, especially the ability to create and delete ZFS file
systems and snapshots inside the volumes.  I routinely snapshot most of the
ZFS file systems every day, and delete the previous week's daily snapshots
just to guard against human error.
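
Day to day, the rotation amounts to something like this (the dataset name
z/home and the date-stamped snapshot names are just illustrative, not my
actual scheme):

# the raidz2 pool was built from the five large partitions
zpool create z raidz2 /dev/ada0p3 /dev/ada1p3 /dev/ada2p3 /dev/ada3p3 /dev/ada4p3

# take today's snapshot, then drop the one from a week ago
zfs snapshot z/home@2013-03-06
zfs destroy z/home@2013-02-27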

I also have ZFS running on a much older Dell server platform, with a similar
raidz2 configuration on older, smaller SCSI drives.  One of the drives was
throwing media errors on read, which I only noticed when looking at the
logs, because ZFS recovered the data from its other replicas.  Doing
a 'zpool scrub volume' has ZFS read and verify all of the allocated
blocks, rewriting and recovering any errored blocks, which forces a block
replacement.  No muss, no fuss.
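
If you want to kick one off yourself, it's just (substitute your own pool
name for "z"; the scrub runs in the background):

zpool scrub z
zpool status -v z    # shows scrub progress and any repaired errors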

louie
wa3ymh





