Tag: ZFS

  • ZFS: Adding an SSD as a cache drive

    ZFS uses any free RAM to cache recently accessed files, speeding up access times; this cache is called the ARC. RAM can be read at gigabytes per second, so it is an extremely fast cache. It is also possible to add a secondary cache – the L2ARC (level 2 ARC) – in the form of a solid state drive. An SSD may only sustain around half a gigabyte per second, but that is still vastly more than any spinning disk can achieve, and its IOPS (input/output operations per second) are typically much, much higher than a standard hard drive’s.
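
    If you’re curious how big the ARC currently is, the ZFS on Linux implementation used on Ubuntu exposes its statistics under /proc. A quick sketch (the path and field names are specific to ZFS on Linux; Solaris and FreeBSD expose the same data through kstat and sysctl instead):

    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

    Here size is the current ARC size in bytes and c_max is the maximum it is allowed to grow to.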

     

    If you find that you want more high-speed caching and adding more RAM isn’t feasible from an equipment or cost perspective, an L2ARC drive may well be a good solution. To add one, insert the SSD into the system and run the following:

     

    zpool add [pool] cache [drive]

    e.g.:

     

    zpool add kepler cache ata-M4-CT064M4SSD2_000000001148032355BE

     

    zpool status now shows:

     

            NAME                                          STATE     READ WRITE CKSUM
            kepler                                        ONLINE       0     0     0
              raidz2-0                                    ONLINE       0     0     0
                ata-WDC_WD20EARX-00PASB0_WD-WMAZA7352713  ONLINE       0     0     0
                ata-WDC_WD20EARX-00PASB0_WD-WCAZAA637401  ONLINE       0     0     0
                ata-WDC_WD20EARS-00MVWB0_WD-WCAZAC389999  ONLINE       0     0     0
                ata-WDC_WD20EFRX-68AX9N0_WD-WMC300005397  ONLINE       0     0     0
                ata-WDC_WD20EARX-00MMMB0_WD-WCAWZ0842074  ONLINE       0     0     0
                ata-WDC_WD20EARX-00PASB0_WD-WMAZA7482193  ONLINE       0     0     0
            cache
              ata-M4-CT064M4SSD2_000000001148032355BE    ONLINE       0     0     0

    You can see that the cache drive has been added to the bottom of the pool listing.
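
    Once the pool has been in use for a while you can get a rough idea of how much of the cache drive is actually being used; zpool iostat -v lists the cache device separately with its own space and I/O counters:

    zpool iostat -v kepler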

     

    The way the ARC works is beyond the scope of this post – suffice it to say that simply adding an L2ARC is not necessarily going to improve performance in every situation, so do some research before spending money on a good SSD. Check back for a more detailed investigation into how ARC and L2ARC work in November!

  • ZFS: How to check compression efficiency

     

    If you have enabled compression on a ZFS folder, you can check just how much disk space you’re saving. Use the following command:

     

    sudo zfs get all [poolname]/[folder] | grep compressratio

     

    An example:

     

    sudo zfs get all backup01/data | grep compressratio

     

    returns the following:

     

    backup01/data  compressratio  1.50x  -

     

    Here we can see we have a compression ratio of 1.5x. Compression is an excellent way of reducing disk space used and improving performance, so long as you have a modern CPU with enough spare power to handle it. Some data will not be easily compressible and you may see less benefit – other data will be much more compressible and you may reach quite high compression ratios.

     

    If we run the same command on a folder full of already-compressed RAW image files:

     

    sudo zfs get all backup01/photos | grep compressratio

    backup01/photos  compressratio  1.05x  -

     

    …we can see that they do not compress as easily as the documents in the data folder, giving us only a 1.05x compression ratio. You can see the compression ratio of all of your ZFS pools and folders with the following:

     

    sudo zfs get all | grep compressratio
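
    As a side note, if you only want the compression ratio rather than every property you can ask for it directly (add -r to include all child datasets):

    sudo zfs get compressratio backup01/data
    sudo zfs get -r compressratio backup01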

     

    Check your own datasets and see how much you are saving!
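
    If you haven’t turned compression on yet it’s a single property set on the dataset; note that only data written after the property is set gets compressed, and existing files are left as they are. A minimal sketch using the dataset from the example above:

    sudo zfs set compression=on backup01/data
    sudo zfs get compression backup01/data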

  • Ubuntu: How to list drives by ID

    If you’re creating a zpool on Ubuntu you have several options when it comes to referring to the drives; the most common is /dev/sda, /dev/sdb and so on. This can cause problems with zpools, as the letter assigned to a drive can change if you move drives around or add or remove a drive – causing your zpool to come up as faulty. One way to avoid this is to use a more specific name for the disk, such as its ID. You can find this on Ubuntu with the following command:

     

    sudo ls -l /dev/disk/by-id/

     

    This gives an output similar to the following:

     

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W1F01W57 -> ../../sdd

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W2400946 -> ../../sdc

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W24009TB -> ../../sde

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W2402PBQ -> ../../sdb

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_Z24064V7 -> ../../sda

     

    You can see that the command helpfully shows which ID is associated with which /dev/sd? device.
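
    When you then create a pool you can pass these by-id paths instead of the /dev/sd? names, making the pool immune to drive letters shuffling around. A sketch using two of the IDs from the listing above (tank is just a placeholder pool name, so adjust the name and layout to suit):

    sudo zpool create tank mirror /dev/disk/by-id/ata-ST2000DM001-9YN164_W1F01W57 /dev/disk/by-id/ata-ST2000DM001-9YN164_W2400946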

  • ZFS basics: Installing ZFS on Ubuntu

    For those who don’t want to use Solaris or FreeBSD as their ZFS platform, Ubuntu now seems a valid option; installation is relatively straightforward. Please note, though, that you should be running a 64-bit system – ZFS on Ubuntu is not stable on 32-bit systems. Open up a terminal and enter the following:

     

    sudo add-apt-repository ppa:zfs-native/stable
    sudo apt-get update
    sudo apt-get install ubuntu-zfs

    …and that’s it! From here you can begin to use the filesystem by creating a pool or importing an existing pool. Many people find Ubuntu an easier system to manage than Solaris or FreeBSD, so it’s good news that this now appears to be a stable implementation, despite technically still being a release candidate.
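
    As a quick sanity check that the module is loaded and the tools can talk to it, the following should run without errors; on a fresh install zpool status will simply report that no pools are available:

    sudo modprobe zfs
    sudo zpool status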

  • ZFS: Stopping a scrub

    If you accidentally started a scrub on a pool, or need to stop one for any reason, it’s fortunately quite straightforward:

     

    # zpool scrub -s [poolname]

     

     

    e.g. zpool scrub -s kepler

     

    You can check whether that was successful with zpool status – it will show a line above the pool configuration that looks like:

     

    pool: kepler
    state: ONLINE
    scan: scrub canceled on Sat Sep 29 10:30:14 2012

     

    Unnecessary scrubbing just wastes power – particularly for a large, nearly-full array where you’ll have quite a few disks running constantly for hours – so it’s good practice to cancel an accidentally-started scrub.

  • How to add a drive to a ZFS mirror

    Sometimes you may wish to expand a two-way mirror to a three-way mirror, or to turn a basic single-drive vdev into a mirror – to do this we use the zpool attach command. Simply run:

     

    # zpool attach [poolname] [original drive to be mirrored] [new drive]

    An example:

     

    # zpool attach seleucus /dev/sdj /dev/sdm

     

    …where the pool is named seleucus, the drive that’s already present in the pool is /dev/sdj and the new drive that’s being added is /dev/sdm. You can add the force switch like so:

     

    # zpool attach -f seleucus /dev/sdj /dev/sdm

     

    to force ZFS to add a device it thinks is in use; this won’t always work, depending on why the drive is showing up as in use.

     

    Please note that you cannot expand a raidz, raidz1, raidz2 etc. vdev with this command – it only works for basic vdevs or mirrors. The above drive syntax is for Ubuntu; for Solaris or OpenIndiana the drive designations look like c1t0d0 instead, so the command might look like:

     

    # zpool attach seleucus c1t1d0 c1t2d0

     

    …instead.

     

    This is a handy command if you want a three-way mirror but don’t have all three drives to start with – you can get the ball rolling with a two-way mirror and add the third drive down the track. Remember that ZFS performs reads in a mirror in round-robin fashion: while you get a single drive’s performance for writes, you get approximately the sum of all of the drives for read performance – it’s not hard for a three-way 6Gb/s SSD mirror to crack 1,500MB/s in sequential reads. It’s a fantastic way to get extreme performance for a large number of small VMs.
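
    Bear in mind that attaching a drive kicks off a resilver onto it; the mirror stays online throughout, but the new disk isn’t providing redundancy until the resilver completes. You can watch the progress in the scan line of:

    # zpool status seleucus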

     

  • ZFS Basics – zpool scrubbing

    One of the most significant features of the ZFS filesystem is scrubbing. This is where the filesystem checks itself for errors and attempts to heal any errors that it finds. It’s generally a good idea to scrub consumer-grade drives once a week, and enterprise-grade drives once a month.

     

    How long the scrub takes depends on how much data is in your pool; ZFS only scrubs blocks where data is actually present, so if your pool is mostly empty it will finish fairly quickly. The time taken also depends on drive and pool performance; an SSD pool will scrub much more quickly than a spinning-disk pool!

     

    To scrub, run the following command:

     

    # zpool scrub [poolname]

     

    Replace [poolname] with the name of your pool. You can check the status of your scrub via:

     

    # zpool status

     

    The output will look something like this:

     

    pool: seleucus
    state: ONLINE
    scan: scrub in progress since Tue Sep 18 21:14:37 2012
        1.18G scanned out of 67.4G at 403M/s, 0h2m to go
        0 repaired, 1.75% done
    config:

        NAME        STATE     READ WRITE CKSUM
        seleucus    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdk     ONLINE       0     0     0

    errors: No known data errors

     

    Scrubbing runs at a low priority, so if the drives are being accessed while the scrub is happening the impact on normal performance should be limited. It’s a good idea to automate the scrubbing process in case you forget – we will do a later post on just how to do that!
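
    As a rough sketch of what that automation might look like (assuming a weekly schedule and that zpool lives at /sbin/zpool on your system; check with which zpool), an entry like the following in root’s crontab (sudo crontab -e) would scrub the pool early every Sunday morning:

    0 3 * * 0 /sbin/zpool scrub seleucus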

  • ZFS folders on Ubuntu not mounting after reboot

    After upgrading to 12.04 people seem to be finding that their ZFS shares aren’t mounting on boot; this has an easy fix, fortunately – edit your /etc/rc.local file and add:

     

    stop smbd

    stop nmbd

    zfs mount -a

    start smbd

    start nmbd

     

    Save and reboot and your shares should be intact. This has worked on all of the systems we’ve tested it on so far.
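
    For reference, here is a sketch of what the finished /etc/rc.local might look like. The added commands need to sit above the final exit 0 line and the file must remain executable; Ubuntu’s stock rc.local starts with #!/bin/sh -e, so the stop commands are softened here in case Samba isn’t running yet when the script fires:

    #!/bin/sh -e
    stop smbd || true
    stop nmbd || true
    zfs mount -a
    start smbd
    start nmbd
    exit 0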

  • How do you tell which zpool version you are running?

    This is a question that crops up fairly regularly, as different operating systems support different zpool versions. Fortunately, it’s quite easy to find out which version you are running – simply run:

     

    # zpool upgrade

     

    If you want a more detailed readout, including the features of the pool version you have, try:

     

    # zpool upgrade -v

     

    Your output should look something like this:

     

    # zpool upgrade -v
    This system is currently running ZFS pool version 28.

    The following versions are supported:

    VER  DESCRIPTION
    ---  --------------------------------------------------------
    1   Initial ZFS version
    2   Ditto blocks (replicated metadata)
    3   Hot spares and double parity RAID-Z
    4   zpool history
    5   Compression using the gzip algorithm
    6   bootfs pool property
    7   Separate intent log devices
    8   Delegated administration
    9   refquota and refreservation properties
    10  Cache devices
    11  Improved scrub performance
    12  Snapshot properties
    13  snapused property
    14  passthrough-x aclinherit
    15  user/group space accounting
    16  stmf property support
    17  Triple-parity RAID-Z
    18  Snapshot user holds
    19  Log device removal
    20  Compression using zle (zero-length encoding)
    21  Deduplication
    22  Received properties
    23  Slim ZIL
    24  System attributes
    25  Improved scrub stats
    26  Improved snapshot deletion performance
    27  Improved snapshot creation performance
    28  Multiple vdev replacements

    For more information on a particular version, including supported releases,
    see the ZFS Administration Guide.

     

    Or, for the simpler command:
    # zpool upgrade

     

    This system is currently running ZFS pool version 28.

    All pools are formatted using this version.
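
    Note that the pool version and the filesystem (dataset) version are tracked separately in ZFS; the equivalent commands for the filesystem side are:

    # zfs upgrade
    # zfs upgrade -v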

  • Fixing a ZFS pool which didn’t automatically expand

    We had a look at a pool today which had a raidz1 vdev consisting of two 2TB drives and one 1TB drive. The 1TB drive failed and was replaced with a 2TB drive; however, after the resilver the pool didn’t expand. Exporting and importing the pool didn’t work, so we tried:

     

    # zpool online -e [poolname] [disk01] [disk02] [disk03]

     

    …and the available capacity increased as it should.
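
    To save doing this by hand next time, you can switch on the pool’s autoexpand property before replacing drives; with it set, the pool should grow on its own once every device in the vdev has been replaced with a larger one:

    # zpool set autoexpand=on [poolname]
    # zpool get autoexpand [poolname]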