Author: sotech

  • Ubuntu: Securing your remote SSH logins with Denyhosts

    Being able to log in to your server via SSH is an incredibly powerful way of managing your system remotely. With so many devices now able to run an SSH client (just about any current smartphone or desktop OS, really) you can check on things, update software or make changes from just about anywhere.

     

    One of the less positive consequences of opening up your SSH port to the wider world is that you're also exposing your server to everyone else in the world, not just yourself. There are many computers and virus-borne botnets out there that scan IP addresses for open ports and try to brute-force their way in to steal data, cause destruction or recruit another bot. One good way of protecting yourself is installing a program which monitors attempted SSH logins and blocks any IP address which matches an undesired pattern: Denyhosts.

     

    You can install denyhosts by entering the following:

     

    sudo apt-get install denyhosts

     

    This installs denyhosts on your system; it starts automatically once installed and again at each boot. You can edit its settings in the following file:

     

    /etc/denyhosts.conf

     

    Blocked IPs are listed in:

     

    /etc/hosts.deny
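
    Each blocked address is recorded as a hosts.deny rule for the SSH daemon. The entries look something like the following (the addresses here are just examples):

    sshd: 203.0.113.45
    sshd: 198.51.100.7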

     

    It's not unusual to have hundreds of entries after a couple of months. The default settings are reasonably good, but you have the freedom to make them as lenient or as paranoid as you like, which is handy for tailoring the program to your specific needs (e.g. stricter rules for login attempts against accounts that don't exist or against the root account). Be aware that if you mistype your own password enough times you may ban your own IP address, which can be very inconvenient if you don't have physical access to the server or another IP from which to fix it!
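
    A few of the settings worth looking at are the deny thresholds and the purge interval. The option names below are standard DenyHosts settings, but the values are purely illustrative, so check the comments in denyhosts.conf for your version's defaults:

    DENY_THRESHOLD_INVALID = 3     # failed attempts against accounts that don't exist
    DENY_THRESHOLD_VALID = 10      # failed attempts against real accounts
    DENY_THRESHOLD_ROOT = 1        # failed attempts against root
    PURGE_DENY = 4w                # unblock addresses after four weeks; leave empty to never purge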

     

    Denyhosts is a quick, easy and powerful way to begin securing your SSH-accessible servers; as far as we're concerned, it or an equivalent program is a must if you're opening up an SSH port to the outside world.

  • ZFS: Replacing a drive with a larger drive within a vdev

    One way to expand the capacity of a zpool is to replace each disk with a larger disk; once the last disk is replaced, the pool can be expanded (or will auto-expand, depending on your pool settings). To replace a disk, we run the following:

     

    zpool replace [poolname] [old drive] [new drive]

     

    e.g.:

     

    zpool replace kepler ata-WDC_WD15EARX-00PASB0_WD-WCAZAA512624 ata-WDC_WD20EARX-00PASB0_WD-WCAZAA637471

    If we then check on the pool's status via zpool status, we see:

     

    NAME                                            STATE     READ WRITE CKSUM
    kepler                                          ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        ata-WDC_WD20EARX-00PASB0_WD-WMAZA7352703    ONLINE       0     0     0
        replacing-1                                 ONLINE       0     0     0
          ata-WDC_WD15EARX-00PASB0_WD-WCAZAA512624  ONLINE       0     0     0
          ata-WDC_WD20EARX-00PASB0_WD-WCAZAA637471  ONLINE       0     0     0  (resilvering)

     

    The pool will resilver and copy the drive contents across. If you have large drives which are reasonably full, this resilver process can take quite a few hours. You can also replace multiple drives at once; here's a zpool with two drives being replaced at the same time:

     

    NAME                                            STATE     READ WRITE CKSUM
    kepler                                          ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        ata-WDC_WD20EARX-00PASB0_WD-WMAZA7352703    ONLINE       0     0     0
        replacing-1                                 ONLINE       0     0     0
          ata-WDC_WD15EARX-00PASB0_WD-WCAZAA512624  ONLINE       0     0     0
          ata-WDC_WD20EARX-00PASB0_WD-WCAZAA637471  ONLINE       0     0     0  (resilvering)
        replacing-2                                 ONLINE       0     0     0
          ata-ST2000DM001-9YN164_W24009TB           ONLINE       0     0     0
          ata-WDC_WD20EARS-00MVWB0_WD-WCAZAC389099  ONLINE       0     0     0  (resilvering)
        ata-WDC_WD20EFRX-68AX9N0_WD-WMC300005367    ONLINE       0     0     0
        ata-WDC_WD20EARX-00MMMB0_WD-WCAWZ0842974    ONLINE       0     0     0
        ata-WDC_WD20EARX-00PASB0_WD-WMAZA7482198    ONLINE       0     0     0

    Please don’t disconnect the old drive before inserting the new one – this can cause issues with some setups where ZFS complains that it cannot find the old drive to replace.

     

    Once your resilver is complete on the final drive you can expand the vdev by running:

     

    zpool online -e [poolname] [drive]
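
    For example, using the pool and one of the new drives from above (you would typically run this against each of the replaced drives):

    zpool online -e kepler ata-WDC_WD20EARX-00PASB0_WD-WCAZAA637471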

     

    or you can turn on automatic expansion with the following setting:

     

    zpool set autoexpand=on [poolname]

     

    If you are using zpool online -e the pool does not have to be offline for it to work. Now sit back and enjoy your increased space!

  • ZFS: Adding an SSD as a cache drive

    ZFS uses any free RAM to cache recently and frequently accessed data, speeding up access times; this cache is called the ARC. RAM can be read at gigabytes per second, so it is an extremely fast cache. It is also possible to add a secondary cache, the L2ARC (level 2 ARC), in the form of solid state drives. An SSD may only be able to sustain around half a gigabyte per second, but that is still vastly more than any spinning disk can achieve, and its IOPS (input/output operations per second) are typically much, much higher than a standard hard drive's.

     

    If you find that you want more high-speed caching and adding more RAM isn't feasible from an equipment or cost perspective, an L2ARC drive may well be a good solution. To add one, insert the SSD into the system and run the following:

     

    zpool add [pool] cache [drive]

    e.g.:

     

    zpool add kepler cache ata-M4-CT064M4SSD2_000000001148032355BE

     

    zpool status now shows:

     

    NAME                                            STATE     READ WRITE CKSUM
    kepler                                          ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        ata-WDC_WD20EARX-00PASB0_WD-WMAZA7352713    ONLINE       0     0     0
        ata-WDC_WD20EARX-00PASB0_WD-WCAZAA637401    ONLINE       0     0     0
        ata-WDC_WD20EARS-00MVWB0_WD-WCAZAC389999    ONLINE       0     0     0
        ata-WDC_WD20EFRX-68AX9N0_WD-WMC300005397    ONLINE       0     0     0
        ata-WDC_WD20EARX-00MMMB0_WD-WCAWZ0842074    ONLINE       0     0     0
        ata-WDC_WD20EARX-00PASB0_WD-WMAZA7482193    ONLINE       0     0     0
    cache
      ata-M4-CT064M4SSD2_000000001148032155BE       ONLINE       0     0     0

    You can see that the cache drive has been added to the bottom of the pool listing.
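
    If you'd like to see how much the cache device is actually being used, zpool iostat can break the figures down per device; the cache drive gets its own section showing space used and read/write activity:

    zpool iostat -v kepler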

     

    The way the ARC works is beyond the scope of this post; suffice to say for the moment that simply adding an L2ARC is not necessarily going to improve performance in every situation, so do some research before spending money on a good SSD. Check back in November for a more detailed look at how the ARC and L2ARC work!

  • How to exclude results from grep

     

    Sometimes you may wish to further filter grep output, such as in the following situation:

     

    # zfs get all | grep compressratio

    backup01         compressratio         1.23x                  -
    backup01         refcompressratio      1.00x                  -
    backup01/data    compressratio         1.50x                  -
    backup01/data    refcompressratio      1.50x                  -
    backup01/photos  compressratio         1.05x                  -
    backup01/photos  refcompressratio      1.05x                  -

    Here we only really want to see the compressratio results, not refcompressratio. We can pipe the grep output to a second grep, this time inverting the match with the -v switch.

     

    # zfs get all | grep compressratio | grep -v refcompressratio

    backup01         compressratio         1.23x                  -
    backup01/data    compressratio         1.50x                  -
    backup01/photos  compressratio         1.05x                  -

    This excludes any line containing refcompressratio, making the results easier to read. This is particularly helpful when you have a large number of results from grep.
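
    As an aside, the same result can be achieved here with a single grep by using the -w (match whole words only) switch, since compressratio never appears as a standalone word in the refcompressratio lines:

    # zfs get all | grep -w compressratio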

  • ZFS: How to check compression efficiency

     

    If you have enabled compression on a ZFS folder you can check to see just how much disk space you’re saving. Use the following command:

     

    sudo zfs get all [poolname]/[folder] | grep compressratio

     

    An example:

     

    sudo zfs get all backup01/data | grep compressratio

     

    returns the following:

     

    backup01/data  compressratio  1.50x  -

     

    Here we can see we have a compression ratio of 1.5x. Compression is an excellent way of reducing disk space used and improving performance, so long as you have a modern CPU with enough spare power to handle it. Some data will not be easily compressible and you may see less benefit – other data will be much more compressible and you may reach quite high compression ratios.
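
    As an aside, if you haven't yet enabled compression on a dataset it can be switched on with a single command (backup01/data here is just the example dataset from above); note that only data written after compression is enabled will actually be compressed:

    sudo zfs set compression=on backup01/data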

     

    If we run the same command on a folder full of already-compressed RAW image files:

     

    sudo zfs get all backup01/photos | grep compressratio

    backup01/photos compressratio 1.05x

     

    …we can see that they do not compress as easily as the documents in the data folder, giving us only a 1.05x compression ratio. You can see the compression ratio of all of your ZFS pools and folders with the following:

     

    sudo zfs get all | grep compressratio
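
    Alternatively, zfs get can be asked for just the one property, which avoids the need for grep entirely; either of the following should work:

    sudo zfs get compressratio
    sudo zfs get -r compressratio backup01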

     

    Check your own datasets and see how much you are saving!

  • Ubuntu: How to list drives by ID

    If you're creating a zpool on Ubuntu you have several options when it comes to referring to the drives; the most common is /dev/sda, /dev/sdb and so on. This can cause problems with zpools, as the letter assigned to a drive can change if you move the drives around or add or remove a drive, which can cause your zpool to come up as faulty. One way to avoid this is to use a more specific name for each disk, one of which is the drive's ID. You can find these on Ubuntu with the following command:

     

    sudo ls -l /dev/disk/by-id/

     

    This gives an output similar to the following:

     

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W1F01W57 -> ../../sdd

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W2400946 -> ../../sdc

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W24009TB -> ../../sde

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_W2402PBQ -> ../../sdb

    lrwxrwxrwx 1 root root  9 Oct 19 08:48 ata-ST2000DM001-9YN164_Z24064V7 -> ../../sda

     

    You can see that the command helpfully shows which ID is associated with which /dev/sd? device.
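
    As a purely hypothetical example (the pool name tank and the raidz2 layout are just placeholders), you could then create a pool using these IDs instead of the /dev/sd? names:

    sudo zpool create tank raidz2 \
      /dev/disk/by-id/ata-ST2000DM001-9YN164_W1F01W57 \
      /dev/disk/by-id/ata-ST2000DM001-9YN164_W2400946 \
      /dev/disk/by-id/ata-ST2000DM001-9YN164_W24009TB \
      /dev/disk/by-id/ata-ST2000DM001-9YN164_W2402PBQ \
      /dev/disk/by-id/ata-ST2000DM001-9YN164_Z24064V7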

  • ZFS basics: Installing ZFS on Ubuntu

    For those who don't want to use Solaris or FreeBSD as their ZFS platform, Ubuntu now seems a valid option; installation is relatively straightforward. Please note, though, that you should be running a 64-bit system, as ZFS on Ubuntu is not stable on 32-bit systems. Open up a terminal and enter the following:

     

    sudo add-apt-repository ppa:zfs-native/stable
    sudo apt-get update
    sudo apt-get install ubuntu-zfs

    …and that's it! From here you can begin to use the filesystem by creating a pool or importing an existing pool. Many people find Ubuntu an easier system to manage than Solaris or FreeBSD, so it's good news that this now appears to be a stable implementation, despite technically being a release candidate.
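
    If you want to quickly confirm that the tools and kernel module are working before creating your first pool, asking for the pool status is a simple check; with no pools created yet it should simply report that no pools are available rather than returning an error:

    sudo zpool status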

  • Get yourself a UPS…

    We had a lightning storm recently; this was the view from our perspective:

     

     

    We also got a call about a customer’s computer that was no longer working, along with most of the connected peripherals. This wasn’t a coincidence…

     

    An uninterruptible power supply is a device which stores electricity in batteries and, when the power goes out, kicks the electricity supply across from mains to the batteries without missing a beat. Some better UPS models will also condition the power that comes through, compensating for brownouts or oversupply. Depending on the capacity of the UPS and the load of the devices attached to it you may end up with anything from minutes to hours of power supply; a small UPS will give you enough time to turn the devices off normally and some come with software to automate this process.

     

    A larger UPS may be able to give you enough run-time to outlast the blackout entirely, which could be very valuable if you run very long processes that can't be interrupted without data loss. Uninterruptible power supplies generally also provide surge protection, and the higher-end models isolate the power supplied to your systems from the mains as much as possible, meaning you have a much better chance of reducing, or avoiding entirely, the damage caused by power surges.

     

    A basic UPS costs around $100, and for the protection they provide they’re a worthy investment. Protect your expensive electronic hardware today – drop us a line and we can set you up with a model that suits your needs.

  • Reminder: Check your backups!

    Just a quick reminder today: remember to check that your backups still work, particularly those that are full and likely haven't been added to for some time. It wouldn't be a lot of fun having your main drive die and only then finding out that your backup drive kicked the bucket six months earlier without anybody noticing.

  • Checking SSD health with ESXi 5.1

    A new feature with ESXi 5.1 is the ability to check SSD health from the command line. Once you have SSH’d into the ESXi box, you can check the drive health with the following command:

     

    esxcli storage core device smart get -d [drive]

     

    …where [drive] takes the format t10.ATA?????????. You can find the right drive name by running the following:

     

    ls -l /dev/disks/

     

    This will return output something like the following:

     

    mpx.vmhba32:C0:T0:L0
    mpx.vmhba32:C0:T0:L0:1
    mpx.vmhba32:C0:T0:L0:5
    mpx.vmhba32:C0:T0:L0:6
    mpx.vmhba32:C0:T0:L0:7
    mpx.vmhba32:C0:T0:L0:8
    t10.ATA_____M42DCT064M4SSD2__________________________000000001147032121AB
    t10.ATA_____M42DCT064M4SSD2__________________________000000001147032121AB:1
    t10.ATA_____M42DCT064M4SSD2__________________________0000000011470321ADA4
    t10.ATA_____M42DCT064M4SSD2__________________________0000000011470321ADA4:1

     

    Here I can use the t10.xxx names without the :1 at the end to see the two SSDs available, copying and pasting the entire line as the [drive]. The command output should look like:

     

    ~ # esxcli storage core device smart get -d t10.ATA_____M42DCT064M4SSD2__________________________000000001147032121AB
    Parameter                     Value  Threshold  Worst
    ----------------------------  -----  ---------  -----
    Health Status                 OK     N/A        N/A
    Media Wearout Indicator       N/A    N/A        N/A
    Write Error Count             N/A    N/A        N/A
    Read Error Count              100    50         100
    Power-on Hours                100    1          100
    Power Cycle Count             100    1          100
    Reallocated Sector Count      100    10         100
    Raw Read Error Rate           100    50         100
    Drive Temperature             100    0          100
    Driver Rated Max Temperature  N/A    N/A        N/A
    Write Sectors TOT Count       100    1          100
    Read Sectors TOT Count        N/A    N/A        N/A
    Initial Bad Block Count       100    50         100

    One figure to keep an eye on is the Reallocated Sector Count: this should start at around 100 and diminishes as the SSD replaces bad sectors with spares from its reserve pool. The above statistics are updated every 30 minutes. As a point of interest, in this case ESXi isn't picking up the data correctly; the SSD doesn't actually have exactly 100 power-on hours and 100 power cycles.

    Assuming it works for your SSDs, this is quite a useful tool – knowing when a drive is likely to fail can give you the opportunity for early replacement and less downtime due to unexpected failures.
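
    If you have more than one SSD to check, a small shell loop can pull the SMART data for each device in one go. This is just a sketch, built on the same /dev/disks listing as above and keeping only the t10 names without a partition suffix:

    for d in $(ls /dev/disks/ | grep '^t10' | grep -v ':'); do
        echo "=== $d ==="
        esxcli storage core device smart get -d "$d"
    done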