Author: sotech

  • Asus motherboard BIOS update error: CAP file not recognised EFI bios!

     

    If you see the above error – and you’re selecting what you’re sure is the proper CAP file – chances are you’re trying to use a USB disk formatted to NTFS rather than FAT32. Frustratingly, the error message for a corrupt/non-CAP file is the same as the one you get when using an NTFS-formatted drive, which Asus EZ Flash 2 is not compatible with.

     

    The solution is simply to use a FAT32-formatted drive. If that doesn’t work, double-check that the file didn’t become corrupted during the download, that it’s complete, and that it’s the right BIOS for your board.
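
    If you need to reformat the drive under Linux, a minimal sketch (here /dev/sdX1 is a placeholder – check the real device with lsblk first, as this erases everything on the stick):

    # format the USB stick's first partition as FAT32 (dosfstools)
    sudo mkfs.vfat -F 32 /dev/sdX1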

  • How to change a user’s password in Mediawiki

    If you have a wiki you may need to change a user’s password from time to time; you can do this from the back end quite easily. First, access mysql:

     

    mysql -u root -p

     

    Log in using your root password. Next, list your databases:

     

    show databases;

     

    On our test system this shows all of our databases like so:

     

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | press              |
    | test               |
    | wiki               |
    +--------------------+
    6 rows in set (0.10 sec)

    Select your wiki’s database:

     

    USE wiki;

     

    Replace “wiki” in the above with your own database’s name. Next, set the user’s new password:

     

    UPDATE user SET user_password = MD5(CONCAT(user_id, '-', MD5('newpasswordgoeshere'))) WHERE user_name = 'usernameofuser';

     

    If this is successful you should get the following:

     

    Query OK, 1 row affected (0.01 sec)
    Rows matched: 1  Changed: 1  Warnings: 0

     

    If something has gone wrong (e.g. a non-existent username) you will get the following instead:

     

    Query OK, 0 rows affected (0.03 sec)
    Rows matched: 0  Changed: 0  Warnings: 0

     

    All done! To leave mysql just type “exit”.
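
    Alternatively, MediaWiki ships a maintenance script that does this without touching the database directly – a sketch, assuming you run it from your wiki’s installation directory:

    # reset a user's password via MediaWiki's bundled maintenance script
    php maintenance/changePassword.php --user=usernameofuser --password=newpasswordgoeshere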

     

  • esxcli: Update/patch produces “Could not download from depot” error

    In ESXi 5.1 you can patch using the following command:

     

    esxcli software vib install -d /path/to/patch.zip

     

    If you’re getting the following result (using ESXi510-201210001.zip as an example):

     

     [MetadataDownloadError]
    Could not download from depot at zip:/var/log/vmware/ESXi510-201210001.zip?index.xml, skipping (('zip:/var/log/vmware/ESXi510-201210001.zip?index.xml', '', "Error extracting index.xml from /var/log/vmware/ESXi510-201210001.zip: [Errno 2] No such file or directory: '/var/log/vmware/ESXi510-201210001.zip'"))
    url = zip:/var/log/vmware/ESXi510-201210001.zip?index.xml
    Please refer to the log file for more details.

     

    This results from not putting in the absolute path to the .zip – e.g. using:

     

    esxcli software vib install -d ESXi510-201210001.zip

     

    rather than:

     

    esxcli software vib install -d /vmfs/volumes/datastore/ESXi510-201210001.zip

     

    Putting in the full path to the .zip file should resolve that error.
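
    If you’re not sure what to use for “datastore” (a placeholder for your datastore’s name), you can list the mounted volumes from the ESXi shell and build the path from there:

    # datastores appear here as named symlinks alongside their UUIDs
    ls /vmfs/volumes/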

  • Ubuntu: How to create a file or folder using today’s date

    This is a useful little trick to use in your scripts – particularly for things like periodic backups.

     

    For a file:

     

    touch $(date +%F)

     

    creates the file 2012-11-18

     

    For a folder, let’s add the time after the date:

     

    mkdir $(date +%F-%H:%M)

     

    creates the folder 2012-11-18-09:00

     

    We can use this in a command, like so:

     

    tar -cf $(date +%F).tar /path/to/files/

     

    This creates the tarball (archive) file 2012-11-18.tar from the files in /path/to/files/.

     

    To see the other options, type man date or consult the Ubuntu manual page for date online.
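
    Putting it all together, a minimal daily-backup sketch you could drop into cron (the paths are hypothetical – adjust to suit):

    #!/bin/bash
    # archive /path/to/files/ into a dated, gzip-compressed tarball under /backups/
    tar -czf /backups/$(date +%F).tar.gz /path/to/files/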

  • How To: Export all mysql databases for backup

    This is a handy command for anyone using multiple mysql databases – it produces a single file which you can easily back up to elsewhere.

     

    mysqldump -u root -p --all-databases > databasesBackup.sql

     

    Note the two hyphens before “all”. This command creates the file databasesBackup.sql which contains the contents of all of your databases. This file can be easily rsync’d or scp’d elsewhere to create an offsite backup of your site’s databases.
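
    To restore from that file later, feed it back into mysql – note that this recreates every database in the dump, so use it with care:

    mysql -u root -p < databasesBackup.sql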

  • Error Code 60 on an Asus Z9PE-D16

     

    Today a customer’s Asus Z9PE-D16 wouldn’t boot and displayed debug code 60. In this case the problem was that the RAM sticks (8 in total, 4 per CPU) were in the black slots rather than the blue slots; swapping them across resulted in an immediate boot.

     

    Hope that helps someone!

  • Intel E3-1245 V2 vs. Intel E3-1275 V2

    We were asked to spec a build the other day for a customer who was torn between the above two processors. Here are our thoughts on them.

     

    E3-1245V2: Quad core, 8-thread, 3.4GHz -> 3.8GHz Turbo

    E3-1275V2: Quad core, 8-thread, 3.5GHz -> 3.9GHz Turbo

     

    All other specifications are equal apart from the 100MHz clockspeed difference. At this level 100MHz is a mere ~3% increase (100MHz on a 3.4GHz base) – not something that will be visible in most real-world applications. However, if there’s not much price difference it may be worth the extra bit of cash if you really do need every ounce of performance you can get (or just want the bragging rights). So what do they both cost (AU, our prices)?

     

    E3-1245V2: $299

    E3-1275V2: $389

     

    That’s a $90 difference – or around a 30% premium over the E3-1245V2 ($90 on $299) for a 3% clockspeed increase.

     

    If you have to have the utmost performance from an S1155 server chip with onboard graphics, there’s no other option. For most workstation users, however, the 3% probably won’t be noticed – whereas the $90 could go towards a 128GB SSD or something similar where you’ll get a tangible speed boost. We would recommend taking a long hard look at the price difference: if you need it, it’s $90 well spent, but we’re pretty sure most people will go with the E3-1245V2 at the end of the day and spend their $90 elsewhere.

  • Folding@Home: How to check your progress

    Once you start folding you will want to check on your contribution and see how you’re faring compared to other volunteers – one of the most popular ways of checking your stats is through the Extreme Overclocking website:

     

    http://folding.extremeoverclocking.com/

     

    On the left-hand side you can see a name search field; put your username in there and you will see all of the username results which match. This can be a bit of an issue if you have a common username (e.g. Bob) – doing a case-sensitive search might help a bit there. Once you find your username you can save the URL as it’s unique; for ours, see:

     

    http://folding.extremeoverclocking.com/user_summary.php?s=&u=567476

     

    If you click on your username at the top you’ll see your ranking within your team. The rest of the information is easily accessible – you can see your daily and weekly stats in the bar up the top and below that your top 5 conquests and threats. Below that again are graphs showing your future production estimates, past production and a list of your points from the past few months, weeks and days.

     

    The listings are updated every 3 hours, so bookmark the page and check regularly!

  • What is Folding @ Home?

    Do you leave your computer on 24/7? Want to contribute to research into how diseases like Parkinson’s, Huntington’s and some types of cancer occur?

     

    Folding @ Home is a distributed computing project which uses people’s spare computer cycles to study the way proteins form in an attempt to understand how that process goes wrong, which is known to be one of the causes of a number of diseases, including the above. The program is run out of Stanford University in the U.S. and has produced a bit over 100 research papers in the past 10 years based on the results from computers around the world.

    You are awarded points based on how quickly you return completed work units to the server; faster computers will generally score more points per day. These points are listed online to encourage friendly competition and you can join teams to compete on a grander scale.

     

    If you’re interested, head over to:

     

    http://folding.stanford.edu/English/HomePage

     

    and download the client from the link in the middle! If you want to join in with the team we fold for, fold for team 24 🙂

     

  • ZFS: How to change the compression level

    By default ZFS uses the lzjb compression algorithm; you can select others when setting compression on a ZFS dataset. To try another one, do the following:

     

    sudo zfs set compression=gzip [zfs dataset]

     

    This changes the compression algorithm to gzip. By default this sets it to gzip-6 compression; we can actually specify what level we want with:

     

    sudo zfs set compression=gzip-[1-9] [zfs dataset]

     

    e.g.

     

    sudo zfs set compression=gzip-8 kepler/data

     

    Note that you don’t need the leading / for the pool, and that you can set this at the pool level and not just on sub-datasets. gzip-1 is the lowest level of compression (less CPU-intensive, less compressed) while gzip-9 is the opposite – often quite CPU-intensive, but offering the most compression. This isn’t necessarily a linear scale, mind, and the type of data you are compressing will have a huge impact on what sort of returns you’ll see.

    Try various levels out on your data, checking the CPU usage as you go and the compression efficiency afterwards – you may find that gzip-9 is too CPU-intensive, or that you don’t get a great deal of benefit after a certain point. Note that when you change the compression level it only affects new data written to the ZFS dataset; an easy way of testing this is to make several datasets, set a different level of compression on each and copy some typical data to them one by one while observing. We discussed checking your compression efficiency in a previous post.
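
    For a quick check, ZFS exposes both the current setting and the achieved ratio as properties – using our example dataset from above:

    # show the configured algorithm and the actual compression ratio achieved
    zfs get compression,compressratio kepler/data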

     

    Compression doesn’t just benefit us in terms of space saved, however – it can also greatly improve disk performance at a cost of CPU usage. Try some benchmarks on compression-enabled datasets and see if you notice any improvement – it can be anywhere from slight to significant, depending on your setup.