Trick Out Your HTOP With Useful Features

Here’s the htop configuration I’ve come up with over several years.

Simply create a “.htoprc” file in your home folder with the contents below.

# Beware! This file is rewritten every time htop exits.
# The parser is also very primitive, and not human-friendly.
# (I know, it's in the todo list).
fields=0 48 17 18 38 39 40 2 46 47 49 1
left_meters=Hostname Tasks LoadAverage Uptime Memory Memory Swap CPU CPU
left_meter_modes=2 2 2 2 1 2 1 1 2
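If you want meters in the right-hand column too, the same two-line pattern applies; from memory the mode numbers map to 1 = bar, 2 = text, 3 = graph, 4 = LED. The meters below are just an illustrative pick, not part of my actual setup:

```
right_meters=CPU Memory Swap Clock
right_meter_modes=3 1 1 2
```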

Rebuilding an RPM-based OS While it’s Running

Cool title, right? Recently at Beyond Hosting we had a server get hard powered off in the middle of a RAID array rebuild, and for whatever reason it corrupted a ton of data. Surprising, right? Thank a singlehop DC ‘tech?’..

Okay, well, here’s how you do it.
First, create a list of all the files that are SCREWED, then reinstall the packages that own them with yum. Hopefully your yum/rpm still work..

# $NF grabs the file path even when a 'c' (config) marker column is present
rpm -V -a | grep -v local | awk '{print $NF}' | \
xargs rpm -q --whatprovides | sort -u | grep -v "no package" | \
xargs yum -y reinstall
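One gotcha with parsing rpm -V output: an optional attribute marker (c for config files, d for docs, and so on) sits between the verify flags and the path, so grabbing a fixed column with awk can return the marker instead of a filename. Awk's $NF (last field) always lands on the path. A quick demonstration on some made-up verify output:

```shell
# Made-up sample of `rpm -V -a` output; the lone 'c' marks a config file
sample='S.5....T.  c /etc/my.cnf
..5....T.    /usr/bin/htop
missing     /usr/lib64/libexample.so'

# $NF (the last field) is the file path whether or not a marker column is present
printf '%s\n' "$sample" | awk '{print $NF}'
```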

At this point, reboot, and hopefully everything that isn’t a configuration or user-generated file is fixed.

Why you pay money for ECC RAM

Tonight presents a valuable lesson. I had a box on heavy MySQL duty that would crash at odd times. I could get MySQL to start, but the processes would die, it wouldn’t terminate cleanly, and even a freshly started copy was giving me “out of memory” errors. After fighting this for some time (read: hours) and assuming the problem was me, the user, I checked the hardware in a bout of frustration.

Being a Xeon box, my first stop after rebooting it was the BIOS error log, which held a lone ECC error. Before, I couldn’t even run a show databases; now it goes through a full check and stays up. I bring this up because it presents two invaluable lessons:

A) It’s usually the software or the sysadmin that screws a server up, not the hardware. That said, hardware is still worth considering. In two years and several hundred servers, this is only the second time I’ve seen a machine with ECC RAM screw up like this, and I’ve seen maybe 20 ECC-equipped machines that actually had bad DIMMs, probably closer to half that. For what it’s worth, MySQL tends to show it first.

B) ECC RAM is worth the extra outlay in the datacenter. This could easily have gone undetected for a long period of time, and cost a client (and the next client that would have been put on the server).

Turn off Windows Server 2008’s “Enhanced Security Configuration”

I have just completed a Windows 2008 Server Standard install and am configuring various areas of the server. One setting I always turn off is IE ESC, or Internet Explorer Enhanced Security Configuration. This is an easy step, so I thought I would post it to the blog for future reference.

To do this with Windows 2008 Server:

  • Open Server Manager
  • Locate the Security Information section

  • Click the option Configure IE ESC

You will be shown the IE ESC configuration window.

I have chosen to turn OFF IE ESC just for administrators on this server install. To complete the change, simply click OK.

I just saved you from wanting to format the server with a nice clean install of Fedora….

How to identify what processes are generating IO Wait load

An easy way to identify which process is generating your I/O wait load is to enable block I/O debugging. This is done by setting /proc/sys/vm/block_dump to a non-zero value:

echo 1 > /proc/sys/vm/block_dump

This will cause messages like the following to start appearing in dmesg:

bash(6856): dirtied inode 19446664 ( on md1

Using the following one-liner will produce a summary output of the dmesg entries:

dmesg | egrep "READ|WRITE|dirtied" | egrep -o '([a-zA-Z]*)' | sort | uniq -c | sort -rn | head
    354 md
    324 export
    288 kjournald
     53 irqbalance
     45 pdflush
     14 portmap
     14 bash
     10 egrep
     10 crond
      8 ncftpput

Once you are finished, you should disable block I/O debugging by setting /proc/sys/vm/block_dump back to zero:

echo 0 > /proc/sys/vm/block_dump
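The egrep one-liner above tallies every alphabetic token it sees, so stray words can creep into the counts. A slightly stricter variant keys on the process(pid) prefix at the start of each block_dump line; the dmesg lines here are fabricated for illustration:

```shell
# Fabricated block_dump-style dmesg lines for demonstration
sample='kjournald(529): WRITE block 1261248 on md1
bash(6856): dirtied inode 19446664 (foo) on md1
kjournald(529): WRITE block 1261256 on md1
pdflush(210): WRITE block 1262066 on md1'

# Strip everything from the first "(" onward, leaving just the process name,
# then count occurrences per process
printf '%s\n' "$sample" | sed 's/(.*//' | sort | uniq -c | sort -rn
```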

Another cool method with a perl script:

MySQL Auto Repair and Optimization

It’s important to keep your MySQL tables repaired and optimized; simply add the command below to your crontab.

crontab -e

@daily mysqlcheck --all-databases -B -e --auto-repair --optimize

You will need to provide your MySQL root credentials in ~/.my.cnf.
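For reference, a minimal credentials file looks something like this; MySQL client tools such as mysqlcheck read the [client] group. The password is obviously a placeholder, and the file should be chmod 600:

```ini
[client]
user=root
password=YOUR_ROOT_PASSWORD
```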


Alex actually showed me this a while ago, but it’s a good bit of information.