Some Perl for entering IPs into a database

This code is a proof of concept; if you want to use it in a production environment I suggest you go over it heavily. For a person fairly new to Perl there is a lot going on here that you may find useful. The overall idea is to convert IPs from dotted-quad decimal notation into binary and then store them in a database. Because an IP can't be duplicated across machines without causing a conflict, it is generally a good value to use as a primary key. Feel free to use and adapt this code as you see fit. The end result should be something like:

 

mysql> select * from IPs;
+----------------------------------+----------------------------------+--------------------------+
| ip_address                       | netmask                          | computer_name            |
+----------------------------------+----------------------------------+--------------------------+
| 11000000101010000000001000000101 | 11111111111111111111111100000000 | control.frontandback.net |
+----------------------------------+----------------------------------+--------------------------+
1 row in set (0.00 sec)

#!/usr/bin/perl

#IP2DB 0.1.0 (C) February 2011 Howard A Underwood II
#Free for use and modification under the Creative Commons 1.0 License. If you want to give me a shout out try aunderwoodii#at#gmail.com
#The purpose of this code is to convert an IP address and netmask pair into binary so they can be stored in the database in an easily processable manner. This is IPv4-only at the moment and just a proof of concept; I'd love to see your adaptations to real world applications. Feel free to give me your feedback at the above address.

#This requires DBI and DBD::MySQL. Use CPAN or your package manager of choice to get them.
use DBI;
use DBD::mysql;

#info to connect to the DB server. This assumes that your table is pre-created. If you need to create a database do the following:
#create database ips;
#CREATE TABLE IPs (ip_address BINARY(32) PRIMARY KEY, netmask BINARY(32), computer_name char(200));

$hostname="localhost";
$db="ips";
$port="3306";
$user="dbuser";
$password="wouldn'tyouliketoknow";

#info to put into the DB. There’s the IP here, netmask and the computer name. These variables and the ones above are going to be what you need to use to adapt the script to your needs.
$ip="192.168.2.5";
$netmask="255.255.255.0";
$compname="control.frontandback.net";

#Getting down to business. This first line takes the netmask and breaks it into 4 octets.
my @netmask = split (/\./, $netmask);
#Now that we have 4 octets, we process each one into binary. Future modifications include cleaning this code up so that it's a loop rather than 4 instances (see the sketch after the script).
$octetnm0= unpack("B*", pack("C", $netmask[0]));
$octetnm1= unpack("B*", pack("C", $netmask[1]));
$octetnm2= unpack("B*", pack("C", $netmask[2]));
$octetnm3= unpack("B*", pack("C", $netmask[3]));
#We recombine everything into one binary number after this.
$totalnm= $octetnm0.$octetnm1.$octetnm2.$octetnm3;
#Just printing the post-process number on the TTY for human verification.
print "$totalnm\n";

#Now we repeat the process for the IP itself. This will probably get condensed into one instance along with the above code eventually. Once again, not the most efficient way to do it, but rather straightforward.
my @ip = split (/\./, $ip);
$octet0= unpack("B*", pack("C", $ip[0]));
$octet1= unpack("B*", pack("C", $ip[1]));
$octet2= unpack("B*", pack("C", $ip[2]));
$octet3= unpack("B*", pack("C", $ip[3]));
$total= $octet0.$octet1.$octet2.$octet3;
print "$total\n";

#Basic DBI connection code. We are using DBI to connect to the database.
$dsn = "DBI:mysql:database=$db;host=$hostname;port=$port";
$DBIconnect = DBI->connect($dsn, $user, $password)
#If we don’t like what we see bail out because we can’t connect.
or die "Connection denied to database $db\n";
#Add the entry to the table. Please note that if you use the above table it will probably not let you run this more than once for any given IP.
eval { $DBIconnect->do("INSERT INTO IPs (ip_address,netmask,computer_name) VALUES ('$total','$totalnm','$compname');") };
print "Data not added to the database: $@\n" if $@;
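As the comments above mention, the four per-octet conversions could be collapsed into a loop. Here is a minimal sketch of that cleanup; the dotted_quad_to_binary() helper name is my own placeholder, not part of the original script:

#!/usr/bin/perl
use strict;
use warnings;

#Hypothetical helper: turn a dotted-quad string into a 32-character binary string.
sub dotted_quad_to_binary {
    my ($quad) = @_;
    return join '', map { unpack("B8", pack("C", $_)) } split /\./, $quad;
}

my $total   = dotted_quad_to_binary("192.168.2.5");
my $totalnm = dotted_quad_to_binary("255.255.255.0");
print "$total\n$totalnm\n";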

Another basic shell script

The great thing about shell scripts is that they are a great way to solve complex problems that would cost you a lot of time to do manually. To this end, I had a client that needed some videos on his server re-encoded because they didn't encode properly. For an experienced script writer this takes about 5 minutes to write, and it also means that if the client wants to run it themselves they can. The setup was nice because the input and output file names were the same; just the extension was different. This is not very polished; if it were I would:

A) Run it as the same user

B) Put it in the user's home directory

C) Make it password protected and executable via a PHP script, so the user wouldn't need any bash experience at all but could upload a list via FTP and just run it. (A hedged sketch of A and B follows the script below.)

#!/bin/bash

for video in `cat /root/list.txt` #We will run a loop where each line in list.txt is run as a variable $video.
do
mv /home/user/public_html/media/videos/flv/$video.flv /home/user/public_html/media/videos/flv/$video.flv.old #back up old files
ffmpeg -y -b 1500 -r 25 -i /home/user/public_html/media/videos/vid/$video.* -f flv -s 640x480 -deinterlace -ac 1 -ar 41400 /home/user/public_html/media/videos/flv/$video.flv #encode new file: 640x480 out, FLV format, deinterlaced.
chown user:user /home/user/public_html/media/videos/flv/$video.flv #chown to the right user. Not required if running as the right user.
done
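To illustrate improvements A and B, here is a minimal hedged sketch; wrapping the loop in su and keeping list.txt in the user's home directory are my own assumptions about how it might look, not the script the client actually got:

#!/bin/bash
#Sketch only: run the whole loop as the site user so file ownership comes out right (A),
#and read the list from that user's home directory (B), so no chown is needed afterwards.
su - user -c '
for video in $(cat ~/list.txt)
do
    mv ~/public_html/media/videos/flv/$video.flv ~/public_html/media/videos/flv/$video.flv.old
    ffmpeg -y -b 1500 -r 25 -i ~/public_html/media/videos/vid/$video.* -f flv -s 640x480 -deinterlace -ac 1 -ar 41400 ~/public_html/media/videos/flv/$video.flv
done
'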

A quickie MySQL backup script

I've seen my fair share of clients that need basic MySQL backups but have no control panel or don't want to bother with control panel based backups. This is a really simple setup that lets you do DB backups and put them in a local directory on the server. It could likely be modified to rsync to another server as well if you wanted to. There are a ton of options that could be added to this; your imagination (and shell scripting capacity) are the only limitations. Some suggestions I have would be:

-Mail on success or failure and on old file deletion

-Connect to a remote DB

-Monitor the overall size

Well enough with the abstract, on to the shell!

#!/bin/bash
date=`date +%Y%m%d`
mysqldump --all-databases > /mysqlbackups/mysql-$date.sql
find /mysqlbackups/ -atime +30 -delete

If you notice, this takes up all of four lines. The first one is the shebang, the second establishes the date stamp, the third dumps the databases and the last one purges any old backups. The only real variable you have to change here is the "+30", so that it is the number of days you want to retain the backups for, minus one.
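As a hedged example of the suggestions above (mail on failure and copying the dumps off to a second machine), the script might grow into something roughly like this; backup.example.com and admin@example.com are placeholders, not hosts from the original post:

#!/bin/bash
date=`date +%Y%m%d`
#Dump everything; mail an alert and bail out if mysqldump fails.
if ! mysqldump --all-databases > /mysqlbackups/mysql-$date.sql
then
    echo "MySQL backup failed on `hostname` for $date" | mail -s "MySQL backup FAILED" admin@example.com
    exit 1
fi
#Push the dumps to a second box (placeholder host) and purge old local copies.
rsync -a /mysqlbackups/ backup.example.com:/mysqlbackups/
find /mysqlbackups/ -atime +30 -delete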

Why you pay money for ECC RAM

Tonight presents a valuable lesson. I had a box running heavy MySQL duty that would crash at odd times. I could get MySQL to start, but the processes would die, it wouldn't terminate cleanly, and even a freshly started copy was giving me "out of memory" errors. After fighting this for some time (hours, say) and assuming that the problem was me, the user, I checked the system in a bout of frustration.

The box being a Xeon, my first look after rebooting it was the BIOS error log, which held a lone ECC error. Where before I couldn't even run show databases;, it will now go through a check and stay up. I bring this up because it presents two invaluable lessons:

A) It's usually the software or the sysadmin that screws a server up, not the hardware. That being said, it is best to keep hardware in mind. This is only the second time in two years and several hundred servers that I've seen a machine with ECC RAM screw up like this; I have seen maybe 20 ECC-equipped machines that actually had bad DIMMs, probably more like half that. With that being said, MySQL tends to show it first.

B) ECC RAM is worth the extra outlay in the datacenter. This could easily have gone undetected for a long period of time, and cost a client, and the next client that would have been put on the server.

MySQL Auto Repair and Optimization

It's important to keep your MySQL tables repaired and optimized; simply add the command below to crontab.

crontab -e

@daily mysqlcheck --all-databases -B -e --auto-repair --optimize

You will need to provide your MySQL root credentials in ~/.my.cnf:

[client]
user="root"
password="password"

Alex actually showed me this a while ago, but it's a good bit of information.

How to disable WordPress plugins in a single query.

This is just a quickie I hijacked off another web site, but it came in really handy since a client couldn't access their WordPress admin panel and we don't really support it. Just log in over SSH or PHPMyAdmin and run the query that follows:

UPDATE wp_options SET option_value = '' WHERE option_name = 'active_plugins';

*BOOM* no more WP plugins. Then, if you're troubleshooting it, go through and reactivate them one at a time till it breaks again. If not, tell 'em they are on their own.

If you are doing this via SSH you will need to go into the MySQL shell. That can usually be done by typing "mysql" or "mysql -u username -p" and entering the password for that user. If you need the credentials you can grab them from wp-config.php. After that you need to select the DB to use. You do that by going:

mysql>use dbname_here;

This will put you into the appropriate DB. Then you run the query above. This is of course assuming you're using the default prefix for naming the tables. If you are not, wp_options needs to be changed to just "options" or yourprefix_options as necessary. If you don't know what your prefix is you can do:

mysql>show tables;
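Putting it all together, a session over SSH would look roughly like this; dbname_here, username and the wp_ prefix are placeholders you would swap for the values in wp-config.php:

mysql -u username -p

mysql> use dbname_here;
mysql> show tables;
mysql> UPDATE wp_options SET option_value = '' WHERE option_name = 'active_plugins';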

Why not to use a downloaded my.cnf part II

In case you don’t recall or weren’t reading when it was put up, I did an article on why a downloaded my.cnf is a bad idea. It’s time for a bit of appending to that. I am cranking out my usual support tickets helping little old ladies with their sites about their pinochle groups (no not really) when I get a support ticket from a client who is having issues with their databases not working properly.

Naturally, being the curious sort I am and them being a paying client, I log into the server and start examining things. The first thing I do is try a database check. This doesn't go over too well; I get a bunch of errors about "corrupt .frm files." Naturally my curiosity is piqued at this point, so I go and start looking through the /var/lib/mysql/hosts.err file. What is extremely odd here is that it is super short, and there is really nothing in it. At this point I decide to log into MySQL and try rotating out the log file. It indicates that the operation completed successfully, so I am a bit baffled. Looking at the my.cnf, it appears that someone downloaded a web based my.cnf and stuck it on the server, and that the logs are going somewhere else. After examining the correct logs, I find a bunch of errors about the .frms being corrupted. I then start looking for the MYIs so that I can see if I can rebuild the .frms from them. To my dismay I find out there are no MYIs for the DBs here, just the .frms.

After updating the client on what I'd found, I start looking over the net for an answer. It turns out that if the InnoDB engine isn't started, MySQL will default to MyISAM, which is where the .frms came from. Definitely getting closer at this point, so I start looking at the engine statuses and of course InnoDB is disabled.

After this the picture is starting to come into focus a bit more: something in the my.cnf was changed and it broke everything. But what? I had noticed earlier, when I did a "ps aux | grep mysql", that the server was running MySQL as root, so I had a suspicion. Referencing another server I confirmed what I thought and found that a line was missing:

user=mysql

The net-downloaded configuration didn't set a MySQL user, root didn't have permission to access the file, and therefore it wouldn't load the InnoDB files and in turn the InnoDB engine. A single line was missing, and this production server was likely down for hours while they tried to figure out what happened. The internet is not a substitute for knowing what you are doing. Yes, it can help you in this regard, but it is not a substitute for knowledge or competency. I once had an instructor tell me there is no crib sheet for life; it rings as true now as it ever did when he said it.
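If you ever want to sanity-check a server for this particular failure, a quick hedged check (the config path assumes a standard Linux MySQL layout) is to confirm the user line is present and see which user mysqld is actually running as:

#Is a user set anywhere in the config?
grep -n "^user" /etc/my.cnf
#Which user owns the running mysqld process? The [m] trick keeps grep from matching itself.
ps aux | grep [m]ysqld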

Basic MySQL backups

Backing data up in MySQL is super important. The site content is often replaceable, however your board posts, ecommerce orders, etc. are far, far harder to replace, if it is even possible. The other nice thing is that with a lot of database driven web sites you retain most content even if you only back up the databases. You can still lose things like images, but by and large your posts will still be there even in the event of a crash. Since bad code, etc. happens, there are a few times I would recommend doing backups of the databases:

  • Making server side changes, including configuration tweaks that can influence system stability.
  • Making software changes, even adding plugins etc. to your site. I have seen odd things in the past caused by plugins, and they can interact with each other in not always predictable ways.
  • Periodic backups as part of a data protection plan. I would do more rather than less; by and large most databases are not huge for the data they contain. I have seen forums with about 100,000 posts come to about 2GB of database; not really that much if you think about it.

There are a few ways to do backups, but the most common for people who don't use the command line interface is probably going to be PHPMyAdmin. Quite frankly, I do not like using it for database backups and restorations. There are two reasons for this, both pertaining to reliability:

  • If the session timeout of PHP or Apache is too low, you will get an incomplete dump. This means that the backup you thought you had of your databases all of a sudden turns out to be truncated at a few megabytes. This is not a big deal for a lot of people with small DBs, however if you're rocking out with 50+ megabyte DBs this can be a huge issue.
  • Restoration depends on whether large uploads and long timeouts are enabled. Since most people have slower upload than download, it can be a real nuisance to get the DB back up to the server and restored. Not only that, but if you have a server with low upload size limits and short timeouts, it may be impossible to get that DB uploaded and restored before you hit one of those limits.

The way I honestly prefer to back my databases up is mysqldump; it gives you a lot of control over what gets dumped, or lets you do a full backup without worrying about doing each database individually. It also allows us to make a nice easy cron job for these backups if we want.

The way we back up a particular DB is:

mysqldump dbname > backup.sql

This will put the database named dbname into backup.sql. I believe you can separate multiple DBs with a space as well (there is a hedged example of that a little further down), however I don't normally use that functionality. Another thing you can do is back up everything with:

mysqldump --all-databases > backup.sql

and this will do everything on the server. Handy for migrations, etc. Definitely way easier than trying to do this via a web interface when you have 1,000 DBs.
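For the multiple-database case mentioned a little further up, my understanding is that you pass the --databases flag followed by the space-separated names; db1 and db2 here are placeholders:

mysqldump --databases db1 db2 > backup.sql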

Restoring these is fairly simple as well; we just run the command as follows:

mysql dbname < backup.sql

If you have a full DB dump you wouldn't specify anything for the name when restoring. Notice how the redirection arrow points in the opposite direction though. Don't screw this up, otherwise you will likely end up with a blank file and have to re-dump or re-copy the file. I usually make 2 copies on the destination server, especially when it's not easy to get a second copy. There are other ways of moving the DBs over, such as copying the raw MYI files, but that will be another blog.
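As a hedged illustration of those last two points, restoring a full dump while keeping a spare copy first might look like this; the file names are placeholders:

cp backup.sql backup.sql.spare
mysql < backup.sql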

Why you shouldn’t download a MySQL config from the net

Working on servers, people seem to think there is a cheat sheet for it at times. While there are many ways to simplify and automate a lot of configuration on a server, MySQL is neither the time nor the place to cut corners. If you bog down MySQL, the whole system will feel the wrath of your misconfiguration. Here are a few super common mistakes:

  • max_connections set way high: Why people think this is even remotely near a good idea is beyond me. It is common to go out to a server and find it has either flat out hung or is spitting out-of-memory errors on the screen and has gone brain dead. Just because you can set your connections to 1500 doesn't mean it's a good idea; in fact it's probably better for MOST people to set their connections to 150. The reason is simple: each available connection uses RAM. If traffic piles up and you only have 150 connections, you will probably get a "too many connections" error and can still log into the system, figure out what broke and fix it; if instead you hang the system, you get an out-of-memory error and no real data to work off of. cPanel sets this to 500 by default; if you are using 500 connections (and the system is still up) YOU PROBABLY HAVE A PROBLEM.
  • Huge buffer sizes: Guess what, if you have 100 megabytes of InnoDB tables, setting innodb_buffer_pool_size to 5GB isn't going to help you out. Plain and simple. If you start adding more tables you could possibly even run out of RAM (see above). There is a short sketch of sane, modest values after this list.
  • Big RAM: MySQL does not like using more than 2GB of RAM on 32-bit systems, and it is not recommended because it can cause stability problems. In turn this means there is little reason not to run a 64-bit OS any more unless you're running a low end server. You NT guys are pretty much stuck with 2GB no matter if you're running x86 or x64. That being said, for hard core DB applications you should probably be putting it on a Unix based OS anyway.
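To make the first two points concrete, here is a hedged sketch of modest starting values for a small box; the numbers are illustrative assumptions, not a recommendation for your particular server:

[mysqld]
user=mysql
#A modest connection cap: enough for a typical small site, low enough that a runaway
#application hits "too many connections" instead of exhausting RAM.
max_connections=150
#Size the InnoDB buffer pool to roughly fit your InnoDB data, not the whole box.
innodb_buffer_pool_size=256M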

Besides this there is the single biggest reason of all: you will probably have lousy performance compared to a custom my.cnf. When tuning databases, one needs to keep in mind that the my.cnf is tailored to a combination of the database content, how the database is used and the server itself. If there were a magical my.cnf that would make any server work great, don't you think Oracle/Sun/MySQL would have included it with the server software? We will go into a few things in later blogs, such as software to help with tuning MySQL, what the parameters mean and how to actually tune a database properly.