Lisle Solution Center Network Refurb part II

OK, in part one I went over the objectives I wanted to achieve. In part II I’m going to go over the resources at my disposal. There are aspects of this that are a compromise, but at the same time I decided to prioritize the things that mattered most to me. This is an interim environment that will carry us over to our 10G rollout. It is designed with a certain level of fault tolerance in mind, although not as much as I would have liked.

Physically, I am starting out with 2 Dell 6248s and 2 3750G-24P switches connected to a set of 2 3750G-48P switches acting as the cores. Each switch has a single 1G LACP aggregate. I have a single uplink to the main network.

Since I have 2 stack cables, I decided to stack the 3750Gs that were previously acting as standalone switches. The reasoning is that I only have a single uplink, so switch-level redundancy there isn’t really going to do much of anything. I am running 4 links from the 3750G-48Ps, which will be used for the ESXi cluster, to the 3750G-24Ps. Thanks to the stack cables, each pair of switches acts as a single larger switch. That means that with 2 uplinks from each of the 48-port switches to the 24-port switches, I can lose links or have a stacking cable fail on either side and remain up. It also means I am getting 8 Gbps of bandwidth to the core switches, which will matter more when I roll out 10G networking through those cores. The upshot is that I am only going to use the 3750G-48Ps as the ESXi cluster switches and eliminate the Dell 6248s from the present configuration entirely; they will be re-used at some later point.

Logically, I’m really starting from scratch overall. There were some attempts at separating VLANs, but that’s mangled overall, along with the vSwitch config in ESXi. I’ve decided to split the following items into separate VLANs, which is a very conservative plan but I think will give room to grow.

Non Routable:
-Internal /20 for people to configure networks on. I will probably allocate chunks to users on an IP plan of some form.
-/24 for managed power strips. Way overkill but who cares? Easier than running out of IPs
-/24 for switch management
-/24 for IPMI devices
-/24 for VMotion
-/24 for VMware Management.

Routable:
I have 3 /24s that are on the intranet. One of my goals with this networking effort is to reclaim as much of that space as possible, mostly because I’m not sure how difficult getting more space would be. I would rather have this space used efficiently regardless, so pushing things like IPMI over to non-routable IPs is a priority for me.

The last thing I will mention is that I am documenting the physical configuration, port utilization and the addressing (VLANs and IPs). This is probably the most critical step of doing this sort of reconfiguration. It’s way easier for the next guy (who may be you) to troubleshoot a configuration when it’s known where everything is, and documented environments also tend to scale a lot better.

Next time, I’ll cover what I’m doing in the ESXi configuration and on the switch side. This is the stuff with a big learning curve!

Lisle Solution Center Network Refurb part I

Long time no post! After I became a field tech for EMC, my posting went by the wayside. Amazing how life goes, isn’t it? Well, just in time not to be a New Year’s resolution, I’m going to start posting some more stuff up at SMI. I have a ton of new material due to becoming Manager of the Lisle Solution Center. The LSC is a lab located at the Lisle EMC office that we actively demo from. It has a myriad of infrastructure, from VPLEX to VMAX to XtremIO, and software such as ViPR and VMware Site Recovery Manager. The initial architecture has fallen victim to the atrophy that happens when an environment is continuously built on and adapted. My goals when designing a network are really as follows:

Cleanliness-This is a biggie. Few people like to work on a network that’s a rat’s nest and poorly organized, and fewer still want anything to do with rebuilding one. Lack of standardization can also be a big issue when service comes into play. We’re actually going to be re-racking this environment at some point into a rack with better cable management (we need power for the new rack first), so getting the baseline in place is a big priority.

Standardization-There will always be some exceptions to standardization, but having it start to finish is a great thing. This means that when I’m done, everything from the order of the ports on the servers to the way the vSwitches are configured will have a method to it. This also covers procedural standardization, which means certain things will be done the same way, start to end, with every host add.

Documentation-Documentation is one of the critical steps of a build-out that’s often overlooked. Ultimately I want the whole environment documented, but with networking being foundational it’s a very good place to start. Since I’m in charge, it will also make life easier on anyone who works in the LSC.

Lean-Less can easily be more. Some of the key purposes of this network restructure are to reduce needless switch count and eliminate hundreds of feet of cabling. I know there are many, many environments that could do the same very easily.

Reliable-Although redundancy of design is key, reliability to me is more than that; it encompasses the aspects above as well. If you don’t know what goes where and can’t troubleshoot easily, it becomes far easier for human error to come into play.

Next time I will go over the initial environment, the resources at my disposal and the changes I’m looking to make. Stay Tuned!

Recommendations I make to save critical data

First off, your data is the most valuable part of any server. Setting up even a fairly basic web site involves many, many hours of work that is very hard, if not impossible, to replace. That doesn’t even include things like client information, orders, etc. that directly cost you money if you lose them.

Not all backup methods are for everyone. The reason is that there are widely variable needs for data security as well as a wide variety of budgets. Someone with a page that is doing e-commerce transactions will likely need a lot more in regards to backups than someone with a bi-weekly blog for instance.

First off, there are two different modes of failure you will encounter as a sysadmin. The first is a “hard” failure. This includes drives or RAID arrays (yes, it does happen) going bad. I love RAID and think it’s a great measure for protecting data, but it’s not foolproof by any means and is no substitute for backups.

The second type of failure is the “soft” failure. With this failure mode, for whatever reason, data on the system is gone. This can be anything from a user deleting their public_html directory to data corruption because the drive was heavily overrun. Commonly this is someone running a filesystem check on a machine and having it dump a few thousand files into lost+found. I have seen my fair share of machines come up after this and run fine, and I have seen plenty that didn’t. This can also be the result of hackers and the like messing around on your system. Something I will warn of: if you use a secondary drive in the same server for backups, it can be deleted by hackers as well. If you leave the drive mounted after backups are done and they do rm -rf /*, it will be erased. Be sure to unmount your backup drive if you use this method. In general I do not advise relying on it for this reason; however, it makes for a great way to have backups on a system without waiting for them to transfer.
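
To make that concrete, here is a minimal sketch of the mount/back up/unmount pattern. The device, mount point and source path are just placeholders for illustration; adjust them for your own box.

#!/bin/bash
# Sketch only: back up to a second drive, then unmount it so a stray
# rm -rf /* (or an intruder) can't reach the copies.
BACKUP_DEV=/dev/sdb1      # placeholder device
BACKUP_MNT=/mnt/backup    # placeholder mount point
mount "$BACKUP_DEV" "$BACKUP_MNT" || exit 1
rsync -a --delete /home/ "$BACKUP_MNT/home/"   # copy whatever you actually care about
umount "$BACKUP_MNT"                           # leave it unmounted between runs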

The first rule I have is that no matter what, you should have a minimum of three copies of your data, at least one of which is totally off site and not within the same company as your server/colocation/shared host. This gives you options if something happens, and you’re not relying on one group of people to ensure your data is intact. This can be as simple as having your system upload the files to a home or office computer via DynDNS and a port mapped back through your router, then burning the images to a CD weekly. On a higher level, it can be storage with a company offering cloud storage, such as Amazon.

How often you should back up your data, and how long to retain it, is another fairly common question. This is largely subjective, and is a compromise between how much data you can afford to lose and how much space you can afford. If you’re running a streaming video site, this can get quite pricey very quickly, even to the point that it may be best to get a low-end server and put big drives in it to back up to. After all, if you pay $0.50/GB and need 1 TB of backup space, $500 buys a good bit of server!

What to back up is another good question. If you’re running a forum or something similar where there aren’t many changes made to the underlying software, doing a single full backup and then backing up the user upload directories (e.g. images) and the database may be enough. If the site is under constant development, full backups would be a great deal more prudent.
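
As a rough example of that “just the uploads and the database” approach, something like this would cover a forum. The paths and the database name are made up for illustration.

#!/bin/bash
# Sketch: back up only the user-uploaded images and the database.
date=`date +%Y%m%d`
tar czf /backups/uploads-$date.tar.gz /home/user/public_html/forum/images   # example upload directory
mysqldump forumdb > /backups/forumdb-$date.sql                              # example database name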

The last thing to consider is how these backups are going to be made. I have done backups with shell scripts before, and have used both Plesk’s and cPanel’s backup mechanisms. With a shell script you gain a ton of versatility in how and what you back up, at the price of it being a lot more tedious to configure. These sorts of backups are really nice if you want your system to back up only certain things at varying intervals. The panel-based backups are so easy to configure that there is little to no reason you shouldn’t set them up: you just specify how often you want backups, where they will be stored and what will be backed up. The caveat I will give about panel-based backup systems is that even with CPU-level tweaks in the config files they can heavily load a system, so my advice is to run them during off hours.

Setting up a Linux console

I had always wanted to have a dumb terminal on my Linux workstation. I like the idea of a text-only environment, mostly because I’m not a fan of clicking through a thousand windows to get the consoles I want set up. That being said, I haven’t had any luck finding dumb terminals; with mainframes out of style, they are getting progressively harder to come by. Thin clients, on the other hand, are easy to get via eBay and other sources and come in a variety of shapes, sizes and capabilities. I ended up buying a random one, likely without doing adequate research into it, but at $25 shipped I figured it was a minimal investment.

The terminal itself came with a copy of embedded Windows that was trying to connect to a Citrix server. Considering that is far from what I wanted, I needed to figure out how to get Linux on it. There are a few things you need to know about these terminals: they have an AMD Geode processor and 128 megabytes of RAM, but more importantly (IMO) only 32 megabytes of flash. That flash is ultimately the biggest limitation.

This means you really have to chop things down to even get a base OS and some sort of SSH client on them. There are really two vectors for getting another OS onto these clients. The first is USB; considering this is a USB 1.1 terminal, it is horridly slow to do it this way. It does work, but even a basic flash drive takes forever to boot. The other way, and the one I ultimately settled on, is PXE booting. There are a few advantages to this IMO, among them being:

-Faster. 100 megabit Ethernet versus 12 megabit USB. Even with a basic initrd it’s obvious to me which wins.

-Ease of migration. You can use this with pretty much any desktop, laptop or terminal that will PXE boot. That’s very convenient if you want to step up to something a bit hotter, or use a random laptop as a spare console.

-Remote connectivity. If I wanted to use this in my garage remotely, all I would have to do is run an Ethernet cable. Makes it really easy to get a VT.  No OS installations to worry about, it “just works.”

-Expandable. We can set this up to connect to a remote X11 server later if we want to.

To actually start the install, we need to change the BIOS. The password for these systems is “Fireport” which makes it easy for us to log in. There’s not a ton of options here, so we will change the boot order and exit.

Linux selection is a mixed bag on these; the fact that the Geode is an i586-class processor limits the options pretty significantly. I decided on Slackware: I’ve always liked it, the latest version still runs on an i586 system, and it’s fairly easy to “chop.”

On to actually getting this thing booted up in a (non-Windows) environment. The first thing we need to do is set up services. On SuSE this can be a bit more trying than on Red Hat due to having to open some ports, but the process is essentially the same. I am also using a separate NIC, since I run DHCP on my main network and don’t want to cause conflicts with it. The idea is that the main server will be 192.168.1.1 and the terminal will be set up as 192.168.1.2.

/etc/dhcpd.conf:

ddns-update-style none;
default-lease-time 14400;
filename "pxelinux.0";

# next-server: the TFTP server the clients should boot from (this machine).
next-server 192.168.1.1;
subnet 192.168.1.0 netmask 255.255.255.0 {
# IP distribution range: 192.168.1.2 to 192.168.1.100
range 192.168.1.2 192.168.1.100;
default-lease-time 10;
max-lease-time 10;
}

I also edited /etc/sysconfig/dhcpd to set up dhcpd to listen on eth1:

DHCPD_INTERFACE="eth1"
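
For completeness, bringing up the second NIC and restarting dhcpd looks roughly like this. The interface name and init script path are assumptions and may differ on your distro.

ifconfig eth1 192.168.1.1 netmask 255.255.255.0 up   # static IP for the PXE network
/etc/init.d/dhcpd restart                            # pick up the new config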

The next thing I did was install the TFTP server. There’s not much to that; it’s an xinetd service. Be sure the ports are open; if at all possible I like to nmap the server to make sure everything is open and running (there’s a quick command-level recap after the config below). After this, we need to add a few things to the config. The first is the pxelinux.0 file, which goes in /tftpboot/. After this, a pxelinux.cfg directory needs to be created, with a file in it called default. Since I started with the hugesmp kernel (I would just use the regular huge kernel, since this box is single-core and single-processor) I set it up the following way:

default hugesmp.s
prompt 1
timeout 1
display message.txt
F1 message.txt
F2 f2.txt
label hugesmp.s
kernel kernels/hugesmp.s/bzImage
append initrd=initrd.img load_ramdisk=1 prompt_ramdisk=0 rw SLACK_KERNEL=hugesmp.s
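
As mentioned above, the command-level version of the TFTP prep looks something like this. The location of pxelinux.0 varies by distro, so treat these paths as examples.

mkdir -p /tftpboot/pxelinux.cfg                # TFTP root plus the pxelinux config directory
cp /usr/share/syslinux/pxelinux.0 /tftpboot/   # pxelinux.0 location varies by distro
nmap -sU -p 69 192.168.1.1                     # sanity check that TFTP (UDP 69) is answering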

I copied the initrd.img off the DVD, as well as the “kernels” directory in its entirety, into /tftpboot. You should be able to actually boot the Slackware initrd at this point and run any of the setup apps you want. We, however, are going to do a lot more with it. That will come in part II, where we will tweak it to our needs. The cool thing about this initrd is that it has an SSH server as well as an SSH client built into it.
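
For reference, the copy step described above is just a couple of commands. I’m assuming the DVD is mounted at /mnt/dvd and that the initrd lives under isolinux/ as on the Slackware media I remember; adjust if your layout differs.

cp /mnt/dvd/isolinux/initrd.img /tftpboot/   # the Slackware installer initrd
cp -r /mnt/dvd/kernels /tftpboot/            # the whole kernels directory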

Extending LVM across multiple disks

Had a situation arise yesterday where a coworker wanted to extend an LVM Volume Group across two disks. It’s actually really simple to do.

The first thing we do is use vgdisplay to show the original info for the Volume Group. Notice that in this output the Free PE / Size is 0.

[root@nfsen01 ~]# vgdisplay
--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               2.88 GB
PE Size               32.00 MB
Total PE              92
Alloc PE / Size       92 / 2.88 GB
Free  PE / Size       0 / 0
VG UUID              XXXXXXXXXXXXXXXXXXXXXXXXXX

To create the LVM PV on your new disk, follow these steps.

fdisk /dev/sdb
n      (new partition)
p      (primary)
1      (partition number 1)
enter  (accept the default first cylinder)
enter  (accept the default last cylinder, using the whole disk)
t      (change the partition type)
1      (partition 1)
8e     (Linux LVM)
w      (write the table and exit)

Now we will probe for the new linux partition without rebooting:

partx -v -a /dev/sdb
pvcreate /dev/sdb1

Assuming you are using sdb1 as your drive, extending the Volume Group is as simple as:

vgextend VolGroup00 /dev/sdb1

This will extend the Volume Group across the new disk. You should be able to run vgdisplay again and see that your Free PE count went up.

What you have to do next is extend the Logical Volume. This is optional depending on your objectives; if you just wanted a larger common VG to create new volumes in, you can do that at your convenience now.

lvextend -L +931.51G /dev/mapper/VolGroup00-LogVol00

Assuming you’re running ext3, you would then use the following command to grow the filesystem. For other filesystems on top of LVM your mileage may vary; consult your documentation.

resize2fs /dev/mapper/VolGroup00-LogVol00 -p

After this is done you should be able to run df -h and see that your filesystem has been enlarged. This can all be done while the system is active; there’s no need for boot CDs or the like.
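
A few quick sanity checks if you want to see the result at each layer:

vgdisplay VolGroup00                 # Free PE reflects the new disk (and drops again after lvextend)
lvdisplay /dev/VolGroup00/LogVol00   # LV size should show the extension
df -h                                # the filesystem should now show the extra space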

Some Perl for entering IPs into a database

This code is a proof of concept; if you want to use it in a production environment I suggest you go over it heavily. For someone fairly new to Perl there is a lot going on here that you may find useful. The overall idea is to convert IPs from dotted-quad decimal into binary and then store them in a database. Because IPs can’t be duplicated across machines without causing a conflict, the IP address is generally a good value to use as a primary key. Feel free to use and adapt this code as you see fit. The end result should be something like:

mysql> select * from IPs;
+----------------------------------+----------------------------------+--------------------------+
| ip_address                       | netmask                          | computer_name            |
+----------------------------------+----------------------------------+--------------------------+
| 11000000101010000000001000000101 | 11111111111111111111111100000000 | control.frontandback.net |
+----------------------------------+----------------------------------+--------------------------+
1 row in set (0.00 sec)

#!/usr/bin/perl

#IP2DB 0.1.0 (C) February 2011 Howard A Underwood II
#Free for use and modification under the Creative Commons 1.0 License. If you want to give me a shout out try aunderwoodii#at#gmail.com
#The purpose of this code is to convert an IP address and netmask pair into Binary to make it easily stored in the database in a processable manner. This is only for IPV4 atm and is just a proof of concept, I’d love to see your adaptations to real world applications. Feel free to give me your feedback at the above address.

#This requires DBI and DBD::MySQL. Use CPAN or your package manager of choice to get them.
use DBI;
use DBD::mysql;

#info to connect to the DB server. This assumes that your table is pre-created. If you need to create a database do the following:
#create database ips;
#CREATE TABLE IPs (ip_address BINARY(32) PRIMARY KEY, netmask BINARY(32), computer_name char(200));

$hostname="localhost";
$db="ips";
$port="3306";
$user="dbuser";
$password="wouldn'tyouliketoknow";

#info to put into the DB. There’s the IP here, netmask and the computer name. These variables and the ones above are going to be what you need to use to adapt the script to your needs.
$ip="192.168.2.5";
$netmask="255.255.255.0";
$compname="control.frontandback.net";

#Getting down to business. This first line takes the netmask and breaks it into 4 octets.
my @netmask = split (/\./, $netmask);
#Now that we have 4 octets, we process each one into binary. A future modification would be cleaning this up into a loop rather than 4 instances.
$octetnm0= unpack("B*", pack("C", $netmask[0]));
$octetnm1= unpack("B*", pack("C", $netmask[1]));
$octetnm2= unpack("B*", pack("C", $netmask[2]));
$octetnm3= unpack("B*", pack("C", $netmask[3]));
#We recombine everything into 1 binary number after this.
$totalnm= $octetnm0.$octetnm1.$octetnm2.$octetnm3;
#Just printing the post-process number on the TTY for human verification.
print "$totalnm\n";

#Now we repeat the process for the IP itself. This will probably get condensed into one instance along with the above code eventually. Once again, not the most efficient way to do it, but straightforward.
my @ip = split (/\./, $ip);
$octet0= unpack("B*", pack("C", $ip[0]));
$octet1= unpack("B*", pack("C", $ip[1]));
$octet2= unpack("B*", pack("C", $ip[2]));
$octet3= unpack("B*", pack("C", $ip[3]));
$total= $octet0.$octet1.$octet2.$octet3;
print "$total\n";
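
#As noted above, the two 4-line blocks could be condensed into a loop. One possible
#(illustrative, untested) replacement for both conversions would be:
#sub to_binary {
#    my @octets = split (/\./, $_[0]);
#    return join("", map { unpack("B*", pack("C", $_)) } @octets);
#}
#$totalnm = to_binary($netmask);
#$total   = to_binary($ip);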

#Basic DBI connection code. We use DBI to connect to the database.
$dsn = "DBI:mysql:database=$db;host=$hostname;port=$port";
$DBIconnect = DBI->connect($dsn, $user, $password)
#If we don't like what we see, bail out because we can't connect.
or die "Connection denied to database $db\n";
#Add the entry to the table. Please note that if you use the above table it will probably not let you run this more than once for any given IP.
eval { $DBIconnect->do("INSERT INTO IPs (ip_address,netmask,computer_name) VALUES ('$total','$totalnm','$compname');") };
print "Data not added to the database: $@\n" if $@;

The Sword of SEO part II

Well, it’s been a long time since I posted the first article on this; my time, or lack thereof, got the best of me. Countering this attack is actually very, very easy. The first thing you do is find out who the referrer is, which is simply done by tailing the logs. If you have a single domain this is fairly easy; otherwise my preferred method involves using “watch ls -l” and seeing which log grows the fastest. That tends to be the one getting hit, or at least a likely suspect. I will probably write a script at some point to check this and tell me which log grows the most in, say, 10 seconds (there’s a rough sketch of that idea after the tail example below). After that, you can use tail in the manner of:

tail -f /etc/httpd/domlogs/domain.log
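
Since I mentioned it, here is a rough bash take on the “which log grew the most in 10 seconds” idea. It assumes a bash with associative arrays (bash 4+) and the domlogs path used in this post; adjust both to taste.

#!/bin/bash
# Snapshot log sizes, wait 10 seconds, then print the fastest growers.
logdir=/etc/httpd/domlogs
declare -A before
for f in "$logdir"/*; do before[$f]=$(stat -c %s "$f"); done
sleep 10
for f in "$logdir"/*; do
    echo "$(( $(stat -c %s "$f") - ${before[$f]} )) $f"
done | sort -rn | head -5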

When you tail the log, you will see which IPs are querying the page and the referrer they are coming from. Look for anything that doesn’t look like a search engine. To actually block the attackers once they’re identified, you deny the attack based on the referrer in .htaccess. See the convenient rewrite code I jacked from another web site (about the same thing I did when I actually saw the attack):

RewriteEngine on
# Options +FollowSymlinks
RewriteCond %{HTTP_REFERER} attacker\.com [NC]
RewriteRule .* - [F]
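
A quick way to confirm the rule is working is to fake the referrer yourself with curl and make sure you get a 403 back (substitute your own URL):

curl -I -e "http://attacker.com/whatever" http://www.example.com/forum/index.php   # should return 403 Forbidden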

So, why does this work, you may ask? In the case I saw, the person was attacking a “high value” target, meaning a page that hits the database and has dynamically generated content with no caching. Server-side configuration CAN make these sorts of attacks a lot harder to perpetrate as well; anything you can do to increase the robustness of a server will help with a DoS. When you add a rule like this that denies access based on the referrer, the attacker gets a cheap static 403 response instead, which uses virtually no resources compared to something PHP-based and backed by a database. It’s a good idea to know about this sort of attack, as I could see it getting bigger in the future. Black hat SEO is very common these days, and if you have the SEO part down, the resources needed to pull off the rest of this attack are virtually nothing compared to what it does. It’s also plausible that we will see this attack combined with “conventional,” network-level DoSing to increase its effectiveness.

Another basic shell script

The great thing about shell scripts is that they are a great way to solve complex problems that would cost you a lot of time to handle manually. To this end, I had a client who needed some videos re-encoded on his server because they hadn’t encoded properly. For an experienced script writer this takes about 5 minutes to write, and it also means that if the client wants to use it later, he can. The configuration was nice because the input and output file names were the same; just the extension was different. This is not very polished; if it were, I would:

A) Run it as the same user

B) Put it in the user’s homedir

C) Make it password protected and executable via a PHP script, so the user wouldn’t need any bash experience at all but could upload a list via FTP and just run it.

#!/bin/bash

for video in `cat /root/list.txt` #We will run a loop where each line in list.txt is run as a variable $video.
do
mv /home/user/public_html/media/videos/flv/$video.flv /home/user/public_html/media/videos/flv/$video.flv.old #back up old files
ffmpeg -y -b 1500k -r 25 -i /home/user/public_html/media/videos/vid/$video.* -f flv -s 640x480 -deinterlace -ac 1 -ar 44100 /home/user/public_html/media/videos/flv/$video.flv #encode new file: 640x480 out, FLV format, deinterlaced
chown user:user /home/user/public_html/media/videos/flv/$video.flv #chown to the right user. Not required if running as the right user.
done
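
If you wanted to build list.txt straight from the source directory and kick the script off, something like this would do it. The script path here (/root/encode.sh) is just an example of wherever you saved the above.

ls /home/user/public_html/media/videos/vid | sed 's/\.[^.]*$//' > /root/list.txt   # file names minus extensions
bash /root/encode.sh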

A quickie MySQL backup script

I’ve seen my fair share of clients that need basic MySQL backups but have no control panel or don’t want to bother with control-panel-based backups. This is a really simple setup that lets you do DB backups and drop them in a local directory on the server. It could likely be modified to rsync to another server as well if you wanted to. There are a ton of options that could be added to this; your imagination (and shell-scripting capacity) are the only limitations. Some suggestions I have would be:

-Mail on success or failure and on old file deletion

-Connect to a remote DB

-Monitor the overall size

Well enough with the abstract, on to the shell!

#!/bin/bash
date=`date +%Y%m%d`
mysqldump --all-databases > /mysqlbackups/mysql-$date.sql
find /mysqlbackups/ -mtime +30 -delete

If you notice, this takes up all of 4 lines. The first is the shebang, the second establishes the date stamp, the third dumps the databases and the last purges any old backups. The only real variable you have to change here is the “+30” so that it is the number of days you want to retain the backups for, minus one.
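
To round it out, you would typically save the script somewhere like /root/mysql-backup.sh (name and location are just examples) and schedule it from cron, with credentials supplied via root’s ~/.my.cnf so nothing has to sit on the command line:

# crontab entry: run the backup nightly at 3 AM
0 3 * * * /root/mysql-backup.sh
# /root/.my.cnf so mysqldump can authenticate non-interactively:
# [client]
# user=root
# password=yourpassword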