Lisle Solution Center Network Refurb part II

OK, in part one I went over the objectives I wanted to achieve. In part II I'm going to go over the resources at my disposal. There are aspects of this design that are a compromise, but at the same time I got to prioritize the things I cared about most. This is an interim environment that will carry us over to our 10G roll-out. It is designed with a certain level of fault tolerance in mind, although not as much as I would have liked.

Physically, I am starting out with two Dell 6248s and two 3750G-24P switches connected to a pair of 3750G-48P switches acting as the cores. Each switch has a single 1G aggregate running LACP, and I have a single uplink to the main network.

Since I have two stack cables, I decided to stack the 3750Gs that were previously acting as standalone switches. The reasoning is that with only a single uplink to the main network, redundancy at the switch level wasn't really buying me anything on its own. I am running four links from the 3750G-48Ps, which will serve the ESXi cluster, down to the 3750G-24Ps. Because of the stack cables, each pair of switches behaves as a single larger switch, so with two uplinks from each of the 48-port switches to the 24-port stack I can lose links, or have a stacking cable fail on either side, and stay up. It also gives me 8 Gbps of bandwidth to the core switches, which will matter more when I roll 10G networking out through those cores. The upshot is that I will only use the 3750G-48Ps as the ESXi cluster switches and eliminate the Dell 6248s from the present configuration entirely; they will be re-used at some later point.
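To convince myself the fault-tolerance claim holds, here is a rough sketch in Python that models the topology and walks through single-failure scenarios. The switch and link names are made up for illustration; the point is just that any one link, or either stack cable, can die without partitioning anything.

from collections import defaultdict

# Hypothetical switch names; the real port assignments live in the documentation.
SWITCHES = {"esxi-48p-1", "esxi-48p-2", "core-24p-1", "core-24p-2"}

LINKS = [
    ("esxi-48p-1", "core-24p-1"), ("esxi-48p-1", "core-24p-2"),  # uplinks from 48P #1
    ("esxi-48p-2", "core-24p-1"), ("esxi-48p-2", "core-24p-2"),  # uplinks from 48P #2
    ("esxi-48p-1", "esxi-48p-2"),   # stack cable, ESXi-side pair
    ("core-24p-1", "core-24p-2"),   # stack cable, core-side pair
]

def still_connected(links):
    """Depth-first search: can every switch still reach every other switch?"""
    adj = defaultdict(set)
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(SWITCHES))]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj[node] - seen)
    return seen == SWITCHES

# Fail each link (including the stack cables) one at a time.
for failed in LINKS:
    remaining = [link for link in LINKS if link != failed]
    status = "still up" if still_connected(remaining) else "PARTITIONED"
    print(f"lose {failed[0]} <-> {failed[1]}: {status}")

Every single-failure case comes back "still up", which is the level of redundancy I was after.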

Logically, I'm really starting from scratch. There were some attempts at separating VLANs, but that's fairly mangled, along with the vSwitch config in ESXi. I've decided to split the following items into separate VLANs, which is a fairly conservative plan but one I think gives room to grow.

Non-routable:
- Internal /20 for people to configure networks on. I will probably allocate chunks to users through an IP plan of some form; there's a rough sketch of this below.
- /24 for managed power strips. Way overkill, but who cares? Easier than running out of IPs.
- /24 for switch management
- /24 for IPMI devices
- /24 for vMotion
- /24 for VMware management

Routable:
I have three /24s on the intranet. One of my goals with this effort is to reclaim as much of that space as is practical, since I'm not sure how hard it will be to get more. I would rather this space be used efficiently regardless, so pushing things like IPMI over to non-routable IPs is a priority for me.
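To make the plan concrete, here is a minimal sketch using Python's ipaddress module. The prefixes are placeholders out of RFC 1918 space (the real routable /24s obviously aren't listed here) and the per-user chunk size is just an example, but it shows the shape of the plan: a /20 carved into user allocations plus a handful of infrastructure /24s.

import ipaddress

# Placeholder prefixes -- the real addressing lives in the IP plan, not in a blog post.
internal_20 = ipaddress.ip_network("10.20.0.0/20")      # user sandbox space
infrastructure = {
    "pdu-mgmt":    ipaddress.ip_network("10.20.16.0/24"),
    "switch-mgmt": ipaddress.ip_network("10.20.17.0/24"),
    "ipmi":        ipaddress.ip_network("10.20.18.0/24"),
    "vmotion":     ipaddress.ip_network("10.20.19.0/24"),
    "vmware-mgmt": ipaddress.ip_network("10.20.20.0/24"),
}

# Sanity check: nothing should overlap the user /20.
for name, net in infrastructure.items():
    assert not net.overlaps(internal_20), f"{name} overlaps the user space"

# Carve the /20 into per-user chunks. /26 (62 usable hosts each) is just an
# example size; the real allocations would come out of the IP plan.
user_chunks = list(internal_20.subnets(new_prefix=26))
print(f"{len(user_chunks)} x /26 user chunks available in {internal_20}")

for name, net in infrastructure.items():
    print(f"{name:12s} {net}  ({net.num_addresses - 2} usable hosts)")

Laying it out in something executable rather than a spreadsheet also means overlaps get caught immediately.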

The last thing I will mention is that I am documenting the physical configuration, port utilization, and addressing (VLANs and IPs). This is probably the most critical step in this sort of reconfiguration. It's far easier for the next guy (who may well be you) to troubleshoot a configuration when it's written down where everything lives, and documented environments tend to scale a lot better too.
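I haven't settled on a final format, but even a small script that spits out the port map as a CSV keeps that documentation consistent and easy to diff. A quick sketch, with made-up switch names and port assignments purely for illustration:

import csv

# Hypothetical entries; the real map would cover every port in use.
port_map = [
    # (switch,      port,       vlan, description)
    ("esxi-48p-1", "Gi1/0/1",   100, "esxi01 vmnic0"),
    ("esxi-48p-1", "Gi1/0/47",  999, "uplink to core stack, LACP member"),
    ("core-24p-1", "Gi1/0/24",  999, "uplink from esxi-48p-1, LACP member"),
    ("core-24p-2", "Gi1/0/1",   110, "managed power strip, rack 3"),
]

with open("port-map.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["switch", "port", "vlan", "description"])
    writer.writerows(port_map)

print(f"wrote {len(port_map)} entries to port-map.csv")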

Next time, I’ll cover what I’m doing in the ESXi configuration and on the switch side. This is the stuff with a big learning curve!
