
DHCP on Core – Clustered

As I said, we would be working on DHCP next, but to add a bit more difficulty we’re going to make it clustered. Clustering is a topic you will find on the 70-643 as well as in working with Core. Don’t forget that most of what you can do with the command line on Core applies to the full version as well. So for this lab we’re going to need two additional servers as well as our original domain controller. Note down the address that you have set for your DC, as your DC is also running DNS. DNS is vital for a healthy Active Directory; your AD-based network will not function properly without proper support from DNS. Let’s digress a bit and look at how DNS and Active Directory work together.

When a DC is promoted in AD it creates a number of SRV locator records, both for its site and for the list of DCs for that zone. Global catalogs also add records for themselves, as do several other Active Directory integrated applications. These are located in the _msdcs.domainname.local zone. When a computer is authenticating to AD it queries the DNS server for the SRV record of a DC in its site as well as that DC’s A record. If you want to get into the fine details of how this all works, and I highly recommend that you do, read through this Technet article. By the way, a handy fix for thrashed locator records is to restart the netlogon service on your problem DC. It recreates any invalid locator records on the DC for you.
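If you want a quick sanity check on those records yourself, you can query for the DC locator SRV record directly and then bounce netlogon if anything looks off; substitute your own zone for domainname.local here:

C:\>nslookup -type=SRV _ldap._tcp.dc._msdcs.domainname.local
C:\>net stop netlogon
C:\>net start netlogon

Now back to our project.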

Let’s get our servers created. Set up two Core installations and make sure that they each have two network adapters. One will be for our production network and the other will be for a heartbeat network specifically for the cluster. Do some reading here on best practices for clustering. While we will not be meeting all of the minimum requirements and best practices, since this is a simplified lab situation, you will still want to know what they are for when you are implementing this in a real-life situation. Now configure the first network adapter for your production network; for the second NIC we will leave off the default gateway and set no DNS servers. Here is a quick run-through of how you would configure these machines, and then we’ll join them to the domain as well.

First let’s give our interfaces some more distinguishing names. Jump into netsh, but only go down to the interface level.

netsh interface>set interface name="Local Area Connection" newname="Public Network"
netsh interface>set interface name="Local Area Connection 2" newname="Heartbeat Network"
netsh interface>show interface

You should see the proper names for your interfaces now. This isn’t a necessary step, but anyone coming along behind you will be able to tell what is what a whole lot more easily than without the names. Now let’s set a DNS server for our public network and configure our heartbeat network.

netsh interface ipv4>set dnsserver name="Public Network" source=static address=10.60.0.2
netsh interface ipv4>set address name="Heartbeat Network" source=static address=192.168.1.101 mask=255.255.255.0

I’m sure you’ve noticed by now that you can go by either interface index or name, whichever you prefer. Don’t forget to set these up on both servers, and don’t forget to assign different addresses to each. There is no need to disable NetBIOS, as it is disabled by default.
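As a quick example of the index form, here is the heartbeat configuration again as you might run it on the second server. The index of 3 is hypothetical, check show interfaces for your own, and 192.168.1.102 is just the address I am assuming for node two’s heartbeat NIC:

netsh interface ipv4>show interfaces
netsh interface ipv4>set address name=3 source=static address=192.168.1.102 mask=255.255.255.0

Now let’s get these machines joined to the domain.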

C:\>netdom join %computername% /domain:shinra /userd:administrator /passwordd:*

The * serves the purpose of prompting you to enter your password. You can put the password in place of the * on the command line to skip the prompt. Don’t forget to reboot after you have successfully joined the domain.

At this point you should step back and make sure that communication between everyone is working OK. If your network is broken in some fashion, it makes it a royal pain to get clustering going. Out of the box the Windows firewall blocks pings, so you will want to allow those through for testing. You’ll need to go back into netsh to do this.

netsh>firewall
netsh firewall>set icmpsetting type=8 mode=ENABLE

By the way, it is also possible to do this all as a one-liner from the command prompt: netsh firewall set icmpsetting type=8 mode=ENABLE. Now run the command oclist. This will show you all of the components available to install; what we need to get installed first is the FailoverCluster-Core component so that we can start building our cluster. Before installing it, though, verify that network connectivity is working, especially across the heartbeat network.
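A quick verification pass might look like the following; I am assuming 10.60.0.2 is your DC and 192.168.1.102 is the heartbeat address you gave the other node, and the findstr at the end just fishes the clustering component out of oclist’s long output:

C:\>ping 10.60.0.2
C:\>ping 192.168.1.102
C:\>oclist | findstr /i FailoverCluster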

C:\>start /w ocsetup FailoverCluster-Core

This gives us the cluster.exe command to set up and manage our clusters. A real man manages from the command line after all, rarrgh-rarrrrrgh. The start /w is optional; it just lets the command complete before returning you to the command line. If you don’t mind it running in the background with no notification when it is finished, then omit the start /w. Run this on both of your nodes.
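If you want to confirm the install took, the cluster service should now exist on each node; note that it will not actually be running until the cluster is created:

C:\>sc query clussvc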

C:\>cluster TurksCluster /create /node:"renocore rudecore" /ipaddr:10.60.0.50/24

This gets our cluster created with both nodes added in. The first parameter is the name that we are designating for the cluster. In 2008 you can now add multiple nodes at the same time rather than first setting up the cluster and then bringing in the nodes. The subnet mask can be given in either /xx or /xxx.xxx.xxx.xxx form, whichever notation you prefer.
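You can verify that both nodes made it in and are up with the node status command:

C:\>cluster TurksCluster node /status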

C:\>cluster TurksCluster restype

This gives us a list of the available resource types that we have to work with. The type of cluster we will be creating is a Node and File Share Majority cluster. This requires a file share, so let’s hop onto a computer that is not part of the cluster and create a share for our witness.

C:\>mkdir TurksWitness
C:\>net share TurksWitness=C:\TurksWitness /GRANT:Shinra\TurksCluster$,FULL /GRANT:Shinra\Administrator,FULL
C:\>icacls C:\TurksWitness /grant Shinra\TurksCluster$:(F)
C:\>icacls C:\TurksWitness /grant Shinra\Administrator:(F)

Don’t forget the $ at the end of the cluster account name; that tells it to look for a computer account in AD. Otherwise you will get the error “No mapping between account names and security IDs was done.” So this gets our share prepped.
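It is worth double-checking your handiwork here; net share shows the share permissions and icacls the NTFS permissions:

C:\>net share TurksWitness
C:\>icacls C:\TurksWitness

Now we need to add it as a quorum resource.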

C:\>cluster TurksCluster resource "File Share Witness" /create /group:"Cluster Group" /type:"File Share Witness" /priv SharePath=\\cloudcore\TurksWitness
C:\>cluster TurksCluster resource "File Share Witness" /online
C:\>cluster TurksCluster /quorum:"File Share Witness"

This gets our File Share Witness set up and makes it the quorum resource. We now have a Node and File Share Majority cluster, which gives us the extra vote needed for high availability.
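You can confirm the quorum configuration at any time; running /quorum with no value should simply display the current quorum resource:

C:\>cluster TurksCluster /quorum

Let’s start getting our DHCP server cluster set up.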

C:\>cluster TurksCluster group "DHCP Server" /create

Now we have a resource group created for our DHCP server.

C:\>start /w ocsetup DHCPServercore

Run this on both servers to get the DHCP service installed. We are in need of some shared storage for this server though. I am using StarWind’s iSCSI Target server to generate some instant SAN storage on a separate Server 2008 machine I set up. Feel free to use whatever you wish for storage, though as of this writing none of the free Linux-based iSCSI alternatives such as Openfiler currently work with Server 2008 Failover Clustering. This may change in a few months. So we will run through setting up iSCSI storage under Core. Iscsicli is a cryptic beast, so you may wish to peruse the initiator user guide over here. For best practices you would want to segregate the SAN traffic from your public network traffic, but in this case we will maintain simplicity.

C:\>sc config msiscsi start= auto
C:\>sc start msiscsi

This configures our iSCSI service to start automatically, and then we start it ourselves so that we can begin adding our storage. Note the space after “start=”, as it is very important to include it.
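A quick query confirms the service came up:

C:\>sc query msiscsi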

C:\>iscsicli qaddtargetportal 10.60.0.75

If you are using a CHAP login and password, you add those after the address. Execute iscsicli listtargets to get your IQN so that you can copy and paste it for our next series of commands.

C:\>iscsicli qlogintarget iqn.2006-01.inc.shinra:tsn.f3e716ca5a5c
C:\>iscsicli persistentlogintarget iqn.2006-01.inc.shinra:tsn.f3e716ca5a5c t * * * * * * * * * * * * * * * 0

Note the asterisks with spaces between them; there are 15 of them. They tell the driver to autoconfigure a slew of settings; check the guide if you need to customize them for your setup. Thanks to Chuck Timon and Microsoft KB 870964 for this one. Now run through the same routine on the other server to get your storage set up there as well. Then let’s bring our iSCSI disk online and get it formatted.

C:\>diskpart
DISKPART> list disk

Look for your new disk in the list. Now we need to bring it online and initialize it. This isn’t quite as easy as when using the GUI though.

DISKPART> select disk=1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> create partition primary
DISKPART> format label="Turks Cluster Storage"

This gets us a primary partition filling all of the storage we brought up, formatted to the default file system of NTFS. Optionally you can add “quick” as a parameter to format to make the format process speedier.
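For reference, the quick version of that format would look like this; the fs=ntfs is optional since NTFS is the default, I am just being explicit:

DISKPART> format fs=ntfs label="Turks Cluster Storage" quick

Now we need to assign a drive letter, so find your new volume in the list.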

DISKPART> list volume
DISKPART> select volume 2
DISKPART> assign letter=T
DISKPART> detail disk

Note down the disk ID; we will be using it in a moment to get this storage added in to our DHCP Server cluster.

C:\>cluster TurksCluster res "DHCP Storage" /create /group:"DHCP Server" /type:"Physical Disk" /priv DiskSignature=0xB92B880C
C:\>cluster TurksCluster res "DHCP Storage" /online

Make sure you include the 0x. This lets cluster.exe know that it is working with a hex number; otherwise it will fail out on an incorrect parameter. Alternatively you can break out your calculator, convert the signature to decimal, and put in the decimal number. Don’t forget to bring this resource online.
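If you want to make sure the signature stuck, you can list the private properties of the disk resource:

C:\>cluster TurksCluster res "DHCP Storage" /priv

We can now start setting up the resources for our DHCP cluster.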

C:\>cluster TurksCluster res "DHCP IP Address" /create /group:"DHCP Server" /type:"IP Address" /priv Address=10.60.0.55 SubnetMask=255.255.255.0

We have an IP for the DHCP server; let’s get it a name.

C:\>cluster TurksCluster res "DHCP Name" /create /group:"DHCP Server" /type:"Network Name" /priv DnsName=TurksDHCP Name=TURKSDHCP /adddep:"DHCP IP Address"

We get our name and also make the name resource depend upon having an IP. It is time to bring in our DHCP service.

C:\>cluster TurksCluster res "DHCP Service" /create /group:"DHCP Server" /type:"DHCP Service" /priv DatabasePath="T:\DHCP\db" LogFilePath="T:\DHCP\Logs" BackupPath="T:\DHCP\Backup" /adddep:"DHCP Name" /adddep:"DHCP IP Address" /adddep:"DHCP Storage"

This gets our service created and puts all the data onto the shared storage. Let’s bring it online and see if it works.

C:\>cluster TurksCluster res "DHCP IP Address" /online
C:\>cluster TurksCluster res "DHCP Name" /online
C:\>cluster TurksCluster res "DHCP Service" /online

If no errors popped up then congratulations! You have created a DHCP cluster running on Core, and from the command line too! It is fully possible to use the GUI management tools, connect to the cluster from there, and create all of this. The benefit of doing it this way is that it gives you familiarity with the command line so that you can make changes swiftly, or, even more importantly, script out any large cluster roll-outs you may be doing. Next part we’ll finish up actually configuring this DHCP server, as it is rather useless at the moment without any scopes created and without being authorized in AD.
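While you are basking in the glow, it is worth watching a failover actually happen. The group status command shows which node currently owns the group, and /moveto pushes it over to another node (rudecore here, going by our node names); run status again afterwards and ownership should have changed:

C:\>cluster TurksCluster group "DHCP Server" /status
C:\>cluster TurksCluster group "DHCP Server" /moveto:rudecore

Let’s go over a few gotchas for this though.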

First of all, make sure that all of your network connectivity and storage is configured properly and everything is talking to everything else. As usual, work your way up from the physical layer. Then look into the configurations of your resources. A common problem that stops a resource from starting is its dependencies: make sure that the dependencies can start on their own, and also make sure that the resource has its dependencies added (a quick way to check this is shown below). For instance, if you forget to add the dependencies to the DHCP Service resource, it will not start until you add them. If cluster.exe is not working, make sure the cluster service has started. Finally, if it is not allowing you to create any resources, make sure that you have created your cluster management point before you start adding services to it.
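As a quick way to do that dependency check, cluster.exe will list what a resource currently depends on; /listdep is short for /listdependencies:

C:\>cluster TurksCluster res "DHCP Service" /listdep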
