Anyone taken a look at Windows Server 2008 R2 yet? Things I’m excited about in it are PowerShell on Core, AD cmdlets, and the AD Recycle Bin. PS on Core is the most exciting addition though. Maybe later on I will start delving into R2 and talk about working with that on Core. This time, though, we are going to deal with setting up a basic DFS using Windows Server 2008 Core machines.
Core makes for a low-resource file server that you can deploy to do its job without letting layers of the OS get in the way. Using it for a DFS is also a step in the right direction towards high availability of your data. Furthermore, it can help you control bandwidth utilization by keeping replicas of your data in locations local to your users. Failover is provided by pointing users at the namespace, which then directs them to the nearest server. Let’s run through putting together a setup on Core.
Grab our first server and let’s install the DFS Namespace role.
C:\> start /w ocsetup DFSN-Server
Once this is complete we can start breaking out our trusty dfsutil.exe tool. We will start out with making a domain based namespace. Set up a share to use for this.
C:\> mkdir TurksNS
C:\> net share TurksNS=C:\TurksNS /GRANT:"Authenticated Users",FULL
Don’t forget to customize the share and NTFS permissions to your specific needs.
C:\> dfsutil root adddom \\renocore\TurksNS "2008 Namespace"
You can also add V1 or V2 as a parameter; the default is V2. V1 is a Windows 2000 Server mode namespace, while V2 is a 2008 mode namespace. Note that a V2 namespace requires a Windows Server 2008 domain functional level. If you receive any "The RPC server is unavailable" errors, make sure the DFS Namespace service is running. The easiest way is to reboot, but you can also run the sc command to start the service.
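For example, to be explicit about the mode, you would append the parameter to the same command as above (V2 is what you would get by default anyway):

```bat
REM Sketch: explicitly request a 2008 (V2) mode namespace for the same root.
dfsutil root adddom \\renocore\TurksNS "2008 Namespace" V2
```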
C:\> sc start dfs
If you are still getting RPC errors after that, check your firewall and start going down the usual RPC troubleshooting path. Let’s verify that we have created our domain-based namespace.
C:\> dfsutil domain shinra.inc
You will see your newly created namespace there. Of course it isn’t doing much for us right now so let’s create some targets for it. Create another share on this server (or really any server) and add a link.
C:\> dfsutil link add \\shinra.inc\TurksNS\Data \\renocore\Data
If you browse to \\shinra.inc\TurksNS\Data via UNC or just map a drive, you’ll now see the data available in there. This gets us a running DFS, but right now it isn’t anything more than a fancy way to share data. There are no multiple targets, so no replication is occurring; if this server goes down, there goes access to the data. Let’s get some targets in there to fulfill the D in DFS. Jump onto another server, install the DFSN-Server role, and make yourself a share to add to the pool. Don’t forget to give it the same share and NTFS permissions as your first share, otherwise troubleshooting could get difficult later on. Once you have it ready we can add the target.
C:\> dfsutil target add \\shinra.inc\TurksNS\Data \\RudeCore\Data
We have our links now, but we still have no replication. To get that set up we need yet another role added.
C:\> start /w ocsetup DFSR-Infrastructure-ServerEdition
We will then set up a replication group for our folder here.
C:\> dfsradmin RG New /RgName:TurksData
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RudeCore
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RenoCore
This gives us a replication group with our two servers added in as members. Next we will bring in our data for replication.
C:\> dfsradmin RF New /RgName:TurksData /RfName:TurksData /RfDfsPath:\\shinra.inc\TurksNS\Data /force
We have a folder set for replication, but now we need replication links so that the data may flow. Note that force is required because we set up our namespace target first.
C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RudeCore /RecvMem:RenoCore /ConnEnabled:True /ConnRdcEnabled:True
C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RenoCore /RecvMem:RudeCore /ConnEnabled:True /ConnRdcEnabled:True
Close to the end but we still need to bring in memberships to this replication group.
C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RenoCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:True /force
C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RudeCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:False /force
Replication should start flowing shortly. If you don’t have any data in there, or if you have prepopulated the shares, you won’t know for sure whether replication is working properly. You can run a test from the command line utility.
C:\> dfsradmin PropTest New /RgName:TurksData /RfName:TurksData /MemName:RenoCore
This will start the test from RenoCore and the data will flow to RudeCore. Generate the results with dfsradmin.
C:\> dfsradmin PropRep New /RgName:TurksData /RfName:TurksData /MemName:RenoCore
You’ll find an HTML and an XML file generated to pull up in your web browser. Of course, you may find it easier to just create a new file on one share and verify that it replicates to the other. But the good thing about the report is that it is detailed and will help you track down any issues you may be having. You can also have dfsradmin automatically create the folders for you when you use dfsradmin RF, and just add them into the namespace later on. So let’s touch on one last topic here: replication of large amounts of data.
It is fine to run through this with a small amount of data that the DFS needs to replicate initially, but for large amounts, which I generally consider to be over 400 or 500 GB, you will definitely want to prepopulate things. Otherwise your DFS may choke on a few files initially and cause you all sorts of headaches, and prepopulating just plain gives you more control over everything. This all depends upon the bandwidth available to you, of course. The method I normally use is robocopy with the switches /E /SEC /ZB. Instead of /SEC you could use /COPYALL (that is, /COPY:DATSOU) to include the auditing and ownership information as well.
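As a sketch, a prepopulation run might look like this (paths follow this lab’s examples; verify the switches against your robocopy version):

```bat
REM Sketch: prepopulate the second replica from the first share before
REM enabling DFS-R. /E = subdirectories including empty ones, /SEC = file
REM data plus NTFS security, /ZB = restartable mode with fallback to
REM backup mode when access is denied.
robocopy C:\Data \\RudeCore\Data /E /SEC /ZB
```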
Now for the topic that you all have been waiting for: building an RODC! Read-only domain controllers are another one of those awesome additions to Server 2008. An RODC holds read-only copies of parts of your AD. They’re ideal for branch offices, or even your DMZ, where you need heightened security but still need access to your AD. An RODC doesn’t contain a copy of your credentials; it only caches those that you allow per policy. For customized AD-integrated applications you can also mark certain attributes in your AD as filtered. Filtered attributes do not replicate to an RODC, so if the RODC is ever compromised the attacker will not gain this critical information. Furthermore, any changes made on an RODC do not replicate back out. If, for instance, someone makes changes to the SYSVOL folder, those changes will not replicate out to the other DCs in the forest. It will leave that SYSVOL out of sync with the rest of the forest, though, which could cause some Group Policy idiosyncrasies. If you are using DFS replication for SYSVOL, this problem is fixed automatically. Later I may talk about how to enable DFS-R.
As a side note, anyone running VirtualBox under Linux who has switched to the newly released 2.6.29 kernel may be having a bit of trouble with their VB installation. If you are receiving an error message like this when starting a VM:
Failed to load VMMR0.r0 (VERR_SYMBOL_NOT_FOUND).
Unknown error creating VM (VERR_SYMBOL_NOT_FOUND).
Then you need to edit the vboxdrv Makefile. You should find it at /usr/src/vboxdrv-2.1.4/Makefile; tweak the version number to match your installed version. Uncomment the line # VBOX_USE_INSERT_PAGE = 1, then re-run the /etc/init.d/vboxdrv setup command under your root account (or just use sudo) and you should be good to go. More information about this is available here.
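That fix can be scripted; here is a minimal sketch, assuming the 2.1.4 path from above (adjust MAKEFILE for your installed version):

```shell
#!/bin/sh
# Uncomment "# VBOX_USE_INSERT_PAGE = 1" in the vboxdrv Makefile,
# then rebuild the kernel module. Run as root.
MAKEFILE="/usr/src/vboxdrv-2.1.4/Makefile"   # adjust for your version
if [ -f "$MAKEFILE" ]; then
    sed -i 's/^# *\(VBOX_USE_INSERT_PAGE = 1\)/\1/' "$MAKEFILE"
    /etc/init.d/vboxdrv setup
else
    echo "Makefile not found: $MAKEFILE"
fi
```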
Let’s get a new VM created that we will be using for our RODC. Get it installed and joined to the domain; we’ll be building a different answer file for the dcpromo. One important thing to remember, for practical as well as testing purposes, is that installing an RODC requires a minimum forest functional level of Windows Server 2003. Also remember you only need one DC in your domain running Server 2008; no need to migrate over all your DCs yet. You also have to prep your forest. Log in as an Enterprise Admin on your schema master, mount your Server 2008 DVD, and run:
D:\>xcopy /E D:\sources\adprep C:\adprep
This copies over the adprep files. Then run adprep /rodcprep to prep the forest DNS partitions for replication to an RODC. Now to set up your answer file:
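A minimal RODC answer file might look like the following sketch. The field names come from the standard dcpromo unattend syntax; the site name and password values are placeholders from this lab, so adjust them to your environment:

```ini
; rodc-answer.txt -- a minimal sketch of a dcpromo unattend file for an
; RODC. SiteName and SafeModeAdminPassword are placeholders.
[DCInstall]
ReplicaOrNewDomain=ReadOnlyReplica
ReplicaDomainDNSName=shinra.inc
SiteName=BranchOffice
InstallDNS=Yes
ConfirmGc=Yes
UserDomain=shinra.inc
UserName=shinra.inc\administrator
Password=*
SafeModeAdminPassword=PlaceholderPassword1!
RebootOnCompletion=Yes
```

You would then feed it to dcpromo with something along the lines of dcpromo /unattend:C:\rodc-answer.txt (the path is just an example).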
You will also want to specify ReplicationSourceDC= if you have Server 2003 DCs and need to point to your Server 2008 DC. You can also specify PasswordReplicationDenied to deny any additional users/groups replication to this RODC. Once you have your file created run the dcpromo as normal.
Upon success, restart your RODC. If you have your site set up properly, users there should now be able to log into their systems with authentication through the RODC. Now let’s do some delving into management of your RODC, specifically the Password Replication Policy (PRP). This defines which credentials will be cached and which will never be cached. When password caching is denied for an account, the RODC forwards the authentication request up the WAN to a writable DC. To view what is currently set for your RODC run:
C:\>repadmin /prp view JenovaCoreRODC allow
C:\>repadmin /prp view JenovaCoreRODC deny
This will show you what is currently allowed and denied for the RODC you have specified.
C:\>repadmin /prp view JenovaCoreRODC auth2
From this you will view all accounts that have been authenticated by this RODC. Finally to know what credentials have been cached by the RODC run:
C:\>repadmin /prp view JenovaCoreRODC reveal
It is important to know what credentials have been cached in case the RODC is compromised. Now, if you want to update the list of accounts you allow caching for, run:
C:\>repadmin /prp add JenovaCoreRODC allow "CN=Lab Guests,OU=Lab Users,DC=shinra,DC=inc"
This uses the LDAP DN of the account or group that you wish to allow caching for. Something to remember is that an account won’t actually be cached until it has authenticated against that RODC. You can pre-populate credentials via this command:
C:\>repadmin /rodcpwdrepl JenovaCoreRODC CloudCore "CN=Jeffery Land,OU=Lab Users,DC=shinra,DC=inc" "CN=Jeffery Land2,OU=Lab Users,DC=shinra,DC=inc"
You can specify as many users as you would like separated by a space. You will have to specify user accounts and not groups though. Most likely you would want to script this if you’re pre-populating an RODC for a site with limited/sporadic WAN connectivity. Remember that you not only want to allow caching for user accounts but also for any computer and service accounts that require authentication. Otherwise the RODC will attempt to forward the authentication on up and if the WAN is down it will fail due to not having a cached account. You are best off first working with an RODC in a lab environment prior to deployment so that you have worked through all such issues that could arise. Also if an account is both in the allowed and denied lists the account will be denied caching as deny takes precedence.
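If you do script it, a batch sketch along these lines might work (the file C:\accounts.txt, holding one LDAP DN per line, is hypothetical; the server names follow the example above):

```bat
REM Hypothetical sketch: pre-populate credentials on the RODC for each
REM account DN listed (one per line) in C:\accounts.txt.
REM JenovaCoreRODC is the RODC, CloudCore the writable source DC.
for /F "usebackq delims=" %%D in ("C:\accounts.txt") do (
    repadmin /rodcpwdrepl JenovaCoreRODC CloudCore "%%D"
)
```

Note that %%D is the batch-file form of the loop variable; use %D if typing this directly at the prompt.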
This should get you up to speed on RODC installation and management. Here is some reading for you to more thoroughly understand RODC implementation and management.
Now that you grok more completely the concepts of DNS and how it works, we will be going over some of the actual implementation details, on Server 2008 Core of course. We’ll jump on the primary DNS server for our lab, set up a subdomain, and put together some records for it. Then we’ll set up another DNS server, do a zone transfer to it, and make it authoritative for the zone. We’ll be adding a reverse look-up zone for our IP range as well. That should get you started on managing zones and records from the command line. Let’s begin with a reverse look-up zone for our shinra.inc domain. As mentioned earlier, for a reverse look-up zone you read the IP address from right to left. I need to clean up a bit first, though.
C:\>dnscmd /zonedelete 0.60.10.in-addr.arpa /dsdel
I had a zone left over from some previous work, so we are going to remove it and start over. Note the addition of /dsdel to the command. This is required to remove the zone from AD if it is AD integrated; otherwise you will receive an error such as DNS_ERROR_INVALID_ZONE_TYPE 9611. If you are working with a non-AD-integrated zone then it is fine without /dsdel. Now let’s recreate our reverse look-up zone.
C:\>dnscmd /zoneadd 0.60.10.in-addr.arpa /dsprimary
This gets us an AD integrated zone. You’ll pretty much always want to create AD integrated zones, unless you have requirements such as needing to replicate to a DNS server that is not a DC, such as a BIND server set up on your Linux box. AD integrated zones enable you to configure secure dynamic updates, which allows an ACL to control who can read and update particular records. We’ll set up some PTR records now for our machines.
C:\>dnscmd /recordadd 0.60.10.in-addr.arpa 2 PTR cloudcore.shinra.inc
Now if you execute an nslookup of 10.60.0.2 you’ll find a response of cloudcore.shinra.inc. Here’s the anatomy of how this works. After the /recordadd you specify your zone name which is 0.60.10.in-addr.arpa, then next comes your node which is your ip address relative to the zone name. Since our server is 10.60.0.2 in 0.60.10.in-addr.arpa this would be 2. If the zone was only the first two octets it would be 60.10.in-addr.arpa which would mean our node would be 2.0 for this zone. Then we specify that it is a PTR RR and give the FQDN. We’ll add in a few more records to flesh out the zone.
C:\>dnscmd /recordadd 0.60.10.in-addr.arpa 10 PTR renocore.shinra.inc
C:\>dnscmd /recordadd 0.60.10.in-addr.arpa 12 PTR rudecore.shinra.inc
Note that DHCP clients can add their own PTR records in addition to A records. To verify the list of records we’ve added, we’ll enumerate them.
C:\>dnscmd /enumrecords 0.60.10.in-addr.arpa @
@ 3600 NS cloudcore.shinra.inc.
3600 SOA cloudcore.shinra.inc. hostmaster.shinra.inc. 13 900 600 86400 3600
2 3600 PTR cloudcore.shinra.inc.
10 3600 PTR renocore.shinra.inc.
12 3600 PTR rudecore.shinra.inc.
This should show that your reverse look-up zone is properly created and populated. We will now move on to our next exercise: creating a subdomain. Since we will also be using this zone for non-AD-integrated zone transfers, we will be creating it as a standard zone, which requires it to be stored as a file and not in a directory partition.
C:\>dnscmd /zoneadd lab.shinra.inc /primary /file lab.shinra.inc.dns
We’ll add a few A records for a few non-existent machines to populate the zone. We’ll use a quick batch script to aid in this. Here’s the contents of the script.
for /L %%C in (%1, 1, %2) do dnscmd /recordadd lab.shinra.inc experiment%%C /createptr A 10.60.0.2%%C
Then run it from the command line with adddns.bat 10 30. This will populate your zone with a good number of A records. You can verify with dnscmd /enumrecords lab.shinra.inc @. You can also verify that the corresponding PTR records were created with dnscmd /enumrecords 0.60.10.in-addr.arpa @. Now we’ll set up a second server to transfer this zone over to. Configure your server as normal; whether you join it to the AD doesn’t really matter in this case, though if you do, you can transfer over AD integrated zones as well. Let’s get our DNS role installed on it. Here’s a good way to find out the name of a service for installation without having to scroll through a long list.
C:\>oclist | find /I "dns"
Now to install it.
C:\>start /w ocsetup DNS-Server-Core-Role
Once that finishes the role will be installed. We need to configure our original DNS server to allow zone transfers to this new server.
C:\>dnscmd /zoneresetsecondaries lab.shinra.inc /securelist 10.60.0.25
Then we jump back onto our new server to get the zone set up and transferred.
C:\>dnscmd /zoneadd lab.shinra.inc /secondary 10.60.0.2
C:\>dnscmd /zonerefresh lab.shinra.inc
Once this has finished transferring, which with these sizes and being in the same network should be instantaneous, you’ll have a complete read only copy of the zone on this server. Now to make it a master we decommission the old server and make the new one the primary. On the old server we delete the zone.
C:\>dnscmd /zonedelete lab.shinra.inc
Then on the new server we switch it to being a primary zone.
C:\>dnscmd /zoneresettype lab.shinra.inc /primary /file lab.shinra.inc.dns
Then we’ll verify that we were successful from the zone RRs themselves.
C:\>dnscmd /zoneprint lab.shinra.inc
Check your SOA record; if you see that your new server is listed, then the transfer of the master role was successful. Most likely your NS records will not have been updated properly, so we will go through and recreate those ourselves.
C:\>dnscmd /recordadd lab.shinra.inc @ NS redxiiicore.shinra.inc
C:\>dnscmd /recorddelete lab.shinra.inc @ NS cloudcore.shinra.inc
Now we could even take it a step further and create a stub zone on our previous server for our lab.shinra.inc zone. Hop on your old server and let’s get this created.
C:\>dnscmd /zoneadd lab.shinra.inc /stub 10.60.0.25
Check the zone info and you should be seeing the SOA and NS records in there for the zone, but none of the horde of A records that we had created.
You should be feeling up to speed on managing DNS from the command line on your Core installations. Don’t forget that you can also use these on your full Server 2008 (or even older versions) installations as well. The GUI can be easier but don’t let it be your only tool in your arsenal. Remember that one of the places the CLI can shine is in scripting, as demonstrated earlier. For some reference reading this post will be useful for you as it has a list of commands for dnscmd and a quick example.
Now that we have our wonderfully clustered DHCP server running it is fabulous that it has failover but does us not a spot of good if it isn’t configured. So let’s get that wrapped up. Fortunately this is a lot simpler than building a cluster from the command line. We need a scope created and activated for our DHCP range. I am going to use the 10.60.0.100-200/24 range for this. Now for some added redundancy we could also implement the 80/20 rule with a second DHCP server in addition to our clustered DHCP. The way this works is that 80% of the scope would be kept on the cluster and then 20% would be on another DHCP server. Just in case the cluster went completely bananas. 80/20 is definitely something you should look into if you’re pursuing your MCSE or MCITP. We will not be implementing it for this lab though.
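As a sketch of what that 80/20 arrangement might look like with netsh (the standby server name and the split points here are illustrative only, since we are not implementing it in this lab): both servers define the same scope, and each excludes the other’s portion of the range.

```bat
REM Hypothetical 80/20 sketch -- not part of this lab's configuration.

REM On the clustered DHCP server (keeps 80%: .100-.180), exclude the rest:
netsh dhcp server \\turksdhcp scope 10.60.0.0 add excluderange 10.60.0.181 10.60.0.200

REM On the standby DHCP server (keeps 20%: .181-.200):
netsh dhcp server \\backupdhcp add scope 10.60.0.0 255.255.255.0 Headquarters
netsh dhcp server \\backupdhcp scope 10.60.0.0 add iprange 10.60.0.100 10.60.0.200
netsh dhcp server \\backupdhcp scope 10.60.0.0 add excluderange 10.60.0.100 10.60.0.180
```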
For configuring the DHCP server from the command line you go back into your friendly netsh utility. First off we need to authorize this DHCP server in Active Directory.
netsh dhcp>add server turksdhcp.shinra.inc 10.60.0.55
netsh dhcp>show server
You should see the listing for your server there. Now to jump onto the server.
netsh dhcp>server \\turksdhcp
netsh dhcp server>add scope 10.60.0.0 255.255.255.0 Headquarters
We now have our first scope created. It needs a range added.
netsh dhcp server>scope 10.60.0.0
netsh dhcp server scope>add iprange 10.60.0.100 10.60.0.200
This gives us our 100-200 range. We need to set some options though so that our clients will be correctly configured. Even though this subnet currently does not have a gateway I will be setting one anyways.
netsh dhcp server scope>set optionvalue 003 IPADDRESS 10.60.0.1
netsh dhcp server scope>set optionvalue 006 IPADDRESS 10.60.0.2
netsh dhcp server scope>set optionvalue 015 STRING shinra.inc
And that’s it! A whole lot simpler than actually configuring the cluster wasn’t it. It still may be easier through the GUI but the scripting possibilities are pretty exciting. Don’t forget to bring on a client and test it out to make sure it is working well. Then for those of you studying your MCSE or MCITP check out this fascinating reading of how the DHCP process works. It is knowledge well worth having.
As I said we would be working on DHCP next. But to put a bit more difficulty in it we’re going to make it clustered. Clustering is a topic you will find in the 70-643 as well as working with Core. Don’t forget that most of what you can do with the command line on Core applies to the full version as well. So for this lab we’re going to need two additional servers as well as our original domain controller. Note down the address that you have set for your DC as your DC is also running DNS. DNS is vital for a healthy Active Directory. Your AD based network will not function properly without proper support from DNS. Let’s digress a bit and look at how DNS and Active Directory work together.
When a DC is promoted in AD it creates a number of SRV locator records, both for its site and for the list of DCs for that zone. Global catalogs add records for themselves, as do several other Active Directory integrated applications. These are located in the _msdcs.domainname.local zone. When a computer is authenticating to AD, it queries the DNS server for a SRV record for a DC in its site as well as the A record. If you want to get into the fine details of how this all works, and I highly recommend that you do, read through this Technet article. By the way, a handy fix for thrashed locator records is to restart the netlogon service on your problem DC; it recreates any invalid locator records for you. Now back to our project.
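You can poke at these locator records yourself with nslookup; a quick sketch using this lab’s domain (substitute your own zone name):

```bat
REM List the SRV records that DCs register for the domain.
nslookup -type=SRV _ldap._tcp.dc._msdcs.shinra.inc

REM And the records registered by global catalog servers:
nslookup -type=SRV _gc._tcp.shinra.inc
```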
Let’s get our servers created. Set up two Core installations and make sure that they have two network adapters. One will be for our production network and the other will be for a heartbeat network that is specifically for the clusters. Do some reading here for best practices for clustering. While we will not be meeting all of the minimum requirements and best practices since this is a simplified lab situation, you will still want to know what the requirements and best practices are for when you are implementing this in a real life situation. Now configure the first network adapter for your production network, but for the second nic when we configure it we will be leaving off the default gateway and also set no dns for it. Here is a quick run through how you would configure these machines, and then we’ll join them to the domain as well.
First let’s get our interfaces some more distinguishing names. Jump into netsh but only go to the interface level.
netsh interface>set interface name="Local Area Connection" newname="Public Network"
netsh interface>set interface name="Local Area Connection 2" newname="Heartbeat Network"
netsh interface>show interface
You should see the proper names for your interfaces now. This isn’t a necessary step, but anyone coming behind you will be able to distinguish what is what a whole lot more easily. Now let’s set a DNS server for our public network and configure our heartbeat network.
netsh interface ipv4>set dnsserver name="Public Network" source=static address=10.60.0.2
netsh interface ipv4>set address name="Heartbeat Network" source=static address=192.168.1.101 mask=255.255.255.0
I’m sure you’ve noticed by now that you can go by either interface index or name. Whichever you prefer. Don’t forget to set these up on both servers, though don’t forget to assign different addresses either. No need to disable NetBIOS as it is disabled by default. Now let’s get these machines joined to the domain.
C:\>netdom join %computername% /domain:shinra /userd:administrator /passwordd:*
The * serves the purpose of prompting you to enter in your password. You can put the password in place of the * in the command line to skip the prompt. Don’t forget to reboot after you have successfully joined to the domain.
At this point you should step back and make sure that communication between everyone is working OK. If your network is broken in some fashion, it makes getting clustering going a royal pain. Out of the box, Windows Firewall blocks pings, so you will want to allow those through for testing. You’ll need to go back into netsh to do this.
netsh firewall>set icmpsetting type=8 mode=ENABLE
By the way it is also possible to do this all as a one liner from the command prompt like netsh firewall set icmpsetting type=8 mode=ENABLE. Once you’ve verified that network connectivity is working, especially across the heartbeat network, then we will continue. Run the command oclist. This will show you all of the components available to install. Now what we need to get installed first is the FailoverCluster-Core component so that we can start building our cluster.
C:\>start /w ocsetup FailoverCluster-Core
This gives us the cluster.exe command to set up and manage our clusters. A real man manages from the command line after all, rarrgh-rarrrrrgh. The start /w is optional; it just lets the command complete before returning you to the command line. If you don’t mind it running in the background and giving you no notification when it is finished, then omit the start /w. Run this on both of your nodes.
C:\>cluster TurksCluster /create /node:"renocore rudecore" /ipaddr:10.60.0.50/24
This will get our cluster created with both nodes added in. The first parameter is the name that we are designating for the cluster. In 2008 you can now add multiple nodes at the same time rather than first setting up the cluster and then bringing in the nodes. The subnet mask can be in either /xx or /xxx.xxx.xxx.xxx whichever notation you prefer.
C:\>cluster TurksCluster restype
This gives us a list of the available resource types that we have to work with. The type of cluster we will be creating is Node and File Share Majority. This requires a file share, so let’s hop onto a computer that is not part of the cluster and create a share for our witness.
C:\>net share TurksWitness=C:\TurksWitness /GRANT:Shinra\TurksCluster$,FULL /GRANT:Shinra\Administrator,FULL
C:\>icacls C:\TurksWitness /grant Shinra\TurksCluster$:(F)
C:\>icacls C:\TurksWitness /grant Shinra\Administrator:(F)
Don’t forget the $ at the end of the cluster account name. That designates it to look for a computer account in AD. Otherwise you will get an error of “No mapping between account names and security IDs was done.” So this gets our share prepped. Now we need to add it as a quorum resource.
C:\>cluster TurksCluster resource "File Share Witness" /create /group:"Cluster Group" /type:"File Share Witness" /priv SharePath=\\cloudcore\TurksWitness
C:\>cluster TurksCluster resource "File Share Witness" /online
C:\>cluster TurksCluster /quorum:"File Share Witness"
This gets our File Share Witness set up and puts it in for the quorum resource. We now have a Node and File Share Majority. This gives us the extra vote needed for high availability. Let’s start getting our DHCP server cluster set up.
C:\>cluster TurksCluster group "DHCP Server" /create
Now we have a resource group created for our DHCP server.
C:\>start /w ocsetup DHCPServercore
Run this on both servers to get the DHCP service installed. We are in need of some shared storage for this server though. I am using StarWind’s iSCSI Target server to generate some instant SAN storage on a separate Server 2008 machine I set up. Feel free to use whatever you wish for storage, though note that, as of this writing, none of the free Linux-based iSCSI alternatives such as Openfiler currently work with Server 2008 Failover Clustering. This may change in a few months. So we will run through setting up iSCSI storage under Core. Iscsicli is a cryptic beast, so you may wish to peruse the initiator user guide over here. For best practices you would want to segregate the SAN traffic from your public network traffic, but in this case we will maintain simplicity.
C:\>sc config msiscsi start= auto
C:\>sc start msiscsi
This configures our iscsi service to start automatically, and then we start it ourselves so that we can start adding our storage. Note the space after “start=” as it is very important to include that space.
C:\>iscsicli qaddtargetportal 10.60.0.75
If you are using a CHAP login and password with this then you add those after the address. Execute iscsicli listtargets to get your iqn so that you can copy and paste for our next series of commands.
C:\>iscsicli qlogintarget iqn.2006-01.inc.shinra:tsn.f3e716ca5a5c
C:\>iscsicli persistentlogintarget iqn.2006-01.inc.shinra:tsn.f3e716ca5a5c t * * * * * * * * * * * * * * * 0
Note the asterisks with spaces between them. There are 15 of them. This tells the driver to autoconfigure a slew of settings. Check the guide to customize to your set up though. Thanks to Chuck Timon and Microsoft KB 870964 for this one. Now run through the same routine on the other server to get your storage set up there as well. Then let’s bring our iscsi online and get it formatted.
DISKPART> list disk
Look for your new disk in the list. Now we need to bring it online and initialize it. This isn’t quite as easy as when using the GUI though.
DISKPART> select disk=1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> create partition primary
DISKPART> format label="Turks Cluster Storage"
This gets us a primary partition that is filled to the size of the storage we brought up. It will also be formatted to the default file system of NTFS. Optionally you can add “quick” as a parameter to format for things to be speedier in the format process. Now we need to assign a drive letter. So find your new volume in the list.
DISKPART> list volume
DISKPART> select volume 2
DISKPART> assign letter=T
DISKPART> detail disk
Note down the disk ID; we will be using it again soon. Now let’s get this storage added to our DHCP Server group.
C:\>cluster TurksCluster res "DHCP Storage" /create /group:"DHCP Server" /type:"Physical Disk" /priv DiskSignature=0xB92B880C
C:\>cluster TurksCluster res "DHCP Storage" /online
Make sure you include the 0x. This lets cluster know that it is working with a hex number; otherwise it will fail out on an incorrect parameter. Alternatively, you can break out your calculator, convert it to decimal, and put in the decimal number. Don’t forget to bring this resource online. We can now start setting up the resources for our DHCP cluster.
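If you would rather feed cluster.exe the decimal value, any Unix-style shell (the Linux box mentioned earlier will do) can handle the conversion; 0xB92B880C is just the example signature from above:

```shell
# Convert a DISKPART disk signature from hex to decimal.
printf '%d\n' 0xB92B880C
```

which prints 3106637836.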
C:\>cluster TurksCluster res "DHCP IP Address" /create /group:"DHCP Server" /type:"IP Address" /priv Address=10.60.0.55 SubnetMask=255.255.255.0
We have an IP for the DHCP server; let’s get it a name.
C:\>cluster TurksCluster res "DHCP Name" /create /group:"DHCP Server" /type:"Network Name" /priv DnsName=TurksDHCP Name=TURKSDHCP /adddep:"DHCP IP Address"
We get our name and also make the name resource depend upon having an IP. It is time to bring in our DHCP service.
C:\>cluster TurksCluster res "DHCP Service" /create /group:"DHCP Server" /type:"DHCP Service" /priv DatabasePath="T:\DHCP\db" LogFilePath="T:\DHCP\Logs" BackupPath="T:\DHCP\Backup" /adddep:"DHCP Name" /adddep:"DHCP IP Address" /adddep:"DHCP Storage"
This gets our service created and puts all the data onto the shared storage. Let’s bring it online and see if it works.
C:\>cluster TurksCluster res "DHCP IP Address" /online
C:\>cluster TurksCluster res "DHCP Name" /online
C:\>cluster TurksCluster res "DHCP Service" /online
If no errors popped up then congratulations! You have created a DHCP cluster running on Core, and by the command line too! It is fully possible to use the GUI from the management tools, connect to the cluster from there, and create all of this. The benefit of doing it this way is the familiarity you gain with the command line, so that you can make changes swiftly, or, even more importantly, script out any large cluster roll-outs you may be doing. Next part we’ll finish up actually configuring this DHCP server, as it is rather useless at the moment without any scopes created and without being authorized in AD. Let’s go over a few gotchas first, though.
First of all, make sure that all of your network connectivity and storage is configured properly and talking to each other; as usual, work your way up from the physical layer. Then look into the configuration of your resources. A common problem that stops a resource from starting is its dependencies: make sure each dependency can start on its own, and also make sure the resource actually has its dependencies added. For instance, if you forget to add the dependencies to the DHCP Service resource, it will not start until you add them. If cluster.exe is not working, make sure the Cluster Service has started. Finally, if it is not letting you create any resources, make sure you created your cluster management point before you started adding services to it.
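If you suspect a dependency problem, cluster.exe can show you what a resource currently depends on. A sketch of the checks I would run (double-check the switches against cluster res /? on your build):

```shell
REM List the dependencies currently set on the DHCP Service resource
cluster TurksCluster res "DHCP Service" /listdep

REM Add a missing dependency, then retry bringing the resource online
cluster TurksCluster res "DHCP Service" /adddep:"DHCP Storage"
cluster TurksCluster res "DHCP Service" /online
```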
Now that we've got our virtualization platform in place it is time to start building our network. We'll start off with one of the great new features of Server 2008. In addition to picking Standard, Enterprise, or Datacenter, Server 2008 comes in two different flavours. First is the regular Full version of Server 2008, which gets you the GUI and everything else. Then comes Core. Core is a stripped-down version of Server 2008 that is managed from the CLI; only a couple of GUI elements are left. This version also takes up far less disk space and memory than the Full version: it requires only about 100 megabytes for the kernel, and you just add on for the services you start building in. Core is primarily intended as a low resource box that runs a few of the core services for an Active Directory based network, and it has a reduced attack surface due to being so stripped down. Not all roles and features available under Full are available in Core. A few important missing roles are Terminal Services, NAP, and AD CS, and IIS is available but without ASP.NET. Among the features, one of the biggest omissions is the .NET Framework, which means, sadly, that no PowerShell is available on Core (though apparently it is possible to hack it in; I have not tried this yet). Core will still accept remote WMI queries though, so you can still use PowerShell to execute WMI queries against it remotely. So now that we know a bit about Core, let's use it to set up our first forest.
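As an aside on those remote WMI queries: you don't even need PowerShell on the management box, since plain wmic from cmd can query a Core machine remotely. A quick sketch, assuming a Core box named CloudCore and an account with rights on it:

```shell
REM Remote WMI query against the Core box from a Full install or client
wmic /node:CloudCore os get Caption,Version

REM Same idea for a quick look at its disks
wmic /node:CloudCore logicaldisk get DeviceID,FreeSpace
```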
Boot off the DVD and make sure that you select Core. For our first DC we don't really need the extra features of Enterprise, so I would recommend Standard. Installation is pretty simple, so just run the rest of the way through, log in, and set your admin password. The first thing we have to do is get your network configured. I am working with 10.60.0.x, but it is up to you to decide your IP block. We will start off with the netsh command.
netsh interface ipv4>show interfaces
The results will list all of your connections, with the index number in the first column. Find your network connection, as that is the one we will be working with.
netsh interface ipv4>set address name=”2” source=static address=10.60.0.2 mask=255.255.255.0
You can also add a gateway=x.x.x.x parameter to configure your gateway if necessary; since this network is just for testing purposes I have no need for a gateway currently. Also note that the name is the index number of the NIC you are working with. If no errors are returned then the command was successful, so type quit and then, just to be sure, do an ipconfig /all. You should see your new settings listed. Now let's get the name set for this machine.
C:\>netdom renamecomputer %computername% /NewName:CloudCore
Agree to proceed and then we’ll need to reboot the machine.
C:\>shutdown /r /t 0
This issues an immediate reboot of the machine. Don't forget that if you're confused about a command you can issue it with /? to find out more information about it. Now that we have our network configured, let's get Active Directory set up. This gets a bit more complicated than configuring your network, but if you are at all familiar with unattended installations then you won't have too much difficulty.
Core still has our trusty Notepad application. Here is the answer file we will write up for installing AD DS on this machine.
ReplicaOrNewDomain = Domain
NewDomain = Forest
NewDomainDNSName = shinra.inc
SiteName = Shinra-Headquarters
ForestLevel = 3
AutoConfigDNS = Yes
DNSDelegation = Yes
DNSDelegationUserName = dnsuser
DNSDelegationPassword = Pass1word
RebootOnSuccess = Yes
SafeModeAdminPassword = Pass1word
It should be pretty self explanatory, but I will go over a few of the options here. If you want to see more options and find out more about these, use dcpromo /? and/or dcpromo /?:Promotion. NewDomainDNSName is what you want to use for your new forest. It is best practice not to use your public DNS name for your internal Active Directory structure; most people like to use a suffix of .local, but you can use whatever you want. As an example using mine here: if my external DNS name was shinra.com, then shinra.inc would be a good choice for the internal AD structure. SiteName is not necessary, as it defaults to Default-First-Site-Name, but I dislike using such a nondescript name. ForestLevel sets your forest to a particular functional level: the default is Windows 2000, while 2 sets it to 2003 and 3 to 2008. Be careful when setting your forest level, as you can't go back. Since we're starting from scratch here it is fine to go with a 2008 forest level; once we start introducing some RODCs you will need a forest level of at least 2003. The domain level is automatically set from the forest level in this install.
Now we're ready to run through this, so save the answer file (I'll assume you saved it as C:\dcpromo.txt) and execute
C:\>dcpromo /unattend:C:\dcpromo.txt
If all is well you should see the installation run through, and it will reboot at the end if you left the RebootOnSuccess flag set. There you go, you have your first Active Directory forest set up, and on Core no less! Here are a few tasks for you to go through for some good practice. Add a second DC, preferably on Core, and make sure to put a global catalog on it too. Create yourself an OU and put a standard user account in it for yourself to use as well. You could even try writing a script to pre-populate a large number of users; the commands you'll want to check out for this are dsadd and dsmod. Up next we'll get into installing and configuring DHCP.
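For that user pre-population exercise, here is a hypothetical starting point in batch. The OU name, user count, and password are all made up for illustration; adjust the DNs to match your own domain:

```shell
REM Create a practice OU, then stamp out ten enabled test users in it with dsadd
dsadd ou "ou=Turks,dc=shinra,dc=inc"
for /L %%i in (1,1,10) do (
    dsadd user "cn=TestUser%%i,ou=Turks,dc=shinra,dc=inc" -pwd Pass1word -disabled no
)
```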