
Category Archives: Tutorials


Rescue that Workflow Manager from certain doom, or at least get that OutboundCertificate fixed!

SharePoint 2013 and Workflow Manager have always proven to be a winning combination for late nights of troubleshooting involving copious amounts of coffee and a complete loss of sleep. Or whatever may be your preferred means of caffeine intake. Plus your spouse may not appreciate yet another late night without you at home. Workflow Manager is a frustrating, burdensome beast, and it is not on the fun sunny side of life.

[Image: Doom I tell you. Doom!]

In this particular instance we need to resuscitate Workflow Manager from its current undead state. It pretends to be running and responding to your commands. The users believe otherwise, and want to lynch you because their workflows are showing angry messages about no longer being able to talk with the server. Looking at the server, you find that the management databases are shot and cannot be worked with in their current state. In my recent case in particular, it was due to a fabulous mixture of expired certificates, revoked certificates and certificate templates that "update" your current certificates to certificates that are incompatible with Workflow Manager. This restore is also a method that can be used to replace the OutboundCertificate in the Workflow Manager farm if Set-WFNextOutboundCertificateReference and Set-WFNextOutboundCertificateAsCurrent are not working for you.

Microsoft has a pretty decent article on disaster recovery for Workflow Manager 1.0. The problem I found with it is that it was incomplete, which is why I am putting together this post. The topology we are working with in this scenario is a single SharePoint 2013 server, a separate single SQL server, and a separate single Workflow Manager server. This scenario also requires either that you have working backups of your Workflow Manager databases, or that the WFManagementDB and/or SbManagementDB are the only damaged databases. You do have valid backups of everything, right? Go check again, right now, just to be safe. If you are doing a restore on a farm of multiple Workflow Manager servers then you may need a few extra steps to update those servers to the new databases. Also, check that your certificates are up to date and that you know which service accounts are in use on your Workflow Manager farm and what their passwords are.
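If you want a quick inventory of those certificates before you start, a read-only check from PowerShell on the Workflow Manager server will do. This is only a convenience and not part of the restore itself:

# List certificates in the local machine store with their expiration dates,
# so you can note the thumbprints you plan to use.
Get-ChildItem Cert:\LocalMachine\My |
    Sort-Object NotAfter |
    Select-Object Subject, Thumbprint, NotAfter |
    Format-Table -AutoSize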

If you’re skipping ahead to the details on how to do this, here is where you need to start paying attention!

First off we need to uninstall Workflow Manager. Hopefully an easy enough step. If you're installing 1.0 Refresh and you're running Service Bus 1.0, then this would be a good time to move to Service Bus 1.1; it worked flawlessly for me when I did this. If that is the direction you are going to go, then uninstall Service Bus 1.0 as well.

Next step! Let’s install Service Bus 1.1 followed by Workflow Manager Refresh 1.0. Hopefully that went smoothly for you.

Now we need to get the Service Bus farm up and running. Check your SQL server and make sure you remove your SbManagementDB and your WFManagementDB, just in case those still exist. Alternatively, when rebuilding things you could name the databases something else, but I don't see much of a point to that as it will just cause confusion further down the line. Identify the service account you are using for Service Bus and then we'll get the database recreated. Pop open PowerShell and run:

Import-Module ServiceBus

Restore-SBFarm -RunAsAccount DOMAIN\servicebussvc -GatewayDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=SbGatewayDatabase;Integrated Security=SSPI;Asynchronous Processing=True" -SBFarmDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=SbManagementDB;Integrated Security=SSPI;Asynchronous Processing=True" -FarmCertificateThumbprint 814AA8261BE6F0DD9031F802A4D26EBAD020770D -EncryptionCertificateThumbprint 814AA8261BE6F0DD9031F802A4D26EBAD020770D

That will get your replacement SbManagementDB created. The output of a successful run of the command will look something like the following; don't you love how a command this critical defaults to Yes?

This operation will restore the entire service bus farm
Are you sure you want to restore the service bus farm?
[Y] Yes [N] No [S] Suspend [?] Help (default is “Y”):
FarmType : SB
SBFarmDBConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=SbManagementDB;Integrated
Security=True;Asynchronous Processing=True
ClusterConnectionEndpointPort : 9000
ClientConnectionEndpointPort : 9001
LeaseDriverEndpointPort : 9002
ServiceConnectionEndpointPort : 9003
RunAsAccount : DOMAIN\servicebussvc
AdminGroup : BUILTIN\Administrators
GatewayDBConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=SbGatewayDatabase;Integrated
Security=True;Asynchronous Processing=True
HttpsPort : 9355
TcpPort : 9354
MessageBrokerPort : 9356
AmqpsPort : 5671
AmqpPort : 5672
FarmCertificate : Thumbprint: 814AA8261BE6F0DD9031F802A4D26EBAD020770D, IsGenerated: False
EncryptionCertificate : Thumbprint: 814AA8261BE6F0DD9031F802A4D26EBAD020770D, IsGenerated: False
Hosts : {}
RPHttpsPort : 9359
RPHttpsUrl :
FarmDNS :
AdminApiUserName :
TenantApiUserName :
BrokerExternalUrls :

The Service Bus farm has been successfully restored.

Note that it will complain if SbManagementDB already exists, so you will have to delete it or give this one a new name. Now we'll reconnect the SbGatewayDatabase.

Restore-SBGateway -GatewayDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=SbGatewayDatabase;Integrated Security=SSPI;Asynchronous Processing=True" -SBFarmDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=SbManagementDB;Integrated Security=SSPI;Asynchronous Processing=True"

This operation will restore the Service Bus gateway database. This may require upgrading of gateway database and
message container databases.
Are you sure you want to restore the Service Bus gateway database?
[Y] Yes [N] No [S] Suspend [?] Help (default is “Y”):
Re-encrypting the global signing keys.
The following containers database has been restored:
WARNING: Failed to open a connection to the following dB: ”
WARNING: The database associated with container ‘1’ is not accessible. Please run Restore-SBMessageContainer -Id 1
-DatabaseServer <correct server> -DatabaseName <correct name> to restore container functionality.
Id : 1
Status : Active
Host :
DatabaseServer :
DatabaseName :
ConnectionString :
EntitiesCount : 13
DatabaseSizeInMB : 0

Restore-SBGateway : The operation has timed out.
At line:1 char:1
+ Restore-SBGateway -GatewayDBConnectionString “Data Source=sql.jefferyland.com; …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Restore-SBGateway], SqlCommandTimeoutException
+ FullyQualifiedErrorId : Microsoft.Cloud.ServiceBus.Common.Sql.SqlCommandTimeoutException,Microsoft.ServiceBus.Co
mmands.RestoreSBGatewayCommand

Do not be alarmed by the scary messages in there. I was alarmed at first, but apparently everything went well. Next, check your SQL server for SBMessageContainer* databases. According to Microsoft's documentation you'll need to run this command for each one, though in my case it turned out not to be necessary. (If you do have several containers, there is a loop sketched after the sample output below.)

Restore-SBMessageContainer -Id 1 -SBFarmDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=SbManagementDB;Integrated Security=SSPI;Asynchronous Processing=True" -ContainerDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=SBMessageContainer01;Integrated Security=SSPI;Asynchronous Processing=True"

Id : 1
Status : Active
Host :
DatabaseServer : sql.jefferyland.com
DatabaseName : SBMessageContainer01
ConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=SBMessageContainer01;Integrated
Security=True;Asynchronous Processing=True
EntitiesCount : 13
DatabaseSizeInMB : 48.6875

All entities are up to date. No changes were made to entities.
Please run Start-SBHost.
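As promised, here is a rough loop if you have several container databases and don't feel like typing the command for each one. The 1..3 range and the SBMessageContainer01/02/03 naming are assumptions on my part, so match them to the databases you actually see on your SQL server:

# Restore each message container database; adjust the range and names to your farm.
$sbFarmDb = "Data Source=sql.jefferyland.com;Initial Catalog=SbManagementDB;Integrated Security=SSPI;Asynchronous Processing=True"

1..3 | ForEach-Object {
    $containerDb = "Data Source=sql.jefferyland.com;Initial Catalog=SBMessageContainer{0:00};Integrated Security=SSPI;Asynchronous Processing=True" -f $_
    Restore-SBMessageContainer -Id $_ -SBFarmDBConnectionString $sbFarmDb -ContainerDBConnectionString $containerDb
}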

Now we need to add our host to the Service Bus farm.

Add-SBHost -SBFarmDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=SbManagementDB;Integrated Security=SSPI;Asynchronous Processing=True" -RunAsPassword (ConvertTo-SecureString -Force -AsPlainText password!) -EnableFirewallRules:$true

FarmType : SB
SBFarmDBConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=SbManagementDB;Integrated
Security=True;Asynchronous Processing=True
ClusterConnectionEndpointPort : 9000
ClientConnectionEndpointPort : 9001
LeaseDriverEndpointPort : 9002
ServiceConnectionEndpointPort : 9003
RunAsAccount : DOMAIN\servicebussvc
AdminGroup : BUILTIN\Administrators
GatewayDBConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=SbGatewayDatabase;Integrated
Security=True;Asynchronous Processing=True
HttpsPort : 9355
TcpPort : 9354
MessageBrokerPort : 9356
AmqpsPort : 5671
AmqpPort : 5672
FarmCertificate : Thumbprint: 814AA8261BE6F0DD9031F802A4D26EBAD020770D, IsGenerated: False
EncryptionCertificate : Thumbprint: 814AA8261BE6F0DD9031F802A4D26EBAD020770D, IsGenerated: False
Hosts : {Name: workflow.jefferyland.com, Configuration State: HostConfigurationCompleted}
RPHttpsPort : 9359
RPHttpsUrl : https://workflow.jefferyland.com:9359/
FarmDNS :
AdminApiUserName :
TenantApiUserName :
BrokerExternalUrls :

We’ve finished up the Service Bus farm, hopefully successfully, so now we’re ready for the Workflow Manager farm. Fighting!

This can get a little bit messy if you’re running Service Bus 1.1 as there is a buggy cmdlet. If you’re not using Service Bus 1.1, or you do not receive an error like

Could not load file or assembly
'Microsoft.ServiceBus, Version=1.8.0.0, Culture=neutral,
PublicKeyToken=31bf3856ad364e35' or one of its dependencies.
The system cannot find the file specified.

then you can skip the following. If you are using Service Bus 1.1, then we need to work around a call to an old Service Bus assembly in one of the cmdlets. Thanks to these posts, http://www.wictorwilen.se/issue-when-installing-workflow-manager-1.0-refresh-using-powershell and https://carolinepoint.wordpress.com/2012/07/10/sharepoint-2010-powershell-and-bindingredirects/, we have a valid workaround.

Create or edit a file named C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe.config and paste the following into it:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.ServiceBus"
                          publicKeyToken="31bf3856ad364e35"
                          culture="en-us" />
        <bindingRedirect oldVersion="1.8.0.0" newVersion="2.1.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Then restart your PowerShell session to make this active. You may want to undo this part after you’re done restoring the farm just to be safe.

Continuing on with the farm build run the following.

Import-Module WorkflowManager

Restore-WFFarm -InstanceDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=WFInstanceManagementDB;Integrated Security=SSPI;Asynchronous Processing=True" -ResourceDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=WFResourceManagementDB;Integrated Security=SSPI;Asynchronous Processing=True" -WFFarmDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=WFManagementDB;Integrated Security=SSPI;Asynchronous Processing=True" -OutboundCertificateThumbprint 814AA8261BE6F0DD9031F802A4D26EBAD020770D -EncryptionCertificateThumbprint 814AA8261BE6F0DD9031F802A4D26EBAD020770D -SslCertificateThumbprint 814AA8261BE6F0DD9031F802A4D26EBAD020770D -InstanceStateSyncTime (Get-Date) -ConsistencyVerifierLogPath "C:\temp\wfverifierlog.txt" -RunAsAccount DOMAIN\workflowsvc -Verbose

A successful run through should get you output similar to this:

VERBOSE: [5/14/2015 11:56:58 PM]: Created and configured farm management database.
VERBOSE: [5/14/2015 11:56:58 PM]: Created and configured Workflow Manager resource management database.
VERBOSE: [5/14/2015 11:56:58 PM]: Created and configured Workflow Manager instance management database.
VERBOSE: [5/14/2015 11:56:58 PM]: Configuration added to farm management database.
VERBOSE: [5/14/2015 11:56:58 PM]: Workflow Manager configuration added to the Workflow Manager farm management
database.
VERBOSE: [5/14/2015 11:56:58 PM]: New-WFFarm successfully completed.
FarmType : Workflow
WFFarmDBConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=WFManagementDB;Integrated
Security=True;Asynchronous Processing=True
RunAsAccount : DOMAIN\workflowsvc
AdminGroup : BUILTIN\Administrators
Hosts : {}
InstanceDBConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=WFInstanceManagementDB;Integrated
Security=True;Asynchronous Processing=True
ResourceDBConnectionString : Data Source=sql.jefferyland.com;Initial Catalog=WFResourceManagementDB;Integrated
Security=True;Asynchronous Processing=True
HttpPort : 12291
HttpsPort : 12290
OutboundCertificate : Thumbprint: 814AA8261BE6F0DD9031F802A4D26EBAD020770D, IsGenerated: False
Endpoints : {}
SslCertificate : Thumbprint: 814AA8261BE6F0DD9031F802A4D26EBAD020770D, IsGenerated: False
EncryptionCertificate : Thumbprint: 814AA8261BE6F0DD9031F802A4D26EBAD020770D, IsGenerated: False

This will get our WFManagementDB recreated as well. Time to add the host back in!

Add-WFHost -WFFarmDBConnectionString "Data Source=sql.jefferyland.com;Initial Catalog=WFManagementDB;Integrated Security=SSPI;Asynchronous Processing=True" -RunAsPassword (ConvertTo-SecureString -Force -AsPlainText password!) -EnableFirewallRules:$true

This should have your farm up and running. Let’s check the status.

Get-WFFarmStatus

HostName ServiceName ServiceStatus
——– ———– ————-
workflow.jefferyland.com WorkflowServiceBackend Running
workflow.jefferyland.com WorkflowServiceFrontEnd Running

Restoration is done! This is where Microsoft’s documentation leaves you hanging. You need to reconnect the farm with SharePoint.

Register-SPWorkflowService -SPSite "https://sharepoint.jefferyland.com/" -WorkflowHostUri "https://workflow.jefferyland.com:12290" -AllowOAuthHttp -Force

Your workflows should now be showing up once again, but we're not done yet; we need to perform some maintenance on the SharePoint server. First, clean up the old certificates, using the thumbprint of the old certificate as your filtering criteria:

Get-SPTrustedRootAuthority | ?{$_.Certificate -match "BF5CA00B6A639FE5B7FF5688C9A38FEBFBF03552"} | Remove-SPTrustedRootAuthority -Confirm:$false

Next we need to run some jobs to update the security token, otherwise you'll get an HTTP 401 Invalid JWT token error. Alternatively you can wait until after midnight for the timer jobs to run themselves, but I'm pretty sure that would not be the healthiest decision here. You can start the jobs from Central Administration as described below, or from PowerShell using the sketch that follows the list.

In Central Administration go to Monitoring->Timer Jobs:Job Definitions
Run these jobs:
Refresh Trusted Security Token Services Metadata feed.
Workflow Auto Cleanup
Notification Timer Job c02c63c2-12d8-4ec0-b678-f05c7e00570e
Hold Processing and Reporting
Bulk workflow task processing
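Here is that PowerShell sketch. Get-SPTimerJob and Start-SPTimerJob are the standard SharePoint cmdlets, but the display-name patterns below are my assumption based on the job names above, so adjust them to whatever you actually see in Central Administration on your farm:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Display-name patterns for the jobs listed above; tweak them to match your farm.
$patterns = "Refresh Trusted Security Token*",
            "Workflow Auto Cleanup*",
            "Notification Timer Job*",
            "Hold Processing and Reporting*",
            "Bulk workflow task processing*"

foreach ($pattern in $patterns) {
    Get-SPTimerJob | Where-Object { $_.DisplayName -like $pattern } | ForEach-Object {
        Write-Host "Starting $($_.DisplayName)"
        Start-SPTimerJob $_
    }
}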

Now check on your workflows. They should be running nice and healthy! That wraps up this post on rescuing your Workflow Manager farm; hopefully it saves you from losing a night or two of sleep.


Tutorial on Configuring and Migrating Redirected Folders

In recent migrations I've seen some confusion about how to work with redirected folders. Let's first go over a few reasons for the existence and usage of redirected folders. The most important reason is that they are absolutely critical in an RDS farm if you want any sort of user data persistence between servers, and they also help cut down on the amount of local disk space used by each server. Your users can be load balanced from server to server without worrying about which is their "home" server or having to configure their account on each server, and they can keep their habit of saving critical data to their Documents folder. Another reason is that all of your users' profiles are stored in a central location, which means their Documents and Desktop folders are stored centrally, which means you'll be able to back those up. When you have this implemented as just a roaming profile, all of that data is copied down to the server at logon and then synched back to the central location. This slows things down for everyone: network bandwidth is taken up unnecessarily at logon, and logons take longer for the user. Here is where folder redirection jumps in to help. With your Desktop, Documents, Pictures and so forth being redirected, everything is pulled off a share rather than being copied down to the server. That frees up a lot of bandwidth and speeds up login times, so everyone is a lot happier. You'll want to nip those PSTs right away though, otherwise you could end up with a lot of performance problems.

Anyhow, let's go on to the implementation. We'll begin with configuring the redirected folders. Create a share, we'll name it Folders, and configure the share permissions with Everyone:Full. Generally whenever you create a share you want to configure the share permissions as Everyone:Full unless you have a very good reason not to; you normally want to control all permissions through NTFS, which simplifies management and troubleshooting. For the NTFS permissions, first disable inheritance (uncheck "Include inheritable permissions"). The permissions you want on this folder are Full Control for SYSTEM, CREATOR OWNER and Administrators, and for Authenticated Users you'll need to set advanced permissions: Create Folders/Append Data, Read Permissions, Read Attributes and Read Extended Attributes. This creates a folder where the data is secure from prying eyes yet administrators will still be able to access it without breaking redirection.
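If you prefer to script those permissions rather than click through the dialogs, here is a rough sketch using net share and icacls. The D:\Folders path is an example, and the mapping of the advanced permissions above to icacls rights abbreviations is my own reading, so double-check the result in the Security dialog afterwards:

# Rough sketch only; D:\Folders and the share name are examples for your environment.
New-Item -Path D:\Folders -ItemType Directory | Out-Null
net share Folders=D:\Folders "/GRANT:Everyone,FULL"

# Break inheritance, then grant Full Control to SYSTEM, Administrators and CREATOR OWNER
# (CREATOR OWNER applied to subfolders and files only).
icacls D:\Folders /inheritance:r
icacls D:\Folders /grant "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" "CREATOR OWNER:(OI)(CI)(IO)F"

# Authenticated Users: Create Folders/Append Data, Read Permissions, Read Attributes,
# Read Extended Attributes, on this folder only.
icacls D:\Folders /grant "Authenticated Users:(AD,RC,RA,REA)"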

Next up is creating the group policy for configuring folder redirection. Create a new policy and name it Folder Redirection. The section we’ll be working in is User Configuration/Policies/Windows Settings/Folder Redirection. You’ll want to plan out your folder redirection strategy before you start implementing. What folders are important to you, how are you getting the data there, and perhaps even most importantly how are you going to back this policy out when you’re done. Once you’re done planning then start editing your policy. For this tutorial we’re erring on the side of simplicity.

The first setting gives you two options, Basic and Advanced. Most times you will want to use Basic, but it depends upon what you are trying to achieve. With Basic you point the folder to the share that you want, as a UNC of course, i.e. \\storageserver\Folders\. Once selected you normally will want the option of Create a folder for each user under the root path. It will even show you what the path will look like at the bottom. With these options, everyone affected by the policy will be redirected to the same location. With the Advanced option you get more flexibility in how you configure users' redirected folders, since you can use group membership to choose the share used to store the redirected folders. On the next tab over we have Settings. By default users are granted exclusive rights to the folder. Also by default the contents of the folder will be moved to the new location. This simplifies the job of moving content, but the downside is that it prevents you from pre-staging the move instead of having it happen at logon. But you will have planned this out already, right? The last unchecked option is to apply to 2000/XP/2003 operating systems. You'll want to check this depending upon where these folders will be used, though it will disable some redirection options in Vista/7.

Now the final option is Policy Removal, which you will have also planned out ahead of time. If you select leave the folder in the new location, then when the policy is removed their profile still redirects to \\storageserver\Folders\ and the data still remains there. If you select redirect the folder back to the local user profile, then what happens depends upon what you checked for Move the contents to the new location. If you have it checked, then the folder redirects to their local profile and the data is copied, not moved, to the local profile; you'll still need to clean up the old location. If you have the option unchecked, then the folder will redirect to their local profile but all the data will still stay on the share, and your users will end up with empty local folders. This is why you'll want to plan your exit strategy, because at some point some or all of your users' data will end up being stored somewhere else. Since we're preparing a migration scenario, most likely everything will be set up with the defaults, so that is what we are going to do here: set up the folders with the defaults. We'll configure redirection for Desktop, Documents, Pictures, Music, Videos, Favorites and Downloads. Not all of these will be available depending upon what versions of Windows you are working with. Also note that there is an option for Pictures, Music and Videos to follow the Documents folder, which is what you'll want to select unless you have a reason to split them amongst multiple shares. Don't forget to allow time for the policy to replicate to any other DCs or force replication, and that you may need to run gpupdate on the client to force immediate pick-up of the change.

Now that we have configured our folder redirection go ahead and populate a few profiles with data. If you check the Folders share that you created you’ll see that it is getting populated with account names and the redirected folders. Test logging into a few different servers as well to make sure that the folders are following your accounts. You can also pull up the properties on them to verify the path pointing to the share. If that is all working fine then let’s look at migrating the redirected folders.
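Beyond eyeballing the folder properties, a quick read-only check on a client is to look at the shell folder paths in the registry. The value names shown (Desktop, Personal for Documents, My Pictures) are the standard ones; this is just a sketch for verification:

# Read-only check of where the shell folders currently point for the logged-on user.
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' |
    Select-Object Desktop, Personal, 'My Pictures'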

We've got several options for migrating the folders. The simplest method, and definitely the one you'll want to use when dealing with small amounts of data, is to let the policy take care of it for you. Let's test it out. Create a share somewhere else named NewFolders and configure it with the same share and NTFS permissions as listed earlier. Edit your folder redirection policy and change the path to point to your new server. Also make sure you've checked Move the contents to the new location. That's the part that is doing the work for us. Once you're done with the changes give it a test. You'll probably see a longer logon the first time as data is copying across. There's also a chance that it won't be picked up until the next logon due to asynchronous policy processing. Note that the data was actually moved, not copied. This is great for when there isn't much data to move, and you can also do it in phases moving one folder at a time. Something else you could do if you want to migrate accounts in phases is to create policies for redirection and link them to migration OUs that you create lower than where the original redirection policy is linked.

When you're working with larger amounts of data though, you may want to pre-stage the data rather than have it be moved at first logon. This requires a bit of work. Since the folders get locked down by default if you have Grant the user exclusive rights checked, the administrator account does not have access to the folders. If you take ownership of the folders, that will break redirection since the policy checks for ownership of the folder. What you'll need to do is go into the policy and uncheck the exclusive rights option everywhere. At the same time you'll also want to uncheck Move the contents to the new location. This is best done as early as possible in the migration, just to make sure all clients have picked up the updated settings and to cut down on the amount of weirdness you may encounter. Once this is done, make sure the NTFS permissions mentioned earlier are configured on the top level folder for the share. Now go in and, if the Administrators group doesn't have ownership of the folder, take ownership of it, then check the box to replace owner on subcontainers and objects. OK out of everything, then open up the advanced NTFS permissions. Check the box for Replace all child object permissions with inheritable permissions from this object. Now use whatever method you prefer to copy the data from there to the new redirected folders share. Robocopy is my preference.
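As a sketch of the copy itself, assuming the old share is \\oldserver\Folders and the new one is \\newserver\NewFolders (both example names), something along these lines carries the NTFS security across along with the data:

# /E copies subfolders (including empty ones), /SEC copies NTFS security,
# /ZB falls back to backup mode on locked files. Server and share names are examples.
robocopy \\oldserver\Folders \\newserver\NewFolders /E /SEC /ZB /R:1 /W:1 /LOG:C:\temp\prestage.log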

Now that you've pre-staged all the data and the policies are configured so that permissions do not break anything, it is time to update the policies to point to the new shared folder. Same as last time, just update the UNC to the new location, once again making sure that Move the contents to the new location is unchecked. You will probably want to take the old share offline just to be safe. This will flush out any systems that are not processing group policy properly.

Now, what happens if you go ahead and delete the group policy rather than reconfigure it? Refer back to the earlier section on Policy Removal. Assuming the policy you deleted was left at the defaults for policy removal, all clients will be left pointing at the old share until told differently. The fix is simple: create a policy with the new redirection settings, and once it is picked up the user will be pointed to the new location. What if you are just trying to remove folder redirection altogether? Hopefully you set the policy removal to redirect back to the local user profile. But if you have not, create a policy and set each redirected folder's target location to Redirect to the local user profile location. Once this policy has been applied everywhere, it is safe to delete it altogether.

References:

http://technet.microsoft.com/en-us/library/cc732275.aspx

http://support.microsoft.com/kb/288991

Setting Up Server 2003 as a RADIUS with DD-WRT

A co-worker of mine was having some difficulties setting up a RADIUS server for his wireless network, which is what prompted this particular article. For your wireless infrastructure there are times when you need a more centrally controlled solution to the authentication problem. This is where RADIUS, and more to the point Microsoft's IAS, steps in. For your trivia needs, RADIUS stands for Remote Authentication Dial-In User Service, while IAS stands for Internet Authentication Service. Normally I would be setting this up under Server 2008 but our needs were calling for Server 2003. I may follow up with how to do this under Server 2008 as well and even delve into putting together an IAS farm. The WAP being used is a Buffalo WHR-125 with a fairly current build of DD-WRT v24 SP2 (09/24/09) on it.

First off, before installing IAS we will need a certificate for it to use. There are several ways of achieving this. The first method, and the easiest/cheapest, is creating a self-signed certificate using the IIS 6 Resource Kit from Microsoft. The particular program we need from it is SelfSSL, so run through a custom installation and install SelfSSL. Open up a command prompt, navigate to where SelfSSL was installed, and construct a certificate like this:

C:\Program Files\IIS Resources\SelfSSL>selfssl  /N:CN=server.domain.local /K:1024 /V:1825

This will get you your self-signed certificate. Of course you can use 3rd party certificates as well. Another method is to issue one from an internal CA. Don’t forget to implement CA best practices when using one. I personally would opt for a self-signed certificate unless you already have a CA available.

Next up is getting IAS installed. You will find it under Add/Remove Programs > Add/Remove Windows Components; look for Networking Services, go into Details, and Internet Authentication Service will be displayed just a few entries down. Once installed, open up the IAS MMC and let's get into configuration. Though we should set up our users first. I went with creating a security group named Wireless Authentication and added my users there. Note that you will need to allow these users for remote access as well. One way is to go into the user's properties and on the Dial-In tab select Allow access. This isn't my preferred method though as it creates more work. The other method I shall detail a bit later.

Bring up your IAS console and you'll see the available categories. We need to get ourselves configured for our access point, so we will create a RADIUS client. Right click on RADIUS Clients and select New RADIUS Client. Give the client a name and point it to the address of the access point. The next menu is for selecting our vendor, which we will want to keep as RADIUS Standard for our configuration, as for most configurations. Put in a key for this client and note it down, as we will need to configure it on the WAP later on. No need for the Message Authenticator attribute as it is used by default with EAP, which is what we will be configuring. For more information about it read here.

We have our client configured on the server but we are also in need of a Remote Access Policy. Right click the Remote Access Policies and select New Remote Access Policy. We will go with the first option for setting up our policy, though creating a custom policy is easy enough as well. On the next screen Access Method we will select Wireless. On the next screen we can put our group to use. Add in your Wireless Authentication group, unless you prefer to control things at the user level. I prefer security groups so that is what we will use. Select PEAP for the authentication method. Check the configuration of it to ensure that EAP-MSCHAP V2 is selected and that the proper certificate is selected as well. If you get an error when selecting Configure complaining about certificates then you need to go back and verify that you have a properly issued certificate. This is where most problems stem from. In the configuration you may also wish to enable Fast Reconnect. I have read about some clients having issues with this but have not had any problems in my configuration. Your mileage may vary. Disable it if you are having problems authenticating clients routinely. Finish this wizard and you’ll have your policy. We’re not quite done with it yet though.

Bring up the properties on your newly created policy. On the encryption tab you will want only Strongest encryption checked. If there are authentication issues though, you will want to enable the others for diagnostics until you figure out what is properly supported by your WAP. This is also where we can enable the alternate method for allowing our users. Go to the Advanced tab and add Ignore-User-Dialin-Properties set to True. This will ignore the setting on your user’s Dial-in tab and truly allow you to control access via groups. Otherwise user settings will trump group settings, which can make for a headache in troubleshooting. Last thing to do is right click the root folder, Internet Authentication Service, and select Register server in Active Directory. What this does is add your server to the RAS and IAS Servers security group, which enables it to read accounts from your AD. Once we are done here we can finally go configure our access point.

This is specific to DD-WRT, so be sure to verify how to configure your own access point. Connect to your access point and go to the Wireless tab, then Wireless Security. Set it to WPA2 Enterprise and make sure you are using AES, unless you have a reason not to. Put in the address for your IAS server and now would be a great time to make sure that it is a static address. Leave the port as 1812 as IAS listens on that out of the box. Finally put in the preshared key that you configured from earlier. Save then apply and your access point is in business. All that is left is configuring your clients.

This is best done through Windows’ wireless configuration. Manually create a new connection configured with your WAP’s SSID and go into the Security settings on it. Set it to use PEAP and if you are using a non-domain joined machine, that also does not have the certificate that you configured the server with, then tell it not to validate the certificate and also not to use your domain logon and password. Connect wirelessly to your access point and see if you’re successful. If you are not then check your server’s System event log for errors. If you are getting bad username/password errors, and you know your username and password are correct, then start looking at your encryption and configured authentication protocols to make sure they all match. If you are seeing errors about no matching policy then make sure you have your user in the right group or matching the criteria of your policy. That covers the majority of problems you will run into when configuring IAS. Even if you don’t have a use for IAS as a RADIUS it is a good idea to set it up a few times for learning purposes when pursuing an MCSE.

Windows Server Backup and Exchange 2007 with iSCSI

Service Pack 2 has recently been released for Exchange 2007, which enables the long-awaited integration of Exchange-aware backups with Server 2008's new Windows Server Backup. WSB is Microsoft's replacement for the old ntbackup that we all know and love. This new backup is simpler to use than ntbackup and has a number of interesting new features, but it also lacks some of the more useful features of ntbackup. One of the missing features that is rather vexatious is that you have to back up whole volumes; you can't back up just the mailbox stores, or even specific files and folders. Another missing feature is that you can't back up to a specific folder or a mapped drive. This can cause problems, especially at small businesses that don't feel like shelling out for a more robust backup program. You will have to dedicate a whole volume to WSB, so this requires a bit more planning ahead. This is a problem that we had to get creative about solving on the spot last night.

The client has an Exchange 2007 server running on Server 2008. No backup software had been acquired as they had been waiting for SP2 to enable Exchange aware backups. SP2 installed just fine, which was great considering all the other migration issues we had earlier, but then the hang-up we ran into was discovering that WSB wants its own volume for backing up, and doesn’t want to backup to a mapped drive on one of the other servers. This was a source of consternation for a bit as we did not have a spare volume available nor could we just grab an external drive for this either. Fortunately StarWind Software has this great, and free, iSCSI target software. Using StarWind we were able to turn a chunk of storage on the server into a virtual drive and set it up as an iSCSI target. All of this without having to reboot too, which is a huge plus. We connected this to the Exchange server using iSCSI and that meant we were finally able to backup the server and flush those transaction logs that had been building up. This made for a pretty quick and easy fix as StarWind is simple enough to set up.  If you are in need of a quick fix for your backups this is one way to do it.
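For reference, the moving parts looked roughly like this once the StarWind target was published. The portal address, IQN and drive letters below are placeholders rather than the client's actual values, so treat this as a sketch:

# Register the iSCSI portal and log into the target (address and IQN are placeholders).
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:backup-target

# After bringing the new disk online and formatting it (here as E:), run an
# Exchange-aware full backup so the transaction logs get truncated.
wbadmin start backup -backupTarget:E: -include:C:,D: -vssFull -quiet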

Exchange 2007 Single Server Migrations for Profit or Headache

I was originally writing up a guide for migrating, actually transitioning, Exchange 2003 to Exchange 2007. There are lots of guides out there with better screenshots and perhaps even better written steps, so I would not really be meeting a need by adding another. Instead I am scrapping all of my original work and concentrating on issues that I believe are not talked about as much. Mostly these issues affect those doing single server migrations, which basically means you have one Exchange 2007 server holding all of your roles. They have caused me a great deal of headache and drama, which I am sure is true for others doing such migrations as well. I would imagine that this is mostly the SMB sector, which is where the majority of my work in this is being done. Let's talk about the biggest issue now, client access.

The CAS role plays a big part in your Exchange organization as it is the broker for all requests to your mailboxes. You will have MAPI requests as well as HTTPS, POP3 and others coming into this server. As a security side note, the recommended setup is to have your CAS role on your internal network with a reverse proxy in your DMZ for proxying requests through to your CAS. When the CAS receives a request for a mailbox that resides on a 2003 server, it proxies that request through to your 2003 server. No issues at all there. The problem that comes up with having a CAS on the same server as your Mailbox role, though, is that web requests no longer get proxied to your 2003 servers; they get redirected. This is due to davex.dll handling the requests on a mailbox server, and it will grab the requests first. Exprox.dll is what handles proxying. This redirection is not configurable either. So that causes a problem when it redirects an external request to an internal FQDN. That doesn't work out too well and you get lots of angry OWA users wondering why their logins take them to an invalid address. For a more in depth explanation take a look here. Let's take a look at a few ideas for mitigating this issue.

First off, an easy fix would be to make sure your Exchange 2003 FQDN has a matching public address. This is not a recommended setup at all, though. It is against best practices to have your internal domain match your external domain, not to mention you can get a number of funny DNS issues going on if this is the case unless you've planned things out well. Read this article for some more DNS information, and especially look at the split-brain section. All of this can turn your easy fix into a much more complicated fix. If the stars do happen to be right on your migration, though, then go for this. Set up a public record matching your internal Exchange 2003 name and you'll be set. This will be transparent to your users.

Next up would be to use a reverse proxy such as ISA 2006. This would be great as it keeps the strict boundaries of your DMZ intact and keeps your Exchange servers from having to blur the lines. This doesn't seem to be something that most SMBs care about in my experience though. They don't seem to see the need for security and how having a properly defined DMZ fits into this. But that goes into an entirely separate article and could sound a bit ranty.

Other methods will require a bit more cooperation from your users. Remember, in Exchange 2007 the OWA access by default is /owa. So you will need to communicate this to your users as you migrate their mailboxes over. Then, remove the /exchange virtual directory through the Exchange Shell and recreate it in IIS. Finally, set up /exchange with a custom 403 redirect to a different port on your external address. Mind you that you’ll need to make sure that port does point to your legacy server. This either requires your firewall to be able to do port translation or changing the ports on your 2003 server.
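The removal piece of that would look something like the line below from the Exchange Management Shell; the server name is a placeholder, and you would then recreate /exchange manually in IIS with the custom 403 redirect:

# Remove the legacy /exchange virtual directory from the 2007 CAS; server name is an example.
Remove-OwaVirtualDirectory -Identity "EX2007\Exchange (Default Web Site)" -Confirm:$false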

Finally, and the most recommended method, is to set up a temporary virtual machine that will purely host a CAS role. Then everything will be proxied as it is supposed to be. The downside is that it would require a separate license, in which case you might as well plan for a separate CAS to begin with.

Fortunately, as long as everything is configured properly, Outlook Anywhere and ActiveSync seem to work just fine. Some dangers with those are internal DNS issues or improper communication with a global catalog. This can add to your headache, so you will want to cozy up to rpcping, which you can grab from Microsoft and get more info about how to work it from here. Another great site I have recently found out about is the Remote Connectivity Analyzer. This site will enable you to test Outlook Anywhere, ActiveSync, SMTP and Autodiscover with detailed error messages about where these break down. It will become your best friend very swiftly.

I guess the moral of all these suggestions is to make sure you have your migration well planned out. Run it through a test lab first if you are able. Definitely make sure you test it out, and definitely don’t spring it on your users unawares. You could be in for quite a “fun” surprise.

DFS On Core — You’re Doing It Replicated

Anyone taken a look at Windows Server 2008 R2 yet? Things I’m excited about in it are PowerShell on Core, AD cmdlets, and the AD Recycle Bin. PS on Core is the most exciting addition though. Maybe later on I will start delving into R2 and talk about working with that on Core. This time, though, we are going to deal with setting up a basic DFS using Windows Server 2008 Core machines.

Core makes for a low resource file server that you can deploy to do its job without letting layers of the OS get in the way. Using it for a DFS will be a step in the right direction towards high availability of your data as well. Furthermore, it can be used as a way to put some controls on your bandwidth utilization by having replicas of your data in locations that are local to your users. Failover is provided by pointing the users at the namespace, which will then direct the users to the nearest server. Let's run through putting together a setup on Core.

Grab our first server and let’s install the DFS NameSpace role.

C:\> start /w ocsetup DFSN-Server

Once this is complete we can start breaking out our trusty dfsutil.exe tool. We will start out with making a domain based namespace. Set up a share to use for this.

C:\> mkdir TurksNS
C:\> net share TurksNS=C:\TurksNS /GRANT:"Authenticated Users",FULL

Don’t forget to customize the share and NTFS permissions to your specific needs.

C:\> dfsutil root adddom \\renocore\TurksNS "2008 Namespace"

You can also add V1 or V2 as a parameter; the default is V2. V1 is a Windows 2000 Server mode namespace while V2 is a Windows Server 2008 mode namespace. Note that a requirement for a V2 namespace is a Windows Server 2008 domain functional level. If you receive any "The RPC server is unavailable" errors, make sure the DFS Namespace service is running. The easiest way is to reboot, but you can also run the sc command to start up the service.

C:\> sc start dfs

After that, if you are still getting RPC errors then check your firewall and start going down the usual RPC troubleshooting path. Let's verify that we have created our domain based namespace.

C:\> dfsutil domain shinra.inc

You will see your newly created namespace there. Of course it isn’t doing much for us right now so let’s create some targets for it. Create another share on this server (or really any server) and add a link.

C:\> dfsutil link add \\shinra.inc\TurksNS\Data \\renocore\data

If you browse to \\shinra.inc\TurksNS\Data via UNC or just map a drive, you'll now see the data available in there. This gets us a running DFS, but it really isn't anything more than a fancy way to share data right now. There are no multiple targets, so no replication is occurring; if this server goes down, there goes the access to the data. Let's get some targets in there to fulfill the D in DFS. Jump onto another server, install the DFSN-Server role, and make yourself a share to add to the pool. Don't forget to make sure it has the same share and NTFS permissions as your first share, otherwise things could get difficult when troubleshooting problems later on. Once you have it ready we can add the target.

C:\> dfsutil target add \\shinra.inc\TurksNS\Data \\RudeCore\Data

We have our links now. But we still have no replication. To get this setup we need yet another role added.

C:\> start /w ocsetup DFSR-Infrastructure-ServerEdition

We will then set up a replication group for our folder here.

C:\> dfsradmin RG New /RgName:TurksData
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RudeCore
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RenoCore

This gives us a replication group with our two servers added in as members. Next we will bring in our data for replication.

C:\> dfsradmin RF New /RgName:TurksData /RfName:TurksData /RfDfsPath:\\shinra.inc\TurksNS\Data /force

We have a folder set for replication, but now we need replication links so that the data may flow. Note that force is required because we set up our namespace target first.

C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RudeCore /RecvMem:RenoCore /ConnEnabled:True /ConnRdcEnabled:True
C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RenoCore /RecvMem:RudeCore /ConnEnabled:True /ConnRdcEnabled:True

Close to the end but we still need to bring in memberships to this replication group.

C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RenoCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:True /force
C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RudeCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:False /force

Replication should start flowing smoothly shortly. If you don't have any data in there, or if you have prepopulated the shares, then you won't know for sure if replication is working properly. You can run a test from the command line utility.

C:\> dfsradmin PropTest New /RgName:TurksData /RfName:TurksData /MemName:RenoCore

This will start the test from RenoCore and the data will flow to Rudecore. Generate the results with dfsradmin.

C:\> dfsradmin PropRep New /RgName:TurksData /RfName:TurksData /MemName:RenoCore

You’ll find an html and xml file generated to pull up in your web browser. Of course you may just find it easier to do things on your own with creating a new different file on both shares and verifying if it is replicated to the other. But the good thing about the report is that it is detailed and will help you in tracking down any issues you may be having. You can also use dfsradmin to automatically create the folders for you when you use dfsradmin RF. Just add them into the namespace later on. So let’s touch on one last topic here, replication of large amounts of data.

It is OK to run through this with a small amount of data that the DFS may need to replicate initially, but if you get into large amounts, which I generally consider to be anything over 400 or 500 GB, you will definitely want to prepopulate things. Otherwise your DFS may choke on a few files initially and cause you all sorts of headaches. Not to mention it just plain gives you more control over everything. This all does depend upon the bandwidth available to you, of course. The method I normally use is robocopy. You would want to use /E /SEC /ZB. Instead of /SEC you could use /COPY:DATSOU to also include the auditing information.

Extra reading:

DFS Step-by-Step

DFS FAQ

Dfsutil Breakdown

DFS Best Practices

Building Your Fortress with RODCs on Core

Now for the topic that you all have been waiting for. Building an RODC! Read only domain controllers are another one of those awesome additions to Server 2008. An RODC holds read only copies of parts of your AD. They're ideal for branch offices or even your DMZ where you need heightened security but also still need access to your AD. RODCs don't contain a copy of your credentials; they only cache those that you allow per policy. In cases of customized AD-integrated applications you can also mark certain attributes in your AD as filtered. Filtered attributes do not replicate to an RODC, so if the RODC is ever compromised the attacker will not gain this critical information. Furthermore, any changes made on an RODC do not replicate back out. If, for instance, someone makes changes to the SYSVOL folder, those changes will not replicate out to all the other DCs in the forest. It will make that SYSVOL out of sync with the rest of the forest, though, and could cause some Group Policy idiosyncrasies. If you are using DFS replication for SYSVOL this problem is fixed automatically. Later I may talk about how to enable DFS-R.

As a side note, anyone running VirtualBox under Linux and has switched to the newly released 2.6.29 kernel may be having a bit of trouble with their VB installation. If you are receiving an error message like this when starting a VM:

Failed to load VMMR0.r0 (VERR_SYMBOL_NOT_FOUND).
Unknown error creating VM (VERR_SYMBOL_NOT_FOUND).

Then you are in need of editing the vboxdrv Makefile. You should find this in /usr/src/vboxdrv-2.1.4/Makefile. You might need to tweak the version number depending upon your installed version. Uncomment the line # VBOX_USE_INSERT_PAGE = 1. Re-run your /etc/init.d/vboxdrv setup command under your root account (or just use sudo) and you should be good to go. More information about this is available here.

Let's get a new VM created that we will be purposing for our RODC. Get it installed and joined to the domain, but we'll be building a different answer file for the dcpromo. One important thing to remember, for practical as well as testing purposes, is that installing an RODC requires a minimum forest functional level of Windows Server 2003. Also remember that you only need one DC in your domain running Server 2008; no need to migrate over all your DCs yet. You also have to prep your forest. Log in as an Enterprise Admin on your schema master, mount your Server 2008 DVD, and run:

C:\>mkdir C:\adprep
C:\>D:
D:\>xcopy /E D:\sources\adprep C:\adprep
D:\>C:
C:\>cd adprep
C:\adprep>adprep /rodcprep

This copies over adprep files and then preps the forest DNS partitions for replication to an RODC. Now to set up your answer file:

[DCInstall]
InstallDNS=Yes
ConfirmGc=Yes
CriticalReplicationOnly=No
PasswordReplicationAllowed=lablogins
Password=*
RebootOnCompletion=No
ReplicaDomainDNSName=shinra.inc
ReplicaOrNewDomain=ReadOnlyReplica
SafeModeAdminPassword=Pass1word
SiteName=Headquarters
UserName=Administrator

You will also want to specify ReplicationSourceDC= if you have Server 2003 DCs and need to point to your Server 2008 DC. You can also specify PasswordReplicationDenied to deny any additional users/groups replication to this RODC. Once you have your file created run the dcpromo as normal.

C:\>dcpromo /unattend:install.txt

Upon success restart your RODC. If you have your site set up properly, users there should now be able to log into their systems with authentication through the RODC. Now let's delve into management of your RODC, specifically the Password Replication Policy (PRP). This is what defines which credentials will be cached and which will never be cached. When password caching is denied, the RODC forwards the request up the WAN to a writable DC for authentication. To view what is currently set for your RODC run:

C:\>repadmin /prp view JenovaCoreRODC allow
C:\>repadmin /prp view JenovaCoreRODC deny

This will show you what is currently allowed and denied for the RODC you have specified.

C:\>repadmin /prp view JenovaCoreRODC auth2

From this you will view all accounts that have been authenticated by this RODC. Finally to know what credentials have been cached by the RODC run:

C:\>repadmin /prp view JenovaCoreRODC reveal

It is important to know what credentials have been cached in case of the RODC being compromised. Now if you are wanting to update the list of what accounts you wish to allow caching for then run:

C:\>repadmin /prp add JenovaCoreRODC allow "CN=Lab Guests,OU=Lab Users,DC=shinra,DC=inc"

This uses the LDAP DN of the account or group that you wish to allow caching for. Something to remember is that an account won't actually be cached until it has logged in and authenticated against that RODC. You can pre-populate credentials via this command:

C:\>repadmin /rodcpwdrepl JenovaCoreRODC CloudCore "CN=Jeffery Land,OU=Lab Users,DC=shinra,DC=inc" "CN=Jeffery Land2,OU=Lab Users,DC=shinra,DC=inc"

You can specify as many users as you would like separated by a space. You will have to specify user accounts and not groups though. Most likely you would want to script this if you’re pre-populating an RODC for a site with limited/sporadic WAN connectivity. Remember that you not only want to allow caching for user accounts but also for any computer and service accounts that require authentication. Otherwise the RODC will attempt to forward the authentication on up and if the WAN is down it will fail due to not having a cached account. You are best off first working with an RODC in a lab environment prior to deployment so that you have worked through all such issues that could arise. Also if an account is both in the allowed and denied lists the account will be denied caching as deny takes precedence.
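If you do script it, one possible approach is to feed the members of an allowed group into repadmin. The group DN and server names below are lab examples, and the dsget plumbing is only a sketch of the idea, so test it before relying on it:

# Pre-populate the RODC's credential cache for each member of an allowed group.
# JenovaCoreRODC (the RODC), CloudCore (a writable DC) and the group DN are lab examples.
$groupDn = "CN=lablogins,OU=Lab Users,DC=shinra,DC=inc"

dsget group $groupDn -members |
    Where-Object { $_ -match "CN=" } |
    ForEach-Object { repadmin /rodcpwdrepl JenovaCoreRODC CloudCore $_.Trim().Trim('"') }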

This should get you up to speed on RODC installation and management. Here is some reading for you to more thoroughly understand RODC implementation and management.

Read Only Domain Controllers
RODC Planning and Deployment
RODC FAS, Credential Caching, and Authentication
RODC Administration

Configuring DNS Zones in Core

Now that you grok more completely the concepts of DNS and how it works, we will be going over some of the actual implementation details, on Server 2008 Core of course. We'll jump on the primary DNS server for our lab, set up a subdomain, and put together some records for it. Then we'll set up another DNS server, do a zone transfer to it, and make it authoritative for the zone. We'll be adding a reverse look-up zone for our IP range as well. That should get you started on managing zones and records from the command line. Let's begin with a reverse look-up zone for our shinra.inc domain. As mentioned earlier, for a reverse look-up zone you read the IP address from right to left. I need to clean up a bit first, though.

C:\>dnscmd /zonedelete 0.60.10.in-addr.arpa /dsdel

I had a zone left over from some previous work so we are going to remove it and start over. Note the addition of /dsdel to the command. This is required to remove the zone from AD if it is AD integrated; otherwise you will receive an error such as DNS_ERROR_INVALID_ZONE_TYPE 9611. If you are working with a non-AD integrated zone then it is fine without /dsdel. Now let's recreate our reverse look-up zone.

C:\>dnscmd /zoneadd 0.60.10.in-addr.arpa /dsprimary

This gets us an AD integrated zone. You'll pretty much always want to create AD integrated zones, unless you have requirements such as needing to replicate to a DNS server that is not a DC, such as a BIND server set up on your Linux box. AD integrated zones enable you to configure secure dynamic updates, which allows an ACL to control who can read and update particular records. We'll set up some PTR records now for our machines.

C:\>dnscmd /recordadd 0.60.10.in-addr.arpa 2 PTR cloudcore.shinra.inc

Now if you execute an nslookup of 10.60.0.2 you’ll find a response of cloudcore.shinra.inc. Here’s the anatomy of how this works. After the /recordadd you specify your zone name which is 0.60.10.in-addr.arpa, then next comes your node which is your ip address relative to the zone name. Since our server is 10.60.0.2 in 0.60.10.in-addr.arpa this would be 2. If the zone was only the first two octets it would be 60.10.in-addr.arpa which would mean our node would be 2.0 for this zone. Then we specify that it is a PTR RR and give the FQDN. We’ll add in a few more records to flesh out the zone.

C:\>dnscmd /recordadd 0.60.10.in-addr.arpa 10 PTR renocore.shinra.inc
C:\>dnscmd /recordadd 0.60.10.in-addr.arpa 12 PTR rudecore.shinra.inc

Note that DHCP clients can add their own PTR records in addition to A records. To verify the records we've added, we'll do an enumrecords.

C:\>dnscmd /enumrecords 0.60.10.in-addr.arpa @
Returned records:
@ 3600 NS cloudcore.shinra.inc.
3600 SOA cloudcore.shinra.inc. hostmaster.shinra.inc. 13 900 600 86400 3600
2 3600 PTR cloudcore.shinra.inc.
10 3600 PTR renocore.shinra.inc.
12 3600 PTR rudecore.shinra.inc.

This should show that your reverse look-up zone is properly created and populated. We will now move on to our next exercise of creating a subdomain. Since we will also be using this zone for non-AD integrated zone transfers, we will create it as a standard zone, which requires it to be stored as a file and not in a directory partition.

C:\>dnscmd /zoneadd lab.shinra.inc /primary /file lab.shinra.inc.dns

We’ll add a few A records for a few non-existent machines to populate the zone. We’ll use a quick batch script to aid in this. Here’s the contents of the script.

for /L %%C in (%1, 1, %2) do dnscmd /recordadd lab.shinra.inc experiment%%C /createptr A 10.60.0.2%%C

Save it as adddns.bat and run it from the command line with adddns.bat 10 30. This will populate your zone with a good number of A records. You can verify with dnscmd /enumrecords lab.shinra.inc @, and you can also verify that the corresponding PTR records were created with dnscmd /enumrecords 0.60.10.in-addr.arpa @. Now we’ll set up a second server to transfer this zone over to. Configure the server as normal; whether you join it to AD doesn’t really matter in this case, though if you do join it you can transfer over AD integrated zones as well. Let’s get the DNS role installed on it. Here’s a good way to find the name of the component for installation without having to scroll through a long list.

C:\>oclist | find /I "dns"

Now to install it.

C:\>start /w ocsetup DNS-Server-Core-Role

Once that finishes the role will be installed. We need to configure our original DNS server to allow zone transfers to this new server.

C:\>dnscmd /zoneresetsecondaries lab.shinra.inc /securelist 10.60.0.25

Then we jump back onto our new server to get the zone set up and transferred.

C:\>dnscmd /zoneadd lab.shinra.inc /secondary 10.60.0.2
C:\>dnscmd /zonerefresh lab.shinra.inc

Once this has finished transferring, which with a zone this size on the same network should be nearly instantaneous, you’ll have a complete read-only copy of the zone on this server. Now to make this server the master, we decommission the old server and make the new one the primary. Before deleting anything it’s worth confirming the records actually made it over.
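
On the new server, the same enumeration command from earlier should now list the transferred records:

C:\>dnscmd /enumrecords lab.shinra.inc @

Assuming everything is present, we delete the zone on the old server.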

C:\>dnscmd /zonedelete lab.shinra.inc

Then on the new server we switch it to being a primary zone.

C:\>dnscmd /zoneresettype lab.shinra.inc /primary /file lab.shinra.inc.dns

Then we’ll verify that we were successful from the zone RRs themselves.

C:\>dnscmd /zoneprint lab.shinra.inc

Check your SOA record; if your new server is listed there, then the transfer of the master role was successful. Most likely your NS records will not have been updated properly, so we will go through and recreate those ourselves.

C:\>dnscmd /recordadd lab.shinra.inc @ NS redxiiicore.shinra.inc
C:\>dnscmd /recorddelete lab.shinra.inc @ NS cloudcore.shinra.inc

Now we could even take it a step further and create a stub zone on our previous server for our lab.shinra.inc zone. Hop on your old server and let’s get this created.

C:\>dnscmd /zoneadd lab.shinra.inc /stub 10.60.0.25

Check the zone info and you should be seeing the SOA and NS records in there for the zone, but none of the horde of A records that we had created.
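
A quick way to do that check, using the same commands from earlier; /zoneinfo dumps the zone’s configuration, and /enumrecords should come back with just the SOA and NS records:

C:\>dnscmd /zoneinfo lab.shinra.inc
C:\>dnscmd /enumrecords lab.shinra.inc @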

You should now be feeling up to speed on managing DNS from the command line on your Core installations. Don’t forget that you can also use these commands on full Server 2008 installations (or even older versions). The GUI can be easier, but don’t let it be the only tool in your arsenal. Remember that one of the places the CLI can shine is in scripting, as demonstrated earlier. For some reference reading, this post will be useful as it has a list of dnscmd commands and a quick example.

DNSCMD Reference

A Sidetrip to Linux with Active Directory

This is a temporary detour into the land of joining a Linux server to your Active Directory. This was one of my first experiences working with Linux on the job, so it was quite exciting that there was almost no documentation on how to do this at the time, and what was out there either didn’t work quite right or didn’t work at all. It took me a while, but I eventually got it working. Since there are so many flavors of Linux out there, the same methods may or may not work for you. The machine being joined is running CentOS 5.2 with a fresh install.

As always, before you start setting this up make sure that your network configuration is solid, that you can ping everything, and that name resolution works. Don’t forget to add an A record for your Linux machine. The first thing you need to do is get your Kerberos config set up, and set up properly. The majority of the time if something breaks it will be in your Kerberos configuration, since the krb config is rather fragile. Open up your /etc/krb5.conf and edit it to look similar to what is below. Remember that capitalization is extremely important, as is punctuation.

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = SHINRA.INC
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
forwardable = yes

[realms]
SHINRA.INC = {
kdc = cloudcore.shinra.inc:88
admin_server = cloudcore.shinra.inc:749
default_domain = shinra.inc
}

[domain_realm]
.shinra.inc = SHINRA.INC
shinra.inc = SHINRA.INC

Once you have that set up run

kinit administrator

If no errors are returned after entering your password, that should (but does not always) mean that your Kerberos setup is working fine. Run klist to make sure you have a Kerberos ticket. Next up is configuring Samba. Edit your /etc/samba/smb.conf file as follows.

[global]
workgroup = SHINRA
realm = SHINRA.INC
netbios name = LINUXTEST
security = ads
password server = cloudcore.shinra.inc
domain master = no
idmap uid = 1000-29999
idmap gid = 1000-29999
winbind enum users = yes
winbind enum groups = yes
winbind use default domain = yes
winbind refresh tickets = true

Then edit your /etc/nsswitch.conf as follows.

passwd: files winbind
shadow: files winbind
group: files winbind
hosts: files dns
bootparams: nisplus [NOTFOUND=return] files
ethers: files
netmasks: files
networks: files
protocols: files winbind
rpc: files winbind
services: files
netgroup: nisplus winbind
publickey: nisplus
automount: files nisplus winbind
aliases: files nisplus

The important part is adding winbind; the rest of your nsswitch.conf may be customized to your network. Now the final file for you to edit is /etc/pam.d/system-auth. Look for a line similar to auth sufficient pam_winbind.so and edit it as follows.

auth sufficient pam_winbind.so krb5_auth krb5_ccache use_first_pass

Now we should have everything configured so let’s start things up and get joined to our AD. First we need to set things to auto start and then we’ll start the services.

chkconfig --level 35 smb on
chkconfig --level 35 winbind on
/etc/init.d/smb stop
/etc/init.d/winbind stop
/etc/init.d/winbind start
/etc/init.d/smb start

Next up is joining the domain.

net ads join -U administrator

This should run successfully. To test and make sure you are joined, run wbinfo -u and wbinfo -g. These should list the users and groups in your domain, respectively. Now you should be set. I’ll go over a few errors that I’ve encountered and possible solutions. It is a touchy process, so if this didn’t work for you it may just require a bit of tweaking for your flavor of Linux and your own AD.
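
As an extra, optional check (a sketch that relies on the winbind entries we put in nsswitch.conf and on winbind enum users/groups being enabled in smb.conf), getent should now return domain accounts alongside the local ones:

getent passwd
getent group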

Some possible errors you may encounter.

When attempting net ads join -U administrator I get the error:
Host is not configured as a member server.

Check your smb.conf for errors. Make sure that you set security = ads. Run a testparm to make sure you don’t have other configuration errors.

When attempting net ads join -U administrator I get the error:
[2009/03/11 09:55:32, 0] libads/kerberos.c:create_local_private_krb5_conf_for_domain(594)
create_local_private_krb5_conf_for_domain: failed to create directory /var/cache/samba/smb_krb5.
Error was Permission denied

Manually create the directory /var/cache/samba/smb_krb5. It may be an issue related to SELinux; I haven’t researched it enough to determine a proper workaround.
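
If you want to try that by hand, something along these lines should do; the restorecon is only a guess on my part for the SELinux angle, so skip it if SELinux isn’t in play:

# create the directory samba failed to create itself
mkdir -p /var/cache/samba/smb_krb5
# if SELinux is suspected, reset the context on the samba cache
restorecon -R /var/cache/samba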

My net ads join -U administrator works but wbinfo -u and wbinfo -g are still returning errors.

Make sure your winbind service is running. It is generally best to start winbind before starting smb in my experience.

Finishing Your DHCP Server

Now that we have our wonderfully clustered DHCP server running, the failover is fabulous, but it does us not a spot of good if the server isn’t configured. So let’s get that wrapped up. Fortunately this is a lot simpler than building a cluster from the command line. We need a scope created and activated for our DHCP range; I am going to use the 10.60.0.100-200/24 range for this. For some added redundancy we could also implement the 80/20 rule with a second DHCP server in addition to our clustered DHCP: 80% of the scope would be kept on the cluster and 20% would be on another DHCP server, just in case the cluster went completely bananas. 80/20 is definitely something you should look into if you’re pursuing your MCSE or MCITP. We will not be implementing it for this lab, but a rough sketch of what it would look like follows.
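
Purely as a hedged sketch of how that split might look (we are not running any of this in the lab): both servers would define the full 10.60.0.100-200 scope, but the cluster would exclude the top 20% of the range and the standby server would exclude the bottom 80%. Something along these lines, run locally on each box:

C:\>netsh dhcp server scope 10.60.0.0 add excluderange 10.60.0.181 10.60.0.200
C:\>netsh dhcp server scope 10.60.0.0 add excluderange 10.60.0.100 10.60.0.180

The first exclusion would go on the cluster and the second on the standby server.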

For configuring the DHCP server from the command line you go back into your friendly netsh utility. First off we need to authorize this DHCP server in Active Directory.

netsh>dhcp
netsh dhcp>add server turksdhcp.shinra.inc 10.60.0.55
netsh dhcp>show server

You should see the listing for your server there. Now to jump onto the server.

netsh dhcp>server \\turksdhcp
netsh dhcp server>add scope 10.60.0.0 255.255.255.0 Headquarters

We now have our first scope created. It needs a range added.

netsh dhcp server>scope 10.60.0.0
netsh dhcp server scope>add iprange 10.60.0.100 10.60.0.200

This gives us our 100-200 range. We need to set some options, though, so that our clients will be correctly configured: option 003 is the default gateway, 006 is the DNS server list, and 015 is the DNS domain name. Even though this subnet currently does not have a gateway, I will be setting one anyway.

netsh dhcp server scope>set optionvalue 003 IPADDRESS 10.60.0.1
netsh dhcp server scope>set optionvalue 006 IPADDRESS 10.60.0.2
netsh dhcp server scope>set optionvalue 015 STRING shinra.inc
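
One more thing worth checking before calling it done: depending on how the scope was created it may still be deactivated, and you can dump the options to confirm they took. This is a hedged sketch; as far as I recall, set state 1 activates the scope and show optionvalue lists what is configured:

netsh dhcp server scope>set state 1
netsh dhcp server scope>show optionvalue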

And that’s it! A whole lot simpler than actually configuring the cluster, wasn’t it? It still may be easier through the GUI, but the scripting possibilities are pretty exciting. Don’t forget to bring a client online and test it out to make sure everything is working well; a quick sketch of that is below. Then for those of you studying for your MCSE or MCITP, check out this fascinating reading of how the DHCP process works. It is knowledge well worth having.
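
For that client-side test, something like the following on a Windows client set to obtain an address automatically should do it; check that the address lands in the 10.60.0.100-200 range and that the gateway, DNS, and domain options came through:

C:\>ipconfig /release
C:\>ipconfig /renew
C:\>ipconfig /all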
