
Active Directory Internal Naming and DNS Strategy

This post touches on something rather simple, yet I’ve seen it done improperly by a previous provider at many of the SMB clients I work with. The result has been unnecessary complexity and even migrations to a new forest to meet requirements, such as Exchange 2007 not supporting single label domains. When you are first creating your Active Directory forest, you want to put some thought into what you name it. First think about the company’s internet-facing domain names and what sort of traffic is generated through them. Your DNS strategy changes depending on, for instance, whether the company’s website is hosted by the company or by a third party. You will also need to think about where your public DNS is served from. Another thing to throw into the mix is security, which ties into the previous issue. Let’s take a look at a few things here.

Microsoft has some best practices guidelines here. My personal preference is to go with a non-registered TLD such as .internal or .local so as to avoid confusion with TLDs such as .com or .net. Microsoft would prefer you go with a subdomain of your external domain, e.g. corporate.contoso.com for your AD forest while using contoso.com on the internet-facing side. Either way, one benefit you reap is that name resolution for contoso.com is done externally. While your internal DNS is authoritative for contoso.local or corporate.contoso.com, it is not authoritative for contoso.com itself, so it will find a server that is. This returns the internet-facing IP address for whatever is in contoso.com. You want this because, for the majority of SMBs that I work with, the website is most often hosted at a 3rd party provider. If your AD forest were contoso.com, that would add complexity: you would have to manage internet addresses both internally and externally, since you would no longer be able to forward requests to your public DNS provider. For example, for the record of http://www.contoso.com, if you switched 3rd party hosting providers you would need to update that A record on your public DNS. You would also need to update that record internally, otherwise the next day your client will be calling in to let you know that their website is “down.”

Now what if you are hosting your own DNS? For security you would want to put your public DNS in your DMZ, serving different zones than your private DNS servers. The reason is to restrict public access to your internal DNS hierarchy. Access to that would give attackers a huge amount of information about your internal network, such as naming conventions, internal IP addressing and even the names of your DCs. Your private DNS then forwards requests for contoso.com to your public DNS, and management is simplified since internal changes do not affect external changes and vice versa.

The next obstacle: what if some addresses are hosted internally but others at a 3rd party, such as www.contoso.com going to your company’s website while mail.contoso.com goes to your OWA? Creating a zone internally for that specific address allows internal requests to be managed by your internal DNS while still forwarding requests for the company site to the public DNS side. This simplifies DNS management as well. Say you have your mail.contoso.com zone and you are migrating from one Exchange server to another: all you have to manage internally is the mail.contoso.com zone. Your public IP address has not changed at all, so your public A record for mail.contoso.com has no need to be updated. All those remote users hitting mail.contoso.com will not notice a difference, unless of course you have forgotten to change your NAT and firewall rules, but that is an entirely different subject. If the reverse is true and you are changing your public IP address, you would still only be changing your public DNS records. Private DNS would not be impacted whatsoever.
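The forwarding behavior described above comes down to longest-suffix zone matching: a DNS server answers from the most specific zone it is authoritative for, and forwards everything else. Here is a minimal Python sketch of that decision logic; the zone and host names are illustrative, not from a real deployment:

```python
# Sketch of the decision an internal DNS server makes for each query:
# answer from a local zone if one matches, otherwise forward upstream.

def resolve_decision(query, local_zones):
    """Return ("local", zone) for the most specific matching local zone,
    or ("forward", None) if no local zone is authoritative for the name."""
    matches = [z for z in local_zones
               if query == z or query.endswith("." + z)]
    if not matches:
        return ("forward", None)             # e.g. to the public DNS provider
    return ("local", max(matches, key=len))  # longest suffix = most specific

local_zones = {"contoso.local", "mail.contoso.com"}

print(resolve_decision("dc01.contoso.local", local_zones))  # answered internally
print(resolve_decision("mail.contoso.com", local_zones))    # pinpoint zone answers
print(resolve_decision("www.contoso.com", local_zones))     # forwarded to public DNS
```

This is why the pinpoint mail.contoso.com zone works: it catches only that one name internally, while www.contoso.com falls through to the public side untouched.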

So what if you were to go with contoso.com for your AD forest as well as your public DNS? DNS changes become more complex, since you need to manage addresses both externally and internally. For example, you have your mail.contoso.com address created externally and your remote users are using OWA. When they come into the office, suddenly all their OWA requests fail since no A record has been created internally. You create your A record pointed to your Exchange server internally and everything works properly again. Then there is the scenario of the company website hosted by a third party. Users are able to access http://www.contoso.com outside the company, but inside the company the requests fail. You create an A record pointed to the 3rd party site and everything works again, until you switch your hosting provider. People will be unable to access the site again until you also update the internal DNS record.

There is also the single label domain name to think about. Microsoft recommends avoiding this, and I would too, since it requires even more initial management to get things working properly. It can also cause problems with cross-forest trusts.

Keep your DNS simple and you will have fewer late nights trying to figure out why mail.contoso.com does not work on the company network.


Addressing P2V 0x7b Issues

The other night I was P2Ving several systems, and one of them blue screened on boot. That is unfortunate but not too uncommon, as usually you need to enable IDE drivers on the system prior to the P2V. Microsoft’s article here works for all versions of XP and Server 2003, though I found I needed to expand the mentioned drivers directly from the CD for the SBS system I was working on. Unfortunately that did not resolve my 0x7B blue screen the other night. This article turned out to be the key to what I needed. Now, the part that neither of these mentions is how to fix the problem if you can’t even boot that VM, so as to avoid having to do another P2V of the system. With Server 2008 this is possible, and it can save you a lot of time, especially if the systems are large.

Server 2008 gained a great feature of being able to mount VHDs, which is what we’ll be doing. First you’ll want to mount the VHD to a drive letter and then expand the drivers into the \windows\system32\drivers folder in the VHD. Pull up regedit and select the HKLM key. Go to File->Load Hive, open the SYSTEM registry hive from \windows\system32\config\ in the VHD, and give it an easily identifiable name. You’ll find the registry loaded in HKLM under the name that you gave. Loading the registry this way, you won’t find a CurrentControlSet under the SYSTEM key; CurrentControlSet is just a pointer to ControlSetxxx. To find out which ControlSet the system is set to boot with, look in SYSTEM\Select. The Current DWORD contains the number that it is using, which in most cases will be 1, so go into that particular ControlSet, i.e. for 1 it will be ControlSet001. In there you can manually implement the keys from the first article or the second article. In the case of the problem I ran into, I had to set the Group value of Wdf01000 to WdfLoadGroup as it was part of the base group. If you want to learn more about service load order take a look at this article and this article.
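The mapping from the Select key’s Current value to the ControlSet key name is just zero-padding to three digits. A small Python sketch of that, where the hive name P2V is an illustrative stand-in for whatever name you gave in File->Load Hive:

```python
def control_set_name(current_value):
    """Map the Select\\Current DWORD to the ControlSet key name,
    zero-padded to three digits the way regedit displays it."""
    return "ControlSet{:03d}".format(current_value)

hive_name = "P2V"  # illustrative: the name you chose when loading the hive
current = 1        # read from <hive>\Select\Current in the mounted hive

# The services to edit live under this path in the loaded hive:
print(r"HKLM\{}\{}\Services".format(hive_name, control_set_name(current)))
# HKLM\P2V\ControlSet001\Services
```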

Once done with those changes, unload the hive and close out of regedit. Dismount the VHD and your virtual machine should be good to go.

Outlook pulling the wrong user’s mailbox?

A recent problem I ran into at a client’s site: when they attempted to set up an Outlook profile for a specific user, it would keep pulling a different user’s mailbox. The same thing happened when they pulled the e-mail address from the global address list, though oddly enough it was just fine when done from a Blackberry. Checking the two mailboxes involved in Exchange 2007 yielded no wrong information; all the names and aliases were correct. So to dig deeper I broke out adsiedit.msc. Pulling up the user’s properties and checking the mail attribute showed that both accounts had the exact same e-mail address. A swift change of that to the proper SMTP address corrected the problem immediately on all accounts.

Manually Connecting Mailboxes by MailboxGUID or Hey! Where did my mailboxes go?!

On Monday I learned the hard lesson of always making sure that your Active Directory is replicating properly, especially in the middle of a migration. At least this is the best explanation that I can come up with for what happened. We were wrapping up my first Small Business Server 2003 to SBS 2008 migration, an already less than stellar performance due to the Exchange System Manager refusing to show the new server and a few mailboxes that refused to move over. We had to export those monstrosities to PST before we could move them, which turned into a mixed blessing later on. Things had come to the point of being ready to remove Exchange 2003 from the SBS 2003 server, which went along fairly smoothly. We rebooted the server, ready to breathe a sigh of relief that this portion of the migration was done. No such luck; the true nightmare began then. Calls began coming in from users that they could no longer access their email, even after confirming that they really were pointing to the new server. I fired up the Exchange Management Console to find out what was wrong, only to discover that no mail-enabled users showed up aside from the three we had to import from PST. I took a look for disconnected mailboxes and, to my horror, we did not have any. Restoring from backup wasn’t an option as we had not been able to take one yet. This began a mostly fruitless 6 hour call with Microsoft. It is very sad when your Bluetooth headset dies, recharges, and is back in service all during the same call. Fortunately I did not stop researching the problem while on the call and eventually cobbled together the solution that I am going to present to you now. My warning is: do NOT do it this way unless you absolutely have to.

Before I made the call I ran Get-MailboxStatistics. Interestingly, this reported everyone’s mailbox as existing and containing data, which meant that the users had lost their Exchange attributes. Verifying this required digging into Active Directory with ADSI Edit. Open up ADSI Edit and connect to the default naming context. Drill down to the OU of your user and pull up their properties. You will see a list of attributes set there. Some specific attributes are required to mail-enable a user: legacyExchangeDN, homeMTA, mailNickname, msExchHomeServerName, and finally, perhaps the most important, msExchMailboxGuid. homeMDB is required as well, but fortunately this was still populated. On the users that were missing mailboxes, none of the msExch attributes were set. Fortunately some users did remain intact, so looking at those I was able to glean a few of the attributes.

legacyExchangeDN – This attribute needs to be configured to point to the login for your mail-enabled user. This drills down through the organization name and the default administrative group name, ending in your user’s login name. Example: /o=first organization/ou=exchange administrative group (fydibohf23spdlt)/cn=recipients/cn=jland

You can actually pull this information from your mailboxes. Run Get-MailboxStatistics | ft DisplayName, LegacyDN

homeMTA – This may not be required under Exchange 2007, but we decided to set it nonetheless. This drills down through Active Directory to where your Exchange server’s MTA resides. Example: CN=Microsoft MTA,CN=SERVER,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=linthicum,DC=local

mailNickname – This one is easy to miss yet is very important. If your user still doesn’t show up as mail-enabled, go back and make sure you entered the mailNickname. This is generally your user’s login name, though you should consult your organization’s naming scheme to be sure.

msExchHomeServerName – This one is fairly self-descriptive. It points to where your server is located in Active Directory, based off the organization name. An example: /o=First Organization/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=SERVER

msExchMailboxGuid – This one is the kicker. Exchange won’t know which mailbox to connect your user to without this info, but it isn’t exactly easy to get hold of either. First run Get-MailboxStatistics | ft DisplayName, MailboxGUID. You’ll see everyone’s msExchMailboxGuid listed right there. Easy? No. Now you have to get that into Active Directory, which is a royal pain. Go down through the properties of your user and open up msExchMailboxGuid to put in some new information. See how you only have the options of decimal, hex, octal and binary? You need to convert this GUID into something usable.
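As an aside, the reason the hex string you end up with looks scrambled relative to the GUID is the standard GUID byte layout: the first three groups are stored little-endian on disk. Python’s uuid module can illustrate this; the GUID used here is the sample one from this post:

```python
import uuid

# bytes_le gives the little-endian (on-disk/GUID-struct) byte order:
# the first three groups of the GUID string are byte-swapped, the
# remaining bytes are kept as written.
guid = uuid.UUID("98ee00d7-df19-4282-bedf-3a1340b8b7c0")
print(" ".join("{:02X}".format(b) for b in guid.bytes_le))
# D7 00 EE 98 19 DF 82 42 BE DF 3A 13 40 B8 B7 C0
```

That output is exactly the hex byte sequence adsiedit expects for this mailbox GUID.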

Go to joeware’s great site and download the adfind tool. Open up a command line, go to where you extracted the tool, and run adfind -gc -b "" -binenc -f "msExchMailboxGUID={{GUID:98ee00d7-df19-4282-bedf-3a1340b8b7c0}}" -dn where of course you replace the GUID with the one you are searching for. This will return some interesting output which still isn’t quite usable, though it may look that way at first glance. The response is mostly hex, but not fully; you need to translate it. I’ve included a utility at the bottom of this post that you can use to convert this output into full hex. Pull up the table at Ascii Table and use it for your translation. Go through the characters, and when you find one that doesn’t fit the pattern, for instance a lower case j or an unpaired 4, look through the red characters in the table for your character and you’ll see the conversion to hex in the separate column. Work through the whole string this way and you’ll eventually get a fully hex string. Go back into your msExchMailboxGUID, put that in, and after you click OK you’ll see that the attribute has been populated with the string that you began with. Look very closely at it to make sure it matches. If there’s some deviation, go back and check your look-up tables again. This string must match completely, otherwise your user will end up with an empty new mailbox created. Here’s an example of how to convert the returned string:

\D70\EE\98\19\DF\82B\BE\DF\3A\13\40\B8\B7\C0

D7 00 EE 98 19 DF 82 42 BE DF 3A 13 40 B8 B7 C0

And another one:

T\BA\A04l\B8\EEM\9F\D6\40m\258\CE\A0

54 BA A0 34 6C B8 EE 4D 9F D6 40 6D 25 08 CE A0
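The translation rules above can be automated. Here is a rough Python sketch of what such a converter would do, assuming the output encodes non-printable bytes as \XX hex escapes and leaves printable bytes as literal ASCII characters (so the literal character’s code point is the byte value). The input string below is the first sample GUID with that escaping applied consistently:

```python
def adfind_to_hex(s):
    """Decode an escaped binary string: '\\XX' is a hex byte,
    any other character is a literal ASCII byte (its code point)."""
    out = []
    i = 0
    while i < len(s):
        if s[i] == "\\":
            out.append(int(s[i + 1:i + 3], 16))  # two hex digits follow
            i += 3
        else:
            out.append(ord(s[i]))                # literal ASCII character
            i += 1
    return " ".join("{:02X}".format(b) for b in out)

print(adfind_to_hex(r"\D7\00\EE\98\19\DF\82B\BE\DF\3A\13\40\B8\B7\C0"))
# D7 00 EE 98 19 DF 82 42 BE DF 3A 13 40 B8 B7 C0
```

Note the lone B in the middle decoding to 42, exactly the kind of lookup the ASCII table walk above performs by hand.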

Your user has now been mail-enabled, as a refresh of your Exchange Management Console will show, but is still missing a number of Exchange attributes. Run Set-Mailbox "My User" -ApplyMandatoryProperties and the rest will be filled out for you.

The last bit is to clean up Outlook for your users. Even the ones not using Cached Mode still showed as disconnected until we put the server back into their profile. For the ones using Cached Mode it was also a good idea to delete their .oab files from their directory to force them to download a newly rebuilt OAB. This may or may not be necessary in your case, as it was most likely related to the timing of the migration. If we didn’t delete them, Outlook would populate the recipient with the DN of the user being emailed rather than the email address, thus bouncing the message back to the sender.

Some extra reading:

Understanding Mailbox GUIDs

How to Re-Home Exchange Mailbox Accounts

Using ADFind Utility

Update:

I’ve finally gotten around to writing a simple little utility for converting the resulting GUID output from adfind into full hex for pasting back into adsiedit. Usage is guidconvert.exe <adfind guid output>. Here’s the utility; C# source is included as well.

Setting Up Server 2003 as a RADIUS with DD-WRT

A co-worker of mine was having some difficulties setting up a RADIUS server for his wireless network, which is what prompted this particular article. For your wireless infrastructure there are times when you need a more centrally controlled solution to the authentication problem. This is where RADIUS, and more to the point Microsoft’s IAS, steps in. For your trivia needs, RADIUS stands for Remote Authentication Dial-In User Service, while IAS stands for Internet Authentication Service. Normally I would be setting this up under Server 2008, but our needs called for Server 2003. I may follow up with how to do this under Server 2008 as well, and even delve into putting together an IAS farm. The WAP being used is a Buffalo WHR-125 with a fairly current build of DD-WRT v24 SP2 (09/24/09) on it.

First off, before installing IAS we need a certificate for it to use. There are several ways of achieving this. The first method, and the easiest/cheapest, is creating a self-signed certificate using the IIS 6 Resource Kit from Microsoft. The particular program we need from it is SelfSSL, so run through a custom installation and install SelfSSL. Open up a command prompt, navigate to where SelfSSL installed, and here is how we construct a certificate:

C:\Program Files\IIS Resources\SelfSSL>selfssl /N:CN=server.domain.local /K:1024 /V:1825

This will get you your self-signed certificate. Of course you can use a 3rd party certificate as well. Another method is to issue one from an internal CA; don’t forget to implement CA best practices when using one. I personally would opt for a self-signed certificate unless you already have a CA available.

Next up is getting IAS installed. You will find it under Add/Remove Programs -> Add/Remove Windows Components. In there look for Networking Services and go into Details; Internet Authentication Service will be displayed just a few entries down. Once installed, open up the IAS mmc and let’s get into configuration. Though we should set up our users first: I went with creating a security group named Wireless Authentication and added my users there. Note that you will need to allow these users remote access as well. One way is to go into the user’s properties and on the Dial-in tab select Allow access. This isn’t my preferred method though, as it creates more work; the other method I shall detail a bit later.

Bring up your IAS console and you’ll see the available categories. We need to get configured for our access point, so we will create a RADIUS client. Right click on RADIUS Clients and select New RADIUS Client. Give the client a name and point it to the address of the access point. The next menu is selecting our vendor, which we will want to keep as RADIUS Standard for our configuration, as for most configurations. Put in a key for this client and note it down, as we will need to configure it in the WAP later on. No need for the Message Authenticator attribute, as it is used by default with EAP, which is what we will be configuring. For more information about it read here.

We have our client configured on the server, but we are also in need of a Remote Access Policy. Right click Remote Access Policies and select New Remote Access Policy. We will go with the first option for setting up our policy, though creating a custom policy is easy enough as well. On the next screen, Access Method, we will select Wireless. On the screen after that we can put our group to use: add in your Wireless Authentication group, unless you prefer to control things at the user level. I prefer security groups, so that is what we will use. Select PEAP for the authentication method. Check its configuration to ensure that EAP-MSCHAP v2 is selected and that the proper certificate is selected as well. If you get an error complaining about certificates when selecting Configure, then you need to go back and verify that you have a properly issued certificate; this is where most problems stem from. In the configuration you may also wish to enable Fast Reconnect. I have read about some clients having issues with this but have not had any problems in my configuration; your mileage may vary. Disable it if you are routinely having problems authenticating clients. Finish the wizard and you’ll have your policy. We’re not quite done with it yet though.

Bring up the properties on your newly created policy. On the Encryption tab you will want only Strongest encryption checked. If there are authentication issues, though, you will want to enable the others for diagnostics until you figure out what is properly supported by your WAP. This is also where we can enable the alternate method for allowing our users: go to the Advanced tab and add Ignore-User-Dialin-Properties set to True. This will ignore the setting on your user’s Dial-in tab and truly allow you to control access via groups. Otherwise user settings trump group settings, which can make for a headache in troubleshooting. The last thing to do is right click the root folder, Internet Authentication Service, and select Register Server in Active Directory. What this does is add your server to the RAS and IAS Servers security group, which enables it to read accounts from your AD. Once we are done here we can finally go configure our access point.

This is specific to DD-WRT, so be sure to verify how to configure your own access point. Connect to your access point and go to the Wireless tab, then Wireless Security. Set it to WPA2 Enterprise and make sure you are using AES, unless you have a reason not to. Put in the address for your IAS server and now would be a great time to make sure that it is a static address. Leave the port as 1812 as IAS listens on that out of the box. Finally put in the preshared key that you configured from earlier. Save then apply and your access point is in business. All that is left is configuring your clients.

This is best done through Windows’ wireless configuration. Manually create a new connection configured with your WAP’s SSID and go into its Security settings. Set it to use PEAP, and if you are using a non-domain-joined machine that also does not have the certificate you configured the server with, then tell it not to validate the certificate and also not to use your domain logon and password. Connect wirelessly to your access point and see if you’re successful. If you are not, check your server’s System event log for errors. If you are getting bad username/password errors, and you know your username and password are correct, then start looking at your encryption and configured authentication protocols to make sure they all match. If you are seeing errors about no matching policy, make sure you have your user in the right group or matching the criteria of your policy. That covers the majority of problems you will run into when configuring IAS. Even if you don’t have a use for IAS as a RADIUS server, it is a good idea to set it up a few times for learning purposes when pursuing an MCSE.

Windows Media Player and Other Libraries

I have been greatly enjoying Windows 7 recently; Microsoft has done a lot right with it. I enjoyed it so much that I actually migrated my primary workhorse from openSUSE to Windows 7. Being able to pin programs to the taskbar, and programs like Remote Desktop having quick access to recently used links through the Start menu, are nice little touches. One of the great things about 7, though, is Homegroups. They’re a much needed breath of fresh air for your average workgroup. I can see this benefiting small businesses a lot, as you get immediate access to easy file AND printer sharing. The printer sharing part was what I liked best: just add yourself to the homegroup and ta-da, it is there. But what I am going to touch on today is something that gave me some grief for a few hours last night and this morning.

I decided to give this streaming media thing a go. The music I wished to listen to, which by the way was the fantastic Piano Concerto No. 2 by Sergei Rachmaninoff, was on another system. So I thought this would be a great time to test out the streaming capabilities. I quickly switched it on in the homegroup for that system and started up Windows Media Player on my desktop to give it a go. Sorry! Access denied. WMP was complaining that it could not access the file, and this happened for everything in the music library on that system. Interestingly enough, videos and pictures would work just fine. I found it odd for it to be a permissions issue since those were working, and if I browsed through via Explorer I could play the same music files with whatever media player I chose, including WMP. It didn’t quite seem like a permissions issue, especially since WMP’s error message was so generic that it could be anything, but I wasn’t quite ready to discount it yet. So I gave it a good night’s sleep and returned to the problem in the morning.

After a cup of mocha things became a little bit clearer. I decided to test playing music on the problem system from my desktop, thereby reversing the stream. This worked just dandy. So I gave the music folders a permissions inspection and found the problem: on the system that could stream music from its library there was an extra user with permissions, namely WMPNetworkSvc, which had Read permissions on the folder. The problem system did not have this permission. Unfortunately it wasn’t as simple a fix as just adding the user, as the system would report that the user did not exist. Inspecting other folders, such as the pictures and video folders, did report the proper permissions. Thus I fixed things with a bit of PowerShell magic; if you use this, don’t forget to substitute your account name for MyAccount:

Get-Acl C:\Users\MyAccount\Videos | Set-Acl C:\Users\MyAccount\Music

That did the trick! I could stream music to my heart’s desire. As for speculation on why this particular permission was missing, my only guess is that this system was upgraded from XP, to Vista, to Windows 7. I could quite easily see something getting fouled up along the way, especially since that spans a number of years. It is a good test system though. The other system I have has gone from Vista to 7 and did not exhibit the same issue.

Windows Server Backup and Exchange 2007 with iSCSI

Service Pack 2 has recently been released for Exchange 2007, enabling the long awaited integration of Exchange-aware backups with Server 2008’s new Windows Server Backup. WSB is Microsoft’s replacement for the old ntbackup that we all know and love. This new backup is simpler to use than ntbackup and has a number of interesting new features, but it also lacks some of ntbackup’s more useful ones. One missing feature that is rather vexatious is that you have to back up whole volumes; you can’t just back up the mailbox stores, or even specific files and folders. Another is that you can’t back up to a specific folder or mapped drive. This can cause problems, especially at small businesses that don’t feel like shelling out for a more robust backup program. You will have to dedicate a whole volume to WSB, so this requires a bit more planning ahead. This is a problem that we had to get creative about solving on the spot last night.

The client has an Exchange 2007 server running on Server 2008. No backup software had been acquired, as they had been waiting for SP2 to enable Exchange-aware backups. SP2 installed just fine, which was great considering all the other migration issues we had earlier, but then we hit the hang-up of discovering that WSB wants its own volume for backing up and doesn’t want to back up to a mapped drive on one of the other servers. This was a source of consternation for a bit, as we did not have a spare volume available nor could we just grab an external drive. Fortunately StarWind Software has this great, and free, iSCSI target software. Using StarWind we were able to turn a chunk of storage on the server into a virtual drive and set it up as an iSCSI target, all without having to reboot, which is a huge plus. We connected this to the Exchange server using iSCSI, and that meant we were finally able to back up the server and flush those transaction logs that had been building up. This made for a pretty quick and easy fix, as StarWind is simple enough to set up. If you are in need of a quick fix for your backups, this is one way to do it.

RSAT for Windows 7

Having recently converted over to Windows 7, one thing I found missing was the Remote Server Administration Tools. Well, they are missing no longer. Go and get your RSAT goodies here! Don’t forget that after installing you have to go into Programs & Features and add in the tools that you use.

On another note, I have to admit that I am really liking Windows 7. I haven’t been using it very long and never made the time to really explore it during the beta. But now that I have put it into full time use I have really come to like a number of the UI features. The quick documents/tabs off a program in the Start menu, and the pinned icons down in the taskbar, are great. It especially works out for having Remote Desktop pinned to the Start menu with a quick-start list of my favourite servers just off it. I am also really appreciating the ease of use of the network connections icon in the tray; very simple to cycle through various VPN connections now.

So overall? I like it a lot. No compatibility issues, and it has been a very painless conversion process.

Exchange 2007 Single Server Migrations for Profit or Headache

I was originally writing up a guide for migrating, actually transitioning, Exchange 2003 to Exchange 2007. There are lots of guides out there with better screenshots and perhaps even better written steps, so I would not really be meeting a need, as there are already plenty out there doing so. Instead I am scrapping all of my original work and concentrating on issues that I believe are not talked about as much: mostly issues that affect those doing single server migrations, meaning you have one Exchange 2007 server holding all of your roles. They have caused me a great deal of headache and drama, which I am sure is true for others doing such migrations as well. I would imagine this is mostly the SMB sector, which is where the majority of my work in this is being done. Let’s talk about the biggest issue now: client access.

The CAS role plays a big part in your Exchange organization, as it is the broker for all requests to your mailboxes. You will have MAPI requests as well as HTTPS, POP3 and others coming into this server. By the way, as a security side note, the recommended setup is to have your CAS role on your internal network with a reverse proxy in your DMZ for proxying requests through to your CAS. When the CAS receives a request for a mailbox that resides on a 2003 server, it proxies that request through to your 2003 server. No issues at all there. The problem that comes up, though, with having a CAS on the same server as your Mailbox role is that web requests no longer get proxied to your 2003 servers; they get redirected. This is because davex.dll handles the requests on a mailbox server and grabs them first, while exprox.dll is what handles proxying. This redirection is not configurable either. That causes a problem when an external request is redirected to an internal FQDN: it doesn’t work out too well, and you get lots of angry OWA users wondering why their logins take them to an invalid address. For a more in depth explanation take a look here. Let’s take a look at a few ideas for mitigating this issue.

First off, an easy fix would be to make sure your Exchange 2003 FQDN has a matching public address. This is not a recommended setup at all, though. It is against best practices to have your internal domain match your external domain, not to mention you can get a number of funny DNS issues going on if this is the case, unless you’ve planned things out well. Read this article for some more DNS information, and especially look at the split-brain section. All of this can turn your easy fix into a much more complicated one. If the stars do happen to be right on your migration, though, then go for this: set up a public record matching your internal Exchange 2003 name and you’ll be set. This will be transparent to your users.

Next up would be to use a reverse proxy such as ISA 2006. This is the cleaner option, as it keeps the strict boundaries of your DMZ intact instead of making your Exchange servers blur the lines. In my experience, though, this doesn't seem to be something most SMBs care about; they don't see the need for security or how a properly defined DMZ fits into it. But that goes into an entirely separate article and could sound a bit ranty.

Other methods will require a bit more cooperation from your users. Remember, in Exchange 2007 OWA access defaults to /owa, so you will need to communicate this to your users as you migrate their mailboxes over. Then remove the /exchange virtual directory through the Exchange Management Shell, recreate it in IIS, and set it up with a custom 403 error that redirects to a different port on your external address. Mind you, you'll need to make sure that port actually points to your legacy server; this requires either a firewall that can do port translation or changing the ports on your 2003 server.
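The virtual directory removal step might look something like this from the Exchange Management Shell (the server and directory names here assume the defaults; verify yours first, as your identity string may differ):

```shell
REM List the OWA virtual directories so you know the exact identity to remove
[PS] C:\> Get-OwaVirtualDirectory | Format-List Name,Server

REM Remove the legacy /exchange directory; it gets recreated by hand in IIS afterward
[PS] C:\> Remove-OwaVirtualDirectory "Exchange (Default Web Site)"
```

After recreating /exchange in IIS Manager, the custom 403 page is configured on that directory's error pages, not in Exchange.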

Finally, and the most recommended method, is to set up a temporary virtual machine that purely hosts a CAS role. Then everything will be proxied as it is supposed to be. The downside is that this requires a separate license, in which case you might as well plan for a separate CAS to begin with.

Fortunately, as long as everything is configured properly, Outlook Anywhere and ActiveSync seem to work just fine. The danger with those comes when you have DNS issues internally or improper communication with a global catalog. That can add to your headache, so you will want to cozy up to rpcping, which you can grab from Microsoft and get more info about how to work from here. Another great site I have recently found is the Remote Connectivity Analyzer. It lets you test Outlook Anywhere, ActiveSync, SMTP, and Autodiscover, with detailed error messages about where these break down. It will become your best friend very swiftly.
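As a starting point, an rpcping invocation for testing the RPC proxy through to the Store endpoint might look like the one below. The server names and credentials are placeholders, and the exact switches depend on your authentication setup, so treat this as a sketch and check Microsoft's rpcping documentation for what each flag means:

```shell
REM -s is the internal mailbox server, RpcProxy is the externally published name,
REM -e 6001 targets the Store endpoint, -u/-a set the auth package and level
C:\> rpcping -t ncacn_http -s mbx01.contoso.local -o RpcProxy=mail.contoso.com -P "user,contoso,*" -I "user,contoso,*" -H 1 -u 10 -a connect -F 3 -v 3 -e 6001
```

A successful run reports completed calls; anywhere it fails tells you which leg of the RPC-over-HTTP path to go dig into.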

I guess the moral of all these suggestions is to make sure you have your migration well planned out. Run it through a test lab first if you are able, and definitely don't spring it on your users unawares. You could be in for quite a "fun" surprise.

DFS On Core — You’re Doing It Replicated

Anyone taken a look at Windows Server 2008 R2 yet? Things I’m excited about in it are PowerShell on Core, AD cmdlets, and the AD Recycle Bin. PS on Core is the most exciting addition though. Maybe later on I will start delving into R2 and talk about working with that on Core. This time, though, we are going to deal with setting up a basic DFS using Windows Server 2008 Core machines.

Core makes for a low-resource file server that you can deploy to do its job without layers of the OS getting in the way. Using it for DFS is a step in the right direction toward high availability of your data as well. Furthermore, it can help you put some controls on your bandwidth utilization by keeping replicas of your data in locations local to your users. Failover is provided by pointing users at the namespace, which then directs them to the nearest server. Let's run through putting together a setup on Core.

Grab our first server and install the DFS Namespace role. Note that oclist only lists available roles; ocsetup is what installs them.

C:\> start /w ocsetup DFSN-Server

Once this is complete we can start breaking out our trusty dfsutil.exe tool. We will start out with making a domain based namespace. Set up a share to use for this.

C:\> mkdir TurksNS
C:\> net share TurksNS=C:\TurksNS /GRANT:"Authenticated Users",FULL

Don’t forget to customize the share and NTFS permissions to your specific needs.
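On Core the NTFS side is handled with icacls; a sketch, where the group name is just an example for your environment:

```shell
REM Grant Modify, inherited by subfolders (CI) and files (OI), to a domain group
C:\> icacls C:\TurksNS /grant "SHINRA\Turks":(OI)(CI)M
```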

C:\> dfsutil root adddom \\renocore\TurksNS "2008 Namespace"

You can also add V1 or V2 as a parameter; the default is V2. V1 is a Windows 2000 Server mode namespace, while V2 is a 2008 mode namespace. Note that a V2 namespace requires the Windows Server 2008 domain functional level. If you receive any "The RPC server is unavailable" errors, make sure the DFS Namespace service is running. The easiest way is to reboot, but you can also start the service with sc.

C:\> sc start dfs

After that, if you are still getting RPC errors, check your firewall and start going down the usual RPC troubleshooting path. Let's verify that we have created our domain-based namespace.
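On Core the Windows Firewall is a common culprit; the built-in rule group can be enabled from the command line. A sketch (rule group names are the stock English ones):

```shell
REM Allow the SMB and RPC traffic that namespace management relies on
C:\> netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
C:\> netsh advfirewall firewall set rule group="Remote Service Management" new enable=Yes
```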

C:\> dfsutil domain shinra.inc

You will see your newly created namespace there. Of course it isn’t doing much for us right now so let’s create some targets for it. Create another share on this server (or really any server) and add a link.

C:\> dfsutil link add \\shinra.inc\TurksNS\Data \\renocore\Data

If you browse to \\shinra.inc\TurksNS\Data via UNC or just map a drive, you'll now see the data available in there. This gets us a running DFS, but it really isn't anything more than a fancy way to share data right now: there are no multiple targets, so no replication is occurring, and if this server goes down, there goes the access to the data. Let's get some targets in there to fulfill the D in DFS. Jump onto another server, install the DFSN-Server role, and make yourself a share to add to the pool. Don't forget to give it the same share and NTFS permissions as your first share, otherwise troubleshooting later on could get difficult. Once you have it ready we can add the target.

C:\> dfsutil target add \\shinra.inc\TurksNS\Data \\RudeCore\Data

We have our links now, but we still have no replication. To set that up we need yet another role added.

C:\> start /w ocsetup DFSR-Infrastructure-ServerEdition

We will then set up a replication group for our folder here.

C:\> dfsradmin RG New /RgName:TurksData
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RudeCore
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RenoCore

This gives us a replication group with our two servers added in as members. Next we will bring in our data for replication.

C:\> dfsradmin RF New /RgName:TurksData /RfName:TurksData /RfDfsPath:\\shinra.inc\TurksNS\Data /force

We have a folder set for replication, but now we need replication links so that the data may flow. Note that force is required because we set up our namespace target first.

C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RudeCore /RecvMem:RenoCore /ConnEnabled:True /ConnRdcEnabled:True
C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RenoCore /RecvMem:RudeCore /ConnEnabled:True /ConnRdcEnabled:True

Close to the end but we still need to bring in memberships to this replication group.

C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RenoCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:True /force
C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RudeCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:False /force

Replication should start flowing shortly. If you don't have any data in there, or if you have prepopulated the shares, you won't know for sure whether replication is working properly. You can run a test from the same command-line utility.

C:\> dfsradmin PropTest New /RgName:TurksData /RfName:TurksData /MemName:RenoCore

This will start the test from RenoCore and the data will flow to Rudecore. Generate the results with dfsradmin.

C:\> dfsradmin PropRep New /RgName:TurksData /RfName:TurksData /MemName:RenoCore

You'll find an HTML and an XML file generated to pull up in your web browser. You may find it easier to just create a new file on one share and verify that it replicates to the other, but the good thing about the report is that it is detailed and will help you track down any issues you may be having. You can also have dfsradmin create the folders for you when you use dfsradmin RF, and just add them into the namespace later on. So let's touch on one last topic here: replication of large amounts of data.

It is fine to let the DFS replicate a small amount of data on its own initially, but once you get into large amounts, which I generally consider to be anything over 400 or 500 GB, you will definitely want to prepopulate the shares. Otherwise your DFS may choke on a few files initially and cause you all sorts of headaches; not to mention prepopulating just plain gives you more control over everything. All of this depends on the bandwidth available to you, of course. The method I normally use is robocopy with /E /SEC /ZB. Instead of /SEC you could use /COPY:DATSOU (or the equivalent /COPYALL) to also include owner and auditing information.
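A prepopulation run might look like this (paths and server names are examples; run it before enabling the membership, copying from the server that will be the primary member):

```shell
REM /E copies subdirectories including empty ones, /SEC brings the ACLs along,
REM /ZB restarts interrupted copies and falls back to backup mode on locked files
C:\> robocopy \\RenoCore\Data \\RudeCore\Data /E /SEC /ZB /R:2 /W:5 /LOG:C:\prepop.log
```

Keeping the log file around pays off later if DFSR flags conflicts on files you thought were identical.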

Extra reading:

DFS Step-by-Step

DFS FAQ

Dfsutil Breakdown

DFS Best Practices
