
The Number One Easy Way to Set Up a Failed Migration

It surprises me how often I run across this one, but then again, I have been guilty of it as well.

I eat backups! Garr!

There is a very important first step that I find skipped over and forgotten quite often when it comes to running an Exchange migration, or really any other kind of migration. Have you taken a system state backup of AD yet? No? Then you’re just spinning the bottle and hoping it doesn’t end up with you getting cozy with Microsoft’s support, hoping they can fix your screwed-up Active Directory.

Don’t make the mistake of assuming backups are working

I made this mistake once upon a time, during one of the first Exchange migrations I ran. I didn’t feel like being bothered to take a backup of AD because the server was painfully slow, and I was confident that the nightly backups had taken care of everything anyhow. I didn’t bother to validate this. So I went directly into running the migration, and everything was going smoothly at first. But then, part of the way through, I found that AD replication had broken, and that it had possibly been that way for a while. It would have been easy to roll back to an AD backup, correct the problem, and then retrace my steps, but unfortunately that wasn’t an option, because I hadn’t taken a backup. The nightly backups hadn’t worked in several months either. That led to a call with Microsoft later on, and then even more hours spent fixing things manually via ADSIEdit when they couldn’t figure it out.

I don’t want to be the one cleaning up after you

It is a very simple step to take at the very beginning: just grab a backup before you run your first setup.com /PrepareAD. While you’re at it, why not test the backup of your current mail server and make sure it is working as well. Trust me on this. You don’t want to be the one explaining to your boss that the data is gone because your only valid backup is from 3 years ago. Your backups are working, right? You might want to double-check that just to be sure. I recommend a mock restore for that extra bit of assurance.
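
One way to grab that system state backup, assuming the Windows Server Backup feature is installed (the target drive letter below is just a placeholder for your own backup volume):

wbadmin start systemstatebackup -backupTarget:E:

Run it from an elevated prompt on a domain controller and let it finish before you go anywhere near setup.com.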


The Magic in Troubleshooting the Black Box

Sometimes at work I feel like a magician.
This could be you, the magician at work! Now if only Exchange 2003 looked so good …

For instance, some Microsoft Exchange or Hyper-V clustering issue bubbles up through the tiers of help desk, engineers, and senior engineers, with hours of troubleshooting and myriad eyes thrown at it without making a lick of progress. Then the issue gets slid onto my plate, and 15-30 minutes later I have everything fixed up and a client looking much happier as he is able to get back to work. Sometimes they’ll ask why I was able to fix the issue so swiftly when all these other people spent hours on it with no resolution. That cues what I fondly call the magician moment.

I always wanted to be a magician when I was a kid. This is one way I get a small taste of it. The magician moment is when the audience is wondering how the rabbit was pulled out of the hat, or perhaps how the levitating woman disappeared. Now you can explain to the client how all of the magic happened and sometimes if they’re a truly technical client they will be very interested in the explanation. But the majority of the time I find that they prefer to think that you just have some special magic that the rest of the world does not have access to. It is a pretty good feeling most of the time.

There is a secret to pulling this off

Time and time again, even when the odds are stacked against you, it is all about the trail of logic and being able to follow it through the (logical, not physical) closed-off black boxes along that trail. In computers, as in science, every action produces a reaction. So the first step to setting yourself up for success is to make sure you know everything you can about how the system at hand, and the systems connected to it, operate. You can’t just hide behind your known system, be it Exchange or Hyper-V or Active Directory, and declare all other territories unknown and not your problem. Many a time I have found the solution to a problem with a timely packet capture in Wireshark or by checking the routing topology on the local router. That falls in the networking team’s area, but by applying my own knowledge of networking I was able to take a huge shortcut and point the problem either directly at the network (with solid evidence) or directly at the server, and then fix it.

Love the black box

The point is that magic can happen when you break down the black boxes around you. You can’t completely eliminate them, but if you can shrink them, you can get an idea of how things are functioning in the neighboring black box. Let’s throw in another example. Recently a coworker of mine came to me with a problem he was stuck on. He’s great with server problems, but he makes it very clear that networking is not his cup of tea. A server had successfully gone through a P2V and was up and running in the cluster, but a certain service was inaccessible remotely no matter how he looked at things. He assumed that things were incorrectly configured on the networking side, beyond the server. Now, there were several ways he could have tested that theory, none of which involve looking inside the black box of the networking equipment, but all of which involve knowing what goes into it and what is expected to come out. A ping test would have verified routing, and a probe of the port internally would have told him that it was not open on the server. Quickly checking the server showed that the port was not open in the server’s firewall. An easy check, but since he considered the black box not his problem, he was not able to reach that conclusion easily and swiftly.
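
To make that concrete, here is a rough sketch of those two checks from PowerShell; the server name and port are made up for illustration:

ping vmserver01

# Probe the service port with a raw TCP connect; Connect() throws if the port is closed or filtered
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect("vmserver01", 8080)
$tcp.Connected
$tcp.Close()

If the ping succeeds but the connect fails from the same subnet, the network’s black box is probably not your culprit; look at the server’s firewall and service bindings first.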

So, in summary: make sure you’re always following the trail of logic in your troubleshooting, and that you are always learning, both within and beyond your realm of expertise. That will set you on the track of the magician as well. Perhaps you’ll be the next one asked how that rabbit came out of the hat.

Do you have any magician moment stories as well? Please share them in the comments as I would love to hear them.

Essential Exchange Troubleshooting – Send Email via Telnet

One of the best tools available for troubleshooting mail flow issues is right at your fingertips. It is quick, simple, and only requires a little training to use effectively, yet I am always surprised at how few Exchange administrators seem to use it. You can see some of this in action in my previous NDR troubleshooting post. So let’s delve into some of the basics of using telnet to troubleshoot your mail flow issues.

First off, it is a great way to see if your SMTP service is even available. If you cannot connect to it via telnet, then you immediately know you need to check on the health of your services; if those are OK, then you most likely have a firewall or other networking issue. To execute this basic check, pop open a command prompt and run:

telnet myserver.contoso.com 25

Substitute your own server address, and if you are troubleshooting an alternate port, change the 25 to whatever port you are testing; the majority of the time it will be port 25, though. If the connection succeeds you should be greeted by your mail server’s banner, probably something along the lines of the one below:

220 myserver.contoso.com Microsoft ESMTP MAIL Service ready at Mon, 27 May 2013 08:19:44 -0700

This is also a good time to check whether you are seeing the correct server address in the banner. If you are seeing the internal address for myserver.contoso.local, you will want to update this.
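
If the banner does need fixing, it is set on the receive connector. Something along these lines should do it; the connector name here is an assumption, so check yours with Get-ReceiveConnector first:

Set-ReceiveConnector "MYSERVER\Default MYSERVER" -Fqdn myserver.contoso.com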

At this point you need to respond with a HELO or EHLO command to start the SMTP conversation. What is the difference between them? HELO is for SMTP while EHLO is for ESMTP. In the context of sending an e-mail via telnet it won’t matter which you use, but it may be useful to use EHLO to see what verbs are being offered, especially if you suspect there may be a problem with ESMTP.

EHLO mail.alternatecontoso.com

You should receive a response similar to the one below:

250-myserver.contoso.com Hello [4.3.2.1]
250-SIZE
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-STARTTLS
250-AUTH NTLM LOGIN
250-8BITMIME
250-BINARYMIME
250 CHUNKING

If you have seen all of the above, then so far so good: your routing is good (assuming you aren’t being routed to the wrong SMTP server, but if you are and you don’t know it, you have bigger problems), your firewall configuration is correct, and your hub transport is listening. Also note from the verbs above that this service supports TLS and authentication, per STARTTLS and AUTH NTLM LOGIN.

Now we want to start sending an email to someone on this server, most likely your postmaster account, since you are just testing your mail flow.

MAIL FROM: someone@alternatecontoso.com

You should receive a Sender OK response. If not then you’ll know that you need to look into sender permissions.

250 2.1.0 Sender OK

Next we need to specify who we are sending to:

RCPT TO: postmaster@contoso.com

Here you should receive a Recipient OK response. This is the part where the conversation is most likely to break down and you will get an error code that you can start working with.

250 2.1.5 Recipient OK

So far so good; now we can send the actual email. Start off with the DATA command:

DATA

And the server will be ready to receive your input. You can get as fancy or as simple as you like here, but once you are done with the message, use a period (.) on a line by itself to end the mail input:

354 Start mail input; end with <CRLF>.<CRLF>
.
250 2.6.0 <53ea1be2-3d1a-4856-8bdf-3c576c14cfc0@mail.contoso.com> [InternalId=47975] Queued mail for delivery

Assuming everything is still going well, you should see either a queued-mail-for-delivery response or a spam rejection, depending upon how strict your spam filter is. You may also get an error message that is worth researching. If everything is still going well, you can close out the conversation:

QUIT
221 2.0.0 Service closing transmission channel

That covers sending an email the same way an anonymous server on the Internet would send it. There are a few variations you will want to be aware of as well, though. The first is testing relaying: when you get to the part where you input the recipient, input a recipient on a remote server instead.

RCPT TO: foreignemail@externalserver.com

If you do NOT want the server to relay, the expected response would be:

550 5.7.1 Unable to relay

This is a good thing, as you do not want open relays sitting around. But on the other hand, if this is an internal connector that is supposed to be relaying, then you could have a permissions problem on your hands.
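
If you suspect the connector’s permissions, one quick sanity check is whether anonymous logon actually holds the accept-any-recipient right on it. A sketch, with a hypothetical connector name:

Get-ReceiveConnector "MYSERVER\Relay" | Get-ADPermission | where {$_.ExtendedRights -like "ms-Exch-SMTP-Accept-Any-Recipient"}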

The other scenario you would normally use telnet testing for is authentication. This is a bit more complex. After your HELO/EHLO command, issue:

AUTH LOGIN

If basic authentication is supported, you will receive a response of:

334 VXNlcm5hbWU6

This gibberish is actually a base64-encoded prompt that says “Username:”. The expected response is a base64-encoded username, and an online base64 encoder/decoder makes quick work of the translation. So translate the username you are attempting to use into base64 and respond with that. I responded with “logon” encoded:

bG9nb24=

You should receive a response of

334 UGFzc3dvcmQ6

That translates to “Password:”. So now you need to respond with the account’s password encoded in base64. My response was “My simple password”:

TXkgc2ltcGxlIHBhc3N3b3Jk

You should now receive an Authentication successful message

235 Authentication succeeded

And you can continue with the rest of your steps of sending an email.
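
Incidentally, if you would rather not paste credentials into a random website, PowerShell can do the base64 work locally. A quick sketch:

[Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("logon"))
# bG9nb24=

[Text.Encoding]::ASCII.GetString([Convert]::FromBase64String("VXNlcm5hbWU6"))
# Username: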

Was this post helpful? Do you have any topics you would be interested in seeing me cover in a later blog post? Just leave your suggestion in the comments below or shoot me an email.

OWA Login – Your Account has been Disabled

While this may not be a common issue, or at least I certainly hope it is not a common issue for you, it can be a bit vexing to figure out what is going on. You have a user with a recently restored account that is attempting to login to OWA and they are receiving an error similar to the following:

Your account has been disabled.

Copy error details to clipboard

Show details

Request

Url: https://mail.contoso.com:443/owa/

User host address: 1.2.3.4

User: Jane Doe

EX Address: /o=first organization/ou=exchange administrative group
(fydibohf23spdlt)/cn=recipients/cn=jane doe96d

SMTP Address: jdoe@contoso.com

OWA version: 14.2.318.2

The steps leading up to this error are most likely as follows.

  1. A user’s account was deleted and their mailbox removed recently. Possibly by accident or possibly by company politics.
  2. The user’s account is recreated as opposed to restored (which means a new SID and all the fun that goes along with that) and their mailbox is reattached to the account.
  3. The user now attempts to login with their “new” account into their old mailbox.
  4. Angry calls to your help desk now ensue.

Most likely your first thought was to do an iisreset, but in this case you would be wrong. Here is how you clear this issue up swiftly and easily: open up the EMS and run:

Clean-MailboxDatabase -Identity <Database Name>

This kicks off a scan of AD that updates the status of disconnected mailboxes in the targeted database. Alternatively, you could just tell the user to wait until Exchange runs its maintenance cycle on the database, but that answer definitely won’t win you any friends. Now why does this need to be done? As you’ve probably suspected, it is due to cached AD information about the disconnected mailboxes. For more info take a look at KB2682047.
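
If you aren’t sure which database the mailbox is hanging out in, sweeping every database works too; a sketch:

Get-MailboxDatabase | Clean-MailboxDatabase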

SharePoint 2013 mystery error ID4220: The SAML Assertion is either not signed …

While implementing a fresh SharePoint 2013 claims-based authentication site using ADFS 2.0, I ran across this error.

ID4220: The SAML Assertion is either not signed or the signature’s KeyIdentifier cannot be resolved to a SecurityToken. Ensure that the appropriate issuer tokens are present on the token resolver. To handle advanced token resolution requirements, extend Saml11TokenSerializer and override ReadToken.

A Bing/Google search turned up precious little information on this error, and what there was mostly pertained to custom providers, which were not being implemented on this site; it was using the out-of-the-box provider. Going through and validating rules and URLs turned up precious little as well. It did sound a lot like a certificate error, though, so a careful look at the certificates in use showed that I had exported and imported the wrong certificate on the STS: I had grabbed the token decrypting certificate instead of the token signing certificate. This is easily corrected. Export the correct certificate to a DER encoded file and then use the following commands to update your STS with it.

# Load the token signing certificate exported from ADFS
$certPath = "C:\certs\tokensigner.cer"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($certPath)

# Add it to the farm's trusted root authorities
New-SPTrustedRootAuthority -Name "Token Signing Certificate" -Certificate $cert

# Point the trusted identity token issuer at the new certificate
$sts = Get-SPTrustedIdentityTokenIssuer
$sts | Set-SPTrustedIdentityTokenIssuer -ImportTrustCertificate $cert
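
To sanity-check the swap afterwards, you can compare the signing certificate now bound to the token issuer against the one you just imported; a quick sketch:

Get-SPTrustedRootAuthority | Select Name, Certificate

(Get-SPTrustedIdentityTokenIssuer).SigningCertificate.Thumbprint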


Getting a Windows RRAS VPN server working on XenServer

A quick note on this one. I was troubleshooting a problem today with a newly set up Windows RRAS PPTP VPN server that was not working. Or rather, it was kind of working: you could connect and authenticate, but when it came time to pass traffic you could only ping the RRAS server itself. Which is a bit troublesome if you want to access anything else on the network, such as your file server, your domain controller, your Exchange server, and so forth.

Capturing traffic via Wireshark showed that traffic from the VPN client would pass beyond the RRAS server and a reply would be sent; it just never made it back to the client from the RRAS. Some quick queries to Google turned up little beyond the more familiar problems of incorrectly configured multihomed RRAS servers, which proved not to be the case here. It turned out that TCP offloading was rearing its ugly head again. After switching that off in the properties for the NIC in question, traffic immediately started passing back and forth properly. This made for happy clients. So the moral of the story is that you should always suspect offloading, no matter how fixed it is claimed to be. Or perhaps to use Intel NICs instead of Broadcom, but that remains something I will have to test later if I get the opportunity.
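
For reference, offloading can also be toggled globally from an elevated prompt rather than through the NIC properties dialog. These netsh switches are a related knob, not necessarily the exact per-NIC setting I changed, so treat them as a starting point for your own testing:

netsh int tcp set global chimney=disabled
netsh int ip set global taskoffload=disabled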

Problems Installing Exchange 2010 Service Pack 2 on SBS 2011

These problems very likely originate from an already rather screwed-up installation of SBS 2011. I was not involved in the original setup of this particular server, but I do know that a large number of problems had been encountered originally. In this instance the task was to get Exchange 2010 SP2 installed. There are several hoops you may have to jump through to get it installed; here I will recount what I was required to do.

First, make sure you have closed any instance of the SBS Console; otherwise you’ll get a failure in the prerequisite checks. Initially you’ll also need to stop the Windows SBS Manager service, though once the install progresses to the point of working on the installed roles rather than the organization, that is no longer a requirement. Once you’re past those prerequisites, in theory your installation should go smoothly. If that is not the case, read on.
The next problem you may encounter is an error in the Hub Transport Role. In the event logs you’ll find this error:

 Event ID 1002 MSExchangeSetup
 Exchange Server component Hub Transport Role failed.
 Error: Error:
 The following error was generated when "$error.Clear();
 if (get-service MSExchangeServiceHost* | where {$_.name -eq "MSExchangeServiceHost"})
 {
 restart-service MSExchangeServiceHost
 }
 " was run: "Service 'Microsoft Exchange Service Host (MSExchangeServiceHost)' cannot be started due to the following error: Cannot start service MSExchangeServiceHost on computer '.'.".
Service 'Microsoft Exchange Service Host (MSExchangeServiceHost)' cannot be started due to the following error: Cannot start service MSExchangeServiceHost on computer '.'.
Cannot start service MSExchangeServiceHost on computer '.'.
The service cannot be started, either because it is disabled or because it has no enabled devices associated with it

Checking your services, you’ll also find all of the Exchange services disabled. Service packs and update rollups usually disable the services to prevent them from starting unexpectedly while the update is being installed, but in this case SP2 is jinxing itself by preventing a couple of services it needs from starting. The easiest way around this, though not necessarily the safest, is to make sure that all the Exchange services are set to Manual or Automatic at this point. When setup gets down to working on the Hub Transport Role, watch your services and wait for them all to be set to Disabled. Once they are, pop open a PowerShell prompt and run:

Get-Service | where {$_.DisplayName -match "Microsoft Exchange"} | Set-Service -StartupType Manual

Now setup will be able to continue starting the services it requires. Which may lead to your next problem: a failure generating a new self-signed certificate for the Exchange Transport service. You’ll find this error in the event logs:

Event ID 1002 MSExchangeSetup
 Exchange Server component Hub Transport Role failed.
 Error: Error:
 The following error was generated when "$error.Clear();
 Write-ExchangeSetupLog -Info "Creating SBS certificate";
$thumbprint = [Microsoft.Win32.Registry]::GetValue("HKEY_LOCAL_MACHINE\Software\Microsoft\SmallBusinessServer\Networking", "LeafCertThumbPrint", $null);
if (![System.String]::IsNullOrEmpty($thumbprint))
 {
 Write-ExchangeSetupLog -Info "Enabling certificate with thumbprint: $thumbprint for SMTP service";
 Enable-ExchangeCertificate -Thumbprint $thumbprint -Services SMTP;
Write-ExchangeSetupLog -Info "Removing default Exchange Certificate";
 Get-ExchangeCertificate | where {$_.FriendlyName.ToString() -eq "Microsoft Exchange"} | Remove-ExchangeCertificate;
Write-ExchangeSetupLog -Info "Checking if default Exchange Certificate is removed";
 $certs = Get-ExchangeCertificate | where {$_.FriendlyName.ToString() -eq "Microsoft Exchange"};
 if ($certs)
 {
 Write-ExchangeSetupLog -Error "Failed to remove existing exchange certificate"
 }
 }
 else
 {
 Write-ExchangeSetupLog -Warning "Cannot find the SBS certificate";
 }
 " was run: "The internal transport certificate cannot be removed because that would cause the Microsoft Exchange Transport service to stop. To replace the internal transport certificate, create a new certificate. The new certificate will automatically become the internal transport certificate. You can then remove the existing certificate.".
The internal transport certificate cannot be removed because that would cause the Microsoft Exchange Transport service to stop. To replace the internal transport certificate, create a new certificate. The new certificate will automatically become the internal transport certificate. You can then remove the existing certificate.
Error:
 The following error was generated when "$error.Clear();
 Write-ExchangeSetupLog -Info "Creating SBS certificate";
$thumbprint = [Microsoft.Win32.Registry]::GetValue("HKEY_LOCAL_MACHINE\Software\Microsoft\SmallBusinessServer\Networking", "LeafCertThumbPrint", $null);
if (![System.String]::IsNullOrEmpty($thumbprint))
 {
 Write-ExchangeSetupLog -Info "Enabling certificate with thumbprint: $thumbprint for SMTP service";
 Enable-ExchangeCertificate -Thumbprint $thumbprint -Services SMTP;
Write-ExchangeSetupLog -Info "Removing default Exchange Certificate";
 Get-ExchangeCertificate | where {$_.FriendlyName.ToString() -eq "Microsoft Exchange"} | Remove-ExchangeCertificate;
Write-ExchangeSetupLog -Info "Checking if default Exchange Certificate is removed";
 $certs = Get-ExchangeCertificate | where {$_.FriendlyName.ToString() -eq "Microsoft Exchange"};
 if ($certs)
 {
 Write-ExchangeSetupLog -Error "Failed to remove existing exchange certificate"
 }
 }
 else
 {
 Write-ExchangeSetupLog -Warning "Cannot find the SBS certificate";
 }
 " was run: "Failed to remove existing exchange certificate".
Failed to remove existing exchange certificate

This is a very verbose yet also very helpful error. You’ll most likely encounter this if you are not using the default self-signed certificates but have installed a third party certificate. Though I didn’t verify it in this case, reviewing the commands being run suggests it may be choking on a third party certificate that has a friendly name of Microsoft Exchange. To fix this one, first make sure you have a copy of your third party certificate available; if you don’t, export a copy, as you’ll need it later. Once you have that, run through the SBS Set up your Internet address wizard. This will generate another self-signed certificate and replace the third party certificate you have in place. It will also remove the third party certificate from your certificate store, which is why you need a copy of it available. Once you have done this, re-run setup and you’ll be able to finish your installation of SP2. Don’t forget to put the third party certificate back in place; it would also be a good idea to run ExBPA to make sure you are still in compliance. Finally, make sure all of your Exchange services are set back to their appropriate startup values, as you may be left with all the services set to Disabled.
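
For the export and re-import, EMS can handle both ends. A sketch; the paths, domain name, and service list are placeholders for your own values:

# Before the wizard: export the third party certificate with its private key
$pfxPass = Read-Host "PFX password" -AsSecureString
$export = Get-ExchangeCertificate -DomainName mail.contoso.com | Export-ExchangeCertificate -BinaryEncoded:$true -Password $pfxPass
Set-Content -Path C:\certs\thirdparty.pfx -Value $export.FileData -Encoding Byte

# After SP2 finishes: import it again and re-bind the services
Import-ExchangeCertificate -FileData ([Byte[]](Get-Content -Path C:\certs\thirdparty.pfx -Encoding Byte -ReadCount 0)) -Password $pfxPass | Enable-ExchangeCertificate -Services "SMTP,IIS"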

Tutorial on Configuring and Migrating Redirected Folders

In recent migrations I’ve seen some confusion about how to work with redirected folders. Let’s first go over a few reasons redirected folders exist and are used. The most important is that they are absolutely critical in an RDS farm if you want any sort of user data persistence between servers, not to mention they help cut down on the amount of local disk space used by each server. Your users can be load balanced from server to server without worrying about which is their “home” server or having to configure their account on each server, and they can keep their habit of saving critical data to their Documents folder. Another reason is that you get all of your users’ profiles stored in a central location, which means their Documents and Desktop folders are stored centrally, which means you can back them up. Now, when you implement this as just a roaming profile, all of that data is copied down to the server at logon and then synched back to the central location. This slows things down for everyone: network bandwidth is taken up unnecessarily at logon, and logon times get longer for the local user. Here is where folder redirection jumps in to help. With Desktop, Documents, Pictures, and so forth redirected, everything is pulled off a share rather than copied down to the server. That frees up a lot of bandwidth and speeds up logon times, so everyone is a lot happier. You’ll want to nip PSTs in the bud right away, though; otherwise you could end up with a lot of performance problems.

Anyhow, let’s go on to the implementation. We’ll begin with configuring the redirected folders share. Create a share, which we’ll name Folders, and configure the share permissions with Everyone: Full Control. Generally, whenever you create a share, you want to configure the share permissions as Everyone: Full Control unless you have a very good reason not to; normally you control all permissions through NTFS instead, which simplifies management and troubleshooting. For the NTFS permissions, first uncheck Include inheritable permissions from this object’s parent. The permissions you want on this folder are Full Control for SYSTEM, CREATOR OWNER, and Administrators; for Authenticated Users you’ll need to set advanced permissions: Create Folders/Append Data, Read Permissions, Read Attributes, and Read Extended Attributes. This creates a folder where the data is secure from prying eyes yet administrators can still access it without breaking redirection.
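
For those who prefer the command line over the GUI, here is roughly the same setup with net share and icacls; the local path is an example:

md C:\Shares\Folders
net share Folders=C:\Shares\Folders /grant:Everyone,FULL

icacls C:\Shares\Folders /inheritance:r
icacls C:\Shares\Folders /grant "SYSTEM:(OI)(CI)F" "CREATOR OWNER:(OI)(CI)(IO)F" "Administrators:(OI)(CI)F"
icacls C:\Shares\Folders /grant "Authenticated Users:(AD,RC,RA,REA)"

Note that the Authenticated Users grant carries no inheritance flags, so it applies to the top-level folder only, which is exactly what lets users create their own folder without browsing anyone else’s.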

Next up is creating the group policy for configuring folder redirection. Create a new policy and name it Folder Redirection. The section we’ll be working in is User Configuration/Policies/Windows Settings/Folder Redirection. You’ll want to plan out your folder redirection strategy before you start implementing: which folders are important to you, how you are getting the data there, and, perhaps most importantly, how you are going to back this policy out when you’re done. Once you’re done planning, start editing your policy. For this tutorial we’re erring on the side of simplicity.

The first setting gives you two options, Basic and Advanced. Most times you will want Basic, but it depends on what you are trying to achieve. With Basic you point the folder to the share you want, as a UNC of course, i.e. \\storageserver\Folders\. Once selected, you will normally want the option Create a folder for each user under the root path; it will even show you what the path will look like at the bottom. With these options everyone affected by the policy is redirected to the same location. With the Advanced option you get more flexibility, since you can use group membership to choose which share stores the redirected folders. On the next tab over we have Settings. By default users are granted exclusive rights to the folder, and by default the contents of the folder are moved to the new location. This simplifies the job of moving content, but the downside is that it prevents you from pre-staging the move instead of having it happen at logon. But you will have planned this out already, right? The last unchecked option is to apply the policy to 2000/XP/2003 operating systems. Check this depending upon where these folders will be used; note that it disables some redirection options in Vista/7.

The final option is Policy Removal, which you will also have planned out ahead of time. If you select to leave the folder in the new location, then when the policy is removed the profile still redirects to \\storageserver\Folders\ and the data remains there. If you select to redirect the folder back to the local user profile, what happens depends on what you checked for Move the contents to the new location. If you have it checked, the folder redirects to the local profile and the data is copied, not moved, to the local profile; you’ll still need to clean up the old location. If you have it unchecked, the folder redirects to the local profile but all the data stays on the share, and your users end up with empty local folders. This is why you want to plan your exit strategy, because at some point some or all of your users’ data will end up being stored somewhere else. Since we’re preparing a migration scenario, most likely everything will be set up with the defaults, so that is what we are going to do here. We’ll configure redirection for Desktop, Documents, Pictures, Music, Videos, Favorites, and Downloads. Not all of these will be available, depending upon which versions of Windows you are working with. Also note that there is an option for Pictures, Music, and Videos to follow the Documents folder, which is what you’ll want to select unless you have a reason to split them among multiple shares. Don’t forget to allow time for the policy to replicate to any other DCs (or force replication), and note that you may need to run gpupdate on the client to force immediate pick-up of the change.

Now that we have configured our folder redirection, go ahead and populate a few profiles with data. If you check the Folders share you created, you’ll see it getting populated with account names and the redirected folders. Test logging into a few different servers as well to make sure the folders are following your accounts; you can also pull up the properties on them to verify the path points to the share. If that is all working fine, then let’s look at migrating the redirected folders.

We’ve got several options for migrating the folders. The simplest method, and definitely the one you’ll want when dealing with small amounts of data, is to let the policy take care of it for you. Let’s test it out. Create a share somewhere else named NewFolders and configure it with the same share and NTFS permissions as listed earlier. Edit your folder redirection policy and change the path to point to your new server, and make sure you’ve checked Move the contents to the new location; that’s the part that does the work for us. Once you’re done with the changes, give it a test. You’ll probably see a longer logon the first time as data copies across, and there’s also a chance that it won’t be picked up until the next logon due to asynchronous policy processing. Note that the data is actually moved, not copied. This is great when there isn’t much data to move, and you can also do it in phases, moving one folder at a time. Something else you could do, if you want to migrate accounts in phases, is to create policies for redirection and link them to migration OUs created lower than where the original redirection policy is linked.

When you’re working with larger amounts of data, though, you may want to pre-stage the data rather than have it moved at first logon. This requires a bit of work. Since the folders get locked down by default when Grant the user exclusive rights is checked, the administrator account does not have access to the folders, and if you take ownership of them, that breaks redirection, since the policy checks for ownership of the folder. What you need to do is go into the policy and uncheck the exclusive rights option everywhere. At the same time, also uncheck Move the contents to the new location. This is best done as early as possible in the migration, to make sure all clients have picked up the updated settings and to cut down on the amount of weirdness you may encounter. Once this is done, make sure the NTFS permissions mentioned earlier are configured on the top-level folder of the share. Then, if the Administrators group doesn’t have ownership of the folder, take ownership of it and check the box to replace the owner on subcontainers and objects. OK out of everything, then open up the advanced NTFS permissions and check the box Replace all child object permissions with inheritable permissions from this object. Now use whatever method you prefer to copy the data to the new redirected folders share; robocopy is my preference.
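
A robocopy along these lines copies the data and carries the NTFS security and ownership with it; the paths are placeholders, and /COPYALL requires running as an administrator with backup rights:

robocopy \\storageserver\Folders \\newserver\NewFolders /E /COPYALL /R:1 /W:1 /LOG:C:\logs\folders-move.log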

Now that you’ve pre-staged all the data and the policies are configured so that permissions do not break anything, it is time to update the policies to point to the new shared folder. Same as last time: just update the UNC to the new location, once again making sure that Move the contents to the new location is unchecked. You will probably want to take the old share offline just to be safe; this will flush out any systems that are not processing group policy properly.

Now, what happens if you delete the group policy rather than reconfigure it? Refer back to the section on Policy Removal in paragraph 5. Assuming the policy you deleted was left at the defaults for policy removal, all clients will be left pointing at the old share until told differently. The fix is simple: create a policy with the new redirection settings, and once it is picked up, the user will be pointed to the new location. What if you are trying to remove folder redirection altogether? Hopefully you set the policy removal to redirect back to the local user profile. If you have not, create a policy and set each redirected folder’s target location to Redirect to the local user profile location. Once this policy has been applied everywhere, it is safe to delete it altogether.

References:

http://technet.microsoft.com/en-us/library/cc732275.aspx

http://support.microsoft.com/kb/288991

USB Drive Disappears from Removable Storage on XenServer after a Reboot

A quick fix for an annoying problem I ran across where removable storage no longer shows the attached USB drives after a reboot under XenServer 5.6. Pop open a console window on your XenServer host:

modprobe -r usb_storage … this removes the usb_storage kernel driver

modprobe usb_storage … this reloads the usb_storage kernel driver

That should get your drives back; if you don’t see them, just do a rescan.

xe sr-list | grep -i removable -B 1 … use this to find the UUID of your removable storage SR

xe sr-scan uuid=<uuid of removable storage> … your usb drives should be showing up now ready to be attached to your VM

Quick review of flushdns, registerdns, and DNS queries

There seems to be a bit of a misconception about how DNS cache flushing works. I’ve heard techs talk about running ipconfig /flushdns and ipconfig /registerdns together to flush the DNS cache. A bit of clarification on how these commands actually work is in order:

ipconfig /flushdns: “Flushes and resets the contents of the DNS client resolver cache. During DNS troubleshooting, you can use this procedure to discard negative cache entries from the cache, as well as any other entries that have been added dynamically”

ipconfig /registerdns: “Initiates manual dynamic registration for the DNS names and IP addresses that are configured at a computer. You can use this parameter to troubleshoot a failed DNS name registration or resolve a dynamic update problem between a client and the DNS server without rebooting the client computer. The DNS settings in the advanced properties of the TCP/IP protocol determine which names are registered in DNS.”

As you can see from the documentation above, the two parameters operate independently. You would only issue /registerdns in cases where the client system’s name is not being resolved; there is no requirement to run it alongside /flushdns.

Something you may find of interest: there is also a parameter to show the contents of the DNS cache. ipconfig /displaydns will print the entire contents of the DNS cache in the terminal window. From there you can verify whether it truly has the correct address for whatever you’re having trouble resolving.
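
If the cache is too big to eyeball, filtering it down works well; the hostname here is only an example:

ipconfig /displaydns | findstr /i "microsoft"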

A quick refresher on how name resolution works. First the name is submitted for DNS resolution. The system checks whether the name is an FQDN, a single-label name, or a multi-label name; this is determined by the dots within the name, i.e. www.microsoft.com. is an FQDN, www.microsoft.com is multi-label, and just www is single-label. Note the terminating period on the FQDN and the lack of one on the multi-label name. Let’s first check how resolution works for an FQDN:

  1. Checks the DNS cache (built from previous DNS queries and the hosts file; the hosts file always wins)
  2. Queries the primary DNS server
  3. If no response within two seconds, queries all remaining DNS servers
  4. Resends queries to all servers at the four and eight second marks
  5. Returns time-outs for all queries after thirty seconds
  6. The query is evaluated on whether it is 15 bytes or less
  7. If it is, the query is submitted for NetBIOS resolution
  8. The query finally fails if no resolution has been achieved

Now, if a multi-label name is submitted, such as www.microsoft.com (note the lack of a terminating period), the resolver terminates it with a period to make it an FQDN and submits it to the same resolution sequence as above, with a slight difference:

  1. Checks the DNS cache (built from previous DNS queries and the hosts file; the hosts file always wins)
  2. Queries the primary DNS server
  3. If no response within two seconds, queries all remaining DNS servers
  4. Resends queries to all servers at the four and eight second marks
  5. Returns time-outs for all queries after thirty seconds
  6. Queries are re-issued with the connection-specific DNS suffix appended
  7. Queries are then re-issued, devolving the parent DNS suffix until only two labels are left
  8. The query is evaluated on whether it is 15 bytes or less
  9. If it is, the query is submitted for NetBIOS resolution
  10. The query finally fails if no resolution has been achieved

For a single-label name, the connection-specific DNS suffix is appended immediately and the name is then submitted to the same resolution order as an FQDN.

For more information and flow charts look at the documentation links below.

Documentation taken from here:

http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/ipconfig.mspx?mfr=true

http://technet.microsoft.com/en-us/library/cc961411.aspx
