Category Archives: Articles


Very handy PowerShell commands to make your life easy breezy

When I get some downtime around here, which is rare, I go through some of my PowerShell RSS posts. Ed Wilson always has some good ones to read through, and I felt like I should pass along this gem from him: The Five Best PowerShell Cmdlets. I was curious, so I bit, and discovered that somehow I have been missing out on Out-GridView my entire life! It does require PowerShell ISE to be installed, though, so it is not quite out-of-the-box useful.
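If you haven't met Out-GridView either, here's the thirty-second demo. Any object pipeline works; the property names below are just a sample selection:

```powershell
# Pipe anything into Out-GridView for a sortable, filterable GUI grid
# (requires the ISE / Windows GUI components to be installed)
Get-Process | Select-Object Name, Id, WS, CPU | Out-GridView
```

You get free sorting, column filtering, and a criteria builder, which beats squinting at Format-Table output.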


Copying Receive Connectors Hither and Yon

If you have done a number of Exchange migrations, or have a large number of servers to migrate in a single migration, I am sure you have run into the pain of replicating receive connectors to the new server. There are lots of settings to copy down and move over, plus the headache of explicit permissions granted on the connector in the case of relays or other special-use connectors. That can waste a lot of time you would much rather spend on the finale of Sherlock season 3. Let's see if we can simplify that today with this script, Copy-ReceiveConnector. You call the script as follows:

Copy-ReceiveConnector -SourceConnector "EXCHANGE\Alternate Receive Connector" -DestinationServer NEWEXCHANGE -DomainController dc01

This will create a new receive connector on the destination server with all of the settings specified on the old receive connector. It will then loop through all of the non-inherited permissions on the connector and copy those over. You can also specify a new name for the connector via -Name. On to the code.

<#
.SYNOPSIS
Copy-ReceiveConnector - Copies a receive connector from a source server to a
destination server
.DESCRIPTION
Takes the source receive connector and creates a copy on the destination
server with values populated from the source receive connector.
.PARAMETER SourceConnector
Identity of the source receive connector
.PARAMETER DestinationServer
Server name of the destination Exchange server
.PARAMETER DomainController
Target domain controller for setting the configuration
.PARAMETER Name
Optional new name for the connector
.EXAMPLE
Copy-ReceiveConnector -SourceConnector "EXCHANGE\Alternate Receive Connector" -DestinationServer NEWEXCHANGE -DomainController dc01
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$True)][string]$SourceConnector,
[Parameter(Mandatory=$True)][string]$DestinationServer,
[Parameter(Mandatory=$True)][string]$DomainController,
[Parameter(Mandatory=$False)][string]$Name
)
Import-Module ActiveDirectory
# Get the values for the old connector
$Source = Get-ReceiveConnector -Identity $SourceConnector
# Update the name if specified
if($Name)
{
 $Source.Name = $Name
}
# Custom permission group is not allowed in Exchange 2013 so we need to remove it
# Nothing to be concerned about since the ACEs are explicitly copied over.
$TempArray = @($Source.PermissionGroups) -split ", " | Select-String -Pattern "Custom" -NotMatch
$TempString = "$($TempArray)"
$Source.PermissionGroups = $TempString.Replace(" ", ", ")
# Copy all the values over to create the new connector on the 2013 server
New-ReceiveConnector -Bindings $Source.Bindings -Server $DestinationServer -DomainController $DomainController -Name $Source.Name -RemoteIPRanges $Source.RemoteIPRanges -AdvertiseClientSettings $Source.AdvertiseClientSettings -AuthMechanism $Source.AuthMechanism -Banner $Source.Banner -BinaryMimeEnabled $Source.BinaryMimeEnabled -ChunkingEnabled $Source.ChunkingEnabled -Comment $Source.Comment -ConnectionInactivityTimeout $Source.ConnectionInactivityTimeout -ConnectionTimeout $Source.ConnectionTimeout -DefaultDomain $Source.DefaultDomain -DeliveryStatusNotificationEnabled $Source.DeliveryStatusNotificationEnabled -DomainSecureEnabled $Source.DomainSecureEnabled -EightBitMimeEnabled $Source.EightBitMimeEnabled -EnableAuthGSSAPI $Source.EnableAuthGSSAPI -Enabled $Source.Enabled -EnhancedStatusCodesEnabled $Source.EnhancedStatusCodesEnabled -ExtendedProtectionPolicy $Source.ExtendedProtectionPolicy -Fqdn $Source.Fqdn -LongAddressesEnabled $Source.LongAddressesEnabled -MaxAcknowledgementDelay $Source.MaxAcknowledgementDelay -MaxHeaderSize $Source.MaxHeaderSize -MaxHopCount $Source.MaxHopCount -MaxInboundConnection $Source.MaxInboundConnection -MaxInboundConnectionPercentagePerSource $Source.MaxInboundConnectionPercentagePerSource -MaxInboundConnectionPerSource $Source.MaxInboundConnectionPerSource -MaxLocalHopCount $Source.MaxLocalHopCount -MaxLogonFailures $Source.MaxLogonFailures -MaxMessageSize $Source.MaxMessageSize -MaxProtocolErrors $Source.MaxProtocolErrors -MaxRecipientsPerMessage $Source.MaxRecipientsPerMessage -MessageRateLimit $Source.MessageRateLimit -MessageRateSource $Source.MessageRateSource -PermissionGroups $Source.PermissionGroups -PipeliningEnabled $Source.PipeliningEnabled -ProtocolLoggingLevel $Source.ProtocolLoggingLevel -RequireEHLODomain $Source.RequireEHLODomain -RequireTLS $Source.RequireTLS -ServiceDiscoveryFqdn $Source.ServiceDiscoveryFqdn -SizeEnabled $Source.SizeEnabled -SuppressXAnonymousTls $Source.SuppressXAnonymousTls -TarpitInterval $Source.TarpitInterval -TlsDomainCapabilities $Source.TlsDomainCapabilities -TransportRole $Source.TransportRole
# Next we need to copy over all of the explicitly created permissions
$ConnectorPermissions = Get-ReceiveConnector -Identity $SourceConnector | Get-ADPermission | where {$_.IsInherited -eq $false}
$ConnectorPermissions | foreach {
 Get-ReceiveConnector "$($DestinationServer)\$($Source.Name)" | Add-ADPermission -DomainController $DomainController -User $_.User -Deny:$_.Deny -AccessRights $_.AccessRights -ExtendedRights $_.ExtendedRights
}
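One aside on the PermissionGroups juggling in the middle of the script, since it looks cryptic: it just strips "Custom" out of the comma-separated list. With illustrative group values:

```powershell
# Illustrative walkthrough of the PermissionGroups cleanup
$Groups = "ExchangeUsers, ExchangeServers, Custom"
$TempArray = @($Groups) -split ", " | Select-String -Pattern "Custom" -NotMatch
$TempString = "$($TempArray)"           # "ExchangeUsers ExchangeServers"
$TempString.Replace(" ", ", ")          # "ExchangeUsers, ExchangeServers"
```

Stringifying the array joins the surviving entries with spaces, and the final Replace turns those back into the comma-separated form the cmdlet expects.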

And as a bonus, here’s a script for copying over just the permissions configured on a connector, in case you wanted to roll your own connector but didn’t want to spend the time redefining all of the permissions. Usage differs slightly from above: you specify a source and a destination connector.

Copy-ReceiveConnectorPermissions -SourceConnector "EXCHANGE\Alternate Receive Connector" -DestinationConnector "NEWEXCHANGE\New Receive Connector"
<#
.SYNOPSIS
Copy-ReceiveConnectorPermissions - Copies the permissions from the source
connector to the destination connector
.DESCRIPTION
Takes the source receive connector, retrieves all of the explicitly defined
permissions, then applies them to the destination receive connector
.PARAMETER SourceConnector
Identity of the source receive connector
.PARAMETER DestinationConnector
Identity of the destination receive connector
.EXAMPLE
Copy-ReceiveConnectorPermissions -SourceConnector "EXCHANGE\Alternate Receive Connector" -DestinationConnector "NEWEXCHANGE\New Receive Connector"
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$True)][string]$SourceConnector,
[Parameter(Mandatory=$True)][string]$DestinationConnector
)
Import-Module ActiveDirectory
# We need to copy over all of the explicitly created permissions
$ConnectorPermissions = Get-ReceiveConnector -Identity $SourceConnector | Get-ADPermission | where {$_.IsInherited -eq $false}
$ConnectorPermissions | foreach {
 Get-ReceiveConnector "$($DestinationConnector)" | Add-ADPermission -User $_.User -Deny:$_.Deny -AccessRights $_.AccessRights -ExtendedRights $_.ExtendedRights
}

A Mass Contact Conversion Experience

This may save you a fair bit of trouble in the future. I was working with a client whose Exchange 2007 server was overrun with contacts. That’s not necessarily a problem in and of itself, but these contacts had served their purpose and now needed to be turned into mail-enabled users. They already had a number of duplicates between AD users and contacts, and it was causing a lot of trouble in their SharePoint hierarchy. Fortunately, PowerShell can step in and save the day! Or would you really enjoy manually copying over all of the contacts’ profile data to AD users? Not my idea of fun.

Here’s the code. You’ll want to tweak it to your purposes, of course. I wanted some clear on-screen results as it progressed, plus an HTML report I could reference afterwards. This marked my first venture into LINQ, with moderate success. I will have to revisit things later to get it working exactly how I want.

<#
.SYNOPSIS
Reads in a CSV of contacts or mail enabled users and moves the contacts to mail enabled users or updates
the current mail enabled users

.DESCRIPTION
Convert-CSVContactsToUsers reads in the CSV specified in the command line. This CSV contains a list
of users that either currently have a contact which needs to be converted to a mail enabled
user, have both a contact and a user that is not mail enabled which needs the info from the contact
copied over to the user which is then mail enabled, or have neither a contact nor a user and need
to have a mail enabled user created for them.

.PARAMETER FilePath
The path and filename to the CSV containing the required users.

.PARAMETER DomainController
The NetBIOS or FQDN for the target DC to use.

.PARAMETER ResultsFile
The filename to save the HTML formatted results to.

.EXAMPLE
Convert-CSVContactstoUsers -FilePath .\UserList.csv -DomainController dc01 -ResultsFile .\Results.html

.NOTES
Requires Server 2008 R2 and should be run from the EMS.
.NET 3.5 and above required for HTML output.
Appropriate domain and Exchange rights are required.

Revision History
v1.0 - Initial release
v1.1 - Added X500 address, moved status report
#>

[CmdletBinding()]
param(
[Parameter(Mandatory=$True)][string]$FilePath,
[Parameter(Mandatory=$True)][string]$DomainController,
[Parameter(Mandatory=$False)][string]$ResultsFile
)

Import-Module ActiveDirectory

#$ErrorActionPreference= 'silentlycontinue'
$CSVList = Import-CSV -Path "$($FilePath)"
$Password = ConvertTo-SecureString -String "expPassword!!" -AsPlainText -Force

# Create a custom table for storing the results
$ResultsTable = New-Object System.Data.DataTable "Conversion Results"
$column01 = New-Object System.Data.DataColumn User
$column02 = New-Object System.Data.DataColumn Result

$ResultsTable.columns.add($column01)
$ResultsTable.columns.add($column02)

# Loop through each CSV entry, check for object existence
# then process the object based on existence and type
foreach($TargetUser in $CSVList)
{
# Check for existence of any objects
$ADContact = Get-ADObject -LdapFilter "(&(CN=$($TargetUser.Name))(ObjectClass=contact))" -SearchScope Subtree -Properties * -Server $DomainController
$ADUser = Get-ADObject -LdapFilter "(&(CN=$($TargetUser.Name))(ObjectClass=user))" -SearchScope Subtree -Properties * -Server $DomainController
$ResultsRow = $ResultsTable.NewRow()
$ResultsRow.User = $TargetUser.Name
[string]$Status = $null

# If both contact and user exist, copy the info from the contact
# into the user's properties, remove the contact, then mail enable the user
if($ADContact -and $ADUser)
{
# First copy over any of the current profile details
# (only attributes actually set on the contact are copied)
$AttributeList = "directReports","homePhone","facsimileTelephoneNumber","l",
"manager","mobile","physicalDeliveryOfficeName","postalCode","sn","st",
"streetAddress","telephoneNumber","title","department","Description","c","co",
"countryCode","info","initials","ipPhone","pager","wWWHomePage",
"postOfficeBox","company"
foreach($Attribute in $AttributeList)
{
if($ADContact.$Attribute)
{
$ADUser.$Attribute = $ADContact.$Attribute
}
}

# Update the user with the current info
Set-ADObject -Instance $ADUser -Server $DomainController

# Loop through the groups and add the user to them
foreach($ADGroup in $ADContact.memberOf)
{
Add-ADGroupMember -Identity "$($ADGroup)" -Members $ADUser -Server $DomainController
}

# Next, remove the contact
Remove-MailContact -Identity $ADContact.DistinguishedName -Confirm:$false -DomainController $DomainController

# Enable the current user, then copy over the remaining attributes
Enable-MailUser -Identity $ADUser.DistinguishedName -ExternalEmailAddress $ADContact.mail -Alias $ADContact.mailNickname -DomainController $DomainController | Out-Null

# Add the X500 address if the contact was stamped with a legacyExchangeDN
if($ADContact.legacyExchangeDN)
{
$X500User = Get-MailUser -Identity $ADUser.DistinguishedName -DomainController $DomainController
$X500User.EmailAddresses += [Microsoft.Exchange.Data.CustomProxyAddress]("X500:$($ADContact.legacyExchangeDN)")

Set-MailUser -Identity $X500User -EmailAddresses $X500User.EmailAddresses -DomainController $DomainController
}

Write-Host "$($TargetUser.Name) converted."
$Status = "Success"
}
elseif($ADContact)
{
# First remove the contact
Remove-MailContact -Identity $ADContact.DistinguishedName -Confirm:$false -DomainController $DomainController

# Create the mail enabled user
New-MailUser -Name "$($TargetUser.Name)" -ExternalEmailAddress $ADContact.mail -Alias $ADContact.mailNickname -UserPrincipalName "$($ADContact.mailNickname)@domain.local" -Password $Password -ResetPasswordOnNextLogon:$true -DomainController $DomainController | Out-Null

# Then copy in the attributes from the contact
$ADUser = Get-ADObject -LdapFilter "(&(CN=$($TargetUser.Name))(ObjectClass=user))" -SearchScope Subtree -Properties * -Server $DomainController
$AttributeList = "directReports","homePhone","facsimileTelephoneNumber","l",
"manager","mobile","physicalDeliveryOfficeName","postalCode","sn","st",
"streetAddress","telephoneNumber","title","department","Description","c","co",
"countryCode","info","initials","ipPhone","pager","wWWHomePage",
"postOfficeBox","company"
foreach($Attribute in $AttributeList)
{
$ADUser.$Attribute = $ADContact.$Attribute
}
# Copying over the X500 address if it exists
if($ADContact.legacyExchangeDN)
{
$X500User = Get-MailUser -Identity $ADUser.DistinguishedName -DomainController $DomainController
$X500User.EmailAddresses += [Microsoft.Exchange.Data.CustomProxyAddress]("X500:$($ADContact.legacyExchangeDN)")

Set-MailUser -Identity $X500User -EmailAddresses $X500User.EmailAddresses -DomainController $DomainController
}

Set-ADObject -Instance $ADUser -Server $DomainController

# Loop through the groups and add the user to them
foreach($ADGroup in $ADContact.memberOf)
{
Add-ADGroupMember -Identity "$($ADGroup)" -Members $ADUser -Server $DomainController
}

Write-Host -ForegroundColor Yellow "$($TargetUser.Name) created."
$Status = "Success"
}
elseif($ADUser)
{
# Only a user is found
Write-Host -ForegroundColor Cyan "$($TargetUser.Name) already exists."
$Status = "Exists"
}
else
{
Write-Host -ForegroundColor Magenta "$($TargetUser.Name) not found!"
$Status = "Failed"
}

# Update the results
$ResultsRow.Result = $Status

# Clear the variables to prevent false positives on the next loop
$ADContact = $null
$ADUser = $null
$Status = $null

# Update the results table
$ResultsTable.Rows.Add($ResultsRow)
}

# Check if HTML results should be written out
if($ResultsFile)
{
# Build the style sheet for the page
$Style = "<style>"
$Style = $Style + "body{font-family: `"Century Gothic`"; font-size: 10pt;}"
$Style = $Style + "table{border-width: 1px; border-style: solid; border-color: black; border-collapse: collapse; }"
$Style = $Style + "th{border-width: 1px; border-style: solid; border-color: black; background-color: #CBFEFF; }"
$Style = $Style + "td{border-width: 1px; border-style: solid; border-color: black; text-align: center}"
$Style = $Style + "</style>"

# LINQ will be used for easier custom formatting
Add-Type -AssemblyName System.Xml.Linq

# Convert the desired columns into HTML and convert it to XML
$xml = [System.Xml.Linq.XDocument]::Parse("$($ResultsTable | Select User,Result | ConvertTo-Html -Head $Style)")

# Define the namespace
if($Namespace = $xml.Root.Attribute("xmlns").Value)
{
$Namespace = "{{{0}}}" -f $Namespace
}

#
# $xml.Descendants().Value is not returning the values for some reason here
# Have to resort to alternate index discovery
#
#$wsIndex = [Array]::IndexOf($xml.Descendants("${Namespace}th").Value, "Result")
[Array]$xmlArray = $xml.Descendants("${Namespace}th")
for($i=0;$i -le $xmlArray.length-1;$i++)
{
#We're color coding the Results column
if($xmlArray[$i] -match "Result")
{
$wsIndex = $i
}
}

# Loop through each row and assign a text color and background color to the result
foreach($row in $xml.Descendants("${Namespace}tr"))
{

switch(@($row.Descendants("${Namespace}td"))[$wsIndex])
{
{"Success" -eq $_.Value} {$_.SetAttributeValue("style", "color: #006600; background-color: #F1FEFE;"); continue}
{"Failed" -eq $_.Value} {$_.SetAttributeValue("style", "color: #660000; background-color: #F1FEFE;"); continue }
{"Exists" -eq $_.Value} {$_.SetAttributeValue("style", "color: #4D6600; background-color: #F1FEFE;"); continue }
}
}

# Save to the desired file
$xml.Save("$pwd/$($ResultsFile)")
}

Use safely!

Save those service accounts!

Have you ever woken up at 2:33 am to the phone ringing off the hook because some service is no longer running and it must be fixed right now? And the service stopped because someone reset the password on a service account last week? I’ve seen this scenario play out way more than it ever should. Let’s nip that problem of service accounts running wild right away.

<#
.SYNOPSIS
Checks all servers available in the domain for services running under service
accounts.
.DESCRIPTION
Find-AllServiceAccounts grabs a list of all servers from the current domain and
uses WMI calls to retrieve a list of services from the servers. It then loops
through the services checking for service accounts and writes any service
accounts found to a CSV specifically for that server.
.NOTES
You will be prompted for credentials when first running this.
#>
Import-Module ActiveDirectory
# Default system accounts to exclude
$ExcludedServices = "NT AUTHORITY\LocalService", "LocalSystem", "NT AUTHORITY\NETWORK SERVICE", "NT AUTHORITY\NetworkService"

# Will pop up a prompt to provide credentials for making WMI calls
# to the remote systems
$Credentials = Get-Credential

# Uses the LDAP search results for computers
$HostList = Get-ADObject -SearchBase (Get-ADRootDSE).defaultNamingContext -LDAPFilter "(&(objectCategory=computer)(|(operatingSystem=Windows Server*)(operatingSystem=Windows 2000 Server)))" -Properties Name
$HostName = Get-Content env:computername

# Pass a service name in and check to see if it matches the excluded
# service accounts that you've preconfigured
function Check-ServiceName($ThisServiceName)
{
<#
.SYNOPSIS
Check-ServiceName verifies if a service is not a default service account
.DESCRIPTION
Check-ServiceName checks the service account for the specified service against
the list of excluded services from the $ExcludedServices variable. Returns $true
or $false.
.PARAMETER ThisServiceName
The service account to be checked against.
#>
  foreach($ExcludedName in $ExcludedServices)
  {
     if($ThisServiceName -eq $ExcludedName.ToLower())
     {
       return $true
     }
  }
  return $false
}

[array]$BadHostList = $null
$CSVCount = 0
# Our main loop, iterates through each system listed in
# the file and pulls all of their services
foreach($HostServer in $HostList)
{
  # Grab the list of services from the system
  if ($HostServer.Name -ne $HostName)
  {
    $ServiceList = get-WMIObject -Class win32_service -ComputerName $HostServer.name -Credential $Credentials -Property name,startname
  }
  # If it is the localhost then we need to exclude the credentials parameter
  else
  {
    $ServiceList = get-WMIObject -Class win32_service -ComputerName $HostServer.name -Property name,startname
  }

  # If the host could be contacted, loop through and check each service
  if($ServiceList)
  {
    [array]$CSVArray = $null

    foreach($ThisService in $ServiceList)
    {
      $CheckResult = Check-ServiceName($ThisService.StartName.ToLower())
      if ($CheckResult -eq $false)
      {
        Write-Host "Service: " $ThisService.name
        Write-Host "Account: " $ThisService.StartName
        $CSVArray += ,$ThisService | select Name,StartName    
      }
    }

    if ($CSVArray.count -gt 0)
    {
      Write-Host "Saving .csv for $HostServer"
      Write-Host ""
      $CSVArray | Export-CSV "$HostServer.csv"
      $CSVCount++
    }
  }
  # The host could not be contacted, note it down
  else
  {
    Write-Host "$($HostServer.name) is unreachable."
    $BadHostList += $HostServer
  }
}

if($CSVCount -gt 1)
{
  Write-Host "Done, wrote out $($CSVCount) .csv files."
}
elseif($CSVCount -eq 1)
{
  Write-Host "Done, wrote out $($CSVCount) .csv file."
}
else
{
  Write-Host "No .csv files written out, no service accounts found."
}

if($BadHostList)
{
  foreach($TargetHost in $BadHostList)
  {
    Write-Host "$($TargetHost.Name) at $($TargetHost.DistinguishedName) was unreachable."
  }
}

What this does for you is run through all of your servers (assuming they will respond to WMI queries; you’ll want to check your firewall policies) and dump out a CSV for any server with service accounts, reporting which services are using service accounts and which accounts those are. This will save you much frustration in tracking down who will be impacted by a service account password reset, and it helps with security audits, password-reset frenzies, and just plain documentation.

Speaking of security audits and password resets, in an upcoming post I’ll show you how to easily update all of your service account passwords so that you can spend the rest of the day playing Animal Crossing rather than remoting from server to server to server …

Did this post help you out? Do you have any questions or a specific topic you’d like me to delve into? Just let me know, thanks!

Easy Ways to find your Mail Enabled Public Folders

It looks like some of you want an easy way to find your mail-enabled public folders. That is definitely something good to know, especially when you are planning out a migration or being thrown into a new and probably fragile environment. The documentation is never up to date, of course, so you have to dig it out yourself. Here comes PowerShell to rescue you from manually slogging through it all!

$Servers = Get-PublicFolderDatabase
foreach($TargetServer in $Servers)
{
    Get-PublicFolder -Recurse -Server $TargetServer.Server | where {$_.MailEnabled -eq $true}
}

This will grab all of the mail-enabled public folders from all of the servers in your organization. But maybe you need to dig in further, say into the AD objects themselves, for fixing things like missing homeMDB entries or other woes. Or even just plain documentation. This one-liner will do it for you.

Get-ADObject -SearchBase ( "CN=Microsoft Exchange System Objects," + (Get-ADRootDSE).rootDomainNamingContext) -SearchScope OneLevel -Filter { (mail -like "*") -and (ObjectClass -eq "publicFolder")} | select Name

I hope this helps out a few of you out there.

5 Extremely Useful Tools to Simplify your Microsoft Exchange Life

Here’s what I find myself using in my day to day life in working with Exchange. If it weren’t for these then troubleshooting and automation would be a lot more difficult and I would find myself throwing my life away. Why waste time when you could be watching E3 game trailers instead?

  1. PowerShell! This one definitely has to take the first place spot in mention, not that I have any particular order to how I’m listing everything. If it weren’t for PowerShell then management of even a single Exchange server would be much more tiresome. Just look at mass creating a number of new users: HR sends you an Excel sheet with all of their details, you save the pertinent bits out to a CSV, then just run it through a little PowerShell script:
    $NewUsers = Import-Csv -Path C:\Import\UserList.csv
    foreach($NewUser in $NewUsers)
    {
                    New-Mailbox -Name $NewUser.Name -Password (ConvertTo-SecureString $NewUser.Password -AsPlainText -Force) -UserPrincipalName $NewUser.UPN -Alias $NewUser.Alias -Database $NewUser.TargetDatabase -DisplayName ($NewUser.FirstName + " " + $NewUser.LastName) -FirstName $NewUser.FirstName -LastName $NewUser.LastName -OrganizationalUnit $NewUser.OU -Phone $NewUser.Phone -ResetPasswordOnNextLogon:$true
    }

    Tada, creating new users has been simplified from hours of manual labor to a few minutes of CSV formatting and scripted importing.

  2. mxtoolbox.com This is a site that gets used often in troubleshooting. I can quickly check on the MX records for any domain including my own, run through a list of BLs to see if my domain is listed, and very importantly run diagnostics on my mail server’s external-facing connectors to see what errors may come up. This is where I turn if I don’t have a way to telnet in from the outside. There are a number of other useful tools there as well, though they don’t receive as much use as the BL and diagnostics.
  3. testexchangeconnectivity.com A very important site if you are running Exchange migrations. The ActiveSync and Outlook Anywhere tests help greatly for verifying all of your autodiscover functionality and your other CAS services. The Lync tests are great as well if you have that as part of your organization. It is so much easier than having to call up some external user and have them test such and such functionality, again.
  4. migrationwiz.com Speaking of Exchange migrations, I’ve found this particular service to be extremely useful when working with migrations that are not Exchange to Exchange. MigrationWiz will log in to the target service and transfer mailboxes to your Exchange server. It is as simple as that. No annoying process of exporting and importing PSTs. Furthermore, since incremental transfers are available, you can spread out your migration without needing to do an overnight cutover.
  5. telnet and nslookup Since these are small I figured I could combine them into one entry. These are also some extremely vital tools in your Microsoft Exchange swiss army knife. I’ve already talked about the usefulness of telnet in previous posts so I won’t bore you all over again. Nslookup is a fantastic way to quickly verify records, such as comparing your internal DNS records to your external records. Run nslookup autodiscover.contoso.com and nslookup autodiscover.contoso.com 8.8.8.8. The first will check what it resolves to internally while the second will query Google’s public DNS for the external autodiscover record. MX record lookups are simple as well: nslookup -type=mx contoso.com 8.8.8.8.
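Spelled out, the lookups from that last item look like this (contoso.com standing in for your own domain):

```powershell
nslookup autodiscover.contoso.com           # what your internal DNS hands out
nslookup autodiscover.contoso.com 8.8.8.8   # what the world sees, via Google public DNS
nslookup -type=mx contoso.com 8.8.8.8       # external MX records for the domain
```

Comparing the first two answers side by side is the fastest way to spot a split-brain DNS problem with autodiscover.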

Any favorite tools that you find yourself using over and over for your Exchange servers or migrations? Please let me know in the comments.

The Welcome News of the Death of SBS


SBS wanders off into the black abyss

Somehow I missed this bit of news last year: The Death of the Small Business Server. Possibly because I haven’t had to deal with SBS in a serious manner for a while now. Anyhow, from the point of view of an Exchange administrator it is welcome news that SBS 2011 is the end of the line for the SBS. I have always found SBS to be a pain to work with. The migration wizards were prone to breaking in mysterious ways. The POP3 connector was horrid to troubleshoot and would still choke on a seemingly normal email all too easily. The most annoying thing was being forced to use the wizards for most things you do, and to suffer the consequences if someone else did not use a wizard.

For small businesses there don’t seem to be any genuine alternatives, unfortunately. Several Linux-based alternatives are presented over at The VAR Guy, but I don’t see a full Linux solution being a comfortable route, at least for most of the small businesses I’ve worked with in the past. There are a few odd ones out that embrace any alternative to Microsoft, of course. Windows Server 2012 Essentials and the Zimbra Collaboration Server virtual appliance may be a better compromise for MSPs that are reluctant to touch a full Linux alternative for an on-site Exchange substitute. For me, I’ll just be happy that I won’t have to worry about Exchange 2013 shoehorned into an SBS.

The Number One Easy Way to Setup a Failed Migration

It surprises me how much I run across this one but then again I have been guilty of it as well.

Eater of backups

I eat backups! Garr!

There is a very important first step that I find skipped over and forgotten quite often when it comes to running an Exchange migration, or really any other kind of migration. Have you taken a system state backup of AD yet? No? Then you’re just spinning the bottle and hoping it doesn’t end up with you getting cozy with Microsoft’s support, hoping they can fix your screwed-up Active Directory.

Don’t make the mistake of assuming backups are working

I made this mistake once upon a time, on one of the first Exchange migrations I ran. I didn’t feel like being bothered to take a backup of AD, as the server was really slow, and I was confident that the nightly backups had taken care of everything anyhow. I didn’t bother to validate this. So I went directly into running the migration, and everything was going smoothly at first. But then, part of the way through, I found that AD replication had broken, and that it possibly had been that way for a while. It would have been easy to roll back to an AD backup, correct the problem, and then retrace my steps, but unfortunately that wasn’t an option. Because I hadn’t taken a backup. The nightly backups hadn’t worked in several months either. That led to a call with Microsoft later on, and then to spending even more hours fixing things manually via ADSIedit when they couldn’t figure it out.
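The replication check that would have caught this early is a one-liner with the standard AD tools (run from any domain-joined admin prompt):

```powershell
# Summarize replication health across all DCs - look for failures and stale deltas
repadmin /replsummary
# Quiet DC health check - only prints the tests that fail
dcdiag /q
```

Two minutes of reading that output before touching setup would have changed the whole story above.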

I don’t want to be the one cleaning up after you

It is a very simple step to take at the very beginning. Just grab a backup before you run your first setup.com /PrepareAD. While you’re at it, why don’t you test the backup of your current mail server and make sure that it is working OK as well. Trust me on this. You don’t want to be the one explaining to your boss that the data is gone because your only valid backup is from 3 years ago. Your backups are working, right? You might want to double check on that just to be sure. I recommend a mock restore for that extra bit of assurance.
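For the record, grabbing that system state backup is a single command on Server 2008 and up with Windows Server Backup installed (E: here is an assumed local backup target; point it wherever you have space):

```powershell
# System state backup of a domain controller before setup.com /PrepareAD
wbadmin start systemstatebackup -backupTarget:E: -quiet
```

It takes a while on a slow box, but nowhere near as long as a week of ADSIedit surgery.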

Quick review of flushdns, registerdns, and DNS queries

There seems to be a bit of a misconception about how DNS cache flushing works. I've heard techs talk about running ipconfig /flushdns and ipconfig /registerdns together to flush the DNS cache. It looks like there needs to be a bit of clarification on how these commands actually work:

ipconfig /flushdns: “Flushes and resets the contents of the DNS client resolver cache. During DNS troubleshooting, you can use this procedure to discard negative cache entries from the cache, as well as any other entries that have been added dynamically”

ipconfig /registerdns: “Initiates manual dynamic registration for the DNS names and IP addresses that are configured at a computer. You can use this parameter to troubleshoot a failed DNS name registration or resolve a dynamic update problem between a client and the DNS server without rebooting the client computer. The DNS settings in the advanced properties of the TCP/IP protocol determine which names are registered in DNS.”

Now as you can see from the above documentation, the two parameters operate independently. You would only issue the /registerdns parameter in cases where the client system's name is not being resolved; there is no requirement to run it alongside /flushdns.

Something else you may find of interest: there is also a parameter to show the contents of the DNS cache. ipconfig /displaydns prints the entire contents of the DNS cache to the terminal window. From there you can verify whether the cache truly has the correct address for whatever name you're having trouble resolving.
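To make the distinction concrete, here are the three parameters side by side, along with the DnsClient cmdlets that do the same jobs on Windows 8/Server 2012 and later:

```powershell
# Display the current contents of the DNS client resolver cache
ipconfig /displaydns

# Flush the cache -- this alone does NOT re-register the client's own records
ipconfig /flushdns

# Separately, force dynamic re-registration of this computer's own DNS records
ipconfig /registerdns

# PowerShell equivalents from the DnsClient module (newer Windows only):
Get-DnsClientCache
Clear-DnsClientCache
Register-DnsClient
```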

A quick refresher on how name resolution works. First the name is submitted for DNS resolution. The system checks whether the name is an FQDN, a single-label name, or a multi-label name. This is determined by the dots within the name: www.microsoft.com. is an FQDN, www.microsoft.com is multi-label, and just www is a single label. Note the terminating period on the FQDN and the lack of a terminating period on the multi-label name. Let's first check how resolution works for an FQDN:

1. Checks the DNS cache (built from previous DNS queries and the hosts file; the hosts file always wins)
2. Queries the primary DNS server
3. If there is no response in two seconds, queries all remaining DNS servers
4. Resends the queries to all servers at the four- and eight-second marks
5. Returns time-outs for all queries after thirty seconds
6. The name is evaluated on whether it is 15 bytes or less (the NetBIOS name limit)
7. If so, the query is submitted for NetBIOS resolution
8. The query finally fails if no resolution has been achieved
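The FQDN/multi-label/single-label distinction driving all of this can be sketched in a few lines. This is an illustrative function, not a real resolver; the name is mine:

```powershell
# Classify a name the way the resolver does: a trailing dot marks an FQDN,
# any other dotted name is multi-label, and a bare name is single label.
function Get-NameType {
    param([string]$Name)
    if ($Name.EndsWith('.')) { return 'FQDN' }
    elseif ($Name.Contains('.')) { return 'Multi-label' }
    else { return 'Single label' }
}

Get-NameType 'www.microsoft.com.'   # FQDN
Get-NameType 'www.microsoft.com'    # Multi-label
Get-NameType 'www'                  # Single label
```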

Now if a multi-label name such as www.microsoft.com (note the lack of a terminating period) is submitted, the resolver appends a period to make it an FQDN and submits it to the same resolution process as above, with a slight difference:

1. Checks the DNS cache (built from previous DNS queries and the hosts file; the hosts file always wins)
2. Queries the primary DNS server
3. If there is no response in two seconds, queries all remaining DNS servers
4. Resends the queries to all servers at the four- and eight-second marks
5. Returns time-outs for all queries after thirty seconds
6. The queries are re-issued with the connection-specific DNS suffix appended
7. The queries are then re-issued, devolving the parent suffix until only two labels are left
8. The name is evaluated on whether it is 15 bytes or less (the NetBIOS name limit)
9. If so, the query is submitted for NetBIOS resolution
10. The query finally fails if no resolution has been achieved

For a single-label name, the connection-specific DNS suffix is appended immediately and the result is submitted to the same resolution order as an FQDN.
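The suffix-appending and devolution behavior in steps 6 and 7 above might be sketched like this; the suffix value and function name are hypothetical examples:

```powershell
# Build the candidate FQDNs the resolver would try for an unqualified name,
# given a connection-specific suffix (example suffix: corp.contoso.com).
function Get-CandidateNames {
    param([string]$Name, [string]$Suffix = 'corp.contoso.com')
    # First candidate: name with the full suffix appended
    $candidates = @("$Name.$Suffix.")
    # Then devolve the suffix one label at a time, stopping at two labels
    $labels = $Suffix.Split('.')
    for ($i = 1; $i -le $labels.Count - 2; $i++) {
        $devolved = ($labels[$i..($labels.Count - 1)]) -join '.'
        $candidates += "$Name.$devolved."
    }
    return $candidates
}

Get-CandidateNames 'intranet'
# intranet.corp.contoso.com.
# intranet.contoso.com.
```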

For more information and flow charts look at the documentation links below.

Documentation taken from here:

http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/ipconfig.mspx?mfr=true

http://technet.microsoft.com/en-us/library/cc961411.aspx

Active Directory Internal Naming and DNS Strategy

This post touches on something rather simple, yet I've seen it done improperly by previous providers at many of the SMB clients I work with. The result is unnecessary complexity and, in some cases, even migrations to a new forest to meet requirements such as Exchange 2007's lack of support for single-label domains. When you first create your Active Directory forest, you want to put some thought into what you are naming it. Think first about the company's internet-facing domain names and what sort of traffic is being generated through them. Your DNS strategy changes depending on, for instance, whether the company's website is hosted by the company itself or by a third party. You also need to think about where your public DNS is being served from. Another thing to throw into the mix is security, which ties into the previous issue. Let's take a look at a few things here.

Microsoft has some best-practices guidelines here. My personal preference is to go with a non-registered TLD such as .internal or .local so as to avoid any confusion with TLDs such as .com or .net. Microsoft would prefer you to go with a subdomain of your external domain, i.e. corporate.contoso.com for your AD forest while using contoso.com on the internet-facing side. Either way, a benefit you reap is that name resolution for contoso.com is done externally. While your internal DNS is authoritative for contoso.local or corporate.contoso.com, it is not authoritative for contoso.com itself, so it will find a server that is, and that server will return the internet-facing IP address for whatever lives in contoso.com. You want this because, for the majority of SMBs that I work with, the website is hosted at a third-party provider. If your AD forest were contoso.com, that would add complexity: you would have to manage internet addresses both internally and externally, as you would no longer be able to forward requests to your public DNS provider. For example, if you switched hosting providers, you would need to update the A record for www.contoso.com on your public DNS, and you would also need to update that record internally; otherwise, the next day your client will be calling in to let you know that their website is "down."

Now what if you were hosting your own DNS? For security you would want to put your public DNS in your DMZ, serving different zones than your private DNS servers. The reason is to restrict public access to your internal DNS hierarchy: access to it would give attackers a huge amount of information about your internal network, such as naming conventions, internal IP addressing, and even the names of your DCs. Your private DNS would then forward requests for contoso.com to your public DNS, and management is simplified since internal changes do not affect external changes and vice versa.

The next obstacle to face: what if some addresses are hosted internally but others at a third party, such as www.contoso.com going to your company's website while mail.contoso.com goes to your OWA? Creating an internal zone for that specific name allows internal requests to be answered by your internal DNS while requests for the company site are still forwarded to the public DNS side. This simplifies DNS management as well. With a mail.contoso.com zone, you could be migrating from one Exchange server to another and all you would have to manage internally is that one zone. Your public IP address has not changed at all, so your public A record for mail.contoso.com has no need to be updated. All those remote users hitting mail.contoso.com would not notice a difference, unless of course you have forgotten to change your NAT and firewall rules, but that is an entirely different subject. If the reverse is true and you are changing your public IP address, you would still only be changing your public DNS records; private DNS would not be impacted whatsoever.
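A hedged sketch of that single-name internal zone using the DnsServer module (Windows Server 2012 and later); the zone name comes from the example above, but the IP address and replication scope are placeholders for your own values:

```powershell
# Create a pinpoint zone covering only mail.contoso.com on the internal DNS.
# Requests for anything else in contoso.com still follow your forwarders.
Add-DnsServerPrimaryZone -Name 'mail.contoso.com' -ReplicationScope 'Forest'

# An apex A record ('@') points the name at the internal Exchange server
Add-DnsServerResourceRecordA -ZoneName 'mail.contoso.com' -Name '@' `
    -IPv4Address '10.0.0.25'
```

When you migrate Exchange, updating that one internal A record is the whole internal DNS change.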

So what if you were to go with contoso.com for your AD forest as well as your public DNS? DNS changes would be more complex, because you would need to manage addresses both externally and internally. For example, you have your mail.contoso.com address created externally and your remote users are using OWA. When they come into the office, all their OWA requests suddenly fail because no internal A record exists. You create an internal A record pointed at your Exchange server and everything works properly again. Then there is the scenario of the company website hosted by a third party: users are able to access www.contoso.com outside the company, but inside the company the requests fail. You create an A record pointed at the third-party site and everything works again, until you switch your hosting provider. People will be unable to access the site again until you also update the internal DNS record.

There is also the single-label domain name to think about. Microsoft recommends avoiding it, and I would as well, since it requires even more up-front management to get things working properly. It can also cause problems with cross-forest trusts.

Keep your DNS simple and you will have less late nights trying to figure out why mail.contoso.com does not work on the company network.
