
DFS On Core — You’re Doing It Replicated

Has anyone taken a look at Windows Server 2008 R2 yet? The things I'm excited about in it are PowerShell on Core, the AD cmdlets, and the AD Recycle Bin, with PowerShell on Core being the most exciting addition. Maybe later on I will start delving into R2 and talk about working with it on Core. This time, though, we are going to deal with setting up a basic DFS using Windows Server 2008 Core machines.

Core makes for a low-resource file server that you can deploy to do its job without layers of the OS getting in the way. Using it for a DFS is also a step in the right direction towards high availability for your data. Furthermore, it can help you put some controls on your bandwidth utilization by keeping replicas of your data in locations local to your users. Failover is provided by pointing the users at the namespace, which then directs them to the nearest server. Let's run through putting together a setup on Core.

Grab our first server and let's install the DFS Namespace role.

C:\> start /w ocsetup DFSN-Server
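
If you want to make sure the role actually took, oclist will show you its state; piping it through findstr just trims the output down to the interesting lines.

C:\> oclist | findstr /i DFSN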

Once this is complete we can break out our trusty dfsutil.exe tool. We will start with making a domain-based namespace. Set up a share to use for this.

C:\> mkdir TurksNS
C:\> net share TurksNS=C:\TurksNS /GRANT:"Authenticated Users",FULL

Don’t forget to customize the share and NTFS permissions to your specific needs.
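
For example, icacls can handle the NTFS side from the command line. The group here is just a stand-in for whatever group you actually use:

C:\> REM "SHINRA\Turks" is only an example group, swap in your own
C:\> icacls C:\TurksNS /grant "SHINRA\Turks":(OI)(CI)M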

C:\> dfsutil root adddom \\renocore\TurksNS "2008 Namespace"

You can also add V1 or V2 as a parameter; the default is V2. V1 creates a Windows 2000 Server mode namespace, while V2 creates a Windows Server 2008 mode namespace. Note that a V2 namespace requires the Windows Server 2008 domain functional level. If you receive any "The RPC server is unavailable" errors, make sure the DFS Namespace service is running. The easiest way is to reboot, but you can also start the service with the sc command.

C:\> sc start dfs
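
To see whether the service actually came up, and to knock out the most common firewall culprit, something like this works on a default Core install (the rule group name below is the stock one, so adjust if you run custom rules):

C:\> sc query dfs
C:\> REM "File and Printer Sharing" is the built-in rule group on a default install
C:\> netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes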

If you are still getting RPC errors after that, check your firewall configuration and keep working down the usual RPC troubleshooting path. Let's verify that we have created our domain-based namespace.

C:\> dfsutil domain shinra.inc

You will see your newly created namespace there. Of course it isn’t doing much for us right now so let’s create some targets for it. Create another share on this server (or really any server) and add a link.

C:\> dfsutil link add \\shinra.inc\TurksNS\Data \\RenoCore\Data

If you browse to \\shinra.inc\TurksNS\Data via UNC or just map a drive, you'll now see the data available in there. This gets us a running DFS, but right now it isn't anything more than a fancy way to share data. With only a single target, no replication is occurring, and if this server goes down, there goes access to the data. Let's get some targets in there to fulfill the D in DFS. Jump onto another server, install the DFSN-Server role, and make yourself a share to add to the pool. Don't forget to make sure it has the same share and NTFS permissions as your first share, otherwise troubleshooting could get difficult later on. Once you have it ready we can add the target.

C:\> dfsutil target add \\shinra.inc\TurksNS\Data \\RudeCore\Data

We have our link targets now, but we still have no replication. To get that set up we need yet another role added.

C:\> start /w ocsetup DFSR-Infrastructure-ServerEdition
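
As before, it doesn't hurt to confirm the component installed and that the DFS Replication service is running before continuing:

C:\> oclist | findstr /i DFSR
C:\> sc query dfsr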

We will then set up a replication group for our folder here.

C:\> dfsradmin RG New /RgName:TurksData
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RudeCore
C:\> dfsradmin Mem New /RgName:TurksData /MemName:RenoCore
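
Before moving on you can double-check what was just created; dfsradmin has List verbs for its objects, so treat this as a quick sanity check:

C:\> dfsradmin RG List
C:\> dfsradmin Mem List /RgName:TurksData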

This gives us a replication group with our two servers added in as members. Next we will bring in our data for replication.

C:\> dfsradmin RF New /RgName:TurksData /RfName:TurksData /RfDfsPath:\\shinra.inc\TurksNS\Data /force

We have a folder set for replication, but now we need replication links so that the data can flow. Note that /force is required on the previous command because we set up our namespace target first.

C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RudeCore /RecvMem:RenoCore /ConnEnabled:True /ConnRdcEnabled:True
C:\> dfsradmin Conn New /RgName:TurksData /SendMem:RenoCore /RecvMem:RudeCore /ConnEnabled:True /ConnRdcEnabled:True

We're close to the end, but we still need to configure the memberships for this replication group.

C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RenoCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:True /force
C:\> dfsradmin Membership Set /RgName:TurksData /RfName:TurksData /MemName:RudeCore /MembershipEnabled:True /LocalPath:C:\Data /IsPrimary:False /force
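
Once both memberships are enabled you can watch the replicated folder come online through WMI, which the DFSR role exposes. This is just a monitoring one-liner:

C:\> REM State 2 = initial sync still running, State 4 = normal replication
C:\> wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicatedfoldername,state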

Replication should start flowing shortly. If you don't have any data in there, or if you have prepopulated the shares, you won't know for sure whether replication is working properly. You can run a propagation test from the same command-line utility.

C:\> dfsradmin PropTest New /RgName:TurksData /RfName:TurksData /MemName:RenoCore

This will start the test from RenoCore and the data will flow to RudeCore. Generate the report with dfsradmin.

C:\> dfsradmin PropRep New /RgName:TurksData /RfName:TurksData /MemName:RenoCore

You'll find an HTML and an XML file generated that you can pull up in your web browser. Of course, you may find it easier to just create a different test file on each share and verify that it shows up on the other, but the nice thing about the report is that it is detailed and will help you track down any issues you may be having. You can also have dfsradmin automatically create the folders for you when you use dfsradmin RF and just add them into the namespace later on. So let's touch on one last topic here: replication of large amounts of data.

It is fine to run through this with a small amount of data for the DFS to replicate initially, but if you get into large amounts, which I generally consider to be anything over 400 or 500 GB, you will definitely want to prepopulate the shares. Otherwise your DFS may choke on a few files initially and cause you all sorts of headaches, and prepopulating just plain gives you more control over everything. This all depends upon the bandwidth available to you, of course. The method I normally use is robocopy with /E /SEC /ZB. Instead of /SEC you could use /COPY:DATSOU to include the auditing information as well.
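
Here is a minimal prepopulation sketch, assuming the data currently lives in C:\Data on RenoCore and RudeCore's Data share is still empty. The paths and log file are only examples, so swap in your own:

C:\> REM Run from RenoCore; source, destination, and log path are examples only
C:\> robocopy C:\Data \\RudeCore\Data /E /SEC /ZB /R:2 /W:5 /LOG:C:\prepop-data.log
C:\> REM Or carry the auditing and ownership info along as well
C:\> robocopy C:\Data \\RudeCore\Data /E /ZB /COPY:DATSOU /R:2 /W:5 /LOG:C:\prepop-data.log

The idea is that DFSR finds matching files on both members during the initial sync and only has to move the differences instead of the whole data set.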
