
Category Archives: Articles

RSAT for Windows 7

Having recently converted over to Windows 7, one thing I found missing was the Remote Server Administration Tools. Well, they are missing no longer. Go and get your RSAT goodies here! Don’t forget that after installing you have to go into Programs & Features and add in the tools that you use.

On another note, I have to admit that I am really liking Windows 7. I haven’t been using it very long and never made the time to really explore it during the beta, but now that I have put it into full-time use I have come to like a number of the UI features. The jump lists of recent documents off a program in the Start menu and the pinned icons down in the taskbar are great. It works out especially well for having Remote Desktop pinned to the Start menu with a quick-start list of my favourite servers just off it. I am also really appreciating the ease of use of the network connections icon in the tray; it is very simple to cycle through various VPN connections now.

So overall? I like it a lot. No compatibility issues, and it has been a very painless conversion process.

Understanding DNS

Let’s talk about DNS. I have noticed that DNS seems to be one of the more mysterious topics for people working on their MCSE. It may be due to a lack of experience, as once your DNS server is set up it generally requires very little maintenance thanks to dynamic updates. It may be that DNS is considered to sit more along the networking path than the server path, so candidates just scratch their heads and wonder why they are looking at this stuff. Or it may be that they consider it just flat out boring. I personally have always had a fascination with names, so working with a sometimes arcane system for name management is great fun for me. There are a number of concepts that you should understand before delving into DNS management.

Microsoft makes the actual management easier on you. When managing DNS there is a layer of abstraction between you and the server in the form of the MMC GUI that Microsoft provides. A good exercise that I believe all admins should go through at least once is setting up a BIND DNS server on a Linux machine. Writing out zone files by hand will give you a greater understanding of what goes into a DNS zone (there is a sample zone file after the list below). Every zone is composed of a number of records. Let’s look at some of the common ones.

A records – These are the basic records that resolve a name to an IP address, e.g. cloud.shinra.inc to 192.168.1.2.
NS records – These contain the DNS servers that are authoritative for this particular zone. Every zone will have at least one NS record.
MX records – These are used for routing mail to SMTP servers in your domain. They can be given preference values so that you can specify the order in which these SMTP servers are tried (lower values are tried first).
PTR records – These are used in a reverse lookup zone. As one would guess, they map a particular IP address back to its respective A record.
SRV records – These are used for locating services within a domain. They can be used for locating DCs or GCs, for instance. If these break, your AD will be in for a rough time.
SOA records – This is your Start of Authority record. It points to the primary authority for the zone, which is normally the server that created it, and its serial number is used for revision control between servers.
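
To tie those together, here is a minimal sketch of what a hand-written BIND zone file for shinra.inc might look like. The mail host, the hostmaster address, the serial number, and the timer values are made up for illustration; cloud and its address come from the examples above.

    $TTL 3600
    @       IN  SOA  cloud.shinra.inc. hostmaster.shinra.inc. (
                     2009100101 ; serial, bumped on every change
                     3600       ; refresh
                     600        ; retry
                     604800     ; expire
                     3600 )     ; minimum TTL
    @       IN  NS   cloud.shinra.inc.
    @       IN  MX   10 mail.shinra.inc.
    cloud   IN  A    192.168.1.2
    mail    IN  A    192.168.1.3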

All of these records are organized into zones. There are several types of zones that you can create. A primary zone is a writable copy that is stored on that particular DNS server; shinra.inc would be an example of a primary zone that I created on cloud when I first set up the AD. You can have multiple DNS servers set up with copies of a primary zone. Then you have secondary zones. These are non-writable copies of a primary zone, and they retrieve their copies and updates through zone transfers from a DNS server holding the primary zone. Then there are stub zones, which are very useful in managing a DNS structure spread across volatile domains. But before we talk about that more, let’s talk about zone delegations first.
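
For reference, here is a rough sketch of how the same zone types can be created from the command line with dnscmd, which will come in handy when we get to the Core installation post. The file name is just the conventional default, and the second command would be run on the other DNS server, pointing back at cloud’s address:

    rem on cloud, the server that will hold the writable copy
    dnscmd /ZoneAdd shinra.inc /Primary /file shinra.inc.dns
    rem on a second DNS server, pulling the zone from cloud at 192.168.1.2
    dnscmd /ZoneAdd shinra.inc /Secondary 192.168.1.2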

A zone delegation is created to pass authority for a subdomain to a different DNS server. Let’s say we are setting up the subdomain lab.shinra.inc. We want to grant the lab some autonomy from our structure, so we create a delegation for the subdomain to pass control to the lab subdomain’s administrator. In a delegation there is solely an NS record pointing to their primary DNS server and a glue A record containing the IP address of that DNS server. All the rest of the contents of the zone are stored and managed by the lab’s DNS servers. So here is how a request would work if a computer in shinra.inc did an nslookup on experiment.lab.shinra.inc. The request would hit cloud.shinra.inc, which would determine that lab is a subdomain of shinra.inc. Looking at the delegation it would find the NS record for the lab’s DNS server and the glue A record. The request would get passed on to ns.lab.shinra.inc, which would then check its zone and return the IP address for experiment.lab.shinra.inc. But what if the lab environment was in flux? That would be a management pain, as we would be constantly updating the delegation’s records so that queries do not break. This is where stub zones come into play.
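
In the parent shinra.inc zone the entire delegation boils down to a couple of records along these lines (the address of ns.lab.shinra.inc is made up for illustration):

    ; delegation for lab.shinra.inc inside the shinra.inc zone
    lab      IN  NS  ns.lab.shinra.inc.
    ns.lab   IN  A   192.168.2.10

You can then test it from a client in shinra.inc with nslookup experiment.lab.shinra.inc.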

A stub zone works like a delegation that has been beefed up. Instead of one NS record it contains NS records for each DNS server in that zone, as well as the corresponding glue A records. It also contains the SOA record from the zone, and it queries the master server of that zone at regular intervals to update its list of records. This way you can add and move around DNS servers more easily, as long as your server can still contact the server specified in the SOA, which can also be updated. Do note that stub zones are a read-only copy, so if one ever falls out of sync you would need to recreate it. Not that this is difficult to do. So how do you know when to use a delegation or a stub zone? That depends upon your goals. Both types allow the subdomain complete control over their DNS environment. Delegations will eliminate zone transfer traffic. Stub zones will keep themselves current with the DNS servers in the subdomain by periodically querying the master for the zone’s NS, SOA, and glue records. This means you will need to evaluate whether the domain will be changing frequently or not.
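
Creating one follows the same dnscmd pattern as the other zone types; a quick sketch, again with the lab server’s address made up for illustration:

    dnscmd /ZoneAdd lab.shinra.inc /Stub 192.168.2.10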

Now there are two flavors that zones come in: forward lookup and reverse lookup. Shinra.inc is an example of a forward lookup zone, while 0.60.10.in-addr.arpa would be an example of a reverse lookup zone for 10.60.0.x. The PTR records are stored in the reverse lookup zone. Reverse lookup zones are not necessary for a healthy AD environment, but they are highly recommended. They make troubleshooting easier for your help desk, plus some applications may rely on reverse lookups. For instance, some mail servers will do a reverse lookup on your SMTP server and reject mail from it if they do not find a valid PTR record for your server.
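
Setting one up is no different from a forward zone; a sketch, with the host at 10.60.0.2 made up for illustration:

    dnscmd /ZoneAdd 0.60.10.in-addr.arpa /Primary /file 0.60.10.in-addr.arpa.dns
    rem PTR for 10.60.0.2 pointing back at its A record
    dnscmd /RecordAdd 0.60.10.in-addr.arpa 2 PTR host1.shinra.inc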

One final zone to remember is the _msdcs zone. This is a special zone that is treated as a subdomain of your AD domain i.e. _msdcs.shinra.inc. This provides the bulk of the functionality required for a healthy AD. You will find all of your SRV locator records in here. Take a look at the second paragraph of this earlier post for some more information on how this is used.
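
If you ever want to check those SRV locator records by hand, a query like this one (adjust the domain to your own) will list the domain controllers that have registered themselves:

    nslookup -type=SRV _ldap._tcp.dc._msdcs.shinra.inc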

Now that you have an understanding of how DNS works, we will move on to managing DNS on your Core installation. Look for that post coming up soon. In the meantime, here is some extra reading to beef up your DNS knowledge:

Useful Links

Things have been a bit busy, so in lieu of new content I present links to useful resources that should go into your bookmarks/RSS reader.

  • TechExams.net – Here you will find a treasure trove of legal study notes for various exams. It really helped me out when I was doing the 70-270 and 70-290. Even more importantly, they have a great forum full of people willing and able to help.
  • TechNet – Definitely one for everyone’s bookmarks. You will find all sorts of best practices and handy descriptions of how to set up, deploy and manage various Microsoft systems.
  • MSExchange.org – One of the three best places to go for any Exchange problems or research.
  • Petri IT Knowledge Base – This is another place that is great for Exchange, though it is useful for Microsoft systems beyond Exchange as well.
  • Elan Shudnow’s Blog – Another great place to go for Exchange information and pick up a few other tips along the way.

Now for a few links a bit more off the beaten path.

  • VirtualBox – One of the best free desktop virtualization packages available, especially if you are running on Linux.
  • One Hundred Pushups and Two Hundred Situps – You need to stay in shape or get back into shape. Everything works better when you’re healthy.
  • Ask the Headhunter – You can find some great wisdom on job searching here. The book is also good supplementary reading.

Virtualization for Practice

When you start studying for your MCSE or MCITP: EA there does come a point where you have to get some hands-on experience with the technology. There are simulations in the exams that you will have to face at some point, and if you’re just book smart alone you might not be able to sail through them with ease. Even more importantly, when you’re on the job you are going to have to do what your credentials claim you can do. So what’s an easy way to start getting all of that practice in the comfort of your very own home? Well, for one you can start by downloading a trial copy of Windows Server 2008 or 2003. That will get you the software, but chances are you don’t have many machines sitting around to use as servers and clients. This is where virtualization comes to the rescue.

Virtualization is what enables you to run a machine, and actually several machines, inside of one physical machine. With these virtual machines you can build a whole virtual environment of servers and clients purely for your testing and enjoyment. You can even set up several different networks so as to simulate two or more separate sites and/or forests. Virtual machines are also a great step towards high availability, but that is for another time, as we are just going to talk about them for study purposes right now. Currently there are two easily accessible ways of achieving this virtualization. One way is through the use of a bare-metal hypervisor; this will require a separate machine that you will be dedicating purely to virtualization. Another way is through a hosted hypervisor, which typically installs as an application on your OS of choice and which you use at will. For enterprise use a bare-metal hypervisor is usually the best solution, but for a simple at-home lab for study purposes you would probably be best off with a hosted hypervisor. Let’s run through a few popular options we have available.

First and most famous is VMware. You have the options here of VMware Server and VMware Workstation. Server is free but Workstation will cost you. Unless there is a must-have feature in Workstation, you should stick with Server; it’s free and it will get your virtual lab going trouble-free. There are versions available for Windows hosts as well as Linux hosts. The interface is not too difficult to learn, and once you have it set up it will stay out of your way. I have read that it is possible to install Hyper-V inside VMware but I have not tested it out myself yet. This is something you may wish to keep in mind when planning your MCITP training lab.

Next up is VirtualBox. This virtualization product is relatively new to the scene compared to VMware but it is growing in popularity. This one is also freely available. It is my hosted hypervisor of choice in a Linux environment, and it is available for Windows as well. The reason I went with Linux for my host is simple: if you have 4 or more gigabytes of RAM then you will want to use a 64-bit operating system. You will also want to make sure that you have hardware extensions such as Intel VT-x or AMD-V available AND enabled in the BIOS if you are planning to run 64-bit guests. I have found VirtualBox to be the easiest out-of-the-box experience if you’re using Linux as a host.
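
If you are not sure whether your processor has those extensions, a quick check from the Linux host looks like this; a non-zero count means the vmx (Intel) or svm (AMD) flag is present, though it still has to be enabled in the BIOS:

    grep -Ec 'vmx|svm' /proc/cpuinfo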

Some other options are QEMU and Microsoft VirtualPC. I have not worked with either of these but VirtualPC is another good choice for Windows hosts from what I have read, with no availability for Linux hosts of course.

If you are wanting to delve into bare metal hypervisors then VMware ESX is definitely the platform of choice. VMware has a free edition available named ESXi but it misses a lot of the functionality of the full package. Hyper-V is another option to consider, especially if you are planning on MCITP studies. In the open source arena Xen is another player, with a more commercial flavor being marketed by Citrix.

I completed my studies primarily on openSUSE using VirtualBox. There were a few uses of VMware Server, and for test machines at previous jobs I used VMware as well. My choice of Linux for a host was purely because I did not own a 64-bit copy of Windows and was in need of a 64-bit host. I make no recommendation on the host, but I will recommend VirtualBox as your virtualization platform for its simplicity in configuration and use. Either way, pick your platform and start practicing!

NTFS, CHKDSK, and You!

For my first post here I want to talk a bit about file systems, specifically about NTFS, because that is what system administrators in the Microsoft world deal with primarily. I also want to talk about chkdsk. I’m sure at least once in your life you’ve seen a chkdsk triggered because of some corruption on your drive. Let’s explore a bit of what is going on there, which is exactly why we need to talk about NTFS first.

At my last job we had a situation where one of our servers spontaneously rebooted. It was a mission-critical server, so we knew about it immediately. Pulling up the display on it showed that chkdsk was running, and that chkdsk was not happy with what it was seeing. There were entries scrolling by like “Replacing invalid security id with default security id for file 1461234” and “Deleting an index entry with Id 8447 from index $SII of file 9.” Naturally this was rather alarming. A heated argument ensued about what was really going on, with some people not even sure that this was chkdsk running. In the heat of the moment I sadly was not able to articulate exactly what was going on as well as I could have. This gives me an opportunity to remedy that. Taking a look at what is happening here, though, requires delving into some of the structure of the NTFS file system itself.

In an NTFS partition just about everything is a file, including where the metadata is stored. NTFS starts out with a boot sector, and this boot sector is contained in a non-relocatable file, $Boot, whose purpose seems pretty self-evident. The boot sector records where the Master File Table starts, and the MFT itself lives in the file $MFT. There is also a mirrored copy of the master file table contained in $MFTMirr. The $MFTMirr only contains the first four entries of the MFT, which are, in order, $MFT, $MFTMirr, $LogFile, and $Volume. The default space reservation for the MFT is 12.5%, by the way, and there are cases where you may wish to increase the reservation size so as to allow for more file references. Now this MFT, as the name suggests, contains an entry for every file on your drive; each entry references the file’s data, except for really tiny files whose data can be stored right inside the MFT entry. So where do your security attributes go? $Secure.
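
If you want to see some of these numbers on a live volume, fsutil will report them for you (run from an elevated command prompt; substitute your own drive letter). Among other things it shows the MFT valid data length and the MFT zone boundaries:

    fsutil fsinfo ntfsinfo C: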

$Secure contains all of your security attributes. When you add permissions to a file or directory, this is where all of those permissions are stored, in the $SDH and $SII indexes. When NTFS is checking the security of an object it uses $SII to do a quick lookup of the object’s security descriptor. $SDH is used for sharing existing security descriptors and storing new ones. Whenever a new file is created it is assigned a standard security descriptor that contains the default security attributes. Which leads to what ties all of this to chkdsk: $Volume. $Volume contains the dirty bit. When you boot up and mount the partition, the dirty bit gets set in $Volume. When you shut down or reboot, one of the last things the OS does is reset the dirty bit. If the dirty bit is not reset, then when the OS boots back up it sees that the volume was not dismounted cleanly. Therefore it fires up chkdsk to make sure that everything was written cleanly. So that explains why we had chkdsk starting up; a spontaneous reboot naturally does not do a clean dismount. But what was chkdsk doing?
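
You can inspect the dirty bit yourself with fsutil. A quick sketch; the query just reports the bit, while the set command marks the volume dirty so that autochk runs at the next boot, so only try that on a test machine:

    fsutil dirty query C:
    rem marks the volume dirty so autochk runs at the next boot - test machines only
    fsutil dirty set C: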

Chkdsk runs through three stages in this type of situation: verifying the files, then the indexes, and finally the security descriptors. In the first stage, verifying files, chkdsk checks which clusters are actually in use against what the MFT claims. If there are discrepancies, entries in the MFT will be reset or added. Then chkdsk moves on to the next stage of checking indexes. This basically makes sure that you can get to every file through a directory. If there are legitimate files but no directories leading to them, that leaves them orphaned. Chkdsk tries to figure out where an orphan should go, and if it cannot, the file is put in a special directory at the root of the volume. Then finally we hit stage three, our security descriptors. Chkdsk checks through to make sure that we have consistent security descriptors for all directories and files. If they are marked as inconsistent, which can be caused by the security descriptor block not having a 20-byte padding at the end of the block, chkdsk resets the security descriptor to the default. This explains a number of the entries that we saw scrolling by as mentioned earlier.
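
If you ever want to watch those stages without letting chkdsk change anything, you can run it read-only first and only add /f when you are prepared for repairs; a sketch:

    rem read-only pass, reports problems without fixing them
    chkdsk C:
    rem repair pass; on a volume that is in use it will offer to schedule the check for the next boot
    chkdsk C: /f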

So a question that came out of this discussion was: should we interrupt chkdsk when it is running? The answer to this is an emphatic no. Interrupting chkdsk can make a bad file system that it is in the middle of repairing even worse. If you must, then be prepared to pull out those backups, as you may need to restore that system. Always make sure to keep good backups. In the case of our system here, fortunately we had a disaster recovery site that we could draw upon, which was a good thing as that volume was discovered to be completely hosed. All data was lost. Hopefully now you understand the process a bit more. Then you will be able to explain to your supervisor what is going on and why you should not stop that chkdsk, and you will have the documentation to point to as well.

Sources:
Windows Forensics: The Field Guide for Conducting Corporate Computer Investigations, by Chad Steel
An explanation of chkdsk and the new /C and /I switches
Chkdsk Finds Incorrect Security IDs After You Restore or Copy a Lot of Data
How NTFS Works
Inside Win2K NTFS, Part 1
