Hello there fellow humanoids, Ned here again. Last week the Mail Sack was a bit thin. This week I had to wrestle it under control. If your interesting question doesn’t show up here, it may just be in the backlog – nothing personal, cobber, maybe next week.
Let’s move out.
- Accidental deletion protection in Win2003
- Forestprep rollup behavior
- DFSN memory usage patterns
- DFSR sync time estimation
- DFS management status check
- GPP LDAP item-level targeting
In previous blog posts AskDS has talked about setting “Protect object from accidental deletion” in Windows Server 2008 and later.
I run Windows Server 2003, and that checkbox is not available for me. I tried adding the Everyone group to DENY delete on a test OU, but I can still delete it as the Administrator. What am I missing?
For Win2003, follow this step-by-step article:
It covers how you also have to set specific deny permissions on the parent object (in my case, the domain root contoso.com). If you do it correctly, when you attempt to delete an OU you will get:
Remember, this is not a panacea – it only prevents accidental deletions. An admin who really wants to zap this OU still can, as they can remove the DENY perms easily. The only way to prevent an admin from deleting an OU is to fire him.
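If you prefer the command line over ACL editor dialogs, the same DENY entries can be set with DSACLS.EXE. A rough sketch – the OU name and domain are just placeholders for my test domain, and this mirrors what the Win2008 checkbox sets under the covers:

```
rem Deny Everyone the Delete and Delete Subtree rights on the OU itself
dsacls "OU=TestOU,DC=contoso,DC=com" /D "Everyone:SDDT"

rem Deny Everyone the Delete Child right on the parent (here, the domain root)
dsacls "DC=contoso,DC=com" /D "Everyone:DC"
```

The deny on the parent is the piece people usually miss, and it is the step the KB article walks through in the GUI.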
I am running a Win2003 forest currently and I will soon be deploying new Win2008 R2 DC’s. Do I need to run ADPREP.EXE /FORESTPREP for Windows Server 2008, then for Windows Server 2008 R2?
Nope. Every version of ADPREP we release contains all previous Windows schema updates. If you wanted you could upgrade your schema from Windows Server 2000 all the way to 2008 R2 in one go.
It’s a reasonable question – it’s not like most folks are constantly upgrading schemas…
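So running only the Windows Server 2008 R2 ADPREP is enough. For reference, the sequence looks roughly like this – the drive letter is an assumption, and the FSMO placement is the usual requirement:

```
rem On the Schema Master, from the Windows Server 2008 R2 media:
D:\support\adprep\adprep.exe /forestprep

rem Then on the Infrastructure Master in each domain:
D:\support\adprep\adprep.exe /domainprep /gpprep
```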
Do you know the rough memory consumption I’d see in DFS Namespaces with X number of Links (i.e. “Folders”)? This would be within Windows Server 2008 R2.
For a test I created 100 links in a V1 (Windows 2000 style) namespace through a few quick FOR loops – you’d be amazed what you can do with MD, NET SHARE, and DFSUTIL in a pinch. I found the following after restarting the DFS service on that hosting root…
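In case you’re curious, those quick loops looked something like the sketch below – the server, namespace, and path names are made up for the repro:

```
rem Create and share 100 local folders, then add each one as a DFS link
for /L %i in (1,1,100) do md c:\links\link%i
for /L %i in (1,1,100) do net share link%i=c:\links\link%i
for /L %i in (1,1,100) do dfsutil link add \\contoso.com\root\link%i \\srv1\link%i
```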
Here is private working set and committed memory with no DFS Links:
Here it is with 100 Links:
As you can see, not much difference between 100 links and 0 links. Memory went up ~300KB private working set within the user-mode heap memory. Backing Kernel memory of pool and non-paged pool were pretty much unaffected.
So then I took it to 1000 Links:
That made it roughly 3MB higher than the usage at 0 links. So it is actually quite linear and predictable in a simple repro. 100 links was ~300KB, 1000 links was ~3MB.
Finally, I converted the namespace to V2 (Windows Server 2008 Style):
Added a bit more per link, but not much. This is because we have secret relationships with hardware vendors that require us to have more RAM as we release later operating systems. Nah, just kidding, it’s because V2 namespaces sacrifice a bit of memory for higher reliability and scalability. Take off your tinfoil hat, fella.
DFSMGMT.MSC has an effect here as well – the more I used it and navigated around a namespace on that server, the more the service’s memory usage crept up as it retrieved data to send to the snap-in. But that should be rare. As should having even 100 links, much less 1000.
This question came from our pal Mark, who always asks questions that force me to repro. 😉
Is there some way to estimate initial sync time in DFSR?
Think of how accurate the progress bar is when you are using Windows Explorer or Internet Explorer over a slow WAN or the Internet – often very inaccurate, right? And that is a simple, synchronous operation where you are typically copying only one file, and no one else will change it in the middle of being copied. The progress bar tends to move fast, then slow; it freezes, gives outlandish estimates, and then suddenly finishes.
Now imagine you are having to track progress on 16 files at a time to 30 different servers on 20 different networks of varying speeds and quality, which are also servicing other network data. Impossible to do, pointless to estimate – it will always be wildly wrong. So we don’t bother.
We recently upgraded from Win2003 and started using the new DFSMGMT.MSC console for our DFS Namespace administration. The old DFSGUI.MSC had a little “check status” option that I liked, which is gone now. Can I get that back or use something else?
DFSDIAG.EXE will tell you most things about the health of your environment, not just shares. For example, here I have a 3-server link and one of them is offline:
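The check I ran was along these lines – the root path here is just my lab namespace:

```
dfsdiag /testdfsintegrity /dfsroot:\\contoso.com\root /full
```

DFSDIAG also has /TestDCs, /TestSites, /TestDFSConfig, and /TestReferral switches if you want to dig deeper than the old status check ever could.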
The old DFSGUI.MSC way of doing this was inherently flawed – it missed a lot of other problems and mainly gave a false sense of security. All it checked was that the share could be enumerated. A common complaint I got supporting Win2003 was “What do you mean DFS isn’t working, it says right here that it passed the status check!?!”
An even better idea than staring at DFS in the management tools is to run System Center or a third-party monitoring product and have it check status for you. Then it can tell you when things aren’t working, leaving you to catch up on your reading about Windows Phone 7 at Engadget. WANT!
I am using LDAP Query item-level targeting in Group Policy Preferences and trying to provide %USERNAME% as a variable to part of my filter, but it’s not working. I’ve already installed the KB976398 hotfix.
GPP doesn’t necessarily care about Windows environment variables – actually, no application is required to. To see all the variables that GPP will accept as part of configuring a policy or targeting, click on any field in the editor and then press F3.
So in this case, you’d want to use %LogonUser% to get the same info that %UserName% provides:
And for this particular case, we’d use this to apply a policy to a user:
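For example, the item-level targeting query might look something like this sketch – the attributes you match on will depend on what you are actually filtering for:

```
(&(objectCategory=person)(objectClass=user)(sAMAccountName=%LogonUser%))
```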
What I don’t necessarily understand is why you’d want this filter. Seems like it would always apply, so why not just make it a user policy at the domain? Oh well, it’s a useful example. 🙂
Finally, I sent my Pop, Uncle, Aunt, and step-mother to a Cubs game on Thursday. Great seats, right along the home dugout in Wrigley Field. And in true Cubs fashion, the game went like this:
13 runs by the Diamondbacks. 13. That’s a football score!
Until next time.
– Ned “$%^#&&* Cubs” Pyle