The difference between "Replication Status" and status of replication


I wanted to talk about a subject that is very often a source of questions, especially in our Support Services: the “Replication Status” node in ESM under the public folder store object. The question is – does this node really show you the status of your public folder replication? Customers ask us this very often. While this was briefly mentioned in one of our previous blog posts, it deserves more detail.


 


Let’s talk about how this functionality actually works:


 


Each public folder store maintains a table called the Replication State table. The table is grouped by folder; within a group, each row represents the state of a particular partner server. Here are some examples:

    Folder         Server     Changes Seen           Last Change Sent
    -----------    -------    -------------------    ----------------
    "Hierarchy"    "this"     <change number set>    #537
    "Hierarchy"    Server2    <change number set>
    "Hierarchy"    Server3    <change number set>
    "Hierarchy"    Server4    <change number set>
    "Foo"          "this"     <change number set>    #23
    "Foo"          Server2    <change number set>
    "Bar"          "this"     <change number set>    #5732
    "Bar"          Server3    <change number set>
    "Bar"          Server4    <change number set>

Etc.


 


Folders which aren’t replicated to (or from) this server have no row in the table. There are other columns as well, but for the sake of describing this specific UI, this is the relevant part.
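As a rough mental model (this is an illustrative sketch, not the store's actual internals), the relevant columns can be thought of as a dictionary keyed by folder and server; a folder this server holds no replica of simply has no rows:

```python
# Hypothetical sketch of the Replication State table (NOT real store internals).
# Key: (folder, server); value: the change number set that row records.
# Toy integers stand in for real Exchange change numbers.
replication_state = {
    ("Hierarchy", "this"):    {1, 2, 3, 4, 5},
    ("Hierarchy", "Server2"): {1, 2, 3},
    ("Foo", "this"):          {10, 11},
    ("Foo", "Server2"):       {10, 11},
    # Note: no rows at all for "Bar" -- this server holds no replica of it,
    # so the store tracks no replication state for that folder.
}

def rows_for_folder(folder):
    """Return the per-server rows recorded for one folder."""
    return {server: cns
            for (f, server), cns in replication_state.items()
            if f == folder}

print(rows_for_folder("Hierarchy"))  # rows for "this" and Server2
print(rows_for_folder("Bar"))        # {} -- no rows, folder not replicated here
```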


 


The change number set for each row of "this" server represents the data present in the store (excluding new data that hasn’t yet been broadcast). It includes everything that’s been broadcast out, as well as everything that’s been recorded as received (which may not include everything actually present, due to an interesting design decision). The change number sets for other servers are what the other server has actually told this one it has. It is not "live", but represents only what we’ve been told in the last public folder replication email we got from that server for that folder. By its very nature, it’s stale data.


 


The UI information is built by doing set arithmetic among the rows for a folder. So, if the change#set for "this" server minus the change#set for some other server results in a non-empty set, it means this server has data the other one doesn’t. If you reverse the math, and it’s still non-empty, it means the other server also has data "this" one doesn’t. If the subtraction (both ways) results in the empty set, then the two are in sync.
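In Python terms, the comparison the UI performs is plain set subtraction. Here is a toy sketch (small integers stand in for real Exchange change numbers, and remember the partner's set is only what it last reported):

```python
# What "this" store knows it has (its own row for the folder).
this_server = {1, 2, 3, 4, 5}
# What the partner last *reported* having -- stale by nature.
other_server = {1, 2, 3, 6}

only_here = this_server - other_server   # changes this server has, partner doesn't
only_there = other_server - this_server  # changes partner reported, this server lacks

if not only_here and not only_there:
    print("In Sync")
else:
    print(f"this server is ahead by {len(only_here)} change(s), "
          f"behind by {len(only_there)} change(s)")
```

With these toy sets, each direction of the subtraction is non-empty, so each server has data the other lacks; only when both differences are empty would the UI call the two in sync.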


 


The math is all fine and dandy, but since the data itself is stale, the results really don’t mean much. In order for the data to be even remotely relevant, there needs to be regular communication among the servers. This will only happen if a server feels the need to broadcast public folder data, which it won’t do if no users connect to it and actually make changes there. For example, you might have a "backup" PF store, in its own routing group with PF referrals to it prohibited. Hence, nobody is ever going to post data there (and very likely won’t make hierarchy changes there either). Consequently, that server is never going to broadcast its own status information, and so all other servers are going to think it is woefully out of date.


 


In Exchange 5.5, this whole mechanism was somewhat useful because 5.5 deliberately made a status broadcast every night. This resulted in an enormous amount of replication traffic that was usually of little value (since status information is carried along with every other replication mail that’s sent). The decision was made to cut the nightly broadcast of public folder status, and as a result, this UI has become almost completely useless.


 


Because of all of the above, you will most likely not see this UI in future versions of Exchange.


 


Okay, so now we know that this is not to be depended upon to tell whether your public folder replication has completed. So how CAN you tell? There are two things worth mentioning on the subject:


 


- Checking the number of items and the size of public folders between servers holding a replica of the same folder can give you an idea as to where things stand. Remember that you can export the list of folders that have a replica on a specific server by going to the “Public Folder Instances” object in ESM, right-clicking it, and choosing the “Export List” option. Understand, though, that this is not a “foolproof” way of telling, because there are situations that can cause the number of items displayed in ESM to be “incorrect” as far as the “real life” items (the ones that clients can see) are concerned. But that is a subject for a different blog post…
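As a rough sketch of that comparison, you could diff two “Export List” dumps with a short script. This assumes each export is a tab-separated file with “Name” and “Total Items” columns; the file paths and column names here are illustrative, so adjust them to match what your export actually contains:

```python
# Sketch: compare item counts between two "Export List" dumps from the
# "Public Folder Instances" node in ESM. Column names ("Name", "Total Items")
# are assumptions -- check them against your actual export file.
import csv

def load_counts(path):
    """Map folder name -> item count from one tab-separated export file."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Name"]: int(row["Total Items"])
                for row in csv.DictReader(f, delimiter="\t")}

def compare(path_a, path_b):
    """Print folders whose item counts differ between the two exports."""
    a, b = load_counts(path_a), load_counts(path_b)
    for folder in sorted(a.keys() & b.keys()):
        if a[folder] != b[folder]:
            print(f"{folder}: {a[folder]} vs {b[folder]} items")
```

As the post notes, matching counts don't prove the replicas are identical, but a mismatch is a quick hint that replication hasn't caught up.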


 


- If you are checking PF replication status because you want to remove the original replica of a folder, note that deleting a replica from a store does not immediately delete the PF content of that store. In other words – the content is actually removed from the original store only after replication to the other stores holding replicas has completed; until then, the data stays in place. For more details on how this process works, please see the “How to decommission a Public Folder server without losing any data” blog post located here.


 


- Dave Whitney


Comments (2)
  1. Bozford says:

    Boy I can’t wait until PFs go away in Exchange 13. They STINK STINK STINK!!!!

    The part that amazes me is that some people are still migrating "shared" mailboxes to public folders.

    I guess that’s ‘cuz there still isn’t a viable and easy-to-use alternative (let’s face it, Sharepoint MIGHT be a good product in V3, and V2 is MUCH better than V1, but it still doesn’t cut it).

    Needless to say, I’m waiting with bated breath for Ex13.

  2. jasperk says:

    As always, great post Dave! Clears up things quite a bit.
