So, like I mentioned a while back, I'm now a professional bean counter. And as a release manager, I need to find scalable ways of keeping track of the work that several hundred program managers, developers, testers and writers on the Exchange team are doing.
The trick is that I can't afford to get too granular or I will get bogged down. We break product development down into workitems (which usually, but not always, implies one dev and one tester, for a period of one or more weeks). In Exchange we also use the term "Features" to refer to groups of one or more workitems. The product and the team are loosely organized around what we call "Component teams", where a component team is a group of people working on roughly the same features or the same type of features (such as Admin, Setup, OWA, Mobility, etc). At my level, I will never be successful if I try to keep tabs on every single workitem or on what each individual person on the team is doing at any given point - that's for the PMs and dev/test leads to do.
So I have spent some time figuring out what is meaningful information to track. When you're looking at a system of hundreds of people and thousands of lines of code, how do you determine the "health" of the overall project? How do you know if you need to course-correct? How do you know if you're meeting the schedule? What patterns do you look for?
There are many possible pitfalls in this process. One of the risks of release management in general is that you might end up measuring something for measurement's sake, but not actually get any value out of it (and burn bridges trying to enforce it :-). Or you might read too much into what you're measuring ("Ah, our glop rate is 42% higher than last month, so our quality must be high!") and not look at the big picture ("Oops - turns out all of our fweepers weren't actually registered in the shwonk database, so we are actually only at .8% glop.") or look for the holes in what you're not measuring but should be ("Uh-oh... although shtnick is 75% complete, it turns out that the remaining 25% actually represents 80% of the walahoos.") And of course there's the ever-changing nature of the product cycle - what is unacceptable at one point (high bug count? lots of old unresolved bugs? specs not ready for review yet?) may be acceptable at a different point.
So there are many levels of A) the work and B) the people doing the work that I need to stay on top of. Fortunately we track all of the work in our bug database, against which I can do just about any query I want. So for example we have bugs tracking each workitem, bugs tracking each spec (including PM/dev/test), and there are uber "Feature" bugs that represent groups of the workitems/specs. I'm oversimplifying, but hopefully you get the gist.
So over the last few months I've been developing my arsenal of tools to help me track A and B. One of the most flexible, powerful and effective tools in my toolkit is Excel. I grew up using and loving Lotus 1-2-3, but at some point after joining Microsoft, I discovered the glory that is Excel. It is my second favorite Microsoft product, after Exchange. I like it because it is itself a powerful toolkit - it doesn't do much of anything by itself, but it provides a set of tools to help me do so many things. And yes, there's a certain amount of "when all you have is a hammer, every problem seems like a nail" here - I do tend to use Excel as my first tool because it's the one with which I am the most familiar. To a point, there's nothing wrong with that, since you'll be more efficient with tools you already know how to use well.
I have spent the last few weeks developing some spreadsheets that do data mining from our bug database and can show me things such as:
- Spec status (What % of PM specs are in review? How many test specs are in progress? Are the dev specs completed before official workitem coding begins? Are the specs of a central shared team completed first, since other teams are relying on them?)
- Features (In which milestone are we implementing which features? What is the dev spec status of all features for this component team? When will this group of features be in dogfood?)
- Bugs (What is the time differential between bugs being resolved and bugs being closed? Are all the teams operating on bugs in roughly the same timeframe? Are the teams applying priority and severity to their bugs consistently?)
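The resolved-to-closed differential in that last bullet is just an extra worksheet column next to the query results. As a purely hypothetical sketch (the actual column letters depend on what your query returns), assuming Resolved Date comes back in column E and Closed Date in column F, a formula along these lines computes the lag in days:

```
=IF(OR($E2="", $F2=""), "", $F2 - $E2)
```

Fill it down the data range, then drop the new column into the pivot table as an average per component team, and you can see at a glance whether one team is sitting on resolved bugs much longer than the others.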
And of course, only measure what's useful. If you measure something and don't know what action you would take on it if the data was returned a certain way, then you probably shouldn't be measuring it.
Most of these spreadsheets follow a similar format:
- One worksheet does a live query against the database to return a certain set of bugs
- Then there's a pivot table based on this live data
- And frequently, a pivot chart
That worksheet may also have additional columns with formulas that calculate additional data dependent on the values returned from the database (e.g. I may look at the "Resolved Date" of the bug and compare it to the current date to find out how "stale" a bug is, and then graph the staleness across all of the teams to see if one team seems to have more stale bugs than others).
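To make the staleness example concrete: assuming the query lands "Resolved Date" in column D of the worksheet (again, a hypothetical layout - adjust to your own query), the derived column could be as simple as:

```
=IF($D2="", "", TODAY() - $D2)
```

Since TODAY() is volatile, the column recalculates every time the file is opened and refreshed, so the pivot chart of staleness-by-team always reflects the current date.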
Every spreadsheet that does a live query + pivot table has some VBA in it that looks something like the below. This makes the file requery the database when it is opened, then refresh the pivot tables and switch to the chart. ApplyDataLabels is also a handy little method - doing that step manually can take tens of clicks depending on how many series fields are in the data, but just one line of VBA makes the chart much easier to read.
' In the ThisWorkbook module
Public WithEvents qt As QueryTable

Private Sub Workbook_Open()
    MsgBox "It will take a few seconds to automatically refresh the pivot tables."
    Set qt = Sheet1.QueryTables(1)
    qt.RefreshOnFileOpen = True
    qt.Refresh
End Sub

Private Sub qt_AfterRefresh(ByVal Success As Boolean)
    Sheet1.PivotTables(1).RefreshTable
    Charts(1).ApplyDataLabels
    Charts(1).Activate
    MsgBox "Pivot tables refreshed."
End Sub
I have recently also made use of this SharePoint web part that does pivot tables/charts. Every time my internal release management SharePoint site is loaded, the team can see a live graphical representation of spec status.
One significant perk of Excel's pivot tables and charts is that it is so easy for others to take the tools I provide and customize them for their own purposes. I rarely get more granular than a component team when looking for trends, but a component PM could take my charts and change a couple of fields to get a view of how the specs are looking for their features. Or a test manager could look at test specs across their organization. And any of them could then create a custom pivot chart on their own SharePoint sites.
Flexibility is a beautiful thing.
I really need to make an animated gif of bean counting, to make it my official logo. Right now I'm using an animated gif a dev created that shows a guy hacking away on a keyboard with a gigantic microscope zooming in on him. Any volunteers?
 Well, what were you expecting me to say? 🙂 
 Halo is #3. Outlook is #4.