In a recent comment, Michel asks:
How do you come up with a list of new features that are to be included in a new release?
In a 'Meet the Expert' session here in the Netherlands last December, I discussed a problem for which 'the expert' knew a 'feature design request' was already on the list: how did it get there? Not my single PSS call, I assume...
Great question. As usual, there's no easy answer. There are a ton of inputs that go into the list of things we might include in a given service pack or release. Of course, we strive to include only bug fixes in service packs, but sometimes a feature is such a high priority that we must include it in a service pack anyway. Some of the inputs include (in no particular order):
- Customer escalations via PSS (MS Product Support Services for those who are lucky enough not to have needed to call) or Premier support. The term that the Microsoft person used is Design Change Request, or DCR. This is our term for a change to the product after it's been released. It is unlikely, Michel, that your single PSS call resulted in that feature being on the list of things to be done, but when someone asks for a feature through PSS it can result in a “tick mark” in a database.
- MSWISH. This is a way that anyone can make a product enhancement suggestion, without needing to call PSS or have a support contract. The URL is http://www.microsoft.com/mswish. You can fill out a web form there, or send email directly to email@example.com. All of these entries do go into a database that ends up at the appropriate product group. This isn't just for Exchange.
- Product Group brainstorming. We may have a developer, tester, or program manager who comes up with an idea and champions it. They will typically solicit customer feedback, and fight for the “ticks in the database” that will get it done. They may also harvest data from PSS call logs to see who called in with a problem that would have been solved with that fix. They may communicate via email with our MVPs (Most Valuable Professionals) to ask their opinion: have they seen a lot of this on the newsgroups? Does this sound like a good idea?
- PSS feedback. PSS classifies customer calls, and they keep track of the types of problems that cause the most customer calls. Then they give that feedback directly to us in the product team. They say “if you were to make this easier, then these people wouldn't have had to call”. That's an incredibly powerful tool because it shows you exactly which things you can fix or improve that will result in a direct improvement in customer satisfaction. Because after all, no one wants to call PSS (even though they are very nice to talk to).
- Microsoft Customer Experience Improvement Program Data. The CEIP is a fairly new tool that we have introduced in a number of products, including MSN, Messenger, Windows Media Player, and Office. For those customers who opt in to send us their anonymous usage statistics, we can see a picture of their experience using the software. We don't capture data from their files, but rather how the program is being used. As a hypothetical example that I just made up, let's say that we saw that a huge number of people using Outlook go into Tools -> Options... -> “Other” Tab -> Advanced Options... -> Reminder Options and turned off “Play reminder sound”. We might make the default be to not have a reminder sound in the next release. Or we might put that setting in a more prominent place. You can see the potential. We also use this to look at what kinds of error dialogs people run across. Sometimes we aren't sure how often a particular error will be hit in the real world. If we see that something is hit often, we might be able to put code in place to prevent that condition from occurring in the first place. It helps us know where to spend our time and what will give the biggest bang for the buck for the customer.
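To make the CEIP idea concrete, here is a minimal sketch of the kind of aggregation that opt-in usage data enables: counting how often each setting gets changed, and ranking the results so the most-toggled defaults float to the top. Everything here is hypothetical (the event names, the tuple shape, the function itself); it's just meant to illustrate the reasoning, not any actual CEIP pipeline.

```python
from collections import Counter

def rank_setting_changes(events):
    """Rank settings by how many distinct sessions changed them.

    `events` is a list of (session_id, setting_name) tuples from
    anonymous, opt-in telemetry. Each session counts at most once
    per setting, so one chatty client can't skew the ranking.
    All names here are made up for illustration.
    """
    unique = {(session, setting) for session, setting in events}
    counts = Counter(setting for _, setting in unique)
    return counts.most_common()

# Hypothetical example: three sessions, two of which turn off
# the reminder sound (once with a duplicate event).
events = [
    ("s1", "PlayReminderSound=off"),
    ("s1", "PlayReminderSound=off"),  # duplicate within a session
    ("s2", "PlayReminderSound=off"),
    ("s3", "ShowPaperclip=off"),
]
print(rank_setting_changes(events))
# → [('PlayReminderSound=off', 2), ('ShowPaperclip=off', 1)]
```

A ranking like this is what would prompt the "should the reminder sound be off by default?" conversation described above.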
Now I have talked about these “ticks in the database” - these are virtual ticks; there is no set threshold after which a particular feature will get done. The amount of proven demand necessary to do a particular feature will vary greatly depending on the difficulty of the work. For example, if something needs work done in Exchange as well as a change in Outlook, you can imagine that it takes a lot more persuasion and customer data to get that done. Once the decision is made to do a DCR, then we decide what the most appropriate release is: do we need to coordinate it with an Office release, etc.? Sometimes, unfortunately, as we start to do the coding or test the feature, we find that it didn't work the way we initially expected or it was more complicated, and we have to re-think the feature. This is one reason that we don't like to discuss upcoming features until the product is very close to being released: you never know when something will need to be cut because of unforeseen circumstances.
I hope that demystifies the process a little bit.