Data Driven Engineering: Tracking Usage to Make Decisions

Hello, my name is Peter and I work on the Office Trustworthy Computing (TWC) team. One of my team’s areas of focus is collecting data on how our applications are being used so we can make informed decisions. You’ve probably seen our Send-a-Smile feedback tool and, based on the comments received to date, many of you have used it. In addition to that kind of qualitative feedback, the last three versions of Office have included telemetry through the Customer Experience Improvement Program (CEIP) to help us understand how the applications are being used. The combination of qualitative and quantitative data gives us valuable insight for making informed design decisions.

What is the Customer Experience Improvement Program?

In short, the CEIP is an anonymous, opt-in program that helps us improve Office. If you opt in to the CEIP, anonymous data about how you use Office is occasionally uploaded to Microsoft in the background.
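
The mechanics are simple in principle: nothing is gathered or sent unless you have opted in. Below is a minimal sketch of that gate in Python, assuming a hypothetical settings store and uploader; none of these names reflect Office’s actual implementation.

    # Hypothetical sketch of the opt-in principle; not Office's real code.
    ceip_settings = {"opted_in": False}      # default: nothing leaves the machine

    def maybe_upload_telemetry(pending_events):
        """Upload anonymous usage data in the background, but only for opted-in users."""
        if not ceip_settings["opted_in"]:
            pending_events.clear()           # discard instead of sending
            return
        # ...otherwise, batch the anonymized events and upload them occasionally,
        # in the background, so normal work is never interrupted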

When you run an Office 2010 application for the first time, you are asked which settings you want to apply under ‘Help Protect and Improve Microsoft Office’, and the CEIP is included in the Recommended Settings. You can also find this setting in the Privacy Options section of the Trust Center. In previous versions, opt-in was through a “Help Make Office Better” balloon that popped up the first time you ran Office.

[Image: the ‘Help Protect and Improve Microsoft Office’ first-run settings dialog]

Of course, we respect your privacy and don’t collect any information that could personally identify you or your documents. Your anonymous data is combined with data from millions of other users to give us a broad picture of how people use Office.

What do we collect?

We collect a lot of information about our applications, far too much to enumerate in a blog post. The engineering teams define the data points they are interested in learning about and add them to the software for collection. These data points typically fall into the following categories:

  • Usage. The bulk of the data points fall into this category, and they tell us how the software is used. The information collected includes which Ribbon commands are clicked, general feature usage, actions taken in wizards, and so on. It allows us to answer general questions like ‘how many users do X’ and ‘how often does X happen’, as well as specific questions like ‘how many documents contain pictures’ or ‘what is the average size of a Word document’.
  • Reliability and Performance. We want to make sure our software performs as expected, and to have as much information as possible when it doesn’t. For example, to measure reliability, developers put assertions in the code that tell us when there is a logical inconsistency (e.g., something that was expected did not happen); knowing how often these fire helps us focus on improving the product in future releases. For performance, we expect applications to boot and load documents quickly, so collecting basics like document size and load time lets us verify how well we’re doing (see the sketch after this list).
  • Hardware/Software Configuration. What kind of hardware people have and how they have configured their various Office applications helps us interpret the data by providing context. For example, if we see a slow document load time, does this happen only on machines with low RAM or a particular processor speed? How do video card characteristics affect transitions in PowerPoint? How does usage differ across languages and locales?
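
To make these categories a bit more concrete, here is a rough sketch of what such instrumentation can look like. Office’s internal instrumentation APIs are not public, so every name below (log_usage, ship_assert, record_timing) is a hypothetical stand-in rather than real Office code.

    import time
    from collections import Counter

    usage_counts = Counter()      # Usage: how often each data point fires
    reliability_events = []       # Reliability: logical inconsistencies observed
    performance_samples = []      # Performance: timings such as document load

    def log_usage(data_point_id):
        """Count one occurrence of a usage data point (e.g. a command click)."""
        usage_counts[data_point_id] += 1

    def ship_assert(condition, tag):
        """Record, rather than crash on, something that should never be false."""
        if not condition:
            reliability_events.append(tag)

    def record_timing(tag, start):
        """Record how long an operation took, in milliseconds."""
        performance_samples.append((tag, (time.perf_counter() - start) * 1000.0))

    # Instrument a hypothetical document load.
    t0 = time.perf_counter()
    log_usage("FileOpen")
    ship_assert(True, "DocumentParsedWithoutFallback")   # would be recorded if False
    record_timing("DocumentLoad", t0)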

The TWC team provides the expertise and guidance the different application teams need to get high-quality telemetry on how their applications are used. Since we receive over a billion sessions per month, we rely heavily on data aggregation, and we provide several analysis and reporting tools so teams can easily get at the data when they want to know how customers are using their software.
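
As a rough illustration of that aggregation step, the sketch below collapses per-session records into overall counts per data point. The record layout is invented for this example and is not the actual CEIP schema.

    from collections import Counter

    # Each record stands in for one uploaded session; the layout is invented.
    sessions = [
        {"command_clicks": {"SlideShowFromBeginning": 3, "Paste": 41}},
        {"command_clicks": {"Paste": 12, "Bold": 7}},
        # ...in practice, over a billion such session records per month
    ]

    totals = Counter()
    for session in sessions:
        totals.update(session["command_clicks"])

    for command, count in totals.most_common():
        print(f"{command}: {count}")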

How do we use the data?

Before we had data from customers participating in the CEIP, design decisions were quite often based on consulting people who had worked on the product for a long time (opinions) or on personal observations of, say, someone’s family members (anecdotes). If you were lucky, you had some data from the researchers in the Office Design Group or a survey done by the Planning team. There was data, but it came from a constrained sample of users; it was rarely data from real users doing real work. During the development of the Office 2003 release, the Office teams began leveraging CEIP data to better understand how real users used the Office applications. With every release since, we’ve grown our toolset and gained a richer understanding and appreciation of real-world usage data.

For many of the Office 2010 design decisions, we leveraged this usage data to answer questions based on how real customers actually use the applications. To give a single example, take the question of whether the Ribbon should be collapsed when users are in a particular view in PowerPoint; the discussion was about whether users could still figure out how to start a slide show. We have a few different entry points for starting a slide show, and the reporting tool showed how often each one was used.

[Image: reporting tool showing how often each slide show entry point is used]

Based on the Command Name and the ID, we know that the entry point showing 65.9% is not on the Ribbon, but a significant number of users (25.6%) still click the Ribbon. We can drill down even further and see that the vast majority of users access the Slideshow command through the status bar instead of using a hotkey.

[Image: drill-down showing slide show starts by entry point]
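
To show the kind of arithmetic behind a drill-down like this, here is a small sketch that turns aggregated click counts per entry point into the share each one represents. The counts and entry point names are made up for illustration (chosen only to line up with the shares quoted above); the percentages in the post come from the actual CEIP report.

    # Hypothetical click counts per entry point for the same command.
    entry_point_clicks = {
        "StatusBar_SlideShow": 659,
        "Ribbon_SlideShow": 256,
        "Hotkey_F5": 85,
    }

    total = sum(entry_point_clicks.values())
    for entry_point, clicks in sorted(entry_point_clicks.items(),
                                      key=lambda kv: kv[1], reverse=True):
        print(f"{entry_point}: {clicks / total:.1%}")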

While the design process involves more than just data, this example shows how your participation in the CEIP can replace the opinions and anecdotes of ‘experts’. Knowledge about actual usage is extremely valuable and ultimately puts us in a position to make intelligent decisions and create a better product for you.

In future posts we will give you an overview of other feedback mechanisms we use to improve the product, such as error reporting to find and fix reliability issues, as well as a tool that collects data on performance and responsiveness issues.

I look forward to reading your comments and questions on how we use data during the development cycle.

Thanks,

Peter Koss-Nobel, Senior Program Manager Lead, Office Trustworthy Computing