by jcannon on July 18, 2006 02:15pm
There is a buzzword floating out there – "business readiness". It seems everyone (including people here at Microsoft) is trying to capture something important to the organizations and people responsible for selecting, deploying and maintaining software for businesses. What does it really mean, though?
Does it mean that a software package, distribution or application meets a benchmark? Does it mean that it is supportable without getting the Big Three consulting companies involved? Does it mean that all its functionality has been tested using regression test cases? Does it mean performance and scalability of the software meets needs? Does it mean that the software will be kept alive into the future by a vibrant community?
In my opinion it means all of the above.
So, what is the problem?
The problem is that business readiness is in the eye of the beholder! (Definition of beholder – the dude who happens to be holding the software when the music stops!)
I think this is a complex problem for two reasons:
1. It is really hard to objectively measure "business readiness" for any piece of software. How would you assign a rating to software? What would you need to consider? More about this later.
2. Organizations may decide, for business reasons, that "business readiness" is not the most important factor in adopting software.
In the good old days, when the startup I founded was in the throes of finding its first customers, a mid-sized company decided it liked us. We made B2B software – basically a marketplace engine / procurement engine / supply-side engine. The customer decided that using our software would be advantageous for them because our engine could be easily adapted to run many different models they wanted to try out, at little cost. They knew we were still early in our product cycle, but they were willing to act as guinea pigs for the business advantage they would gain. They also benefited from the fact that they wouldn't have to hire the engineers or the operational support people to maintain the software, and that their features would find priority in our product roadmap. I would say that our software was not completely "business ready" at the time (more like early beta quality) but that we were able to help attain the business objectives of the customer, while helping our product move towards business readiness.
I will concentrate on the first point – how do you objectively measure business readiness, and suggest a way to look at this. This is not a recipe, just a few thoughts on what we should pay attention to. Hopefully you can dive into the suggested links and find stuff that helps you evaluate the business readiness of some software you are considering.
There are many levels at which software must be evaluated – I assume here that the functionality of the software is not the issue. Of course this is a big assumption, but the evaluation of software "features" is a better understood art than the non-functional aspects of software. (There is even a term, "non-functional requirements", used during requirements and specification – I never quite got my head around how something that didn't function could be a requirement!)
What is the state of the art?
This is a question that is very hard to answer. For any piece of software the best most people can do is to compare it to its competitors in the marketplace. Most organizations that use open source would not have the luxury of having the commercial software to compare against. They would have to rely on word of mouth or other such imperfect evaluations. Even for most commercial software it is hard to get a good grasp of how that software compares with other software.
There are some organizations, such as the ISBSG (International Software Benchmarking Standards Group), a non-commercial body that collects data about software projects and quality. This data is submitted voluntarily by software organizations from all across the world, in many different areas of software. The software for which such data is submitted is largely proprietary and commercial.
A good use of ISBSG data would be to compare defect density within an open source project to the benchmark for that kind of application within the ISBSG data. This would serve as an indicator of the quality of the open source software.
Other data available includes “cost per function point” for a project – this can help evaluate if the cost of the project/product to your organization is close to the “standard” price for good quality projects for the application area chosen.
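To make the two comparisons above concrete, here is a minimal sketch in Python. Every number in it is invented for illustration – real benchmark figures for defect density and cost per function point would have to come from the ISBSG data for your application area.

```python
# Hypothetical numbers throughout: the "benchmark" values below are
# assumptions for illustration, not real ISBSG figures.

def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def cost_per_fp(total_cost, function_points):
    """Project cost per delivered function point."""
    return total_cost / function_points

# Candidate open source project (illustrative values)
density = defect_density(defects_found=42, size_kloc=120)
unit_cost = cost_per_fp(total_cost=250_000, function_points=500)

# Assumed benchmarks for the application area (not real ISBSG data)
BENCHMARK_DENSITY = 0.5   # defects/KLOC
BENCHMARK_COST_FP = 700   # dollars per function point

print(f"Defect density {density:.2f} vs benchmark {BENCHMARK_DENSITY}")
print(f"Cost/FP ${unit_cost:.0f} vs benchmark ${BENCHMARK_COST_FP}")
```

The point is not the arithmetic, which is trivial, but that both indicators only mean something relative to a benchmark for the same kind of application.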
Evaluating the software
Once the gold standard is known, other evaluation criteria for the software at hand can be applied. The gold standard provides a quantitative upper bound in terms of number of defects and cost. But IT departments do not run on cost alone…
For open source software there are a number of evaluation benchmarks/certifications being made available. However, the criteria used to evaluate open source don't exist in a vacuum – they are based on hard-earned lessons in software development in general. I think these criteria apply to all software, whether open source or commercial.
Some of the standards bodies out there include:
OpenBRR (Business Readiness Rating)
This organization is proposing a standard model for rating open source software. The criteria proposed include Functionality, Usability, Quality, Security, Performance, Scalability, Architecture, Support, Documentation, Adoption, Community and Professionalism. It will be interesting to see how this evolves – I think a lot of work needs to be done in this area, and a promising start has been made. What OpenBRR has going for it is that it is an industry-wide effort incubated by a respected university (Carnegie Mellon – please excuse my bias! :)) and is committed to involving open-source-committed companies in the process of generating the model.
OSMM (Open Source Maturity Model) by NavicaSoft
This has been proposed by NavicaSoft, a professional services firm focused on open source, providing strategy, implementation, and training services to its clients. The OSMM model considers the following factors: Software, Support, Documentation, Training, Integration and Professional Services. Practitioners calculate overall OSMM scores for products. OSMM has a little more momentum, having been around longer than OpenBRR, but is less comprehensive or "academic" in its approach – being tied to one company rather than coming from an independent organization may not play in its favor.
OMMM (Open Source Maturity Model) by CAP-GEMINI
This is a pretty comprehensive model which aims to generate a "score card" for open source products. It applies the criteria of Age, Licensing, Human hierarchies, Selling points, Developer community, Modularity, Collaboration with other products, Standards support, Ease of deployment, User community and Market penetration to generate the score card. Since the model was developed by a consulting organization, there is a well-framed process for applying it. They have recently moved the project to the www.seriouslyopen.org repository.
There is nothing stopping you from considering criteria from each of those models to evaluate the “business readiness” of the software you are concerned with. I suspect that any good model will show comparable results, or the discordant models will fall by the wayside!
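All three models boil down to the same mechanical core: rate each criterion, weight it by how much your organization cares, and combine. A minimal sketch of that scorecard idea, with criterion names borrowed from the models above and weights/ratings invented purely for illustration:

```python
# A minimal weighted-scorecard sketch. Criterion names come from the
# models discussed above; the weights and ratings here are made up.

def readiness_score(ratings, weights):
    """Weighted average of per-criterion ratings (each rated 1-5)."""
    assert set(ratings) == set(weights), "every criterion needs a weight"
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

# How much this (hypothetical) organization cares about each criterion
weights = {"Quality": 3, "Support": 2, "Documentation": 1, "Community": 2}
# How the candidate software rated on each criterion (1-5)
ratings = {"Quality": 4, "Support": 3, "Documentation": 2, "Community": 5}

score = readiness_score(ratings, weights)
print(f"Business readiness: {score:.2f} / 5")  # weighted mean, 30/8 = 3.75
```

The interesting (and hard) part is everything this sketch hides: choosing the criteria, calibrating the weights to your business, and rating honestly.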
Show me the money
In their "Expert Letter", CAP Gemini – developers of the OMMM model – try to make the point (somewhat unconvincingly, in my opinion) that because commercial software is developed differently from open source, it has to be evaluated differently.
In my opinion, it's all about the value the software provides. If that value can be distilled down to dollars, that may be the best way of convincing people.
Khaled El Emam has this cool ROI process that starts with software metrics, such as number of bugs, and ends up with a dollar figure for how much a software product/project will cost its users in terms of "cha-ching". Maybe every product needs to be put through this "business readiness" measurement!
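To give a feel for the shape of such a metrics-to-dollars calculation, here is a toy model. It is not El Emam's actual process – his is far more rigorous – and every number and parameter name below is an assumption for illustration only.

```python
# Toy defects-to-dollars model. All inputs are assumptions, invented
# to show the shape of the calculation, not a real ROI methodology.

def expected_defect_cost(latent_defects, escape_rate, cost_per_incident):
    """Dollars users will pay for defects that escape to the field."""
    return latent_defects * escape_rate * cost_per_incident

cost = expected_defect_cost(
    latent_defects=200,       # estimated bugs remaining at release (assumed)
    escape_rate=0.15,         # fraction users will actually hit (assumed)
    cost_per_incident=4_000,  # support + downtime per incident (assumed)
)
print(f"Expected field-defect cost: ${cost:,.0f}")
```

A number like that, however rough, speaks to the budget holder in a way a defect count never will – which is exactly the point of grounding "business readiness" in dollars.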
I am now thinking about visualizing the business readiness using some cool graphic tools – “be the software, be the soooooooftware” (apologies to “Caddyshack”!)