A friend insists that we’ll only
know the recession is over when software vendors no longer start every
whitepaper with the phrase “In these tough economic times …” It may be as
reliable an indicator as any.
Meanwhile, in these tough economic
times, I often read of factories suffering so badly that they are “operating at
only 50% of capacity.” For a manufacturing plant, such low utilization is a
disaster. So, gentle reader, what do you think would be the average utilization
of your data center’s capacity? Nothing like 50%, that’s for sure. Typical
enterprise servers run at about 10% utilization according to a recent McKinsey
report. They may, just may, be able to reach as high as 35% with a concerted
There are many good excuses for
this situation, with both business and technical justifications. Enterprise
applications on the same server do not always play together nicely. One will
demand all the memory it can get, sulking unresponsively in a corner if it can’t
get it; another will push over less aggressive applications in order to grab
more CPU. In the SQL Server world, we’re working on that continuously, with
every version adding better resource governance and management; SQL Server
2008, for example, introduced the Resource Governor. (See http://bit.ly/ss2008rg for specific information
about SQL Server 2008.) Then again, these same applications are often
mission-critical, and business requirements force us to isolate them from the
risk of downtime and other disruptions. Approaching our problems this way,
it’s quite easy to add a new server for this application, and another server
for that one, and sure enough, the result is soon 10% utilization.
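To make that governance concrete, here is a minimal Resource Governor sketch
in T-SQL, of the kind SQL Server 2008 supports. The pool and group names, the
percentage caps, and the ReportUser login in the classifier are all
illustrative assumptions on my part, not recommendations:

    -- A minimal Resource Governor sketch (SQL Server 2008 and later).
    -- Names, limits, and the ReportUser login are illustrative assumptions.
    USE master;
    GO

    -- A resource pool caps how much CPU and memory its workloads can claim.
    CREATE RESOURCE POOL ReportPool
        WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25);
    GO

    -- A workload group binds sessions to that pool.
    CREATE WORKLOAD GROUP ReportGroup
        USING ReportPool;
    GO

    -- The classifier routes sessions from the hypothetical ReportUser login
    -- into the capped group; everything else stays in the default group.
    CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF SUSER_SNAME() = N'ReportUser'
            RETURN N'ReportGroup';
        RETURN N'default';
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO

Note that MAX_CPU_PERCENT only bites under contention: the reporting workload
can still use idle CPU, but it can no longer starve its neighbors, which is
exactly the bad behavior described above.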
It won’t do. There’s a capital
cost, and fixed running costs, for every server we add, not to mention the
environmental considerations of wasted energy and resources that weigh heavily
on many of us, recession or not. I have visited data centers in emerging
economies from Egypt to China where simply having enough power available is a
problem, and resource management is imperative.
In the database world, we have
traditionally approached these problems by running multiple native instances of
SQL Server on the same box. This can indeed consolidate hardware and reduce costs.
Nevertheless, IT managers and DBAs are increasingly looking to virtualization.
Why? There are numerous advantages. For example, with virtualization each
application can have a dedicated, rather than shared, Windows instance, which is
especially useful for mixed workloads; and virtualized instances are limited
only by the capacity of the machine, rather than by the native 50-instance limit.
SQL Server 2008 works
exceptionally well with Windows Server 2008 R2 and Hyper-V to deliver effective
virtualization. In SQL Server 2008 R2 (shipping in the first half of 2010) we
will support up to 256 logical processors on that
platform to scale those solutions even further. There are some great scenarios for this. Business
Intelligence applications such as Analysis Services and Reporting
Services are prime candidates, especially when mixed BI and operational
workloads peak at different times. Virtualization has other benefits for the database user:
for example, the lifecycle from development to test to production becomes
easier to manage with a consistent, virtualized environment.
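As a quick sanity check before and after virtualizing, it’s worth knowing what
an instance actually sees. This standard DMV query (sys.dm_os_sys_info, with
the column names as they appear in SQL Server 2008) reports the logical
processor count, the hyperthreading ratio, and physical memory:

    -- What does this instance see? Logical CPUs, hyperthread ratio, and
    -- physical memory, from the SQL Server 2008 version of this DMV.
    SELECT cpu_count,
           hyperthread_ratio,
           physical_memory_in_bytes / 1048576 AS physical_memory_mb
    FROM sys.dm_os_sys_info;

Run inside a guest, cpu_count reports the virtual processors the hypervisor
exposes to that VM, not the host’s total.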
It’s really worth considering
virtualization, and building up your understanding of the technology and
requirements. There’s a great whitepaper at http://bit.ly/sqlcatvirtual with sound advice and background
for any SQL Server 2008 DBA considering this technology. Good material to have
to hand, in these tough economic times.
- Donald Farmer
- Twitter: @donalddotfarmer