Designing Generation 4.0 Data Centers: The Engineers’ Approach to Solving Business Challenges


Part 1: 


A couple of years ago, when our Data Center Services’ Research & Engineering team within Microsoft’s Global Foundation Services (GFS) group kicked off the Generation 4.0 Data Center design project, we began with a question: What are the primary business challenges facing data center deployments today?


Challenge #1: Time to Market


We kept coming back to a couple of leading issues. The first focused on time to market and meeting a variable demand profile. The issue is simple: it takes months to years to build data centers, but sometimes businesses need to move faster.


This issue isn’t entirely new, of course. When we designed our Chicago facility, we made significant gains in this area by devoting more than half the facility to housing containers that served as modularized server rooms, or Pre-Assembled Components (PACs). Server PACs dramatically reduce time to market because they are assembled in parallel with the site infrastructure and building for the data center. When site preparation is complete, all we have to do is roll the containers into the new facility, connect a few cables, and we’re up and running.


This advance was the heart of our Generation 3 data center. While we saw it as a great step forward, it almost immediately led us to ask whether we could take the gains even further by modularizing the entire facility. That question led us to the crux of Generation 4, where we created further PACs: Generator PACs, Medium Voltage Switchgear PACs, UPS PACs, and so on. By moving to PACs for all of these key data center systems, we created a design in which almost everything we need to add new capacity can be pre-assembled in parallel and then brought together in a matter of weeks. Because the components all arrive in their own modular containers, we eliminated the need for much of the on-site construction, which can be the most time-consuming, expensive, and environmentally unfriendly element of building a data center.


Perhaps most importantly, with Generation 4 we can quickly add capacity in increments as demand requires. Gone are the days when we had to wait 12 to 18 months for a large data center to be built, only to use a small portion of its capacity while demand caught up. In short, our Generation 4 design delivers a time-to-market revolution that the data center industry has never seen before.


Challenge #2: Cost


The next business challenge (after time to market) is cost. We looked at several areas impacting cost, capital efficiency, and return on invested capital (ROIC), which affects cash flow and which we evaluate using Net Present Value (NPV). Capital outlay itself is now widely measured across our industry in dollars per watt, rather than dollars per square foot.


Operational costs that impact Cost of Goods Sold (COGS) are measured in dollars per kilowatt per month and are driven largely by depreciation, energy costs, and operations staffing. A couple of years ago we moved to a chargeback model based on power rather than real estate, and as Christian Belady pointed out in his blog, that incentive-based approach proved effective in turning the corner on power usage and costs.


To tackle other costs in the data center, the use of server PACs is proving equally effective. Using this approach, we project our Chicago facility will deliver cost savings of approximately 30 percent and enable a more efficient cash flow, because we will not build out the modularized server PACs until they are required. That is just the beginning of the cost benefits as we move to Generation 4 and fully modularize the data center. The traditional raised floor is not where the majority of the money or lead time is spent; it is in the electrical and mechanical systems. Moving to PACs in these areas will reduce costs and free large amounts of capital previously required to construct huge facilities that we might not fully utilize for several years.
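To see why deferring build-out improves cash flow, consider a back-of-the-envelope NPV comparison. This is a hedged sketch: the dollar figures and discount rate below are invented for illustration, not Microsoft's actual numbers.

```python
# Compare the net present value (NPV) of capital outlays for a single
# up-front build-out versus incremental, demand-driven PAC deployments.
# All figures are hypothetical.

def npv(cash_flows, rate):
    """Net present value of a list of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

discount_rate = 0.08  # assumed cost of capital

# Up-front: spend the full $120M in year 0.
upfront = npv([-120e6], discount_rate)

# Incremental: spend $30M per year over four years as demand materializes.
incremental = npv([-30e6, -30e6, -30e6, -30e6], discount_rate)

print(f"Up-front outlay NPV:    ${upfront / 1e6:,.1f}M")
print(f"Incremental outlay NPV: ${incremental / 1e6:,.1f}M")
```

Because later outlays are discounted, the incremental schedule costs less in present-value terms even though the nominal totals are equal, and capital not yet spent remains free for other uses.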


Challenge #3: Efficiency


The discussion of the impact of energy costs led us to the next business challenge: efficiency. Efficiency has been called the “fifth fuel” and is regarded as a source of energy in itself. Today the industry is beginning to adopt Power Usage Effectiveness (PUE) as the recognized metric for data center efficiency. If we look at the larger picture beyond operational power consumption alone, the Total Cost of Energy (TCOE) also needs to be considered to address the full lifecycle of the data center, from component manufacture to transportation to construction and on-site assembly, and even end of life.
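PUE is simply the ratio of total facility power to the power actually delivered to IT equipment. A minimal sketch, with invented load figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. 1.0 is the theoretical ideal; lower is better."""
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 10 MW in total, of which 8 MW reaches
# the IT equipment (the rest goes to cooling, power distribution
# losses, lighting, and so on).
print(pue(10_000, 8_000))  # -> 1.25
```

The remaining 2 MW in this example is overhead, which is exactly what the efficiency work described here aims to shrink.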


When we considered the goal of making our data centers more energy efficient, our team’s debate focused on the inefficiency of using redundant hardware systems to provide backup and failover capabilities. Previously, many data centers were built to a single level of reliability, set by the highest common denominator. Our Generation 4 team no longer believes that hardware redundancy is the best way to ensure service reliability. We therefore took a fresh approach to this industry problem by looking at the latest technology capabilities and then establishing multiple classes of service. Each class was assigned a differentiated chargeback model to encourage our properties to move to the lowest-cost, most efficient service level that meets their business needs. To support this, we are developing software that moves reliability higher up the Open Systems Interconnection (OSI) stack, all the way to the operating system and application layers. The four classes of data center service we have created are still being researched, and we may consolidate them over time. Regardless, our goal of matching software-based reliability to hardware levels of reliability is driving the right discussions.


Challenge #4: Flexibility and Density


The last major challenge we identified was to enable data centers to be flexible and to host multiple form factors and levels of density. With traditionally built facilities, the density of the data center is normally fixed at design time and then remains unchanged for 15 years or longer over the facility’s lifecycle. Density, measured in watts per square foot, drives capacity planning. Build at too low a density and the data center will be less energy efficient and take up more real estate than necessary, which matters when land is expensive. Build at too high a density and you can strand power and cooling capacity, which is where the majority of the costs lie.
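The stranding problem is easy to see with a quick calculation. All of the figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Stranded capacity: power and cooling provisioned at design time
# that the actual IT deployment never uses. Figures are illustrative.

design_density_w_per_sqft = 200       # density fixed at design time
floor_area_sqft = 50_000
provisioned_kw = design_density_w_per_sqft * floor_area_sqft / 1000   # 10,000 kW

deployed_density_w_per_sqft = 120     # what the hardware actually draws
used_kw = deployed_density_w_per_sqft * floor_area_sqft / 1000        # 6,000 kW

stranded_kw = provisioned_kw - used_kw
print(f"Stranded power/cooling: {stranded_kw:,.0f} kW "
      f"({stranded_kw / provisioned_kw:.0%} of the build)")
```

In this hypothetical case, 40 percent of the electrical and mechanical plant, the most expensive part of the facility, sits idle for the life of the building, which is the cost the flexible, multi-density Generation 4 design is meant to avoid.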


The Goals our Engineering Team Set


· Reduce time to market and deliver the facility at the same time as the computing infrastructure

· Reduce capital cost per megawatt and reduce COGS per kilowatt per month by class

· Increase ROIC and minimize the up-front investment for data centers

· Differentiate reliability and redundancy by data center class, and design the system to flexibly accommodate any class of service in the same facility

· Drive data center efficiency up while lowering PUE, water usage, and overall TCOE

· Develop a solution that accepts multiple levels of density and form factors, such as racks, skids, or containers


Next week, I will talk about the process we used to develop these ideas. A video on the Microsoft 4th Generation data center is also available.




     Daniel Costello, director of Data Center Services

     Global Foundation Services


Daniel Costello is the director for Data Center Services at Microsoft, responsible for data center research and engineering, standards and technologies, the data center technology roadmap, Generation 4 data center engineering, and data center automation and integration with IT hardware, operating systems, and applications. Daniel also works closely with Microsoft Research on proofs of concept in support of the data center of the future, and manages a team of facility engineers and service architects.


Interested in learning more about Green IT? Check out Microsoft’s Webcast Series on TechNet!


The series will feature a number of well-known speakers, including Christian Belady and Daniel Costello. The full session listing is provided below:






Time (PST) | Session

11:00 AM | Transforming the Data Center with Energy Efficiency (Level 200)

10:00 AM | Hyper-Green Virtualization: Scaling Enterprise IT for Energy Efficiency (Level 200)

9:00 AM | Cloud Computing Futures: Creating Greener Clouds with Microsoft Research (Level 200)

10:00 AM | Improving Energy Efficiency with Windows 7 Power Management (Level 200)

