Your Next Data Center - Can You Say "Cookie Cutter"?

"Cookie Cutter Data Center"? Blasphemy, I say: "I can build a better data center than anyone else, I'll build it myself!" All of us who have grown up as IT folks harbor that feeling of "we can do it better ourselves." The truth is that this feeling is part of what makes us good IT folks, but it comes with a serious risk: you may spend valuable time and money making something unique when the off-the-shelf option would have been "good enough."

I'm certainly not an advocate for eliminating the spirit of invention inherent in IT folks, but I do believe it needs to be managed effectively. Proliferating new solutions whose only benefit is being different from your neighbor's is a real risk, especially when it comes to data centers.

In just a few short years we've gone from most data centers having a PUE of 2.0 or higher to most new data centers having a PUE of 1.5 or lower. Cutting the overhead power in half (from 1.0 watt down to 0.5 watts of overhead per watt of IT load) in roughly three years is nothing to sneeze at, and we are now rapidly moving toward a new standard PUE of 1.2. I'm delighted with this achievement and wouldn't consider minimizing it or the associated benefit to enterprises and the environment. My guess is that, collectively, data center improvements in the last year have provided more than five times the environmental benefit of the famous Cash for Clunkers program.
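Since those PUE figures drive the whole discussion, here is a minimal sketch of the arithmetic. The kilowatt figures are made-up illustrations, not from the article: PUE is total facility power divided by IT power, so moving from 2.0 to 1.5 halves the overhead power burned per watt of IT load.

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

def overhead_kw(pue_value: float, it_kw: float) -> float:
    """Power consumed by everything except the IT gear itself."""
    return (pue_value - 1.0) * it_kw

it_load = 1000.0                       # hypothetical 1 MW of IT load
print(overhead_kw(2.0, it_load))       # PUE 2.0 -> 1000.0 kW of overhead
print(overhead_kw(1.5, it_load))       # PUE 1.5 -> 500.0 kW: half the overhead
```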

Now for the contrarian point of view

How much do we spend making a data center unique vs. how much benefit do we gain? In other words, if I build a data center from scratch and achieve a PUE of 1.18, what did that unique effort cost compared to reusing an existing design that would have guaranteed a PUE of 1.2? Unique is expensive, generally speaking, and unless you have the scale of a Yahoo, Google or Microsoft, you probably can't afford it now that the efficiency improvement opportunity has shrunk so much.

So what can we do? Collaboration! What a concept: if we aren't actually using our data centers to compete against each other, why don't we collaborate on building them to a common, if not standard, design? Remember that there's more to efficiency than the PUE number. What is the ROI of your 0.02 improvement in PUE? If you spend $2 million to save $2.2 million over 10 years, you'll find that most CFOs won't be too happy with that decision. There's another important point to consider: if we moved to a standard model for building data centers, we could drive down both the cost of building them and the cost of the MEP gear used to run them. My guess is that the cost benefits of standard implementations outweigh the benefit of dropping your PUE by two hundredths.
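As a back-of-the-envelope check on that CFO point, here is a sketch using the article's own numbers ($2M spent to save $2.2M over ten years), computed as a simple undiscounted ROI that ignores discount rates and energy-price changes:

```python
def simple_roi(capital_cost: float, total_savings: float) -> float:
    """Simple undiscounted ROI: net gain as a fraction of cost."""
    return (total_savings - capital_cost) / capital_cost

# The article's example: spend $2M chasing a 0.02 PUE improvement
# that saves $2.2M over ten years.
roi = simple_roi(2_000_000, 2_200_000)
print(f"{roi:.0%} total, or about {roi / 10:.1%} per year")
```

A 10% total return spread over a decade is roughly 1% per year, which is exactly why the hypothetical CFO balks; any discounting at all would push it close to zero.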

I know this sounds easier than it is

This is a more difficult problem than it might seem. There are a number of perceptions about what's important to a specific data center's design, and while I won't try to cover all of them, the following are a few of the more obvious:

- Location

- Equipment Density

- Infrastructure (virtualization, physical, cloud or a combination)

- Cost of Power

- Environmental Conditions

- Size

- Etc., etc..

All of the above can be real reasons for thinking you have a unique need, and you might be right. But data centers are still designed around the inefficiency of the equipment they house, and they are designed to provide additional redundancy for our systems to avoid business interruption. I believe these negative drivers (inefficient IT gear and redundancy or DR) are being minimized through improvements in server design and the introduction of virtualization and cloud.

How are Cloud, Virtualization & Server design affecting Data Center design criteria?

A large part of the world is in the temperate zone, which means we have temperatures cool enough to support data center operations without additional cooling from HVAC units year-round. In fact, new server designs can handle inlet temperatures of up to 35C (95F). Now, consider the ability to keep the data center much hotter combined with the benefit of using a virtual or cloud solution for your entire environment. With a cloud solution, your redundancy and system protections are inherent in the platform; in most cases you no longer need to build a Fort Knox to protect your systems, because the software takes care of it. Another key benefit of having your infrastructure in a cloud is that you can treat your hardware as a commodity and allow it to fail in place.

Sum up what I've just said and it adds up to a data center that can run hotter and needs fewer people to operate it. This new data center environment would be far more efficient than any of today's typical DCs, because you reduce the space required for staff and you can run hotter in a smaller space. Less space, smaller staff, less power, less equipment, etc., etc.

A Few Prognostications About What's Next for the Data Center

I'm not going to try and answer that question completely, I'm just going to hit the highlights:

- Modularity is a key opportunity area for the DC - Whether that's Containers or within a larger structure

- Rapid Deployment - As with any IT solution, getting it into production faster is a huge benefit vs. getting it 100% perfect (that doesn't mean you skip commissioning; there's a big difference between best possible efficiency and a functionally perfect data center)

- Efficiency is measured in more ways than PUE - PUE is an important measure, but it needs to be compared against overall project delivery and ownership costs

- Remote Management - Data Centers will be added to your environment as "compute capacity" so it makes sense that they should be remotely managed and therefore can be placed almost anywhere

- Failure in Place - Hardware will/should move toward commodity, which allows the owner to make ROI decisions based on factors like failure rate, power-to-compute ratio, ease of acquisition, etc.

- Flexibility - Can be deployed with a mix of density and tier levels

None of the above happens by accident, and if we aren't trying to get there on purpose, it will take much longer to happen. So look around and see who else might benefit from sharing your next data center build, so you can implement the best solution to protect your business going forward and achieve a positive ROI, not just a "good" PUE.



Data Center Automation

Hi Mark,


You make some very valid and insightful points. What do you think about treating the automated life-cycle management of server systems as an essential service, like power, cooling and network? With the majority of data centre servers running an application server stack (web server, application server, database and possibly content server), I believe there may be scope to introduce automated deployment and management systems from day 1, rather than waiting until a platform grows to hundreds or thousands of servers. There are a limited number of OS and application software combinations used in this space. I believe it may be possible to hit the ground running with an automation system that gets a platform 90%+ of the way there with minimal IT head-count, reducing the time and effort required to turn up new facilities.

This is the focus of the academic research I am carrying out for an MSc with the Open University. My survey aims to collect the experiences of those who already use automation, are in the planning or deployment phase, or have considered and rejected it, so I can determine the best time to introduce automation. I'm finding it challenging to get it out to those who can give me the benefit of their experience, but I'll keep plugging away.



Carl Parkinson