Standardized Data Center Stack Framework Proposal: Driving for a standard to unite owners and operators in how they design, discuss, measure, and compare their data centers regardless of location, industry, or function.
When you think about a data center, what do you picture? Almost any aspect could come to mind: mechanical and electrical systems, network infrastructure, storage, compute environments, virtualization, applications, security, cloud, grid, fabric, unified computing, open source, and more. Now picture how these items factor into efficiency, sustainability, or even a total carbon footprint. The view of a data center quickly becomes complex, making it hard to answer questions like "How efficient is our data center?" for company executives. Where does someone start measuring these kinds of complexities? Are the right technologies in place to do so? Which metrics suit a particular industry and data center design? Data center professionals all over the world are asking the same questions and feeling the same pressures. You are not alone.
Data centers are changing faster than at any other point in history. Yet with all this change, data center facilities and IT professionals face numerous challenges in unifying their peers to solve problems for their companies. Sometimes you may feel like you are talking different languages or living on different planets. What do virtual computers and three-phase power have in common anyhow? Has your IT department ever come to you asking for more power without considering that additional cooling is required? Do you have hot spots in places you never expected to have servers? Has virtualization changed your network architecture? Your security protocols? What exactly does cloud computing mean to your data center? Is cloud computing being performed in your data center already? More importantly, how do you align the different data center disciplines to understand how new technologies will work together to solve data center problems?
The IT/Facilities gap is no longer a new topic of discussion. Almost any data center trade show will have at least one session about the infamous gap, but what tools do you have to close it? With ever-increasing densities, weary data center professionals still have to keep the data center operating while facing additional challenges around power efficiency and interdepartmental communication.
To compound the problem, 'green' has become the new buzzword in almost every facet of our lives. Data centers are no exception to green marketing and are sometimes considered easy targets due to their large, concentrated power and water consumption. New green solutions are sometimes not so green because of a limited understanding of data center complexities, and may disrupt cost-saving, efficient technologies already in use. Corporations are trying to calculate their carbon footprint, setting goals to reduce it, and may face pressure to apply a new solution without understanding the entire data center picture. Various government bodies around the world have seen the increase in data center power consumption and realize it is only trending up. It is only a matter of time before regulations are put in place, forcing data center operators to comply with new rules, possibly beyond what a data center was originally designed for.
But we all know that the most visible pressure is rising cost. The uncertainty of the economy has everyone looking for ways to cut and optimize data centers further than ever before. Data centers have reached the CFO's radar and are under never-ending scrutiny to cut capital investments and operating expenses. So what are data center owners and operators supposed to do? Invent their own standards? Metrics? Framework? Which industry standards and metrics apply to your data center, and will they help you show results to your CFO? There has to be a better way. We need to unite as an end user community to create a common voice and attack this problem together.
Enter Data Center Pulse. In September of 2008, this new data center end user community was formed with one simple goal - influence the industry through end users. The DCP membership currently stands at 1067 data center owners and operators representing over 600 companies in 45 countries and almost every industry in the world. DCP members are the customer. They are the people who make the billions of dollars in annual purchasing decisions that drive the IT economy.
In February of 2009, Data Center Pulse (DCP) held a summit in Santa Clara, CA to tackle some of the biggest challenges our community was facing: Power, Metrics, Industry Alignment, Server Efficiency, and Cloud Computing. Leaders from DCP drove each individual track, and findings from each track were presented to the industry the next day. Through the process, it became clear that a key component was missing: there was no common framework to address all aspects of the data center - that is, the common building blocks that make up every data center in the world regardless of country, business, or function.
At the summit, the 'cloud computing' track was tasked with understanding data center interdependencies from top to bottom, so that users could analyze the potential of outsourcing to a cloud technology solution. Out of these open questions, discussions, and uncertain future technologies, one data center operator from a major financial institution shared his view of data center interdependencies: a stack of building blocks - the fundamental ingredients that make up a data center. All of the track leaders and participants realized that everything fits into one or more of these major building blocks. Blocks have interdependencies, and turning a knob in one will affect something in another. We agreed that this stack framework should be developed as a common approach to unite users and providers on how to address the data center machine.
The Data Center Pulse Stack grew from this original proposal by including input from other data center operators. The DCP Stack graphic represents the first draft of this stack framework.
The development continues, but the objective is simple - provide one common framework that can describe any data center, anywhere, doing anything. The discussions are framed around simple questions: Where is the data center? What feeds it? How is it designed? And what does it do? By addressing each of these questions, individual productivity metrics can be broken down into their respective blocks, enabling every data center to measure them the same way. Additional input included adding a baseline and a carbon score, which provide a common way to answer "What feeds it?" Everyone needs power, and a data center's carbon score can be calculated. The next step is to apply industry-established metrics to each block running in the data center - for example, PUE for the MEP layer. The platform layer would have one or more productivity metrics for useful work. Each of these metrics is then rolled up into a top-level efficiency metric that calculates the carbon score out: in essence, the carbon score in, the work performed inside the data center (all layers), and then the carbon score out. Much like vehicle horsepower ratings, fuel efficiency, and a smog check, the DCP stack would allow any data center to be compared using a simple, certified method of measurement that peers, industry manufacturers, and companies agree upon.
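To make the roll-up idea concrete, here is a minimal sketch in Python of how per-layer metrics (PUE for the MEP layer, a useful-work figure for the platform layer) might combine with the power feed into a carbon score. The function names, the example numbers, and the roll-up formula are all illustrative assumptions, not DCP definitions - the framework itself is still being developed.

```python
# Hypothetical roll-up of stack-layer metrics into a carbon score.
# All formulas and figures below are illustrative assumptions.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """MEP-layer metric: Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_equipment_kw

def carbon_in(total_facility_kw: float, grid_kg_co2_per_kwh: float, hours: float) -> float:
    """Carbon entering the stack: energy drawn times grid carbon intensity (kg CO2)."""
    return total_facility_kw * hours * grid_kg_co2_per_kwh

def carbon_per_unit_work(carbon_kg: float, useful_work_units: float) -> float:
    """Top-level score: carbon in divided by useful work performed by the platform layer."""
    return carbon_kg / useful_work_units

# Example: a 1,000 kW facility with 625 kW of IT load, over one day,
# on a grid emitting an assumed 0.5 kg CO2 per kWh.
mep_pue = pue(1000.0, 625.0)                 # 1.6
daily_carbon = carbon_in(1000.0, 0.5, 24.0)  # 12,000 kg CO2 in
score = carbon_per_unit_work(daily_carbon, 3_000_000)  # per unit of useful work
```

Under this sketch, two data centers with different cooling designs and workloads could still be compared on the single `score` figure, which is the spirit of the carbon-in/work-performed/carbon-out roll-up described above.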
So what does this have to do with answering what three-phase power has to do with virtualization? Using the DCP stack like a map, all changes in the data center can be traced and used to identify interdependencies. Three-phase power is often needed for new servers that leverage virtualization (Server/MEP). The new server came with a new Storage Area Network (SAN) switch and storage array (Storage/Network). All the new IT equipment needed more cooling, so it had to be placed in an area that could handle more floor tiles (Physical/Spatial/MEP). After installing the new servers, SAN switch, and storage array, 10x more work can be performed than with the old equipment, but the cooling is less efficient and the new IT equipment uses more power. The individual layer metrics changed, but the top-level efficiency score went up, improving the carbon score as well. Even though the cooling metric is less efficient compared to other certified data center scores, what is done inside the data center may make the overall score better than a peer's.
Members of Data Center Pulse believe the best way to describe, communicate, and innovate data center thinking between peers and the industry is through the use of a common data center stack framework. Do you agree? Do you disagree? What would you do? We would like to know. We would like everyone to participate in building this common framework. You can participate in this development by sending email to email@example.com.
The stack framework can be found HERE. Watch the website and the DCP YouTube channel for more updates on the stack's development.