Data Center Pulse Blogs
The argument over whether there is such a thing as "private cloud" just won't go away. Many of the big-name SaaS and public cloud players continue to publish content that pooh-poohs the reality of private cloud. I'm writing to suggest that the arguments against private cloud are in many cases wrong, and in some cases pure sales FUD (Fear, Uncertainty & Doubt). However, before I launch into my little diatribe, I want to make clear that just because I'm a private cloud believer doesn't mean I don't believe in "public cloud".
The more common arguments against the viability of private cloud revolve around themes that vendors have been selling IT for years:
- "make better use of your internal staff by having them work on things that drive business differentiation"
- "private cloud doesn't have infinite scale"
- "the security of public cloud/SaaS providers is better than anything a private cloud could create"
- "staffing for running private cloud environments will be difficult" (I thought they said there wasn't a "private cloud")
- "public cloud offers economies of scale that can't be met by a single enterprise"
So, let's take each one of the points above and attempt to break through the FUD.
"make better use of your internal staff by having them work on things that drive business differentiation"
Who can argue with that statement? I certainly can't, because I believe we should all be trying to do that. However, as IT leaders we've been hearing that from our vendor partners for decades, and yet IT as we know it still exists. Why? Because inherently most companies understand the "innovation & creativity" benefit of having their own IT team. While I hope for the day when most IT jobs are higher-level, business-facing positions, I also understand that that day is many years away, cloud or no cloud.
"private cloud doesn't have infinite scale"
True, but SO WHAT? How often does your business actually need infinite scale? I've done cost comparisons of well-designed private cloud implementations for large enterprises and found them to be very competitive with public cloud. In fact, they were so competitive that carrying a little extra capacity and holding a contract for "burst" capacity was acceptable. If I knew I needed 2X or more extra capacity even two or three days a month, I would consider putting that specific environment in a public cloud. The alternative is to have contracted burst capacity available.
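The burst-days trade-off above can be sketched with some back-of-the-envelope arithmetic. All of the dollar figures below are illustrative assumptions, not real pricing from any vendor or from my own cost comparisons:

```python
# Hypothetical cost sketch: when does contracted burst capacity beat
# moving a peaky workload entirely to public cloud? All numbers are
# illustrative assumptions.

def monthly_cost_private_with_burst(base_cost, burst_cost_per_day, burst_days):
    """Private cloud sized for steady load, plus contracted burst capacity."""
    return base_cost + burst_cost_per_day * burst_days

def monthly_cost_public_for_peak(peak_cost):
    """Hosting the same workload entirely in a public cloud, sized for peak."""
    return peak_cost

# Assumed figures: steady private capacity at $100k/month; bursting to 2X
# costs $5k per burst day; public cloud sized for peak costs $130k/month.
for burst_days in (3, 10):
    private = monthly_cost_private_with_burst(100_000, 5_000, burst_days)
    public = monthly_cost_public_for_peak(130_000)
    winner = "private + burst" if private < public else "public"
    print(f"{burst_days} burst days/month: {winner} wins")
```

With these assumed numbers, a few burst days a month favors private cloud with contracted burst, while frequent bursting tips the balance toward public cloud; the real decision obviously depends on your own contracts and workloads.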
"the security of public cloud/SaaS providers is better than anything a private cloud could create"
Generally speaking this statement might be correct. Certain industry verticals (financial services, for example) could argue differently, but it's probably true for many. However, does that really change the equation? The difference between internal and external security might be minimal compared to an unforeseen legal issue that results from public cloud dependence. In each case the business needs to make a conscious decision about the importance of its IP. If security isn't important to the business when it's internal, it still won't be important in the cloud. Now consider the fact that the worst security threats come from the inside, and you can understand why just moving to the cloud doesn't solve anything. In fact, it's very much like outsourcing something that's broken: in most cases all you've really done is make the problem more intractable and create a no-win situation with your vendor.
"staffing for running private cloud environments will be difficult"
So my first comment is: if there's no such thing as private cloud, how difficult could it be to staff for it? All kidding aside, IT organizations the world over have been building and managing complex infrastructure for years. Arguably, a well-designed private cloud implementation actually improves usability and simplifies the roles of many IT folks, because it can bring automation to otherwise high-risk manual tasks.
"public cloud offers economies of scale that can't be met by a single enterprise"
Next to the "infinite scale" point that's always used by public cloud providers, this is the most common refrain. Unfortunately, it's not necessarily true. As I noted above, you can build private cloud environments that offer significant scale and meet all the necessary environment, security and staffing requirements at or below the cost of public cloud offerings. I'm not saying it's easy, and in each business use case the drivers and costs will be different, but it can be done. I'm not arguing for private cloud on cost savings alone; for me it's more about options and the potential to bring new innovation to the business. If for whatever reason I can't use public cloud for any or all of my workloads, at least I can get the majority of cloud benefits from a private cloud.
So, just say yes to private cloud if it's what the business needs, but like any IT solution don't implement it without a clear set of objectives. Do your due diligence and ensure you're bringing the right tool to generate the right business opportunities, whether that tool is a private, public, hybrid cloud, or none of the above.
In closing, I'm not trying to argue against public cloud, but rather to argue that the reality and benefits of both private and public cloud are real.
Long Live Cloud!
I love meeting people that have as much passion and drive for technology as I do. Wade Vinson, a Power & Cooling Strategist at HP, is one of those guys. He is better known as the POD father (and yes, he gets a lot of guff for that name!). He ranks pretty high on my geekism scale due to his utter exuberance for his Lego-set data center. I recently visited the HP site in Houston, Texas to look at the latest HP POD. Wade was nice enough to let me film a tour with him for our latest Data Center Pulse "On-The-Road" episode. He also tried very hard to convince me he could join DCP Core since he is an "End User" after building his 6MW POD test facility...hmmm...
The jury is still out on standard "containers", but Wade has evolved their approach and come up with a new building block. It's no longer a shipping container; it's a POD (sounds familiar). The POD is wider and taller than a container, with everything you need packed inside. Ahh, glorious density! That is music to my ears. Now if only it had a high-temp, liquid-cooled IT equipment option with over-clocking (that sounds like a familiar request as well). :-)
Episode 3 of DCP "On The Road" is embedded below. Stay tuned for more episodes on the DCP YouTube Channel. http://www.youtube.com/user/datacenterpulse
Why is an individual computer server different from a data center, other than scale? After all, the physical characteristics of a server are very similar to those of a data center.
Let's Compare the Characteristics of a Server to those of a Data Center
Server Housing (case) = Data Center Building
Case Open Alarm = Data Center Entrance Security & Environment Management Alarms
Fans = Air Conditioning
Power Supplies = Power Supply (transformers, UPS, distribution, PDUs)
Alerts/Alarming = Data Center Monitoring and Alerting
Administrator = Data Center Staff/Manager
Why is the above comparison of a server and a data center important?
"We need to move away from building custom data centers." There, I said it. Man, that hurt. As a data center guy, I can't stand the idea that there will soon be a time when building a unique facility for my company will most likely be the wrong thing to do. Before you start throwing things at your computer screen and yelling my name in anger, consider the following historical examples of the automobile and the personal computer:
The automobile - Prior to the invention of the assembly line by Ransom Olds, cars were handmade and individually assembled.
A Little Assembly Line History:
In order to keep up with the increasing demand for those newfangled contraptions known as horseless carriages, Ransom E. Olds created the assembly line in 1901. The new approach to putting together automobiles enabled him to more than quadruple his factory's output, from 425 cars in 1901 to 2,500 in 1902.
Olds should have become known as "the father of the automotive assembly line," although many people think it was Henry Ford who invented the assembly line. What Ford did was improve upon Olds's idea by installing conveyor belts, which cut the time to manufacture a Model T from a day and a half to a mere ninety minutes. Henry Ford should have been called "the father of automotive mass production."
One could also argue that since the advent of the assembly line and mass production, there continue to be innovations that reduce the complexity and "individual" nature of many automotive components. Most automobile manufacturers don't build their own windshields or individual parts anymore. Instead they are mass produced by others who can apply standards across multiple car lines. You can still buy a handmade car, but you definitely pay the price.
Personal Computers (PCs) - Prior to 1982, many of us bought components and built our own PCs. Who would build their own server or PC today? Maybe high-end gamers, or organizations looking to solve a very unique problem whose solution matters more than the cost. Generally speaking, building your own computer today would be much riskier and costlier than buying one pre-built.
Increased attention and new money will accelerate the push to a standard build data center model.
Historically speaking most of us in the data center space understand that there's still room for unique design as we push the limits towards the best, most efficient and flexible data center. However, the data center has gained so much visibility over the last five years that it is no longer the black box that it used to be. This public awareness has dramatically increased investment dollars in the data center industry and created a large number of data center experts. The investment and influx of experts mean that change and improvements happen at a much faster pace than in the past.
If manufacturing automation and standards can be applied to cars and PCs why shouldn't they be applied to Data Centers?
So, I guess what I'm saying is that the data center is ripe for the picking, just like the manufacture of PCs was in the early 80s. As we reach the point of diminishing returns on the efficiency and modularity of data centers, companies that continue to try to build their own will be at a distinct disadvantage. We all know that time is one of our most costly resources. If you can implement new capacity in a matter of weeks vs. a matter of years, you're better positioning your enterprise to leverage its IT investments to meet changing business demands. You may believe that you could build a data center that's better than pre-fab modular boxes, and you could be right. However, just as in the case of the PC, you'll likely discover that your risk and months of lost hours don't justify the small expected improvement in efficiency.
Who is going to win in the modular data center space?
What am I, a psychic? I don't have a clue, but the writing is on the wall: first it was inflexible containers and infrastructure that wasn't ready; now the containers have improved, pre-fab modular designs are available, and IT infrastructure is quickly catching up. The key driver toward a low-cost, standard "capacity of compute" model for your DC space will be the point at which the majority of our applications and infrastructure can be distributed and made portable through a combination of cloud technologies.
Hold on tight it's going to be a fun and probably rocky ride, but in the end businesses should win and that's what's important. Who would you name as the Ransom Olds or Henry Ford of the Data Center?
This link is to a related blog I wrote a little over a year ago:
The eBay modular Data Center RFP finalists are announced.
I'm stoked! Seven weeks ago, we released the Phoenix Modular Data Center RFP to the industry. The intent was to encourage companies to participate in the RFP, level the playing field and spark some real innovation. In the past we created a list of vendors that we believed were qualified to design and deliver a data center for us, and we also told them how we wanted it built. This time, we just gave the parameters we wanted to achieve. It was up to them to come back with the most creative, flexible and cost-effective design they could. We truly leveraged the talent and ingenuity of the vendors, engineers, architects and others in the data center community.
The response was even more than I had hoped for. We approved 37 company requests to participate in the RFP, almost 30 more than our original list. Regrettably, we had to turn away another 15 companies because they missed the window to participate. But not to worry: this is just the beginning, and there will be future projects they can participate in.
The majority of the companies allowed us to share stats about them. They ranged in size from as few as 14 employees to over 320,000. Combined annual revenue was over $110B! Together they had built almost 800 data centers and had 227 years' worth of experience. The submissions came from 28 design firms and 9 manufacturers, along with 6 partner submissions. All of them had modular experience and offerings, and all of them committed to meeting their efficiency commitments in the design. I don't know about you, but with that much brain power and experience, this data center project has the potential to change the game.
Now, our requirements were not easy: multi-tier, modular, vendor agnostic, scalable, multi-temp air and liquid to each location, rack-to-container mixtures, 100% free cooling in Arizona year round, future-proof, extreme density, and more. But people really stepped up. In the end we received 17 official submissions. Each was ranked based on how it scored against the scoresheet, which was published on the RFP status page early in the process. The companies knew what we needed, they knew how they would be scored, and they knew they were competing with other very creative people in their field. There's nothing like a competition to bring out innovation. Last week, my team spent five solid days diving into the details of these submissions. We made a conscious choice that each proposal had to stand on its own merit: we read what they submitted, analyzed their designs based on the drawings and narratives they provided, and scored accordingly. What we achieved was a very fair process that gave us six solid finalists.
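A published-scoresheet process like the one described above boils down to a weighted sum per proposal. The criteria, weights and ratings below are purely hypothetical; eBay's actual scoresheet is not reproduced here:

```python
# Minimal sketch of a weighted-scoresheet evaluation. Criteria, weights,
# and ratings are hypothetical illustrations, not the real RFP scoresheet.

WEIGHTS = {"cost": 0.30, "efficiency": 0.25, "modularity": 0.25, "innovation": 0.20}

def score(proposal):
    """Weighted score for one proposal; each criterion is rated 0-10."""
    return sum(WEIGHTS[criterion] * proposal[criterion] for criterion in WEIGHTS)

# Two invented proposals, rated against the same published criteria.
proposals = {
    "Vendor A": {"cost": 8, "efficiency": 9, "modularity": 7, "innovation": 6},
    "Vendor B": {"cost": 7, "efficiency": 7, "modularity": 9, "innovation": 9},
}

# Rank proposals by score, highest first.
ranked = sorted(proposals, key=lambda name: score(proposals[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(proposals[name]):.2f}")
```

Publishing the weights up front, as the RFP did, means every bidder can see exactly how trade-offs like cost versus innovation will be valued before they design.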
Today, I'm delighted to announce these finalists. Next week, each company will have two hours to present their solution and answer questions from our review board. In the meantime, we have also provided them with direct feedback on their proposals and how they scored. We expect each to fine-tune their proposal based on this feedback. It should prove to be a very interesting competition.
The Modularity Battle Continues
The eBay Modular Data Center RFP finalists are listed in alphabetical order.
1. Primary: DPR (http://www.dpr.com)
2. Primary: EDI (http://www.ediltd.com)
3. Primary: Kling Stubbins (http://klingstubbins.com)
4. Primary: McKinstry (http://mckinstry.com)
5. Primary: RTKL (http://rtkl.com)
6. Primary: Skanska (http://skanska.com)
Partner: Cosentini (http://cosentini.com)
I'm also sharing the additional companies that participated in the first round. 13 of them agreed to share their information, 18 requested to remain anonymous. But, each of these companies also received feedback on their proposals so they can adjust accordingly for future RFPs. We saw some very creative solutions and I believe they deserve direct feedback rather than a "sorry you were not selected, thank you for participating" answer.
The participating companies are listed below:
1. Advanced Design Consultants, Inc. (http://www.adcengineers.com)
2. Affiliated Engineers, Inc. (http://www.aeieng.com)
3. BKM Mission Critical Facilities (http://www.bkm-mcf.com)
4. BRUNS-PAK (http://www.bruns-pak.com)
5. DELL (http://dell.com)
6. Elliptical Mobile Solutions (http://www.ellipticalmedia.com)
7. Gensler (http://www.gensler.com)
8. Hanson Professional Services Inc. (http://www.hanson-inc.com)
9. Hypertect, inc. (http://www.hypertect.com)
10. M+W U.S., Inc. – A Company of the M+W Group (http://www.usa.mwgroup.net)
11. NOVA Corp (http://nova-corp.com)
12. Reliable Resources, Inc. (http://www.relres.com)
13. Technology Management, Inc (http://tmiamerica.com)
As I said in my previous blog entry, I truly believe that if we openly share our challenge/problem/need with the industry, we will find incredible solutions. There's too much brainpower out there and we don't kid ourselves that we have all the answers. Along those lines, we also help our owner/operator community by pushing for innovation and sharing. While we will ultimately pick one winner to build our next eBay data center in Phoenix, we are looking for ways to give these innovative companies an opportunity to share their approaches with other DCP members. This blog entry is one of them. I also believe there is no silver bullet - i.e. a single answer for data center design. There are too many variables to consider. We will standardize on many things (containment, economizers, etc), but there will always be different approaches and that is healthy.
We created this video to announce the RFP finalists and give a bit more insight into the process we went through. It also has some great footage of the latest construction of the data center in Phoenix. Stay tuned for more updates on the datacenterpulse youtube channel.
I plan to continue sharing the details of this project with the industry as it happens. Stay tuned for more on my blog and on the Modular RFP Status Page: http://datacenterpulse.org/rfp/modular
Subject: Free online Data Center Summit – case studies & best practices
Join me, Graeme Hay and Jan Wiersma of Data Center Pulse as we present a roundtable at BrightTALK’s free Next Generation Data Center Summit on July 28. The presentation will be part of an all-day event in which experts discuss innovative tips and best practices through a series of interactive webcasts you can view from the convenience of your computer.
“The State of the Data Center: What's Next?”
Mark Thiele, Jan Wiersma & Graeme Hay, Data Center Pulse
You will be able to view any or all webcasts live or afterward on demand. If you tune in live, you can submit real-time questions to presenters and take part in presenter-led polls. I hope you will be able to join us.
Today my team released an RFP for a small but dense eBay data center design project in Phoenix, Arizona. I am really excited about this RFP for two reasons. First, this is an ambitious project that will challenge not only the engineering and design firms but the hardware manufacturers as well.
Nine months ago, I was in sunny Phoenix at the 7x24 conference. A group of about 40 DCP end users gathered on a Sunday morning for a five-hour brainstorming session on next-generation solutions. We discussed the Top 10 list, the stack, the current Chill Off 2 testing, modular data center design concepts, and what the future could and should look like. At the end of the session we had our updated top 10 list and a direction for the Chill Off 3, which would be broken into two parts. The first part is a highly flexible, modular, high-temp data center design that would allow solutions from today and tomorrow to snap in like Legos, with infrastructure that scales as needed rather than being built out entirely from day one. The second is the compute load itself and how we could maximize performance not only in the power and cooling systems but in the actual work being performed; we would look at the entire system's performance rather than the components by themselves. More on the compute load part in a future blog posting.
We presented our concepts the following day to the general audience at the 7x24 conference. That sparked 9 months of work to organize. Originally, we were going to have different engineering and design firms submit competing designs for the physical infrastructure to support the next-generation equipment tested in the Chill Off 3. But we realized that this would be difficult to coordinate and build, and we needed to have more meat in it for the participants. So, we applied the thinking to an actual eBay project that we were spinning up: the winning design would build the Phoenix data center. In early June we broke ground on a new "warehouse" data center building that would allow this to happen. It is a two-story building fortified structurally to handle very dense loads, and it is just a shell building ready for the DC design. We plan to have a minimum of 4MW of IT load in 8,000 SF. We also expect to achieve 100% free cooling year round in Phoenix. Yes, I said year round. We also expect to achieve multi-tier deployments in the center. We want to be able to quickly and easily add modules to the DC to achieve street-power only, N, N+1, 2N, and N(2)+1. We plan to match the tier level with the application workload.
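To put the 4MW-in-8,000-SF target in perspective, the implied power density is easy to work out. The per-rack figure below is my own illustrative assumption, not a number from the RFP:

```python
# Back-of-the-envelope density math for the targets stated above:
# a minimum of 4 MW of IT load in 8,000 square feet of floor space.

it_load_watts = 4_000_000   # 4 MW minimum IT load
floor_area_sqft = 8_000     # 8,000 SF shell

watts_per_sqft = it_load_watts / floor_area_sqft
print(f"Power density: {watts_per_sqft:.0f} W/SF")

# Assuming (illustratively) an average of 20 kW per rack, that load
# implies on the order of:
assumed_watts_per_rack = 20_000
racks = it_load_watts / assumed_watts_per_rack
print(f"Roughly {racks:.0f} racks at {assumed_watts_per_rack // 1000} kW each")
```

At 500 W/SF this is an order of magnitude beyond the low-density raised-floor rooms of a decade ago, which is why the building had to be structurally fortified for dense loads.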
Another unique approach to this project is that we are opening it up to a wider audience - a much wider audience - The entire Data Center industry. It touches everything physical in the data center including the IT load. So, along with the 8 firms that we have already added to our list, we are allowing anyone who meets the qualification criteria to submit a proposal.
I won't go into the detail in this blog posting, but I will fill you in on two important pieces of information.
1. The RFP timeline:
- The RFP opens July 7, 2010.
- The RFP responses are due August 20, 2010.
- The RFP closes on August 27, 2010.
- Construction start is planned for January 1, 2011
- Construction completion is planned for July 1, 2011
- Application - 2 weeks
- RFI responses - 4 weeks
- Selecting finalists - 1 week (internal)
- Presentations - 1 week
- eBay decision - 2-4 weeks after (internal)
2. The RFP details will be tracked through LinkedIn in the DCP: INDUSTRY group.
- The details of the RFP, qualification criteria and RFI responses will be tracked in the DCP:INDUSTRY discussion thread.
- Applicants must apply through this webform.
- Join DCP: INDUSTRY here: http://datacenterpulse.org/JoinUs (must be member to see thread details)
We look forward to seeing the innovation come alive in this project! For more information, please email firstname.lastname@example.org
Today we added Eddie Schutter from AT&T to the Data Center Pulse board of directors. You can see the press release here. But for this blog entry, I wanted to give a bit more information and perspective on Eddie and our expansion into local chapters.
I have worked closely with Eddie on emerging technology opportunities and industry efforts for almost three years. The most recent was with the Green Grid where we both sit on the Advisory Council with a great group of End Users. In February, 2010 Eddie and I co-presented at the Green Grid Technical Conference in San Jose, California. The presentation was focused on the GG birds of a feather session and the DCP top 10. This was a result of the collaboration agreement between DCP and the Green Grid established in early 2010.
In April 2010, the DCP Board decided that it was time for us to harness the interest and capabilities of local end users by establishing DCP chapter groups. We started informal chapters in Utah and Arizona because of the activity and interest in those two states. The first meetings were great, but we realized very quickly that we needed someone to focus on developing this program from the ground up. The Utah end users we knew of represented over 200MW of current and future consumption in the state. The Arizona end users represented over 50MW, and that was just scratching the surface. Eddie came to mind immediately to help lead this effort. As mentioned in the press release, Eddie has been active in many different industry groups, including the Green Grid, the Uptime Institute and the 7x24 Exchange, for which he helped start the Lone Star Chapter in Austin, Texas. Eddie's day job is Sr. Technical Director of Data Center Architecture and Planning for AT&T. Over the years we have shared best practices, debated technological approaches and generally brainstormed on what is possible. One of the most memorable occasions was the Data Center Pulse working session at the 7x24 Exchange in Phoenix, Arizona in November of 2009. Eddie, along with the majority of the DCP board, Olivier Sanche and other end user leaders, met following the session and decided on a strategy to push for next-generation cooling/compute solutions. Stay tuned for blog updates on the developments that were spawned from that great session.
We have great expectations for Eddie and the development of the local chapters. We believe this is a very productive and effective way to have end users participate in work that will directly benefit both their companies and the DCP efforts. The majority of the DCP work over the last few years has been done over LinkedIn and through conference partnerships. This approach will allow a much larger group of individuals to have direct input in helping us shape our efforts to influence the industry. The voice of the customer will get bigger but also smaller - more focused on their own backyard.
With that, I wanted to address a few concerns that people have raised over the last few weeks.
First, we are not competing with any of the industry groups. Quite the contrary: we plan to partner with other industry groups to optimize people's time and increase end user participation. We plan to approach this very much like the conference partnerships we have established. There will be DCP end-user-only meetings to allow for open dialog and to synthesize content and requests before we meet with the industry. There will then be additional sessions with the industry to engage and share the learning and discussion/debate from those sessions.
Secondly, we will be establishing additional chapters, but we want to make sure that we do this in a controlled manner. Eddie will be establishing the bylaws and governance to ensure that we can scale and optimize the local work of our members.
Lastly, this is not just a North American effort. We plan to establish local chapters globally. For example, the activity in China, India, Europe and other parts of Asia (to name a few) is amazing. There are tens of billions of dollars being spent globally in new and retrofit data centers.
Eddie has a full plate, but we have every confidence that he will be able to deliver a great program. If you are interested in helping or have ideas please email email@example.com or contact Eddie directly at firstname.lastname@example.org.
Technical Advisory Board
The Technical Advisory Board (‘TAB’) of Data Center Pulse is intended to advise the Data Center Pulse board and its peer advisory boards (regulatory for example) on the technical agenda of Data Center Pulse, reflecting the technical challenges, pressures and evolving needs of data center owners and operators. This involves the evaluation of technologies and proposals to the Data Center Pulse board of what it should pursue and potentially fund, using the published and evolving 'Data Center Stack'.
The TAB should be made up of Data Center Pulse members who can adequately represent the different facets of our data center owners and operators.
As such, we feel it is important to have different perspectives on the board. These perspectives would come from an industry focus, a technology subject matter expertise (SME) focus and a regional focus.
We feel that the maximum number of board positions should be 10, reflecting an opportunity to populate it with a manageable, flexible yet diverse set of talent that would represent data center end users and operators. TAB members must be willing and able to represent global data center issues and technology solutions through their experience, taking a 'best of breed' approach unrelated to any specific vendor/supplier agenda.
We are looking to fill the following Technical Advisory Board positions with individuals with depth of expertise in the noted areas:
- 1. Industry Representative: ‘Red shift’ (companies at or exceeding the pace of Moore's law for raw computing power)
- 2. Industry Representative: ‘Blue shift’ (traditional enterprise companies growing steadily with GDP)
- 3. Technology SME: Compute, including OS/virtualization
- 4. Technology SME: Network and structured cabling (including consideration of logical networking that impacts DC design, e.g. IPv6, L2MP, Energy Efficient Ethernet)
- 5. Technology SME: Storage
- 6. Technology SME: Data center power and cooling (mechanical, electrical)
- 7. Regional Technical Input: Europe
- 8. Regional Technical Input: Asia-Pacific
All positions would require input both from a product and capability perspective as well as the management of these products and capabilities in the data center.
While the positions reflect a certain background and experience area, we are looking for people to contribute to the overall TAB agenda and not just in their specific area.
Prospective TAB members should look to match the following criteria:
- Be an active end user/operator member of Data Center Pulse.
- Demonstrate experience and interest relevant to the position they are applying for (above).
- Have a minimum of 3 years of experience in a ‘day job’ position relevant to the TAB position they are applying for.
- Have a minimum of 5 years' experience working for an end user or operator(s) of data centers, in a position responsible for data centers or for services provided by data centers.
- Be able to meet the Advisory Board Expectations (below).
- Look to bring a set of realistic challenges to the TAB that need addressing both within your sphere of direct expertise and also as an observer and offeror of data center services.
- Be proficient and comfortable in written and verbal technical and non-technical communication (we expect DCP to communicate a lot by different mediums from blogging to video to conference speaking).
The decision regarding selection between multiple candidates who meet the above criteria rests ultimately with the DCP TAB chair and co-chair. We recognise, though, that this is not a formal employment process: if multiple people are keen to contribute to better the industry, we will seek to utilize their skills through participation in DCP projects.
Advisory Board Expectations:
We expect members of the TAB to attend a one-hour TAB board meeting every two weeks. In addition, we anticipate that the TAB may ask for a further two hours per week of offline reading, review and document preparation in support of TAB topics.
The meetings are held online and use online collaborative tools. The duration of tenure on the board will be one year. The Chair and Co-Chair positions have a tenure of a minimum of one year and a maximum of two years.
We would hope that this burden is not onerous, as a) you will be interested in and dedicated to the subject matter, and b) it should be in line with the topics you are working on in your ‘day job’ anyway. That said, your effort will be appreciated.
In addition, Data Center Pulse board members have asked for volunteers to work on webcasts and blogs, and to attend conferences to staff DCP booths. Their employers should be prepared to grant them dispensation to perform these tasks in the name of Data Center Pulse (and not their own company name).
How To Tell Us That You Are Interested
Send me an email indicating your interest at email@example.com.
Please include in your email the following:
- your name
- an email address that I can reach you at
- your current employer and the title of your current employment position (e.g. VP of Data Center Architecture in my case)
- your current location (country)
- which of the 8 TAB positions you are interested in (you can indicate more than one) - you can either use the number 1..8 next to each position or the position name
- a brief background on yourself, your career and your technical background that demonstrates your suitability for the TAB position against the recruitment criteria above.
Cloud computing and data centers are killing our planet. Drive to the hardware store, buy a hammer, a chisel, and a piece of stone, and begin writing about it. Put away your computer, turn off your internet connection, and unplug the video game console, iPod, and TV. We must do everything we can to get rid of these giant, energy-sucking, pollution-generating, planet-killing warehouses of death immediately.
All kidding aside, it is true that many of our older data centers are in serious need of power-efficiency improvements. However, it's also true that data centers host much of our work and play. If that work and play were distributed in small chunks throughout businesses and households instead of concentrated in data centers, it would be considerably more wasteful of our planet's resources.
In my 20-plus-year career in IT I've always been proud of my ability to bring efficiency to IT and to the business. When Data Center Pulse was founded, the driving motivation was to push for the development of power-sipping IT equipment designs combined with more efficient data centers; in parallel, we're actively working to persuade owners to implement those new solutions more quickly. We strongly believed that the IT/data center industry needed to focus more attention on the effective use of energy, and the DCP leadership team was made up of like-minded individuals, each with a work history of reducing energy consumption. So why would I write an article arguing that data centers are getting a bad rap? It seems like I should be agreeing with those critical articles, and like I'm contradicting myself. Well, that couldn't be farther from the truth.
There is no doubt that there are good ways and bad ways to build data centers, from site selection to airflow dynamics and, especially, how the IT equipment is put to use. Yet the effort to build efficient space for the ever-growing need for compute capacity doesn't seem to be enough, at least not if you read some of the green-oriented blogs that hound data center builders, cloud providers, and perceived "greenwashing" in the technology industry.
What I always find missing in these stories criticizing large data centers and cloud computing is a comparison of the alternatives and a fair assessment of the market and political drivers affecting data center build decisions. I'm very much in favor of conservation and doing the best we can for the environment, but we also need a way for society to keep moving forward, and in a capitalist system, profit is what maintains that momentum. So consider this first: if you put free Twinkies in front of someone with limited funds but charge them $20 for a similar amount of broccoli, which one will that person generally pick? In other words, why would a company pay 15 cents a kilowatt-hour for clean energy when federal and state governments are using taxpayer funds to practically give away fossil-fuel-generated energy? Our federal government continues to subsidize the coal and fossil fuel industries to the tune of billions of dollars every year. And unless I'm mistaken, these energy companies seem to be enjoying fairly strong profit margins.
The following are some examples borrowed from sourcewatch.org:
"Examples of U.S. Treasury Department Funding
Examples of new or proposed coal-fired power plants that are funded in part by tax-exempt debt include the following:
- The Prairie State Energy Campus Project in Illinois is a mine-mouth 1600 MW supercritical steam turbine power plant without carbon capture technology. The more than $4 billion plant has several participating partners, with one partner, the Northern Illinois Municipal Power Agency (NIMPA), buying 120 MW of the 800 MW plant with $303 million of its $318 million investment portion financed with tax-exempt debt.
- The Longleaf Energy Station in Georgia is a proposed 1200 MW pulverized coal fired power plant supported by the Early County (Georgia) Development Authority with federally backed local development bonds.
- The Two Elk coal plant in Wyoming is a proposed coal plant that purports to use so-called "waste coal" and has received hundreds of millions of dollars in tax-exempt debt authority since it was classified as a solid waste recycling facility. Approval for the tax-exempt financing is currently being audited by the Internal Revenue Service.
A new program under the American Recovery and Reinvestment Act, Build America Bonds (BABs), expands the U.S. Treasury's use of financing tools to subsidize coal-fired power plants. Under the program, issuers of the taxable bonds are provided a 35% direct-pay interest subsidy to reduce the costs of borrowing. Power companies are eligible for these federally subsidized taxable bonds under BABs: American Municipal Power Ohio used the tax-exempt bond market to finance the construction of the Prairie State Energy Campus in Illinois and, after the 2009 financial crisis began, issued through the BABs program nearly $500 million of federally subsidized taxable bonds to finance the last phases of construction. The bonds have also been used for scrubbers at existing plants."
It's not all bad news though. There are many states with aggressive clean energy plans in place with subsidies for developing renewable energy supplies. Also, the federal government is working with the G20 to eliminate government subsidies for fossil fuel development. President Obama has already removed the subsidies from this year's Federal budget.
Another positive trend is the number of US states that are pushing energy providers to supply a significant portion of their power through renewable sources over the next 10-15 years.
Using the information in figure 1 below, try to find an activity, other than reading a (physical) book or sleeping, that uses less energy than 10 Google searches. This should be a wake-up call: concentrating compute resources and utilizing them effectively is far more efficient than everyone doing the same work on independently owned compute, network, and storage resources, or in their cars.
We're not Providing Real Incentives for the use of Renewable Energy to Companies Building Data Centers
How can we expect companies, which only survive by staying competitive, to ignore the opportunity of something cheaper, with more incentives to boot? The fact is the state makes the site more cost-effective at taxpayer expense, while also helping subsidize the energy provider, and then takes more tax money to clean up the resulting mess. If we're going to go after someone for perpetuating this bad behavior, we should be going after the federal and state governments. If we as a nation can provide reasonable incentives for corporations to make the right choices, we drive up the use of alternative energy and drive down the incentive to do otherwise. Technology and data centers can be implemented poorly, but our planet stands little chance of surviving the forward march of the human race without them.
The Complexity of Data Center Energy Sourcing
Data center owners face a myriad of choices in selecting the "perfect" site for their next data center. When sourcing energy, the owner will face both business and government politics that often incentivize poor choices. Combine unfocused sustainability efforts with missing or backwards government incentives, and the selection of clean or renewable energy lands at the bottom of the priority list.
It's no wonder companies like Digital Realty Trust are doing so well these days. The complexity and effort involved in selecting your data center's site are extremely high. There are a number of factors (geography, network, water, staff, etc.) to consider, most of which can be handled by an experienced facilities or IT person with data center knowledge. However, as you can see from figure 2, the layers of concern on the question of energy alone can be a back-breaker.
Oftentimes the facilities or IT person making data center decisions is not a political or energy-sector expert, is lucky to see beyond their immediate layer of concern, and rarely gets past the corporate layer. Who, then, is accountable for corporate energy decisions? Unfortunately, the answer all too often is "no one"; the multiple layers keep the "right" decision out of reach despite the best intentions.
Using Lots of Energy Makes the Data Center a Target of Opportunity and of Protest
Yes, it's bad to have to use coal to power a large data center, but isn't it worse that the taxpayer is actually supporting the effort? I'm not suggesting that we should reduce our focus on data centers either, as evidenced by this Data Center Pulse proposal to the federal government. I am suggesting that data centers, especially modern efficient ones, aren't the real problem; in the majority of cases they create huge environmental benefits and energy savings versus the alternative of everyone running their own IT equipment. We can't forget that all of us use technology every day. If everyone were to run their own equipment at home, imagine the impact on power draw: billions of systems, all running at or below 5% average utilization, with even less control over where the power for all those home-based servers, network, and storage devices comes from. The energy used to build all this extra IT gear, along with the waste generated, would quickly outstrip the world's ability to support it.
I think our next target should be those Planet-Killing Buses!
Maybe we should yell at bus manufacturers for making vehicles that use so much energy? After all, the average bus carries 10X the number of passengers a car does but uses at least 5X the energy. I guess I should go back to driving. It turns out it's not data centers that are going to kill the planet; it's buses!
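For what it's worth, the arithmetic behind the joke is the same consolidation logic that favors data centers. A quick sketch using the post's own rough, illustrative figures (10X the passengers, 5X the energy):

```python
# Per-passenger energy comparison, using the blog's illustrative numbers:
# a bus carries 10x the passengers of a car but uses about 5x the energy.
car_passengers = 1
car_energy = 1.0  # normalized energy units per trip

bus_passengers = 10 * car_passengers
bus_energy = 5 * car_energy

car_energy_per_passenger = car_energy / car_passengers   # 1.0
bus_energy_per_passenger = bus_energy / bus_passengers   # 0.5

# The bus delivers the same trip at half the energy per passenger --
# just as a shared data center beats millions of under-utilized home servers.
print(bus_energy_per_passenger / car_energy_per_passenger)  # 0.5
```

In other words, the "energy-hogging" bus is actually twice as efficient per passenger, which is exactly the point being made about data centers.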
My next blog might take a while, because I'm going to have to ride my bike to the hardware store and buy that hammer, chisel, and stone. I need to hurry, because I'm worried there will be a rush on iStones.
A big THANKS to Jeremy Rodriguez for his contributions to this article!
“Why Haven’t All of You Adopted Amazon’s Cloud?” seems like the question Werner Vogels keeps asking (http://bit.ly/9cw6RG). Over the last year he’s made it clear several times that there is no such thing as a private cloud (I still disagree to some extent), and also that we should all be adopting Amazon’s service.
I like Werner and I like Amazon, but I do think it’s a little arrogant to assume that simply declaring Amazon’s cloud the “one true cloud option” (Hallelujah) makes it so. Amazon may well have the best option available today for some workloads, but the truth is that no single cloud solves everyone’s problems.
The average CIO is still not convinced that today’s cloud providers care as much about their applications as they do. There’s also the question of security. While I generally agree that most hosted cloud solutions are safer than the typical enterprise environment, it takes time to prove that to everyone’s unique comfort level.
There’s also the small matter that moving an application is often very costly and potentially disruptive. In most cases, CIOs will look for a natural evolutionary event to justify moving their key apps into the cloud rather than forcing the issue just because they can. The cost of maintaining an inefficient architecture is usually just a fraction of the cost of the potential business interruption and/or the migration work itself.
Time will tell whether Amazon can solve everyone’s problems, but yelling at us about it won’t make it happen any sooner. In the meantime, enterprises will push hard to gain as much benefit as they can from a combination of cloud solutions, including internal private cloud.