Data Center Pulse Blogs
As part of our partnership with The Green Grid, Data Center Pulse is assisting The Green Grid's Thermal Sub-work group by asking our DCP members to participate in the following survey regarding Economizers:
Please click on the link or paste it into your browser and participate. The survey is open until February 5, 2011.
Project Mercury is born...
Today, we are pleased to announce that EDI, along with its partners AHA Consulting Engineers and Winterstreet Architects, has been selected as the winner of the Modular Data Center RFP - now dubbed Project Mercury.
This has been an extremely interesting process for us with an unexpected result. EDI, a small company that we had never even heard of before, was able to meet all of the challenging requirements we had proposed to the industry through the Modular RFP process with a cost-effective, simple design. In addition, a very compelling ultra-dense product named "eHive" emerged from Skanska, one of the RFP finalists. It has not been released publicly yet (stay tuned for follow up). While Skanska was not selected for the RFP, their modular product was innovative enough to warrant further consideration in this data center deployment. All in all, the open RFP process did exactly what we had hoped. It gave design engineers the opportunity to shed the traditional barriers, consider the difficult challenges and start with a clean slate. The outcome was new and compelling solutions, as well as innovative new products, driven by the free cooling, density and flexibility requirements.
In the video below we go into more details about the process, the team and the challenge that still lies ahead. We made the decision to engage with EDI and explore the Skanska product in mid November. From that point the team has been working non-stop to make up the time we lost during the RFP process. While the selection process took almost two months longer than anticipated, the end date did not change. We are still laser focused on completing the Data Center by summer 2011.
In the design sessions that started in mid-November there have been lively discussions and debates in which phrases such as "Hot Water Cooling", "Extreme Density", "Rapid Deployment", "Multi-Tier", and "Rack & Roll" were common. On day one, I sat down with the team and tasked them to solve the challenge holistically, not just from a Data Center availability perspective. I detailed the IT equipment that will be going into the Data Center and how it must be fully integrated with the facility. It is all about density, rapid deployment and sustained efficiency even under varying workloads. This balance is where extreme efficiency can emerge. Data Center facilities and IT equipment are not mutually exclusive; they should work in harmony.
The argument on whether there is such a thing as "private cloud" just won't go away. Many of the big name SaaS and public cloud players continue to publish content that pooh-poohs the reality of private cloud. I'm writing to suggest that the arguments against private cloud are in many cases wrong, and in some cases pure sales FUD (Fear, Uncertainty & Doubt). However, before I go into my little diatribe, I want to make clear that just because I'm a private cloud believer, it doesn't mean I don't believe in "public cloud".
Some of the more common arguments on the viability of private cloud revolve around some common themes that IT has been sold by vendors for years:
- "make better use of your internal staff by having them work on things that drive business differentiation"
- "private cloud doesn't have infinite scale"
- "the security of public cloud/SaaS providers is better than anything a private cloud could create"
- "staffing for running private cloud environments will be difficult" (I thought they said there wasn't a "private cloud")
- "public cloud offers economies of scale that can't be met by a single enterprise"
So, let's take each one of the points above and attempt to break through the FUD.
"make better use of your internal staff by having them work on things that drive business differentiation"
Who can argue with that statement? I certainly can't, because I believe we all should be trying to do that. However, as IT leaders we've been told that by our vendor partners for decades, and yet IT as we know it still exists. Why does IT still exist? Because inherently most companies understand the "innovation & creativity" benefit of having their own IT team. While I hope for the day when most IT jobs are higher-level business interaction positions, I also understand that that day is many years away, cloud or no cloud.
"private cloud doesn't have infinite scale"
True, but SO WHAT? How often does your business need infinite scale? I've done cost comparisons of well designed private cloud implementations for large enterprise and found them to be very competitive with public cloud. In fact, they were so competitive that having a little extra capacity and having a contract for "burst" capacity was OK. If I knew that I needed 2X or more extra capacity even two or three days a month, then I would consider my options for putting that specific environment in a public cloud. The alternative is to have contracted burst capacity available.
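To make the "base capacity plus contracted burst" reasoning concrete, here's a minimal cost sketch. All rates, server counts and the 30-day month below are invented for illustration; real comparisons would account for staffing, power, depreciation and contract terms.

```python
# Hypothetical sketch: private cloud sized for baseline load plus contracted
# public-cloud burst, vs. running everything in public cloud on demand.
# Every rate and count here is invented purely for illustration.

def monthly_cost_private_plus_burst(base_servers, burst_servers, burst_days,
                                    private_cost_per_server_month=250.0,
                                    burst_cost_per_server_day=15.0):
    """Private capacity is paid for all month; burst capacity only on peak days."""
    return (base_servers * private_cost_per_server_month
            + burst_servers * burst_days * burst_cost_per_server_day)

def monthly_cost_public_only(base_servers, burst_servers, burst_days,
                             public_cost_per_server_day=12.0):
    """Everything rented on demand, every day of an assumed 30-day month."""
    return (base_servers * 30 * public_cost_per_server_day
            + burst_servers * burst_days * public_cost_per_server_day)

if __name__ == "__main__":
    # 200 baseline servers, doubling (2X) for 3 peak days a month
    private = monthly_cost_private_plus_burst(200, 200, 3)
    public = monthly_cost_public_only(200, 200, 3)
    print(f"private + burst: ${private:,.0f}/month")
    print(f"public only:     ${public:,.0f}/month")
```

The point of the sketch is the shape of the math, not the numbers: when burst days are few, paying on-demand rates only for the peak beats paying them for the whole month.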
"the security of public cloud/SaaS providers is better than anything a private cloud could create"
Generally speaking this statement might be correct. There are certain industry verticals that could argue differently (Financials), but it's probably true for many. However, does that really change the equation? The difference between internal and external security might be minimal compared to an unforeseen legal issue that results from public cloud dependence. In each case the business needs to make a conscious decision on the importance of its IP. If security isn't important to the business when it's internal, then it still won't be important in the cloud. Now consider the fact that the worst security threats come from the inside and you can understand why just moving it to the cloud doesn't solve anything. In fact it's very much like outsourcing something that's broken. In most cases all you've really done is make the problem more intractable and created a no-win situation with your vendor.
"staffing for running private cloud environments will be difficult"
So my first comment is: if there's no such thing as private cloud, how difficult could it be to staff for it? All kidding aside, IT organizations the world over have been building and managing complex infrastructure for years. Arguably, a well-designed private cloud implementation actually improves usability and simplifies the roles of many IT folks because it can bring automation to otherwise high-risk manual tasks.
"public cloud offers economies of scale that can't be met by a single enterprise"
Next to the "infinite scale" point that's always used by public cloud providers, this is the most common refrain. Unfortunately, it's not necessarily the case. As I noted above, you can build private cloud environments that offer significant scale and provide all the necessary environment, security and staffing requirements at or below the cost of public cloud offerings. I'm not saying it's easy, and in each business use case the drivers and costs will be different, but it can be done. I'm not arguing in favor of private cloud for cost savings; for me it's more about options and the potential to bring new innovation to the business. If for whatever reason I can't utilize public cloud for any or all of my workloads, at least I can get the majority of cloud benefits by having a private cloud.
So, just say yes to private cloud if it's what the business needs, but like any IT solution don't implement it without a clear set of objectives. Do your due diligence and ensure you're bringing the right tool to generate the right business opportunities, whether that tool is a private, public, hybrid cloud, or none of the above.
In closing, I'm not trying to argue against public cloud, but rather to argue that the reality and benefits of both private and public cloud are real.
Long Live Cloud!
I love meeting people who have as much passion and drive for technology as I do. Wade Vinson, a Power & Cooling Strategist at HP, is one of those guys. He is better known as the POD father (and yes, he gets a lot of guff for that name!). He ranks pretty high on my geekism scale due to his utter exuberance for his lego-set data center. I recently visited the HP site in Houston, Texas to look at the latest HP POD. Wade was nice enough to let me film a tour with him for our latest Data Center Pulse "On-The-Road" episode. He also tried very hard to convince me he could join DCP Core since he is an "End User" after building his 6MW POD test facility...hmmm...
The jury is still out on standard "containers", but Wade has evolved the approach and come up with a new building block. It's no longer a shipping container, it's a POD (sounds familiar). The POD is wider and taller than a container with everything you need packed inside. Ahh, glorious density! That is music to my ears. Now if it only had a high-temp, liquid-cooled IT equipment option with over-clocking (that sounds like a familiar request as well). :-)
Episode 3 of DCP "On The Road" is embedded below. Stay tuned for more episodes on the DCP YouTube Channel. http://www.youtube.com/user/datacenterpulse
Why is an individual computer server different from a data center, other than scale? After all, the physical characteristics of a server are very similar to those of a data center.
Let's Compare the Characteristics of a Server to those of a Data Center
Server Housing (case) = Data Center Building
Case Open Alarm = Data Center Entrance Security & Environment Management Alarms
Fans = Air Conditioning
Power Supplies = Power Supply (transformers, UPS, distribution, PDUs)
Alerts/Alarming = Data Center Monitoring and Alerting
Administrator = Data Center Staff/Manager
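The analogy above can also be expressed as a simple lookup table. This is just the list from this post restated in code; the labels are the author's, not any formal taxonomy.

```python
# The server-to-data-center analogy from the list above, as a simple mapping.
# Keys and values are the labels used in this post, not a formal taxonomy.
SERVER_TO_DATA_CENTER = {
    "housing (case)": "data center building",
    "case open alarm": "entrance security & environment management alarms",
    "fans": "air conditioning",
    "power supplies": "power supply chain (transformers, UPS, distribution, PDUs)",
    "alerts/alarming": "data center monitoring and alerting",
    "administrator": "data center staff/manager",
}

def data_center_analog(server_component: str) -> str:
    """Return the data-center-scale equivalent of a server component."""
    return SERVER_TO_DATA_CENTER.get(server_component.lower(), "no analog listed")
```

For example, `data_center_analog("Fans")` returns the data-center-scale counterpart, "air conditioning".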
Why is the above comparison of a server and a data center important?
"We need to move away from building custom data centers". There I said it, man that hurt. As a data center guy, I can't stand the idea that there will soon be a time when building a unique facility for my company will most likely be the wrong thing to do. Before you start throwing things at your computer screen and yelling my name in anger, consider the following historical examples of the automobile and the personal computer:
The automobile - Prior to the invention of the assembly line by Ransom Olds, cars were handmade and individually assembled.
A Little Assembly Line History:
In order to keep up with the increasing demand for those newfangled contraptions, horseless carriages, Ransom E. Olds created the assembly line in 1901. The new approach to putting together automobiles enabled him to more than quadruple his factory's output, from 425 cars in 1901 to 2,500 in 1902.
Olds should have become known as "the father of the automotive assembly line," although many people think that it was Henry Ford who invented the assembly line. What Ford did do was improve upon Olds's idea by installing conveyor belts. That cut the time of manufacturing a Model T from a day and a half to a mere ninety minutes. Henry Ford should have been called "the father of automotive mass production."
One could also argue that since the advent of the assembly line and mass production, there continue to be innovations that reduce the complexity and "individual" nature of many automotive components. Most automobile manufacturers don't build their own windshields or individual parts anymore. Instead they are mass produced by others who can apply standards across multiple car lines. You can still buy a handmade car, but you definitely pay the price.
Personal Computers (PCs) - Prior to 1982 many of us bought components and made our own PCs. Who would build their own server or PC today? Maybe high end gamers or organizations looking to solve a very unique problem whose solution is more important than the cost. Generally speaking, building your own computer today would be much riskier, and costlier than buying one pre-built.
Increased attention and new money will accelerate the push to a standard build data center model.
Historically speaking most of us in the data center space understand that there's still room for unique design as we push the limits towards the best, most efficient and flexible data center. However, the data center has gained so much visibility over the last five years that it is no longer the black box that it used to be. This public awareness has dramatically increased investment dollars in the data center industry and created a large number of data center experts. The investment and influx of experts mean that change and improvements happen at a much faster pace than in the past.
If manufacturing automation and standards can be applied to cars and PCs why shouldn't they be applied to Data Centers?
So, I guess what I'm saying is that the data center is ripe for the picking, just like the manufacture of PCs was in the early 80s. As we get to the point of diminishing returns relative to the efficiency and modularity of data centers, companies that continue to try to build their own will be at a distinct disadvantage. We all know that time is one of the most costly of our resources. If you can implement new capacity in a matter of weeks vs. a matter of years, you're better positioning your enterprise to leverage its IT investments to meet changing business demands. You may believe that you could build a data center that's better than pre-fab modular boxes, and you could be right. However, just like in the case of the PC, you'll likely discover that your risk and months of lost hours don't justify the small expected improvement in efficiency.
Who is going to win in the modular data center space?
What am I, a psychic? I don't have a clue, but the writing is on the wall. First it was inflexible containers and infrastructure that wasn't ready. Now the containers have improved, pre-fab modular designs are available, and IT infrastructure is quickly catching up. The key driver toward a low-cost standard "capacity of compute" model for your DC space will be when the majority of our applications and infrastructure can be distributed and made portable through a combination of cloud technologies.
Hold on tight it's going to be a fun and probably rocky ride, but in the end businesses should win and that's what's important. Who would you name as the Ransom Olds or Henry Ford of the Data Center?
This link is to a related blog I wrote a little over a year ago:
The eBay modular Data Center RFP finalists are announced.
I'm stoked! Seven weeks ago, we released the Phoenix Modular Data Center RFP to the industry. The intent was to encourage companies to participate in the RFP, level the playing field and spark some real innovation. In the past we created a list of vendors that we believed were qualified to design and deliver a data center for us. We also told them how we wanted to build it. But this time, we just gave the parameters that we wanted to achieve. It was up to them to come back with the most creative, flexible and cost-effective design they could. We truly leveraged the talent and ingenuity of the vendors, engineers, architects and others in the data center community.
The response was even more than I had hoped for. We approved 37 company requests to participate in the RFP. That is almost 30 more than our original list. Regrettably, we had to turn away another 15 companies because they missed the window to participate. But not to worry; this is just the first of future projects they will be able to participate in.
The majority of the companies allowed us to share stats about them. They ranged in size from as few as 14 employees to over 320,000. Combined annual revenue was over $110B! They also had built almost 800 data centers and had 227 years' worth of experience. The field included 28 design firms and 9 manufacturers, along with 6 partner submissions. All of them had modular experience and offerings, and all of them committed to meeting their efficiency commitments in the design. I don't know about you, but with that much brain power and experience, this data center project has the potential to change the game.
Now, our requirements were not easy. Multi-tier, modular, vendor agnostic, scalable, multi-temp air and liquid to each location, rack to container mixtures, 100% free cooling in Arizona year round, future-proof, extreme density, and more. But people really stepped up. In the end we received 17 official submissions. Each was ranked based on how it scored against the scoresheet, which was published on the RFP status page early in the process. The companies knew what we needed, they knew how they were going to be scored, and they knew they were competing with other very creative people in their field. There's nothing like a competition to bring out innovation. Last week, my team spent five solid days diving into the details of these submissions. And we made a conscious choice that each of the proposals had to stand on its own merit. We read what they submitted, analyzed their design based on the drawings and narratives they provided and scored accordingly. What we achieved here was a very fair process that gave us six solid finalists.
Today, I'm delighted to announce these finalists. Next week, each company will have two hours to present their solution and answer questions from our review board. In the meantime, we have also provided them with direct feedback on their proposals and how they scored. We expect each to fine-tune their proposals from this feedback. It should prove to be a very interesting competition.
The Modularity Battle Continues
The eBay Modular Data Center RFP finalists are listed in alphabetical order.
1. Primary: DPR (http://www.dpr.com)
2. Primary: EDI (http://www.ediltd.com)
3. Primary: Kling Stubbins (http://klingstubbins.com)
4. Primary: McKinstry (http://mckinstry.com)
5. Primary: RTKL (http://rtkl.com)
6. Primary: Skanska (http://skanska.com)
Partner: Cosentini (http://cosentini.com)
I'm also sharing the additional companies that participated in the first round. 13 of them agreed to share their information, 18 requested to remain anonymous. But, each of these companies also received feedback on their proposals so they can adjust accordingly for future RFPs. We saw some very creative solutions and I believe they deserve direct feedback rather than a "sorry you were not selected, thank you for participating" answer.
The participating companies are listed below:
1. Advanced Design Consultants, Inc. (http://www.adcengineers.com)
2. Affiliated Engineers, Inc. (http://www.aeieng.com)
3. BKM Mission Critical Facilities (http://www.bkm-mcf.com)
4. BRUNS-PAK (http://www.bruns-pak.com)
5. DELL (http://dell.com)
6. Elliptical Mobile Solutions (http://www.ellipticalmedia.com)
7. Gensler (http://www.gensler.com)
8. Hanson Professional Services Inc. (http://www.hanson-inc.com)
9. Hypertect, inc. (http://www.hypertect.com)
10. M+W U.S., Inc. – A Company of the M+W Group (http://www.usa.mwgroup.net)
11. NOVA Corp (http://nova-corp.com)
12. Reliable Resources, Inc. (http://www.relres.com)
13. Technology Management, Inc (http://tmiamerica.com)
As I said in my previous blog entry, I truly believe that if we openly share our challenge/problem/need with the industry, we will find incredible solutions. There's too much brainpower out there and we don't kid ourselves that we have all the answers. Along those lines, we also help our owner/operator community by pushing for innovation and sharing. While we will ultimately pick one winner to build our next eBay data center in Phoenix, we are looking for ways to give these innovative companies an opportunity to share their approaches with other DCP members. This blog entry is one of them. I also believe there is no silver bullet - i.e. a single answer for data center design. There are too many variables to consider. We will standardize on many things (containment, economizers, etc), but there will always be different approaches and that is healthy.
We created this video to announce the RFP finalists and give a bit more insight into the process we went through. It also has some great footage of the latest construction of the data center in Phoenix. Stay tuned for more updates on the datacenterpulse youtube channel.
I plan to continue sharing the details of this project with the industry as it happens. Stay tuned for more on my blog and on the Modular RFP Status Page: http://datacenterpulse.org/rfp/modular
Subject: Free online Data Center Summit – case studies & best practices
Join me, Graeme Hay and Jan Wiersma of Data Center Pulse as we present a roundtable presentation at BrightTALK’s free Next Generation Data Center Summit on July 28. The presentation will be part of an all-day event in which experts discuss innovative tips and best practices through a series of interactive webcasts that you can view from the convenience of your computer.
“The State of the Data Center: What's Next?”
Mark Thiele, Jan Wiersma & Graeme Hay, Data Center Pulse
You will be able to view any or all webcasts live, or on-demand afterward. If you tune in live, you will be able to submit real-time questions to presenters and take part in presenter-led polls. I hope you will be able to join us.
Today my team released an RFP for a small but dense eBay data center design project in Phoenix, Arizona. I am really excited about this RFP for two reasons. First, this is an ambitious project that will challenge not only the engineering and design firms but the hardware manufacturers as well.
Nine months ago, I was in sunny Phoenix at the 7x24 conference. A group of about 40 DCP end users gathered on a Sunday morning for a 5-hour brainstorming session on next generation solutions. We discussed the Top 10 list, the stack, the current Chill Off 2 testing, modular data center design concepts and what the future could and should look like. At the end of the session we had our updated top 10 list and a direction decided for the Chill Off 3. The Chill Off 3 would be broken into two parts. The first part is a highly flexible, modular, high-temp data center design that would allow solutions from today and tomorrow to snap in like legos. The infrastructure would also scale as needed rather than building out everything from day one. The second was the compute load itself, and how we could maximize performance not only in the power and cooling systems but in the actual work being performed. We would look at the entire system performance rather than the components by themselves. More on the compute load part in a future blog posting.
We presented our concepts the following day to the general audience at the 7x24 conference. That sparked 9 months of work to organize. Originally, we were going to have different engineering and design firms submit different designs to compete on the physical infrastructure to support the next generation equipment tested in the Chill Off 3. But we realized that this would be difficult to coordinate and build, and we needed to have more meat in this for the participants. So, we applied the thinking to an actual eBay project that we were spinning up. The winning design would build the Phoenix data center. In early June we broke ground on a new "warehouse" data center building that would allow this to happen. It is a two-story building fortified structurally to handle very dense loads. It is just a shell building ready for the DC design. We plan to have a minimum of 4MW of IT load in 8,000 SF. We also expect to achieve 100% free cooling year round in Phoenix. Yes, I said year round. We also expect to achieve multi-tier deployments in the center. We want to be able to quickly and easily add modules to the DC to achieve street-power only, N, N+1, 2N, and N(2)+1. We plan to match the tier level with the application workload.
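The tier-matching idea above can be sketched as simple module arithmetic. This uses the common shorthand (N = just enough capacity, N+1 = one spare module, 2N = full duplication) and reads "N(2)+1" as 2N+1; both the interpretation and the module counts are my illustrative assumptions, not eBay's actual design.

```python
# Hypothetical sketch of matching redundancy tiers to workloads.
# Uses the common shorthand for tier levels; "N(2)+1" is read here
# as 2N+1. Module counts are illustrative only.

def modules_required(tier: str, n: int) -> int:
    """Power/cooling modules needed to support n base modules at a given tier."""
    tiers = {
        "street-power": n,  # utility power only, no added redundancy
        "N": n,             # exactly enough capacity, no spares
        "N+1": n + 1,       # one spare module
        "2N": 2 * n,        # everything fully duplicated
        "2N+1": 2 * n + 1,  # fully duplicated plus one spare
    }
    if tier not in tiers:
        raise ValueError(f"unknown tier: {tier}")
    return tiers[tier]

if __name__ == "__main__":
    # e.g. 4 base modules at each tier
    for tier in ("street-power", "N", "N+1", "2N", "2N+1"):
        print(f"{tier:>12}: {modules_required(tier, 4)} modules")
```

The payoff of matching tier to workload is visible in the arithmetic: a workload that tolerates interruption runs on n modules, while only the critical applications pay for 2n or 2n+1.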
Another unique approach to this project is that we are opening it up to a wider audience - a much wider audience - The entire Data Center industry. It touches everything physical in the data center including the IT load. So, along with the 8 firms that we have already added to our list, we are allowing anyone who meets the qualification criteria to submit a proposal.
I won't go into the details in this blog posting, but I will fill you in on two important pieces of information.
1. The RFP timeline:
- The RFP opens up July 7, 2010.
- The RFP responses are due August 20, 2010.
- The RFP closes on August 27, 2010.
- Construction start is planned for January 1, 2011
- Construction completion is planned for July 1, 2011
- Application - 2 weeks
- RFI responses - 4 weeks
- Selecting finalists - 1 week (internal)
- Presentations - 1 week
- eBay decision - 2-4 weeks after (internal)
2. The RFP details will be tracked through LinkedIn in the DCP: INDUSTRY group.
- The details of the RFP, qualification criteria and RFI responses will be tracked in the DCP:INDUSTRY discussion thread.
- Applicants must apply through this webform.
- Join DCP: INDUSTRY here: http://datacenterpulse.org/JoinUs (must be member to see thread details)
We look forward to seeing the innovation come alive in this project! For more information, please email firstname.lastname@example.org
Today we added Eddie Schutter from AT&T to the Data Center Pulse board of directors. You can see the press release here. But for this blog entry, I wanted to give a bit more information and perspective on Eddie and our expansion into local chapters.
I have worked closely with Eddie on emerging technology opportunities and industry efforts for almost three years. The most recent was with the Green Grid where we both sit on the Advisory Council with a great group of End Users. In February, 2010 Eddie and I co-presented at the Green Grid Technical Conference in San Jose, California. The presentation was focused on the GG birds of a feather session and the DCP top 10. This was a result of the collaboration agreement between DCP and the Green Grid established in early 2010.
In April 2010, the DCP Board decided that it was time for us to harness the interest and capabilities of local end users by establishing DCP chapter groups. We started informal chapters in Utah and Arizona because of the activities and interest in those two states. The first meetings were great, but we realized very quickly that we needed someone to focus on developing this program from the ground up. The Utah end users that we knew of represented over 200MW of current and future consumption in the state. The Arizona end users represented over 50MW, and that was just scratching the surface. Eddie came to mind immediately to help lead this effort. As mentioned in the press release, Eddie has been active in many different industry groups including the Green Grid, Uptime Institute and 7x24 Exchange, in which he helped start the Lone Star Chapter in Austin, Texas. Eddie's day job is Sr. Technical Director of Data Center Architecture and Planning for AT&T. Over the years we have shared best practices, debated technological approaches and generally brainstormed on what is possible. One of the most memorable was the Data Center Pulse working session at the 7x24 Exchange in Phoenix, Arizona in November of 2009. Eddie, along with the majority of the DCP board, Olivier Sanche and other end user leaders, met following the session and decided on a strategy to push for next generation cooling/compute solutions. Stay tuned for more blog updates on recent developments that were spawned from that great session.
We have great expectations for Eddie and the development of the local chapters. We believe that this is a very productive and effective way to have end users participate in work that will directly benefit their companies and the DCP efforts. The majority of the DCP work has been done over LinkedIn and through conference partnerships over the last few years. This approach will allow a much larger group of individuals to have direct input in helping us shape our efforts to influence the industry. The voice of the customer will get bigger but also smaller - more focused on their own backyard.
With that, I wanted to address a few concerns that people have raised over the last few weeks.
First, we are not competing with any of the industry groups. Quite the contrary. We plan to partner with other industry groups to optimize people's time and increase participation of end users. We plan to approach this very much like the conference partnerships we have established. There will be DCP end-user-only meetings to allow for open dialog and focused work to synthesize content and requests before we meet with the industry. There will then be additional sessions with the industry to engage and share the learning and discussion/debate from those sessions.
Secondly, we will be establishing additional chapters, but we want to make sure that we do this in a controlled manner. Eddie will be establishing the bylaws and governance to ensure that we can scale and optimize the local work of our members.
Lastly, this is not just a North American effort. We plan to establish local chapters globally. For example, the activity in China, India, Europe and other parts of Asia (to name a few) is amazing. There are tens of billions of dollars being spent globally in new and retrofit data centers.
Eddie has a full plate, but we have every confidence that he will be able to deliver a great program. If you are interested in helping or have ideas please email email@example.com or contact Eddie directly at firstname.lastname@example.org.