Data Center Pulse Blogs
After blogging for some time on my Dutch website (www.janwiersma.com), some of my English-speaking friends asked if I could republish and extend some of those posts in English. This way the knowledge can be shared with a bigger audience.
While my English writing style definitely needs some work, I will give it a go at my DCP blog.
Get ready for an avalanche of blogs. ;-)
Saying a modern server uses too much power is like saying a train uses more power than a horse-drawn wagon. Of course it does, but it also does way more work. Let's not forget what's important to the question of cost: simply, how much work is the server performing?
I've contributed to this noise in the past, but recently I've had a change of heart. After reading several recent articles that mention the cost of power exceeding the cost of the server, I've come to a new realization on this issue. The power use must be measured as it pertains to work potential.
Watts vs. Work Output
A single server today is twice as capable as a single server from two years ago. There are more CPUs and each CPU is more powerful. There's more memory, and each DIMM is faster and has more capacity than its predecessor. So, even though the amount of power being used by each server has gone up, the actual "watts per work output" has gone down.
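The "watts per work output" idea can be sketched in a few lines of code. The numbers below are purely hypothetical assumptions for illustration, not measurements of any particular server:

```python
# Hypothetical comparison of "watts per work output" across two server
# generations. All figures are illustrative assumptions, not measured values.

def watts_per_work(power_watts: float, work_units: float) -> float:
    """Power consumed per unit of work delivered (lower is better)."""
    return power_watts / work_units

# Older server: draws less power, but does far less work.
old = watts_per_work(power_watts=250, work_units=100)   # 2.5 W per unit

# Newer server: draws more power, but is (at least) twice as capable.
new = watts_per_work(power_watts=400, work_units=400)   # 1.0 W per unit

print(f"old: {old:.2f} W/unit, new: {new:.2f} W/unit")
```

The newer machine uses more total watts yet delivers each unit of work for less energy, which is the whole point of measuring power against work potential rather than in isolation.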
We must always be looking to make the solutions we create more efficient in design, life cycle and use characteristics. However, we shouldn't overlook the fact that these systems are replacing work that would otherwise use much more power.
So, if you're really worried about how much power your servers are using, there are myriad opportunities you can pursue:
- Use of cloud
- Improved management platforms for your infrastructure (cloud & virtual)
- Applications (software) written with efficiency of operation as a consideration
- Right-sizing your environments
- Using a well-designed and efficient data center
What you shouldn't do is fall into the trap of "not seeing the forest for the trees". Focus on the activities that generate business benefit, and do those activities with sustainability and efficiency as part of your process. If you start trying to extend your servers' lives instead of implementing solutions that create business opportunity, you'll be missing the forest. You might also find that many companies are actually improving their bottom line by replacing efficiently utilized servers more often, so they can reduce wasted energy and increase the work output per watt.
Is it safe to build a data center anywhere along the coast? Can you really protect the availability or accessibility of your systems in the face of hurricanes, earthquakes, and other natural disasters? Just because you've built a solid structure, doesn't mean you can guarantee accessibility and your data center is nothing without connections.
Keeping the Data Center alive
The general assumption by most is that keeping your data center running is akin to keeping it "available". The reality is that availability and running do go hand in hand; however, running alone doesn't guarantee availability.
The data center community spends billions of dollars every year on facilities that can withstand a variety of disasters. Money is spent on risk avoidance for fire, floods, earthquakes, hurricanes, minor terrorist attacks or even the angry worker trying to drive a truck through the door. However, the primary limitation is still that these protections in most cases only help to guarantee that the data center keeps running.
My Data Center is Safe, what else matters?
So, you figured out how to protect your data center from the disaster du jour, congratulations. It's too bad that even though your data center was protected your customers can't use it or your employees can't access it. That's right, you've got power and HVAC, the servers and disk drives are humming, but your customers can't access anything, why is that?
A data center is no different from a person. As with any living being, you need your ecosystem to support your existence.
Example disaster scenario: Human in an urban environment
You've built a strong house, with good security. You maintain a small supply of food and water, and maybe even have a generator. Then a major disaster occurs that eliminates your ability to be mobile and your access to many external services.
What happens when:
- you need a doctor?
- you run out of milk?
- you need a firefighter or police officer?
- you need to travel?
- you can't connect to Netflix?
- your 15-year-old can't update Facebook?
- Etc., etc.
As you can quickly gather from the above example, the human in this case is likely to start having problems fairly quickly, even though they are "alive" post-disaster. While alive is almost always better than the alternative, being alive for a few extra days is not the same as surviving to live on.
Example disaster scenario: Data Center
You've built an earthquake-safe data center (a building that can withstand the proverbial "big one"). You've even installed base isolation under the server cabinets to avoid any serious vibration or shock to the IT equipment. Unfortunately, the water main two blocks away doesn't have the same earthquake protection, and neither did the nearby highway overpass that feeds the area your data center is in. Then, because of road cracking and the overpass collapse, all the fiber coming into your facility is gone. Good news though: the data center is still running on its own generators and temporary water supply.
What happens when:
- your customers need to access equipment remotely, as they most often do?
- your staff can't gain access to the facility because of local conditions?
- your staff is unavailable because they are worried about their own families' safety and health?
- you begin to run low on diesel or water, and the roads are impassable or you aren't the priority for limited supplies?
- Etc., etc.
The above situation can apply to any number of disasters, from a hurricane to a tornado, or even a major fire, snow storm or flood. In the above scenario your data center might actually be functioning as an island, but what good does that do you? Sure, your equipment is safe, and when services are restored you won't have to rebuild your infrastructure, but in the meantime you are down as far as your customers are concerned.
Modified thinking around data center design and location is required
Historically, the concerns I've voiced above carried more limited risk and, to some degree, a lower likelihood of occurrence. However, there are several factors that are changing, or should be changing, our long-held assumptions about what "safe" means for our data center facilities.
Factor 1: Climate Change
Believe in climate change or not, the fact remains that on a regional and global scale we are seeing more natural disasters than ever, and these disasters are getting bigger in scope. Fires, hurricanes, droughts, tornadoes, and other weather phenomena are increasingly getting nastier. If we hold the assumption that weather-related disasters will occur more frequently and will impact new regions, we see a significant magnification of the problem (i.e., more storms that are stronger, combined with a wider area of impact). There's even a new study suggesting that rapid climate change can increase the number and severity of volcanic eruptions.
Factor 2: Density & Value
We are continuing to place more value (real & perceived) on our access to technology. Every day we move more of what we used to do with a car or our hands into a technology solution. Through the increased availability of technology we are finding even more ways to take advantage of it, effectively accelerating the growth of technology use. The greater our emphasis on technology becomes, the more dependent on it we become. As this dependence grows, the importance of it being available to us goes up.
Factor 3: Infrastructure Demands Changing
There are many variables that affect how our IT infrastructure solutions are built and protected. In some cases the advent of cloud-oriented solutions means you have greater protection from localized disasters (e.g., Google, Microsoft, Yahoo). However, in other cases (legacy IT, enterprise IT, and Big Data) it is likely that more protection in a given location will be necessary, not less.
The three factors above suggest that how you build to protect your data center should get a higher priority, but even more importantly, where you build is critical.
As with any major IT or business decision, a number of factors of opportunity and risk need to be weighed before you make a final choice. Each of these factors needs to be considered: latency, sustainability, access to connectivity, power, water, skilled staff, and so on. So, if you find yourself looking at several options whose costs are similar, but one has a better, more survivable location, you know what you should pick. Why accept the risk that a tornado might remove the data center's roof or a flood might inundate your equipment, if you don't need to?
Location, Location, Location...
I've written about site selection decisions before (Blog 1 & Blog 2), but never have I felt it so important to consider disaster risk at the level I do now. With the recent hurricanes on the East Coast, and tornadoes hitting more often in more locations than ever, I don't see how you can avoid the question of "Is this the safest place for me to entrust my company's IT jewels?"
Whether Human or Data Center, your ecosystem of support is critical
Dig deep into your list of requirements for where your company's compute infrastructure should be placed, and eliminate the options that don't meet your requirements for latency, power, sustainability, political risk, etc. Then, from the remaining options, pick the one that offers the best combination of price and protection, keeping in mind that protection is much more than a well-built facility. However, that doesn't mean that if you find a safe geographic location you can forget about things like roof penetrations, wood in the construction, poor operational habits, and other basic facility operations and design factors that are critical to service availability. With all the concern I've voiced about weather and geography, the two most likely issues are fire (keep wood out) and people (have excellent operations habits and tools).
Back in March, I wrote about the 2012 Data Center Top 10, the current pulse of what is hot, interesting, challenging or emerging from the DCP community. “Renewable Power Options” came in at Number Five but for eBay, it’s near the top of the list.
We fundamentally believe that the future of commerce can be better than it is today; not only more convenient and accessible to consumers, but greener, cleaner and more efficient. The technology infrastructure and energy behind eBay’s commerce platforms are core to this vision. I’ve written here many times about the radical efficiency measures and innovative design approaches that my team, in tandem with our industry partners, has integrated into our data center portfolio. But as remarkable as those accomplishments have been, we are still using more carbon-intensive electricity than we would like. For the last three years, we’ve traversed the complicated regulatory environment and ever-expanding technology arena to source clean energy where we operate. Today, I’m excited to announce our next step in that journey.
On September 4th, eBay sent a Request For Qualifications (RFQ) to organizations that can supply or develop renewable energy for eBay data centers and office locations in Utah and other locations in the Western U.S. A natural next step following the clean energy legislation we helped develop and pass in Utah earlier this year, this work will also supplement other clean energy programs eBay is pursuing, including our recently announced collaboration with Bloom Energy. As with our entire corporate energy portfolio, we are agnostic as to the renewable technology used, and are open to everything from wind to solar to geothermal to trash power – or anything else that makes economic and ecological sense for our business, and for the local community. To ensure that we have a complete picture of the options available to us, we are leveraging our public Request For Information (RFI) process, established in 2010, which yielded our award-winning data center, Project Mercury in Arizona, and our latest expansion, Project Quicksilver in Utah. My hope is that the renewable power community will step up with proposals on how we can expand our portfolio with innovative and cost-effective clean energy solutions in our own backyard.
Vendors who meet the requirements of the RFQ will be invited to respond to a more detailed RFI under a mutual non-disclosure agreement (MNDA). RFI responses are due by October 31, 2012.
Email firstname.lastname@example.org if you wish to participate, or have any questions about the RFQ or this process. We want to hear from you. We can’t wait to announce the results and take a big step closer to our vision of a commerce platform powered 100% by clean energy. Stay tuned for updates!
Join Mark Thiele at the 2012 Cloud Asia Conference in Singapore on May 14th.
This is an outstanding regional conference, with a long list of very strong industry and end-user speakers.
It would be great to meet in person, so if you decide to join us, please look me up.
Please join Data Center Pulse at the Data Centres Europe Conference May 23rd & 24th in Nice, France.
Jan Wiersma, DCP Board member & Director for Europe, will be speaking
Mark Thiele, President & Founder of DCP, will be speaking
We will also be putting together a Data Center Pulse gathering on the 25th if we can get enough interest from members.
This looks to be a great conference, with excellent content and a long list of very good speakers, so please join us if you can.
I fully expect this blog will create controversy, especially amongst some of my awesome friends in the sustainability/green space. To be clear, I am not actually advocating putting away the cleats and buying a game console.
This blog will probably anger some of my friends who are parents. However, it's not meant to suggest a specific change as much as an effort to get us looking at the real issues, and not be taken in by loud mouthed commentators on both sides of the sustainability issue. The idea for this blog was generated while I was attending a Sustainable Silicon Valley event in Palo Alto with the Ambassador to Norway, the Honorable Barry White.
Could playing soccer really be worse than playing video games for the environment?
Much ado has been made about the amount of energy utilized by power hungry data centers and new technology in general. It's estimated that over 3% of the power used in the US goes directly to running all our data centers. That's an amazing figure especially when you compare how relatively few data centers there are versus cars or traditional office buildings. However, the question I've asked before and I am now asking again (only in a different way) is; are those data centers actually hurting us or are they benefiting us by consolidating IT gear and providing us efficiencies through the use of technology?
An obvious comment from the "kill data centers at all costs" groups might be that we need to find a way to reduce the creation of applications and technologies that require a data center. Interestingly enough, it seems data centers are trying to save our planet while kids playing soccer are killing it. Here's the scenario.
Save the Earth, Play a video game
Video Game (PS3 or Xbox360) vs. playing soccer on a club team (could be volleyball or basketball, doesn't matter)
Being Driven to Soccer                     | Video Game Console, Sound & TV combined
-------------------------------------------|----------------------------------------
10 miles driving @ 20 MPG city             | 4 hours of play
0.5 gallons of gas @ 32.91 kWh per gallon  | Combined power used: 1.8 kWh*
Extra clothes washed (?)                   | Cloud-based compute & network resources used: 1 kWh (generous)
Grass cut & watered (?)                    |
Fridge opened 4 times (?)                  |
Why the question marks (?) in the above table? I felt that, considering the overwhelming numbers put up just by driving the car, there was no need to make a point of any of the other resources utilized. Also, it could easily be argued that the other activities associated with soccer are much more impactful than the activities associated with video game playing. Even if you only drove once for every 5 times you played four hours of video games, you would still have a greater negative impact on sustainability.
*Perspective: It's important to note that a cabinet filled with high-performance servers draws approximately 25 kW of power, i.e., 25 kWh every hour. Even the use of an all-electric vehicle doesn't make soccer more sustainable than playing video games (the estimated impact of an electric vehicle in the above example would be about 5.5 kWh used).
- That same 25 kW cabinet filled with running servers can support thousands of gamers, Google searches, map requests, travel bookings, financial research, etc., all at the same time.
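The arithmetic behind the comparison is simple enough to check. A minimal sketch, using only the rough estimates already quoted above (these are the blog's ballpark figures, not measurements):

```python
# Back-of-the-envelope energy comparison from the soccer-vs-gaming example.
# All inputs are the rough estimates used in the text, not measured data.

GAS_KWH_PER_GALLON = 32.91  # approximate energy content of a gallon of gasoline

# Soccer: being driven 10 miles in a car that gets 20 MPG in the city.
driving_kwh = (10 / 20) * GAS_KWH_PER_GALLON   # 0.5 gal -> ~16.5 kWh

# Gaming: 4 hours of console + sound + TV (1.8 kWh) plus a generous
# 1 kWh allowance for the cloud compute and network resources used.
gaming_kwh = 1.8 + 1.0                         # 2.8 kWh

print(f"driving: {driving_kwh:.1f} kWh, gaming: {gaming_kwh:.1f} kWh")

# Even if the car trip happens only once per five gaming sessions,
# driving still consumes more energy than the gaming does.
print(driving_kwh / 5 > gaming_kwh)
```

Under these assumptions the single car trip uses roughly six times the energy of the entire gaming session, which is why the other soccer-related line items in the table hardly matter to the outcome.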
It isn't always obvious what being sustainable is:
Solar panels: The rumors abound regarding the negative impact of construction and waste of solar panels. One estimate I heard was that it takes 20 years to recover the carbon created during construction. However, upon further research I found just the opposite. Solar panels seem to be an excellent and almost immediate opportunity to reduce creation of carbon emissions with a lifetime benefit of as much as 89% reduction over 30 years. What is true is that there is still a good way and a bad way to use this technology.
Personal Note: I really wish we could develop better ways to store energy created by intermittent sources (i.e., sun & wind). I would also love to see solar panels used to power a data center, but unfortunately, they don't make sense economically or space-wise. They could never fit in or around a typical data center and supply enough power to make a difference, and because they're intermittent they must be backed by an alternative source of energy.
Owning a hybrid: The Prius is another seemingly awesome concept for sustainability. However, I'm afraid the reality is somewhat different from our generally vain reasons for owning one. While a hybrid car can in fact save you significant dollars on fuel, the overall lifecycle impact of a Prius isn't that much different from any other combustion-engine vehicle. It might make the driver feel better, but being sustainable shouldn't be about "feeling better"; it should be about "doing better". I would argue that the appropriate use of a hybrid does improve the equation significantly, but if you drive less than 1,000 miles a month and generally drive on the highway, don't buy a hybrid.
Important thoughts from the Sustainable Silicon Valley meeting
I think the single most important thought coming out of our little save-the-world dinner party was the idea that finding ways to reduce use in our everyday lives and our businesses is the most effective way to have a positive impact. One of my primary focuses in IT has always been the efficient use of resources. I know it's weird, but efficient use means you're only using what you need when you need it, which by its very nature means you're being more sustainable.
I know the above isn't a new thought, but many of us think that the hybrid or solar panels are the best way for us to reduce our usage. In some cases the hybrid, solar panels and other "sustainable" or "green" solutions can conserve when implemented and used properly. I believe it's incredibly important for us to consider where the world is today on the tipping scale of climate change risk. If the scientists are correct and we are nearing the point of no return as far as global warming is concerned, then it would stand to reason that our consumption right now is critical to effecting change. If we assume that we can continue to consume at current rates because buying a hybrid or putting up solar panels will save us, I'm afraid we might just be hastening climate change in the name of preventing it.
What's the point?
As I said earlier, "being sustainable isn't about feeling better, it's about doing better." If you are really interested in lowering your personal carbon impact on the world, then make your changes in an educated fashion. Don't do it for the sake of vanity.
- Reduce resource use by actually using less, not by masking the use
- Push for change in the areas that can really provide benefit, not on things like Ethanol
- Push for legislative change in where/how we provide tax payer funds to support the generation of energy
- We don't have to believe in human influenced climate change to want a cleaner planet and even if humans aren't the cause of global warming, why take the risk?
- Moving away from using fossil fuels isn't an if but rather a when, so let's help our politicians to accept that fact and focus on it now, not when it's too late.
We are all using more and more technology every day, and there's nothing anyone can do to stop that. The benefits in efficiency, defense, resource utilization, entertainment, travel and many other areas mean that no matter what any one person or government does, others will exploit it for advantage. I'm obviously advocating that data centers aren't the evil some like to believe they are, but that doesn't mean we are absolved from the need to make them efficient. Also, whenever possible we need to power them from clean and/or renewable energy sources. In a recent blog I even made the point that building data centers to support high-density server environments is more sustainable than low-density designs, because the high-density design reduces the number of buildings required to support the same amount of work.
So, the next time you visit or drive by a data center think twice before you assume that data centers are the problem, but please don't get rid of the kid's cleats.
Previous blogs on this topic:
Prius or hybrid:
Have you ever wondered what is on the mind of Data Center End Users? Why they make the decisions they make? What problems they are trying to solve? What keeps them up at night? Back in 2009, Data Center Pulse took a shot at capturing those thoughts through the 2009 Top 10. Over the last three years, this list has morphed as the interests, challenges and solutions emerged.
Today we are pleased to release the 2012 Top 10 that I was able to present at the Green Grid Technical Forum on March 7, 2012 in San Jose, CA. The 2012 Top 10 was vetted with the attendees of the DCP Summit held in conjunction with the Green Grid. DCP members discussed and debated the Top 10 along with the primary topics selected by attendees - The Green Grid Case Study on Project Mercury (video) and the Service Efficiency Metric Proposal. You can see the results from the Summit on my latest blog entry, DCP 2012 Summit Results.
2012 Top 10
- Facilities & IT Alignment
- Top Level Efficiency Metric
- Standardized Stack Framework
- Move from Availability to Resiliency
- Renewable Power Options
- "Containers" vs Brick & Mortar
- Hybrid Data Center Designs
- Liquid Cooled IT Equipment Options
- Free Cooling "Everywhere"
- Converged Infrastructure Intelligence
The Top 10 is the current pulse of what is hot, interesting, challenging or emerging from the DCP community. We were able to record the Top 10 presentation I gave at the Green Grid Technical Forum closing session. The presentation showed how the Top 10 list has morphed over time as end-user interests and challenges have changed, and provided context on each of the entries.
The DCP charter is to influence the industry through end users. We hope this latest Top 10 will give you insight into what is important right now - i.e. The Pulse.
On March 5, 2012, DCP members from as far away as Japan and Taiwan converged on the Doubletree hotel in San Jose, CA for an all-day collaboration session with end-user peers - the DCP 2012 Summit, held in conjunction with the Green Grid Technical Forum. Almost 50 of my industry peers from companies like Yahoo!, Microsoft, LBNL, Stanford University, Salesforce, @ Tokyo, Delta, Equinix and others focused on discussing what's hot - i.e., the current "pulse" in DCP. With over 2200 members in 66 countries, there is definitely a lively "pulse".
The summit registration process yielded three priority topics:
- The Green Grid Case Study on eBay's Project Mercury
- The new Service Efficiency Metric proposal
- The DCP Top 10 for 2012
This year we changed the format. Instead of choosing 6 or 7 topics and breaking out into parallel groups, we selected a smaller number and held them in series so all members could be involved in the rich discussion and debate. The format worked out well. We had over 3 hours of discussion on Project Mercury, 2 1/2 hours on the Service Efficiency Metric, and a wrap-up hour on the Top 10, which I presented on behalf of DCP at the Green Grid Technical Forum closing session on Wednesday, March 7, 2012 (watch for an upcoming blog and video on that next week). Below are three videos summarizing the event and the two primary topics.
As Mark and I discussed last January in Episode 33: Three Years Later, we are getting back to basics. These collaboration sessions are one of the key reasons that end users participate in Data Center Pulse. The networking, discussion, debate and innovation that comes from them is aligned with the Data Center Pulse charter to influence the Data Center industry through end users.
DCP 2012 Summit Summary
DCP 2012 Topic 1 - Project Mercury Case Study
DCP 2012 Topic 2 - Service Efficiency Metric Proposal
In December of 2011 we hosted an exclusive Data Center Pulse collaboration session one day before we held the opening of the eBay Data Center, Project Mercury. The goal of this collaboration session was to bring 50 of our Data Center peers together to deep-dive into the project and the lessons learned, and to discuss/debate the relevance of these concepts being applied to their data centers. We also did something new in this session - we allowed 5 vendors to participate. Wait, before you cry foul and question why we would go against our charter, I need to lay out some context. We invited the design and construction teams (EDI Ltd, AHA Consulting Engineers, Winterstreet Architects & DPR) to participate in the closed-door session with members. These were the engineers that did the actual work, not sales, marketing, etc. They had very relevant insight into the challenges and lessons learned. That session went very well, with lots of people discussing and debating the implementation and the practicality of application in their environments. Once we finished that session, we had parallel deep dives with the Dell and HP technologists who were directly responsible for the Container, Server and Storage designs and implementations in Project Mercury. It was engineers talking to engineers.
This collaboration session turned out to be one of the most productive we've had to date. Below is a video with footage from the event, a quick tour of the Project Mercury Data Center, and interviews with some of the attendees.
We are hosting our next DCP Summit on Monday, March 5, 2012 in San Jose. You can email email@example.com to receive the password to register. View the summit details here. One of the topics at the summit will be the Project Mercury Case Study published by the Green Grid on February 27, 2012.