Data Center Pulse Blogs

Deep thoughts on Cloud SLAs

OK, they aren't really deep thoughts so much as they are observations on the SLA assumptions made between provider and customer.

Earlier today there was a great back-and-forth on Twitter about SLAs and cloud. Following are some of the associated tweets:

When SLAs are broken cash compensation may trump credits. It hurts more & if service is really bad may simplify exit. #CloudViews

So much talk about Cloud SLA's. The real issue is how the provider measures outages vs. how the customer does. Penalties are immaterial.

@mthiele10 Not being able to compensate for "actual loss" doesn't mean give up on defining any kind of penalty. We're beyond "eye for eye".

MRT @mthiele10: ...Cloud SLA's. The real issue is how provider measures outages vs. how customer does. Penalties are immaterial. #CloudViews

The above tweets, and many others that appeared over a 30-minute period, got my writing juices flowing, which is where this blog was born.

Assumption 1: We need SLAs, but maybe not for the commonly held reasons

Cloud SLAs are really an opportunity for the customer to learn more about how the provider can protect them from risk. The process of building and agreeing to the SLA is also a necessary step for the customer in evaluating their own assumptions about the quality of their existing internal architectural design. In other words, the customer must fully grasp and internalize the actual capabilities and design characteristics of their application and physical infrastructure before moving anything to a cloud provider. It's only through this self-evaluation that the customer can determine whether what the cloud provider is offering will meet the business requirements.

Key thought: Assumptions made between IT and the business while the application was in-house will be put through a much more rigorous test once it's supported on a public cloud offering. In other words, mistakes that you might have been able to apologize your way through when the application was internal won't be so easy to ignore when they are more "public".

Assumption 2: Penalties are necessary as a way to measure success or failure

I believe my interpretation of "penalties" is slightly different from that of many in the cloud buyer community, but I could be wrong. It's also possible that I haven't read the right blogs.

As suggested in the above "assumption 2" statement, penalties should be used more as a measure of success in the relationship. The fact that you've moved a business critical function into a public cloud means that you need to create a partnership with your provider that mirrors any expectation of communication and responsiveness you would have as a buyer of an internal service. It's essential to realize that communication is the first and easiest process to break. If you were having trouble communicating with your internal teams and customers, moving an application into the public cloud won't simplify the issue, it will magnify it.

It may seem counterintuitive, but you really must fully understand how to manage and support your application effectively before you give it to someone else. If you hand it off thinking "they'll take care of it", you're in big trouble already. The most common mistakes made in outsourcing are "doing it for the wrong reasons" and "not fixing the service before handing it off".

Assumption 3: A Cloud SLA should be considered a "work-in-progress"

Your SLA should mirror your current needs across any number of common metrics:

  • Performance
    • By region, by application, by process, by instance, etc.
  • Scale needs
    • Up, down, how fast, lifecycle management
  • Cost
    • When, for what, who
  • Failover or recovery requirements
    • RTO, RPO, data locations
  • Merger & Acquisition efforts
    • Integration, new markets, new regions
  • Enterprise growth plans
  • Etc., etc.
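Because metrics like these drift as the business changes, it can help to capture the SLA's measurable terms as data that both sides can review. The sketch below is purely illustrative; the targets, tiers, and credit percentages are invented, not taken from any real contract:

```python
# Hypothetical sketch of SLA terms captured as reviewable data.
# Every number here is invented for illustration only.

SLA_TERMS = {
    "availability_target": 99.9,  # percent, per calendar month
    "rto_minutes": 60,            # recovery time objective
    "rpo_minutes": 15,            # recovery point objective
    # (minimum availability %, service credit as % of monthly fee)
    "credit_tiers": [(99.9, 0), (99.0, 10), (95.0, 25), (0.0, 50)],
}

def monthly_availability(downtime_minutes, days_in_month=30):
    """Percent availability for a month, given total downtime in minutes."""
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def credit_percent(availability, terms=SLA_TERMS):
    """Service credit owed (as % of monthly fee) for a measured availability."""
    for floor, credit in terms["credit_tiers"]:
        if availability >= floor:
            return credit

# 90 minutes of downtime in a 30-day month is roughly 99.79% availability,
# which in this invented schedule earns a 10% credit.
availability = monthly_availability(90)
print(round(availability, 3), credit_percent(availability))
```

Keeping the terms in a concrete form like this makes the periodic SLA review tangible: the numbers either still match the business need or they don't.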

Seeing as how all of the above metrics are likely to change as your company changes, you need to stay close to your new cloud provider partner to avoid prioritization miscues. So again, the SLA must be treated as a work in progress and a living document. If you haven't gone over your SLA with your customers and your provider partner in the last three months, you're falling down on the job.

Key Points:

Your SLA should be a tool for helping you define requirements against "real world" needs and capabilities, while taking into account what you actually have versus what you're asking for.

Communication, communication, communication: it is the key to a successful partnership, whether it's cloud or some other business arrangement. Build in regular and meaningful opportunities to communicate, or find yourself surprised when expectations aren't met. This isn't something you leave up to someone else to do. If the provider isn't offering it, then demand it; if they can't support it, then you're with the wrong provider.

Your SLA with any provider, but especially a CSP, must be considered both a "work-in-progress" and a "living document".  As technology, the economy, and your business change, so should the SLA.


Does Loyalty Have a Spot at the Table in Modern Companies?

It can be really frustrating as a job seeker to have an interviewer ask "why have you moved around so much?" Of course you'd like to say what you feel, but that won't go over well, so you come up with some other answer that is "interview correct".

But really, why do many of us move around so much these days? There are some obvious answers: layoffs, bad leadership, termination, poor fit, better opportunity, etc. I happen to believe that there are deeper underlying reasons and that the above are but the symptoms, with loyalty (or the lack thereof) being the primary one.

Call me old fashioned, naïve, idealistic, or all of the above, but I'm a huge fan of loyalty. People will help you move your belongings over an entire weekend for nothing more than a cold beer and a few slices of pizza. Is it goodwill? No, it's loyalty.

I love this quote on loyalty by Charles Jones:

               "Loyalty is something you give regardless of what you get back, and in giving loyalty, you're getting more loyalty; and out of loyalty flow other great qualities."

How is Loyalty Destroyed by the Average Company?

Loyalty is destroyed because we are too busy thinking about making the numbers this quarter. Unfortunately, being willing to miss the numbers now and then is really the answer, but I'll provide some detail to help explain why.

The above quote by Charles Jones really says it all. The problem lies in how we as leaders "think" we're doing something, when in reality we are only "saying" it. That's correct: with loyalty, customer service, and other behavior-oriented qualities, most of us talk about them but don't actually live them. Loyalty isn't something you talk about, it's something you do. It doesn't matter whether the situation is hard or even if your job is on the line; you're either loyal or you're not. Unfortunately, if you're not, the people who work with you figure that out quickly.

Even when we think we're being loyal, we compromise ourselves by not being prepared. We are driven by the quarterly number, and all else can be sacrificed if it helps meet that number. Where we're failing is in using quarters to drive our businesses instead of the real drivers: the people and practices that make you successful in the long term.

That's Right, People Make our Companies Successful

So, if people make us successful, why are people always the first to get sh!# on? If you thought you were going to be a little short of funds next month, would you dump one of your kids or maybe stop payment on your child support? No, no sane mother or father would do that, so why do we think it's OK in a company?

I realize we're not a welfare state, and I'm not advocating that we become one. I don't believe in letting people keep their jobs whether they perform or not; quite the contrary. What I do suggest is that if you really want to get the benefit your people have to offer, then you need to give something in return. I worked at HP towards the end of the Bill & Dave period, when employees were talked about and treated like family and layoffs were virtually unheard of. Unfortunately, many of our leaders at the time misinterpreted the "we are a family" motto as "we never hurt anyone". The simple problem was a failure to recognize that even your own mother and father can bring the hammer down, but it doesn't stop them from loving you. In fact, by bringing the hammer down but still providing you with food to eat and a place to live, they are demonstrating loyalty to you and your future. I really believe that Bill & Dave were the "parent" type of leaders.

What Can We Do to Fix Our Companies?

We need to invest in our people, and by "investing" I mean more than just paying them fairly or occasionally providing some training. As leaders we need to help our teams see the company's future and, correspondingly, their own future as well. If your team knows they will be a part of the future and that they'll be given the tools to succeed in it, they will pay you back in ways that can't be measured by the simple difference between an "internal" salary and the "outsourced" equivalent.

Teach your leadership team to demonstrate loyalty, even when it hurts. By doing this you'll not only gain the trust of your teams, but you'll also be forced to think ahead about where the business is going and how market changes might affect your employees. Thinking ahead seems to be one of the toughest choices for most enterprises. Don't get me wrong, I know that most companies create a business plan, but generally speaking it's focused on how big we're going to grow and which products and markets will get us there. Little attention, if any, is paid to how we'll manage our people forward. If you don't want to take my word for all this, just look at Google. They pay better salaries for coders than most companies do, and the majority of their employees live and work in some of the most expensive markets in the world.

So, give loyalty another try. The ROI for loyalty might not be easy to put together for the CFO, but there are hundreds of examples of leaders who have grown incredible companies with outstanding long-term employees through loyalty.


Get Out Of The Way Of Progress

Thoughts on the Term "Cloud" & All its Assumed Meanings

On September 15th, I made the following rash statement on Twitter:

@mthiele10 (Mark Thiele):

We have to move past selling Cloud. Cloud is purely an infrastructure evolution, we now have to sell "What can we do differently or better"

Following are some of the back and forth tweets related to the above:

@jayfry3 (Jay Fry):

PRT @mthiele10: We have to move past selling Cloud...purely an infrastructure evolution..."What can we do differently/better" [Yep: a biz Q]


"@mthiele10 I think sales people should just say that cloud is a state of mind. #GroovyBaby" <+1


 @mthiele10 That may be the best articulation I've seen of what's been bugging me about "cloud conversations of late"


@mthiele10 @jamesurquhart Been saying that from the start. Terms and products are irrelevant - solving problems is key.

The above tweets are just a few samples of what was a lively and positive conversation, with the main point being that some of the folks who have the most to gain from using the term "Cloud" already feel that too many of us are using it inappropriately.

The point of my tweet was to get people thinking more about business opportunity and less about marketing terms or buzzwords. As a long-time IT infrastructure person, I see much to get excited about in infrastructure solutions dressed up as "cloud". However, I've been excited in the past about any or all of the following changes:

-        Client-Server computing

-        Phone based email

-        1U servers

-        Blade Servers

-        Virtualization

-        NAS vs. SAN

-        IaaS, PaaS, SaaS, etc.

Did any of the above solutions prior to the last bullet significantly change business? Did you see advertisements about how 1U servers would make your IT and business more agile? You might have, but generally speaking they were sold as infrastructure tools that could help you be more valuable or cost-effective for your customer. Solutions parading as "game changers", "converged infrastructure for a more agile business", or "Cloud, it will make you thinner" (OK, I made that up, but you get the message) are nothing more than an evolutionary step in the way IT infrastructure is made available for what's actually important: the delivery of the applications your customers use.

Don't get me wrong, I love the promise of more flexible infrastructure, a.k.a. cloud

As a long-time infrastructure guy, I always dreamed of a day when the network or the data center was the center of attention (be careful what you wish for). Now that the day has arrived, I feel that I wanted the wrong thing for the right reasons. The average infrastructure person is an underdog; they work hard shoveling coal into the fire and are only noticed when the boiler blows. Unfortunately, that is how it will likely remain: IT infrastructure staff are doomed to be the unsung heroes of IT, but at least now they have new and improved toys.

Cloud is the "new and improved toy" that can be used in a myriad of ways to help address long-standing IT flexibility and usability concerns. With the right deployments you can reduce the risk of long-term vendor lock-in or deliver an application to a remote location in a matter of minutes or hours. Most importantly, by implementing the best possible infrastructure solution you're getting out of the way of progress. I hope that doesn't sound too cold-hearted, but it is the truth. No business that isn't in the business of infrastructure ever won a competitive battle or moved a market by buying a faster server, a larger disk array, or a fatter network pipe. However, that doesn't take away from the fact that the appropriate use of infrastructure, in combination with smart applications, can really be a "difference maker" for your company's performance.

So the next time someone tries to sell you "converged infrastructure" or "Cloud", ask them first which of your current business problems or opportunities their solution is ready to fix. If they can answer that question successfully, then you've got something to discuss! Good luck and keep shoveling that coal, someone's got to keep the train moving.

Previous Related Blogs:

Private Cloud - Real or Fantasy

Cloud Project Planning

The Hidden Costs and Risks of Cloud & how do I mitigate them

Discerning Freedom and Servitude in the growing cloud management space

Other Blog articles:


What Does Data Center Modularity Mean to You?

The word du jour, at least in the data center space, is "Modularity". The only word used more often and loosely in the IT space is "Cloud". Even though both "Modularity" and "Cloud" are hyped, it doesn't mean there aren't real opportunities in both areas of technology. The trick is in understanding how the terms should be used, and where and how they should be applied.

Definition of Modularity:

-        Designed with standardized units or dimensions, as for easy assembly and repair or flexible arrangement and use: modular furniture (borrowed from a standard dictionary definition of "modular")

Interestingly most people ascribe the definition above to the building of modular data centers. Sadly, this definition doesn't do a modern data center any justice, nor does it buy the data center owner what they really need.

In simple terms many of us assume a modular data center is one that can be built incrementally with standardized blocks of capacity.  Building data centers is a CapEx intensive endeavor, so anything you can do to reduce your exposure is a good thing. However, if the only thing you consider when building your data center is whether you have enough "space", then you're missing several critical areas of opportunity and cost management.

Data centers are first and foremost warehouses: highly technical and very expensive warehouses, but warehouses all the same. In a typical furniture warehouse, you might not have any functional design characteristic more critical than how big the doors are and how high the ceiling is. Once those critical considerations are accommodated, it's just a bunch of common-use space. But what if the warehouse were a multipurpose facility with different security and temperature requirements? Now try to expand this facility with a single variety of building block. If your need for refrigerated space is very small but your high-security area is large, you end up with a mismatch of need, which equals wasted space and wasted cash.

With today's "modular" data centers you're buying chunks of data center capacity that are all built to a specific standard

               Example: Tier III and 5kW per cabinet (or 300 Watts per square foot)

A standard building block of capacity is great if you always need that exact standard. What happens when you need a portion of space that can efficiently handle 25kW per cabinet, or maybe 50 cabinets of Tier IV? Now you're beginning to see where the efficiency assumptions in many of today's modular data centers begin to let you down. But the issue is really more complex than my simple analogy above: creating a modular data center means that you have direct management of the expansion or contraction of any of the primary service capabilities of your facility. These service capabilities include power density, cooling, space, and physical durability, among others. How could you possibly future-proof your facility for a 15-20 year lifespan if, the day it's built, it has the same capabilities it will have when it's retired? Do you really want to own another large data center that's "full" but has lots of empty cabinet and floor space?
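The density mismatch is easy to see with back-of-the-envelope arithmetic. This is purely illustrative; the square footage per cabinet is an assumed figure chosen to make the commonly quoted 5 kW/cabinet and 300 W/sq ft roughly equivalent:

```python
# Illustrative density arithmetic; the square footage per cabinet is an
# assumption chosen so 5 kW/cabinet works out to roughly 300 W/sq ft.

def watts_per_sqft(kw_per_cabinet, sqft_per_cabinet):
    """Convert a per-cabinet power density to watts per square foot."""
    return kw_per_cabinet * 1000.0 / sqft_per_cabinet

SQFT_PER_CABINET = 16.7  # cabinet plus its share of aisle space (assumed)

# The standard block: 5 kW per cabinet is about 300 W/sq ft.
print(round(watts_per_sqft(5, SQFT_PER_CABINET)))   # prints 299

# Drop 25 kW cabinets into that same floor plan and the required density
# jumps to roughly 1,500 W/sq ft, five times what the block was built for.
print(round(watts_per_sqft(25, SQFT_PER_CABINET)))  # prints 1497
```

Either the new gear starves for power and cooling, or a fifth of the cabinets sit on a floor sized for five times as many.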

So when you're shopping for data center space, do your homework and give the often-used but rarely supported modularity message a real test before you buy.


Project Quicksilver


Earlier this week we pre-announced that eBay will be launching another public Modular Data Center RFP through Data Center Pulse. This is the second round of the public RFP process. Project Mercury, the result of the first public RFP, will finish commissioning by the end of this month and will be fully operational by October. Today we formally announce Project Quicksilver. Quicksilver is liquid metal mercury, which moves and changes very quickly. Besides the obvious play on the Mercury name, we picked this name because it represents the capability we are looking for in our data center portfolio: how can we create a generic, flexible data center infrastructure that can move and change with our business needs?

As you will see, we are taking the public RFP to the next level. The video below gives more insight into the project by describing the Scope, Requirements, Process and Schedule. 


You can watch the project page for all of the updates as we go through this journey. My team, partners and suppliers have done incredible things in Project Mercury. I look forward to the next phase in our evolution as we execute Project Quicksilver. 

Interested parties should email for more information.

Let the new battle begin!



eBay Modular Data Center RFP, Round 2!


It seems like forever since I have had a chance to blog! Needless to say, we've been absolutely swamped with business growth and pushing innovation as far as we can take it!

One year ago this month we tried an innovative modular RFP process that opened up the design of the new eBay Phoenix Data Center to the industry. As I write this blog, we are knee-deep in the commissioning of this groundbreaking design, dubbed Project Mercury. The challenge we put out through Data Center Pulse has yielded one of our most innovative designs to date. The goal was to unleash the creative minds in the design and consulting arena by outlining the business and technical challenges and then letting them tackle the "how". I am proud to announce that the process works. It works very well! The collaborative, partnership nature of this project has made it one of the best I have ever worked on. Barriers were shattered, competitors became partners, and the impossible became possible as the project rapidly evolved and our design requirements were exceeded. But I digress! This blog entry is not to announce the details of Project Mercury (more on that in Oct/Nov as we open it up). This blog is a heads-up to the industry that eBay will be kicking off round two of the modular RFP process! But this time, we're taking it to the next level: Salt Lake City, Phase II!



The process will begin August 19, 2011! Let the design competition begin! Stay tuned to the Data Center Pulse YouTube channel and the modular RFP page for updates. For more information, please email

Making A Change

It was a little less than a year and a half ago when I announced that I was moving to ServiceMesh, and now I've made another change. What the heck, do I have ADD? Well, I do have ADD, but that's not why I've moved to a new company.

I joined ServiceMesh because I believed in their vision for cloud management and I still do. During my time there I was able to delve deeply into the world of cloud in general, and specifically as it applied to large enterprises. ServiceMesh was on to something when I joined, and that's not changed. They still have an amazing story in the cloud management space and it's only improving.

So How Did This Change Come About?

As part of my role at ServiceMesh, I would occasionally work with partners, helping to develop an adoption strategy. Over the last few months I've been leading a project to do just that at Switch. It was during this project that it became obvious that my data center background, combined with cloud experience, was the perfect fit to help Switch achieve its goals. So, while I have moved from ServiceMesh to Switch, it was more like an "employee transfer" than a resignation. At Switch I will have several responsibilities related to data center technology, in combination with ensuring that the solution vision shared by ServiceMesh and Switch is realized. So, in effect, I'm working for both companies now.

Needless to say, I'm extremely excited about my new role. I'm back closer to the data center again, and I still get to play seriously in the cloud. The move to Switch was made easier by the fact that I believe they have the best data center solution on the market. So, as a data center guy, how could I resist?

If you find yourself in Vegas and would like to chat, be sure to look me up.


Are You a Server Hugger? Ownership Disease and How It Can Hurt You in IT


Ownership has several important connotations, and I use it to describe my take on personal responsibility for pretty much every aspect of my life. However, it can also mean a "systems" approach to "owning" all aspects of a specific service, solution, or function (i.e., "I own the data center top to bottom"). While both of the previous "ownership" definitions are positive, there is a "darker" aspect of owning (hugging) "things" in IT.

The audible symptoms of being Positive for Ownership Disease

I've been in IT for over 20 years now, and I've seen all the symptoms as we move from one service strategy to another or make an effort to transform the technical architecture of a component of the infrastructure. Maybe some of you will recognize the following paraphrased quotes, which are examples of the symptoms of ownership disease:

"We can't move to client server, there's no way it will ever be as rock solid as big iron"

"No, Mr. CFO, we shouldn't move to VoIP, it's not ready for production. Our current switch has been running without interruption for years, why take the risk?"

"Virtualization isn't production yet, when you really want to ensure performance you have to stay on hardware"

"If you want storage performance and dependability it has to be SAN, and it has to be from (three letter acronym here)"

"We can't use outside air for the data center. It's too dirty and it will increase the failure rate of our hardware"

In its current form, the disease is most often identified through the following phrase:

"We shouldn't adopt cloud, it's not secure"

The reality is that any one of the above quotes contains just enough truth to scare the unaware executive into staying the course. What is generally not explained is the lost opportunity versus a realistic risk value. I'm sure most of us can quickly look at these examples and see the issues (e.g., how important was 100% uptime on the legacy phone switch versus saving $2 million a year by going with a VoIP solution or even outsourcing it?). In reality, if the CEO/CFO understood the real risk value versus the savings or business benefit, they would likely have said to make the change.
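The VoIP example above can be framed as a simple expected-value comparison. Apart from the $2 million savings figure from the example, every number below is invented for illustration:

```python
# Expected-value sketch of the VoIP decision. The $2M savings comes from
# the example above; the outage probability and impact are invented.

annual_savings = 2_000_000   # savings from replacing the legacy switch
outage_probability = 0.05    # assumed chance of a serious outage per year
outage_impact = 500_000      # assumed business cost if that outage happens

expected_risk_cost = outage_probability * outage_impact
net_expected_benefit = annual_savings - expected_risk_cost

# A $25,000 expected annual risk cost weighed against $2M in savings
# makes the "why take the risk?" argument much harder to sustain.
print(expected_risk_cost, net_expected_benefit)
```

Even if you double or triple the assumed outage numbers, the comparison still lands the same way, which is the point the scared executive never gets to see.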

You can call the above quotes myths, FUD, or lies, but in most cases it really boils down to "ownership" and the fear that not being the owner of something ("that I know better than anyone else") will put my job at risk. The truly unfortunate aspect is that this fear of job loss, or at minimum status reduction, is often generated or perpetuated (mostly inadvertently) by leadership.

The visible effects of being Ownership Disease Positive

There are a number of places you can look to find the visible symptoms of the disease, but the most obvious is the delayed adoption of solutions that can bring real change to your business and consequently to your IT team. While it's true that technology should never be adopted for the sake of "technology", it's also true that real opportunities for improvements in cost, management agility, and business execution can be lost when the disease strikes. Let's look at a couple of current examples:

Virtualization - As a commonly utilized building block for efficient IT and improved execution, virtualization is also the most often used on-ramp to the cloud. Companies that failed to make a real investment in a strategy for virtualizing their environments now find themselves behind the curve in the adoption of cloud computing.

Data Centers - Despite being the beating heart of the IT organization, the data center is often overlooked as nothing more than an expensive room that we occasionally have to throw lots of money at. Failing to adopt things like virtualization or outside air will now mean that you're stuck with a beast that is at max capacity with only one quarter of the actual space utilized. You're also stuck with an inflexible design that won't allow you to quickly take advantage of new business opportunities, or to just as quickly scale back to avoid wasting cash during a difficult business climate.

Why am I Blaming The Leadership?

Who do you blame when your favorite team doesn't win the championship? You might blame the team owner, or maybe there's one player you really dislike, but in most cases it falls on the coach. So, if your IT team isn't dealing effectively with ownership disease, you have to look to leadership first. What can we do to reduce the risk? This is the tricky part; I know I don't have all the answers, but the following are a few of my recommended strategies to help protect your team from becoming infected.

Recognition and Job Comfort (or freedom from fear)

I'm a huge believer in appropriate recognition:

Example role: In a high-impact support function with lots of customer interaction, getting regular (daily) positive feedback is critical. I'm also a believer that you find the job for the person. If you find yourself perpetually having to explain customer service principles to the same person, you need to find what they are good at. If you spend too much time trying to change negative behavior instead of reinforcing positive behavior, you end up with nothing. Why expend 80% of your energy to get a 20% solution when, if you focus on natural abilities, the good stuff practically happens by accident and, even weirder, the employee likes their job?

When recognizing someone be sure that the message you convey is the right one. If you find yourself saying "Gosh George, without you knowing the phone switch so well, I don't know what we'd do" then you're reinforcing the wrong things.

Recognize your teams for working themselves out of a job. One of my favorite leaders used to tell me all the time "if you work yourself out of a job, I'll find you a better one" and he meant it. This particular form of recognition is the riskiest for the manager, because if you don't plan to follow through (yes that means you have to actually work at future proofing your team) you will immediately lose the trust of your folks.

Be sure to recognize individuals in the manner that they, not you, are most comfortable with. Yes, I know it's novel, but everyone is different.

You must recognize taking "smart" risks as a positive behavior. Talking about it once a quarter isn't enough; it has to be visible and real (promotions, new job options, mentions, etc.).

If you do the above and maintain regular communication (not just monthly 1:1s) with your staff, you're likely to build a strong team that is willing to speak out about waste and inefficiency, even if they're talking about their own function. The hard part: you'll know you're successful when your team tells you that you've screwed up. When your team no longer believes that their future is tied to knowing a specific vendor technology or architectural strategy, the natural fear we all have of change will be dramatically reduced.

Now go out there and vaccinate your team. Or, if you recognize some weakness in your leader(s), point them to this blog. You'll know you're disease-free when you're the first one to say "we should consider reviewing alternatives" with the full understanding that the alternatives could affect your role.


Facebook’s New Data Center – What can we learn from it?


When I read about a data center like the new Facebook facility in Prineville, Oregon being commissioned and learn of all the innovations, I'm heartened at the headway our industry is making, but I'm also forced to think of an analogy. The Facebook facility is very much like the NASA space program: there's lots of great tech created, but it takes a while before Tang is in everyone's fridge. Eww, I can still remember the taste of that orange-colored vile brew.

This blog is in no way a negative take on what Facebook has done; quite the contrary. This new facility is an excellent example of how real innovation can occur when you break down the assumptions that most of us operate under: that high temperatures or outside air are a problem, that what our vendors tell us is "all there is", and so on.

I led a team that built a very efficient facility for VMware in Wenatchee, Washington two years ago. Many of the basic characteristics of the Facebook facility (not their IT equipment) mirror what we did in Wenatchee. We used outside air, we conserved water through a grey water system, and we heated the offices with hot air from the servers without using ducting. We also had hot air containment, no raised floor, and a modular design for the build-out of the larger pods and the smaller containment units. I'm no longer at VMware, so I don't know what the PUE is, but during commissioning and first use we were seeing 1.25 or less as the expected efficiency. However, the point is that Facebook has taken several known opportunities and improved on them, and they've pushed the boundaries on equipment design with their partners and suppliers.
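For readers unfamiliar with the metric: PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so a PUE of 1.25 means 25% overhead beyond the IT load. The kW figures below are invented for illustration, not measurements from either facility:

```python
# PUE = total facility power / IT equipment power. A (theoretical) perfect
# facility would score 1.0. The kW figures below are invented examples.

def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total draw divided by useful IT draw."""
    return total_facility_kw / it_load_kw

# 1,000 kW of IT load plus 250 kW of cooling/distribution overhead:
print(pue(1250.0, 1000.0))   # prints 1.25
```

The closer that ratio gets to 1.0, the less power is being spent on anything other than the servers themselves.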

Why won't the Facebook design apply to everyone?

The Facebook design won't apply to everyone, just as it probably doesn't apply to some of Facebook's own IT application environments. The variety of hardware and legacy application and physical architectures in most large IT shops means that building something one-size-fits-all is a non-starter. That being said, it doesn't mean there aren't one-size-fits-all environments; they just aren't designed to the same efficiency ratings being claimed by Facebook. Also, besides the fact that Facebook can buy servers in large numbers (1000s at a time) with every order, they can also buy the same kind of server. The goal of homogeneity is still extremely elusive in enterprise IT environments.

What are the positive learnings to take away from the Facebook solution?

  • Higher temperatures in the data center are OK. If you're still running your facility at the standard 68-72 degrees F, you're wasting a lot of energy.
  • Using outside air is being proven out yet again. Early adopters began using it 5-6 years ago, and we're finally starting to accept it as a fact.
  • You can and should push back on your suppliers to give you gear that does the job without being wasteful.
    • Reduce packaging
    • Eliminate unnecessary additions to servers that don't add to functionality, efficiency or availability
  • Demand higher efficiency power supplies
  • Look for modularity in virtually everything you implement
    • Server design
    • Data Center building design
    • Power distribution
    • UPS capacity
    • Network design
    • Etc.

When you push your suppliers, you'll be surprised what you can get. But remember: you have to know what you need and why you need it, or others will define your needs for you.

Most of us know what to do; we just have to decide to do it. Just remember that even the coolest-sounding efficiency benefit can sometimes cost more to implement than it returns in reduced energy or management costs, so do your homework.
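That homework can start with something as simple as a payback calculation. A minimal sketch, with all dollar and energy figures hypothetical:

```python
def simple_payback_years(upfront_cost: float, annual_savings: float) -> float:
    """Years to recoup an efficiency upgrade, ignoring
    discounting, maintenance, and energy-price changes."""
    if annual_savings <= 0:
        return float("inf")  # the upgrade never pays for itself
    return upfront_cost / annual_savings

# Hypothetical retrofit: $50,000 up front, saving 125,000 kWh/yr
# at $0.08/kWh, i.e. $10,000 per year.
annual_savings = 125_000 * 0.08
print(simple_payback_years(50_000, annual_savings))  # 5.0 years
```

If the payback period outlasts the expected life of the equipment (or your lease), the "cool" efficiency feature is costing you money.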

I'd like to close by saying that this Facebook data center generally supports the message in several of my previous blogs (Manufactured Data Center and Cookie Cutter Data Center). As data center builders, many of us hold on to our creations like they are our personal Frankenstein; it's time to let go. The complexity of building, owning, and operating your own facility effectively is just too much risk and overhead for the average IT organization, and for the enterprise itself.


Paul Sun Director of Cloud Computing for ITRI Taiwan Joins the Data Center Pulse Board of Directors

Data Center Pulse expands the board of directors to continue the goal of influencing the datacenter industry through their exclusive, global end user community.

UNION CITY, CA, February 8th, 2011 - Today, Data Center Pulse added Paul Sun to the Board of Directors as the Asia Pacific Regional Director.

Paul Sun is the Director of the Cloud Computing Center for Mobile Applications at ITRI (Industrial Technology Research Institute). ITRI is a national research organization that serves to strengthen the technological competitiveness of Taiwan. ITRI's 6,000 employees conduct advanced research and development in Communication and Optoelectronics, Precision Machinery and MEMS, Materials and Chemical Engineering, Biomedical Technology, Sustainable Development, and Nanotechnology.

Mr. Sun is responsible for cloud computing hardware and system infrastructure research and development at ITRI.
Paul recently articulated his motivation for accepting the DCP board position. "The Data Center Pulse Board of Directors is made up of an outstanding team of veterans and experts from the data center industry. It is an honor for me to join and help to make DCP into a leading, international organization. Asia is experiencing tremendous growth in the data center industry. My main goal is to focus on the APAC countries, help to foster the exchange of ideas and needs in Asia as the DCP representative."

The Data Center Pulse core membership has reached 1,850 people in 62 countries, representing almost 100 different industries. The interest level is increasing, and the timing is right to focus more energy on our membership in the Asia Pacific region. With Paul in place, DCP will have a representative positioned to help gather information on local issues while also representing DCP to new potential members.

Data Center Pulse has the ability to reach a significant population of data center customers, ranging from a single rack to some of the largest data centers in the world. DCP continues to search for candidates to fill the remaining board positions, as well as for participants in the Industry Alignment Board (IAB) and the Technical Advisory Board (TAB). To become a Data Center Pulse Member, click here. For more information on DCP or local chapter interest, please email


About Data Center Pulse: Data Center Pulse (DCP) is a growing, non-profit, datacenter industry community founded on the principles of sharing best practices amongst its exclusive membership.  Founded in late 2008, DCP is quickly becoming an industry nexus for the explosive datacenter industry's operators and influencers.  DCP's mission is to align end users to share information thereby influencing the industry by defining, adopting and driving best practices and next generation solutions. The DCP members are the individuals that evaluate, recommend and purchase the products and services for the datacenter. They represent billions of dollars of annual purchases that drive the IT economy. Information is available at