Offshore BC&DR

Thinking about Nostradamus' predictions for 2012 and all the cataclysms that will strike that year? Afraid, and developing your bulletproof Business Continuity and Disaster Recovery (BC&DR) plans? Well, if meteors, super-volcanoes, and melting ice caps flatten, burn, and flood most of humanity, those BC&DR plans won't matter much. However, ever more powerful floods, hurricanes, and earthquakes with enormous tolls remind us of the vulnerability of even rich nations such as the USA, Canada, or China…

As I pointed out in Force Majeure, working with offshore organizations increases the risk of substantial losses, which brings up the importance of having solid BC&DR plans. Of course, if you are working with a mature offshore partner, they will have their own BC&DR plans. That's great, with one important caveat – will these BC&DR plans work when needed?

It was not long ago that the supposedly invincible 365 Main data center in SF went lights-out for a considerable period after a scheduled (!) blackout. So there are a number of questions you need to answer:

  • Does your offshore vendor have solid BC&DR plans?
  • How often are they tested?
  • What are the KPIs/metrics associated with these plans? (See the sketch after this list.)
  • Are those metrics sufficient for your business?
  • Who audits these plans and activities? (You wouldn't take the vendor's word on it, would you?)
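
To make the KPI question a bit more concrete: the two numbers most BC&DR plans hinge on are RTO (how quickly service comes back) and RPO (how much recent data you can afford to lose). Here is a minimal sketch of checking drill results against such targets – the target values and the drill record below are made up for illustration:

```python
# A minimal sketch, assuming you record the vendor's BC&DR drill results
# yourself. The targets and the sample drill are illustrative only.
from dataclasses import dataclass

# Hypothetical targets agreed with the vendor.
RTO_MINUTES = 60   # Recovery Time Objective: max acceptable downtime
RPO_MINUTES = 15   # Recovery Point Objective: max acceptable data-loss window

@dataclass
class DrillResult:
    name: str
    recovery_minutes: float   # how long the failover drill actually took
    data_loss_minutes: float  # age of the newest data lost in the drill

def evaluate(drill: DrillResult) -> bool:
    """Return True if the drill met both objectives."""
    ok = (drill.recovery_minutes <= RTO_MINUTES
          and drill.data_loss_minutes <= RPO_MINUTES)
    status = "PASS" if ok else "FAIL"
    print(f"{drill.name}: {status} "
          f"(recovery {drill.recovery_minutes}/{RTO_MINUTES} min, "
          f"data loss {drill.data_loss_minutes}/{RPO_MINUTES} min)")
    return ok

if __name__ == "__main__":
    # This sample drill recovers fast enough but loses too much data – FAIL.
    evaluate(DrillResult("Q3 failover drill",
                         recovery_minutes=45, data_loss_minutes=20))
```

Whatever form your tracking takes, the point is the same: a plan without measured, regularly repeated drill numbers is a document, not a capability.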

Well, the first set of questions you should probably address to yourself, or to the IT team responsible for BC&DR. Interestingly enough, for many companies, especially small ones, the answers from the internal team are likely to be much weaker than those from offshore. And as a matter of fact, for some, offshore is the BC&DR plan.

After you have covered the two main points, you need to check the route between them. There are many aspects to connectivity between offshore and onshore. Things can get lost on the way, and connectivity may drop, sometimes for quite a while if major cables are damaged, as happened with India a couple of years ago. But even more important is making sure that what was sent to you is indeed what you expected, and that what you received is what was sent. Wrong code pushed to production (it happens even to Google) is sometimes a more serious disaster than an interruption in service. That gets into an area that deserves plenty of discussion by itself – QA, and in particular acceptance testing. I will cover it in a couple of future posts.
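
The "what you received is what was sent" part is at least mechanically checkable. Here is a minimal sketch, assuming the vendor publishes a SHA-256 digest alongside each delivery (the file name and digest below are hypothetical):

```python
# A minimal sketch: verify that a received delivery matches the digest
# the vendor published for it. File name and digest are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_delivery(archive: Path, expected_digest: str) -> bool:
    """Compare the actual digest with the one the vendor announced."""
    actual = sha256_of(archive)
    if actual != expected_digest:
        print(f"MISMATCH: {archive} – expected {expected_digest}, got {actual}")
        return False
    print(f"OK: {archive}")
    return True

# Usage: check against a digest sent through a separate channel (e.g. email),
# so a corrupted or tampered transfer can't also corrupt the checksum.
# verify_delivery(Path("release-1.4.2.tar.gz"), "3a7bd3e2360a...")
```

This catches transfer corruption and substitution; whether the delivery does what you expected is the job of acceptance testing, which, as mentioned, deserves its own posts.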

One more major aspect to consider – what if the relationship between you and the vendor goes sour? In one way or another – you did something wrong, the vendor decided to rob you of your IP, etc. There are plenty of sad examples. That kind of disaster is the most difficult to deal with. And since they are as unpredictable as Acts of God, you should have a solid BC&DR plan for that as well, starting with a solid contractual framework, appropriate and frequent archiving, and so on. Yet you can't be ready for everything – losing key resources, knowledge transfer costs, etc. Not easy to deal with, sometimes practically impossible. However, there is good news: when it comes to these kinds of diseases, I know a very reliable medicine – Disposable Outsourcing.
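
On the "frequent archiving" point, one cheap safeguard is keeping your own dated mirror of the vendor's code repository, so a falling-out never locks you out of your own IP. A minimal sketch, assuming the vendor works in a Git repository you can clone (the URL and paths are placeholders):

```python
# A minimal sketch: keep dated local archives of the vendor's repository.
# The repository URL and archive directory are hypothetical placeholders.
import subprocess
from datetime import date
from pathlib import Path

VENDOR_REPO = "git@vendor.example.com:project/main.git"
ARCHIVE_DIR = Path("/backups/vendor-archives")

def archive_repo() -> Path:
    """Clone a bare mirror of the vendor repo into a dated directory."""
    target = ARCHIVE_DIR / f"main-{date.today().isoformat()}.git"
    target.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["git", "clone", "--mirror", VENDOR_REPO, str(target)],
        check=True,  # raise immediately if the clone fails
    )
    return target

if __name__ == "__main__":
    print(f"Archived to {archive_repo()}")
    # Run this on a schedule (e.g. nightly from cron) so a soured
    # relationship never leaves you more than a day behind.
```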
