A Few Words on UAT

This post continues the topic I started a few months ago – using QA to prevent serious issues with offshore deliverables. In particular, I'd like to cover User Acceptance Testing (UAT).

For many software professionals UAT has a clear definition and lucid goals and objectives, yet this understanding varies a great deal between professionals and organizations, even at the most foundational level. In the professional services engagements I had the pleasure to participate in, UAT served as the final sign-off by the buyer of the software deliverable. In my new organization UAT plays a key role in the SDLC, acting as a final gate before release to production. In many organizations UAT is interpreted as a smoke test performed by users at each milestone, to make sure that the users' requirements were properly understood.

Whatever the test performed in your organization under the UAT label, it is probably an important part of your SDLC, and I am not disputing its value. Nor am I an abbreviation purist demanding that the term UAT be used only for its original purpose. I do think it is important to cover the participation of users in the acceptance of deliverables from offshore, and just for the sake of this post let me call those testing activities UAT.

Offshore QA and a Sly Fox

A fairly common model for SaaS companies working with an offshore vendor is the black box model – requirements collected locally go to the offshore team, and production-ready code comes back, sometimes in the form of binaries. There are variations on that model, all with the same common thread – full responsibility for development of the application and its quality assurance belongs with the vendor.

Can this approach work with an arbitrary software development shop? Absolutely! As a matter of fact, that is the model used by all ISVs that do not employ offshore, so the model works for sure. The question is whether the offshore components of that model make a difference worth discussing, and unfortunately they do. The fundamental laws of outsourcing (FLO) affect the efficiency and reliability of the model to a great extent, often making it completely unreliable.

There are so many things that can go wrong inside the proverbial black box, turning it into more of a Pandora's box:

  • Communications issues and information loss at every handover
  • Ever deteriorating quality of the resources
  • Inevitable deterioration of the quality of code
  • Growing blind spots in test coverage
  • And so on – you can continue this list ad nauseam

Offshore BC & DR

Thinking about Nostradamus' predictions for 2012 and all the cataclysms that will strike that year? Afraid, and developing your bulletproof Business Continuity and Disaster Recovery (BC&DR) plans? Well, if meteors, super-volcanoes, and melting ice caps flatten, burn, and flood most of humanity, those BC&DR plans won't matter much. However, increasingly powerful floods, hurricanes, and earthquakes with enormous tolls remind us of the vulnerability of even rich nations such as the USA, Canada, or China…

As I pointed out in Force Majeure, working with offshore organizations increases the risk of substantial losses, which brings up the importance of having solid BC&DR plans. Of course, if you are working with a mature offshore partner, they will have their own BC&DR plans. That's great, with one important caveat – will those BC&DR plans work when needed?

It was not long ago that the supposedly invincible 365 Main Data Center in SF went lights out for a considerable period of time after a scheduled (!) blackout. So there are a number of questions you need to answer:

  • Does your offshore vendor have solid BC&DR plans?
  • How often are they tested?
  • What are the KPIs / metrics associated with these plans?
  • Are those metrics sufficient for your business?
  • Who audits these plans and activities? (You won't take the vendor's word on it, will you?)
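To make the KPI question concrete, here is a minimal sketch of the kind of metric check such an audit might automate. The RPO/RTO figures and timestamps are hypothetical – your actual targets come from your own BC&DR plan:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical targets; real values come from your BC&DR plan.
RPO = timedelta(hours=4)   # maximum tolerable data loss
RTO = timedelta(hours=8)   # maximum tolerable downtime

def rpo_met(last_backup: datetime, now: datetime, rpo: timedelta = RPO) -> bool:
    """True if the most recent backup is fresh enough to honor the RPO."""
    return (now - last_backup) <= rpo

# Example: a 2.5-hour-old backup meets a 4-hour RPO.
now = datetime(2012, 1, 1, 12, 0, tzinfo=timezone.utc)
last_backup = datetime(2012, 1, 1, 9, 30, tzinfo=timezone.utc)
print(rpo_met(last_backup, now))
```

The point is not the code itself but that the metrics are measurable at all – if neither you nor the vendor can produce the numbers, the plan is untested by definition.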

Well, probably the first set of questions you should address to yourself, or to your IT team responsible for BC&DR. Interestingly enough, for many companies, especially small ones, the answers from the internal team are likely to be much weaker than those from offshore. And as a matter of fact, for some, offshore is the BC&DR plan.

After you have covered the two main points, you need to check the route between them. There are many aspects to connectivity between offshore and onshore. Things can get lost on the way, and connectivity may drop – possibly for quite some time if major cables are damaged, as happened with India a couple of years ago. But even more important is to make sure that what was sent to you is indeed what you expected, and that what you received is what was sent. Wrong code pushed to production (it happens even to Google) is sometimes a more serious disaster than an interruption in service. That gets into an area that deserves a lot of discussion by itself – QA, and in particular acceptance testing. I will cover it in a couple of posts in the future.
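Verifying "what you received is what was sent" is the one part of this that is cheap to automate. A common approach – sketched below with made-up artifact contents – is for the vendor to publish a cryptographic digest alongside each build, which you recompute on receipt before anything goes near production:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# The vendor publishes this digest alongside the build artifact.
artifact = b"release-1.4.2 binaries"   # hypothetical deliverable
published = digest(artifact)

# On your side: recompute and compare before deploying.
received = artifact  # the bytes that actually arrived
if digest(received) != published:
    raise RuntimeError("artifact corrupted or altered in transit")
```

A digest only proves the bits arrived intact, not that the build contains what you agreed on – that second question is exactly what acceptance testing is for.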

One more major aspect to consider – what if the relationship between you and the vendor goes sour? In one way or another – you did something wrong, the vendor decided to rob you of your IP, etc. There are plenty of sad examples. That kind of disaster is the most difficult to deal with. And with such events being as unpredictable as acts of God, you should have a solid BC&DR plan for them as well – starting with a solid contractual framework, appropriate and frequent archiving, and so on. Yet you can't be ready for everything: losing key resources, knowledge transfer costs, and the like are not easy to deal with, sometimes practically impossible. However, there is good news – when it comes to these kinds of diseases I know a very reliable medicine: Disposable Outsourcing.