Offshore QA and a Sly Fox

A fairly common model for SaaS companies working with an offshore vendor is the black box model – requirements collected locally go to the offshore team, and code ready for production comes back, sometimes in the form of binaries. There are variations of that model, all with the same common thread – full responsibility for development of the application and its quality assurance belongs with the vendor.

Can this approach work with an arbitrary software development shop? Absolutely! As a matter of fact, that is the model used by every ISV that does not employ offshore, so the model certainly works. The question is whether the offshore components in that model make a difference worth discussing, and unfortunately they do. The fundamental laws of outsourcing (FLO) affect the efficiency and reliability of the model to a great extent, often making it completely unreliable.

There are so many things that can go wrong inside the proverbial black box that it turns into more of a Pandora’s box:

  • Communications issues and information loss at every handover
  • Ever-deteriorating quality of the resources
  • Inevitable deterioration of the quality of code
  • Growing blind spots in test coverage
  • And so on – you can continue this list ad nauseam

The problem with the black box is that there are no natural forces that counter the advancement of these negative trends. Many (alas, not all) offshore vendors employ various techniques and tools to control the trends and prevent the decay, yet those are external, artificial tools, while the trends are powered by powerful natural forces. FLOs are laws of nature akin to gravity; to counter them you need rather impressive structures. Just consider a fairly simple example – quality of resources:

The economics of offshore push vendors to seek resources at the lowest cost they can still sell to the consumer, and that starts the deterioration spiral. A progressive vendor may start by bringing in all-star contributors and attempt to market itself as a niche boutique shop. Facing increasing competition, the challenge of retaining all-stars, and the desire to grow (these are the natural forces I mentioned) will continuously push the vendor to consider some percentage of less-than-top-notch resources. With the first less-than-perfect hire the vendor starts down the proverbial slippery slope… Anyway, I am afraid I am going off on a tangent here.

While the vendor is dealing with these serious internal issues, they still need to deliver the product. One of the obvious methods of dealing with it is installing perimeter control – quality assurance that does not allow a product whose quality falls below some predetermined benchmark to flow out of the box. And the key here is “predetermined benchmark”, or standards of quality. Who defines what those standards are? If it is the team inside the black box, you get a classic case of the Fox Guarding the Henhouse (FGH). The results can be quite dramatic; however, if the issues are below the surface or well covered, it will take some time for them to be discovered.
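To make “perimeter control” concrete, here is a minimal sketch of what an automated quality gate might look like. The metric names and thresholds are hypothetical, chosen purely for illustration – the whole point is that the benchmarks must be set outside the black box, not by the team delivering the code:

```python
# Minimal sketch of a release quality gate.
# All metrics and thresholds below are illustrative assumptions,
# not recommendations from the post.

from dataclasses import dataclass


@dataclass
class ReleaseMetrics:
    open_critical_defects: int   # unresolved severity-1 bugs
    test_pass_rate: float        # fraction of test cases passing
    requirements_covered: float  # fraction of requirements with tests

# Predetermined benchmarks, defined by the customer -- not the vendor.
BENCHMARKS = {
    "max_open_critical_defects": 0,
    "min_test_pass_rate": 0.98,
    "min_requirements_covered": 0.90,
}


def passes_quality_gate(m: ReleaseMetrics) -> bool:
    """Return True only if the release clears every benchmark."""
    return (
        m.open_critical_defects <= BENCHMARKS["max_open_critical_defects"]
        and m.test_pass_rate >= BENCHMARKS["min_test_pass_rate"]
        and m.requirements_covered >= BENCHMARKS["min_requirements_covered"]
    )


if __name__ == "__main__":
    candidate = ReleaseMetrics(open_critical_defects=2,
                               test_pass_rate=0.99,
                               requirements_covered=0.85)
    print("Ship it" if passes_quality_gate(candidate)
          else "Blocked at the perimeter")
```

A gate like this is only as honest as whoever defines and measures the inputs – which is exactly the FGH problem described above.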

There are a few steps I would recommend considering to mitigate the FGH phenomenon:

  • Put QA in the hands of another, or even a competing, vendor. Just a couple of days ago I talked with a friend of mine who outsources his development and QA activities to two companies, both based in Latin America (one in Argentina, one in Brazil). Both companies do development and QA; QA for the code produced in Brazil is done in Argentina, and the other way around. That process has been in operation for over a year and is working quite well.
  • Put QA into your own hands. Interestingly enough, many companies outsource QA first; that is a very dubious practice – take a look at some of my thoughts in Pros & Cons of Outsourcing QA. I believe that strong QA skills are less of a commodity than development skills. Considering the greater need for communication skills, understanding of the domain, and other attributes important for a QA team, you can see why keeping QA in-house can be quite meaningful.
  • As a minimum, consider rigorous acceptance testing performed by business users. That is the least preferred option; there are too many reasons why UAT fails to deliver on its objectives.
  • Perform QA at every major milestone and on all significant artifacts. QA in this case means quality assurance in its true sense rather than the trivial interpretation (“testing”). QA should include ambiguity review of requirements documents, architecture review and analysis, code review, audits, etc.
  • Control vendor performance via a variety of metrics. I covered some of them in an earlier post []. Of course, metrics are complementary to the other steps you need to take, such as those above; a sketch of one such metric follows this list.
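As an illustration of the kind of metric that can be tracked, here is a small sketch computing a defect escape rate per release. The metric choice, the counts, and the 20% flagging threshold are my own example, not something prescribed in the original post:

```python
# Sketch: tracking a vendor's defect escape rate over releases.
# (Illustrative assumption; the post does not prescribe a specific
# metric or threshold.)

def escape_rate(found_in_qa: int, found_in_production: int) -> float:
    """Fraction of all known defects that slipped past the vendor's QA."""
    total = found_in_qa + found_in_production
    return found_in_production / total if total else 0.0

# Hypothetical per-release defect counts: (found in QA, found in production)
releases = {
    "1.0": (40, 5),
    "1.1": (35, 9),
    "1.2": (30, 14),
}

for version, (qa, prod) in releases.items():
    rate = escape_rate(qa, prod)
    flag = "  <-- investigate" if rate > 0.20 else ""
    print(f"release {version}: escape rate {rate:.0%}{flag}")
```

A steadily rising escape rate across releases is exactly the kind of deteriorating trend that stays invisible inside the black box until a customer finds it for you.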

Interestingly enough, taking the fox out of the henhouse may accomplish something rather unusual – keeping the chickens alive and the fox well fed and happy…
