“Outsource It!” is now in beta

A couple of days ago my first full-size book went into beta and is now available on the publisher's website – http://pragprog.com/book/nkout/outsource-it. I feel happy and relieved that the book is finally out; writing it was far more challenging than I had ever anticipated. At the same time I feel proud – proud to be one of the authors of the Pragmatic Bookshelf, a group of technology writers that has earned respect across a very broad and demanding technical audience.

It will take a little while before the book hits the shelves of Amazon and other bookstores, but you don't have to wait – you can get your e-copy today. While the book is in beta, your comments and suggestions will be taken quite seriously and could result in changes and additions to the content, hopefully making the book even better. I am not sure how long the beta will take, but hopefully much less time than it took me to get here –

Roughly two and a half years ago I came up with what seemed like a great idea at the time – compiling my blog material into an easy-to-read eBook. In a couple of months I produced the first volume, dedicated to deciding whether and how to outsource. In short order I received substantial feedback that made it apparent that just recompiling the blog and doing a surface-level cleanup wouldn't add much value, and probably was not worth the effort.

Charting a Map to Disposable Outsourcing

Have you heard of PMBOK? In case you have not – the acronym stands for Project Management Body of Knowledge. PMBOK is a very comprehensive document that covers PM processes, procedures, methodologies, and techniques promoted by the Project Management Institute (PMI), a well-respected organization…

A few months ago I asked two PMs on my team to develop a road map for ensuring that the offshoring relationships we have in place are indeed 100% disposable. A very aggressive goal, considering the small size of our onshore organization, particularly juxtaposed with the size of our offshore operations. Almost instantly the project was nicknamed OCBOK – Offshore Contingency Body of Knowledge. Of course, building a real OCBOK would take much more than two part-time project managers (even exceptionally good ones) and a couple of offshore teams, so we could only start outlining its structure and putting some preliminary content into a few chapters. It would be great to get it to the PMBOK level, or at least some approximation of it. For that I need all the help I can find – if you have any resources, ideas, or suggestions that you think could be applicable, please send them my way. Feel free to comment on this post or email me at krym2000-po@yahoo.com.

In the meantime, here are a few interesting observations from writing the first chapter of OCBOK, which would be called “You are more dependent than you think” –


Path toward Disposable Outsourcing: QA

Is there a better way to start a new year than writing a blog post? Of course there are plenty, but it just so happens that I have one ready in the nick of time. So happy New Year – may it bring you success, prosperity, health, and happy outsourcing…

There is probably no easier way to introduce outsourcing into a software development organization than QA augmentation. That simplicity is actually deceiving, and many companies pay a high price for it. Check out my earlier post Pros and Cons of Outsourcing QA for more thoughts and tips. Despite its cons, outsourcing QA remains extremely popular and thus should be considered for operation under DOM. Also, in my view DOM is relatively easy to implement in QA outsourcing engagements and thus might be the best first step in embracing DOM.

The path towards DOM in QA has many steps similar to those covered in Path toward Disposable Outsourcing: S/W Development. There are, of course, some subtle differences and specific steps important for Quality Assurance. Here are a few of the most important items:

  • Strict rules on bug submission and documentation. Some of the rules are enforced by bug tracking software. Depending on the sophistication of the tool you use, you may find substantial room for creativity – which is often not a good thing. I recommend well-defined rules and templates that spell out all components of the bug report. For example, the bug title has to clearly identify the issue.

How many times do we have to say that, and yet “Problem when loading the app” keeps showing up… The standards need to be spelled out, delivered to the team, and rigorously enforced. One approach to dealing with violations is to put the bug in a “Feedback” or similar status and require the submitter to deliver the appropriate content before the bug enters the rest of the workflow.

Considering the abundance of QA workforce, being brutal could be the way to go: “First time it’s your fault, second time it’s mine, and third… well, there is no third time.” Control can be applied to every bug or, less reliably, on a spot-check basis – and a lightweight submission gate like the sketch below can automate much of it.
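
For illustration, here is a minimal sketch in Python of such a submission gate (the field names and the dict representation are hypothetical and tracker-agnostic – adapt it to whatever your bug tracking tool exposes): it checks that the mandatory components of the bug report are present and that the title is specific enough, and bounces incomplete reports back to the submitter via a “Feedback” status.

    import re

    # Hypothetical list of the fields our bug-report template requires.
    REQUIRED_FIELDS = ["title", "steps_to_reproduce", "expected_result",
                       "actual_result", "environment", "severity"]

    # Title patterns we never want to see again ("Problem when loading the app"...).
    VAGUE_TITLE = re.compile(r"^(problem|issue|error|bug|does not work)\b", re.IGNORECASE)

    def review_submission(bug: dict) -> dict:
        """Return the bug with an updated status and a list of complaints for the submitter."""
        complaints = [f"missing or empty field: {name}"
                      for name in REQUIRED_FIELDS
                      if not str(bug.get(name, "")).strip()]
        title = str(bug.get("title", ""))
        if VAGUE_TITLE.match(title) or len(title.split()) < 4:
            complaints.append("title does not clearly identify the issue")
        # Incomplete reports go back to the submitter instead of entering the workflow.
        bug["status"] = "Feedback" if complaints else "Open"
        bug["review_notes"] = complaints
        return bug

    if __name__ == "__main__":
        weak = review_submission({"title": "Problem when loading the app", "severity": "2"})
        print(weak["status"], weak["review_notes"])   # -> Feedback, plus the list of gaps

Whether this runs on every bug or only on spot checks is a policy decision; the point is that the rules are executable rather than tribal knowledge.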

  • Regression and other functional test case suites should be developed at just the right level of detail. Executing tests should engage testers’ brains, not just their fingers; that ensures quick learning of the application while maintaining knowledge. Producing a large volume of testing documentation is not necessarily going to speed up a transition – as a matter of fact, it is often rendered useless and abandoned by the new team. A simple rule of thumb: a QA engineer should be able to execute the existing test cases in two to three days and should be fully productive in no more than two weeks.

One technique that has worked well in my experience is producing test cases at two levels. The first is a high level that is typically linked to a single use case; the second spells out the details in a traditional test case format. This approach allows more experienced QA engineers to use the high-level test cases and keeps them engaged, while the detailed test cases provide step-by-step instruction for new team members. A rough sketch of the two-level structure follows.
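
This is only an illustration of the structure, in Python (the field names and the sample case are hypothetical – most test management tools can model something equivalent): the high-level case is a one-line scenario tied to a use case, and each detailed case expands it into traditional step / expected-result pairs.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DetailedCase:
        """Traditional test case: step-by-step instructions for a new team member."""
        case_id: str
        steps: List[str]
        expected: List[str]          # one expected result per step

    @dataclass
    class HighLevelCase:
        """One-liner linked to a single use case; enough for an experienced tester."""
        use_case: str
        scenario: str
        details: List[DetailedCase] = field(default_factory=list)

    # Hypothetical example: the same scenario expressed at both levels.
    login_check = HighLevelCase(
        use_case="UC-12 User login",
        scenario="Locked-out user sees a clear message and no session is created",
        details=[DetailedCase(
            case_id="UC-12-03",
            steps=["Open the login page",
                   "Enter credentials of a locked account",
                   "Click 'Sign in'"],
            expected=["Login form is shown",
                      "Fields accept the input",
                      "Error 'Account locked, contact support' is shown; no session is created"],
        )],
    )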

  • Test data must be stored in a source control system. If it is produced in some automatic way, only the data generation scripts should reside in source control. This is critically important: you should be able to generate the entire suite of test data out of the source control system for a specific version of the application – just treat test data as part of the source code. I have to note that this task might require some of the top developers on the team, as it calls for an in-depth understanding of the schema / object model as well as solid coding techniques.

Getting test data to that level late in the project cycle appears to be a daunting task. It is, however, important and should be done even if it impacts the schedule. Savings down the line will more than pay for the immediate losses, even if you never need to execute on DOM. A minimal sketch of a versioned, deterministic data generation script is below.
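
As an illustration only (the table layout, file name, and fixed seed are assumptions), here is what such a script might look like in Python: because the script lives in source control next to the code and the generation is seeded and deterministic, checking out a specific application version lets you regenerate exactly the test data that version expects.

    import csv
    import random

    # Keep the seed under source control so regeneration is deterministic per version.
    SEED = 20121115
    APP_VERSION = "3.4.0"   # hypothetical; ideally taken from the same tag you build from

    def generate_customers(count: int = 100):
        """Yield synthetic customer rows; same SEED always yields the same rows."""
        rnd = random.Random(SEED)
        for i in range(1, count + 1):
            yield {
                "customer_id": i,
                "name": f"Test Customer {i:04d}",
                "segment": rnd.choice(["retail", "smb", "enterprise"]),
                "credit_limit": rnd.randrange(1_000, 50_000, 500),
            }

    def write_suite(path: str = "test_data_customers.csv") -> None:
        rows = list(generate_customers())
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
        print(f"Generated {len(rows)} rows for app version {APP_VERSION} into {path}")

    if __name__ == "__main__":
        write_suite()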

Test automation combines software development and testing techniques. Developing test harnesses, frameworks, and automated test cases is one of the most challenging tasks in application development. Unfortunately, labeled “QA”, it rarely gets the attention it deserves.

Path toward Disposable Outsourcing: S/W Development

There are many important aspects of the SDLC related to s/w development activities which should be implemented whether you outsource or not. Some of them are essential to DOM. Your intermediary, whether internal or external, must verify that these steps are taken, and not just as a checkmark on an SDLC compliance list – they have to be taken consistently and to a degree that satisfies their intent.

The first is code standards. Following the language's naming conventions of course goes without saying; there are a few more standards that have to be diligently followed:

  • All names are in English (classes, variables, methods, etc.) – ALL of them.
  • A sufficient level of comments, in proper, grammatically correct English. Developers must understand that they are not required to write essays; they just have to make the comments unambiguous (see the sketch after this list).
  • The same applies to headers, check-in notes, etc.
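
A tiny illustration of what these standards are after, as a hypothetical Python fragment (the point is the names and the comments, not the logic): every name is in English, and the comments add exactly the information a reader cannot get from the code itself.

    # Non-compliant: name not in English, comment merely restates the code.
    def rasschitat(s, k):
        return s * k  # multiply

    # Compliant: English names everywhere, short but unambiguous documentation.
    def calculate_invoice_total(subtotal: float, tax_rate: float) -> float:
        """Return the invoice total including tax.

        tax_rate is a fraction (0.08 means 8%), not a percentage.
        """
        return subtotal * (1 + tax_rate)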

Next is documentation. Creating documentation that can be used to learn about the code and its intent, that doesn't lose currency and go stale, and that doesn't cost you an arm and a leg is not a trivial exercise. As a matter of fact, detailed design / technical design documentation is one of the most controversial topics in s/w development methodology. To a large degree the documentation's level of detail depends on the SDLC model employed; in particular, the level of detail decreases considerably with the level of agility of the process. That is often exacerbated by the low maturity of organizations electing an agile methodology. I do not want to get too deep into this topic at this point, just to point out several mandatory elements:

  • High-level functional and technical design documentation.
  • Functional and technical design documentation at the detailed level; the specific artifacts depend on the SDLC methodology, the type of project, the rate of change, and many other organizational specifics.
  • Comments in the code written in a standard way that allows JavaDoc or similar tools to generate meaningful documentation – one of the most important steps. The same goes for the DB schema. (A sketch follows this list.)
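
To make the last point concrete, here is a hedged sketch in Python (the class and its fields are hypothetical): the docstrings follow a standard structure, so a tool such as pydoc or Sphinx can generate reference documentation straight from the code. In a Java code base, JavaDoc comments on classes and methods play the same role, and comments attached to DB tables and columns do the same for the schema.

    class PurchaseOrder:
        """A customer purchase order as seen by the fulfillment subsystem.

        The order is immutable once it reaches the ``approved`` state; any
        change after that point must be expressed as a new amendment order.
        """

        def approve(self, approver_id: str) -> None:
            """Move the order to the ``approved`` state.

            :param approver_id: login of the employee approving the order
            :raises ValueError: if the order is empty or already approved
            """
            ...  # implementation elided in this sketch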

A couple of relevant notes here:

  • Waterfall-style processes, with their high degree of documentation detail, staged delivery, and isolated hand-offs, work naturally with DOM, in particular when the vendor offers a higher level of CMMI maturity.
  • Agile methodologies work exceptionally well with DOM unless they are taken superficially. That becomes particularly clear in the attitude towards documentation. It’s amazing how many times I have heard things like “we run an agile development process, so we do not do documentation” – never from anyone who understands agile, though.
  • In order to define an appropriate level of documentation for your process, you need to continuously evaluate the value of the documentation for the process and for executing on DOM vs. the cost of producing and supporting it.

Next, in no particular order, some great development practices that have been proven to work under a broad range of models, from clean-room waterfall to XP:

  • Unit tests written before the code, at best taken all the way to Test-Driven Development (see the sketch after this list). Take a look at www.testdriven.com, a site with a lot of good references by Eric Vautier and David Vydra.
  • Continuous Integration. There is plenty of info on CI and supporting tools. In CI builds I strongly recommend including smoke tests, a subset of the unit test suite, and a number of management reports. CI scope is typically different for check-in runs and nightly builds.
  • Code Review. A somewhat controversial technique that might backfire if not performed properly; I strongly recommend using tools to facilitate code reviews – in particular, I suggest Crucible.
  • Frequent progress reviews with live demos; I recommend at least bi-weekly.
  • Collective Code Ownership (CCO). There is, however, plenty of controversy associated with this practice, in particular around accountability. I see huge value in the elimination of blind spots that CCO offers and recommend introducing a “feature” or “area” lead. Under that model CCO is still taken to its full extent when it comes to work unit allocation, and yet there is a single point of contact for each “area”.
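
To make the first two items a bit more tangible, here is a minimal sketch using pytest (the function under test and the marker name are my own assumptions, not a prescription): the tests are written first and fail until calculate_late_fee is implemented, and the "smoke" marker lets a check-in CI run execute only the fast subset (pytest -m smoke, with the marker registered in pytest.ini) while the nightly build runs everything.

    import pytest

    # Tests written first (TDD): they fail until calculate_late_fee below exists.
    @pytest.mark.smoke            # included in the fast check-in CI run
    def test_late_fee_is_zero_within_grace_period():
        assert calculate_late_fee(days_overdue=3, grace_days=5) == 0.0

    def test_late_fee_accrues_after_grace_period():
        assert calculate_late_fee(days_overdue=7, grace_days=5, daily_fee=1.5) == 3.0

    # Production code added only after the tests above are in place.
    def calculate_late_fee(days_overdue: int, grace_days: int, daily_fee: float = 1.0) -> float:
        """Fee accrued for each day past the grace period; never negative."""
        return max(0, days_overdue - grace_days) * daily_fee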

Steps towards Disposable Outsourcing

If you imagine a graph of the extended cost of outsourcing (cost that includes your out-of-pocket expenses and the internal cost associated with enabling outsourcing) over time, you will see substantial spikes at the beginning and end of the relationship. Minimizing or eliminating these spikes is one of the goals of DOM; the first spike, of course, cannot be avoided, but its cost can be minimized or eliminated on subsequent engagements. There are other benefits of the model related to quality of work, the impact of staff turnover, and so on. DOM can be applied to many engagement models and is by all means worth the effort you put into getting to it. Below are some of the best practices that help immensely in building outsourcing relationships in a DOM manner.

General. These are some of the best practices which apply to an outsourcing engagement independently of its content, technology, and scope.

  • Multisourcing. The term is typically used when multiple vendors are used across engagements. The first, most important benefit of multisourcing comes from the “best tool for the job” model: a particular vendor can offer great services in .NET development, while another company has expertise in localization, and yet another in security monitoring. In terms of DOM, I recommend taking multisourcing to another level, specifically using multiple vendors for the same task – if, of course, the size of the engagement allows. For example, a QA team of 20 people can easily be outsourced to two vendors.
  • Cross-sourcing. I have to admit – there is no such term, and I could use some ideas on how to name the following best practice. The idea is to use different vendors at different stages of the SDLC; for example, development is done by one vendor and QA by another. The objectives here go beyond the “best tool for the job” model, aiming at the vendors keeping each other honest.
  • DOM Planning. You need to develop your plan for executing on DOM, in particular for engagement termination and switching vendors. Even very large initiatives and comprehensive long-term engagements sometimes have to undergo drastic changes. The level of planning and the complexity of the organization that supports it depend on the size of the outsourcing initiative.
  • Testing. I mean testing the DOM itself – verifying whether your outsourcing partner is indeed disposable. It's impossible to overestimate the importance of testing DOM. Compare it to Disaster Recovery planning: you may have a great plan, offsite storage of backup tapes, backup generators, etc., and yet if you have never tested the process, then when disaster strikes that will be a disaster – the generators won't start, the backup tapes won't restore, and so on. An example of efficient testing could be using two teams on the same engagement (see multisourcing above) and switching the teams. And disaster can happen to anyone – see Satyam banned from offshoring work with World Bank, and consider what the World Bank had to go through.

There is inherent inefficiency in each of these practices. This inefficiency is of the same nature as the inefficiency of Disaster Recovery Planning. Why would someone invest in a second data center, backups, etc.? Sorry for one of those “blindingly dumb questions” – just consider the expense and the impact on the business that a large outsourcing initiative gone sour could cause.

Methodology. There are many best practices that apply to the methodology and/or process of product / service development which also contribute to building DOM. Using the software development lifecycle (SDLC) as an example, I would like to point out just a few:

  • A Project Management Office and many PMBOK-style project management techniques
  • A pragmatic approach to documentation – producing just enough of it
  • Templates and style guidelines across all development artifacts, e.g. naming conventions
  • Automated build & deploy, Continuous Integration, and QA automation
  • Many techniques from agile methodologies, such as Test-Driven Development
  • Comprehensive tooling for SDLC management, such as requirements management tools
  • Wikis, MS SharePoint, and other project collaboration and knowledge management tools

This list to a large degree falls outside the scope of this blog, as it relates more to building a solid, sustainable software development process independently of outsourcing. I will cover some of its most important elements in a separate post (or posts).

Infrastructure. As with the methodology above, there are many best practices that apply to creating an efficient infrastructure for supporting outsourced activities that also cater well to DOM. And, as with the list above, it to a large degree falls outside the scope of this blog. I will cover some of the most important elements of building outsourcing infrastructure in a separate post (or posts).

Disposable Outsourcing: Caveat Emptor

Like any other outsourcing model, Disposable Outsourcing has its pros and cons, and there are a few caveats worth discussing. Probably the most important one is the role of the intermediary.

First, an intermediary is not mandatory – it is one of the best practices. There are a few tangible benefits the middleman offers here, the most important being isolation. Establishing a working relationship with each vendor is a unique process; take, for example, the MSA – the likelihood is it will be dramatically different from one vendor to another, and even if you use your own template the chances are you will have to negotiate different clauses. There are plenty of other activities, such as setting up the environment, associate interviewing / screening, security, etc., that require a significant investment. Using an intermediary shields you from many of those. Isn't it very similar to using some well-known design patterns (adapter, façade)?

Another important reason for having an intermediary is a conflict of interest: it is unreasonable to expect that a vendor would make itself dispensable – as a matter of fact, most outsourcing companies are known for being extremely skilled at making themselves indispensable. An intermediary specifically charged with the task of enabling DOM has the responsibility and a vested interest in making it happen.

There are more reasons; I'll mention just one more – the intermediary is critical in supporting you through offshore enablement and transition. These activities, even with the help of a third party, are very challenging. In many cases that support alone justifies using a third party.

Now back to the caveat: finding a good intermediary is absolutely critical to making DOM work, and it is not at all easy. There are a few big names in the industry which play this role, in particular IBM, which offers resources from smaller companies all over the world via its outsourcing wing. To a large degree that appears to be the best possible combination; unfortunately, it comes with the IBM price tag, as well as with many of the liabilities of a large company. Finding a properly sized, reliable, and competent intermediary is the most significant challenge of the model.

In case you cannot find a good intermediary, the best approach is to build the intermediary team in-house. That could be less expensive than using a 3rd party. The main challenge is finding people with the appropriate skill set, experience, and knowledge. Another challenge is setting up the group in a way that gives it the authority and flexibility to do the right thing.

Another caveat of the model is the very significant challenge of creating the processes and procedures that enable DOM. That is an especially serious challenge for small to mid-sized companies and companies with low process maturity in the CMMI or ISO sense. I will cover the main dimensions and steps of what it means to “get it right” in a separate post. To some degree the intermediary can be a great help in this process, yet still the bulk of the load falls on the customer.

One more important caveat (or should I say trap) associated with DOM is complacency. As I mentioned, running DOM creates positive energy, has many benefits, and gets the vendor to perform better. So inevitably you start seeing the vendor as a solid partner, not someone who would ever need to be replaced… and the model starts to deteriorate: corner-cutting gets more aggressive, little issues accumulate (entropy always increases), and before you know it you are tethered to the vendor and eventually at their mercy. The disposability is gone, along with its benefits, and the only thing you have left is a higher cost for the resources than you could have gotten in the first place. Again, to some degree the intermediary can be a great help in preventing this from happening, yet still the bulk of the load falls on you.

Disposable Outsourcing

The objective of the Disposable Outsourcing Model (DOM… and it has nothing to do with XML) is minimizing the risks associated with the outsourced elements of an engagement. The idea is quite simple: design the engagement model in a way that the offshore partner can be quickly replaced, say in a matter of weeks, without significant impact to the engagement. Plug and play, so to speak.

That is, of course, easier said than done. However, every step towards DOM is a step towards reducing risk. DOM positioning also promotes much smoother execution of the contract, cleaner transitions between phases, uneventful termination, etc. To a large degree, having a DOM mindset pushes you to do a lot of things right, and that positively affects your projects far beyond the outsourced components. Since my initial introduction to the concept I have been perfecting my DOM approach and finding multiple benefits along the way.

Anyway, let me start with an example of a DOM implementation in my company. About two years ago I decided to build a small QA team on a DOM basis. I started by defining the objective of the project and the parameters of what DOM would mean. In a few words, my position was the following:

  • I am open to outsourcing ~10 FTE in the QA field under a Classic Augmentation model.
  • I do not care where the services come from. As a matter of fact, I am not particularly interested in the specifics of the companies that would provide them.
  • I do care about the quality of the individual contributors and the quality of service they provide.
  • In case I dislike someone, I want him/her taken off the project immediately. In case the offshore team fails to deliver the quality of service, I want them replaced in a very short time.
  • I have a clear understanding of the rates that I am prepared to pay for these services, and I am not about to consider any early termination penalties, etc.

The list of my “wants” (just a bit longer than the list above) did not seem ridiculous – I was not planning on getting rid of anyone just because of mood swings. However, realistically speaking, there were not too many companies in the world that would accept it. So the approach was to use an intermediate party that would take the contract, would not be invested in specific resources, and could shield me from all the issues I did not want to deal with. Finding such a partner turned out not to be a complex task. A few months later I negotiated and signed a contract with a company which would act as an intermediary between my team and the offshore provider(s). My partner would take on the risks / liabilities of maintaining the relationship, dealing with sourcing and termination, etc.

A few weeks later I was interviewing teams in China, India, Vietnam, Uruguay, etc. A few weeks after that I settled on a team, and we moved on.

At this point the main challenge was building a process for making the offshore team dispensable. Of course I was not planning to get rid of the team; I just had to be prepared for it and be able to execute it well. That turned out to be a fairly involved exercise, as we had to make sure that everything we did with the team to get them enabled / ramped up on our product was documented. Of course, we made producing the documentation an integral part of the process itself and tasked the offshore team with it. We did the same with developing test plans, all SOPs, etc. The mindset of “what if this partner is no longer here tomorrow?” pushed us to do things right. My partner's oversight and governance was an additional enforcement mechanism – basically keeping my team and the offshore provider honest. My partner was deeply vested in getting it right, as the cost of a transition would come straight out of their pocket.

Shortly after the start of our initial QA DOM exercise we kicked off another team, working on a development project under a Project-based Augmentation model.

We first realized the benefits of the approach when facing the inevitable turnover and “hiring mistakes”. It was surprising how quickly we were able to add new members to the team to replace losses or poor performers.

Our biggest payoff came when we had to replace one of the offshore teams – it was by all means a non-event. Well, far from everything went right, but compared to my previous experiences the difference was dramatic.

While two of our offshore teams were operating under the DOM paradigm, we had another two teams, involved in multiple aspects of our SDLC, working under similar augmentation models without the DOM ingredient. That gave me some limited material for comparison / analysis of the approach. Here are a few interesting observations characterizing our DOM experience:

  • The offshore vendor was constantly on their toes; that is of course a double-edged sword, but our assessment was that it was much more positive than negative.
  • Communications in general were better with the team working under the DOM model. The number of project conflicts and issues to be resolved between the onsite and offshore teams was substantially lower under the DOM model.
  • The DOM mindset turned out to be contagious in a good sense; attention to documenting the processes and appreciation of its value went up across the team, including parts of the organization that had little to do with outsourcing.
  • DOM notably increased our overhead in the initial stage of the project (compared to a regular outsourcing model); the overhead went down substantially after ~3 months. Compared to a regular project, I would say that overhead was lower under DOM over the long run.
  • The attitude toward our DOM team was substantially better than towards the team operating under regular Classic Augmentation. In particular, there was substantially less apprehension towards the offshore team members from the onsite employees, and, I would say, a higher sense of security.
  • Interestingly enough, we found that over time the offshore team started seeing some benefits of DOM and did not particularly perceive it as making them disposable; to some degree it came down to doing things right.

There were a few things that fall into the category of cons of DOM. Here are the most significant:

  • A very significant increase in overhead during project initiation.
  • The cost of the resources / rates is higher, other things being equal.
  • Working with / through an intermediary can be a real pain in the neck.

I would say the success and the value of the benefits that can be realized from the DOM model depend to a very high degree on the intermediary. Finding a good company to play that role is not a trivial exercise. There are a few other challenges worth discussing about DOM, and that deserves a standalone post…