Agile techniques and approaches are marching across the software landscape, conquering more and more territory. It was not the blitzkrieg the forefathers of agile dreamed of, but it is a successful invasion. Along with true agile aficionados and their well-thought-out, well-understood processes, crowds of software anarchists disguised as agile evangelists are taking organizations by storm. Picking selected items from the agile menu, they bask in the glory of self-respect, enjoying the coding / refactoring dance while leaving aside schedules, deliverables, and milestones. While religiously following 40-hour work weeks and iteration retrospectives, they enjoy the whooshing sound of deadlines as they fly by.
While agile has made a huge positive impact on software development, like any other broad initiative it created some serious problems and could not avoid significant collateral damage. One of the first victims of the brave new world was software metrics. Indeed, why do you need metrics in a self-monitoring organization with a clear measure of success, the working product? Even Tom DeMarco backed off from some of his ideas, so why shouldn't you? … And maybe that's why it took the agile community such a long time to figure out that test-driven development is ridiculously expensive…
Combining agile and offshore is not at all impossible. It is, however, complex and requires plenty of prerequisites, including serious agile maturity of the onshore team. You should think twice before eliminating the “overhead” of waterfall on your offshore projects. And specifically, forgoing metrics can be detrimental, bordering on juvenile delinquency.
Of course the topic of metrics is also complex and rather controversial. What to measure? How to measure? What to do with the results? Here are a few tips to consider:
What to measure?
For starters I recommend three areas: Productivity, Quality, and Compliance. There are many metrics you could suggest for each of these areas; to a large degree they depend on what data you can reasonably collect. Assuming that you 1) do not have comprehensive processes, 2) do not have sophisticated tools, and 3) use offshore in some form of T&M, you can consider the following metrics:
Offshore Productivity Ratio (OPR): OPR = productivity of the offshore team / productivity of the local team. Assuming that you have a good team locally, your OPR should be at least as high as the ratio of the fully burdened cost of offshore resources to the local cost. Let’s say that your local developer costs you $60 an hour, plus 20% benefits and 20% management overhead. Your offshore team offers you a $25 blended rate, and your overhead of managing offshore is 25%. That gives you a target of 25 × 1.25 / (60 × 1.2 × 1.2) ≈ 36%. That would give you a break-even scenario, which you probably won’t consider satisfactory, but it’s a good place to start.
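The break-even arithmetic above can be sketched as a small calculation; the rates and overhead percentages are just the illustrative numbers from the example, not benchmarks:

```python
# Break-even OPR: fully burdened offshore cost over fully burdened local cost.
# All inputs are the illustrative figures from the text, not real benchmarks.

def break_even_opr(offshore_rate, offshore_overhead,
                   local_rate, benefits, mgmt_overhead):
    """Return the OPR at which offshore exactly breaks even on cost."""
    offshore_cost = offshore_rate * (1 + offshore_overhead)
    local_cost = local_rate * (1 + benefits) * (1 + mgmt_overhead)
    return offshore_cost / local_cost

opr = break_even_opr(25, 0.25, 60, 0.20, 0.20)
print(f"break-even OPR = {opr:.0%}")  # -> break-even OPR = 36%
```

Any OPR above that break-even figure means the offshore team is producing more per dollar than the local team.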
Measuring quality is a very complex and controversial topic if quality is taken in a general sense; it becomes substantially simpler if you measure the quality of specific artifacts / deliverables along a very specific dimension. For example, measuring the quality of documentation by the number of bugs found during an ambiguity review, measuring the quality of code by the number of bugs found during regression testing, etc. Thus some fairly common metrics for quality are escape ratios (escape to staging, escape to production, etc.). Another common metric is reopen rate, which measures quality in the sense of “doing it right the first time”.
Compliance is an important area that covers multiple aspects of the vendor’s performance against the contract. To measure compliance I suggest establishing a few KPIs in the contract and tracking them against actuals. For example, you may want to track turnover rate or team ramp-up speed. If metrics are new to your organization, you should avoid tracking aspects that are complex to measure and/or have strong intangible components, e.g. satisfaction, knowledge transfer effectiveness, etc.
How to measure?
I find measuring productivity to be the most challenging task, yet it can be done in a few manageable steps:
Start with task tracking, both internally and externally. You can use a sophisticated system or something as simple as a slightly modified Bugzilla. What you want to get to is that every activity performed by your team can be recorded against a specific tracked item (bug / task / etc.). You may allow for some exceptions, such as admin time, meetings, etc., which could be recorded against several categories or one catch-all category.
Next is time tracking, again internally and externally. That will immediately highlight a lot, from organizational buy-in down to individual offenders. In my experience, introducing time tracking is more difficult with a local team, especially if it is an “engineering” rather than a “services” organization. I have seen a wide spectrum of responses: sabotage, malicious compliance, general acceptance, and, rarely, enthusiasm. Most offshore teams nowadays will have time tracking already in place. It is important to find a balance between the details your system captures and the effort it takes to keep it current. It is also great if you can link time tracking with task tracking.
Internal time tracking is essential for many reasons; one particularly relevant to offshore is establishing productivity benchmarks. Over time, as you collect sufficient actual data, you can rely on these benchmarks much more than on the estimates developers would produce.
I like the idea of linking tickets in a tree manner. For example, a feature can have tasks as children, bugs can be marked as children of a task / feature, and so on. That allows you to calculate the overall cost of a feature over its lifetime in terms of the time spent on it and on resolving issues around it.
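A minimal sketch of this tree-linked ticket idea; the ticket keys and hours below are hypothetical, and a real tracker would supply the data:

```python
# Tree-linked tickets: a feature has tasks as children, a task has bugs
# as children. Lifetime cost of a feature is its own logged hours plus
# those of all its descendants. Ticket keys and hours are made up.

from dataclasses import dataclass, field

@dataclass
class Ticket:
    key: str
    hours: float = 0.0
    children: list = field(default_factory=list)

    def lifetime_cost(self) -> float:
        """Hours logged on this ticket plus all descendant tickets."""
        return self.hours + sum(c.lifetime_cost() for c in self.children)

feature = Ticket("FEAT-1", 4.0, [
    Ticket("TASK-1", 16.0, [Ticket("BUG-1", 3.0)]),
    Ticket("TASK-2", 12.0),
])
print(feature.lifetime_cost())  # -> 35.0
```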
Now, if you can weigh the tickets by complexity, you can determine productivity and in turn determine the OPR.
To give a ticket a weight you can use multiple estimating techniques, or you can define the estimate post factum, i.e. “this ticket should have taken XX hrs”. Defining the weight of a ticket is the most sensitive item, and it should be handled by the most objective and senior contributors or by the team. I found it rather meaningful to do it during iteration planning and adjust the number if something is discovered in the process of development. I also found that teams can quickly develop some rough guidelines on what constitutes a fair estimate.
Productivity for any particular ticket is its weight divided by its cost in hours.
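Putting the last few steps together, a rough sketch of team productivity and the resulting OPR might look like this; the ticket weights and hours are invented for illustration:

```python
# Team productivity as total complexity weight over total hours spent,
# and OPR as offshore productivity over local productivity.
# All (weight, hours) pairs below are illustrative, not real data.

def team_productivity(tickets):
    """tickets: list of (weight, hours) pairs for completed tickets."""
    total_weight = sum(w for w, _ in tickets)
    total_hours = sum(h for _, h in tickets)
    return total_weight / total_hours

offshore = [(5, 20), (3, 10)]   # hypothetical offshore tickets
local = [(5, 8), (3, 4)]        # hypothetical local tickets

opr = team_productivity(offshore) / team_productivity(local)
print(f"OPR = {opr:.0%}")  # -> OPR = 40%
```

With a 36% break-even target as in the earlier example, this hypothetical 40% OPR would put the offshore team slightly ahead of break-even.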
Well, after writing down these few “simple” steps, it all appears quite intimidating; in fact it sounds more complex than it is in real life. You just need to get the team’s buy-in, give the process some time to settle, and write a few reports…
Quality metrics are much easier to deal with, as long as you pick something you can measure.
Escape ratios are a good start. You may define several escape ratios: for example, escape to development (measures quality of requirements), escape to testing (measures quality of development), and escape to production (measures quality of testing). The ratios are defined as the number of bugs found at a particular milestone divided by the volume of work. For example, to calculate the escape-to-development ratio, you divide the number of bugs found in the requirements specs during an ambiguity review by the effort to develop those specs in man-days.
You have to keep in mind that escape ratios are tightly linked to discovery rates, which can vary greatly depending on the specifics of the process and usage patterns. For example, if a feature deployed in production is not used by the customers, the chances of finding bugs in it are low, while another feature, possibly developed with much higher quality but extensively used by end users, will generate a high number of issues.
Similar to measuring productivity, you might want to introduce relative ratios, for example an Offshore Escape Ratio (OER), calculated as the escape ratio of the offshore team divided by the escape ratio of the local team. Since a lower escape ratio means fewer bugs slipping through, the target should be 100% or lower.
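A sketch of an escape ratio and the OER computed from it; the bug counts and effort figures are invented for illustration:

```python
# Escape ratio: bugs found at a given milestone over the volume of work
# (here, effort in man-days). OER compares offshore to local at the same
# milestone. All numbers below are hypothetical.

def escape_ratio(bugs_found, effort_man_days):
    return bugs_found / effort_man_days

offshore_escape = escape_ratio(12, 120)  # e.g. escape to testing, offshore
local_escape = escape_ratio(8, 100)      # same milestone, local team

oer = offshore_escape / local_escape
print(f"OER = {oer:.0%}")  # -> OER = 125%
```

In this made-up example the offshore team lets 25% more bugs escape per man-day than the local team, so you would want to dig into why.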
Measuring compliance should be a straightforward activity if you develop clear and easily traceable metrics. For example, say you want to track the turnover ratio against the contracted number. You should discuss with the vendor how the number is calculated for your team and perform the calculations at an agreed-upon frequency.
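A turnover check against a contracted KPI can be as simple as the sketch below; the 20% ceiling and the headcount figures are assumptions for illustration, and the exact formula should be agreed with the vendor:

```python
# Hypothetical turnover-rate check against a contracted KPI.
# The contracted ceiling and team numbers are made up for illustration.

def turnover_rate(leavers, average_headcount):
    """People who left the team over the period, per average headcount."""
    return leavers / average_headcount

CONTRACTED_MAX = 0.20           # assumed contractual turnover ceiling
rate = turnover_rate(3, 20)     # 3 people left an average team of 20
print(f"turnover {rate:.0%}, compliant: {rate <= CONTRACTED_MAX}")
# -> turnover 15%, compliant: True
```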
What to do with the results?
Developing and tracking metrics should not be done for purely punitive purposes. The goal is to find inefficiencies and understand the direction of change. Metrics may highlight problems with processes, resource allocation, individual poor performers, lack of expertise in a specific area, and so on. In that light, trends in metrics are more important than specific measures. If you see quality increasing as you remove barriers / improve the process, you do not need to take any drastic measures. Of course, if productivity stays at a level you find unacceptable for some time, you may want to start looking for an alternative provider.
When it comes to compliance, the best thing to do is to put in contractual clauses that associate financial penalties with noncompliance; at the same time, you may want to reward performance that exceeds expectations.