This post continues the topic I started a few months ago – using QA to prevent serious issues with offshore deliverables. In particular, I’d like to cover User Acceptance Testing (UAT).
For many software professionals UAT has a very clear definition and lucid goals and objectives, yet even this foundational understanding varies a lot between professionals and organizations. In the professional services engagements I had the pleasure to participate in, UAT used to be the final sign-off by the buyer of the software deliverable. In my new organization UAT plays a key role in the SDLC, acting as a final gate before release to production. In many organizations UAT is interpreted as a smoke test performed by users at each milestone to make sure that the users’ requirements were properly understood.
Whatever test goes by the UAT label in your organization, it is probably an important part of your SDLC and I am not disputing its value. Nor am I an abbreviation purist demanding that the term UAT be used only for its original purpose. I do think it is important to cover the participation of users in the acceptance of deliverables from offshore, and just for the sake of this post let me call those testing activities UAT.
The first step is to identify the users of the offshore deliverables. Typically the list includes end-users of the application and many internal users who are unfortunately often forgotten. Here are just a few categories to consider:
- Onsite developers, in particular the sustaining team
- Onsite quality assurance
- Release management team
- MIS / IT team
- Production and Technical Support team
- Training department
- Professional services
For each of these user groups UAT will have its own cycle and its own artifacts. Across all of the UAT instances I see a few common elements:
- Binary nature. The system either passes or does not pass the UAT. There is a potential for setting a couple of items aside in the form of fix-it tickets; that is not a great practice though and should be treated carefully.
- High importance. For example, a failure to pass UAT could be a blocking event in engagement flow, result in significant financial penalties, etc.
- Execution of tests against previously defined / agreed upon criteria. Seems like a no-brainer, yet I’ve seen time and time again that this is not the case.
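The binary pass/fail element above, with its fix-it-ticket escape hatch, can be sketched in a few lines of code. This is a minimal illustration under assumed names (the function, criterion labels, and ticket limit are all hypothetical), not a prescribed tool:

```python
# Hypothetical sketch: evaluating a UAT run against agreed criteria.
# The criterion names and the fix-it-ticket allowance are illustrative only.

def evaluate_uat(results, max_fixit_tickets=0):
    """Return (passed, deferred) given a dict of criterion -> bool.

    The outcome is binary: the system passes only if the number of
    failed criteria stays within the agreed fix-it-ticket allowance;
    those failures become the deferred fix-it tickets.
    """
    failures = [name for name, ok in results.items() if not ok]
    if len(failures) <= max_fixit_tickets:
        return True, failures  # passed; failures become fix-it tickets
    return False, failures

passed, deferred = evaluate_uat(
    {
        "UAT test scripts executed": True,
        "release notes reviewed": True,
        "coding standards compliance": False,
    },
    max_fixit_tickets=1,
)
# One failure within the agreed allowance: the UAT passes,
# with "coding standards compliance" deferred as a fix-it ticket.
```

The point of the sketch is that the allowance (`max_fixit_tickets`) should itself be part of the agreed criteria; a default of zero reflects the strictly binary interpretation.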
The last item in the list above is the one that typically has the most serious implications. If the acceptance criteria are not clearly identified and agreed upon by both parties, disputes can be significant and derail an otherwise perfect engagement / relationship. This item is also the most complex / laborious, due to the difficulty of defining acceptance criteria. Acceptance criteria should be identified for each instance of UAT; here are some ideas on what they might cover for different UAT groups / aspects:
- Set of UAT test scripts to be executed at the time of acceptance
- Overall definition of UAT process
- Pass / fail guidelines
Acceptance by developers
- Architecture and design artifacts
- Code and compliance with coding standards
- Check-in and other comments
UAT for onsite quality assurance
- QA artifacts (scripts, plans, strategy, etc.)
- QA Automation artifacts
- QA coverage reports
Release management team acceptance
- Release notes / read me / build instructions
- Build and Release automation artifacts
- Source control tagging
Acceptance by MIS / IT team
- Deployment diagrams
- Installation instructions
- Server specifications
A detailed definition of specifics should be agreed upon by the vendor and the customer for each category of acceptance criteria. For example, the criteria should state which UML artifacts must be included in the design documentation and which tools should be used to create them.
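One way to keep the per-group criteria explicit and reviewable by both parties is to capture them as a structured checklist rather than prose scattered across documents. A minimal sketch in Python, using the categories listed above (the data layout and lookup helper are assumptions for illustration):

```python
# Hypothetical sketch: per-group UAT acceptance criteria captured as data,
# mirroring the categories listed above. The group keys and the
# criteria_for helper are illustrative, not a standard schema.
ACCEPTANCE_CRITERIA = {
    "developers": [
        "architecture and design artifacts",
        "code and compliance with coding standards",
        "check-in and other comments",
    ],
    "onsite QA": [
        "QA artifacts (scripts, plans, strategy)",
        "QA automation artifacts",
        "QA coverage reports",
    ],
    "release management": [
        "release notes / read me / build instructions",
        "build and release automation artifacts",
        "source control tagging",
    ],
    "MIS / IT": [
        "deployment diagrams",
        "installation instructions",
        "server specifications",
    ],
}

def criteria_for(group):
    """Look up the agreed checklist for a UAT group; empty if unknown."""
    return ACCEPTANCE_CRITERIA.get(group, [])
```

Kept in source control alongside the deliverables, a checklist like this gives both vendor and customer a single agreed artifact to execute each UAT instance against.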