One of the keys to running a successful data outsourcing partnership is synchronized quality control processes. That means making sure your internal QA processes (spot checks, random reviews of a percentage of records, targeted reviews of specific cohorts, outlier analyses, etc.) complement, rather than duplicate, the efforts undertaken by your offshore partners.
This shared analysis of the QA processes performed by both parties is often one of the most insightful ways to spend your production management time. The focus should be on:
- Who is measuring what? (productivity numbers, quality ratings, time spent per record, etc.)
- Are they measuring the right things?
- Are there gaps in what’s measured?
- Are the correct percentages of overall records or specific cohorts of records being reviewed?
- Are you using all available effective QA methods in your joint review (listening to audio calls, random spot checks, outlier analyses)?
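To make the first and last of these questions concrete, here is a minimal Python sketch of two of the methods mentioned above: a random spot-check sample and a simple outlier check on per-reviewer quality scores. The field names, the 5% sample rate, and the z-score threshold are hypothetical placeholders to be set jointly with your partner, not a prescription.

```python
import random
import statistics

def spot_check_sample(records, sample_rate=0.05, seed=None):
    """Randomly select a percentage of records for manual QA review.
    The 5% default is a placeholder -- tune it jointly with your partner."""
    rng = random.Random(seed)
    k = max(1, int(len(records) * sample_rate))
    return rng.sample(records, k)

def flag_outlier_reviewers(scores_by_reviewer, z_threshold=2.0):
    """Flag reviewers whose average quality score sits more than
    z_threshold standard deviations below the team mean."""
    averages = {r: statistics.mean(s) for r, s in scores_by_reviewer.items()}
    mean = statistics.mean(averages.values())
    stdev = statistics.pstdev(averages.values()) or 1.0  # avoid divide-by-zero
    return [r for r, avg in averages.items() if (avg - mean) / stdev < -z_threshold]

# Hypothetical usage with made-up records:
records = [{"id": i, "reviewer": f"emp{i % 7}", "quality": random.uniform(0.7, 1.0)}
           for i in range(500)]
to_review = spot_check_sample(records, sample_rate=0.05, seed=42)

scores_by_reviewer = {}
for rec in records:
    scores_by_reviewer.setdefault(rec["reviewer"], []).append(rec["quality"])
low_performers = flag_outlier_reviewers(scores_by_reviewer)
```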
Coordinating QA efforts starts with making sure that the right folks (externally or internally) are doing the review. If the point is to identify team members with the lowest quality ratings (an area of joint concern), you might want to compare your offshore partners' internal quality rankings and methodologies with the data from your internal reviews to see if they match or are complementary. They should be; if they're not, the mismatch points to gaps in your quality reviews that call for an update to your joint QA processes.
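One lightweight way to check whether the two sets of rankings agree is a rank correlation between each side's quality scores for the same team members. The sketch below, with hypothetical names, scores, and an illustrative agreement threshold, computes a simple Spearman coefficient in plain Python; a value near 1.0 means both QA processes rank people similarly.

```python
def rank(values):
    """1-based ranks (highest value = rank 1); ties keep their order of appearance."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for position, idx in enumerate(order, start=1):
        ranks[idx] = position
    return ranks

def spearman(internal_scores, partner_scores):
    """Spearman rank correlation between two score dicts keyed by team member."""
    members = sorted(set(internal_scores) & set(partner_scores))
    r1 = rank([internal_scores[m] for m in members])
    r2 = rank([partner_scores[m] for m in members])
    n = len(members)
    d_squared = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical 0-100 quality ratings from each side's reviews:
internal = {"alice": 92, "bob": 78, "carol": 85, "dan": 64}
partner  = {"alice": 88, "bob": 74, "carol": 90, "dan": 70}

agreement = spearman(internal, partner)   # ~0.8: the rankings mostly agree
if agreement < 0.7:   # threshold is a judgment call to set with your partner
    print("Rankings diverge -- investigate gaps in the joint QA process")
```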
In general, your in-house quality assurance process should bring your team's subject-matter expertise to bear on records with hard-to-identify errors, older records, and high-priority records. Your external partners' top QA priority should be reviewing every record processed by new employees or by employees at the lower end of the quality spectrum, along with a large percentage of records in known problem areas.
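If you want to encode that division of labor, a simple rule-based routing function is enough to start. The sketch below is illustrative only: the 90-day tenure cutoff, the 0.85 quality cutoff, and the review rates are hypothetical values you would agree on with your partner.

```python
from datetime import date, timedelta

def review_rate(employee, known_problem_area=False):
    """Return the fraction of an employee's records to route to partner QA.
    Cutoffs and rates are illustrative placeholders, not recommendations."""
    tenure_days = (date.today() - employee["start_date"]).days
    if tenure_days < 90:                  # new employee: review everything
        return 1.0
    if employee["quality_score"] < 0.85:  # lower end of the quality spectrum
        return 1.0
    if known_problem_area:                # known problem areas get heavy coverage
        return 0.5
    return 0.05                           # baseline spot-check rate

# Hypothetical usage:
emp = {"name": "new_hire",
       "start_date": date.today() - timedelta(days=30),
       "quality_score": 0.9}
rate = review_rate(emp)   # 1.0 -- all of a new hire's records get reviewed
```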
Working hand-in-hand with your partners, you can build on this foundation to get increasingly granular about how errors are defined and identified. Over time this creates a web of overlapping checks that catches errors long before your end users ever encounter them.