Tuesday, April 18, 2006
The QA Guy is Moving!
Thursday, April 13, 2006
Put Some Teeth in Your QA Program
I'm spending the week on-site with a client, shadowing and mentoring their quality assessment (QA) coaches. Yesterday, one of the coaches was evaluating an e-mail. The QA software clearly showed that the agent surfed a soap opera forum for seven minutes before routing the customer's e-mail back into queue for someone else to handle. Not good.
The first of Jam's suggestions on this subject was "roll out a consequence management process." BINGO!
Our group has audited the QA program for this client for the past few years. One issue we have continually raised with them is that their QA process has no carrot and no stick: there is no incentive for the CSR to do well, nor is there a consequence if the agent performs poorly. The result is that the entire QA program is an expensive "FYI" for their agents. The CSR who avoided work while keeping up on her favorite soap may receive a verbal reprimand, but the agents on the floor know that there is no real consequence other than - maybe - a stern lecture. I seriously doubt her behavior will change.
If you're going to spend the time, energy and money to have a QA program, make sure that it effectively impacts the behavior of your CSRs. Behavior change happens when the process has some teeth. That is, when agents are held accountable and motivated to improve through both positive and negative reinforcement.
Technorati Tags: QA, call center, evaluate, CSR, accountability
Flickr photo courtesy of gowest1230
Wednesday, April 12, 2006
"Would You Like Some Fries with That?" Goes Long Distance
I firmly believe that companies that understand how to effectively measure customer satisfaction, analyze the service delivered in calls, and train their agents accordingly are going to be a step ahead of the competition.
Technorati Tags: customer service, call center, fast food, QA
Flickr photo courtesy of Little Hobbit Feet
Monday, April 10, 2006
"Not Applicable" is Definitely Applicable
If you're not already doing so, here's why you should immediately alter your methodology to include an "NA" option:
- Because it's accurate. Typically, when the NA option is not given, the Customer Service Representative (CSR) is given credit for an element even though it doesn't apply, so the resulting score doesn't reflect what actually happened on the call. Some elements truly aren't relevant on a given call, and if your QA program is going to have integrity, its scores need to mirror reality. If an element wasn't a factor in the phone call, it shouldn't be a factor in the score.
- Because it's fair. Some CSRs would argue that it's not fair (especially if they're used to receiving falsely inflated scores), but the NA option is fair because only the elements that do apply had an impact on the customer's satisfaction on that call. It's fair to hold agents accountable only for the elements that were relevant to the call, no more and no less.
- Because it raises the level of accountability. Consider a hypothetical: say you had twenty elements on your QA scorecard and, on a certain call, only ten of them really applied. (I feel like I'm writing a story problem.) The CSR missed two of the ten applicable elements. Without the NA option, the CSR gets credit for all ten non-applicable elements, so it looks like he missed only two out of twenty, for a score of 90%. If you take out the ten elements that didn't factor into the call, he passed eight out of ten, or 80%. Which is more accurate?
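The story-problem arithmetic above can be sketched in a few lines of Python. The element list and function names here are hypothetical, just to mirror the example of twenty elements where ten apply and two of those are missed:

```python
# Each scorecard element is (applies_to_call, passed).
# 8 applicable passes + 2 applicable misses + 10 non-applicable elements.
elements = [(True, True)] * 8 + [(True, False)] * 2 + [(False, True)] * 10

def score_without_na(elements):
    """Every non-applicable element counts as an automatic pass."""
    passed = sum(1 for applies, ok in elements if ok or not applies)
    return passed / len(elements)

def score_with_na(elements):
    """Only the elements that actually applied count toward the score."""
    applicable = [ok for applies, ok in elements if applies]
    return sum(1 for ok in applicable if ok) / len(applicable)

print(score_without_na(elements))  # 0.9 - the inflated 90%
print(score_with_na(elements))     # 0.8 - the accurate 80%
```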
When the NA option is not given, it's common to find poorly performing CSRs resting on their laurels, confident that they are doing well because their scores don't reflect their true performance.
It's vital that you make the "not applicable" option applicable in your scoring methodology!
Technorati Tags: call center, quality, assessment, QA, metrics, methodology
Thursday, April 06, 2006
Eeny-meeny-miny-moach, Which Call Do I Choose to Coach?
Unfortunately, this is not an uncommon practice. There are a few problems with this approach:
- You are not getting a truly random sample of the agent's performance. If you are simply coaching an occasional call, this may not be a major issue. If you are using the results for bonuses, performance management or incentive pay, then your sampling process may put you at risk.
- You are ignoring real "moments of truth" in which customers are being impacted. Customers can make critical decisions about your company in thirty-second calls and thirty-minute calls. Avoiding these calls means turning a blind eye to what may be very critical interactions between customers and CSRs.
- You may be missing out on valuable data. Short calls often happen because of misdirected calls or other process problems. Quantifying why these are occurring could save you money and improve first-call resolution as well as customer satisfaction. Likewise, longer calls may result from situations that have seriously gone awry for a customer. Digging into the reasons may yield valuable information about problems in the service delivery system.
Capturing and analyzing a truly random sample of phone calls will, in the long run, protect and benefit everyone involved.
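As a rough illustration (the call records and field names below are invented), drawing a uniform random sample takes one line in Python, and it gives every recorded call, whether thirty seconds or thirty minutes long, the same chance of being pulled for coaching:

```python
import random

# Hypothetical pool of recorded calls with their durations in seconds.
recorded_calls = [
    {"id": 1, "duration_sec": 35},
    {"id": 2, "duration_sec": 240},
    {"id": 3, "duration_sec": 1800},
    {"id": 4, "duration_sec": 95},
    {"id": 5, "duration_sec": 610},
]

random.seed(42)  # seeded only so the example is repeatable

# Sample without replacement; duration never enters into the selection.
sample = random.sample(recorded_calls, k=3)
print(sample)
```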
Technorati Tags: sampling, call coach, QA, quality assessment
Flickr photo courtesy of lotusutol
Tuesday, April 04, 2006
Too Many Call Coaches Spoil the Calibration
QA scales are a lot like the law. No matter how well you draft it, no matter how detailed your definition document is, you're going to have to interpret it in light of many different customer service situations. There's a reason why our legal system allows for one voice to argue each side and a small number of people to make a decision. Can you imagine the chaos if every court case was open for large scale, public debate and a popular vote?
One of the principles I've learned is that calibration is most efficient and productive with a small group of people (four or five max). If you have multiple call centers or a much larger QA staff, then I recommend that calibration have some sort of hierarchy. Have a small group of decision makers begin the process by calibrating, interpreting and making decisions. If necessary, that small group can then hold subsequent sessions with a broader group of coaches (in equally small groups) to listen and discuss the interpretation.
Like it or not, business is not a democracy. Putting every QA decision up for a popular vote among the staff often leads to poor decisions that will only have to be hashed, rehashed and altered in the future. Most successful QA programs have strong, yet fair, leaders who are willing to make decisions and drive both efficiency and productivity into the process.
Technorati Tags: calibration, call scoring, quality assessment, QA, leadership, management