Call Center QA Guy

Saturday, April 29, 2006

New Address - Same Great Content

For those of you who are a little behind, the QA Guy has moved and the blog has gotten a facelift. The Call Center QA Guy is now QAQNA. Please stop by for a visit and update your feeds! Thanks!

Tuesday, April 18, 2006

The QA Guy is Moving!

Hey everyone, I'm moving! Thanks to the great response I've received from readers, I'm upgrading and moving to my own domain. From now on, you can find "the QA guy" blog at www.qaqna.com (as in Quality Assessment Questions 'N' Answers). If you've already subscribed to the feed, things should automatically transition for you. If you have any problems, please let me know!

Thursday, April 13, 2006

Put Some Teeth in Your QA Program

I recently read a post by Jam Mayer in Call Center Scripts addressing the issue of customer service representatives (CSRs) who spend work time surfing the net. The post struck a chord with me because of an experience I had yesterday.

I'm spending the week on-site with a client, shadowing and mentoring their quality assessment (QA) coaches. Yesterday, one of the coaches was evaluating an e-mail. The QA software clearly showed that the agent surfed a soap opera forum for seven minutes before routing the customer's e-mail back into queue for someone else to handle. Not good.

The first of Jam's suggestions on this subject was "roll out a consequence management process." BINGO!

Our group has audited the QA program for this client for the past few years. One of the issues we have continually raised with them is that their QA process has no carrot and no stick. There is no incentive for a CSR to do well, nor is there a consequence if an agent performs poorly. The result is that the entire QA program is an expensive "FYI" for their agents. The CSR who avoided work while keeping up on her favorite soap may receive a verbal reprimand, but the agents on the floor know that there is no real consequence other than - maybe - a stern lecture. I seriously doubt her behavior will change.

If you're going to spend the time, energy and money to have a QA program, make sure that it effectively impacts the behavior of your CSRs. Behavior change happens when the process has some teeth: when agents are held accountable and motivated to improve through both positive and negative reinforcement.



Wednesday, April 12, 2006

"Would You Like Some Fries with That?" Goes Long Distance

Call centers are becoming a larger and larger part of business and our everyday lives. Alert reader David Eick sent me an article from the New York Times which reports that fast food restaurants are now beginning to take drive-thru orders in contact centers hundreds, even thousands, of miles away. It's more evidence that the service skills employed by Customer Service Representatives (CSRs) in call centers will play an increasing role in our everyday service experience.

I firmly believe that companies that understand how to effectively measure customer satisfaction, analyze the service delivered in calls, and train their agents accordingly are going to be a step ahead of the competition.

Monday, April 10, 2006

"Not Applicable" is Definitely Applicable

When auditing a quality assessment scale or QA scorecard in call centers, I commonly find that there's no allowance given for an element to be "not applicable" (NA). For those experienced in quality assessment, this may seem like basic common sense, but my experience has proven that it is frequently overlooked when scoring or analyzing phone calls.

If you're not already doing so, here's why you should immediately alter your methodology to include an "NA" option:
  • Because it's accurate. Typically, when the NA option is not given, the Customer Service Representative (CSR) is given credit for the element even though it doesn't apply, so the resulting score doesn't reflect what actually happened in the call. Some elements truly aren't relevant on a given call, and if your QA program is going to have integrity, the score needs to reflect that. If an element wasn't a factor in the phone call, it shouldn't be a factor in the score.
  • Because it's fair. Some CSRs would argue that it's not fair (especially if they're used to receiving falsely inflated scores), but the NA option is fair because only those elements that do apply had an impact on the customer's satisfaction on that call. It's fair that you are held accountable only for the elements that were relevant to the call, no more and no less.
  • Because it raises the level of accountability. Let's consider a hypothetical. Say you have twenty elements on your QA scorecard and, on a certain call, only ten of them really apply. (I feel like I'm writing a story problem.) The CSR missed two of the ten applicable elements. Without the NA option, the CSR gets credit for all ten non-applicable elements, so it looks like he missed two out of twenty and scored 90%. Take out the ten elements that didn't factor into the call and he has eight out of ten, or 80%. Which is more accurate? (See the sketch after this list.)
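
To make the arithmetic concrete, here's a minimal sketch in Python (my illustration, not from any particular QA tool) that scores the same hypothetical call both ways. The "pass"/"fail"/"na" markers are assumptions about how a scorecard might record each element:

    # Minimal sketch: scoring a call with and without an NA option.
    # Element results are hypothetical: "pass", "fail", or "na".

    def score_without_na(results):
        # NA elements silently count as credit, inflating the score.
        return sum(1 for r in results if r != "fail") / len(results)

    def score_with_na(results):
        # Only elements that actually applied factor into the score.
        applicable = [r for r in results if r != "na"]
        return sum(1 for r in applicable if r == "pass") / len(applicable)

    # The story problem above: twenty elements, ten applicable, two missed.
    call = ["pass"] * 8 + ["fail"] * 2 + ["na"] * 10
    print(f"Without NA: {score_without_na(call):.0%}")  # 90%
    print(f"With NA:    {score_with_na(call):.0%}")     # 80%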

When the NA option is not given, it's common to find poorly performing CSRs resting on their laurels, confident that they are doing well when their scores don't reflect their true performance.

It's vital that you make the "not applicable" option applicable in your scoring methodology!



Thursday, April 06, 2006

Eeny-meeny-miny-moach, Which Call Do I Choose to Coach?

I was shadowing several call coaches today as part of a call coach mentoring program for one of our clients. It was interesting to watch these coaches select the calls they were going to analyze. Most often, the coach quickly dismissed any call shorter than two minutes and any call longer than five minutes, gravitating to a call between three and five minutes in length. The assumption was that any call less than two minutes had no value for coaching purposes. Dismissing longer calls was done, admittedly, because they didn't want to take the time to listen to them.

Unfortunately, this is not an uncommon practice. There are several problems with this approach:


  • You are not getting a truly random sample of the agent's performance. If you are simply coaching an occasional call, that may not be a major issue. If you are using the results for bonuses, performance management or incentive pay, then your sampling process may put you at risk. (See the sampling sketch after this list.)
  • You are ignoring real "moments of truth" in which customers are being impacted. Customers can make critical decisions about your company in thirty-second calls and thirty-minute calls. Avoiding these calls means turning a blind eye to what may be very critical interactions between customers and CSRs.
  • You may be missing out on valuable data. Short calls often happen because of misdirected calls or other process problems. Quantifying why these are occurring could save you money and improve one-call resolution as well as customer satisfaction. Likewise, longer calls may result from situations that have gone seriously awry for a customer. Digging into the reasons may yield valuable information about problems in the service delivery system.
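
For the curious, here's a minimal sketch in Python (my illustration, with hypothetical call records) contrasting the duration-filtered habit with a truly random draw:

    # Minimal sketch: duration-filtered vs. truly random call selection.
    # The call records and their fields are hypothetical.
    import random

    calls = [
        {"id": 1, "duration": 45},    # short: possibly misdirected
        {"id": 2, "duration": 240},
        {"id": 3, "duration": 310},
        {"id": 4, "duration": 1800},  # long: possibly gone awry
        # ... the rest of the day's recordings
    ]

    # The biased habit: only calls between two and five minutes survive.
    convenient = [c for c in calls if 120 <= c["duration"] <= 300]

    # A truly random sample: every call, short or long, has an equal
    # chance of being coached.
    sample = random.sample(calls, k=min(2, len(calls)))
    print([c["id"] for c in sample])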

Capturing and analyzing a truly random sample of phone calls will, in the long run, protect and benefit everyone involved.


Tuesday, April 04, 2006

Too Many Call Coaches Spoil the Calibration

I'm often asked to sit in on clients' calibration sessions. Whenever I walk into the room and find 20 people sitting there, I silently scream inside and start looking for the nearest exit. It's going to be a long, frustrating meeting. Each person you add to a calibration session rapidly multiplies the amount of time you'll spend in unproductive wrangling and debate.
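
A rough way to see why (my back-of-the-envelope illustration, not a formal model): the number of possible pairwise disagreements in a group of n people is n*(n-1)/2, so the debate potential grows much faster than headcount.

    # Back-of-the-envelope: pairwise discussion channels in a group of
    # n people is n*(n-1)/2, so debate grows far faster than headcount.
    for n in (4, 5, 10, 20):
        channels = n * (n - 1) // 2
        print(f"{n:>2} people -> {channels:>3} possible pairwise debates")
    # 4 -> 6, 5 -> 10, 10 -> 45, 20 -> 190

By that measure, a group of 20 has more than thirty times the pairwise debate potential of a group of four.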

A QA scale is a lot like the law. No matter how well you draft it, no matter how detailed your definition document is, you're going to have to interpret it in light of many different customer service situations. There's a reason why our legal system allows one voice to argue each side and a small number of people to make a decision. Can you imagine the chaos if every court case were open to large-scale public debate and a popular vote?

One of the principles I've learned is that calibration is most efficient and productive with a small group of people (four or five, max). If you have multiple call centers or a much larger QA staff, then I recommend that calibration follow some sort of hierarchy. Have a small group of decision makers begin the process by calibrating, interpreting and making decisions. If necessary, that small group can then hold subsequent sessions with a broader group of coaches (in equally small groups) to listen and discuss the interpretations.

Like it or not, business is not a democracy. Putting every QA decision up for a popular vote among the staff often leads to poor decisions that will only have to be hashed, rehashed and altered in the future. Most successful QA programs have strong, yet fair, leaders who are willing to make decisions and drive both efficiency and productivity into the process.

