100%, Guaranteed On-Time… Sort Of!


When I talk with my peers about the metrics they collect to run their operations, or at least to measure how well their operations perform, the most confusing and controversial metric is… On-Time-Delivery (OTD). Many operations feel that they have OTD under control because their monthly management reports say that they are meeting their goals, usually between 90% and 95%. However, in almost every case when I’ve gone through the OTD numbers with a center’s manager, it becomes very clear that the number they are reporting is wrong and over-reports the success of on-time delivery. Centers that report 95% or better OTD are usually performing at between 60% and 80%, sometimes even lower. How can a service put a measurement system in place, test it, report data for some time, and still produce numbers that are this far off? The answer isn’t that a single mistake has been made. Far more often, a series of errors and misunderstandings has crept into the reporting process. Let’s take a look at each:

  1. Standard estimates: For deadlines to make any sense, you need a standard method of estimating how long it takes to do work. If three different people bring you the exact same work on three different shifts, or on three different days, it should take the same amount of work time. Yes, there are other time segments that go into the deadline calculation (more on that below), but the estimation of work time cannot vary between shifts. Imagine going to a restaurant with 20 tables, served by 4 waitresses, where only 2 tables are occupied. On your first visit it takes 10 minutes for the waitress to take your order and bring your meal. The next time it takes an hour. The time after that, 30 minutes. You would think, “Why can’t they get their act together and serve me in a predictable way?” Different results every time you use a service is a frustrating and negative experience for clients. Standardize your estimates.
  2. Standard queue: Aside from work time, there are other factors in the deadline: how much work is ahead of you, how busy the service is today, how many production workers are available, etc. However you estimate the queue, do all shifts estimate according to the same instructions? If not, this is another reason why clients are experiencing variable satisfaction. Follow a standard.
  3. Variability: Yesterday we discussed variation (“Variations On A Theme: Why Do I Need All This Math?”). OTD is one of the most variable metrics, because it is directly affected by changes in volume, and volume always changes… from day to day and from hour to hour. Services have “rush hours” where the majority of the work is performed in an hour or two. This variability shades the client’s opinion of OTD.
  4. Standard time: Where does the delivery time come from? If it is automatically generated and entered into your reporting system, great! For most operations this number is entered manually, which is a big problem. Do I use my watch or a clock on the wall? Does everyone follow the same procedure, or do they interpret it? Do I enter data to the second, or do I round to the nearest 5 minutes? Did I enter the time immediately, or estimate it after the fact? Depending on how this is done, there can be a lot of variation. So, if you can, use a standard and automated measure. For example, if you send an email to the client to inform them that the work is done, use the time generated by the email. If the email is not reporting the right time… tell your IT department to fix it! If they won’t and it remains 2 minutes off, adjust your estimation system by 2 minutes. If you can’t enter time automatically, put ONE clock on the wall (preferably digital, so there is no interpretation of the exact minute), tell the timekeepers (whoever enters time info) to enter the minutes and ignore the seconds (let’s not bother adjusting 12:31 to 12:32 if it’s 31 seconds past the minute). And NO adjustments, alterations, or formulas in the management report to adjust the time!
  5. Renegotiation: This is a big one! When you set up your metric definitions (or perhaps in a revision that you may or may not have been aware of), OTD MUST reflect the originally agreed-to deadline. However, every tracking system I’ve ever seen starts out by using the last renegotiated deadline. Most change this back to the original deadline at some point, but these renegotiations almost always get out of hand, and they are a key reason why the client’s opinion of your service can differ severely from what your management reports show. Yes, there are reasons why deadlines are missed, but using the renegotiated deadline hides the data, preventing you from seeing or fixing the underlying problems. You may want to retain both the original and final deadlines (to see just how far a deadline can be moved) or even how many renegotiations have happened (if a lot of jobs are renegotiated 3 times, how much time is being wasted on renegotiations rather than killing the cause of renegotiations?).
  6. Overnight: Another adjustment is frequently made on overnight or weekend shifts. During the day clients want a specific deadline. But when they head home at night they may say… “just get it done by morning” (or by Monday, if it’s late on a Friday). These jobs may not get a deadline at all, or the deadline becomes the next morning or Monday morning. Work that should have a deadline of 2 hours is given a deadline of 12 hours or 48 hours. This paints a falsely positive view of weekend and late-night OTD, and weakens management skills on these shifts. On-time delivery of work is a responsibility of each shift manager. If you take that away, should you also reduce the pay of weekend and late-night managers, since they have fewer responsibilities? Make deadlines work the same way on all shifts… although you might want to track these “special jobs”. If someone gives you two days to do two hours of work, even if it is a seemingly minor piece of work, expect the client to be outraged if it is just a few minutes late or has errors.
  7. ASAP: This is the same issue as overnight. ASAP is a comment, not a deadline. If work needs special treatment (such as more production workers than normal), or is at high risk for problems, note and track that separately from the deadline. Not surprisingly, you may find that ASAP (and overnight) work generates a disproportionate number of client complaints. Track ASAP, but don’t alter the deadline.
  8. Calibration: Do you regularly verify that ALL intake staff follow the same estimation instructions? This is especially important if you have multiple shifts, where more variations may be affecting estimates. To ensure consistent estimates, calibrate the individuals who provide them. Take some work samples, write out a hypothetical state of the service for the estimator (how busy, how much staff is available, any special conditions, etc.) and let everyone estimate work and queue times. If you’ve never done this before, expect a wide range of estimates. Correct whatever appears to be a mistake and have outliers repeat the exercise until their estimates are within 5% of each other. Do this monthly for 2-3 months, then move to quarterly, and possibly to semi-annually after that. You will be amazed by how much of an improvement this brings.
  9. Consistently early: This one is a shocker. If you are consistently early, if you consistently beat the deadline… you need to be penalized. At first, this doesn’t seem to make sense. Think of it as the other side of variation. Work can be late or it can be early. Either is a missed deadline, or an inaccurate estimate. If you are frequently early, rather than indicating great performance it indicates an estimation system with an overly large buffer built into the process. This inflates OTD success, and it infuriates clients. Why? Because clients often have tightly scheduled days; knowing that work is likely to arrive early, I might have moved other items around on my schedule so that I could read/use/edit this work. Instead, I have something sitting on my desk that I cannot work on just now (or schedule a meeting around, or prepare to send to a client, etc.). How do you fix this? Allow for some early delivery (say, up to 25% early, or 30 minutes on a 2-hour deadline), and reduce the percentage of early delivery over time. Anything earlier than that gets reported in your monthly management report.
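The rules above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a prescribed system: the Job fields (original deadline, renegotiation count, delivery time) and the 25% early-delivery window are assumptions made for the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical job record; the field names are assumptions for illustration.
@dataclass
class Job:
    original_deadline: datetime
    final_deadline: datetime      # last renegotiated deadline, if any
    delivered_at: datetime
    renegotiations: int = 0

def otd_report(jobs, early_window=0.25, work_hours=2.0):
    """Score on-time delivery against the ORIGINAL deadline.

    A job counts as on time only if it arrives no later than the
    original deadline and no earlier than `early_window` (a fraction
    of the estimated work time) before it.
    """
    on_time = too_early = late = renegotiated = 0
    allowance = timedelta(hours=work_hours * early_window)
    for job in jobs:
        if job.renegotiations:
            renegotiated += 1            # track it, but still score vs. original
        if job.delivered_at > job.original_deadline:
            late += 1
        elif job.original_deadline - job.delivered_at > allowance:
            too_early += 1               # a missed estimate, not a success
        else:
            on_time += 1
    total = len(jobs)
    return {
        "otd_pct": 100.0 * on_time / total,
        "late": late,
        "too_early": too_early,
        "renegotiated": renegotiated,
    }
```

Scoring against the original deadline (point 5) while still counting renegotiations, and treating overly early delivery (point 9) as a miss, keeps the reported OTD number aligned with what clients actually experience.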

As you can see, the cumulative effect of all of these adjustments and assumptions can be huge. Clean these up and you will have much greater visibility into your operations, and much greater alignment with the opinions of your clients. And that’s my Niccolls worth for today!


Variations On A Theme: Why Do I Need All This Math?


When you read about the different quality systems out there, you quickly hit a wall when they start throwing statistics at you. Once you move into the heavy math, the interesting ideas start to look like a very complicated and hard-to-accomplish project. However, it doesn’t have to be that way. Having all the powerful tools, and the experience to use them, will take you further, but for most corporate services just a tiny amount of math and some common sense can get you pretty far. If your operation produces millions of products a month, and your service level targets look like “99.999%”, you may need the math. If you produce between hundreds and tens of thousands of products a month, and have service level targets more like “95%”, then we can use a simple process. However, if you produce just 5 or 10 products a month, then there won’t be enough samples for this process to work easily for you. But for the vast majority, this will work, and it will show you something very interesting about how your services really operate.

Let’s start with where you are today. I will assume that you have at least minimal metrics in your organization. By this I mean: you are capturing some metrics, including the “big four” (1. On-time delivery; 2. Units produced or hours worked; 3. Production level or utilization, i.e. how much capacity is available vs. actually used; 4. Level of quality, or number of errors, or number of complaints). For now let’s assume that you have targeted the right metrics, the data is being collected correctly, and you’ve done enough testing to be sure that it’s all working. Now, here’s the problem. You look at your monthly management report. You’ve set a specific target (let’s say 93%) for your most important service level (let’s say on-time delivery), and after months of striving, your report shows that you have at last reached your 93% goal. You speak to your team and congratulate them on this achievement. You then go to your clients to update them on your progress, but they are mysteriously unimpressed. Some even say, “But there hasn’t been any improvement! In fact, things may be worse!” This isn’t what you expected, and you’re not sure how there can be such a discrepancy between your metrics and the client’s experience. What’s going on?

In this case, the culprit is variability. The average, at the end of the month, may be 93% on time, but as human beings we do not “experience” anything in this type of timeframe. We experience things “now”, in a single instance. Although we might look at something in an hourly timeframe; the daily “rush hour” is a good example. Traffic on a road may be backed up for 2 hours during the rush immediately after 5 PM, when everyone is headed home. That makes it a terribly crowded road for the rush-hour traveler. However, someone else who only uses that road late at night on a Saturday may be confronted by a completely empty road. And a city planner may look at utilization for the road on a monthly basis and say that the road is only 35% utilized… busy, but not crowded… because he is averaging out the times the road is empty and the rush hours. Which view is correct? The answer is that each view is both right and wrong. It depends on how you’re trying to use the information. In your center, there may not be a single day in a month where the on-time statistic was exactly 93%. To answer our questions, we need to look at a subset of data. We might look at the month’s information by day (what was the on-time number for January 1st, 2nd, 3rd, etc.), or we might look at the hours of the day (the month’s data for 8am, 9am, 10am, etc.), or we might look at each product (the March on-time percentage for contracts, memos, sales presentations, etc.). Each of these analyses might yield different areas for examination and different targets for improvement. But let’s start with the variation by day. When you have the daily on-time data, you will see one (or more) of the following:

  1. Variation, but very limited: If 93% is your goal, you may see that in a given month you never have a day where you perform below 92% and never a day where you perform above 94%. In that case, you would need to look elsewhere for the problem. If this is the first time that you are looking at daily variability, it is REALLY unusual to see very tightly grouped numbers like this.
  2. Variation, but explainable: Much more likely, when you look at your numbers you will see days that are WAY off. You might have a month with 93% on-time delivery and still have one or two days that drop to 50%, or 40%, or even lower. However, in a given month there could be one or two “one-shot” problems: there was a network crash; the firm was hit by a virus; a broken water main delayed the arrival of your staff that day; the city had a transit strike; there was a snow storm, etc. A network crash or a problem with a virus might show that a department (other than yours) needs to improve its emergency or security processes. A broken water pipe is outside the control of your firm. But a strike or a snow storm usually provides some advance notice and can happen again at any time. These last two may identify days with very poor on-time delivery and highlight the need for an improvement program (planning for emergencies).
  3. Variation, but unexplainable: Here you could see almost anything, but you might typically see days when on-time delivery drops to 70% (it could be any number)… but where no one has any theory as to why this is happening. Well, if 27 times a month you can reach 93% and on just 3 days a month you can’t, this is probably something you can fix. Digging a little deeper, you are going to find one of two things:
    1. There is a regular pattern: Something like… every 3rd Tuesday has a low on-time delivery number. This is very convenient, because you just assemble your team and have them sniff around on Tuesday and see what’s different compared to any other day.
    2. There is an irregular pattern: The problem happens 2 or 3 times a month, but there doesn’t seem to be any pattern to predict the next occurrence. Here, tell your managers to immediately call Quality Control when the problem arises, but also have your Quality Control staff check the numbers daily (or as soon as they are available), since the production staff may not realize when the problem occurs. If QC only finds out after the fact, that’s OK. Just get to the managers and ask them what happened while the details are still fresh in everyone’s mind.
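The daily drill-down described above takes only a few lines to sketch. This is a minimal illustration with invented numbers (a month where every Tuesday runs badly); the data layout is an assumption made for the sketch, not a real reporting system.

```python
from collections import defaultdict
from datetime import date, timedelta

# Invented daily on-time percentages for one 30-day month: most days are
# fine, but a few bad days hide inside a healthy-looking monthly average.
daily_otd = {}
start = date(2024, 1, 1)
for i in range(30):
    day = start + timedelta(days=i)
    # Suppose every Tuesday runs badly (a "regular pattern")
    daily_otd[day] = 70.0 if day.weekday() == 1 else 97.5

monthly_avg = sum(daily_otd.values()) / len(daily_otd)

# Step 1: flag days that fall far below the goal
bad_days = [d for d, pct in daily_otd.items() if pct < 80.0]

# Step 2: bucket by weekday to look for a regular pattern
by_weekday = defaultdict(list)
for day, pct in daily_otd.items():
    by_weekday[day.strftime("%A")].append(pct)
weekday_avgs = {wd: sum(v) / len(v) for wd, v in by_weekday.items()}

print(f"monthly average: {monthly_avg:.1f}%")   # looks close to a 93% goal
print(f"bad days: {len(bad_days)}")
print(f"Tuesday average: {weekday_avgs['Tuesday']:.1f}%")
```

Bucketing by weekday is the quickest way to confirm a “regular pattern”; if the weekday averages are flat but bad days still appear, you are in the “irregular pattern” case, and QC should investigate each bad day while it is still fresh.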

Now you know why understanding (and identifying) variation is the key to making significant improvements in your services. However, you don’t yet have a tool or a specific set of steps to consistently identify variation. Well, it’s a good thing that you read this blog, because that’s exactly what we’re going to cover tomorrow. But for today, that’s my Niccolls worth!


The Customer Is Always Right… Sometimes


Yesterday we talked about the Baldrige Award and how one firm won the award, but shortly afterwards went bankrupt. This case raised questions about the value of quality control, or at least of the Baldrige Award. My contention is that the abilities of today’s quality tools and techniques are not in question. They work (really [I mean it, really {REALLY!}]). The real question is whether the project has identified the right quality measures to improve. Who dictates which measures of quality are right or wrong? Quality must always be determined by the client! Remember, quality is the set of factors that makes your products and services attractive to your clients. You need to talk to your clients and be sure that the changes that you have targeted are the changes the client wants. Of course, there is one little problem. Not every client knows exactly what they want. And client needs will change over time. This is especially the case for corporate services, where the client has never had a chance to choose between different providers. They don’t necessarily know what the options are. With all of these uncertainties, how can you be sure that improvements stay on course and stay aligned with client desires?

Let’s look at the car industry for some relevant examples. In the 60’s the Japanese entered the U.S. car market. Big U.S. car companies didn’t react very strongly to this because the imported cars were small and not very feature-rich. A college student might buy one of these funny little cars, but Big Car firms knew in their hearts that this wasn’t affecting new car sales; at most, students were buying a new Japanese car instead of a used American car. This didn’t look like much of a threat, and it would probably go away if the Big Car firms just stuck to their guns and produced the kind of car that they knew buyers should want. And maybe that’s what would have happened, except for two important quality events.

When the college students of the 60’s grew into the workforce of the 70’s, they were ready to trade up to better cars. But what was a better car? They had outgrown the toy cars of their college days, but U.S.-made “upgrade” cars weren’t as reliable. And Japan was beginning to move its cars upmarket. Even more importantly, over the previous couple of decades the oil industry had moved production to “higher quality” oil sources (easier to drill, higher grade, larger reserves, lower cost, etc.), mostly in the Middle East. No one realized at the time that these oil sources had another quality… volatility! This time the volatility took the form of a group that wanted to improve profits for oil-producing nations by reducing the amount of available oil, thereby increasing the price. While the Organization of the Petroleum Exporting Countries was only known within the oil industry then, today we all know about OPEC. Through their actions and U.S. counteractions, the oil crisis of the 70’s kicked off with skyrocketing oil prices. Just a few years earlier cheap and available gas was a given. Now that the price was rising, car buyers suffered, especially those who had moved to newly built communities that required long drives to work and to shopping. U.S. cars lagged behind in efficiency, costing U.S. firms additional car sales. By the 80’s gas prices eased, and U.S. car firms created SUVs and other new large-car categories. Here the U.S. firms not only read the consumer correctly; by creating the SUV category they anticipated consumer desires. Big U.S. cars got bigger, while European and Japanese cars grew more slowly (take a trip to any large non-U.S. city and you quickly realize how large our cars are). But of course nothing lasts forever, and today’s record gas prices once again have consumers looking for smaller, more efficient (and less profitable) cars.

Whatever seems like a “permanent” client demand can suddenly evaporate (perhaps due to changes in technology), client populations can turn over rapidly (mergers, the creation of a new revenue-earning group), or macro-economic factors can change your business (business shifting from the U.S. to Europe). So work carefully with your clients to understand what they need, and keep refreshing your understanding of those needs. Still, there are going to be times when you see a new opportunity or a newly emerging type of service that you need to invest in before the demand exists. After all, a manager is responsible for anticipating new trends, not just following them. It’s a balancing act, where you always need to respect the voice of the client, but you sometimes need to pay attention to your own voice. And that’s my Niccolls worth for today!


Pipe Dreams: Pursuit Of The Right Quality Metric


Quality can mean different things, often many things, to any individual. The term “quality”, by itself, doesn’t necessarily spell out clearly what you mean. Identifying what quality means (in the context of a specific project) is vital to any improvement effort. It’s very important to move away from the term “better”, because better doesn’t mean that the qualities you are improving are the ones your clients would pay for. Certainly, there are “better” hotels that I wouldn’t necessarily visit. For example, I would agree that if two hotels were essentially identical, but one had a larger pool, that hotel could legitimately call itself better; but if I did not swim, it wouldn’t be better for me. I wouldn’t go out of my way to visit the better hotel, and I certainly wouldn’t pay a premium to stay there. In providing corporate services, your clients probably have a similarly “me-centric” view of quality.

For more than 20 years the U.S. Department of Commerce has sponsored the Malcolm Baldrige National Quality Award. This award was created to increase the focus on quality in U.S.-produced goods. At that time Japan and other countries were perceived as producing products of higher quality, resulting in more and more foreign products being purchased by Americans. Ironically, foreign competitors (especially in electronics, automobiles and complex goods) were outpacing the U.S. in quality because they were faster in adopting measurement and quality techniques developed by U.S. corporations and universities. The Baldrige Award was intended to raise the visibility, and the number, of quality initiatives in the U.S. When the award was created in 1987, it almost immediately caught the attention of the Wallace Company, a Houston, Texas-based pipe and valve supplier for the oil industry.

The leadership of the Wallace Company wanted to make their firm a beacon of quality. Cutting-edge Quality Control techniques would provide the improvements, and the Baldrige Award would both validate and publicize their progress towards perfection. In 1990, they won the Baldrige Award. In 1992, the Wallace Company filed for bankruptcy and was acquired a short time later. Needless to say, this is not the reward that Baldrige winners expect! Many pieces have been written on what went wrong, and many mistakes can be found. Some writers felt that the enthusiasm of Wallace’s executives to win the award resulted in inappropriate gifts and perks offered to Baldrige evaluators (which are no longer permitted under “improved” Baldrige Award rules). Others thought the bankruptcy resulted from macro changes in the economy. For me, the most convincing argument is that “quality” was not correctly defined. What do I mean? From what I’ve seen and read, the Wallace Company had an overly broad interpretation of quality. They wanted to produce a product that was better in every way than their competitors’. Wallace anticipated a coming tightening of the market, and therefore wanted to be the “best” to avoid a shrinking market in the future. That’s a worthy goal, but by simultaneously raising a large number of quality indicators, other factors might also change, such as cost. Somewhere along the line, there was a misalignment between what the executives valued and what the clients valued. The additional quality created by their improvements did seem to be acknowledged by their clients, but the specific value that clients saw in Wallace being “better” did not align with their product pricing. The wrong targets were put in place. The targeted levels were achieved, but these wrong targets led to the wrong decisions, and the wrong results.

From this I think we have one very important takeaway… quality can’t just be a “make it better” exercise; a quality improvement project needs to be more than quantifiable, and the change you’re creating must be a change that is valued by your clients. The tools and techniques that are available to you today will unquestionably drive change. They will move your product from where it is to somewhere else. You have to be sure that these changes move you towards the right goal. In the case of the Wallace Company, it appears that the quality change efforts were never aligned with client expectations; or they began with alignment, and either the direction of the improvements changed or the needs of the clients changed, and the improvement program did not make appropriate adjustments. By incorrectly identifying and tracking the client’s definition of quality, the Wallace Company learned a very painful lesson. However, Wallace’s pain is our gain. It created a very powerful case study about how even effective change can be dangerous. Remember, tools like Six Sigma are REAL! They will change the way that your operations work. Powerful tools can build or destroy value, depending on which changes you bring about. Choose carefully, and keep checking with the client! And that’s my Niccolls worth for today.


The Mind Muddle Of Multi-Tasking


Multi-Tasking. We’ve all used this term for doing more than one thing at a time. As in, “Look at me Ma… I’m doing the laundry and watching TV… I’m Multi-Tasking!” Most of us have even been told to do more of it, “For this year’s improvement goals… improve your Multi-Tasking, learn to Multi-Task better!” Sounds like a great idea… do a lot of things at the same time and get more done every day! And the great thing is that Multi-Tasking is FREE! That’s right, just by Multi-Tasking you can do more, get it done faster, improve your quality, and… ummm… it’s free. I mean it is free, isn’t it? We’re not actually sacrificing anything? Every  teenager and college student will tell you that they can talk on their cell phone, write email, IM friends, AND do their homework at the same time. It is just the grown-ups that have been left behind; maybe we’re just too 20th century? I wonder what the research says?

Interestingly, there’s not a lot of research on Multi-Tasking, and a lot of what’s out there is very recent, so there hasn’t been much time for rebuttals from the scientific community. Eyal Ophir was the lead author of a Stanford study on Multi-Tasking. The study found that multi-taskers performed poorly, produced low-quality work and had poor memory retention of what they did. According to Eyal, “We kept looking for what they’re better at, and we didn’t find it.” My own reading of the results is that multi-taskers are training their brains for a short attention span and easy distraction. Not the skills they thought they were getting. One of the other researchers, Clifford Nass, summed it up this way: “Heavy multi-taskers are often extremely confident in their abilities, but there’s evidence that those people are actually worse at multitasking than most people.” Multi-tasking even makes you bad at… MULTI-TASKING! It’s just not the way that brains work. On a related note, back in the 50’s studies at Harvard indicated that the brain can handle a maximum of about seven data points at the same time before it starts to break down. One of the practical results of these studies was the standardization of telephone numbers at seven digits, which everyone used to be able to remember.

A study at UCLA compared memory when subjects were and weren’t actively multi-tasking. When they multi-tasked they performed more poorly on memory tests, and what little they did remember was more difficult to use in a new context. One of the study’s researchers (Russell Poldrack) concluded that some tasks can be multi-tasked together (listening to music while exercising), but when you multi-task while learning (studying, on-the-job training, etc.) the part of the brain that processes the learning and the part that stores the result… shifts. Different structures are used, resulting in greater memory loss over a shorter time and a more limited ability to re-use and generalize what you’ve learned. In other words, even if you remember something you don’t “learn” it in the traditional sense. This learning does not become a foundation for future learning.

There is one area where multi-tasking research has been pretty active: the use of cell phones while driving. Studies tell us that using a cell phone while driving reduces your reaction time, judgment and general intelligence so much that it looks like you’re “under the influence”. Multi-Tasking really does dumb you down. It’s a serious enough issue that states have passed laws to limit cell phone use while driving. At first the laws focused on getting cell phones out of your hands (using hands-free technology), but later research said that the real conflict was not about hand-eye dexterity; it was more a function of how the brain works. And when you’re using a cell phone, your brain just doesn’t work as well.

What does all this mean, and how does it apply to the workplace? Think of all the individuals who work in your group. Is there a lot of on-the-job training? Are you finding that intelligent, high-potential people are making mistakes that they shouldn’t? Do you send individuals who make mistakes back to training, only to find that they seem cooperative but are not correcting their mistakes? Your organization may be suffering from the effects of Multi-Tasking!

  1. Multi-Tasking kills productivity. When we Multi-Task we think we are creating value, but we’re actually destroying it, producing low quality work… and not that much of it.  
  2. Some Multi-Tasking is less damaging. The studies are preliminary, but tasks requiring little concentration or learning can be effectively Multi-Tasked.  
  3. Multi-Tasking interferes with development. Multi-Tasking reduces the amount of newly learned knowledge and it reduces the flexibility and reusability of that knowledge for future learning, impacting development.
  4. Technology is the “gateway drug” of Multi-Tasking. Cell phones, email, instant messaging, social networks… are at the center of multi-tasking studies, because these are the technologies that interrupt us every few minutes.
  5. Get fast, fast relief: If your staff has become too distracted, if potential managers are not developing quickly enough, consider a few experiments in your operations. Select a small group and cut back on their Multi-Tasking for two weeks. At the end, ask them how they feel and whether they think they work better. Measure their productivity and see if it backs up their self-observations. Here are some options:
    1. Email: Does everyone’s email give them a “pop-up” every time a new email arrives? Turn off the pop-up.
    2. IM: Do your workers get instant messages on their desktop? Turn this off.
    3. Internet access: While this doesn’t necessarily “ping” you when it’s off, the ease of answering questions through Google can create a “need” for searches that doesn’t really exist.
    4. Cell Phones and iPods: AH, the crack-cocaine of multi-tasking! Ask everyone to check in their phones during the day; they can change their voice mail to ask callers to contact their supervisor in case of emergency. Some staff may not be able to comply due to a sick child, relatives in town, or some other issue. If so, you can always substitute another candidate.

These are just some suggestions, but you may find that after some initial anxiety about separation from their technology, workers are happier without all the distractions. The next time you have a staff meeting, this might be a good item for a few minutes of discussion. Does everyone think they Multi-Task well? Are some people stressed by too much communication? It’s worth asking a few questions… at least that’s my Niccolls worth for today.


Psychics Are Standing By To Install Your Laptop!


What do your clients really want from you? Sometimes a client will instruct you to do something, such as editing a document. When you fail to follow their instructions (because they violate a rule, such as English grammar) you can be called to task. Similarly, you may follow their instructions exactly and still be called to task (because you violated a rule, such as English grammar). With seemingly random exceptions of what is right and wrong, how can you run a metrically driven group? I mean, really!  What are you supposed to be, some sort of telepath who automatically knows what every client wants, even when they don’t tell you? Hmmm…. Telepathy. Unspoken communication. Directly reading the intentions of another through mind-to-mind contact! Well, I suppose it could be done… you know, with enough wiring and some really big catheters, but… wait… there may be another way!  I see an image! It’s somewhere back in time… we need to go BACK, Back, back…

One day, when I was a young IT manager, I needed to talk to an investment banker who was tied up with a client call. While I waited outside of his office, I started to talk to his secretary about another project. The banker then nodded in his secretary’s direction and said, “Coffee!” I followed her to the pantry. She poured out some coffee, found a container of cream (not milk, not half and half), carefully measured out a tablespoon of cream, added half a teaspoon of sugar, and went back to her banker. She gave him his coffee and he drank it while continuing his call. I thought to myself, “He didn’t tell her how he wanted his coffee, yet she prepared it in a very specific way and it was exactly what he wanted. How could she know?” I came to the only possible conclusion… she sold her soul to Satan for coffee clairvoyance. I asked her for details and she said, “How do I know what he wants? How do I know? Because he’s been ordering his coffee that way every day for the last 17 years. That’s how I know!”

Hmmm… let’s call her version theory “B”. Either way I sense a pattern, possibly a best practice, in this story. We all have habits, patterns that predict preferences and future actions. Before corporations produced services through centers, the secretary was the universal widget: message taker, memo writer, dictation machine, CRM interface, answering machine, form filler, and much more. Some secretaries supported one executive and some supported a few. But it was a relatively intimate relationship. When a new secretary joined a group, there was no formal knowledge transfer process; instead, secretaries would arrive with a standard set of skills (typing, answering phones, shorthand, etc.) and learn by doing. Then it all changed. In the 80s corporations began deploying computers to the desktop. For secretaries it began with word processing, but that was quickly followed by many other applications. Few secretaries had the training or experience to handle all of these tools, and products (such as voice mail) began to automate secretarial functions. Applications such as PowerPoint required more training than most secretaries possessed, and quickly grew in scope (color, layout, graphics, automation, sound, storyboarding, etc.), positioning these tools well outside the scope of secretarial services. Corporations responded with a wave of “centerization”, creating specialized groups that were capable of creating reliable, high volume, technically sophisticated products. As is usually the case, this transition came at a cost… the personal connection, the telepathy. You may now provide a great service, but you’re not part of the “real” team.  How do we keep the efficiency and bring back the telepathy?

I think the answer can be found in documented cases of “telepathy”. In the earlier part of the last century, much was written about telepathic animals: horses, dogs and other domestic animals that could count or read your mind. Conveniently, this is a well-researched subject. It turns out that most of the telepathy is really an example of unusually focused animals that closely observe their masters. They are so attentive that they follow subtle non-verbal cues that their masters (and other humans) didn’t notice… a slight nod, the movement of the eyes, a twitch in the hands. If animals can pick up on these cues, why can’t we? Well, we do pick up this information, when we’re exposed to it.  Have you ever heard that most communication is non-verbal? Just like these animals, when we are face-to-face with other people we hear the tone of their voice, see their body language, maybe even see clues in their office (a memo on someone’s desk, an award on the shelf, a picture of a child’s graduation) that explain a change in mood or behavior.  Our psychic animals lost their special abilities when their master was removed from the room or a screen was put up to block physical cues. In a way, that’s what happened when services moved into centers. The literature says that email misses 70-90% of potential communication; talking on the phone adds back 10-15% of the information (tone of the voice, little pauses), but if you want the remaining 50% or more of the information (body language, facial expressions, new elements in the environment) you need face-to-face interaction.   What are your options? 

  1. Artificial telepathy: Many Point of Sale (POS) applications try to duplicate telepathy by providing information on client preferences. For example, in a beauty parlor the system might keep data on the times and days you like to have your hair cut, who you prefer to work with, other services you typically want during a visit, etc. By having your history available when you call, the staff can anticipate what you want. For the services that you provide, would it help to know individual preferences (always proofread my work, I prefer the following research sources, never send a tech to my desk between noon and 3pm on Tuesdays)? If you use service tracking or appointment scheduling software you may already have (or could add) preference tracking. What are the critical preferences that are relevant to your service?  
  2. Walking the floor: All managers can get stuck in their office. You get so focused on tasks, so heavily scheduled, that you never leave your office. When you become a prisoner in your own office, you lose an important connection to everyone out on the floor. You slowly lose your telepathy and sometimes your empathy. Get on the floor where your clients work. I don’t mean that you need to make more appointments with specific people… just get up from your desk and spend up to an hour every other day walking around the floor(s) your clients work on. Wave to people that regularly use your service. Say, “Hi… haven’t seen you for a while. Anything my group can help you with?” Or, “You had a lot of work last week… how did it turn out?”  You will be surprised to see how quickly your telepathy improves. Which of your direct reports walk the floor? What have they learned?
  3. Liaisons: Telepathy works best in person. If you put some of your staff where your clients physically reside… telepathy naturally follows. The downside is that this is expensive, both in terms of staff cost and real estate, and needs to be very targeted. Perhaps your services are mostly consumed by just a few users or groups, and that’s where you can focus. Surprisingly, liaisons can work very well with outsourcing. If you’re reducing your overall cost through outsourcing, you may be able to redirect a portion of the savings to increase telepathy. By addressing client satisfaction, you increase the likelihood of success.  
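The preference tracking in option 1 can be sketched as a tiny lookup table. Everything here (client names, preference keys, the class names) is hypothetical, invented for illustration; real POS or scheduling software would supply its own schema:

```python
from dataclasses import dataclass, field

@dataclass
class ClientPreferences:
    """One client's standing preferences: the 'artificial telepathy' record."""
    client: str
    prefs: dict = field(default_factory=dict)

class PreferenceBook:
    """Stores and recalls per-client preferences, the way a POS system might."""
    def __init__(self):
        self._book = {}

    def note(self, client, key, value):
        """Record a preference the moment you observe it."""
        self._book.setdefault(client, ClientPreferences(client)).prefs[key] = value

    def recall(self, client, key, default=None):
        """Look up a preference before starting the client's work."""
        record = self._book.get(client)
        return record.prefs.get(key, default) if record else default

# Hypothetical usage: two observed preferences for one client.
book = PreferenceBook()
book.note("J. Smith", "proofread", True)
book.note("J. Smith", "no_desk_visits", "Tue 12:00-15:00")
print(book.recall("J. Smith", "proofread"))
print(book.recall("A. Jones", "proofread", False))  # unknown client falls back to the default
```

The design point is the `default`: a preference system should degrade gracefully for clients it has never seen, rather than blocking the work.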

Telepathy doesn’t just happen on its own; you need to make it happen. When you do, I predict: fewer complaints, better client satisfaction, and more management time to focus on your services. And it certainly doesn’t take a psychic to know… that’s my Niccolls worth for today!


The Truth About Six Sigma (Part II): Too Many Sigmas Spoil The Service!


Where was I? Oh, now I remember! All the talk about Six Sigma… it’s the latest thing, it’s just a bunch of old techniques, it can perform miracles, it can do a lot of damage if you’re not careful.  There’s a lot of meaningless chatter on this subject, but fear not… I’m here to help you! When you sit back and put all the information together, the answer is pretty clear. Of course, to fully understand the issue you need to also read yesterday’s blog, but when you put the two together I know that you’ll agree with the conclusion I’ve reached. And what is that conclusion? Simply put, you NEED to question if Six Sigma is right for your operations! Let’s take a closer look, and you’ll see why!

Every system or philosophy in existence has problems. Your current operation has problems. The real question is, “Will a new system work in my environment?” In almost every case, Six Sigma only works in PART of your environment. Why? Well, the problem is built into the name… Six Sigma. That means a goal of 99.99966% perfect, or about 3.4 defects per million opportunities. While improvement is a good goal, this goal is neither realistic nor desirable in a service organization. When Motorola created Six Sigma, the organization was already producing at 3 Sigmas (99.7% perfect). In corporate service groups, some management reports may show performance levels of 95%, but most of these reports are… unfortunately… wrong. Real service levels are somewhere in the 60%-80% range. Scattered around you may find a few services that truly perform at 97% or 98%, but they are very rare and they may not really be true “service” groups.  

The difference between an industrial function and a service function (at least for this discussion) is the role of the client in the production process. To understand this, let’s look at one of the success stories of Six Sigma… microchip manufacturing. The microchips in our computers (CPU, RAM, etc.) are marvels of modern industry. In a device that’s smaller than your fingernail, there can be more than a billion transistors, and every single transistor needs to work reliably for the life of the chip. This industry couldn’t exist at Three Sigma; every chip would have so many flaws that it would not function. When Motorola began to look into Six Sigma, typical microchips had a few thousand transistors. By the next decade, chips had moved to a million transistors. In order to increase the complexity of a product by 1,000 times and still have it function reliably, an equally massive improvement in quality (reduction of errors) also needs to take place. To do this, absolute control needed to be imposed on the production environment. These special places are called “clean rooms”, with the air specially filtered to remove any particles, a system of airlocks to get into and out of the rooms, and the personnel in “bunny suits” (full body gowns, masks and hairnets). Pencils are forbidden (a particle of graphite could disrupt the day’s production) and everyone must speak softly or the most delicate equipment cannot be fully calibrated. Does this sound like your production environment? Are you trying to block the client from access to the service process? For most client services the client is clearly “inside” the production facility. Bankers and lawyers modify documents even while they are being turned around by the document center. Computer support groups go to user desks and wait (hope?) for the user to return so they can access their laptop. 
A corporate library often has to negotiate with end users to guess the identity of the firm being researched (e.g., a senior executive hands off a request to a junior executive, who passes it to a secretary, who places the request… losing data at every step). In a service, the client MUST be part of the process. And neither actually nor metaphorically are any of us going to get our clients into bunny suits.
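The transistor arithmetic above is worth making concrete. Here is a back-of-the-envelope sketch, assuming (purely for illustration) that every transistor independently meets the same 99.7% “Three Sigma” standard:

```python
# Back-of-the-envelope: why chip-making can't live at Three Sigma.
# Assumes each transistor works independently with the same per-unit yield.
per_transistor_yield = 0.997     # "Three Sigma" quality, roughly 99.7%
transistors = 1_000_000          # a 1990s-era microchip, per the text above

# On average, 0.3% of a million transistors are flawed.
expected_defects = (1 - per_transistor_yield) * transistors

# Probability that *every* transistor on the chip works.
chip_survives = per_transistor_yield ** transistors

print(f"Expected flawed transistors per chip: {expected_defects:.0f}")
print(f"Probability a chip has zero flaws: {chip_survives:.3g}")
```

At Three Sigma, the average chip carries thousands of flawed transistors, and the chance of a flawless chip is effectively zero, which is exactly why the industry had to push quality far beyond it.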

Surely the literature on Six Sigma must have something to say about this! And it does. It says, “No system can be Six Sigma unless all parts are Six Sigma.” Let’s go back to our chip plant. Having a clean room is just one part of the process. When they create silicon chips, they need absolutely pure silicon as a base ingredient. Suppliers who couldn’t meet the purity requirements were dropped. Unexpected impurities in the silicon would generate faulty chips, the laser etching devices must make incredibly precise cuts for the chips to work correctly, adhesives holding chips onto circuit boards must resist daunting heat conditions, and on and on for every element in the production process. Let just one element slip and quality may drop by orders of magnitude.  In the industrial world, the Procurement department plays a pivotal role in converting manufacturing needs into vendor relationships and contractual requirements. In the service world, this is impossible.  In E-Discovery, the process of reviewing documents related to a lawsuit requires documents to be provided by the client, adherence to court schedules (which a judge can, and will, change at any time), re-scoping of the number of documents under review (by the lawyers, by the courts, because of changes in legal jurisdiction, etc.). None of these agents are under the control of Procurement, nor are they paid by the “manufacturer” and subject to replacement, leaving services with few levers over the materials used in production.

One last, but vital, point. An industrial firm is highly dependent on the use of very specialized equipment. This equipment becomes more sophisticated with every generation, usually incorporating all the features of the previous generation plus new features, higher speed and a lower price. Because the equipment is owned by the manufacturer, they will continue to use this generation of equipment until it makes sense to replace it with a newer generation. Computers, for example, double their speed about every two years. Services, while they may use computers or phones or other relatively generic technologies, have many more elements of their processes carried out by people. With a large number of the processes human-dependent, rather than machine-dependent, the rate of improvement is unlikely to meet the “10-times reduction in errors every 2 years”… the standard for Six Sigma. Also, machines do not walk out the door and take their embedded business processes with them. We can capture best practices and train new staff (or at least some of them) up to the level of the best staff that left, but the training treadmill counts as “waste”, whereas replacing a five year old machine with a new machine that is half the price and can produce 10 times the product is “value”.      

Six Sigma is not really a single system. It is a collection of processes that has been refined and expanded over time, based on what works and what doesn’t. In fact, the next Six Sigma specialist you work with was probably trained in Lean Six Sigma. By incorporating Lean methodology, it addresses projects focused more on speeding up turnaround than on reducing costs. As time goes by, and Six Sigma moves deeper into service delivery (instead of just industrial processes), it will probably incorporate additional tools and might even break into “Six Sigma for Services” or “Six Sigma for IT” or other more specialized versions. What have we learned?

  1. Six Sigma provides tools that can benefit a service environment, but its industrial origins set goals and objectives that are unrealistic, even inappropriate, for corporate services.    
  2. Best in class industrial production requires an increasingly isolated, inflexible and tightly controlled environment; best in class services incorporate the client’s open environment, require flexible processes and changes to the production process to accommodate the changing needs of clients.
  3. Six Sigma uses Procurement to eliminate vendors which cannot meet Six Sigma standards, thus avoiding new errors and maintaining quality. Inputs to service processes come from clients, customers, regulators, and other sources that are not paid and therefore, not subject to the Procurement process.        
  4. Industrial processes are primarily performed by equipment, which usually offers orders of magnitude of additional value when replaced. Service processes are primarily performed by people, who not only offer relatively limited room for improvement when they are replaced, but may initially offer less value than their predecessors (until they are trained in the new environment). Equipment replacement offers an immediate increase in value; people replacement entails hiring and re-training costs, generating waste.

In the end, is there value to Six Sigma in service organizations? Yes, there is. Can anyone point out a Fortune 500 service with Six Sigma quality levels? (OK, 5 Sigmas? Anyone have 4 Sigmas?) Most services are battling to get to 3 Sigmas, and winning that battle may produce a service with high technical performance that mysteriously fails to improve client satisfaction. We shouldn’t throw out what’s good about Six Sigma just because it has flaws, but we need to be ready to adjust… perhaps rethink… the targets and goals of Six Sigma. It’s like dividing the circumference of a circle by its diameter. Technically it’s 3.141592… followed by an infinite number of digits. But for most real-world purposes it makes more sense to just use 3.142. We need a Services version of Six Sigma that is not as complicated as Pi. And that’s my Niccolls worth for today!
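For concreteness, the sigma levels being argued about map onto defect rates. A quick sketch, using the standard normal CDF and the conventional 1.5-sigma long-term shift found in Six Sigma tables (without that shift, 3 Sigma corresponds to the roughly 99.7% figure quoted earlier):

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF, expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level,
    using the conventional 1.5-sigma long-term shift from Six Sigma tables."""
    return (1.0 - normal_cdf(sigma_level - shift)) * 1_000_000

for k in (3, 4, 5, 6):
    print(f"{k} sigma -> {dpmo(k):,.1f} DPMO")
```

On this scale 3 Sigma is roughly 66,800 defects per million and 6 Sigma is about 3.4 per million; the gap between where most services sit and where the name “Six Sigma” points is four orders of magnitude.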


The Truth About Six Sigma (Part I): Bake it into your Operation!


Where was I? Oh, now I remember! All the talk about Six Sigma… it’s the latest thing, it’s just a bunch of old techniques, it can perform miracles, it can do a lot of damage if you’re not careful.  There’s a lot of meaningless chatter on this subject, but fear not… I’m here to help you! When you sit back and put all the information together, the answer is pretty clear. Of course, to fully understand the issue you need to also read tomorrow’s blog, but when you put the two together I know that you’ll agree with the conclusion I’ve reached. And what is that conclusion? Simply put, you NEED to adopt Six Sigma in your operations! Let’s take a closer look, and you’ll see why!

Every system or philosophy in existence has problems. Your current operation has problems. The real question is, “Will some other system take me farther than what I’m using today?” In almost every case, Six Sigma will do this. Why? Well, it’s not really a single system. It is a collection of processes that has been refined and expanded over time, based on what works and what doesn’t. In fact, the next Six Sigma specialist you work with was probably trained in Lean Six Sigma. By incorporating Lean methodology, it addresses projects focused more on speeding up turnaround than on reducing costs. As time goes by, and Six Sigma moves deeper into service delivery (instead of just industrial processes), it will probably incorporate additional tools and might even break into “Six Sigma for Services” or “Six Sigma for IT” or other more specialized versions.

Think of Six Sigma as a cookbook. It provides detailed instructions on how to prepare a large number of time-tested recipes that guarantee satisfaction. Have you ever picked up a cookbook and scratched your head when you read an instruction like “pebble the butter”, or wondered what you’re supposed to do when a recipe requires ingredients that are not locally available?  Six Sigma explains every single step. For those foodies out there, you may have seen the show “America’s Test Kitchen”. Go watch an episode on YouTube. They pick a recipe, identify potential problems with the recipe (or inconsistencies in the results), experiment by modifying different variables and then determine which set of variables produces the greatest positive change. Roasted potatoes not crispy enough? Which variety of potatoes are you using, how thickly are they sliced, what type of oil is used and at what temperature? Very, very Six Sigma-ish. Have you ever had an aunt or a grandmother who made some truly wonderful dish, but when you got the recipe it never came out quite the same way? Maybe it was her old stove or the antique pots, or maybe she left out just one critical step or ingredient. Whatever it was, you just couldn’t get it right. However, if you tried using methods of observation, experimentation and measurement you just might get a lot closer to the family recipe (or maybe even an improved version). Still not convinced that your kitchen could benefit from Six Sigma? Just try the America’s Test Kitchen version of Buttermilk Waffles and you’ll be a scientific improvement evangelist!

The real argument for Six Sigma is simply that it works. Just like any cookbook, you may like some recipes more than others, or certain cuisines may be more to your taste than others. You can find the dishes you like, and with a little experience under your belt you can even improvise. The cookbook means that every time you’re faced with a new culinary challenge, you don’t have to figure out everything on your own and experiment on your dinner guests. Every cookbook you see today builds on previous cuisine (all of that French “baste it in the oven for three hours in a bucket of butter”), but may also incorporate more recent techniques (steaming vegetables in a microwave). Six Sigma is continually adding new information and techniques, but is based on time-tested “recipes”. In fact, the very core of Six Sigma is the bell curve. Do you remember that from math in college? It’s a graph with a big hump in the middle (that represents the majority of the cases you are examining) that tapers off to the left and right. This “bell” is a normal distribution. For example, a “C” grade would be in the middle of the curve, and “F” and “A” grades would be to the far left and right… most people get a C, some get D’s and B’s, and just a few get A’s and F’s. The farther you move to the right, and away from the middle (i.e., the more Sigmas), the better the quality of your grade. Well, the Bell Curve started getting used as a term in the late 1800s, the methodology was used for astronomy in the early 1800s, and the underlying math was developed in the mid 1700s. About the same pedigree as a classic French meringue for dessert. What have we learned?

  1. Working without a cookbook is difficult, leads to reinventing existing recipes and could give your clients indigestion if you’re not careful.
  2. Like any recipe book, Six Sigma is not perfect. We will all like some recipes more than others, and sometimes we will need to alter an old recipe to make it work for us.
  3. Six Sigma is a very large and flexible cookbook. It is continually incorporating new methods, and much of the core process is very old, going back to the earliest days of statistics.
  4. Looking at an existing process, and applying a step by step Six Sigma approach, can identify problematic or “missing” steps that can improve your processes (or waffles… as the case may be).

Six Sigma can benefit your organization, today! Making the decision to move full speed ahead should be as easy as pie, and that’s my Niccolls worth for today!


A Word To The Wise: Talk Isn’t Cheap!


When you plan to outsource your services, you must balance many competing needs. You need the best services at the best prices, and vendors need the best profits for their investors. You can leverage the lower cost of an offshore or offsite location to add staff and improve service levels, but you don’t want to load your outsourcing service with unnecessary inefficiencies. Somewhere in this cloud of wants and needs, there is one more item that you should consider. Do you want your users to speak directly with the offshore staff, or do you want to have them speak to intermediaries? How you answer this question will impact the cost of your service, and how it functions. Surprisingly, this question is often not asked until after the price for services has been negotiated. When this question is not asked, performance problems often result, leading to redesign or repricing of the services. Let’s take a closer look at two different communication models to see how each affects the overall service model.

The core communication issue is whether your users will communicate directly with production staff, or through a small group of dedicated staff who may or may not actually produce the work product. This question is not limited to services that have been outsourced; when any service is created you need to consider it. High volume, same-day turnaround services… document centers, library research, transcription, PC helpdesk, and call centers… need a structured communication framework to ensure that there is little variation between work products, even if different people perform your work. Also, when work turns around in just a few hours, clients are often anxious about missing deadlines and want frequent updates on the status of their work. It makes sense for these communications to go through an intermediary with superior communication skills. Fewer points of communication also make it easier to standardize communications. However, your users prefer direct communication with the person working on their project. Your customers know that when more people are involved in communication, it takes more time to answer a simple question, and more people in the communication process means more opportunities for a mistake or miscommunication. You could still have both direct communication and high quality communication, but it requires more training for more users and hiring more expensive, harder to find staff. If your entire service is just a handful of staff, it may be effective to have direct communicators. When the staff is much larger, it not only becomes more expensive to hire and train for all skills, it can be very difficult to find new recruits with all the necessary skills (especially if there is a ceiling on compensation).
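The “more people, more opportunities for miscommunication” point can be quantified with a toy model. The 5% per-hop error rate below is invented purely for illustration; the shape of the result, not the numbers, is the point:

```python
# Illustrative toy model: each extra handoff adds another independent
# chance for the request to be garbled along the way.
def miscommunication_risk(handoffs, p=0.05):
    """Probability at least one hop garbles the message,
    assuming each hop independently fails with probability p."""
    return 1 - (1 - p) ** handoffs

direct = miscommunication_risk(1)            # user -> production staff
via_intermediary = miscommunication_risk(2)  # user -> coordinator -> production staff
relay_chain = miscommunication_risk(3)       # exec -> junior -> secretary -> center

print(f"Direct:       {direct:.1%}")
print(f"Intermediary: {via_intermediary:.1%}")
print(f"Relay chain:  {relay_chain:.1%}")
```

The risk compounds with every hop, which is the users’ intuition exactly; the counterweight, as above, is that a skilled intermediary may have a much lower per-hop error rate than an untrained direct communicator.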

The model that is best for you will depend on the culture of your firm, and your industry.  Law firms are typically based on a secretarial model. Lawyers speak directly with secretaries. Each lawyer may see this as centralized communication, since each lawyer communicates with just one or two secretaries. In fact, it is a decentralized model, since every secretary deals with a different lawyer, who may want similar projects performed to very different standards. Many failed outsourcing plans changed their service from a decentralized to a centralized model without allowing for this shift in communication. In Investment Banks, services have already made the transition to dedicated centers with a centralized model. Still, users might welcome a more direct communications model. For some services, this could be a key improvement… if it is carefully thought out and carefully implemented. Neither model is better. Your current communications model is usually just a reflection of the available resources, management and budget when your service was created. Outsourcing provides an opportunity to rethink how your service works.

Outsourcing projects are moving towards a more “centralized” model, because it usually lowers the cost. But this measure of cost should not drive your model; real cost reflects not just what you pay, but what your purchase is worth. When a firm begins outsourcing, the first projects are usually the easiest. If your service is later in the project list, your service may be more difficult to outsource, or may serve a more sensitive client base. Whatever the reason, later phase projects will be under pressure to conform to the model used for previous projects. Be very sure that the staff has the right model and the right communications skills to be successful.

Of course, there aren’t just two models. If you are going to expand communication, you don’t need to completely change your model. For example, you might just expand the number of dedicated communicators, or you might allow all production staff to communicate… but only for simpler subjects, leaving more sophisticated questions to a dedicated communication staff. Give this some thought as early in the process as possible, and remember… BEFORE YOU AGREE TO A RATE FOR YOUR SERVICES, explicitly state your communications model! A few other issues to consider:

  1. You can have both distributed communications AND consistent communication: However, it will bring a higher overall cost. If you are offshoring, you can redirect some savings to fund this service. You should still be able to lower cost, but not quite to the level of a centralized communications model. Ask your vendors to quantify and explain the incremental cost. Have you assumed that your communications staff must stay in your highest cost locations? If your outsourced staff was capable of higher level communication, how would that impact your total cost of operation?
  2. Additional time may be required: If you are building a center of considerable size, it may take additional time to source positions that have both the technical skills for production and the communication skills for customer service. Allow time for additional customer service training for all communicators. Also, if the location is offshore, additional time may be needed to recruit staff that have good communications skills, but that don’t have an accent.
  3. Listen to your clients: Are they asking for more contact with the production staff? If you have centralized communications, are they bottlenecked when volumes are high? Have you had complaints or quality issues because information was lost or misinterpreted in the handoff between the communicator and the production staff?

Think about your needs. Think about the feedback from your clients. Then decide which model is the best for your service. And that’s my Niccolls worth for today!


Outsourcing Agreements: Price Is Just An Outcome


If this is your first outsourcing agreement, you’re under a lot of pressure to get it right the first time. Whether you work in the Procurement department or you manage the Business unit, you need to convert the complex parameters of your business into specific numbers so that the contract, or at least a Statement of Work (SOW), can be developed.  There are some key parameters that describe how your operations work, that you need to communicate to a vendor in order for the vendor to perform the way you expect. In fact, you need this just so that the two of you are on the same page. The contract process is often hung up when both sides get stuck on this. The vendor is usually happy to help provide parameters for the contract, but you’re going to be VERY reluctant to accept these parameters because you don’t really have any reason to trust the vendor at this point. More likely than not you’re suspicious of any parameters a vendor provides, because you think (sometimes rightly) that the vendor is there to protect their interests and isn’t motivated to give you the absolutely best service at the lowest price. And that’s a big part of this process.

How do we fix this? First of all, try the process I’m suggesting during the vendor selection process, rather than after the contract has been awarded. Why? Because by the time you are at the award stage, when you’re down to just one vendor, the price has already been negotiated. Price was probably a key decision factor in vendor selection. However, did everyone truly understand what they were bidding on? Price is the outcome of the key numbers we’re talking about. If you’ve already selected your vendor and the price, but haven’t agreed on the other key parameters (quality of work and production level), you run the risk of buying a service that won’t deliver the product you need. That’s why it’s always better to get this agreed (and reflected in price) while you are still selecting the vendor. If the vendor is already selected, get these parameters in place as quickly as possible. Here we go!

Remembering that price is the outcome, the two other parameters you need to focus on are Quality and Production levels. If you have two different teams producing the same product, but one team needs to produce the work at 90% quality and the other at 99%, the team with the higher quality requirement needs to be better (more experienced, faster learners, more mature, more expensive). Down the road, the vendor can organically grow the team and judiciously add new workers, but to get your service off the ground the team will need a certain number of experienced workers. Likewise, the higher your expectations for production (should the team be 50% utilized… 60%, 80%?), the more experience your team needs. If a good team can run at 60% utilization before errors rise and work backs up, a great team may be able to work at 75% (but may require more expensive workers and managers). Other parameters, such as how long it takes to start work or the length of the work queue, will be driven more by the work process than by people selection. Let’s go on to the next step.
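The tradeoff above can be put into simple arithmetic. This is a minimal sketch with invented numbers (workload, rates, and utilization levels are illustrative assumptions, not figures from any real contract): a pricier team that sustains higher utilization can still cost less per month.

```python
# Hypothetical sketch: how target utilization and seniority drive team cost.
# All numbers (workload, rates, utilization) are illustrative assumptions.

def monthly_team_cost(work_hours_needed, utilization, hourly_rate):
    """Paid staff-hours needed to cover the workload at a given utilization."""
    paid_hours = work_hours_needed / utilization
    return paid_hours * hourly_rate

# A "good" team: cheaper people, but errors rise above 60% utilization.
good = monthly_team_cost(work_hours_needed=1000, utilization=0.60, hourly_rate=30)

# A "great" team: pricier people who can sustain 75% utilization.
great = monthly_team_cost(work_hours_needed=1000, utilization=0.75, hourly_rate=35)

print(f"good team:  ${good:,.0f}")   # 1000 / 0.60 * 30 = $50,000
print(f"great team: ${great:,.0f}")  # 1000 / 0.75 * 35 ≈ $46,667
```

The point of the sketch is only that price, quality, and production level move together; you cannot fix one without constraining the others.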

Utilization levels are a question of what you want to set. If you are assuming a 1:1 replacement when work is outsourced, you should assume (as a starting point) that the utilization levels are the same as onshore, whatever that is. You can (and should) build in a continuous improvement clause so that you benefit from a drive for better work processes. A better approach is to leave this parameter open. Tell your vendor that you want the best level of service at the best price, and let them fill in the blank. Why should you do this? Well, depending on the location and the local talent, it may be more effective for them to field a larger but less experienced team. Alternatively, they might want to keep resources in reserve for peak times. Let them decide how they work best. Then compare the responses from different vendors (who may be offering their services from different locations, with different labor markets). And remember to set a maximum utilization! Some vendors try to make up for an unprofitable contract by getting more work hours out of each worker (if the contract rewards them for doing this). If the vendor says they cannot do this… that’s OK. Just ask them why. The important thing is that the answer reveals the thinking of the vendor, which will tell you what you need to know about how well the vendor is aligned with you.
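The maximum-utilization clause is easy to monitor once you agree on the definition. Here is a minimal sketch, assuming utilization is defined as productive hours over paid hours and assuming a hypothetical 85% contractual cap (both the cap and the weekly figures are invented for illustration):

```python
# Illustrative sketch: measuring utilization and flagging a breach of a
# contractual maximum. The 85% cap and weekly hours are made-up examples.

MAX_UTILIZATION = 0.85  # hypothetical contract ceiling

def utilization(productive_hours, paid_hours):
    """Share of paid time spent on billable production work."""
    return productive_hours / paid_hours

weekly_report = [("week 1", 310, 400), ("week 2", 355, 400), ("week 3", 348, 400)]

for week, productive, paid in weekly_report:
    u = utilization(productive, paid)
    flag = "OVER CAP" if u > MAX_UTILIZATION else "ok"
    print(f"{week}: {u:.0%} {flag}")
```

Sustained readings near or over the cap are the early-warning sign described above: the vendor is squeezing hours out of each worker instead of staffing properly.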

All that’s left is the quality parameter. This is where most contract processes grind to a halt. If you’re outsourcing transcription, simple research, document processing, report generation, or any other high-volume / real-time service… it’s easy to provide a few samples of “perfect work”. Probably a handful of products dominate your production, and because of the volume you know what “perfect” looks like, even if you haven’t fully documented it. If you produce complex industry reports or develop software applications, the volume is much lower and the products look less alike. For high-volume work, produce a few samples of “perfect products” (let’s rate these 95%). For low-volume work, produce a few samples of “perfect project plans”. If you don’t have access to perfect products, dummy up a few. Done? Next, take your perfect products and introduce a few errors (on a project plan, things that went wrong). Not fatal errors, but the kind of work that gets to the client today and is “acceptable” (rate it 75%). Then add more errors (different types of mistakes than in the last case) that make the work just barely unacceptable (65%). Clearly indicate each error on the documents and explain the significance of each one (what it is, how important it is… just talk about each). Now, hand this over to your vendor and say, “I would like you to examine each document, review my comments on each error, and develop two documents: an annotated glossary (a list of each of the Quality Control rules, with any necessary commentary) and a simple quality control guide.” This will require some discussion, and you may decide to do some of the work yourself (perhaps developing the glossary). This process ensures that the vendor understands what you are looking for and is able to recruit effectively for the positions.
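Once the glossary of Quality Control rules exists, the graded samples can be turned into a repeatable score. This is a hypothetical sketch of that idea: the error catalogue and the per-error weights are invented for illustration, and in practice they would come out of the glossary you and the vendor develop together.

```python
# Hypothetical sketch: scoring a sample by deducting a weight per error type.
# The error names and weights are invented; a real glossary would supply them.

ERROR_WEIGHTS = {
    "typo": 1,             # cosmetic slip
    "formatting": 2,       # layout problem
    "wrong_figure": 10,    # incorrect number in the product
    "missing_section": 15, # required content absent
}

def quality_score(errors):
    """errors: list of error-type names found in one sample document."""
    return max(0, 100 - sum(ERROR_WEIGHTS[e] for e in errors))

perfect      = quality_score([])                                # no deductions
acceptable   = quality_score(["typo", "typo", "formatting"])    # minor slips
unacceptable = quality_score(["wrong_figure", "missing_section",
                              "typo", "typo"])                  # serious faults

print(perfect, acceptable, unacceptable)  # 100 96 73
```

Calibrate the weights so the scores land near the bands you marked on the samples (roughly 95% acceptable-plus, 75% barely acceptable, 65% unacceptable), and both sides can then grade new work against the same yardstick.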

Go through this process and you will avoid many of the problems you have heard of in other contracts… not being able to fully utilize outsourced staff (not the right skill match), too many errors (utilization too high for the experience level), or high turnover (the price is wrong, so the vendor cannot afford the right staff). Think about incorporating these steps into your outsourcing process. And that’s my Niccolls worth for today!

Posted in Best Practices, Common Sense Contracting, Decision Making, Delivering Services, Unique Ideas