The Game Of Money Laundering, Part 2


In our last blog, we discussed the origins of money laundering and how it evolved over the years. We know that when criminal organizations create big piles of money, they can't do very much with that money until it moves from the shadow economy into the legitimate economy. The use of computers and global banking creates more opportunities for criminals to sneak money into banks and financial institutions so that they can launder it and move it back into the economy. The new economy makes the process more complex and more difficult to follow. While this may be a game of "speed laundering," it's still the same old game. At least it was until recently. There's a lot happening online that could create new opportunities for money laundering. Today, we're going to look at one particular scenario that may give us a sneak peek into the future. Let's dive right in!

The latest money-laundering schemes are driven by technology. Deals move faster and in smaller increments, making it harder for the good guys to find and catch the bad guys. But when technology combines with innovations in commerce, we get entirely new types of risk. Consider eBay. This firm started out with an idea for a simple auction system. You have a painting that you never really liked, so you sell it on eBay and make a few hundred dollars. Why is that a problem? Well, auctioning isn't normally a part of the world of finance, so it falls outside of most financial regulations. Add to that the incredible growth of eBay, and we have a unique situation. Old-school auction houses, such as Christie's and Sotheby's, have been around for hundreds of years, hosting auctions of the world's finest art and artifacts. At the end of 2011, Sotheby's had an auction of contemporary art with sales of $200 million in a single night. Yet the best year for Sotheby's or its close rival Christie's has been about $6 billion. This year eBay will pass $150 billion in sales of decidedly more downscale merchandise. That's more than the economies of 100 countries, and rivals the GDP of Bulgaria. The "micro" auctions that typify eBay's model are the perfect vehicle for laundering a lot of illicit funds. Why does eBay have so much potential as a money-laundering machine? Four reasons:

  1. Biggest Player: First, eBay is the biggest online auction service. There are other services out there that could host the same scams, and they might be doing so even as we speak. However, the sheer size of eBay makes it the most likely place to look for big scams.
  2. Illegal Activities: eBay has many programs and documents devoted to crime prevention. However, eBay has hosted criminal activity with alarming regularity. The most basic crime on eBay is selling items that the seller does not possess, then simply walking away with the cash. A step up from that is identity theft, with unpaid purchases made from hijacked accounts. Stolen goods are regularly sold on eBay, much as criminals once sold stolen goods to a pawn shop. Of course, eBay has gained quite a bit of notoriety as the world's largest market for counterfeit goods, often in conjunction with international criminal organizations.
  3. Little Regulation: How can eBay be involved in so many criminal activities and still be in business? It's because the auction industry is very lightly regulated, and in many locations not regulated at all. There are no international regulations for auctions. Because the auction house does not sell anything… it merely provides a marketplace for sales to take place… most legal systems do not hold the auction house liable for the details of transactions. Tiffany's, the well-known jeweler, has been suing eBay for nearly a decade over the volume of counterfeit Tiffany goods sold on eBay. The courts have backed eBay, stating that an auction house must do more than host an auction to be liable; it must attempt to influence the sale (such as falsely documenting an item's value), or prevent the buyer from discovering a fraud. Even being aware that an item is most likely counterfeit is not enough to make the auction house liable.
  4. PayPal Connection: So far we have a huge auction house with a lot of questionable transactions, but nothing that looks like a threat to international banking. However, we haven't asked how payments are made on eBay. A few years ago eBay asked this question and wasn't happy with the answers. Auctions were constrained by unreliable or slow payment systems. In 2002 eBay bought the electronic payment firm PayPal. PayPal connects your bank account or credit card to eBay's auctions and facilitates rapid payment. According to Wikipedia, PayPal has 232 million accounts, with customers in 190 countries, using 32 currencies. With PayPal as part of the system, the lightly regulated eBay has the global reach to let criminals develop extremely complex layering schemes… that eBay would not be responsible for. Yet any such scheme would pass through many banks. Today's anti-money laundering legislation might hold these banks accountable for involvement in any criminal activity. At a minimum, it would be a black eye to see your bank's name in the Wall Street Journal because of activity on eBay.

Few banks are aware of the number of illegal eBay transactions, or have processes that can prevent involvement in an eBay fraud. Individuals who have been scammed on eBay may contact their banks to stop a payment, but many of these crimes are never reported. True money-laundering schemes may simply go unnoticed. But what if the situation is much worse than what I have described? What if eBay worked with an organization that auctioned items that didn't exist, whose members always used false names and wore disguises to hide their identities? We don't have to assume. This already exists throughout the world of online gaming. Not gambling, gaming. As in elves and dragons, or light-saber-wielding Star Wars fanatics. What's going on here?

The granddaddy of online gaming, or the MMORPG (massively multiplayer online role-playing game), is World of Warcraft (WoW). This one game generates about a billion dollars a year in subscriptions, and an unknown amount of additional money in "commerce," from its 12 million users. If you're the kind of person who spends a lot of time in WoW, you will eventually need to kill an enchanted dragon or go on a quest for some rare object. To get that object you could spend months performing time-consuming tasks, or you could simply buy it from someone. Not too long ago you would pay fantasy-world currency for fantasy-world products; today, you pull out your credit card and pay real money. Virtually all of these transactions pass through PayPal.

These fantasy auctions are largely innocent, but criminal history tells us that every big criminal scheme was based on a smaller, earlier version. In our last blog, we discussed Al Capone's use of Prohibition to develop an immense criminal empire. We know that eBay has a history of supporting petty theft and unethical behavior, but current law holds the seller… not the auction house… responsible for crimes. Based on other crimes, it's not hard to imagine that small sums of illicit money are already moving through fantasy auctions. Role-playing sites are growing very quickly, and the opportunity to hide and launder money will grow along with them. Combine MMORPG sites, the light regulation of eBay and the money-moving capacity of PayPal, and we have a platform for very sophisticated money laundering. Because banks can be unknowingly enmeshed in these transactions, financial institutions need to understand how eBay is evolving and whether it is creating a backdoor for criminals to move money through international banks.

In the last century, police departments kept an eye on pawn shops to find clues about crimes and even get a jump on potential new criminal activity. eBay has become the world's pawn shop and will require similar policing. However, eBay, unlike the pawn shops of old, has enmeshed banks and financial institutions through PayPal's connection to bank accounts and credit cards. Over time, more advanced payment systems will enter the market, but they may share eBay's exemption from financial regulation. We can expect criminals to exploit any new opportunity to launder money. Will banks remain one step ahead of the criminals? They might, if they examine and understand new and emerging threats to the banking system. At least, that's my Niccolls worth for today!


A Game Of Money Laundering, Part 1



Wherever illegal money exists, there is a need to "launder" that money so that it can be used in the legitimate economy. In the earlier part of the 20th century, most money laundering was driven by organized crime. American and foreign mobs flourished in the first half of the 20th century, initially as relatively unorganized groups that profited from extortion, robbery, gambling and other illegal activities. Individual criminals, or even a small gang, could live a comfortable life outside of the legitimate economy. That life would need to be conducted entirely in cash, but that wasn't a problem when cash was the predominant economic medium. Just as corporations grew in size, criminal organizations also merged and grew, and began to generate much more money than in the past. At the same time, the economy moved to credit, credit cards and banking institutions. The new crime mobs had far more money than ever before, but fewer ways to move that money back into the legitimate economy to buy a car, a house or a business. The Mob was "stuck" with massive amounts of cash, and had few ways of using that cash without risking criminal prosecution. Today, we're going to look at the origins of money laundering and how the process works, and then we're going to identify future money laundering threats.

The interest in large scale money laundering probably begins with the infamous Chicago crime boss, Al Capone. Capone had worked his way up the mob ranks and had big plans for the Chicago mob. As Capone began his reign over Chicago, the federal government passed the Volstead Act, making the sale of alcohol illegal. This provided Capone's organization with a huge market for illicit alcohol, and the ability to charge virtually any price for an already highly profitable commodity. In today's money, Capone made billions of dollars annually. As revenues and violence escalated, many law enforcement agencies tried to stop Capone and failed. Capone was a clever opponent with vast resources, who had managed to corrupt local police, politicians, judges, newspapers, and entire communities. Even the fledgling FBI tried to stop Capone, without much success. Then a Prohibition agent named Eliot Ness began investigating revenues from Capone-owned businesses. This work was then refined by Treasury investigator Frank Wilson, who matched revenues against Capone's income taxes, and eventually won a tax-evasion conviction against the notorious gangster.

This experience taught the government that it's easier for criminals to cover up a crime than to cover up the income from crimes, especially in large criminal organizations. They also learned that by effectively limiting the movement of illegal profits into the legitimate economy, you can limit the size and power of crime families. However, criminals also learned from the Capone conviction. They learned that big piles of money can be a double-edged sword. If you spend a lot of money, you need to explain where it came from. Even if the government can't prove it came from criminal activity, you still need to prove it came from… somewhere! If you can't, you will leave a trail on your tax forms that will eventually end in a jail sentence. Alternatively, you can just sit on your pile of cash, but if you can't spend the money, why bother to break the law to make it? Of course, if you had a magic machine that could cleanse money of its illegal origins, getting the money into the economy would be a cinch.

Criminals have been looking for that magic machine for decades. The best example in the 20th century was the cigarette vending machine. You could find one in any bar, club or diner, anywhere in America. It starts with PLACEMENT: the Mob creates a company to install the machines and marks up the sales on each machine to show revenue. Revenue claimed through thousands of individual cigarette machines becomes very difficult for the police or the IRS to trace. Blending legal with illegal money in each cigarette machine creates enough confusion to hide the illegal money. This LAYERING of money can be further complicated by introducing illicit cigarettes (stolen, or with forged tax stamps), altering purchase records to make sales appear higher, and moving money through other fake corporations. Once the money has been passed through a bank, perhaps changing hands and names in the process, it comes out the other end of the banking system (INTEGRATION) as clean, legitimate money.

In a simpler time, this process worked very well. But times changed. Cigarettes were slowly banned from most public meeting places, but new scams replaced the old. The rise of credit cards, then online business and the connected global economy, created new opportunities to generate illicit profits and to launder the money. Electronic transfers increased the speed and size of money laundering schemes. Today's globally connected world means that money laundering can begin or end anywhere in the world, as funds move across borders. The high-speed processing of computers makes it possible to slice money into smaller increments and spread it across a much larger number of accounts than would be possible working manually. Banks and financial institutions have countered with tighter controls and computer-based monitoring. Modern money laundering looks less like a recognizable series of transactions than a cloud of data patterns pulsing across the globe.

The basics of money laundering haven't really changed. It starts with placement of funds, followed by layering of deals and accounts to hide transactions, and ends when the "clean" money is integrated into a well-laundered account in the legitimate economy. Computers make the process faster and the global market makes it harder to trace, but this game of cat and mouse has remained the same for decades. But it's about to change. E-commerce has added some interesting twists to money laundering, and a big player in e-commerce may be about to start the next big thing in money laundering. And that's where we will pick up in part 2 of this blog, but for now… that's my Niccolls worth!


PMO Basics: More About Risk And Financial Crises


In our last blog we talked about Basel III and the need for better risk identification and quantification by international financial institutions. Today, we're going to continue with this theme and look at the cyclical process that financial markets follow, which leads to periodic financial crises. Basel III can hopefully reduce the damage and duration of the next collapse, but history tells us that collapses will happen regardless of efforts to prevent them. This blog will help the PMO and project managers recognize the elements of each coming financial collapse and plan projects that support new regulations and mitigate damage. We're at the start of a new cycle, so there is much for us to learn. Let's get started!

Since 1990, the world has seen at least three major financial collapses. Each collapse was unique, yet each had very identifiable and repeatable elements. In retrospect, different experts have argued that each collapse was a result of market forces or of illegal activities. Each time, a highly reputable and very large financial firm drove the collapse, establishing the reputation of a new financial product and then spreading that product throughout the financial world. Each collapse follows a five-stage process:

  1. Niche product: The cycle begins when some financial expert identifies a niche product with potential. Sometimes the product has been traded for a very long time, but only in a small market or by a few specialists. It is usually difficult to understand or carries considerable risk. One day the financial expert asks, "How can I expand the size and profitability of my market?"
  2. Reduced risk: Either that expert, or an expert in some other field, comes up with a new idea. This idea will neutralize some or all of the risk. At least under very specific conditions. It could be that the method of measuring risk has remained unchanged for a long time, while the risk factors changed. Inside a very large market there may be "outliers" that don't match the profile of the rest of the market, and can be excluded or repackaged. New technology might even allow you to do something that could not be done before.
  3. Improved value: If your solution can reduce risk, and if the world accepts your solution, then your “new and improved” financial instrument is worth more than the previous value of the underlying assets. Without a free market to determine the “real” value, who determines the price of this product? Lacking a market price to determine the valuation, the value is determined (or overly influenced) by the manager of the product/fund, rather than by an impartial outside force. When pricing is set by individuals with a strong interest in the value going up, products tend to be overpriced, and the risks under-reported.
  4. Demand & overuse: If the new product survives for a reasonable amount of time, and outperforms more established products, new investors will want to participate. The original firm may expand a fund and/or create new funds that use this product. Other firms may try to "clone" the product or fund. However, niche products are by definition small-market products. How do you take a product worth tens or hundreds of millions of dollars, and satisfy a market clamoring for tens of billions of dollars (or even more) of the product? Apparently, you can't. When demand exceeds the total supply, you debase your product with similar but inferior assets. By defining something else (hopefully something similar) as "the same" in order to purchase more assets, new (and often undisclosed) risk begins to weigh down the product.
  5. Contamination: Some event occurs that causes doubt. The original fund starts to fall. But that's OK; this was just a niche product, right? No! Because it was redefined to fit demand, it now represents a vast amount of money. Still, if the product was called an "ABC Derivative," you just need to avoid all "ABC Derivative" funds, or funds that act like them. But wait! Who has been buying these funds? The unfortunate answer is… other funds! While you may be too risk-averse to ever invest in this questionable security, "safe" funds that you do invest in have purchased this fund to boost this year's lackluster performance. Now investors panic, because no one knows exactly which funds have direct or indirect investments in this "contaminated" asset.

That’s a long chain of pretty unlikely events… isn’t it? Unfortunately, history tells us that this chain of events is not just likely, it may well be inevitable. Let’s look at the last few big crises and see how they follow this pattern.

(1990) Junk Bonds – Drexel Burnham Lambert: Bonds are promissory notes to pay back your investment in 5, 10, etc. years; until then, you collect a set interest payment. The greater the firm's risk, the higher the interest rate. If an event occurs that increases risk (for example, bankruptcy or rumors of financial problems), investors may sell off their bonds at below face value. When a bond sells well below its face value, it is called a "junk bond." While junk bonds have been around forever, Michael Milken (a Drexel executive) legitimized their widespread use. His theory was that the federal government would not let very large firms collapse, because of the potential damage to the economy. Milken gained a golden touch when the federal government guaranteed loans to a near-bankrupt Chrysler. Bonds that Milken approved were bought at pennies on the dollar, but valued at or near face value.
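
To see why a deeply discounted bond is so lucrative, here's a quick back-of-the-envelope sketch. The numbers are hypothetical, purely for illustration:

```python
# Hypothetical numbers: a $1,000 bond paying an 8% coupon, trading at $400.
face_value = 1000.0    # repaid at maturity
coupon_rate = 0.08     # fixed annual interest, set against face value
market_price = 400.0   # the "junk" price, well below face value

annual_coupon = face_value * coupon_rate       # $80/year, regardless of price
print(f"Current yield at par:  {annual_coupon / face_value:.1%}")    # 8.0%
print(f"Current yield at $400: {annual_coupon / market_price:.1%}")  # 20.0%
# If the issuer survives to maturity, the buyer also pockets the $600
# difference between the $400 purchase price and the $1,000 face value.
```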

When you can almost print money by assembling a new junk bond fund, demand can (and did) explode. Demand for Milken-approved junk bonds far exceeded the supply of "too big to fail" companies with junk-rated bonds. So the definitions changed, the level of risk rose, and the funds got bigger. Drexel not only bought bonds that were once good but had dropped in value; it also produced new bonds that were "junk" from day one. Drexel fueled the "greenmail" market by providing cheap financing for corporate raiders, but also provided a "safe" yet profitable boost to pension funds, mutual funds and all sorts of mainstream financial instruments. At the close of the '80s, the junk market faltered and then fell. Junk funds collapsed. Investors feared these funds, and then learned that they had contaminated the larger market, spreading panic and damage. Investigations were conducted, legal violations were found, Milken and others were indicted, and Drexel was shut down.

(1998) Derivatives – LTCM: Risk is the heart and soul of Wall Street. It determines how much interest you get on a bond and on most other financial instruments. For decades, Wall Street looked for ways to limit risk and expand the number of investors. The Black-Scholes model was a financial model for pricing a type of financial instrument known as an option (the right to make a financial transaction at some time in the future); pair options with other, very specific assets that move in the opposite direction, and you have a "hedged" derivative position that produces a profit. While this approach produced a consistent profit, it was not a large profit. But that could be dealt with by using leverage (borrowing money to buy more assets and multiply profits). The Vice-Chairman of Salomon Brothers, John Meriwether, left Salomon and built the world's largest derivative fund, Long-Term Capital Management (LTCM), in partnership with Nobel prize-winning economists Myron Scholes and Robert Merton.
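
The mathematics of the model is beyond the scope of a blog, but for the curious, here is a minimal sketch of the standard textbook Black-Scholes formula for pricing a call option, with illustrative inputs (this is the published formula, not LTCM's actual trading models):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike price, T: years to expiry,
    r: risk-free interest rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative (hypothetical) inputs: $100 stock, $105 strike, one year
# to expiry, 5% risk-free rate, 20% volatility.
print(f"Call price: ${black_scholes_call(100, 105, 1.0, 0.05, 0.20):.2f}")
```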

This powerhouse team guaranteed worldwide acceptance, and derivatives became a growing part of mainstream finance. However, greater acceptance meant a bigger fund and still greater leverage, as each new asset tended to produce lower returns than the last. Leverage increased from 2:1 to 10:1 to 100:1 to 1,000:1 and beyond. A supercomputer was purchased to keep up with the speed of trading. According to Wikipedia, in 1998 the fund was nominally worth $4.7 billion, a fairly staggering amount in itself. But total borrowing for "leverage" was $124 billion. Even worse, the borrowed money paid for derivatives that were themselves leveraged. In reality, the fund controlled $1.25 trillion in positions around the world. Because of the complexity of the fund, few investors seemed to be aware of this risk. Fewer still understood that the Black-Scholes model had an Achilles heel: it didn't work reliably during a financial crisis. In 1997 there was a financial crisis in Indonesia, which began to spread to other areas of Asia, and in 1998 the Russian economy had a crisis. LTCM was heavily invested in both markets, and crashed. The Federal Reserve and a consortium of banks worked behind the scenes to unwind the now worthless positions and reduce the size of the financial crash. LTCM was closed.
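
It's worth pausing on those numbers. A rough sketch of the arithmetic, using the Wikipedia figures quoted above:

```python
# Rough arithmetic on the LTCM figures quoted above (per Wikipedia).
equity = 4.7e9        # investors' capital in the fund
borrowed = 124e9      # money borrowed against that capital
balance_sheet = equity + borrowed

print(f"Balance-sheet leverage: {balance_sheet / equity:.0f}:1")            # ~27:1

# The borrowed money bought derivatives that were themselves leveraged,
# so the notional exposure dwarfed the balance sheet:
notional_exposure = 1.25e12
print(f"Effective leverage on equity: {notional_exposure / equity:.0f}:1")  # ~266:1

# At roughly 266:1, a move of less than half of one percent against
# $1.25 trillion of positions is enough to wipe out all $4.7 billion
# of equity.
```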

(2008) Subprime – Bear Stearns: Collateralized Debt Obligations (CDOs) are another type of derivative. Here, various loans and mortgages are put into a common bucket. As the loans are paid, revenue flows out of the bucket through multiple "spigots." You can choose which spigot is right for you: very senior, senior, junior, etc. Each spigot has different rules, rates and priority. The most senior position receives a lower percentage (for lower risk), but gets the first turn at the spigot. And so on down the line. The most junior receives a higher percentage (higher risk), but only gets its chance at the spigot after everyone else. If things have not gone well that month, there may be little or nothing left. CDO fund managers believed that this risk-adjustment mechanism made their assets more valuable. Buyers agreed, and the number of CDO funds grew.
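
To make the "spigot" idea concrete, here is a toy sketch of a CDO payment waterfall. The tranche sizes and promised rates are invented for illustration; real deals are far more complicated:

```python
# Toy model of a CDO payment waterfall (hypothetical tranches).

tranches = [                              # ordered most senior -> most junior
    ("very senior", 60_000_000, 0.05),    # (name, principal, promised annual rate)
    ("senior",      25_000_000, 0.08),
    ("junior",      15_000_000, 0.15),
]

def run_waterfall(cash_collected):
    """Pay each tranche its monthly coupon in order of seniority,
    until the cash collected from the loan pool runs out."""
    remaining = cash_collected
    for name, principal, rate in tranches:
        owed = principal * rate / 12      # this month's promised coupon
        paid = min(owed, remaining)
        remaining -= paid
        print(f"  {name:12s} owed ${owed:>9,.0f}, paid ${paid:>9,.0f}")

print("Good month ($800,000 collected): every spigot flows")
run_waterfall(800_000)
print("Bad month ($350,000 collected): the junior spigot runs dry")
run_waterfall(350_000)
```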

Bear Stearns jumped into CDOs with both feet. It built two large funds that blended subprime mortgages (think of them as "junk" mortgages) with other assets. The funds provided good returns, but then subprime began to appear on the nightly news in connection with financial failures. At first subprime and CDOs were spoken of separately, but then panic spread as investors learned that subprime had contaminated other assets (funds containing subprime, or funds that contained other contaminated funds). Indictments of the fund managers soon followed. Interestingly, while the funds stated that subprime holdings would be no more than 6-8% of assets, the real figure was nearer to 60%. Once again, undisclosed risk increased the damage. The funds were closed, and within a year Bear Stearns itself collapsed and was sold off to JPMorgan Chase.

This cycle of exuberance and collapse repeats about every 10 years. In retrospect, it all seems so obvious and so avoidable. And yet it will probably happen again. There is an emotional element to Wall Street that mere rules and numbers fail to capture. Fund managers, and it would seem regulators, get carried away by the sheer magnitude of success, and questionable ideas are allowed to grow into economy-crushing juggernauts. In the heat of the moment, governance is relaxed and otherwise highly respected individuals slide into illegal activities. But what can you, the humble project manager, do?

The PMO of a financial firm cannot stop these periodic collapses on its own, but you can take steps to limit the damage. Basel III is a new framework for managing risk, and it is in the early live-testing phase. Other risk management guidelines are appearing as well. Do you see Basel III or other risk-related projects in your 2012 project portfolio? Look for them, and prepare your project managers so that they can participate effectively in these projects. Ask department managers if they have planned any risk-management projects in 2012. Look for new groups that have joined your firm, or highly publicized new hires. New high-level hires and new departments often mean new types of functions whose risk may not be well understood. They also mean new managers who may not know the firm has a PMO, or how you can help them. Go out and look around. You might just do some networking and find a few new projects, or you might help protect your firm against the next big collapse! At least that's my Niccolls worth for today!


PMO Basics: Understanding Basel III


If you work in a financial institution, big changes are on the way. Lessons learned from the financial collapse of the past few years have led to new thinking about how financial institutions need to operate. Or maybe it's old thinking that has come back into fashion. In the years leading up to the collapse, we had a lot of growth and a lot of thinking about what limits growth. The answer, around the world, seemed to be less regulation, fewer requirements for liquid assets in an emergency, and ever more complicated financial instruments to yield higher returns. Since the economy did collapse, this now appears to have been a less than perfect combination. In fact, the same combination of factors fueled the rise in world financial markets in the 1920s that ended in the Great Depression. That economic event led to a number of regulations that prevented a repeat collapse for many decades. Today, some of the most critical regulations are being developed in Europe. The US government has agreed to adopt one of the most critical regulatory frameworks, Basel III. What is Basel III, and why is it so critical to financial firms (and project managers)? That's exactly what we're going to discuss today! Let's dive in:

Basel III is a framework for bank regulation, named for the city in Switzerland where the "Group of Ten" countries originally met. Basel III is a new set of guidelines that follows on from Basel II, which was developed in 2004. The core concept behind Basel III is that every financial institution needs to hold the right amount of liquid, accessible assets, both for day-to-day issues and for emergencies. The idea is simple, but the execution is quite difficult. Here are the three major components of Basel III:

  1. Reserve requirements: Basel II generally required reserves of about 2.5%, but this obviously proved inadequate in the last financial crisis. Basel III increases the requirement to 7.0%.
  2. Asset definition & testing: In the last financial crisis, assets that were supposed to be liquid (easily sold or converted into cash) were found to be impossible to sell. Either they were not truly liquid, or they were blended with instruments that were more difficult to price or sell, especially subprime loans. Basel III will do more to define risk and test it… with periodic "stress tests" that simulate a financial crisis.
  3. Risk adjustment: The financial crisis was not brought about by too many high-quality investments. Risky investments, made up of questionable assets that were either too complex to understand or too opaque to review, caused the crisis. Basel III adjusts reserves based on risk. Financial institutions that want high-risk investments will have less cash available to invest; institutions that choose lower-risk investments will be able to invest more (a toy example follows this list). How we identify and quantify risk now moves to the forefront of world finance.
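
To make the risk-adjustment idea concrete, here is the toy example promised above. The risk weights and portfolios are invented, and the real Basel III calculations are far more detailed, but the principle is the same: reserves scale with risk-weighted assets.

```python
# Toy illustration of risk-adjusted reserves: the riskier the portfolio,
# the more capital is set aside, leaving less cash free to invest.
# Risk weights are invented; the 7% ratio is the figure cited above.

RESERVE_RATIO = 0.07

def required_reserve(portfolio):
    """Reserve = sum of (asset value x risk weight) x reserve ratio."""
    risk_weighted_assets = sum(value * weight for value, weight in portfolio)
    return risk_weighted_assets * RESERVE_RATIO

# Two hypothetical $100M portfolios, as (value, risk weight) pairs.
low_risk  = [(60e6, 0.0), (40e6, 0.5)]   # e.g., government bonds + prime loans
high_risk = [(20e6, 0.5), (80e6, 1.0)]   # mostly risky corporate exposure

print(f"Low-risk reserve:  ${required_reserve(low_risk):,.0f}")    # $1,400,000
print(f"High-risk reserve: ${required_reserve(high_risk):,.0f}")   # $6,300,000
```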

In the US, regulators are still defining how Basel III will be applied. In December of 2011, the US agreed to apply Basel III to all banks, and to financial institutions with $50 billion or more in assets. Exactly which institutions will be included (will Basel III apply to private firms, and how?), and how international standards will be used to identify risk, are still being defined. In March of 2012, 19 banks were tested; 15 passed. There are 7,400 commercial banks in the US, so this is just the beginning of a process that will take many years to complete. Banks that have not passed a stress test will eventually be seen as having undisclosed or unacceptable risk.

Which brings us back to project management. These are huge changes. If you work for a financial firm, projects will be launched before and after every stress test. New applications need to be developed to track and test fund behavior. Other applications will be created to give senior executives more visibility into the activities of their fund managers; when the next crisis comes, regulators are expected to deal harshly with senior executives who are not aware of how fund managers use the firm's money. Training is needed throughout the firm, not just in trading and banking groups. And as the reserve and liquidity requirements for banks increase, groups with higher-risk investments will look for ways to offset these requirements by reducing operating costs.

Basel III isn’t anything new, but it is big and it will bring big changes. Look at your project portfolios. If you can’t see a footprint for Basel III you should talk to department managers about their plans. While only 19 firms have been involved in stress testing so far, the thousands of other commercials banks will need to follow… soon.  Get ahead of this trend, and start building new projects or at least put placeholders in your portfolios. Planning now will definitely help to keep you on track when the Basel III wave hits your firm. And that’s my Niccolls worth for today!


Word From PEX Week European Conference…


Here’s an announcement from our friends at PEX!

April 23rd will see hundreds of process improvement professionals descend upon London for the annual PEX Week European Conference. It looks like an exciting event, but being London-based means it's out of reach for many outside of Europe. Thankfully, they'll be providing a live blog of the event so that those unable to attend in person can still find out what's going on and interact with speakers at the event.

Highlights (all times GMT):

  • Keynote speech – Equipping your business for all eventualities: Learning from the lessons of the recession and building a stronger organisation for the future by Connie Moore 24th April @ 10am
  • Keynote speech – BIG Data = BIG Opportunities! – Discover the Value of Data Analysis in Developing Customer Centric Process with Andy Jones 24th April @ 12.15
  • Live Q&A on Big data with Connie Moore and Andy Jones 24th April @ 1pm
  • Tweet jam on sustainable innovation 24th April @ 3.10pm
  • Keynote speech – Raising the competitive bar and securing Customer Gold by Steve Towers 25th April @ 9am
  • Live Q&A on Six sigma on steroids with Steve Towers 25th @ 1.15pm

To participate in the live blog, simply click here to open the chat window, or join in via Twitter using the hashtag #pexweek

Live blog provided by the Process Excellence Network!


The General Electric Turnaround: Why We Can’t All Be Jack Welch!



Jack Welch is perhaps the world's best-known improvement guru. His transformation of GE not only took genius, it took guts and determination to convince a company that was doing pretty darn well that it had to do better. Not an easy sell! But he did push GE from "Good" to "Great" and became a legend in the process. In the 20 years that Jack led GE, revenues rose from $30 billion to $130 billion and company value went from $14 billion to $410 billion. Quite an impressive record! Not surprisingly, each of us would like to be the next god of corporate improvement and follow in Jack Welch's footsteps. That may not necessarily be such a good idea. First, those footsteps were made by some pretty big shoes. There are more corporations than there are genius CEOs. Second, the unique combination of genius and opportunity that drove Welch's success doesn't come along every day. The uniqueness of Welch's success was a reflection of the uniqueness of GE itself. Today's blog takes a look at when we can and can't apply the GE model.

Whenever I hear a discussion about Jack Welch and GE, it always turns to removing the least productive 10% of the organization. If you're one of the world's super brands, replacing staff may not be a big issue. If you're a smaller firm with less brand recognition, recruiting top talent from better-known competitors may already be your greatest challenge. Let's take a look at the details of the GE transformation and see which parts of it are transferable to your specific firm!

Are you a conglomerate of businesses? A really critical piece of information in understanding GE is that it contained a large number of essentially independent businesses. Before Welch arrived, GE operated a vast number of businesses, and not surprisingly, some were not world leaders. Some might be improved, but others would continue to be underperformers… perhaps because they were in markets with declining profitability. Businesses that were never going to be #1 or #2 in their field had to go. And that's just what Welch did. He sold or closed business units that could not be exceptional.

If your firm is a conglomerate with many businesses, culling your businesses is a good place to start. Independent business lines can be closed without impacting other parts of the firm. If your firm only produces a single product (or a limited number of products) this may not work. Units within a larger business might be shut down, but that could affect its operations. You might outsource an expensive or unproductive process to reduce the drain on profitability. That’s a step in the right direction, but it falls short of shutting down a group; it especially fails to release management resources that can be focused on more productive product lines.

Is there too much bureaucracy or too little structure? When you've worked in a big firm, you understand how crippling bureaucracy can be. Conversely, working in a smaller firm can mean negotiating and then developing new policies and practices every time a new situation arises. Clearly, Welch had to deal with an advanced and entrenched bureaucracy, and lifting this constraint was a key to his strategy. When Jack Welch took over, GE had more than 400,000 employees spread around the world. Not surprisingly, that meant a lot of confusing and contradictory regulations and rules. But what if you are not one of the world's largest firms? Instead of dismantling bureaucracy, your firm may be focused on building it… writing policy, training staff, measuring adherence to standards, establishing gatekeepers, and installing controls.

In a firm that is still in its early growth stages, or has recently gone through a merger or an acquisition, the process of establishing controls could be far more important than the elimination of restrictions. Most firms battle to do both, getting rid of restrictions where they are not needed while adding them where they can be of value. Because GE was well established and self-satisfied, removing restrictions was more important to Jack Welch. For the rest of us, the removal of restrictions is a slower process. Without the lure of a specific business plan, a corporate reformer has little leverage to remove barriers to productivity.

You have the firm’s attention, where will you lead them? Welch needed to shake things up. GE was profitable, but it could be far more profitable. Welch’s actions gave a slightly drowsy firm a good shake. Not enough to harm, just enough to invigorate. Welch wanted GE to be #1 or #2 in every business, requiring an alert staff to deliver their maximum effort. Contrast this with managers promoting “change for change’s sake.” These managers want to see changes, but are vague about their goals and the rewards for those who deliver it. The result is churn without direction, and often without measurable results. No firm should settle into stagnation, but before you order a firm to rev up its engine, you need to know where you are headed. Without that direction, managers expend energy and workers work harder, but long-term goals are not achieved.

Churn vs. improvement: As I said before, people most often remember Welch's ongoing removal of the bottom 10% of the staff. Continuous improvement makes sense. Getting rid of lower-performing workers makes sense. But cutting staff without a well-defined plan doesn't make sense. Before you start cutting staff, you need the plan for the new organization. Individuals who might not be considered top performers today might be the best fit for the new organization. Likewise, today's top performers might not all want to be in the new organization. You need to define the new organization, and your staff needs to see that plan. When you go to the market for new employees, the best workers will be hard to find if your firm develops a reputation for arbitrarily terminating staff. If you genuinely want higher-performing staff, you shouldn't be too surprised if better-performing workers cost more than their underperforming predecessors. These costs can be absorbed in a business with rising revenues and profitability. But what if you don't have a business plan that will deliver greater profitability? Higher salaries will be difficult to justify, and even harder to approve.

Reward performance: If improvements are being delivered, what happens to the top performers? They get rewarded… a lot! While the bottom 10% was cut, the top 20% was heavily rewarded. The spectacular rise of GE provided the money for those rewards. If your organization does not significantly improve profitability, how will these performers be rewarded? If your staff continues to work harder than the staff at competing firms without seeing rewards, your best workers will leave. Of course, compensation is just one of the factors in worker loyalty. If your firm has a spectacular reputation, there is value in having its name on a resume. Google is known for the world-class chefs at its (free) cafeterias, and for its other "social" programs. If you don't have the top reputation in your industry or the best benefits, you can expect to pay a higher premium in compensation to attract the best talent.

How does it all fit together? GE was a unique firm in a unique position. Jack Welch carefully leveraged the tangible and intangible assets of GE to recruit and retain staff. To keep up with his transformational goals, Jack's plan required the termination of over 112,000 employees in the first five years of his tenure as CEO (25% of the firm's staff). In most firms, this level of change (and risk) could not be sustained… unless the business plan had an equally massive benefit. If your organization is driving change, is that change part of a singular corporate goal? Does that goal cascade across your firm to deliver staff and procedural changes? If you want improvement on the scale that Jack Welch drove, you need to line up these five factors…

  1. A "BIG" business goal: Different improvement philosophies use different names, but let's call this the "big idea." You need a big idea in order to drive big changes. Even without a top-level plan, change can (and does) happen, but the degree of change will be more limited. Your plan will be stopped by internal gatekeepers who have not been told by their managers that your plan has priority over their role as "protectors" of the organization.
  2. Fewer businesses: If you did nothing more than get rid of businesses that lose money and cannot quickly be made profitable, your firm would be improved. Not all firms have multiple lines of business, but whatever change your plan is driving must free up some of management's time so that they can deal with the "big idea."
  3. Churn in staff: Every firm has a unique brand. The stronger the brand, the more you can churn your staff and still attract the best talent. But don’t just churn your staff, use turnover to move towards a new model. If you turn your group into a dynamo of productivity, but you have no new goals for your re-invigorated staff…  you may just create boredom that undermines productivity.
  4. Reduction of bureaucracy: You need to sweep away unnecessary bureaucracy so that the new organization can be agile and effectively pursue new opportunities.
  5. Rewards: Not just the top officers, but all of the top performers who drove change need to share in the rewards.

We can't all be Jack Welch, nor should we all try. Jack's model for change requires a powerful business goal that the entire firm can follow. Perhaps you can use just a small part of the GE model, but it is a self-supporting model that requires the support of more than a single business group to succeed. When all of these elements work together, you just might be able to be the next GE. At least, that's my Niccolls worth for today!


Google Changes The Ediscovery Landscape


The big new app for 2012 just might be Google's Apps Vault. Just released in the last days of Q1, this tool sits on top of Google Apps and simplifies management of your firm's documents. It will be enormously useful for keeping track of documents, managing space, archiving documents, even enforcing your firm's document retention policies. However, the main goal of the tool is to help small and medium-sized firms (the primary users of Google Apps) manage the email portion of their ediscovery processes. In previous blogs I've talked about the cost of ediscovery, and the many mistakes that are made in the discovery process. Google's new Vault product has the potential to make the ediscovery process far less error-prone, while dramatically reducing the time and cost of a review. The Vault may not yet be a tool for the largest firms, and at the moment it is limited to management of Gmail, but the release of this tool is likely to be a catalyst that will transform the world of ediscovery. In today's blog we're going to review the reasons why this tool will be important to your firm, and the steps you should take. Let's dive right in!

The Vault is an extension of Google Apps that costs $50 per user for an annual subscription. The Vault holds all of your Gmail and instant messages, but does not yet manage other Google documents such as letters, memos, spreadsheets, calendars, etc. (although it would be difficult to imagine Google not expanding the service to cover these documents in future releases, perhaps for additional fees). Because it is a cloud-based application, you have access from your office or home, laptop or smartphone, etc. That level of access, plus Google's excellent record of uptime, is a pretty compelling reason to keep as much information in the Vault as possible. OK. What are the ediscovery benefits for your firm?

To understand the vast potential of the Vault or a similar product, let's consider a typical large financial institution. One that I'm familiar with has a legal budget of just under $1 billion. That doesn't include all of the IT-related expenses, but let's just focus on the strictly legal portion of costs. The portion of the legal budget that is devoted to ediscovery varies greatly between firms, and even between years. For big financial firms, ediscovery accounts for 50% to 70% of the total budget. In our example, that translates into $7,000 to $10,000 per employee. At a cost of $50 per year, Google's Apps Vault only needs to create a bit more than half of one percent of improvement in the cost/efficiency of ediscovery to be cost-effective. When you see what the Vault can do, it's obvious that the benefits are far, far higher. Let's look at the benefits of the Vault…
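
But first, here is that break-even arithmetic spelled out, using the figures above:

```python
# Break-even for the Vault, using the approximate figures above.
ediscovery_cost_per_employee = 7_000   # low end of the $7,000-$10,000 range
vault_cost_per_user = 50               # annual Vault subscription, per user

break_even = vault_cost_per_user / ediscovery_cost_per_employee
print(f"Efficiency gain needed to break even: {break_even:.2%}")   # 0.71%
# At the $10,000 high end, the threshold drops to 0.50% -- either way,
# a fraction of one percent.
```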

  • Current vs. Future Vault: Today's version of the Vault is restricted to Gmail and instant messages. However, it is inevitable that Google will rapidly extend the Vault to support documents as well as email (and the most important documents inevitably end up as attachments to email anyway). Not only is this an obvious adjacent market for expansion, but it would not make a lot of sense to have the advantage of Google's Vault for half of an ediscovery project (email) and not for the other half. I don't believe the current limitation is due to technological constraints, but rather to the fact that there are so many more users of Gmail than of other Google Apps. Either by creating private clouds inside firms or by taking on corporate clients that want to leverage the cost advantages of cloud-based storage, the "other half" of ediscovery will rapidly move onto Google-managed servers.
  • Document Policy: Every firm has document management and retention policies, although they may not be written down. At a minimum: who can approve a contract, and how long financial records are retained. The bigger the firm, the more complex the rules. But having rules doesn't mean that they are being actively enforced. Few firms have comprehensive document management. Without these tools, few firms know how many documents they produce, let alone which specific policy each of those documents must follow. Reliable enforcement of document policy reduces lawsuits and court rulings against your firm, and reduces the number of remaining documents that need to be addressed in ediscovery. Consider just these examples:
    • Sales Agreements: The Vault can enforce document policy. Consider the approval of a sales proposal. A sales representative may misunderstand a product, misstate the pricing, or duplicate conditions from a previous proposal that violate local laws in a new proposal. Most firms sign some proposals that were not properly approved. That runs the risk of lost revenue in executing the agreement, and a lawsuit later on if you cannot operate under this flawed agreement. It may be some time before the Vault controls the original document, but these agreements are usually transported via email within the firm and between firms. This provides many opportunities for tracking and setting rules.
    • Drafts: When a client proposal or a presentation is created, many draft documents are created. Sometimes drafts are shared with clients, sometimes the team is confused over which version they are working on, and after a document is completed and delivered to the client, old drafts may still be in circulation. While some firms use document management systems, for the vast majority of sales staff, email is their document repository. This is how groups collaborate to develop a document, and the email itself contains additional instructions and commentary.
    • Document Cloud: In addition to proposals, agreements and contracts, businesses are creating many more documents (especially emails) than they used to. What was once a phone call or an in-person discussion has turned into emails, text messages, comments in your calendar, and other documentation that may become "discoverable" in a lawsuit.
  • Tracking: Some details of how Google's Vault works still need to be released. However, we can expect that the product will develop any missing or underpowered features. Because Google has been building massive data centers with vast numbers of CPUs to process the work, the Vault offers the possibility of real-time analysis both during and before an ediscovery project. Because of the current cost structure of ediscovery, firms often wait until the last minute to start an ediscovery project, to avoid incurring costs. Ironically, this creates most of the cost overruns later in the process, when there are changes to the project or a project falls behind schedule.
  • Identification: When the ediscovery process begins, forensics experts need to translate the orders of the court into a specific collection of documents. This requires the time and attention of your IT department. The Vault claims to have powerful tools to identify specific emails and report on them. It will take some time to understand the limits of Google's Vault, but it undoubtedly provides a better set of tools than most firms use today.
  • Spoliation: After the right files are identified, those files need to be copied and moved to a hosting site. This creates all sorts of opportunities for accidentally damaging or altering files, or losing track of the chain of custody as the files are moved (often manually, on a hard drive) to a publicly accessible site.  With Google’s Vault, you can leave the data where it is (or at least in a new collection of data, within the Vault), eliminating any chance of a break in the chain of custody.
  • Hosting: Google Apps is a cloud-based service. If data remains in the Vault, you only need to provide Google Apps and Vault IDs to access the "document" collection. The whole process of finding files, removing duplicates, moving the resulting data to a new system, and prepping those files for use by additional tools (with additional fees) could be eliminated or greatly reduced. Legal processes change slowly, but the more work that is performed before data is moved to an external hosting site, the faster the fees for these projects will fall.
  • Analysis: If Google’s tools are sufficiently robust, there is little reason to use other ediscovery tools. The subsequent reduction in fees is a HUGE financial incentive for users to make the maximum use of the Vault. It’s hard to imagine any other ediscovery service or software vendor that can match Google’s combination of financial, programming and processing resources. We can expect Google’s Vault to rapidly develop the tools and features necessary to dominate ediscovery.
  • Everything else: While Google is labeling the Vault as an ediscovery tool, the same tools (or additional tools?) can be extended to solve a host of document issues. Version control of training documents, contract management, storage and access to corporate SOP’s, and so forth. It really doesn’t matter if these features are available today. As more individuals use the Vault, the feature set will become more robust and more functional.

What about large firms? If you don't use Google Apps and Gmail, isn't the Vault irrelevant to your firm? Not really. Google is obviously a huge cloud service provider. But Microsoft offers ediscovery services to the government, and is rumored to be working on newer products. AWS (Amazon Web Services) wants to be the premier cloud-based provider of hosting and data storage services. While AWS has not emphasized ediscovery, its service provides similar core capabilities; these capabilities only need to be repackaged and expanded as an ediscovery tool. In mid-April, the 2012 AWS Summit in New York City will showcase Amazon's new products and services. Even though they are not head-to-head competitors (yet!), Amazon keeps a very close eye on Google (and vice versa). The Vault idea is too close to Amazon's product line for them not to respond. Will we see that response at the 2012 Summit? Maybe! But you can bet that the audience is going to be asking questions.

Every firm needs to deal with document management. Firms must decide which documents are kept, how long they are kept, where and how they are stored, and when documents are destroyed. Lawsuits often arise from mistakes made while developing proposals and contracts, or due to written communications. The number of these documents has grown dramatically as corporations have moved from paper to electronic documents, yet most firms lack the tools and resources to control the documents in their firms. This leads to excessive risk and cost during an ediscovery project. The features of Google's Apps Vault may solve our most basic document issues; if not today, then some time soon. Even if the Vault isn't the right product for you, it is likely to set off a price and feature war that will benefit every user of ediscovery tools. At least that's my Niccolls worth for today!


PMO Genius: How To Share & Be Happy


In our last blog we spoke about the issues that drive the services produced by a shared services group. We talked about six issues that define the functions, cost and billing of shared services. Today, we're going to take the next step and develop a plan for improving your shared service. While each of you may run different services, and as the saying goes, "your mileage may vary," we can quickly put together some options that you can all use to develop your plan for improving a shared service. Assuming that you reviewed the last blog (if not, read "PMO Basics: Six Secrets For Managing A Shared Service") and now have a handle on the basic "levers" of your operations, let's dive into the details to find solutions and projects for improvement.

RE-BASELINE: Have the needs of your clients changed since your service was introduced? How often do you survey your clients (or hold governance meetings, etc.) to keep your services tracked to the needs of your clients? If you haven’t done so yet, you need to survey your client base and collect the information you need to develop a 3-5 year plan for your service. You also need to institute annual client reviews where you review the events of the year with clients (changes in volume, requests for new types of services, costs and service levels, etc.) and the client can provide you with information on service needs for the coming year.  Be prepared for the occasional retirement of old services as well as demand for new services.

PRIMARY & SECONDARY CLIENTS: Does the information you've collected so far tell you anything about changes in your client base over the coming years? While this is just a model and the parameters will undoubtedly change (for example, new business groups may arise that could become clients), it is far easier to update a business plan than to manage a service without knowing where you will be next year or the year after. If you have one or two predominant clients, they will (and should) dominate your planning process. Identifying your top-tier clients (however you define them) helps you to sort out the feedback you get from your client base throughout the year. For example, you may have one client that wants dramatic changes to your model but only consumes 1% of your services; another client may consume 60% of your services and want no changes. You have to decide how your model works, but in most cases the dominant client has a larger say in how your service will work.

BUSINESS MODEL: If you’ve grappled with your clients’ business models, now you need to grapple with your own. Take the data you’ve collected from clients, plus information about your own business parameters (you may have limits on how large you can grow, what you can charge, which services you can offer, etc.) and create a 3-5 year business plan. This plan should cover the basic map of your future… which client groups will grow and shrink, your need for personnel and space, and any expected impact from technology (elimination of services, lowering of costs, etc.). What happens when you have clients with conflicting needs? …

SERVICE AGREEMENTS: One size does not fit all! Once again, you have to decide what your business model is. In some cases your organization may only want you to offer one flavor of service. However, if you have any latitude, consider the following options, based on how external services deal with shared service clients. When you begin customization within a shared service you need to plan carefully: a completely rigid service may be unusable for some clients, but an overly customized service can quickly become too expensive and too error-prone. Think about each of these features carefully, but here are some options…

  • Service Level: In commercial services, it is understood that a shared service has a slightly higher error rate and less productivity, due to more standards to train against and more SLAs to meet. If a client has exceptionally low tolerance for errors, or has other needs that require a dedicated service, it is possible to run dedicated staff within a shared service. Just be sure to define the scope of that dedication carefully. Does the client want dedicated staff only, or do they require separate physical space? There is a wide range of options, so you need to be very clear with the client as to what they need and IF you can support that need.
  • Basic vs. Premium: Within your general service cost you may provide “premium” services that not all clients need. For example, if most clients want support from 9am to 5pm, but a few want late night or weekend support, you might want to carve out this function and bill it as a premium service. Be careful that you DON’T create so large a list of options that clients feel that you are “nickel and diming” them, but DO identify the few options that are driving your cost of operation and charge for them separately. Let clients vote with their wallets to determine how many premium services are needed.
  • Buybacks: If you do charge clients directly, they may complain about carrying staff when there is no work to do. Previously, you carried these costs, and this unused capacity drove up your costs. You might share these costs by buying back unused capacity from this client… if that is not a security conflict (did they need a dedicated team because of data security?). Assuming no conflicts, every month this client would get a credit for any time that you could use their team for other work (the arithmetic is sketched below). Please remember that this model is more difficult to manage and won’t work for everyone. But if it does work, it can solve a common shared service problem.
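
Here is a minimal sketch of that buyback arithmetic (in Python; the rates, hours and the 50% sharing split are all hypothetical):

    # Monthly buyback credit: the service "buys back" idle hours from a
    # client's dedicated team and reuses them for other clients' work.
    # All figures are hypothetical.
    dedicated_hours = 4 * 160        # 4 dedicated staff at ~160 hours/month
    hours_used_by_client = 510       # hours the client actually consumed
    hourly_rate = 45.00              # fully loaded billing rate per hour
    buyback_split = 0.50             # credit only part; you carry resale risk

    idle_hours = max(0, dedicated_hours - hours_used_by_client)
    credit = idle_hours * hourly_rate * buyback_split
    print(f"Idle hours: {idle_hours}, monthly credit: ${credit:,.2f}")

The split reflects the reality that unused capacity can only be resold when other clients’ work (and your security rules) permit it.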

MANAGEMENT REPORTS: You need to be sure that the monthly report uses the correct metrics and tracks them accurately.

  • Reports: Develop a standard report that covers quality (errors), speed (meeting deadlines), unit cost (resources needed per job), and type of service used (if you provide multiple services).
  • Resources: Try to keep these reports as consistent as possible across clients, and keep an eye on the cost of reporting. The purpose of a management report is not to provide every possible metric, but to answer two questions: “Are we meeting our service level agreements?” and “Which of these metrics need to be improved or changed?” You want to be sure that most of your operating costs go towards production, not administration.
  • Common Coin: One other consideration is that your reports should use a “common coin” for reporting the use of resources. The best coin might be an hour of worker time (if different workers have vastly different costs, this may need to be adjusted). The reason for this is to be able to estimate your capacity for work. If one function costs 1 coin and another costs half a coin (and other functions have other costs), you can easily compare unused resources against expected client workloads (see the sketch after this list). This common coin method allows you to staff your service more accurately, without too many or too few resources.
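
Here is a minimal sketch of the common coin capacity check (in Python; the function names, coin costs and volumes are hypothetical):

    # Express every function in "coins" (here, 1 coin = 1 hour of standard
    # worker time), then compare expected workload against capacity.
    # All figures are hypothetical.
    coin_cost = {"document_review": 1.0, "formatting": 0.5, "proofing": 0.25}
    expected_volume = {"document_review": 800, "formatting": 1200, "proofing": 1600}

    workload = sum(coin_cost[f] * expected_volume[f] for f in coin_cost)
    capacity = 12 * 160              # 12 staff at ~160 hours/month
    print(f"Workload: {workload:.0f} coins, capacity: {capacity} coins, "
          f"slack: {capacity - workload:.0f} coins")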

SECURITY: When you have workers in a shared environment, there can be issues with the security and privacy of client data.

  • Paper Management: Make sure that you have a policy for controlling the movement of paper in and out of your shared area. You should have locked bins for print-outs, which are hauled away and the contents destroyed every day. People should not have the ability to move paper into and out of the shared space.
  • Screen Viewing: A person working for one client shouldn’t be able to read a screen showing another client’s data. There are privacy films that can be applied to a computer screen that make it difficult for anyone other than the person directly in front of the screen to read it. Use these!
  • Physical Access: In addition to your own workers, clients may come into or near your work space. You want to take precautions to ensure that putting these clients into contact with each other does not create a security risk. You may need to control their access, have separate entrances, etc. You should also keep logs of who enters and leaves, to address any security issues in the room.
  • Data Devices: The bane of shared services is the ubiquity of thumb drives, smart phones, laptops and other devices with data storage that might allow your workers to copy data and take it out of the work area. You can limit what they bring into the work space, but unless you have body scanners and physically check every knapsack, it’s difficult to control. An alternative is to turn off the USB ports on all of your computers (nearly all of these devices connect via USB). Without a USB connection, these data devices cannot remove any data (one common Windows approach is sketched below).
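
On that last point: for a Windows fleet, one common approach is to disable the USB mass-storage driver (USBSTOR) in the registry, where a “Start” value of 4 means disabled. Here is a minimal sketch in Python, assuming Windows and administrator rights; in practice this change is usually pushed centrally through Group Policy or an endpoint-management tool rather than run machine by machine:

    # Disable USB mass-storage devices on Windows by switching off the
    # USBSTOR driver (Start = 3 enables it, Start = 4 disables it).
    # Requires administrator rights; affects newly attached devices.
    import winreg

    key_path = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)
    print("USB mass storage disabled.")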

When you are responsible for managing a shared service, there are always more details to consider. However, if you develop a shared service that addresses these six issue areas, you’re on your way to creating a world-class service. Every service is different, and every internal service has constraints on what you can and can’t do. But if you think through your service, understand the needs of your clients and keep your operations transparent, you can deliver the best possible service within those constraints. And that’s my Niccolls worth for today!


PMO Basics: Six Secrets For Efficient Shared Services


Every big corporation manages multiple shared services. Shared services are often a bit mysterious to a PMO, and improvement projects are sometimes overly complex. It is difficult enough to provide a service that can keep a single client happy; a shared service has more variables to juggle and more chances to make mistakes that anger clients. While there is no magic bullet that will make a service perfect for every client under every circumstance, there are a lot of ways to ensure that clients’ needs are met and that the service is run efficiently. Which features of a superior shared service will work best with which organization can be debated, but there really aren’t that many elements in this discussion. Today, we are going to take a look at the decisions you need to make to boost how clients rate a shared service, and to control its costs. So, let’s dive right in with…

Business Model: In order to provide the right services for your client, you need to know what they are trying to achieve. The most basic question you need to answer is, “Do you want this service to grow?” Some clients may want you to expand as much as possible, especially if you are replacing a higher-cost resource. Others may want to keep your service limited to certain hours, a specific cost or a specific cost per unit. If you don’t know whether your clients want you to grow or shrink, you may be headed for unexpected disappointment every time you “improve” your service.

Re-Baseline: When a service has been in operation for a long time, the needs of clients often change. Clients who adopted your service after it was already in production may have simply accepted the way it worked without trying to customize it. If you now support many different groups, chances are they have different needs. Go back and talk to your clients or survey them. Do they all need all the features of your services? Are some of the functions much more important than others? Does every group need the same hours of support? Find out what everyone wants. You may be surprised to find that some very expensive aspects of your support are not very highly valued.

Service Agreements: Once you know what each client wants, think like an entrepreneur. Your culture may severely limit “customizing” services or it may encourage it. But if you have the ability to bill for different service levels, give some thought to what each client is asking for, what it would cost the client (not just today but over at least three years), and how that would affect your operation… and the firm as a whole. Call it a service charter, but the process of developing this agreement can drive out the reasons why clients complain about your services AND identify unnecessary operating costs.

Primary & Secondary Clients: Carry the entrepreneurial model further. When you run a business you have tier one clients and other clients. If 90% of your service is consumed by a single client, they are your tier one. If they want your service to change or go in a certain direction, you need to do it. Alternatively, if you have 10 clients who each consume between 8% and 12% of your services, then no single client dominates service delivery and changes need to be based on the entire client base. Even so, go back to the data from re-baselining. In one or two years, will your client mix still be the same? Do you need to start moving service levels towards a future model?
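
Here is a minimal sketch of that tiering logic (in Python; the client names, shares and the 50% threshold are hypothetical):

    # Classify clients by consumption share. A single dominant client
    # effectively sets direction; otherwise, changes need broad support.
    # All figures are hypothetical.
    shares = {"Equities": 0.52, "Fixed Income": 0.23, "Legal": 0.15, "HR": 0.10}
    DOMINANT_SHARE = 0.50

    tier_one = [name for name, share in shares.items() if share >= DOMINANT_SHARE]
    if tier_one:
        print(f"Tier-one client(s): {tier_one}; their priorities lead.")
    else:
        print("No dominant client; base changes on the whole client base.")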

Management Reports: If you don’t already produce client-specific reports, you need to start. Whatever you use for reporting needs to be able to show the client, and the activity attributable to that client. Each client needs to know the details of how they are using your services, and you need to know how services are being used by clients. When you have clear reporting, you may not only learn that certain features of your service are more or less popular than you assumed… you also create a common language to discuss service levels and service problems. Rather than clients telling you to “improve your service,” service reports turn the conversation into, “We need better turnaround times on the weekends.” Converting complaints into actionable improvement plans is the result of putting good management reporting in place.
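
As a minimal sketch of what such a report might roll up (in Python; the job records are hypothetical):

    # Roll raw job logs up into a per-client view: volume, on-time rate
    # and error count, the "common language" for service discussions.
    # All records are hypothetical.
    from collections import defaultdict

    jobs = [
        {"client": "Legal", "on_time": True, "errors": 0},
        {"client": "Legal", "on_time": False, "errors": 1},
        {"client": "Equities", "on_time": True, "errors": 0},
    ]

    stats = defaultdict(lambda: {"jobs": 0, "on_time": 0, "errors": 0})
    for job in jobs:
        s = stats[job["client"]]
        s["jobs"] += 1
        s["on_time"] += job["on_time"]
        s["errors"] += job["errors"]

    for client, s in stats.items():
        print(f"{client}: {s['jobs']} jobs, {s['on_time'] / s['jobs']:.0%} "
              f"on time, {s['errors']} errors")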

Security: One last thing to remember is that when you have a shared environment… especially in a regulated or security-conscious industry (financial services, legal, accounting, consulting)… you have an obligation to make sure that all necessary steps have been taken to keep each client’s data secure. Issues of data security can occur even in a dedicated service environment. When you operate a shared service, you have an even higher obligation to ensure that client data is handled carefully and securely. If you work in a global environment, verify the rules of operation in other offices. Europe has formalized many rules for privacy (as well as security) that are not necessarily intuitive to a US manager.

Dividing your management attention between multiple clients is difficult, at best. However, if you carefully examine each of these issues you will be able to build (or re-engineer) a more responsive and more efficient service. Of course, it would be easier still if someone could just walk you through each of these issues and give you specific responses that you could adapt to your environment… a specific toolkit that will get you to version 2.0 of your service. Gee, that would be convenient, wouldn’t it? So, why don’t we tackle that in our next blog? We’ll build a simple “how to” checklist specifically for improving shared services. And that’s my Niccolls worth for today… but it’s just the start for this subject!


PMO Genius: 8 Lessons Learned About Outsourcing


In our last “PMO Basics,” we covered the reasons and methods for building a lessons learned file. Today, we’re going to apply lessons learned to outsourcing. Why single out outsourcing? There are several good reasons. First, the recent wave of outsourcing affected most big corporations, creating vast pools of reusable knowledge about creating and managing outsourcing programs. Second, outsourcing was overwhelmingly driven by the short-term cost savings needs created by the collapse of the world financial markets. Third, whenever a wave of change is driven by disaster and panic, it’s a pretty good bet that a lot of mistakes will be made and there will be many opportunities to improve on the early model; because the collapse was so big and so quick, outsourcing is littered with some pretty bad models that you need to be careful not to replicate. Consider a workforce tasked with building a dam overnight to prevent an expected flood, or an evacuation plan improvised while the building is collapsing. Neither is likely to produce a model that’s going to win any awards. Finally, outsourcing contracts typically run for 3 or more years, and many of the “panic years” contracts are coming due for renewal. This is a REALLY good time to learn from the past. If not, you are very likely to be stuck with a failed contract.

Most of the outsourcing contracts created in the last 10 years have been focused on short-term savings. The outsourcing contract costs less than the current cost of operation, but after this first reduction the contract is “stuck” at maintaining this cost; with few provisions or incentives for continuous improvement, cost will not fall and quality will not rise (for details, see the previous blog on the “Downward Spiral”). Because these contracts were built when service demand was falling (i.e. in an economic downturn), most firms find that these services do not scale back up as well as expected. As you will find when speaking to your colleagues, many people-based outsourcing services hit their best stride around years 2-3, and then deteriorate (for reasons we will discuss shortly). Let’s see what we’ve learned about outsourcing over the past few years:

WORLD ASSUMPTIONS: Many outsourcing programs were offshoring programs, because the lowest-cost programs leveraged the lower wages in other labor markets. However, over the past few years the cost of US labor has dropped, the availability of talent has increased, and the rate of inflation in popular outsourcing locations (such as India) has been high. In offshoring, the labor component is just one part of the cost you pay. In India, the cost of space is much higher than in comparable US or UK outsourcing markets; power (computers, air conditioning, etc.) costs more and is rising rapidly as the price of oil rises; inflation is higher; and exchange rates add an element of risk into the next contract. These are probably not the assumptions you used when you created your first contract; it’s time to update them.
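
A minimal sketch of re-running those assumptions with current numbers (in Python; every figure below is a hypothetical placeholder to be replaced with your own quotes):

    # Rebuild the per-seat cost comparison with today's numbers, not the
    # assumptions baked into the original contract. Figures are hypothetical.
    def annual_seat_cost(labor, space, power, inflation, fx_buffer):
        return (labor + space + power) * (1 + inflation) * (1 + fx_buffer)

    offshore = annual_seat_cost(labor=18_000, space=9_000, power=4_000,
                                inflation=0.09, fx_buffer=0.05)
    domestic = annual_seat_cost(labor=52_000, space=6_000, power=1_500,
                                inflation=0.02, fx_buffer=0.0)
    print(f"Offshore seat: ${offshore:,.0f}; domestic seat: ${domestic:,.0f}")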

MANAGEMENT ATTENTION: When a program receives management attention, it improves. Study after study shows that almost any attention (making the work area brighter or dimmer, lowering the temperature, raising pay) improves the work product… temporarily; this is the classic Hawthorne effect. Outsourcing focuses a lot of management attention on these programs… far more attention than their in-house predecessors received. But senior managers don’t want to put time into this forever, especially when the complaints start. What happened in your organization? In years two and three, did senior managers stop showing up to the daily and weekly meetings? Are senior managers even attending the monthly and quarterly reviews? Are there other signs that senior managers are less involved? As the seniority of management drops, has the pace of improvement also fallen? What about your next contract?

EARLY VS. LATER DAYS: Aside from management attention, there are other reasons that performance drops over time. On a new team, an exceptional new employee might become an assistant supervisor in six months, and then be promoted to supervisor, shift manager and eventually head of the service in rapid-fire promotions. However, after a little while, when the service hits its full size, the creation of new promotional positions slows and then disappears. New “A” performers look for positions on other teams, and the “A’s” on your team look for their next position elsewhere. The same would be true of a domestic team. But in India there is an assumption of a promotion every six months or so, very much continuing the rhythm of the college system of completing two semesters a year. Once you cease to have rapid promotions, the stream of “A’s” drops off, negatively impacting performance and increasing staff turnover and loss of knowledge on the team.

TEAM AGE: Outsourcing teams are younger than the teams they replace. Domestic teams are a bit younger, and offshore teams are a lot younger. For many offshore workers, working on your team is their first full-time job. That’s not necessarily a bad thing, but it does have an unexpected consequence. If this is your first job out of college, you haven’t yet decided what you want to do for the rest of your life. A significant number of these “freshers,” as they are called, decide that they want to do something else and leave. When this stream of attrition joins the others, you get the typical offshore turnover of 30%-50% (or more) of staff.

INTERNAL DEADWOOD: One of the unspoken reasons why outsourcing is attractive to corporations is that it provides an opportunity to get rid of problem employees. However, these employees often became problems because of past mistakes by the firm: workers “promoted” into confused job positions because the manager had no other way to give them a raise; workers who believe they are performing well due to inconsistent or non-existent annual reviews; annual raises for seniority rather than skills and performance; and genuine problem workers who have not been dealt with due to confused HR rules or weak managers (usually both). However, now that problem employees have long since been cleared out, processes are documented, and work performance is accurately measured and reported… how does this affect future outsourcing? Is there still deadwood to clear? Or could more progress be made by applying what you’ve learned and increasing employee value (performance, knowledge, motivation, etc.)?

METRICS & TRACKING: Before you outsourced, you may not have identified metrics and produced monthly management reports. Even if you did, after working with your outsourcing programs you may have new ideas about which metrics to track and how to collect data. In light of new information… how well did the old operation work? How well is outsourcing performing? Should your outsourcing program be moved further away (from domestic to offshore), closer (from offshore to domestic), or can you run a better program in-house now that your tools have been improved?

PROJECT VS. BENEFIT: An unfortunate fact of project management is that we put a lot of effort into seeing that the project implementation goes as planned (all deliverables delivered on time and on budget), but only a minority of projects are tracked until they deliver their proposed benefits. Like most efforts in life, outsourcing doesn’t always deliver the promised benefits. Many outsourcing programs start without clearly outlined benefits, or without a fully defined cost baseline (you included salaries, but did you include space costs?), and only recently have programs looked at the cost of redundancies (severance packages for terminated employees). Current studies show that less… perhaps far less… than half of outsourcing projects succeed. Do you have exact success criteria for your outsourcing programs? How successful have they been? Are you seeing differences in success between onshore programs, offshore programs, and programs that improved in-house operations?
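
A minimal sketch of verifying benefits against the original business case (in Python; all figures are hypothetical):

    # Compare the old cost base against the outsourced cost, including the
    # items early business cases often omitted (space, severance).
    # All figures are hypothetical.
    baseline_annual = {"salaries": 2_400_000, "space": 300_000}
    one_time = {"severance": 450_000, "transition": 200_000}
    outsourced_annual = 1_900_000
    years = 3

    old_cost = sum(baseline_annual.values()) * years
    new_cost = outsourced_annual * years + sum(one_time.values())
    savings = old_cost - new_cost
    print(f"Verified {years}-year savings: ${savings:,.0f} "
          f"({savings / old_cost:.0%} of the old cost base)")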

CONTRACT RENEWALS: Typical outsourcing programs can deliver short-term benefits. When it is time to renew, what happens? If the first contract was well designed, then there isn’t much room for cost improvement when you renew. Perhaps you did get a significant benefit in the second contract, and you are about to renew again. Do you expect another big cost reduction? Alternatively, contracts that are built around the idea of continuous improvement and process redesign will continue to reduce costs and improve performance. You simply cannot institute a process of innovation and change under a short-term cost reduction contract… which always focuses on freezing processes and performing to pre-set standards. If your contract does not deliver additional improvements over the next three years (or 5 years, or even farther into the future), does it make any sense to continue to use this type of contract? Have you looked at the alternatives (details in this Blog)?
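
To see why the contract structure matters so much, here is a minimal sketch comparing a frozen-price contract with one carrying an annual continuous-improvement commitment (in Python; the starting cost and 5% rate are hypothetical):

    # Year-by-year cost: frozen-price contract vs. a contract with an
    # annual productivity (continuous-improvement) commitment.
    # Figures are hypothetical.
    start_cost = 1_000_000
    improvement_rate = 0.05

    for year in range(1, 6):
        improving = start_cost * (1 - improvement_rate) ** (year - 1)
        print(f"Year {year}: frozen ${start_cost:,.0f} "
              f"vs. improving ${improving:,.0f}")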

When you ask these eight questions and put together your lessons learned file, you may be surprised. You may be surprised at how much the world has changed, and how much this affects your baseline assumptions. You may also be surprised at how far the verified results of your outsourcing programs diverge from their assumed results. You need to apply these new lessons and challenge old assumptions on new projects. Decisions based on real data inherently use old data, perhaps obsolete data. In a rapidly changing world, your assumptions need to be based on the most recent data available. In outsourcing, today’s data is considerably different from just three years ago, and very different from the many stories we heard that are closer to 10 years old.

Build your lessons learned file and keep the information up to date, and your projects will deliver much larger benefits. And that’s my Niccolls worth for today!
