Tuesday, June 26, 2012

Why We Need Hokum (Part Five)

Here's a bit more thinking on the subject of hokum. Use the labels on the right to find more, or click here.

Fiction is a polite word for lying. If you pick up a novel or a short story, you know going in that it’s not true, at least not as a whole. Historical novels have some real history in them; science fiction novels have some real science in them — but characters, dialogue, and situations are made up.

And yet, for many of us, the world inside our favorite novels is at least as real to us as the actual world in which we live. I first read Lord of the Rings in the unauthorized Ace Books edition back in 1966 (sorry to say, I no longer have those copies) and have re-read it every couple of years since. There are any number of books about which I have the same feelings: the early Saint stories by Leslie Charteris, most of the oeuvre of Robert Heinlein, Pride and Prejudice, and many more.

I lose myself in these alternate and unreal worlds consciously, in a process known to writers as the “willing suspension of disbelief.” There’s nothing inherently wrong with that, but we start to run a risk when we willingly suspend disbelief about stories that aren’t labeled as fiction. That’s when we fall completely into the world of hokum.

Some cases of hokum are relatively harmless. If you believe in the Loch Ness Monster, UFOs, or Bigfoot, you probably aren’t going to live your life very differently. The basic idea — that there are more things in heaven and earth than are dreamt of in your philosophy — isn’t inherently silly. Occasionally, things once thought to be fanciful turn out to be real, as in the case of the once-mythical black swan. But that’s not the way to bet.

Some beliefs start out legitimate and turn into hokum. When the respected psychiatrist Immanuel Velikovsky first proposed his theory that Old Testament legends reflected actual catastrophic close contacts with other planets, he was attempting to do legitimate science. Stephen Jay Gould said of him, “Velikovsky is neither crank nor charlatan — although, to state my opinion and to quote one of my colleagues, he is at least gloriously wrong.”

And wrong he was. His proposed celestial mechanics were physically impossible, violating the laws of conservation of energy and momentum, among others. There is no sin, however, in being wrong. But for anyone today to believe Velikovsky, the term “crank” becomes increasingly accurate.

Religion is a notorious breeding ground for hokum. Before you get offended, let me point out that even the most religious person doesn’t believe in the stories of all the other religions. If you’re a strong Christian, you tend to reject Hindu or Muslim stories as hokum — and, of course, they return the favor. But any religion is more than the stories told in its holy books. Religions provide moral and ethical codes, prescriptions for daily living, and in some cases suggestions on the proper ordering of society. One can accept a moral prescription without necessarily accepting the dogma that goes with it — although the reverse is not true.

Hokum becomes increasingly dangerous when it forces a rewrite of things known to be true. In science, evolution is merely the best-known area of disagreement, but the website Conservapedia goes so far as to attack the theory of relativity! In Texas, there's an attempt to modify American history itself, removing such dangerous figures as Thomas Jefferson from the curriculum.

The desire to embrace hokum — the need for hokum — seems baked into the human psyche. It can promote a sense of wonder and mystery, or simply be a harmless pastime. But hokum is a dangerous thing, both for those who embrace it, and those who must suffer under the yoke of those who do. Confining yourself to the self-aware hokum of fiction may be the safest course of action.

Tuesday, June 19, 2012

Duct Tape and Watergate (Watergate Part 5)

For previous installments of my irregular series tracing the history of the Watergate scandal, click here. This week, the actual break-in, and the sad story of the man who discovered it.

On June 17, 1972, security guard Frank Wills apprehended Bernard Barker, Virgilio Gonzalez, Eugenio Martinez, Frank Sturgis, and James McCord inside the Democratic National Committee headquarters in the Watergate building, a combination office, condominium, and hotel complex near the Kennedy Center in Washington, DC. The men were arrested and locked up until a preliminary hearing the next morning.

At the hearing, the judge asked each of the men his name, address, and place of employment. James McCord mumbled his answer when it came to where he worked. The judge told him to speak up. “Where do you work?” the judge asked again.

“I work for the Committee to Re-Elect the President,” McCord said. And with those words, the Watergate scandal officially began. As we’ve noted in previous installments of this story, a lot had already happened by this time: the Enemies List, the raid on the office of Pentagon Papers leaker Daniel Ellsberg’s psychiatrist, and Operation GEMSTONE. The infamous “two-bit burglary” was the incident that unraveled the entire web.

The June 17 burglary, interestingly, was not the first. Over the previous Memorial Day weekend, the Watergate team had tried not once, but twice, and failed on both attempts: one because they couldn’t get to the staff elevator before the night alarm was turned on, and the other because Gonzalez, a locksmith, failed to pick the lock on the DNC headquarters office door.

On the night of May 28, a third attempt succeeded. The burglars installed a wiretap and a room microphone in the office of DNC chairman Larry O’Brien, and photographed as many documents as they could. Across the street from the Watergate, in Room 419 of the Howard Johnson’s (now a George Washington University dorm), E. Howard Hunt and the other mission commander, G. Gordon Liddy, monitored the bugs, but evidently the bugs didn’t work very well. By June 5, McCord had new instructions: repair the room microphone and fix a problem with one of the phone taps.

On June 17, the team went back in, using the door between the garage and the stairwell. To make sure it didn’t lock, they put duct tape over the latch bolt. Security guard Frank Wills, working the midnight to 7 am shift, spotted the tape on a routine patrol of the building, and removed it. He didn’t think anything else about it, and kept going.

But when he came back on his next round, he found that the tape had been replaced! And so he called the police.

Frank’s story didn’t turn out well. When the Watergate complex didn’t give him a raise for discovering the burglary, he quit. His fifteen minutes of fame lasted a year or so, and after that he wasn’t able to hold a steady job. In 1983 he was convicted of shoplifting. By 1993, he was so broke that he was washing his clothes in a bucket. He died in 2000 of a brain tumor.

Tuesday, June 12, 2012

Watergate Considered as a Helix of Semi-Precious Stones (Watergate Part 4)

An expanded version of this article is part of my book Watergate Considered as an Organization Chart of Semi-Precious Stones, available in paperback and ebook versions.

This is the fourth installment of my irregular series tracing the history of the Watergate scandal. Part 1, “Watergate and Me,” appeared last July. Part 2, “The Enemies List,” appeared in August. Part 3, “Hunt/Liddy Special Project 1,” appeared in September. Some of the material in this installment is adapted from my book The Six Dimensions of Project Management (with Heidi Feickert).

These three installments covered some of the origins of Watergate: the founding of the Plumbers Unit, the establishment of the Enemies List, and the beginnings of covert operations. None of these had any direct bearing on the eventual Watergate break-in that began the official scandal, but because they involved much of the same cast of characters and many of the same resources, the unraveling of one led inexorably to the unraveling of the others.

The original “Plumbers Unit” was established to stop the leaking of classified information (most notably the Pentagon Papers) to the news media, but beginning some time after October 1971 (sources conflict on the exact date), G. Gordon Liddy was asked to move from the Plumbers operation to Nixon’s campaign organization, the Committee to Re-Elect the President (CRP, but often abbreviated by its enemies as CREEP), with responsibility for establishing an intelligence operation for the campaign. He moved to his new offices in December, and shortly thereafter was being introduced as the man in charge of “dirty tricks.”

By January, Liddy was ready to present his plan, known as Operation GEMSTONE. Budgeted at $1 million (in 1972 dollars), it was quite extensive. Each element of the operation had its own gem subtitle:

  • DIAMOND — Counterdemonstration activities
  • GARNET — Recruit unpopular groups to put on pro-Democratic demonstrations
  • RUBY — Infiltrate spies into the organizations of Democratic contenders
  • SAPPHIRE — Establish a houseboat filled with prostitutes as a “honey trap” during the Democratic convention
  • COAL — Funnel money to the campaign of African-American candidate Shirley Chisholm
  • TURQUOISE — Commando raid to destroy the air conditioning for the Democratic convention
  • QUARTZ — Microwave interception of Democratic telephone traffic
  • EMERALD — Chase plane to eavesdrop on Democratic candidate aircraft
  • CRYSTAL — Electronic surveillance of the Democratic convention
  • OPAL — Clandestine entries to plant telephone bugs in the offices of Muskie, McGovern, and the DNC
  • TOPAZ — Photograph documents during clandestine entries
  • BRICK — Funding operation

Attorney General (and CRP chairman) John Mitchell rejected the plan as excessive, and sent Liddy back to the drawing board. Liddy burned the charts and prepared a second plan, this one capped at $500,000. He cut EMERALD (the chase plane), QUARTZ (microwave interception), and COAL (funding Shirley Chisholm; Mitchell told Liddy that Nelson Rockefeller had already taken care of that). SAPPHIRE lost the houseboat, but kept the prostitutes.

This too was rejected for cost, and Liddy tried a third time. He kept the four OPAL break-ins, the two RUBY agents, and two of the SAPPHIRE prostitutes, along with some of the DIAMOND capabilities. Mitchell and his team approved the smaller operation just before new campaign finance rules would make it harder to fund the program.

Interestingly, the actual Watergate break-in, although eventually part of the OPAL operation, was nowhere to be seen in the original plans.

Next week: the Break-In.

Tuesday, June 5, 2012

A New Approach to Qualitative Risk Management

This week’s Sidewise Thinking post is hardcore project management, for those of you who are interested in that sort of thing. The chart and the approach to qualitative risk analysis come from my recent AMACOM self-study sourcebook, Project Risk and Cost Analysis, but the words are original to this blog post.

Qualitative risk analysis as expressed in the PMBOK® Guide has always given me a headache. The definition is confusing and badly written. That’s not just my opinion; it’s the common experience of lots of people taking project management seminars. (Certainly it’s true of my seminars, but I’ve seen many other trainers struggle with this as well.) The problem isn’t with the individual tools. They are easy enough to understand and apply, and their utility is reasonably obvious. But when you look at the similarities and differences between qualitative and quantitative risk analysis using the official PMBOK® (11.3, 11.4) definitions, some problems arise.

  • Qualitative risk analysis is the process of prioritizing risks for further analysis by assessing and combining their probability of occurrence and impact.
  • Quantitative risk analysis is the process of numerically analyzing the effect of identified risks on overall project objectives.

When people first encounter this, the response is a great big "Huh?" I don't really blame them. Here's why.

To see the problem in PMBOK, start with the most obvious difference between the two types: quantitative risk analysis uses numbers and qualitative risk analysis uses "prioritization." In practice, these amount to the very same thing.

How do you value a risk? According to the book, you multiply its probability of occurrence by the impact should it occur, expressed by the formula R = P x I. For example, a ten percent risk of losing a thousand dollars turns into 0.1 x $1,000, or $100. If the cost of dealing with that risk is less than $100, it’s clearly a good investment. (It’s always worth noting that the reverse isn’t necessarily true. Spending more than $100 may well be a good idea, but you may have to prove it.) That's the quantitative way.
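For those who like to see the arithmetic spelled out, here is a minimal sketch of the quantitative calculation in Python. The figures are the example numbers above, not data from any real project:

```python
# Quantitative risk valuation: R = P x I,
# probability of occurrence times impact if it occurs.

def risk_value(probability: float, impact: float) -> float:
    """Expected monetary value of a risk."""
    return probability * impact

# A ten percent chance of losing a thousand dollars:
emv = risk_value(0.1, 1000)
print(emv)  # 100.0

# A response costing less than the expected value is clearly worth it.
response_cost = 75
print(response_cost < emv)  # True
```

Note that the comparison only works in one direction, just as the text says: a response costing more than the expected value isn't automatically a bad idea, it just needs a separate justification.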

In the qualitative approach, according to PMBOK, you might say the probability is LOW and the impact is, say, MEDIUM (depends on the size of the project). You look on a grid to see where the terms intersect (usually LOW). But that's the same thing as saying LOW x MEDIUM = LOW, or P x I again.
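The grid lookup can be sketched the same way. A word of hedging: the matrix below is an illustrative probability-and-impact grid, not PMBOK's official table; every organization calibrates its own.

```python
# Qualitative risk valuation: LOW/MEDIUM/HIGH grid lookup.
# This is P x I again, just with labels instead of numbers.
# The matrix is illustrative; real projects define their own.

RATING = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

# GRID[probability][impact] -> combined rating
GRID = [
    # impact:  LOW       MEDIUM    HIGH
    ["LOW",    "LOW",    "MEDIUM"],  # probability LOW
    ["LOW",    "MEDIUM", "HIGH"],    # probability MEDIUM
    ["MEDIUM", "HIGH",   "HIGH"],    # probability HIGH
]

def qualitative_risk(probability: str, impact: str) -> str:
    """Combine two labels the way the PMBOK-style grid does."""
    return GRID[RATING[probability]][RATING[impact]]

print(qualitative_risk("LOW", "MEDIUM"))  # LOW
```

Structurally, `qualitative_risk` and the numerical `P x I` multiplication are doing the same job: combining two inputs into one rank.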

In other words, the “numerical analysis of effect” and “combined probability and impact” are, for all practical purposes, synonyms — the very definition of risk says so. That creates a lot of blurring between the two processes, especially in terms of their utility. No matter which technique you use, you end up with a priority ranking of your project’s risks. (You also get analytical data about those risks.) Then you move on to risk response planning.

Prioritizing risks gives you only a one-dimensional sort: higher or lower. That’s quantitative risk analysis, whether you use actual dollars and percentages or a scale of high, medium, and low. You rank the risks into a numerical hierarchy. And that’s it.

But prioritizing risks isn't nearly all you need to do. In this post, I want to give you a new way to think about the two kinds of risk analysis.

You need to organize your list of identified risks in three dimensions, based on the initial choices you must make. Some risks require more rapid response than others, severity notwithstanding. Some risks affect the project but aren’t yours: they may belong to the customer, your boss, or other departments in the organization. If you’re running an IT project and you identify legal risks, you probably should route those risks to the general counsel rather than deal with them yourself. Some risks have solutions — and others have no solution at all.

Using the accompanying chart, let’s look at those choices.

IMPACT: The first gateway is to examine a risk’s impact. Forget probability: if the impact is low enough, you’re done with the risk. If the cost of the risk is $50 and the project is $5 billion, it’s irrelevant whether it happens or not. Granted, first impressions can be deceiving and the impact of a given risk can change over the lifespan of the project, so you don’t want to throw this list away.

PROBABILITY: Only then should you think about the probability of occurrence. If you are lucky enough to have real numbers, good for you. More often, you’ve got a vague idea (it’s pretty likely to happen, or maybe it’s very unlikely) or you don’t have any idea at all about the probability.

Here’s where you deal with the R = P x I formula, whether you’re using 0.1 x $1,000 or a high, medium, and low scale from a table (LOW x MEDIUM = LOW). And yes, when determining impact, you have to extend your vision from the impact on the work package to the impact on the project as a whole, and from there to the impact on outside people and organizations.

At the end, you put more risks in the parking lot. Perhaps the impact is serious enough to be noticed, but the probability is ridiculously remote. You make sure that the risks that get further action are the ones with the highest net value.

According to PMBOK, you're done — but you aren't.

URGENCY: Risk responses follow the Godzilla Principle: baby problems are easier to deal with than full-grown problems. The available set of responses to a given risk tends to deteriorate over time. Even if some risks have higher value, you need to move urgent risks to the front of the queue. It’s less important whether the risk event is coming up soon; the key question is when your solutions expire. So we’ve already violated the priority order we established in the previous step — another tip-off that prioritizing risks by probability and impact isn’t nearly enough to do the job.

OWNERSHIP: If there’s no reason for a risk to jump to the head of the line, we go back to the prioritized list of risks, but with another question: do these risks fall under the jurisdiction of the project manager or team? As we noted, legal risks usually belong to the legal department, rather than, say, IT. Other risks may fall generally into our project, but if the impact is great enough, higher levels of the chain of command usually get the final say-so.

Transferring risk, in other words, often doesn’t wait until risk response planning — when we need to pass the buck, we pass it early in the process. Of course, we’re often responsible for providing information and options to the people who own the decision, and sometimes responsible for implementing the solutions they devise, but the key question of ownership has already been settled.

ACTIONABLE: Our remaining pile of risks shrinks with each step, but before we start on the process of planning our risk responses, there’s still one more sort that needs to be done. Do these risks have answers? In other words, can we find a potential proportionate and cost-effective response to the risk that doesn’t create serious negative consequences as a side effect?

If the answer is yes, we have our tentative risk solution, and we move forward to risk response planning.

ACCEPTABLE: If the answer is no, we have a decision to make. We can decide to accept the risk anyway. Maybe we add something to the contingency allowance for dollars or time. Maybe we come up with a recovery plan. Either way, we move forward.

But if the risk is too serious, moving forward may be a very bad idea indeed. We have to re-think the project. Maybe we modify it. Maybe we cancel it. Either way, we’re no longer going ahead with the original vision.

* * *

No matter what approach we use, we plan risk responses for some risks, but not all of them. In the PMBOK model, we plan risk responses based on the priority of the risk. As you’ve seen, however, that’s not enough. Think of qualitative risk analysis as the overall process of sorting risks not merely by P x I, but by the nature of the initial actions you should take.

     Is it SERIOUS? If not, accept it.

     Is it LIKELY? If not, accept it.

     Is it URGENT? If it is, act on it now.

     Is it MINE? If not, transfer it.

     Is it ACTIONABLE? If so, act on it.

     If not, is it ACCEPTABLE? If not, rethink the project.
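For the programmers in the audience, the six questions above can be sketched as a single triage function. The predicate names and the routing labels are my own shorthand for the steps described in this post, not official PMBOK terminology:

```python
# A sketch of the six-question risk triage described above.
# Each risk answers six yes/no questions; the first decisive
# answer routes it. Field names are illustrative shorthand.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    serious: bool     # is the impact worth worrying about?
    likely: bool      # is the probability worth worrying about?
    urgent: bool      # do the available responses expire soon?
    mine: bool        # does it belong to the project team?
    actionable: bool  # is there a proportionate response?
    acceptable: bool  # can we live with it if there isn't?

def triage(risk: Risk) -> str:
    if not risk.serious:
        return "accept"
    if not risk.likely:
        return "accept"
    if risk.urgent:
        return "act now"
    if not risk.mine:
        return "transfer"
    if risk.actionable:
        return "plan response"
    if risk.acceptable:
        return "accept with contingency"
    return "rethink the project"

# A hypothetical example: serious and likely, not urgent,
# but it belongs to another department.
lawsuit = Risk("vendor lawsuit", serious=True, likely=True,
               urgent=False, mine=False, actionable=True, acceptable=True)
print(triage(lawsuit))  # transfer
```

Notice that urgency is checked before value-based priority, which is exactly the violation of the P x I ordering discussed under URGENCY above.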