Extreme Programming stands in contrast to the usual, deliverable-based methodologies. XP is based around activities. The rigor of the methodology resides in people carrying out their activities properly.
Not being aware of the difference between deliverable-based and activity-based methodologies, I was unsure how to investigate my first XP project. After all, the team has no drawings to keep up to date, so obviously there would be no out-of-date work products to discover!
An activity-based methodology relies on activities in action. XP relies on programming in pairs, writing unit tests, refactoring, and the like.
When I visit a project that claims to be an XP project, I usually find pair programming working well (or else they wouldn't declare it an XP project). Then, while they are pair programming, the people are more likely to write unit tests, and so I usually see some amount of test-writing going on.
The most common deviation from XP is that the people do not refactor their code often, which results in the code base becoming cluttered in ways that properly developed XP code shouldn't.
In general, though, XP has so few rules to follow that most of the areas of embellishment have been removed. XP is a special case of a methodology, and I'll analyze it separately at the end of the chapter.
Personally, I tend to embellish around design reviews and testing. I can't seem to resist sneaking an extra review or an extra testing activity through the "should" door ("Of course they should do that testing!" I hear you cry. Shouldn't they?!).
The way to catch embellishment is to have the directly affected people review the proposal. Watch their faces closely to discover what they know they won't do but are afraid to say they won't do.
Untried
Most methodologies are untried. Many are simply proposals created from nothing. This is the full-blown "should" in action: "Well, this really looks like it should work."
After looking at dozens of methodology proposals in the last decade, I have concluded that nothing is obvious in methodology design. Many things that look like they should work don't (testing and keeping documentation up to date, for example), and many things that look like they shouldn't work actually do work (pair programming and test-first development, for example).
The late Wayne Stevens, designer of the IBM Consulting Group's Information Engineering methodology in the early 1990s, was well aware of this trap.
Whenever someone proposed a new object-centered / object-based / object-hybrid methodology for us to include in the methodology library, he would say, "Try it on a project, and tell us afterwards how it worked." They would typically object, "But that will take years! It is obvious that this is great!" To my recollection, not one of these obvious new methodologies was ever used on a project.
Since that time, I have used Wayne Stevens' approach and have seen the same thing happen.
How are new methodologies made? Here's how I work when I am personally involved in a project:
· I adjust, tune, and invent whatever is needed to take the project to success.
· After the project, I extract those things I would repeat again under similar circumstances and add them to my repertoire of tactics and strategies.
· I listen to other project teams when they describe their experiences and the lessons they learned.
But when someone sends me a methodology proposal, I ask him to try it on a project first and report back afterwards.
Used once
The successor to "untried" is "used once." The methodology author, having discovered one project on which the methodology works, now announces it as a general solution. The reality is that different projects need different methodologies, and so any one methodology has limited ability to transfer to another project.
I went through this phase with my Crystal Orange methodology (Cockburn 1998), and so did the authors of XP. Fortunately, each of us had the good sense to create a "Truth in Advertising" label describing our own methodology’s area of applicability.
We will revisit this theme throughout the rest of the book: How do we identify the area of applicability of a methodology, and how do we tailor a methodology to a project in time to benefit the project?
Methodologically Successful Projects
You may be wondering about these project interviews I keep referring to. My work is based on looking for "methodologically successful" projects. These have three characteristics:
· The project was delivered. I don't ask if it was completed on time and on budget, just that the software went out the door and was used.
· The leadership remained intact. They didn't get fired for what they were doing.
· The people on the project would work the same way again.
The first criterion is obvious. I set the bar low for this criterion, because there are so many strange forces that affect how people refer to the "successfulness" of a project. If the software is released and gets used, then the methodology was at least that good.
The second criterion was added after I was called in to interview the people involved with a project that was advertised as being "successful." I found, after I got there, that the project manager had been fired a year into the project because no code had been developed up to that time, despite the mountains of paperwork the team had produced. This was not a large military or life-critical project, where such an approach might have been appropriate, but it was a rather ordinary, 18-developer technical software project.
The third criterion is the difficult one. For the purpose of discovering a successful methodology, it is essential that the team be willing to work in the prescribed way. It is very easy for the developers to block a methodology. Typically all they have to say is, "If I do that, it will move the delivery date out two weeks." Usually they are right, too.
If they don't block it directly, they can subvert it. I usually discover during the interview that the team subverted the process, or else they tolerated it once but wouldn't choose to work that way again.
Sometimes, the people follow a methodology because the methodology designer is present on the project. I have to apply this criterion to myself and disallow some of my own projects. If the people on the project were using my suggestions just to humor me, I couldn't know if they would use them when I wasn't present.
The pertinent question is, "Would the developers continue to work that way if the methodology author were no longer present?"
So far, I have discovered three methodologies that people are willing to use twice in a row. They are
· Responsibility-Driven Design (Wirfs-Brock 1991)
· Extreme Programming (Beck 1999)
· Crystal Clear (Cockburn 2002)
(I exclude Crystal Orange from this list, because I was the process designer and lead consultant. Also, as written, it deals with a specific configuration of technologies and so needs to be reevaluated in a different, newly adapted setting.)
Even if you are not a full-time methodology designer, you can borrow one lesson from this section about project interviews. Most of what I have learned about good development habits has come from interviewing project teams. The interviews are so informative that I keep on doing them.
This avenue of improvement is also available to you. Start your own project interview file, and discover good things that other people do that you can use yourself.
Author Sensitivity
A methodology's principles are not arrived at through an emotionally neutral algorithm but come from the author's personal background. To reverse the saying from The Wizard of Oz, "Pay great attention to the man behind the curtain."
Each person has had experiences that inform his present views and serve as his anchor points. Methodology authors are no different.