The cost of missing this sweet spot is a lowered probability of making a really usable product, and a much higher cost for running all the experiments.
There are many alternative, if less effective, mechanisms you can use when you can't land on this sweet spot. They have been well documented over the years: weekly interview sessions with the users; ethnographic studies of the user community; surveys; friendly alpha-test groups. There are certainly others.
Missing this sweet spot does not excuse you from getting good user feedback. It just means you have to work harder for it.
One-Month Increments
There is no substitute for rapid feedback, both on the product and on the development process itself. Incremental development is perfect for providing feedback points. Short increments help both the requirements and the process itself get repaired quickly. The question is, how long should the delivery increments be?
The correct answer varies, but the project teams I have interviewed vote for one to three months, with a possible reduction to two weeks and a possible extension to four months.
It seems that people are able to focus their efforts for about three months, but not much longer. People tell me that with a longer increment period, they tend to get distracted and lose intensity and drive. In addition, increments provide a team with chances to repair its process: the longer the increment, the longer the gap between such repair points.
If this were the only consideration, then the ideal increment period might be one week. However, there is a cost to deploying the product at the end of an increment.
I place the sweet spot at around one month, but have seen successful use of two or three months.
If the team cannot deliver to an end user every few months, for some reason, it should prepare a fully built increment in that period, and get it ready for delivery, pretending, if necessary, that the sponsor will suddenly demand its delivery. The point of working this way is to exercise every part of the development process, and to improve all parts of the process every few months.
Fully automated regression tests
Fully automated regression tests (unit or functional tests, or both) bring two advantages. First, the developers can revise the code and retest it at the push of a button; people who have such tests report that they freely replace and improve awkward modules, knowing that the tests will help keep them from introducing subtle bugs. Second, people report that they relax better on the weekends when they have automated regression tests. They run the tests every Monday morning, and discover if someone has changed their system out from under them.
In other words, automated regression tests improve both the system design quality and the programmers' quality of life.
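To make the idea concrete, here is a minimal sketch of such a push-button regression test, written with Python's built-in unittest framework. The price_order function is a hypothetical stand-in for whatever module a team wants to protect; nothing here comes from any particular project.

    import unittest

    def price_order(quantity, unit_price, discount=0.0):
        """Hypothetical module under test; stands in for any part of the system."""
        return quantity * unit_price * (1.0 - discount)

    class PriceOrderRegressionTest(unittest.TestCase):
        # Each test pins down behavior the team does not want to lose
        # while they replace and improve awkward modules.
        def test_basic_pricing(self):
            self.assertAlmostEqual(price_order(3, 10.0), 30.0)

        def test_discount_applied(self):
            self.assertAlmostEqual(price_order(3, 10.0, discount=0.1), 27.0)

        def test_zero_quantity(self):
            self.assertEqual(price_order(0, 10.0), 0.0)

    if __name__ == "__main__":
        unittest.main()  # the whole suite runs "at the push of a button"

Running this file reruns every check; a Monday-morning run that suddenly fails tells the team that someone changed the system out from under them.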
There are some parts of the system (and some systems) that are difficult to create automated tests for.
One of those is the graphical user interface. Experienced developers know this, and allocate special effort to minimize the portions of the system not amenable to automated regression tests.
When the system itself does not have automated regression tests, experienced programmers find ways to create automated tests for their own portion of the system.
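One common tactic, sketched below under assumed names rather than taken from any project described here, is to pull the decision logic out of the GUI into plain functions. The GUI layer shrinks to a thin shell that only collects input and displays results, while the logic underneath remains reachable by ordinary automated regression tests.

    def validate_order_form(quantity_text, unit_price_text):
        """Pure decision logic: no GUI toolkit involved, so it is easy
        to cover with automated regression tests."""
        try:
            quantity = int(quantity_text)
            unit_price = float(unit_price_text)
        except ValueError:
            return False, "Quantity and price must be numbers."
        if quantity <= 0:
            return False, "Quantity must be positive."
        if unit_price < 0:
            return False, "Price cannot be negative."
        return True, "OK"

    # The GUI callback stays an almost logic-free shell (toolkit-specific,
    # shown here only as a comment):
    # def on_submit(form):
    #     ok, message = validate_order_form(form.quantity.get(), form.price.get())
    #     form.status_label.set(message)

The regression suite can then exercise validate_order_form directly, leaving only the thin shell outside the automated tests.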
Experienced developers
In an ideal situation, the sweet spot, the team consists of only experienced developers. The teams of this sort that I know report quite different, and better, results than average, mixed teams.
Since good, experienced developers may be two to ten times as effective as their colleagues, it would be possible to shrink the number of developers drastically if the team consisted entirely of experienced developers.
On project Winifred, we estimated, both before and after the project, that six good Smalltalk programmers could have developed the system in the needed timeframe. Not being able to get six good Smalltalk programmers at that time, we used 24 programmers. The four experienced ones built most of the hard parts of the system and spent much of their time helping the inexperienced ones.
If you can't land in this sweet spot, consider bringing in a half-time or full-time trainer or mentor to increase the abilities of the inexperienced people.
The Trouble with Virtual Teams
"Virtual" is a euphemism meaning "not sitting together." With the current popularity of this phrase, project sponsors excuse themselves for imposing enormous communication barriers on their teams.
We have seen the damage caused to a project by having people sit apart. The speed of development is related to the time and energy cost per idea transfer, with large increases in transfer cost as the distance between people increases, and large lost opportunity costs when some key question does not get asked. Splitting up the team is just asking for added project costs.
I categorize geographically distributed teams into three sorts, some of them more damaging than others. My terms for them are multi-site, offshore, and distributed development.
Multi-site Development
Multi-site development is when a larger team works in relatively few locations, each location contains a complete development group, and the groups develop fairly decoupled subsystems.
Multi-site development has been performed successfully for decades.
The key in multi-site development is to have full and competent teams in each location, and to make sure that the leaders in each location meet often enough to share their vision and understanding. Although many things can go wrong in multi-site development, it has been demonstrated to work many times, and there are fairly standard rules about getting it to work, unlike the other two virtual team models.
Offshore development
Offshore development is when "designers" in one location send specifications and tests to "programmers" in another location, usually in another country.
Since the offshore location lacks architects, designers, and testers, this is quite different from multi-site development.
Here's how offshore development looks, using the words of cooperative games and convection currents.
The designers at the one site have to communicate their ideas to people having a different vocabulary, sitting several time zones away, over a thin communications channel. The programmers need a thousand questions answered. When they find mistakes in the design, they have to do three expensive things: first, wait until the next phone or video meeting; second, convey their observations; and third, convince the designers of the possible mistake in the design. The cost in erg-seconds per meme is staggering, the delays enormous.

Testing Offshore Coding
One designer told me that his team had to specify the program to the level of writing the code itself, and then had to write tests to check that the programmers had correctly implemented every line they had written. The designers did all the paperwork they considered unpleasant, without the reward of being able to do the programming. In the time they spent specifying and testing, they could have written the code themselves, and they would have been able to discover their design mistakes much faster.
I have not been able to find methodologically successful offshore development projects. They fail the third test: The people I have interviewed have vowed not to do it again.
Fortunately, some offshore software houses are converting their projects into something more like multi-site development, with architects, designers, programmers and testers at the programming location. While the communications line is still long and thin, they can at least gain some of the feedback and communication advantages of multi-site development.
Distributed development
Distributed development is when a team is spread across relatively many locations with relatively few, often only one or two, people per location.