Crispin, Lisa. Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley Professional, 2008.

Your team is disciplined and writes tests for every bug found.

Alternatives and Suggestions for Dealing with Bugs

As teams mature, they find procedures that work for them. They eliminate redundant tasks. They become more practiced at using story cards, story boards, and project backlogs. They use tests effectively, and learn which bugs to log and what metrics make sense to their team. In this section, we’ll share some ideas that other teams have found work for them.

Set Rules

Set rules like, “The number of pink cards (bugs) should never get higher than ten at any one time.” Revisit these each time you have a team retrospective. If your defect rate is going down, no worries. If the trend is the opposite, spend time analyzing the root cause of bugs and create new rules to mitigate those.

Fix All Bugs

Don’t forget to fix low-priority bugs found during the iteration as well, because they have an effect on future development. In our experience, there seems to be a strong correlation between “low priority” and “quick to fix,” although we don’t have hard facts to support that. We suggest stopping small, isolated bugs before they become large, tangled bugs.

Combine Bugs

If you find a lot of bugs in one area, think about combining them into an enhancement or story.

Janet’s Story

When I first started working at WestJet, I found a lot of small issues with the mobile application. The application worked correctly, but I was confused about the flow. I only found these issues because I was new and had no preconceptions.

The team decided to group the issues I had raised and look at the whole problem as a new story. Once the team had studied the full problem with all of the known details, the final outcome was a solid feature. If the bugs had been fixed piecemeal, the result would not have been nearly as good.

—Janet

Treat It as a Story

If a “bug” is really missed functionality, consider writing a card for it and scheduling it as a story. These stories are estimated and prioritized just like any other story. Be aware that bug stories may not receive as much attention as the new user stories in the product backlog. It also takes time to create the story, prioritize it, and schedule it.

The Hidden Backlog

Antony Marcano, author of www.TestingReflections.com, points out that while user stories and their acceptance tests describe desired behavior, defect reports describe misbehavior. Behind each misbehavior is a desired behavior, often not previously defined. Thus, behind every defect report may be a hidden user story. He explains his experiences.

In Chapter 5, “Transitioning Typical Processes,” we mentioned Antony Marcano’s blog post about defect tracking systems being a hidden backlog in agile teams. Antony shares his ideas about how to bring that secret out into the open.

XP publications suggest that if you find a bug, you should write an automated test that reproduces it. Many teams file a bug report and then write a separate automated test. I’ve found that this results in duplication of effort, and therefore waste. When we write a bug report, we state the steps, what should have happened (the expectation), and what actually happened (the anti-expectation). An automated test tells you the same things: the steps, the expectation, and, when it is run for the first time, the anti-expectation. When you can write an automated acceptance test as easily as you can write a bug report, when the test communicates as much as the bug report does, and when your backlogs and story boards let you manage the work involved in fixing it, why write a separate bug report?
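To make the idea concrete, here is a minimal sketch in Python with pytest. The calculate_total function and the double-discount scenario are invented for illustration; they stand in for whatever code and defect your own team is dealing with, and are not from the book. The point is only that one failing test can carry the steps, the expectation, and the observed misbehavior.

    # Capturing a bug as a failing automated test instead of a written bug report.
    # The test records the same three things a report would: the steps, the
    # expectation, and (via its first, failing run) the actual misbehavior.
    import pytest

    def calculate_total(items, member_discount):
        # Hypothetical stand-in for the code under test, shown with the defect
        # still present: the member discount is applied twice.
        subtotal = sum(items) * (1 - member_discount)
        return subtotal * (1 - member_discount)

    def test_total_applies_member_discount_only_once():
        """Steps: total an order of three $10.00 items with a 10% member discount.
        Expectation: $27.00. The first run shows $24.30 (discount applied twice);
        that failing run documents the anti-expectation a report would describe."""
        total = calculate_total(items=[10.00, 10.00, 10.00], member_discount=0.10)

        # The expectation; until the defect is fixed, this assertion fails.
        assert total == pytest.approx(27.00)

Once the defect is fixed, the same test stays in the suite as a regression check, which a duplicated bug report could never do.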

Bug metrics are all that remain. Bug metrics are traditionally used to help predict when software will be ready for release, or to highlight whether quality is improving or worsening. In test-first approaches, rather than telling us whether quality is improving or worsening, they tell us how good we were at predicting tests; that is, how big the gaps were in our original thinking. This is useful information for retrospectives, and it can be gathered simply by tagging each test with details of when it was identified: during story elaboration, during post-implementation exploration, or in production. As for predicting when we will be able to release, that job is handled by burn-down/burn-up charts and the like once we are completing software of “releasable quality” every iteration.
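One lightweight way to do that tagging, continuing the pytest sketch above, is custom markers. The marker names here (story_elaboration, exploration, production) are purely illustrative, not a standard convention or anything suggested in the book.

    # Markers must be registered, e.g. in pytest.ini:
    #   [pytest]
    #   markers =
    #       story_elaboration: identified while elaborating the story
    #       exploration: identified during post-implementation exploratory testing
    #       production: identified from a problem found in production
    import pytest

    @pytest.mark.story_elaboration
    def test_member_discount_is_applied():
        ...

    @pytest.mark.exploration
    def test_member_discount_is_not_applied_twice():
        ...

    @pytest.mark.production
    def test_order_with_no_items_totals_zero():
        ...

Counting tests per marker (for example, pytest -m exploration --collect-only -q) gives a retrospective a rough picture of how many gaps were found after the original thinking was done.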

On one new project, I suggested that we hold off on using a bug-tracking system until the need for one became compelling. We captured the output of exploratory testing performed inside the iteration as automated tests rather than as bug reports. We determined whether each test belonged to the current story or to another story, or whether it inspired a new story. We managed these stories as we would any other story and used burn-down charts to predict how much scope would be done by the end of the iteration. We never even set up a bug-tracking system in the end.

There is a difference between typical user stories and bug-inspired user stories, however. Previously, our stories and tests dealt only with missing behaviors (i.e., features we knew we wanted to implement in the future). Now they also started to represent misbehaviors. We found it useful to include summary information about the misbehavior in the proposed user story to help the customer prioritize it better. For example:

As a registered user, I want to be prevented from accessing the system if my password is entered using the incorrect case, so that I can feel safer that no one else can guess my password, rather than being allowed to access the system.

The “rather than” was understood by the customer to mean “that’s something that happens currently”—which is a misbehavior rather than merely a yet-to-be-implemented behavior.
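Behind a story like that sits an acceptance test. Here is a hedged sketch of what it might look like, again in pytest; register_user and attempt_login are hypothetical helpers invented for this example, not functions from the book or from any particular framework.

    # Acceptance test for the misbehavior-inspired story above. Run against the
    # current system it fails, documenting the "rather than" clause; once it
    # passes, the story is done.
    from auth import register_user, attempt_login  # hypothetical helpers

    def test_login_is_refused_when_password_case_is_wrong():
        register_user(username="alice", password="Secret1")

        # Entering the password with the incorrect case...
        granted = attempt_login(username="alice", password="SECRET1")

        # ...should be prevented, rather than being allowed as it is today.
        assert granted is False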

Using this test-only approach to capturing bugs, I’ve noticed that bug-inspired stories are prioritized more as equals to the new-feature user stories, whereas before, the “cool new features” in the product backlog often received more attention than the misbehaviors described in the bug-tracking system. That’s when I realized that bug-tracking systems are essentially hidden, or secret, backlogs.

On some teams, however, the opposite is true. Fix-all-bugs policies can give more attention to bugs at the expense of perhaps more important new features in the main backlog.

Now, if I’m coaching a team mid-project, I help them find better and faster ways of writing automated tests. I help them apply those improvements to writing bug-derived automated tests. I help them find the appropriate story, new or existing, and help them harness the aggregate information that is useful in retrospectives. Eventually, they come to the same realization that I did: Traditional bug tracking starts to feel wasteful and redundant. That’s when they decide that they no longer want or need a hidden backlog.

If bugs are simply logged in a defect-tracking system, important information might be effectively lost to the project. When we write acceptance tests to drive development, we tend to focus on desired behavior. Learning about undesired behavior from a defect, and turning that knowledge into stories, is a vital part of producing the right functionality.

Blue, Green, and Red Stickers

Each team needs to determine the process that works for it, and how to make that process easily visible. The following story is about one process that worked for Janet.

Janet’s Story

A few years ago, I worked on a legacy system that had lots of bugs already logged against it before agile was introduced. One of the developers was adamant that he would not use a defect-tracking system. He firmly believed they were a waste of time. However, the testers needed the defects logged because there were so many.
