Crispin, Lisa - Agile Testing - A Practical Guide for Testers and Agile Teams
Database maintainability is also important. The database design needs to be flexible and usable. Every iteration might bring tasks to add or remove tables, columns, constraints, or triggers, or to do some kind of data conversion. These tasks become a bottleneck if the database design is poor or the database is cluttered with invalid data.
Lisa’s Story
A serious regression bug went undetected and caused production problems. We had a test that should have caught the bug. However, a constraint was missing from the schema used by the regression suite. Our test schemas had grown haphazardly over the years. Some had columns that no longer existed in the production schema. Some were missing various constraints, triggers, and indices. Our DBA had to manually make changes to each schema as needed for each story instead of running the same script in each schema to update it. We budgeted time over several sprints to recreate all of the test schemas so that they were identical and also matched production.
—Lisa
Plan time to evaluate the database’s impact on team velocity, and refactor it just as you do production and test code. Maintainability of all aspects of the application, test, and execution environments is more a matter of assessment and refactoring than direct testing. If your velocity is going down, is it because parts of the code are hard to work on, or is it that the database is difficult to modify?
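The fix Lisa's team made can be sketched in code: keep every schema change in one script and apply that identical script to each test schema, so none of them can drift from production. This is a minimal sketch using SQLite; the `accounts` table and the `region` column are hypothetical, and a real team would use its own migration tool against its own database engine.

```python
import sqlite3

# Hypothetical migration for this iteration's story. The same DDL is
# applied to every schema, so test schemas cannot silently diverge
# from production the way hand-edited schemas did in Lisa's story.
MIGRATION = """
ALTER TABLE accounts ADD COLUMN region TEXT;
CREATE INDEX IF NOT EXISTS idx_accounts_region ON accounts(region);
"""

def migrate(schema_paths):
    """Run the identical migration script against each schema file."""
    for path in schema_paths:
        conn = sqlite3.connect(path)
        try:
            conn.executescript(MIGRATION)
            conn.commit()
        finally:
            conn.close()
```

The design point is that the DBA runs `migrate()` once per environment rather than typing ad hoc changes into each schema, which is what let the missing constraint slip past the regression suite.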
Interoperability
Interoperability refers to the capability of diverse systems and organizations to work together and share information. Interoperability testing looks at end-to-end functionality between two or more communicating systems. These tests are done in the context of the user, whether a human or a software application, and look at functional behavior.
In agile development, interoperability testing can be done early in the development cycle. We have a working, deployable system at the end of each iteration so that we can deploy and set up testing with other systems.
Quadrant 1 includes code integration tests, which are tests between components, but enterprise systems have a whole other level of integration tests. You might find yourself integrating systems through open or proprietary interfaces. The API you develop for your system might let your users easily set up their own test framework. Easier testing for your customers makes for faster acceptance.
In Chapter 20, “Successful Delivery,” we discuss the importance of this level of testing in more detail.
In one project Janet worked on, test systems were set up at the customer’s site so that they could start to integrate them with their own systems early. Interfaces to existing systems were changed as needed and tested with each new deployment.
If the system your team works on has to work together with external systems, you may not be able to represent them all in your test environments except with stubs and drivers that simulate the behavior of the other systems or equipment. This is one situation where testing after development is complete might be unavoidable. You might have to schedule test time in a test environment shared by several teams.
Consider all of the systems with which yours needs to communicate, and make sure you plan ahead to have an appropriate environment for testing them together. You’ll also need to plan resources for testing that your application is compatible with the various operating systems, browsers, clients, servers, and hardware with which it might be used. We’ll discuss compatibility testing next.
Compatibility
The type of project you’re working on dictates how much compatibility testing is required. If you have a web application and your customers are worldwide, you will need to think about all types of browsers and operating systems. If you are delivering a custom enterprise application, you can probably reduce the amount of compatibility testing, because you might be able to dictate which versions are supported.
As each new screen is developed as part of a user interface story, it is a good idea to check its operability in all supported browsers. A simple task can be added to the story to test on all browsers.
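One way to keep that per-story task cheap is to parameterize a single check over the list of supported browsers. The sketch below is hypothetical: `SUPPORTED_BROWSERS`, `check_login_page`, and the `load_page` stub are illustrative names, and in practice the page would be loaded through a browser-automation tool such as Selenium WebDriver rather than a plain function.

```python
# Hypothetical list of browsers the team has agreed to support.
SUPPORTED_BROWSERS = ["firefox", "chrome", "safari", "edge"]

def check_login_page(browser_name, load_page):
    """Run the same screen check in one browser; return (browser, passed).

    load_page stands in for a real driver call (e.g., driver.get(url)
    in Selenium) and returns a dict describing the rendered page.
    """
    page = load_page(browser_name)
    passed = "Login" in page.get("title", "")
    return (browser_name, passed)

def check_all_browsers(load_page):
    """One story task: repeat the identical check in every supported browser."""
    return [check_login_page(b, load_page) for b in SUPPORTED_BROWSERS]
```

Because the check itself is written once, adding a newly popular browser version means extending the list, not writing a new test.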
One organization that Janet worked at had to test compatibility with reading software for the visual impaired. Although the company had no formal test lab, it had test machines available near the team area for easy access. The testers made periodic checks to make sure that new functionality was still compatible with the third-party tools. It was easy to fix problems that were discovered early during development.
Having test machines available with different operating systems or browsers or third-party applications that need to work with the system under test makes it easier for the testers to ensure compatibility with each new story or at the end of an iteration. When you start a new theme or project, think about the resources you might need to verify compatibility. If you’re starting on a brand new product, you might have to build up a test lab for it. Make sure your team gets information on your end users’ hardware, operating systems, browsers, and versions of each. If the percentage of use of a new browser version has grown large enough, it might be time to start including that version in your compatibility testing.
When you select or create functional test tools, make sure there’s an easy way to run the same script with different versions of browsers, operating systems, and hardware. For example, Lisa’s team could use the same suite of GUI regression tests on each of the servers running on Windows, Solaris, and Linux. Functional test scripts can also be used for reliability testing. Let’s look at that next.
Reliability
Software reliability refers to the ability of a system to perform and maintain its functions in routine circumstances as well as in unexpected circumstances. The system must also perform its functions with consistency and repeatability. Reliability analysis answers the question, “How long will it run before it breaks?” Some statistics used to measure reliability are:
Mean time to failure (MTTF): The average or mean time between initial operation and the first occurrence of a failure or malfunction. In other words, how long can the system run before it fails the first time?
Mean time between failures (MTBF): A statistical measure of reliability, calculated to indicate the anticipated average time between failures. The longer, the better.
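Both statistics fall out of a list of failure timestamps. As a small worked sketch (the timestamps and the time unit of hours are illustrative): with failures at hours 100, 250, and 400 after startup, MTTF is 100 hours and MTBF averages the gaps between successive failures.

```python
def mean_time_to_failure(failure_times):
    """MTTF: time from initial operation (t = 0) to the first failure.

    failure_times is a sorted list of failure timestamps, measured
    from when the system started running.
    """
    return failure_times[0]

def mean_time_between_failures(failure_times):
    """MTBF: the average gap between successive failures; longer is better."""
    gaps = [later - earlier
            for earlier, later in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Failures observed at 100, 250, and 400 hours after startup:
# MTTF is 100 hours; the gaps are 150 and 150, so MTBF is 150 hours.
```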
In traditional projects, we used to schedule weeks of reliability testing, running simulations that matched a regular day’s work. Now we should be able to deliver at the end of every iteration, so how can we schedule reliability tests?
We have automated unit and acceptance tests running on a regular basis. To do a reliability test, we simply need to use those same tests and run them over and over. Ideally, you would use statistics gathered that show daily usage, create a script that mirrors the usage, and run it on a stable build for however long your team thinks is adequate to prove stability. You can input random data into the tests to simulate production use and make sure the application doesn’t crash because of invalid inputs. Of course, you might want to mirror peak usage to make sure that it handles busy times as well.
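The approach above can be sketched as a small harness: take a test you already run, hammer it with random (sometimes invalid) inputs for a fixed duration, and count only unexpected errors as failures. Everything here is illustrative; `process_order` stands in for whatever function or test your automated suite already exercises, and a real run would last days, not milliseconds.

```python
import random
import time

def process_order(quantity):
    """Stand-in for the code an existing automated test exercises."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return quantity * 2

def reliability_run(duration_seconds, seed=0):
    """Repeat the same test with random data until the deadline passes.

    Cleanly rejected invalid input is not a failure; any other
    exception counts against reliability. Returns (runs, failures).
    """
    rng = random.Random(seed)   # seeded so a failing run is reproducible
    deadline = time.monotonic() + duration_seconds
    runs = failures = 0
    while time.monotonic() < deadline:
        runs += 1
        try:
            process_order(rng.randint(-5, 100))  # includes invalid inputs
        except ValueError:
            pass            # invalid input rejected cleanly: not a crash
        except Exception:
            failures += 1   # unexpected error: the system "broke"
    return runs, failures
```

Seeding the random generator is the key design choice: when a multi-day run finally does fail, you can replay the exact input sequence that broke it.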
You can create stories in each iteration to develop these scripts and extend them as new functionality is added to the application. Your acceptance tests could be very specific, such as: “Functionality X must perform 10,000 operations in a 24-hour period for a minimum of 3 days.”