
Quality Assurance

Thursday, July 15, 2010

Selenium

Web Application Testing System

Selenium IDE is a Firefox add-on that records clicks, typing, and other actions to make a test, which you can play back in the browser.

Selenium Remote Control (RC) runs your tests in multiple browsers and platforms. Tweak your tests in your preferred language.

Selenium Grid extends Selenium RC to distribute your tests across multiple servers, saving you time by running tests in parallel.
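
The RC workflow looks much the same in any supported client language. Below is a minimal sketch using the old Selenium RC Python client; the server address, browser string, site URL, and locators are all assumptions for illustration, and an RC server is assumed to already be running on port 4444.

    # Minimal Selenium RC session (legacy Python client).
    # All page names and locators below are illustrative assumptions.
    from selenium import selenium

    sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
    sel.start()                           # launch the browser session
    sel.open("/login")                    # navigate to a page
    sel.type("id=username", "testuser")   # type into a field by locator
    sel.click("id=submit")                # click a button
    sel.wait_for_page_to_load("30000")    # wait up to 30 s (timeout in ms)
    sel.stop()                            # close the browser session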

Podcast: Using Selenium for application testing

Tutorial: Introducing Selenium IDE, an open source automation testing tool

Tutorial: Installing and running Selenium-RC in Perl

Monday, July 12, 2010

Recommendations

How to write a Recommendation:
1. Recommendations should always be positive!
2. Always use the name of the person you are recommending.
3. Describe how long you have known the person and how long you have worked with him/her, and list the duties of the position he/she held.
4. Point out the best qualities the person has shown at work, based on real events (analytical skills, diligence, accuracy, speed, independent problem solving, good communication, ability to learn quickly, professional manner, productive teamwork, willingness to admit mistakes, work on improving results, and so on).
5. Omit weaknesses. If you cannot think of anything positive to write, it is better to decline writing the recommendation.
6. Stay within the bounds of the topic. A recommendation is not an essay about the person; keep it to one or two pages at most.
7. Even though age, marital status, membership in particular groups, religion, and the like may be very important, do not comment on them.
8. Leave your contact information if you do not mind answering any questions the employer may have.
9. Proofread carefully. A recommendation represents not only the person recommended but you as well. :)

Thursday, July 8, 2010

Daily/Weekly Test Report

Formulating test status reports based on daily status criteria.

Your management or the customer is asking for some very standard test status reports, so the good news is that this is relatively straightforward.

Test execution: The most important metrics here are total test cases, test cases executed, test cases remaining, test execution rate, and the "glidepath" execution rate. Total test cases is, obviously, the total number of cases you must execute during this project. Test cases executed is the sum of cases that have been executed (pass or fail, but NOT including "blocked" cases). Test cases remaining should be the sum of cases not executed plus cases blocked, since this metric represents all the work remaining. Test execution rate is the average number of cases your team is executing per day. Finally, the glidepath is the rate of execution your team needs to maintain in order to complete the project on time.
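
A minimal sketch of that bookkeeping in Python; all the counts are made-up assumptions for illustration:

    # Hypothetical tallies for illustration only.
    total_cases = 500
    passed, failed, blocked = 180, 20, 30

    executed  = passed + failed          # blocked cases do NOT count as executed
    remaining = total_cases - executed   # all work left, including blocked cases
    print(executed, remaining)           # 200 300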

To calculate the execution rate, take the total number of cases executed (passed or failed, but not blocked) and divide by the number of working days. Note that you also have to communicate the number of cases that failed and the time it will take to retest them. Some teams want you to report this as a separate metric; some want failed cases not to count as executed at all, and instead to be counted as remaining test cases.
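
In code, continuing the made-up counts above (the 10 elapsed working days are likewise an assumption):

    # Execution rate: executed cases (pass + fail, not blocked) per working day.
    executed = 200
    working_days_elapsed = 10
    execution_rate = executed / working_days_elapsed
    print(execution_rate)                # 20.0 cases per day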

To calculate the glidepath rate, take the total number of remaining cases divided by the number of days remaining in the project. If you have 100 test cases remaining, and 10 days left, you need to execute 10 test cases per day.
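
The same worked example in Python:

    # Glidepath rate: remaining cases per working day left in the project.
    remaining_cases = 100
    days_remaining  = 10
    glidepath_rate  = remaining_cases / days_remaining
    print(glidepath_rate)                # 10.0 cases per day needed to finish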

The critical thing to keep in mind here is what your manager or customer cares about. They want to know two things above all else: 1) are you on track to be done on time, and 2) is the product in good shape? The way they answer the on-track question is simple -- is your average test case execution rate (the average number of cases executed per working day) higher than your glidepath rate? If yes, your project is probably green. If not, your project is yellow or red.
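
A sketch of that green-or-not check, carrying over the illustrative rates from the sketches above:

    # On-track check: actual execution rate vs. the glidepath rate.
    execution_rate = 20.0   # assumed average cases executed per working day
    glidepath_rate = 10.0   # cases per day needed to finish on time
    if execution_rate > glidepath_rate:
        print("green: on track")
    else:
        print("yellow/red: schedule risk")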

The next metric here is the test case failure rate. This is a strong indicator of product quality, and it also affects your test execution rate. If you have a 50% failure rate, that means 1 out of 2 test cases is failing, which indicates terrible requirements definition or engineering quality. If your failure rate is significantly lower (10%, 5%), it probably means the requirements were well defined and the code quality is high. It could also mean that your test team is overlooking defects -- as project lead, you need to keep your finger on the pulse of your team's performance, and be able (and willing) to tell management if you think your team is overlooking defects in the project. Failure rate is also helpful in calculating execution rates and glidepaths. As I said, some teams want all tests executed counted in the execution rate, some only want passes (personally, as a manager I want to see the PASS rate as well as the executed rate). Calculating the failure rate is also simple -- how many test cases in 100 are failing?
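
The arithmetic, with the same assumed counts:

    # Failure rate: the share of executed cases that failed.
    executed = 200
    failed   = 20
    failure_rate = 100.0 * failed / executed
    print("%.0f%% of executed cases are failing" % failure_rate)   # 10%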

Finally, bug rates. The most important bug metric right now is probably how many defects are generated per test case. So if you have executed 100 test cases, how many defects have resulted from them? Coupled with the count of remaining tests, it's a good way to get a decent idea of how many defects are still 'lurking' in the product.
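
A rough sketch of that projection; the counts, and the assumption that the remaining tests will surface defects at the same rate, are purely illustrative:

    # Defects per executed test case, projected onto the remaining work.
    executed_cases  = 100
    defects_found   = 15
    remaining_cases = 300

    defects_per_case  = defects_found / executed_cases
    projected_defects = defects_per_case * remaining_cases
    print(defects_per_case, projected_defects)   # 0.15 45.0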

Metrics are a funny thing. They can be used for good or warped for bad. It's very important to focus on the metrics themselves and to keep that separate from the discussion of what the metrics might mean. Above all, if the metrics indicate a schedule or product risk, don't try to paint an artificially pretty picture. Be forthcoming with the information and help management through the repercussions.

Answer copied from http://searchsoftwarequality.techtarget.com/

Expert answer by John Overbaugh, Director of Quality Assurance, Medicity, Inc.