Anutthara - MSFT

I am a tester at Microsoft and spend my testing time alternating between manual and automated testing. Automation is mostly for regression testing, with a bit of randomization thrown in to hit more code paths. Most of the manual testing I do is exploratory, and I find the experience can still use some improvement.

Here is my wish list to improve my manual testing experience:

1. It is so hard to file a bug with the exact steps that hit the issue. I need to open up the bug tracking application, remember all that I did, record all the info, take screenshots, attach them, and then recheck that all the relevant info has been provided. Somehow, this is too tedious when I have dozens of bugs to file on new bits.

2. The bugs that I file sometimes are not reproducible on the developer's machine, due to differences in configuration/environment. Sometimes this is mitigated by recording the exact environment in which the bug was filed, but most times it is hard to know what exactly I need to record. Is it the CPU usage? The environment variables? Something else?

3. I know this is a hard one, even contentious perhaps, but most times it would be good to start out with leads on potentially bug-ridden areas. Code coverage may be one indication, but on its own it is rarely a good indicator. If there were an incredible metric that could take into account the complexity of the code, the dependencies in the code, the quality (!) of the code, and the coverage, and churn out a number for the probability of hitting an issue in a piece of code, that would be cool!

We are working on solving some of these problems here. And I would love to hear from other manual testers out there, especially outside MSFT, about their pet peeves in manual testing.

Thanks!




Re: Software Testing Discussion Manual Testing woes

Khurram Arif

All the problems you point out occur almost daily at every tester's workplace. I'm currently working in Pakistan and would be glad if some experiences are shared on this forum. Looking forward to getting some useful hints from the experienced ones in our field.




Re: Software Testing Discussion Manual Testing woes

Paul Jordan

1. & 2. In exploratory testing you do need some structure, otherwise you get caught out in the way you describe. Before you start exploratory testing, it's advisable to work out a) which areas you're going to test, and to keep b) short notes for each action detailing which screens you've selected, which values you've entered, etc. This way you stand a chance of recreating the faults/issues you find.

3. There is a method of measuring the complexity of code, cyclomatic complexity (http://en.wikipedia.org/wiki/Cyclomatic_complexity), although this doesn't by itself determine the likelihood of finding bugs/issues in the functionality. Control-flow graphs are used to determine the value.
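In case it helps make the metric concrete: the Wikipedia article above gives the formula M = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components in the control-flow graph. Here's a minimal sketch in Python (my choice of language, not anything from this thread):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity: M = E - N + 2P.

    edges:      number of edges in the control-flow graph
    nodes:      number of nodes in the control-flow graph
    components: number of connected components (1 for one function)
    """
    return edges - nodes + 2 * components

# A single if/else: 4 nodes (decision, then-arm, else-arm, exit)
# and 4 edges, giving complexity 2 (one decision point + 1).
print(cyclomatic_complexity(edges=4, nodes=4))  # -> 2
```

In practice you wouldn't count edges by hand; a static analysis tool computes this per function, and functions above some threshold get flagged for extra testing attention.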





Re: Software Testing Discussion Manual Testing woes

Herb

It's hard to give more relevant advice without knowing what application or component you're testing, but there are some general comments I can make that may help:

For point 1:

I would concentrate on one bug at a time. If the program has so many errors that you can literally find dozens of bugs doing some routine operation, then I would start with the first bug you see. Log it. Then move on to the next one. I wouldn't recommend finding 12 bugs, opening tickets for all of them after the fact, and then trying to remember what you did.

Time is critical. Start logging the bug as soon as you find it. Write down (in the bug tracking software or just in Notepad) what you were doing at the time. Collect the log files, screenshots, or whatever else you need to find the root cause or assist the developer. After you've done this, go on to the next error.

For point 2:

Making sure you note your environment is critical when logging bugs. There will always be cases where things aren't reproducible on certain machines, because libraries, other modules, and even the operating system type or version could be different. Note the environment in all of your bugs. Every application depends on its environment and is affected by it.

If you don't know what to record about the environment that is most relevant to the error you are seeing, learn more about the product or module. A tester should be a product expert. If you know certain functions put a heavy load on the web server, and the server is failing or slow, note that. If you know that a search algorithm takes a lot of processor time, note that. Know what is happening in the product.

For point 3:

This is a wish that all of us testers have: some magical algorithm that will tell us where the bugs are. This isn't likely to ever be created. However, there are certain heuristics you can use to get an idea of where more errors are. Some common ones are:

1. Areas that have lots of errors are more likely to have errors in future revisions

2. Areas that are heavily used tend to produce more field cases just because they are used a lot more than other areas

3. Areas that are complex tend to have more errors

4. Areas that have many upstream dependencies are prone to more errors (by upstream dependency I mean that they rely on a lot of other components and maybe external libraries); if any of those other components change, this area may break

5. Areas that junior developers have been assigned to tend to have more errors

These are a few examples of how to get a feel for where more errors will occur. They are not hard and fast rules, but they are a workable way to get a grasp of where you can find the most errors; a rough sketch of how they might be combined into a ranking follows below.
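No standard tool does this, as far as I know, but just to make the idea concrete, here is a hypothetical sketch in Python of folding those heuristics into a single score. Every field name and weight below is invented for illustration and would need tuning against your own bug history:

```python
# Hypothetical risk ranking combining the heuristics above.
# Every field name and weight here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    past_bug_count: int    # heuristic 1: history of errors
    usage_weight: float    # heuristic 2: how heavily used, 0..1
    complexity: int        # heuristic 3: e.g. cyclomatic complexity
    dependency_count: int  # heuristic 4: upstream dependencies
    junior_owned: bool     # heuristic 5: owned by a junior developer

def risk_score(a: Area) -> float:
    return (2.0 * a.past_bug_count
            + 5.0 * a.usage_weight
            + 0.5 * a.complexity
            + 1.0 * a.dependency_count
            + (3.0 if a.junior_owned else 0.0))

areas = [
    Area("search", 7, 0.9, 24, 5, False),
    Area("settings", 1, 0.3, 6, 1, True),
]

# Spend exploratory testing time on the riskiest areas first.
for a in sorted(areas, key=risk_score, reverse=True):
    print(f"{a.name}: {risk_score(a):.1f}")
```

The point isn't the particular weights; it's that even a crude ranking like this gives you a defensible order in which to spend your exploratory testing time.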

I hope this helps you out!

-Herbjeet Bal






Re: Software Testing Discussion Manual Testing woes

James Rodrigues

Some ideas, though without more details it's hard to say which approach is best.

1. If the types of errors you are finding are crashes, then perhaps a debugger call stack is enough to show the problem condition. If you are able to jump into the debugger and track the failure, this not only helps you log a bug with a deeper description of the code but also has the added benefit of giving you insight into the inner workings of the code. Maybe an API is being misused; if so, a search through the source code may find other places where this is also happening. Find a root cause and eliminate it.

2. I have seen folks utilize recorders so that when they are testing there is a background application logging everything they are doing. At the point they see a failure condition they can dump the log and create a playback. I have mixed feelings about how useful this approach will be but it is something to think about. The type of recorder I am describing is usually associated with a tool that enables you to manually create automated tests. Not saying this is great but something to consider.
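To make the recorder idea concrete, here is a bare-bones sketch in Python of what rolling your own might look like. The class and its methods are invented for illustration; a real recorder tool captures input events automatically rather than relying on manual record() calls:

```python
import time
from collections import deque

class ActionRecorder:
    """Rolling buffer of recent tester actions, so the steps leading
    up to a failure can be pasted straight into a bug report."""

    def __init__(self, max_actions=200):
        self._buffer = deque(maxlen=max_actions)

    def record(self, description):
        # Timestamp every action so the sequence can be reconstructed.
        self._buffer.append((time.strftime("%H:%M:%S"), description))

    def dump(self):
        # Call this the moment a failure is observed.
        return "\n".join(f"{t}  {d}" for t, d in self._buffer)

recorder = ActionRecorder()
recorder.record("Opened Options dialog")
recorder.record("Set cache size to 0")
recorder.record("Clicked OK -> application hung")
print(recorder.dump())
```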

3. Maybe you can have more asserts added to the code, so that when you stumble upon a problem the code itself announces that it has failed. As long as the asserts are descriptive, you will be better able to describe the problem. If this becomes a good method for your application, then I would invest in adding code to the assertion mechanism to do things like pull call stacks or create Watson dumps.
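For instance, here is a minimal sketch in Python (assuming a Python codebase; for native Windows code you'd hook the same idea into your crash-dump mechanism instead). The check() helper is a name I made up:

```python
import logging
import traceback

logging.basicConfig(level=logging.DEBUG)

def check(condition, message):
    """Made-up assertion helper: on failure, log a descriptive
    message plus the current call stack, so the bug report can say
    exactly where and why the check fired."""
    if not condition:
        stack = "".join(traceback.format_stack())
        logging.error("ASSERTION FAILED: %s\n%s", message, stack)
        raise AssertionError(message)

def apply_discount(price, discount):
    check(0.0 <= discount <= 1.0,
          f"discount {discount} is outside [0, 1]")
    return price * (1 - discount)
```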

Just some initial thoughts.






Re: Software Testing Discussion Manual Testing woes

frankcao

Some additional thoughts after the comments from other people:

For 1 & 2: Debug logs may also help. The application that I test has detailed debug logs, so we can usually reconstruct what happened. If your app is a v1, you may ask your developers to add debug logs.
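If the app happens to be in Python, the standard logging module is one cheap way to get that kind of reconstructable trace; the function below is a made-up example, not anything from this thread:

```python
import logging

# Assumes a Python app with no logging yet; names are made up.
logging.basicConfig(
    filename="app_debug.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(funcName)s: %(message)s",
)

def save_document(path, contents):
    logging.debug("saving %d bytes to %r", len(contents), path)
    try:
        with open(path, "w") as f:
            f.write(contents)
    except OSError:
        # Record the full traceback before surfacing the error, so
        # the failure can be reconstructed from the log alone.
        logging.exception("save failed for %r", path)
        raise
```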

For 3: Finding bug-prone areas is an art, and there is no definitive answer. Others have mentioned factors that can indicate complexity.

For testers: (1) Once you find a bug in an area that was not well tested before, it often leads to the discovery of a whole class of bugs, so you should immediately follow up on important bugs and explore for similar ones. This is the easy part. (2) The hard part is to predict the complexity from the factors others pointed out, and to allocate test effort accordingly. Testers with lots of experience with the product can usually give good estimates.

For managers: you should look at complexity measures and predict the number of bugs. If testing does not find the predicted number of bugs, you should worry about the effectiveness of the testing. Such prediction only works for large products with many bugs, because many factors can make the bug count for smaller products look random.






Re: Software Testing Discussion Manual Testing woes

Michael Hunter - MSFT

Screen recorders can come in handy for proving that a bug actually happened and for remembering what you did to make it happen. There are several on the market, and Microsoft testers can use the one in our internal tools repository. Start it running first thing in the morning, and then leave it running all day. If file size gets to be an issue, start a new recording periodically during the day as you switch from one task to another.




Re: Software Testing Discussion Manual Testing woes

Michael Bolton

1. It is so hard to file a bug with the exact steps that hit the issue. I need to open up the bug tracking application, remember all that I did, record all the info, take screenshots, attach them, and then recheck that all the relevant info has been provided. Somehow, this is too tedious when I have dozens of bugs to file on new bits.

One place to start is to reduce your effort. Do you have to remember all that you did, or just the relevant parts? Do you need to take screenshots for every problem?

For the things that you actually do need to do, in what way could tools help you do them automatically? Have you considered video capture tools, like Camtasia, BB Test Assistant, or (better yet) Test Explorer (which captures a whole ton of stuff about your machine and its configuration)? Does your bug tracking application force you to repeat yourself needlessly?

2. The bugs that I file sometimes are not reproducible on the developer's machine, due to differences in configuration/environment. Sometimes this is mitigated by recording the exact environment in which the bug was filed, but most times it is hard to know what exactly I need to record. Is it the CPU usage? The environment variables? Something else?

See the reference to Test Explorer above.

3. I know this is a hard one, even contentious perhaps, but most times it would be good to start out with leads on potentially bug-ridden areas. Code coverage may be one indication, but on its own it is rarely a good indicator. If there were an incredible metric that could take into account the complexity of the code, the dependencies in the code, the quality (!) of the code, and the coverage, and churn out a number for the probability of hitting an issue in a piece of code, that would be cool!


Well, sure it would, but why stop there? Why not have a metric that could list and itemize every coding error, with source code file names and line numbers to point to each problem?

Instead of metrics, consider learning about general systems, critical thinking skills, test heuristics, risk analysis, and questioning skills.

---Michael B.






Re: Software Testing Discussion Manual Testing woes

Anutthara - MSFT

Thanks for all the great responses, folks. You will hear more on manual testing from me on this thread. I hope we keep this conversation going.