These probably aren't going to come every day, but here's the first. Well, technically, it's the fourth, since I already posted three buzzwords earlier. In any case, a binary search is a useful method for finding the source of a problem. It works like the traditional binary search algorithm, but applied to QA and other technical troubleshooting scenarios.
Basically, you look at the variables that could be causing or contributing to the problem, then eliminate or change half of them. If the problem goes away, you know the cause was among the variables you changed. If not, it was among the ones you left alone. Keep repeating, halving the candidates each time, until you've narrowed it down to the actual cause.
A realistic use is discovering a new bug and trying to figure out when it was introduced. If you have a build every week and 20 builds total, you'd go back 10 weeks. If the problem is gone there, you go back 5 weeks; if it's still there, you go back 15. Using this method, you'd quickly find the build that introduced the problem.
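For the build case, the whole loop fits in a few lines. Here's a minimal sketch in Python, assuming a hypothetical test_build() helper (not from the original post) that installs a build and reports whether the bug shows up:

def find_first_bad_build(builds, test_build):
    # builds is ordered oldest to newest; we assume the oldest build is good
    # and the newest is bad. test_build(build) returns True if the bug appears.
    low, high = 0, len(builds) - 1   # low: last known good, high: first known bad
    while high - low > 1:
        mid = (low + high) // 2
        if test_build(builds[mid]):  # bug present: introduced at or before mid
            high = mid
        else:                        # bug absent: introduced after mid
            low = mid
    return builds[high]              # the build that introduced the bug

With 20 builds, that loop runs at most five times, instead of the twenty installs a linear sweep could take.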
Another use would be a document that crashes a printer driver. Remove half the objects on the page and print. If it still crashes, one of the remaining objects is the culprit; if it doesn't, the culprit was among the objects you removed, so restore those and remove the other half instead. Either way, you now have a document with half the objects that still crashes. Repeat, and before long you'll have a single-object document that reproduces the crash. Attach that document (and the original) to your bug report, and you'll keep your developers happy.
Saturday, February 21, 2009
Firefox using tons of memory
I have to admit, I love tabbed browsing. If I have 7 tabs open, that's low for me. I usually have at least 15 tabs open, sometimes up to 30. When I see an interesting link, I'll open it in a new tab to look at later. That way, I don't interrupt my train of thought.
The problem is that Firefox was using tons of memory. I was usually seeing memory usage near 600 MiB. This may not seem like much, but when you only have 1 GiB of RAM and want to use a Solaris VM at the same time, it tends to make your system come to a crawl because of virtual memory swapping.
So, I did a little research and found a fix. Now, the VM Size in Task Manager for Firefox is around 140 MiB, which is much more reasonable for me. The problem is that, by default, Firefox keeps a history of pre-rendered pages for each tab. With 1 GiB of physical memory, it defaults to 8 pages per tab. Multiply that by 15 tabs, and it's keeping 120 pre-rendered pages in RAM. Add lots of graphics to those pages, and it starts adding up very quickly.
Fortunately, Firefox is very customizable, so it's easy to fix. This page explains the details (that's a great site for Firefox tweaks by the way), but the short solution is this:
1) Type about:config in the Firefox address bar and hit enter.
2) Find browser.sessionhistory.max_total_viewers and change the value to 0.
3) Restart Firefox.
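If you set up profiles often and don't want to click through about:config each time, the same preference can be pinned in a user.js file in your Firefox profile folder (a standard Firefox mechanism; the file may not exist yet, in which case just create it):

// user.js in your Firefox profile folder, read at every startup
user_pref("browser.sessionhistory.max_total_viewers", 0);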
Before doing this, check the memory usage of your system and Firefox in Task Manager. If your Commit Charge is less than your physical RAM, this tweak won't improve system performance. In fact, it will make going back and forth in the browser history (using the arrows at the top) a little slower, since Firefox will have to re-render each page every time you go back.
However, if your Commit Charge is greater than your physical RAM, it might help. Sure, you'll take a performance hit when Firefox has to re-render every page, but the hit from swapping to virtual memory is much worse. To confirm that Firefox is eating your RAM, go to View/Select Columns in Task Manager and enable Virtual Memory Size. Then, see how much virtual memory Firefox is using. If it's more than 150 MiB or so, this tweak is worth trying.
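If you'd rather script the check than eyeball Task Manager, here's a rough cross-check in Python using the psutil package (my sketch, not part of the original tweak; the numbers won't match Task Manager exactly):

import psutil

# System-wide memory, roughly what Task Manager's Performance tab shows
vm = psutil.virtual_memory()
print(f"Physical RAM: {vm.total / 2**20:.0f} MiB, in use: {vm.percent}%")

# Per-process virtual memory size for anything that looks like Firefox
for p in psutil.process_iter(["name", "memory_info"]):
    name = p.info["name"] or ""
    if "firefox" in name.lower():
        print(f"{name}: VM size {p.info['memory_info'].vms / 2**20:.0f} MiB")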
I still need more memory, but at least I can leave my browser open when I do other things now.
Friday, February 13, 2009
Buzzwords, Part I
I've heard plenty of buzzwords in my years of SQA, and while the basic ideas behind them make sense, too many people misuse or overuse the terms. I'm only including a few for now.
Unit Testing - This term is misused far too often. I've heard many QA testers talk about running unit tests when they're really doing black box testing. True unit testing is done in a development environment, using development code.
Unit tests are important, but the problem is that developers often write their own unit tests. If they make assumptions in the code, such as the inputs being fed to a function, odds are they'll make the same assumptions when coding the unit tests.
Effective unit tests should be written by a second developer or, ideally, an SQA engineer with enough development knowledge to read the unit of code under test. Even if QA can't write the test, somebody from QA should at least review the specs and suggest what the unit test should feed to the code.
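As a hypothetical illustration of what that second pair of eyes buys you (the functions here are mine, not from any real project): suppose the spec says an age field must be a non-negative integer, but the developer's code, and the developer's own test, both assume the caller never passes anything else:

import pytest

def parse_age(text):
    # Developer's code: assumes the caller always passes a sane numeric string.
    return int(text)

def test_parse_age_happy_path():
    # The developer's own unit test shares that assumption, so it passes.
    assert parse_age("42") == 42

def test_parse_age_rejects_negative():
    # Written from the spec by a second developer or SQA engineer. It fails,
    # because parse_age("-1") happily returns -1, exposing the hidden assumption.
    with pytest.raises(ValueError):
        parse_age("-1")

The failing test is the point: the person reading the spec caught what the person who wrote the code couldn't.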
Inputs and Outputs - When interviewing interns straight out of college, these words come up all the time. Asked what method of software testing they would use, I constantly heard things like, "I'd feed it different inputs and compare the output to the expected result." It sounded like I was listening to a textbook. This type of testing is important, but the inputs and expected outputs come from the product specifications, which are the same specifications the developers used to write the code in the first place. If the tester and the developer interpret the specifications the same (possibly wrong) way, bugs slip through.
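That said, spec-driven input/output testing has its place, especially at boundary values, where two people reading the same sentence often disagree. A hypothetical sketch (the discount function and its spec are invented for illustration):

import pytest

def discount(total):
    # Hypothetical spec: "orders of $100 or more get 10% off."
    return total * 0.9 if total >= 100 else total

# Input/output pairs taken straight from the spec. Note the boundary value:
# if the developer read the spec as "over $100" and wrote total > 100, only
# the (100, 90) case catches it, and only if the tester read it differently.
@pytest.mark.parametrize("total, expected", [
    (50, 50),
    (99.99, 99.99),
    (100, 90),
    (150, 135),
])
def test_discount(total, expected):
    assert discount(total) == pytest.approx(expected)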
Sometimes, you just need to poke it with a stick in random places to find the weak spots.
Black Box Monkey Testing - This is really underused, not overused, but I wanted to include it anyway. I first heard this term when reading Visual Test 6 Bible by Thomas R. Arnold II. The concept is based on the infinite monkey theorem. Using automation (preferably) or a person, you start randomly typing and clicking, looking for bugs. A few years ago, when creating automation to test printer drivers, I implemented monkey testing code. It would open the printer driver properties and start doing random things. It might randomly move the mouse around. It could click in random spots. Or, it could pick a random field on a random tab and set it to a random value. We actually found a few serious bugs this way that may never have been found otherwise. Well, they probably would have been found, but by the users, which is usually not who you want finding bugs.
Of course, you still need targeted testing, unless you have an infinite number of computers running an infinite number of test plans and an infinite number of people to go through the logs to find out what caused the BSoD. It's still a very valuable method of testing. If done properly, a monkey test would continue to work, even after your GUI is completely changed.
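For the curious, here's a minimal monkey-testing sketch in Python using the pyautogui library (my assumption; the original automation drove a printer driver dialog with a different toolkit). Seeding the random generator is what makes a crashing run reproducible:

import random
import time
import pyautogui

pyautogui.FAILSAFE = True  # slam the mouse into a screen corner to abort

def monkey_test(duration_seconds=60, seed=None):
    rng = random.Random(seed)          # a fixed seed replays the exact same run
    width, height = pyautogui.size()   # stay inside the screen
    end = time.time() + duration_seconds
    while time.time() < end:
        action = rng.choice(["move", "click", "type"])
        x, y = rng.randrange(width), rng.randrange(height)
        if action == "move":
            pyautogui.moveTo(x, y, duration=0.1)
        elif action == "click":
            pyautogui.click(x, y)
        else:
            pyautogui.typewrite(rng.choice(["0", "-1", "99999", "qa!"]))
        print(f"{action} at ({x}, {y})")  # the log you'll grep after a crash

monkey_test(duration_seconds=10, seed=42)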
Labels: black box monkey testing, buzzwords, inputs, outputs, SQA, unit testing