Friday, April 5, 2019
Things Have Been Crazy
It's been a while since I've updated this blog, and I'm sorry for that. I'm trying to get back to writing and should start posting more content again.
Tuesday, January 6, 2015
The Number of My Followers Has Doubled
In honor of the number of followers doubling since the last time I checked, I thought I'd start writing again. This will be a short post, but there will be more. I can't believe it's been over 2 years since I wrote anything on this blog.
So, to my followers, rest assured that I'll be posting to this blog again. I apologize this took so long, and that apology goes out to both of my followers. :)
Friday, August 24, 2012
A Perception of Insecurity
Security is an important part of any product. Whether it's a network attached drive, a router, a computer, or even a flash drive, if a customer's data can be accessed by an unauthorized user, it's bad. I'm not a security expert, but I have found my share of security flaws in products I've tested. Some were really bad (giving access to data through the Internet). Some weren't as bad (giving access to limited information on the LAN). I've pushed for them all to be fixed.
However, there are two things that always wind up triggering "battles" between product management and SQA:
1) Exposing applications unnecessarily. An example would be a server listening on port 80 when it could easily listen on a random port, such as 37272. I'll enter a bug stating that it should be moved, and invariably it comes back as "security through obscurity isn't secure." I completely agree with that and have even blogged about it in the past (see http://artofsqa.blogspot.com/2011/03/insecurity-through-perspicuity.html).
However, I believe in insecurity through visibility (yes, I've used "perspicuity" in the past, but "visibility" is a much more common word, easier to type and spell, and the connotations actually fit the phrase better). I'm not saying that "hiding" the app makes it more secure, but exposing it means that if there is a security flaw, the "bad guys" are more likely to find it before you do. After all, most port scanners that hit random IP addresses only scan a limited number of ports. Why have your app hit by port scanners if it doesn't have to?
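To make that concrete, here's a rough sketch of the kind of limited scan I mean, using nothing but Python's standard socket module. The target address and the port list are made up for illustration; the point is simply that a scanner probing only well-known ports never even touches something like 37272.

    import socket

    # A typical "quick" scan list: a handful of well-known ports.
    # A service moved to an uncommon port (e.g., 37272) is never even probed.
    COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 445, 3389]

    def quick_scan(host, ports=COMMON_PORTS, timeout=0.5):
        """Return the subset of `ports` that accept a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        # Hypothetical target address; a web UI listening on 37272 would not show up here.
        print(quick_scan("192.0.2.10"))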
2) Sending data in a way that appears to be insecure, even if it's not. For example, a product sends the source code for PHP files when you do a GET without logging in. No customer data is exposed, and the product has open source firmware, so anybody can get the source code anyway. There is no real risk, but it is still bad from a marketing perspective.
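If you want to check for that kind of source disclosure yourself, a sketch like this is enough, using only Python's standard urllib. The device address and page name are hypothetical; it just fetches a page without any login and looks for raw PHP in the body.

    import urllib.request

    def leaks_php_source(url, timeout=5):
        """Fetch `url` without logging in and look for raw PHP in the response."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        # If the server had executed the page, the '<?php' tag would not survive
        # into the response; seeing it means we were handed the source itself.
        return "<?php" in body

    if __name__ == "__main__":
        # Hypothetical device URL, for illustration only.
        print(leaks_php_source("http://192.0.2.10/admin/status.php"))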
A perception of insecurity is just as bad as a flaw in security. Because of the pride factor in being the first to disclose a vulnerability, some security researchers do not take into account the exploitability of the vulnerability and are quick to disclose. In the example above, the disclosure might state that pages are accessible without logging in. A disclosure like this means somebody researching your product might shy away from purchasing it because it's "insecure."
Thursday, July 28, 2011
Should I Enter a Bug?
This is a question I've come across many times. When should you write a bug report, and when should you just send an email?
Generally speaking, it's best to be safe and enter a bug, since the bug tracking system will keep a history of any back-and-forth conversation that anybody can view. You just don't get that with email. I've seen many justifications for sending an email instead of entering a bug report, but in most cases, it still makes sense to enter a bug report.
Some common reasons for sending an email are:
1) "I don't have time to enter a bug." I'm sorry, but this is a pretty weak reason. If you have time to send an email, you have time to enter a bug. I've gotten long, drawn-out emails with detailed bug descriptions. These emails must have taken 10 to 15 minutes to write. It would have been just as easy to enter the same information in the bug tracker.
Granted, you really may be in a hurry and don't have time to do a "proper" bug report. You may not have time to confirm it can be reproduced. You may not have time to figure out who should be in the notification list. However, entering a bug with as much detail as you have and saying, "I'll clean this up tomorrow" is better than not entering a bug at all (as long as you really clean it up tomorrow). At least it will be tracked, and all stakeholders will be able to see it if they look.
Entering a quick bug will take no more than one extra minute when compared to a quick email. If you see a bug, enter it. Period.
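If your tracker has any kind of API, the "quick bug" can even be scripted. Here's a rough sketch against a made-up REST endpoint; the URL, field names, and token are all hypothetical, so treat it as an illustration of how little friction there really is, not as the API of any particular tracker.

    import json
    import urllib.request

    def file_quick_bug(summary, details, tracker_url, token):
        """POST a bare-bones bug report; it can be cleaned up tomorrow."""
        payload = json.dumps({"summary": summary, "description": details}).encode("utf-8")
        req = urllib.request.Request(
            tracker_url,
            data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer " + token},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # e.g., the new bug's ID

    if __name__ == "__main__":
        # Hypothetical endpoint and token, for illustration only.
        print(file_quick_bug(
            "Client drops connection on one laptop",
            "Saw it twice today; will add repro steps and logs tomorrow.",
            "https://tracker.example.com/api/issues",
            "REPLACE_ME",
        ))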
2) "I'm not sure if it's a duplicate." For the most part, you should use the search function of the bug tracker. If you're short on time, see #1. If you wind up entering a duplicate, who cares? As long as you made an effort to find the duplicate, it's no big deal to have your bug closed in the next bug triage.
The only time this might be legitimate is if you're almost positive you've seen the bug before and can't find it in the bug tracker. Then, a quick, "I think this is a duplicate, do you know the bug number?" might be in order. However, if nobody knows the number, or you don't get a response, enter the bug. Again, a duplicate bug every once in a while is no big deal.
3) "I don't like the bug tracker and email is just easier." There are three options: learn to live with it, find an alternative bug tracker, or find an alternative job. I've worked with many different bug trackers. Some were great. Some were horrible. I lived with the horrible trackers until they were replaced. Don't let the tool you use prevent you from doing your job.
4) "This is a serious bug, and we're going live tomorrow." It's still no excuse. Enter a quick bug (see #1). If you want to be absolutely sure the right people are aware of it, send an email saying, "I just found a show stopper. See bug #X!"
5) "I'm not sure if it's a bug." This is common. You don't want to enter a bunch of "as designed" bugs. On the other hand, you don't want bugs falling into an email void. First, find the specs and see if it is a bug. If so, enter it. If the specs aren't clear, you might email the project manager or some other engineers with a brief description of the problem. If you don't get an answer right away, enter the bug anyway.
However, this really shouldn't be a problem for your projects. A test engineer really should know the specs as well as the developers. If you don't know how the product is supposed to work, how can you test it? Of course, if you are looking at another project and stumble across what might be a bug, this is when you'll wind up having to send an email (or just hunting down the Project Manager and asking).
Thursday, March 10, 2011
Insecurity through Perspicuity
I was having a discussion with a colleague about security. In particular, we were discussing whether or not we should use the standard HTTP ports for an Internet-facing application. I thought we shouldn't use them because it would increase the risk unnecessarily.
When I mentioned this, his response was, "you can't rely on security through obscurity."
I agreed. However, I still insisted that the ports should be changed. My logic was that using standard ports exposes your service to more people, which is insecurity through perspicuity and an unnecessary increase in the risk of being attacked.
One example that is sometimes given as an argument against security through obscurity is that you can't just hide your front door with bushes, leave the door unlocked, and expect nobody to break in. This is true, but it doesn't mean that just because you think your house is secure, you should put a note outside that says, "Be back in a week, I put my $50,000 cash in the safe."
By exposing standard ports, you're guaranteeing that every port scanner, even those configured for a minimum scan, will find your server. You may think it's secure, but when a 0-day exploit is discovered, there is a window between when it's disclosed and when the patch can be applied to your server, and during that window you are vulnerable. No matter how closely you track security vulnerabilities, 0-days are always a risk.
If you're using standard ports, the number of people who are aware that you're running a vulnerable service is going to be many times higher than if you were running on non-standard ports. That means your risk of being attacked before you can patch is also many times higher.
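For what it's worth, moving a service off the standard port is usually a one-line change. Here's a rough sketch using Python's built-in http.server purely as a stand-in for the real service; the port number is arbitrary and the point is only that the bind address changes, not the code.

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Stand-in for the real service: the code is the same either way,
    # only the port it binds to changes. 37272 is an arbitrary high port;
    # a minimum scan of well-known ports walks right past it.
    PORT = 37272

    if __name__ == "__main__":
        server = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
        print("Listening on port %d instead of 80" % PORT)
        server.serve_forever()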
Security in Consumer Network Products
Security is important for all software and hardware network products. However, when a security vulnerability is found in a consumer-grade product, all too often, I hear the argument that since the only attack vector is from the consumer's LAN, the priority to fix the vulnerability is low. After all, if users have intruders on their LAN, the vulnerability is the least of their worries.
Although this may be true to a certain extent, I still would argue against this. There are some reasons to investigate and address security vulnerabilities that may not be a real-world threat to consumers' data.
First, somebody could have a misconfigured wireless router. An attacker could get on their LAN without their knowledge and wind up accessing their data stored on a NAS because of a bug that "nobody would realistically exploit." To be honest, this falls under the "they're on the LAN, so they have bigger problems" umbrella, but it is always an additional attack vector to consider.
Second, when the public finds out about an obviously exploitable security hole, especially one that is easily fixed, it makes the product and company look bad. People start to wonder what other problems are hidden in your product if you let something that easy to find and fix slip out.
Granted, not every security bug can or should be fixed; otherwise, you'd never release the product. For example, a potential Denial of Service (DoS) attack on a network device may not be a problem if that device is going to be behind a firewall. Even if an attacker gets on the LAN, are they really going to try to crash your media player?
However, regardless of the likelihood of a vulnerability being exploited, each known vulnerability should still be investigated to see how it might affect your product. I've seen cases where a DoS vulnerability was being triggered by a third-party device that was unintentionally sending malformed packets. We got reports from users that their device was crashing for no apparent reason. The logs didn't help, and we were unable to reproduce it. It was only after running a security scanner against the product that we found the vulnerability and were able to tie it to the crashes reported by users.
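You don't need a commercial scanner to take a first pass at this kind of check. Here's a rough sketch of the idea in Python: hammer the service with short bursts of random bytes and watch for it to stop answering. The target address, port, and payload sizes are made up, and a real scanner is far more systematic, but even this crude approach can shake out the kind of crash described above.

    import os
    import socket

    def poke_with_garbage(host, port, attempts=50, timeout=2):
        """Send short bursts of random bytes; report whether the service keeps answering."""
        for i in range(attempts):
            payload = os.urandom(1 + (i % 64))  # random, often truncated, "packets"
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    s.sendall(payload)
            except OSError as exc:
                # A refused or timed-out connection after earlier successes is the
                # symptom from the field: the service has fallen over.
                print("attempt %d: service stopped responding (%s)" % (i, exc))
                return False
        return True

    if __name__ == "__main__":
        # Hypothetical device under test.
        print(poke_with_garbage("192.0.2.10", 8200))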
Of course, the stakes are raised when dealing with SMB or enterprise products. Even with consumer products, once you expose a single port to the WAN, security becomes critical, not just important.
Wednesday, November 24, 2010
SQA and the Scientific Method
My son has been learning about the scientific method in his science class. As I've been helping him with his homework, I realized that I use the scientific method when I find a bug.
For example, suppose you're testing remote access software installed on a Windows client. You're noticing that on one system, it keeps losing connection to the server. This is something I ran into once. Now, if you're a test monkey, you'll write up a bug saying, "Brokey brokey, no worky" and let development figure it out.
However, if you're reading this blog, you like making it easy for developers. So, you'll wind up asking yourself, "Why does this one system have a problem with disconnecting from the server?" At this point, you've just started approaching this from a scientific point of view.
Next, you'll do some research and eliminate variables. What's unique about the one system with the problem? What could cause the connection to drop? Is it the network it's connected to? Is it a bad cable? Does it just not like me?
Once you've decided what could be causing the problem, you'll start with the first hypothesis. You'll want the simplest and easiest one to test, so maybe it's the network. You'll test the hypothesis by moving the "bad" computer to the same network as the "good" computer. In fact, you could even use the same network cable that the "good" computer used. If it still fails, you've eliminated three variables (the network, the cable, and the port on the switch). If it works, you've narrowed the cause down to those three.
If it still fails, it's back to the hypothesis and experiment loop. You'll want to keep eliminating variables until you find the cause of the problem. Maybe it's faulty hardware. Maybe it's another app. Maybe it's a feature unique to the computer.
In my case, the failing system was a laptop. After some experimentation, I traced the problem to the SpeedStep feature. If I turned that off, it worked fine. I entered the bug. When a developer got it, the root cause was found in minutes. It turned out that the API used to time the 60-second keep-alive packet failed if the processor speed changed. When the app launched, the CPU usage was high, so the processor ran at full speed. Once it went idle, it slowed down, which slowed the timer down. Then it missed the keep-alive packet, and the server assumed the client had disconnected and closed the pipe.
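To show the failure mode, here's a rough sketch of a keep-alive loop in Python (not the actual code from that product). It times the wait with time.monotonic(), which ticks at a steady rate no matter what the CPU frequency does; the bug above amounted to using a tick source that slowed down with the processor, so the 60-second wait stretched past the server's window. The server address and message format are made up for illustration.

    import socket
    import time

    KEEPALIVE_INTERVAL = 60  # seconds between client keep-alives

    def keepalive_loop(host, port):
        """Send a keep-alive every KEEPALIVE_INTERVAL seconds over one connection."""
        with socket.create_connection((host, port)) as s:
            next_send = time.monotonic()
            while True:
                now = time.monotonic()
                if now >= next_send:
                    # time.monotonic() advances at wall-clock rate, so this deadline
                    # doesn't drift when the CPU throttles down. The buggy client
                    # timed this wait off a tick source that slowed with the
                    # processor, missed the server's window, and got disconnected.
                    s.sendall(b"KEEPALIVE\n")
                    next_send = now + KEEPALIVE_INTERVAL
                time.sleep(0.5)

    if __name__ == "__main__":
        # Hypothetical server address used only for illustration.
        keepalive_loop("192.0.2.10", 9000)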
A good bug report starts with a question, then some research. After that, it's a cycle of coming up with a hypothesis, testing it, and repeating until you can prove a hypothesis and find the cause. Finally, you report the findings to a developer through a bug report and, hopefully, get the bug fixed.