In my earlier post, I called James out on his post, a fairly biased statement about EMC’s security testing, or lack thereof. I pointed out that the security warning did not warrant such an attack, and that James wasn’t necessarily wrong in his statements, just that he didn’t provide any evidence to back them up. He criticized their proactive efforts when the source material calls for a reactive effort.
Well, James replied to me in two subsequent posts. The first post endeavored to teach me about the importance of proactively testing systems for security. It wasn’t a lesson I needed, having heard of the SQL Injection attack back in the 90s as a weakness in ASP applications (or at least an attack that was fairly similar). Being aware of these issues, I’ve made a point of controlling what a user can do in interfaces.
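To illustrate the kind of interface control I mean, here is a minimal sketch of why parameterized queries close the SQL Injection door. This is my own illustration, not from any EMC product; the table and values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable: splicing user input into the SQL text lets the
# attacker rewrite the query itself.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the injected OR clause matches every row
print(safe)        # no user is literally named "alice' OR '1'='1"
```

The parameterized version is also what I mean by an interface being “heavily parameterized”: the user supplies values, never query structure.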
His points are valid though, so I wanted to take time to talk about them. This is my first post in a series addressing the points he brings up. So if I don’t address something now, don’t worry, it’ll come.
Testing Due Diligence
I’ve been a Software Product Manager in the past (AnnoDoc, among others, when it was owned by Infodata Systems). It was a trying experience to get the software adequately tested to support every environment our clients needed. For AnnoDoc, we had to support every combination of:
- The last three versions of Adobe Acrobat, Standard and Professional.
- The last two versions of Internet Explorer.
- Windows 2000, 98/ME, and XP.
That was just the client. The server had to support the last two major versions of Documentum and all the related supported platforms. It was a testing mess. We were able to deliver a product that had minimal environment-specific issues. Most issues were multi-platform in nature.
We didn’t have to test security heavily. Our server component had to have an authenticated session passed to it from the Documentum client. From there, the actions that the user could take could only impact annotation objects belonging to the named user. It was fairly straightforward and heavily parameterized from an interface perspective.
Testing for the Unexpected
One of the responsibilities of vendors that provide systems is to make the system secure and safe to use. This goes beyond authentication and authorization, two related yet different concepts. This covers securing against unexpected acts by the user, both accidental and malicious.
Now, take a similar grid of supported platforms from above and add functionality and a complex user interface. Security becomes more of a risk because the interface is more empowered and has more places to be breached. Testing this through scripts and manually is time-consuming, and no sure thing. There are people I know who can crash an immature system in 5 minutes if they try hard. Those people are typically not testers, as their skill set quickly moves them up and out of any testing role.
With testers of average skill, how do you test for security flaws in the system? You need to use automated applications. The need is in direct correlation to the number of interfaces and their complexity. If the system is simple, you can test manually and keep things under control. As it grows in size and complexity, the potential for critical holes increases as well.
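As a rough illustration of what automation adds, here is a minimal fuzzing sketch. This is my own example, not a tool the post names: hammer an input handler with thousands of random strings and assert it never throws and never returns an out-of-contract value. `parse_quantity` is a hypothetical validator.

```python
import random
import string

def parse_quantity(text):
    """Hypothetical input handler: returns an int in 1..1000, or None if invalid."""
    text = text.strip()
    if text.isdigit():
        n = int(text)
        return n if 0 < n <= 1000 else None
    return None

random.seed(42)  # reproducible fuzz run
alphabet = string.printable
for _ in range(10_000):
    fuzz = "".join(random.choice(alphabet) for _ in range(random.randint(0, 40)))
    # The contract: any input yields None or a valid int, never an exception.
    result = parse_quantity(fuzz)
    assert result is None or (isinstance(result, int) and 0 < result <= 1000)
print("10,000 fuzz cases held the contract")
```

Ten thousand random inputs take a fraction of a second; no manual tester covers that breadth, which is the whole argument for automated testing as interfaces multiply.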
Another reason to control what the user does? Simple. Take a successful system and empower the user to run queries willy-nilly. They may eventually ask something that brings the system to its knees trying to put the answer together. While no security breach has taken place, the system is unusable until that query completes, if it ever does.
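One common defense, sketched below with SQLite purely for illustration (the post doesn’t prescribe a mechanism), is to put a budget on ad-hoc queries so a runaway one is aborted instead of monopolizing the system. SQLite’s progress handler fires every N virtual-machine instructions; returning a nonzero value cancels the running statement.

```python
import sqlite3

MAX_TICKS = 5  # hypothetical budget; tune for your workload

def run_with_budget(conn, sql):
    ticks = {"n": 0}

    def watchdog():
        ticks["n"] += 1
        # Nonzero return aborts the statement with OperationalError.
        return 1 if ticks["n"] > MAX_TICKS else 0

    conn.set_progress_handler(watchdog, 10_000)  # check every 10k VM ops
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.set_progress_handler(None, 0)  # remove the guard

conn = sqlite3.connect(":memory:")
# A deliberately expensive query: count a million generated rows.
expensive = """
WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM c WHERE x < 1000000)
SELECT count(*) FROM c
"""
try:
    run_with_budget(conn, expensive)
    print("completed within budget")
except sqlite3.OperationalError:
    print("query aborted: exceeded budget")
```

The same idea shows up in real systems as statement timeouts or query governors; the point is that the system, not the user, decides how much work one query may consume.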
Evidence, Present or Absent?
In his second post, James points out:
If you look for “X” and don’t find it, does that prove that there is no “X”? No.
James is quite correct. Earlier, I called EMC’s reaction to the vulnerability good. The existence of the vulnerability is not proof that there is no proactive testing. The existence of several vulnerabilities, though, could be construed as reason for concern.
James made the accusation, so it is up to him to back it up. If he had started by asking how they did it, and he got answers that danced around the topic, that would have been some evidence. However, asking the question and not getting an answer in public, voluntary forums like blogs is not evidence.
I can think of three reasons why James may not be getting the answers he seeks. One is that he alienates some people with his approach. His approach of Incite for Insight can work with some people, but it also shuts people down, and once shut down, there is almost no starting up. I obviously do not get enough abuse in my normal everyday life, as I keep coming back for more.
The second is that maybe the people he asks don’t know the answers to the questions he is asking. Take Craig Randall as an example. He is a nice guy and I’ve enjoyed chatting with him over the past year. He is an architect of the Documentum platform. Knowing what little I do about product development at EMC, it would not surprise me if he didn’t know what tools the QA department used to test the software.
As a product manager, I needed to know as I was responsible for QA. However, the actual designers and developers didn’t need to know. They needed to be aware of the issues to design against them, and to fix them when found.
Of course, there is the ever-present third reason: they’ve been instructed not to talk about such things unless they are in role A, B, or C. I have no idea. In Craig’s case, option one takes precedence over the other two. Craig blogs because he enjoys sharing the new things he learns, and many of his posts show the enthusiasm of the writer. He may know the answers James seeks and may be allowed to share. He may simply have decided that responding to James just leads to new abuse, which is not why he blogs.
I’m leaving the SOA, DFS, and DQL vs. SQL topics for subsequent posts. This post is long enough…