The Endless Security Cycle

I have been thinking about how to write this post for a while now. I had several approaches to choose from, but then I hit on the key concept: it doesn’t matter. Here is the general pattern of James’ approach to this topic.

  • James will criticize ECM security as a whole and then point to one or more issues.
  • I then attempt to explain why those key “issues” aren’t issues.
  • James will then elaborate or comment on my post in one or more follow-ups, usually explaining something that I left out of my post for one reason or another. In the case in point, I didn’t take it deep enough. While doing this, he ignores any defenses I may have made of the “issues” and invariably brings up other “issues” as well.

Rather than continue the cycle and eat up my time, I’m going to post one more time on this topic and move on for now. Some disclaimers of my own:

  1. I no longer work for a vendor, and I never worked on the development side of the house for an ECM vendor (I was a hired gun for PC DOCS in the late 90s). My job is to architect and deliver solutions that are secure, fast, and scalable. When I find a problem, I let the vendor know and work around it. When I find a cool solution that I can share contractually, you’ll see it.
  2. I am not writing a manual here. This isn’t a wiki where I am going to come back and update things; it is basically a long series of articles. Due to time and space, I don’t put everything down in detail. I understand that my readers have to take the time to read everything, and I’m sure not going to write something that runs too long, like these disclaimers already have.

That being said…

Do Your Best

James is right. I’m not a big fan of his tactics at times, but his basic message is important: security matters. You can’t always point to a return on the investment, but once security goes wrong, it is too late.

You try to test for the unexpected. That means architects banging on the system just to see what they can do. That means using test automation software when the scenarios start to get out of hand. That means testing on as many different platforms and configurations as possible.

If a product has a bug, which is what a security issue is when you get down to it, it means that the tests missed something. It doesn’t necessarily mean that the testing was poor or treated as unimportant. It means that the developers and testers weren’t all-knowing and missed something. If lots of bugs and security issues pop up, then you question the entire process and the commitment.

The bigger the system, the more bugs will be found. A committed vendor can improve its testing at a rate at least equal to the growth of the product. There will be bad releases. The question is: do they continue to be that bad, or does the vendor improve?

If nobody finds a security hole or bug in the real world, that means that nobody is using the system. Simple as that. The vendor can do all the right things, but there are so many quirks out there in the “wild” that they’ll never get them all.

When a bug was found in the “wild”, back in the day, we would identify it, fix it, and add tests to catch it in the future. We would also look for similar bugs. Usually for every bug that someone reported, we would find one or two more that we could catch and fix before anyone encountered them.
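
To make that concrete, here is a minimal sketch of the habit in test form. The validator and the bug below are hypothetical stand-ins invented for illustration, not taken from any vendor’s code; the point is simply that the reported case gets pinned down by a test, along with the neighboring cases found while fixing it.

    using NUnit.Framework;

    // Hypothetical code under test: suppose the reported bug was a folder-name
    // check that let over-long names slip through validation.
    public static class FolderNameValidator
    {
        public static bool IsValid(string name)
        {
            return !string.IsNullOrEmpty(name) && name.Length <= 255;
        }
    }

    [TestFixture]
    public class FolderNameValidatorTests
    {
        [Test]
        public void RejectsNamesOverMaxLength()
        {
            // The bug someone actually reported.
            Assert.IsFalse(FolderNameValidator.IsValid(new string('a', 256)));
        }

        [Test]
        public void RejectsEmptyNames()
        {
            // A similar bug found while fixing the first one, before anyone hit it.
            Assert.IsFalse(FolderNameValidator.IsValid(string.Empty));
        }

        [Test]
        public void AcceptsOrdinaryNames()
        {
            Assert.IsTrue(FolderNameValidator.IsValid("Quarterly Reports"));
        }
    }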

Making a secure and stable product is a process that never ends. The only products that release with no bugs or security flaws aren’t growing and are in maintenance mode.

Minor Little Detail on DFS

James needs to understand one little detail about the latest version of DFS (yes, breaking my own rule about not debating individual issues). His latest comment attacks the following feature:

The DFS SDK now includes a .NET productivity layer (consumer library) for development of .NET-based DFS consumers. The .NET productivity layer is functionally equivalent to the Java productivity layer, with the exception that a .NET client can consume DFS services only remotely (as web services).

The DFS .NET productivity layer is Common Language Specification (CLS) compliant; therefore, you are free to develop above DFS using your CLS-compliant language of choice (e.g. C#, VB.NET, Managed C++, even IronRuby, IronPython, etc.). Samples of C# DFS consumers are provided in the SDK as well as new XML Documentation (i.e. C# equivalent to Java’s Javadoc) and HTML Help (.chm) documentation.

DFS service development tools have been enhanced to provide a facility for generating CLS-compliant .NET productivity layer support for your own services, too. The DFS .NET productivity layer is based on Microsoft’s Windows Communication Foundation (WCF) framework within .NET 3.0.

James, it is a development TOOL designed to help .NET developers more quickly develop DFS-based applications. There is no actual change to DFS itself in relation to this feature. Calls could be made to DFS from .NET before this latest release; the difference is that, before, everything had to be built manually.
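
For illustration, here is a minimal sketch of what “built manually” meant: a plain WCF client treating a DFS endpoint as an ordinary web service. The service contract, endpoint URL, and object id below are hypothetical stand-ins rather than the actual DFS API; the point is that the productivity layer simply generates and packages this kind of client-side plumbing for you.

    using System;
    using System.ServiceModel;

    // Hypothetical contract standing in for a hand-built DFS service proxy.
    [ServiceContract]
    public interface IDocumentQueryService
    {
        [OperationContract]
        string GetObjectName(string objectId);
    }

    public class ManualDfsConsumer
    {
        public static void Main()
        {
            // Standard WCF plumbing: binding, endpoint, channel. Before the .NET
            // productivity layer, a consumer had to wire this up (plus SOAP headers,
            // context registration, and so on) by hand.
            BasicHttpBinding binding = new BasicHttpBinding();
            EndpointAddress endpoint =
                new EndpointAddress("http://dfs-host:8080/services/core/query"); // hypothetical URL

            ChannelFactory<IDocumentQueryService> factory =
                new ChannelFactory<IDocumentQueryService>(binding, endpoint);
            IDocumentQueryService service = factory.CreateChannel();

            Console.WriteLine(service.GetObjectName("0900000180001234")); // hypothetical object id

            ((IClientChannel)service).Close();
            factory.Close();
        }
    }

Nothing on the server changes either way; the feature only affects how much of this plumbing the client developer has to write by hand.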

EMC is just trying to make life easier for developers.

On Abuse in the Blogosphere

I just want to let James know that I don’t feel abused. I said that in jest. If I truly felt abused, I’d be done.

I would have been done a while ago over his constant attacks on Craig Randall, blaming him for every ECM issue and taking him to task for not responding. Others have called it a day. I keep reading to try to call James to task on some of the things he posts, and because I do learn some things outside of the ECM world. Eventually, that may not be enough motivation to keep reading.

James, please start to challenge and question people without insulting them. If you don’t, you may find yourself yelling into the void and not getting any response. It is your blog and your right to use it as you will. Just remember, we respond and interact at our will.

4 thoughts on “The Endless Security Cycle”

  1. Chris Campbell says:

    I’ve only been vicariously following the security topic for the past few weeks, mostly because I’ve been so busy with a ton of other stuff, and partly because of the same tired arguments being thrown out.

    Here’s my take. If someone really, really wants information, it’s just going to be a matter of how much time and money they are willing to spend to get it. What DRM hasn’t been cracked in some way already? If you can see it or hear it, that information can be copied in some way.

    My philosophy has always been to spend your time and money protecting the most important, critical information. Sometimes you have to totally segment off sections of your network if it’s critical. You just do the best you can.

    If I wanted to steal James’ identity, it would be fairly easy in today’s world. No need to hack into an EDM system. I’d just dig through his trash or steal his mail. That’s the low-hanging fruit. Here’s the thing: the people who want your information are going to be after only a few things: personal identity, financials, or trade secrets. The people who are actually doing the hacking are professionals brokering information to organized crime. (Not always, but you’d be surprised.)

    So what if a company uses automated testing? It’s all in how you use the tool. Just because I have a surgical operating room in my basement doesn’t make me a brain surgeon. Concentrate on your employees, making sure they are trained and happy. They are the ones who will be stealing from you most of the time anyway.

    Don’t lose any sleep over his blog rants. It’s hard to take him seriously anyway when his blog constantly loses focus because his thoughts wander, and then he clutters up his site with random political commentary. (Seriously, what’s up with that? It doesn’t matter if it’s left-wing or right-wing. Mixing ECM content with Hillary/Bush/Iraq images makes him look like he’s the next “Timecube Guy” or a frequent caller to Art Bell.)

  2. As I read through this thread, I had flashbacks to long, unproductive meetings with security people who were always quick to criticize our methods, malign our professional ethics and identify problems their automated tools had discovered. They never seemed to offer any practical solutions or have any idea of how to quantify the risk much less the cost to remediate it.

    Inherent in the meetings was the underlying theme ‘it is your responsibility to prove that there are no security problems’ without ever once acknowledging that YOU CAN’T PROVE A NEGATIVE. You can never prove something doesn’t exist. You can only prove that it does, or collect enough evidence to the contrary to lower the risk to a level acceptable for the case at hand.

    What some of the sensible guys in these meetings did teach me is that it is ultimately not the software vendor’s responsibility to secure the environment. We ran our own scans on everything and turned up the same things time and again on every product. To be terribly honest, no one was really worried about SQL injection attacks from inside the firewall. It’s easier to bribe an administrator or make a photocopy of what you left on your desk. As far as I am concerned, if you ever got hacked because you put a WebTop login on the internet, you got what you deserved.

    A vendor will only ever test what the market demands, not what an ideological, self-proclaimed thought leader declares. ECM is a business, not a religion, and dogma without ROI is for the simple-minded. It’s up to the owner of the system, not the component vendor, to understand what risks he is introducing when he installs the product. If the product leaks like a tea strainer, shame on you for buying it in the first place, OR worse, for deploying it in an unsecured manner.

  3. I’ll hold you to that – I’ll be the fat guy in the armedia booth wearing the kevlar vest trying not to make eye contact with James.
