We’ve heard these stories before, haven’t we?

Religious objections to equality are not the right way to set direction for government. Policy should not be set by those who still prohibited interracial dating all the way up to this century.

Arguments were heard yesterday before the Supreme Court in Greece v. Galloway, a case regarding prayer during legislative sessions; the transcript is posted here. Background and discussion on the case are available at SCOTUSblog, and details of AU’s involvement in the case can be found at the au.org website.

Anything is possible once you no longer care if you succeed and stop trying and just sit on the couch in your underwear imagining you’re doing it.

Strategy needs to be translated into activity in order to preserve its intent. Otherwise strategy remains on paper only, and never becomes the actual direction in which we are heading.

I suppose I need to clarify that a little. It’s a general idea I’ve been using in my day-to-day work for a while now, but I sometimes find it hard to explain in detail; I thought putting it in writing might help me work through some of the kinks.

I think the best way to reason through it is with a specific example, and since Vulnerability Management (VM) seems to be top of mind for me right now, I’ll use Security as the example. Note that I’m not picking on Security as the only culprit here; I think we (as a company) do this all over the place, but Security is something I’m familiar with.

We had several major gaps in our security and compliance processes that we were trying to close with the VM project. A perfect example is the identification and installation of software patches within our environment. The way this should work is:

  • someone is closely monitoring the releases of patches from the vendors
  • someone identifies which patches are relevant to the company and should be installed
  • someone identifies where the patches are needed (where the relevant software versions are present within the company)
  • someone creates a task to install the patch, routed to the correct group
  • someone performs the task, installs the patch
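The steps above can be sketched as a simple data flow. This is a minimal illustration only: the inventory, the routing table, and all the names in it are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    vendor: str
    product: str
    patch_id: str

# Hypothetical inventory: which products run on which servers.
INVENTORY = {
    "web01": ["apache httpd 2.2"],
    "db01": ["oracle 11g"],
}

# Hypothetical routing table: which group owns each server.
OWNERS = {"web01": "unix-ops", "db01": "dba-team"}

def relevant(patch: Patch) -> bool:
    """Step 2: is any server in the company running the patched product?"""
    return any(patch.product in products for products in INVENTORY.values())

def affected_servers(patch: Patch) -> list:
    """Step 3: where is the patch actually needed?"""
    return [s for s, products in INVENTORY.items() if patch.product in products]

def create_tasks(patch: Patch) -> list:
    """Steps 4-5: one install task per affected server, routed to its owner group."""
    return [{"server": s, "patch": patch.patch_id, "group": OWNERS[s]}
            for s in affected_servers(patch)]

p = Patch("Oracle", "oracle 11g", "CPU-2013-10")
tasks = create_tasks(p) if relevant(p) else []
```

The point isn’t the code; it’s that every step has an explicit owner and an explicit hand-off, instead of an implied “someone”.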

Or let’s think of another example: setting a security policy:

  • someone writes the policy that says “all databases containing credit card info must be encrypted”
  • someone interprets the policy for each of the databases in the company
  • someone documents the approved products, installation and configuration options to set correctly in order to meet the policy requirements
  • someone installs and configures the products to meet the policy

The problem? All those “someones”. Organizationally, the enterprise has traditionally done a poor job of assigning the right people to those roles, mostly by putting too much of a wedge between the policy ‘definers’ and the policy ‘implementers’. This got a bit worse as part of our outsourcing, but it was here all along. We ended up with situations like the following:

  • someone identifies the software patches that should be installed in the company
  • that info is handed over the wall to the operations teams
  • ops teams don’t know where the patch is required, or don’t have enough operationally-specific information to install it correctly and completely
  • any questions back over the wall get the response: “We don’t do that; that’s operations. We just define the policy, you have to figure it out”
  • the patch never gets installed


Or, in the policy example:

  • The statement “databases with credit card information should be encrypted” is made as a policy
  • The policy is handed over the wall to the ops teams, and they are told “go encrypt everything that has CC data”
  • The ops teams ask “where do we have CC data? And how do we configure the 150 options this encryption software has to make sure we meet your expectations? And how do we support this over time? Who will be monitoring the logs, and who do they notify when something happens? And which of the ‘something happens’ events do you need to know about, and which are noise?”
  • The strategy team says “that’s an ops issue.”
  • The ops teams install and configure the software incorrectly or incompletely, or ignore it entirely, in the absence of complete knowledge of how to implement and support it. It’s not supported over time, and no one looks at the logs to catch and respond to errors.

We need to improve the way the strategy groups respond to requests for clarification and understanding from the operations groups. The best way to do that is to make sure the conversation goes this way:

“Here are the standards and the policies and the requirements”

“OK.  How do I implement all that in this environment?”

“Hmm.  I don’t know.  But I know the strategy, you know the environment: let’s figure it out together.”

How do we do that?  Good question.  The recent work we’ve been doing to develop Minimum Baseline Standards is a great start, but it’s not enough to get five of these a year from a consultant, where they remain pretty much set in stone until the next year.  We need meetings between the strategy and operations representatives to be a natural, regular part of business, and we need people whose main responsibility is to translate from one to the other, breaking down the high-level strategy into detailed implementation, with full knowledge of both.  Otherwise the strategy remains on paper: ignored, implemented incorrectly, or implemented to barely satisfy the letter of the law rather than the spirit inherent in the strategy.

How did we attempt this in VM?  By making sure it wasn’t enough for the security strategy groups to identify patches that needed to be addressed in theory; we required that they link those patches to the vulnerabilities identified, within our company, by our scanning systems.  Then we translated the strategic view into an operational activity list: “here’s what we have to do, within our company, on these specific servers, in order to meet the requirements of our security strategy.  And here’s when you can do it, and here’s the group that is responsible for the task.”  It’s more work, sure: there are additional steps to map the identified issues to the issues actually present in the enterprise, map those issues to the activities required to fix them, and map the activities to the respective responsible and accountable groups.  But it’s necessary, and I’m not sure there’s a better or more efficient way to do it.
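A toy sketch of that translation step, with entirely hypothetical scanner findings, patch mappings, and ownership data:

```python
# Hypothetical inputs: scanner findings, the strategy group's
# vulnerability-to-patch mapping, and operational ownership data.
findings = [
    {"server": "app03", "vuln": "CVE-2013-1493"},
    {"server": "app07", "vuln": "CVE-2013-1493"},
]
vuln_to_patch = {"CVE-2013-1493": "java-7u17"}
server_owner = {"app03": "midrange-ops", "app07": "midrange-ops"}
maint_window = {"midrange-ops": "Sat 02:00-06:00"}

# Translate the strategic view into an operational activity list:
# what to do, on which server, when, and by whom.
activity_list = [
    {
        "server": f["server"],
        "action": "install " + vuln_to_patch[f["vuln"]],
        "group": server_owner[f["server"]],
        "window": maint_window[server_owner[f["server"]]],
    }
    for f in findings
    if f["vuln"] in vuln_to_patch
]
```

Each record answers the four operational questions at once, so nothing gets thrown back over the wall.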

The same applies to all of our strategies.  We do a sub-optimal job of translating strategy into operational process to solve the “Monday morning 8:00am” problem, which can be expressed as follows: when system administrators sit down at their desks at 8:00am next Monday morning, they have a thousand things they could do.  How do they know what they should do, and what to do first?  When they make a selection and start work, are they choosing the operational tasks that are (ultimately) prioritized, sorted and filtered by the company strategy?  Can you show that link from the company strategy to the first item done at 8:00am?  If not, then your strategy only exists on paper.
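One way to make that link testable is to carry a strategy reference and its priority on every operational task. A minimal sketch, with hypothetical task data:

```python
# Hypothetical task queue: each operational task carries a link back to the
# strategic objective that justifies it, plus that objective's priority.
tasks = [
    {"task": "rotate backup tapes", "strategy": None, "priority": 99},
    {"task": "install java-7u17 on app03", "strategy": "VM: close PCI gaps", "priority": 1},
    {"task": "tune batch job", "strategy": "cost reduction", "priority": 5},
]

# The 8:00am answer: tasks that trace to a strategy, sorted by that
# strategy's priority. Anything with no strategy link is immediately suspect.
monday_queue = sorted(
    (t for t in tasks if t["strategy"] is not None),
    key=lambda t: t["priority"],
)
first = monday_queue[0]["task"]
```

If you can’t populate the `strategy` field for a task, that’s exactly the paper-only gap the post is describing.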

We had a need to put in place a vulnerability management system for our servers.  It needed to contain a ton of different data from multiple systems, bringing it all together in a way that could be correlated, in order to provide a “scorecard” for each server that could be rolled up by business unit.

So we built it.

I want to document a bit of this, partially so I can remember how we did it, but also so that others can hopefully learn from our mistakes.

When the process first started, I was approached with a request to build a “health check” report for our servers.  It was practically impossible for us to understand the overall security status of a particular server, considering all of the variables and the different systems that each held part of the data.  In order to understand the “health” of a server, we needed to know:

  • What high-level business applications run on it?
  • What software is installed on the server to support that business application?
  • Does the application fall in scope of any of our security and regulatory compliance programs (e.g. S-Ox, PCI, PII, GLBA)?  And if so, what are the algorithms that determine whether this server falls into scope?
  • What basic tools does the server need installed for day-to-day management and monitoring?
  • What additional tools does the server need installed for security and regulatory compliance (e.g. HIDS for PCI)?
  • Are those tools reporting correctly, and are they configured in the right way?
  • Are any of the tools reporting conflicting information?  For example, is the software asset management tool reporting an installation of a monitoring tool, while the console for that tool has not received any communication from its agent?  That can imply misconfiguration (or simple disabling) of a particular tool.
  • What vulnerabilities exist on the server?  And are they:
    • missing patches
    • configuration file issues
    • missing tools
    • incorrect group memberships

At the end of the day, there are two outputs from collecting and understanding this pile of data:

  1. The “health check” report, which can algorithmically be converted into a “risk score” for each server
  2. The “activity list” report, which is the list of things that need to be done to this server to reduce the “risk score”.
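A minimal sketch of how those two outputs can fall out of the same findings data. The weights and findings here are hypothetical, not our actual scoring algorithm:

```python
# Hypothetical weights per finding type; the real algorithm would be
# tuned to the compliance programs the server falls under.
WEIGHTS = {"missing_patch": 10, "config_issue": 5,
           "missing_tool": 8, "bad_group_membership": 3}

findings = [
    {"server": "web01", "type": "missing_patch", "detail": "MS13-052"},
    {"server": "web01", "type": "missing_tool", "detail": "hids-agent"},
    {"server": "db01", "type": "config_issue", "detail": "weak encryption settings"},
]

def risk_score(server):
    """Output 1: the 'health check' number for one server."""
    return sum(WEIGHTS[f["type"]] for f in findings if f["server"] == server)

def activity_list(server):
    """Output 2: the list of things to do to drive the score down."""
    return ["fix %s: %s" % (f["type"], f["detail"])
            for f in findings if f["server"] == server]
```

Because both reports derive from the same findings table, closing an item on the activity list automatically moves the risk score, which is what makes the roll-up by business unit meaningful.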

To build this, we leveraged:

  • MS SQL Server (the database to store all the collected data)
  • SQL Reporting Services (to produce the two reports listed above, as well as a metric buttload of other reports)
  • SQL Integration Services (to import and aggregate all the data from the multiple sources)
  • Iron Speed Designer (for the interface)

All of this to bring in data from (currently):

  • Our Application Portfolio Manager (to understand the relationship between servers and business apps, and the scopes for those applications)
  • Service Center (the quasi-CMDB and server asset management tool, to get basic data on the servers themselves)
  • Our event logging tool
  • Our HIDS tool
  • Multiple A/V tools (including different versions of McAfee and Symantec agents)
  • The database monitoring and encryption tool
  • Multiple vulnerability management and patch deployment systems
  • Our internal vulnerability assessment tools, which assign categories and overall security severities and importance to the discovered vulnerabilities
  • The software asset management tools
  • The reporting tools from the supplier/vendor supporting the server hardware itself
  • Several other smaller utilities and consoles to provide additional required data: financial, business unit ownership, responsibility and ownership hierarchies

More details in coming posts.

Figuring out why a Cognos report that ran just fine before now seems to want to run the entire report query before I even get to the second cascaded prompt… not easy, but I’m guessing “bug”. Must I upgrade again?

I’ll start this out by mentioning that I know about PLMXML, and it’s not what I want.

Here’s what I want: a standard format that vendors can use to exchange information about the lifecycles for their products.  Ideally, there would be standardized lifecycle phases that would mean the same thing across all implementations, and standard lifecycle formats.

Haven’t seen that anywhere here on the Intertubes, and I’m thinking of working on proposing and/or developing a standard for it with others who seem to like the idea, but I wanted to throw the idea out there so that I don’t spend the next three months on a specification only to be told “Oh yeah, we already did that two years ago, and we’re all smarter than you are, so we addressed all these other issues that you didn’t think about.”  And then they’d give me a wedgie.

Simple stuff really: I want to be able to import this info into our architecture repository so we can start comparing the technologies used by our apps to the vendor lifecycles, and feed that as a big chunk of data into our strategic planning.  You have an app that will still be up and running in five years, but all the technologies it’s running on will be end-of-support-life in one year?  Well, then your five-year plan had better include a project to upgrade it, hadn’t it?
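That comparison is straightforward once the lifecycle data is importable. A sketch with hypothetical app names, technologies, and dates:

```python
from datetime import date

# Hypothetical data: how long each app must keep running, vs. when its
# underlying technologies fall out of vendor support.
app_planned_until = {"order-entry": date(2018, 1, 1)}
app_technologies = {"order-entry": ["AppServer 5.1", "DB 9.2"]}
tech_end_of_support = {"AppServer 5.1": date(2014, 6, 30),
                       "DB 9.2": date(2019, 1, 1)}

def upgrade_needed(app):
    """Technologies that reach end-of-support before the app's planned
    retirement: each one implies an upgrade project in the plan."""
    return [t for t in app_technologies[app]
            if tech_end_of_support[t] < app_planned_until[app]]
```

That single comparison, run across the whole portfolio, is the “big chunk of data” the strategic plan needs.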

Basic attribute requirements for a high-level component in the XML would be:

  • Technology Name
  • Technology version/subversion(s)
  • Product description
  • Technology type (hardware product model, software product version, industry standard version)
  • Lifecycles (should be several, these are just examples): Beta, Supported Release, End of Standard Support, End of Extended Support, Discontinued
  • Each lifecycle has a start date and an end date, at a minimum.  The last cycle’s end date can be “infinite” or “undetermined” or something similar
  • Each lifecycle (and the versioning doc as a whole) can have a categorization as to how public the knowledge is, ranging from “freely available” to “confidential”, but that doesn’t mean there’s DRM on the XML doc itself.  That’s a separate security and control question.
  • Comments for lifecycles and for the component
  • URIs to the most recent version of the lifecycle doc for this technology
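To make the attribute list concrete, here’s what a document in such a format might look like, parsed with Python’s standard library. The element and attribute names are invented for illustration; this is a sketch of the proposal, not an existing standard.

```python
import xml.etree.ElementTree as ET

# A hypothetical instance of the proposed lifecycle format.
doc = """
<technology name="ExampleDB" version="9.2" type="software product version"
            visibility="freely available">
  <description>Example relational database</description>
  <lifecycle phase="Supported Release" start="2010-01-01" end="2014-06-30"/>
  <lifecycle phase="End of Standard Support" start="2014-07-01" end="2016-06-30"/>
  <lifecycle phase="Discontinued" start="2016-07-01" end="undetermined"/>
  <uri>https://vendor.example.com/lifecycles/exampledb-9.2.xml</uri>
</technology>
"""

root = ET.fromstring(doc)
# Each lifecycle phase carries a start and end date; the last phase's
# end can be "undetermined", per the requirements above.
phases = [(lc.get("phase"), lc.get("start"), lc.get("end"))
          for lc in root.findall("lifecycle")]
```

A consumer would pull the `<uri>` periodically to pick up lifecycle changes, and load `phases` into the architecture repository for the comparison described above.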

See?  Simple stuff.  But so simple I expect someone has at least looked at it.

