We’ve heard these stories before, haven’t we?

Religious objections to equality are not the right way to set directions for government.  Policy should not be set by those who still prohibited interracial dating all the way up to this century.

Arguments were heard yesterday before the Supreme Court in the Greece v. Galloway case regarding prayer during legislative sessions: the transcript is posted here.  Background and discussion on the case are available at SCOTUSblog, and details of AU’s involvement in the case can be found at the au.org website.


Anything is possible once you no longer care if you succeed and stop trying and just sit on the couch in your underwear imagining you’re doing it.


Situation: an SSIS package was configured to call a separate package contained within the same database.  Result: the following error:

Description: Error 0xC0014062 while preparing to load the package. The LoadFromSQLServer
method has encountered OLE DB error code 0x80040E09
(The EXECUTE permission was denied on the object 'sp_ssis_getpackage',
database 'msdb', schema 'dbo'.).  The SQL statement that was issued has failed.

Solution: find the user account that is associated with the connection in the “Execute Package” task. In SQL 2008 R2, that account needs to be added to the db_ssisoperator role in the msdb database; otherwise it can’t find the other package that is being called.  I’m not sure what other effects this might have on rights, but it seems to be the right role according to the description on this page, where it states that the db_ssisoperator role gets read rights only.
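For reference, here’s a sketch of the fix in T-SQL. The login name is a placeholder: substitute the actual account behind the “Execute Package” task’s connection. On SQL 2008 R2 the role membership is added with sp_addrolemember:

```sql
-- Placeholder login: substitute the account used by the
-- "Execute Package" task's connection.
USE msdb;
GO
-- Create a user in msdb for the login, if one doesn't already exist.
CREATE USER [DOMAIN\SsisProxyUser] FOR LOGIN [DOMAIN\SsisProxyUser];
GO
-- db_ssisoperator grants read access to stored packages,
-- which is what sp_ssis_getpackage needs.
EXEC sp_addrolemember N'db_ssisoperator', N'DOMAIN\SsisProxyUser';
GO
```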

Sometimes I walk alone outside late at night, when the streets are empty of everything but the whisper of lives lived and lost, and my eye is drawn up to the billions of stars over my head, and I have to wonder: is a SUM/COUNT/MAX/MIN over a VALUES statement the most effective way to calculate an aggregate over columns instead of rows?

The answer, by the way, is “Yes”.

For example:

SELECT keycol,
       (SELECT MAX(val)
        FROM (VALUES (col1val), (col2val), (col3val),
                     (col4val), (col5val), (col6val)) AS D(val)) AS MaxVal
FROM dbo.TableName

That’s pretty beautiful, considering the mess of UNION and CASE statements this would otherwise require.
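A quick throwaway demo, if you want to convince yourself (the table and data here are hypothetical, just to show the shape of the result):

```sql
-- Hypothetical example: max of three score columns, per row.
CREATE TABLE dbo.Scores (keycol INT, score1 INT, score2 INT, score3 INT);
INSERT INTO dbo.Scores VALUES (1, 10, 30, 20), (2, 5, 5, 40);

-- The VALUES derived table turns the columns into rows,
-- so a plain MAX works per outer row.
SELECT keycol,
       (SELECT MAX(val)
        FROM (VALUES (score1), (score2), (score3)) AS D(val)) AS MaxScore
FROM dbo.Scores;
-- keycol 1 -> 30, keycol 2 -> 40
```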

Strategy needs translation to activity in order to preserve intent.  Otherwise strategy remains on paper only, and never becomes the actual direction in which we are heading.

I suppose I need to clarify that a little?  It’s a general idea I’ve been using in my day to day work for a while now, but I sometimes find it hard to explain in detail: I thought putting it in writing might help me work through some of the kinks.

I think the best way to reason through it is with a specific example, and since Vulnerability Management (VM) seems to be top of mind for me right now, I’ll use Security as the example.  Note that I’m not picking on Security as the only culprit here; I think we (as a company) do this all over the place, but Security is something I’m familiar with.

We had several major gaps in our security and compliance processes that we were trying to close with the VM project.  A perfect example is the identification and fixing of software patches within our environment.  The way this should work is:

  • someone is closely monitoring the releases of patches from the vendors
  • someone identifies which patches are relevant to the company and should be installed
  • someone identifies where the patches are needed (where the relevant software versions are present within the company)
  • someone creates a task to install the patch, routed to the correct group
  • someone performs the task, installs the patch

Or let’s think of another example: setting a security policy:

  • someone writes the policy that says “all databases containing credit card info must be encrypted”
  • someone interprets the policy for each of the databases in the company
  • someone documents the approved products, installation and configuration options to set correctly in order to meet the policy requirements
  • someone installs and configures the products to meet the policy

The problem?  All those “someones”.  Organizationally, what we have traditionally had at the enterprise hasn’t done a very good job of assigning the right people to those roles, mostly by putting too much of a wedge between the policy ‘definers’ and the policy ‘implementers’.  This got a bit worse as part of our outsourcing, but it was here all along.  We ended up with the following types of situations:

  • someone identifies the software patches that should be installed in the company
  • that info is handed over the wall to the operations teams
  • ops teams don’t know where the patch is required, or don’t have enough operationally-specific information to install correctly/completely
  • any questions back over the wall get the response: “We don’t do that, that’s operations, we just define the policy, you have to figure it out”
  • the patch never gets installed


  • The statement “databases with credit card information should be encrypted” is made as a policy
  • The policy is handed over the wall to the ops teams, and they are told “go encrypt everything that has CC data”
  • The ops teams ask “where do we have CC data?  And how do we configure the 150 options that this encryption software has, to make sure we meet your expectations?  And how do we support this over time?  Who will be monitoring the logs, and who do they notify when something happens?  And which are the ‘something happens’ events that you need to know about, and which are noise?”
  • Strategy team says “that’s an ops issue.”
  • Ops team installs and configures the software incorrectly or incompletely, or ignores it entirely, in the absence of complete knowledge of how to implement and support it.  It’s not supported over time, and no one looks at the logs to see and respond to errors.

We need to improve the way the strategy groups respond to requests for clarification and understanding from the operations groups.  The best way to do that is to make sure the conversation goes this way:

“Here are the standards and the policies and the requirements”

“OK.  How do I implement all that in this environment?”

“Hmm.  I don’t know.  But I know the strategy, you know the environment: let’s figure it out together.”

How do we do that?  Good question.  The recent work we’ve been doing to develop Minimum Baseline Standards is a great start, but it’s not enough to get five of these a year from a consultant, where they remain pretty much set in stone until the next year.  We need meetings between the strategy and the operations representatives to be a natural, regular part of business, and we need people whose main responsibility is to translate from one to the other, breaking down the high-level strategy to the detailed implementation, with full knowledge of both.  Otherwise the strategy remains on paper: ignored, implemented incorrectly, or implemented to barely satisfy the letter of the law, rather than the spirit inherent in the strategy.

How did we attempt this in VM?  By making sure that it wasn’t enough for the security strategy groups to identify patches that needed to be addressed in theory, but requiring that they link those patches to the vulnerabilities identified, within our company, by our scanning systems.  Then translating the strategic view into an operational activity list: “here’s what we have to do, within our company, on these specific servers, in order to meet our requirements for our security strategy.  And here’s when you can do it, and here’s the group that is responsible for the task.”  It’s more work, sure: there are the additional steps to map the identified issues to the enterprise-present issues, map the issues to the activity required to fix them, and map the activities to the respective responsible and accountable groups.  But it’s necessary, and I’m not sure there’s a better or more efficient way to do it.

The same applies to all of our strategies.  We do a sub-optimal job of translating strategy to operational process to solve the “Monday morning 8:00am” problem, which can be expressed as follows: when a system administrator sits down at his/her desk at 8:00am next Monday morning, they have a thousand things they can do.  How do they know what they should do, and do first?  When they make a selection and start work, are they choosing the operational tasks that are (ultimately) prioritized, sorted and filtered by the company strategy?  Can you show that link from the company strategy to the first item done at 8:00am?  If not, then your strategy only exists on paper.

We had a need to put in place a vulnerability management system for our servers, and it needed to contain a ton of different data from multiple systems, bringing it all together so the data could be related, in order to provide a “scorecard” for each server that could be rolled up by business unit.

So we built it.

I want to document a bit of this, partially so I can remember how we did it, but also so that others can hopefully learn from our mistakes.

When the process first started, I was approached with a request to build a “health check” report for our servers.  It was practically impossible for us to understand the overall security status of a particular server, considering all of the variables and different systems that held part of the data.  In order to understand the “health” of a server, we need to be able to know:

  • What high-level business applications run on it?
  • What software is installed on the server to support that business application?
  • Does the application fall in scope of any of our security and regulatory compliance programs (e.g. S-Ox, PCI, PII, GLBA)?  And if so, what are the algorithms that determine whether this server falls into scope?
  • What basic tools does the server need installed for day-to-day management and monitoring?
  • What additional tools does the server need installed for security and regulatory compliance (e.g. HIDS for PCI)?
  • Are those tools reporting correctly, and are they configured in the right way?
  • Are any of the tools reporting conflicting information?  For example, is the software asset management tool reporting an installation of a monitoring tool, while the console for that tool has not received any communication from that agent?  That can imply misconfiguration (or simple disabling) of a particular tool.
  • What vulnerabilities exist on the server?  And are they:
    • missing patches
    • configuration file issues
    • missing tools
    • incorrect group memberships
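The “conflicting information” check in the list above boils down to an anti-join between inventories.  A sketch of the idea (the table and column names here are hypothetical, not our actual schema):

```sql
-- Hypothetical tables: InstalledSoftware from the asset management
-- tool, AgentCheckins from the monitoring tool's console.
-- Find servers where the agent is installed but has been silent.
SELECT i.ServerName
FROM dbo.InstalledSoftware AS i
LEFT JOIN dbo.AgentCheckins AS a
       ON a.ServerName = i.ServerName
      AND a.LastCheckin >= DATEADD(DAY, -30, GETDATE())
WHERE i.ProductName = 'MonitoringAgent'
  AND a.ServerName IS NULL;  -- installed, but no check-in for 30+ days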

At the end of the day, there are two outputs from collecting and understanding this pile of data:

  1. The “health check” report, which can algorithmically be converted into a “risk score” for each server
  2. The “activity list” report, which is the list of things that need to be done to this server to reduce the “risk score”.
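The “risk score” half of this is, at its core, just a weighted sum over each server’s findings.  A minimal sketch of the idea (the Findings table, severity labels and weights are all hypothetical, for illustration only):

```sql
-- Hypothetical: one row per (server, finding), with a severity
-- assigned by the vulnerability assessment tools.
SELECT f.ServerName,
       SUM(CASE f.Severity
             WHEN 'Critical' THEN 10
             WHEN 'High'     THEN 5
             WHEN 'Medium'   THEN 2
             ELSE 1
           END) AS RiskScore
FROM dbo.Findings AS f
GROUP BY f.ServerName
ORDER BY RiskScore DESC;
```

The real scoring algorithm weighs more than severity (scope, compliance flags, tool status), but the shape is the same: findings in, one number per server out.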

To build this, we leveraged:

  • MS SQL Server (database to store all the collected data)
  • SQL Reporting Services (to produce the two reports listed above, as well as a metric buttload of other reports)
  • SQL Integration Services (to import and aggregate all the data from the multiple sources)
  • Iron Speed Designer (for the interface)

All of this to bring in data from (currently):

  • Our Application Portfolio Manager (to understand the relationship between servers and business apps, and the scopes for those applications)
  • Service Center (the quasi-CMDB and server asset management tool, to get basic data on the servers themselves)
  • Our event logging tool
  • Our HIDS tool
  • Multiple A/V tools (including different versions of McAfee and Symantec agents)
  • The database monitoring and encryption tool
  • Multiple vulnerability management and patch deployment systems
  • Our internal vulnerability assessment tools, which assign categories and overall security severities and importance to the discovered vulnerabilities
  • The software asset management tools
  • The reporting tools from the supplier/vendor supporting the server hardware itself
  • Several other smaller utilities and consoles to provide additional required data: financial, business unit ownership, responsibility and ownership hierarchies

More details in coming posts.

Figuring out why a Cognos report that ran just fine before now seems to want to run the entire report query before I even get to the second cascaded prompt… not easy, but I’m guessing “bug”. Must I upgrade again?

I’ll start this out by mentioning that I know about PLMXML, and it’s not what I want.

Here’s what I want: a standard format that vendors can use to exchange information about the lifecycles for their products.  Ideally, there would be standardized lifecycle phases that would mean the same thing across all implementations, and standard lifecycle formats.

Haven’t seen that anywhere here on the Intertubes, and I’m thinking of working on proposing and/or developing a standard for it with others who seem to like the idea, but I wanted to throw the idea out there so that I don’t spend the next three months on a specification only to be told “Oh yeah, we already did that two years ago, and we’re all smarter than you are, so we addressed all these other issues that you didn’t think about.”  And then they’d give me a wedgie.

Simple stuff really: I want to be able to import this info into our architecture repository so we can start comparing the technologies used by our apps to the vendor lifecycles, and feed that as a big chunk of data into our strategic planning.  You have an app that will still be up and running in five years, but all the technologies it’s running on will be end-of-support-life in one year?  Well, then your five-year-plan had better include a project to upgrade it, hadn’t it?

Basic attribute requirements for a high-level component in the XML would be:

  • Technology Name
  • Technology version/subversion(s)
  • Product description
  • Technology type (hardware product model, software product version, industry standard version)
  • Lifecycles (should be several, these are just examples): Beta, Supported Release, End of Standard Support, End of Extended Support, Discontinued
  • Each lifecycle has a start date and an end date, at a minimum.  The last cycle’s end date can be “infinite” or “undetermined” or something similar
  • Each lifecycle (and the versioning doc as a whole) can have a categorization as to how public the knowledge is, ranging from “freely available” to “confidential”, but that doesn’t mean there’s DRM on the XML doc itself.  That’s a separate security and control question.
  • Comments for lifecycles and for the component
  • URIs to the most recent version of the lifecycle doc for this technology

See?  Simple stuff.  But so simple I expect someone has at least looked at it.
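To make the attribute list above concrete, here’s a rough strawman of what a single component might look like.  All element names, values and the URI are illustrative, not a proposed schema:

```xml
<!-- Strawman only: names and values are made up for illustration. -->
<technology name="ExampleDB Server" version="9.1"
            type="software-product-version"
            confidentiality="freely-available">
  <description>Relational database server, 9.1.x line</description>
  <lifecycle phase="Supported Release" start="2010-03-01" end="2013-03-01"/>
  <lifecycle phase="End of Standard Support" start="2013-03-01" end="2015-03-01"/>
  <lifecycle phase="End of Extended Support" start="2015-03-01" end="undetermined">
    <comment>Extended support contracts negotiated per customer.</comment>
  </lifecycle>
  <!-- Pointer to the most recent version of this lifecycle doc -->
  <lifecycleDoc uri="https://vendor.example.com/lifecycles/exampledb-9.1.xml"/>
</technology>
```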


We have three different types of applications or software that are hosted on Citrix servers: pretty typical stuff.

  1. Fat client runs on Citrix, connects to a back-end hosted somewhere else
  2. Thin client (browser) runs on Citrix, connects to back-end hosted somewhere else
  3. Standalone client runs on Citrix directly, no back end.  Office applications (e.g., Word, Access) as well as full-blown virtual desktops fall into this category.

Option (2) always sounds like a strange one (wouldn’t any Citrix client also have a browser installed?), until you realize that we use those for remote connections.  We don’t allow direct access to internal applications from the outside world, but you can either VPN in or connect to the external network-facing Citrix boxes, and then run the apps from there.

We run Troux’s Enterprise Repository to model our Enterprise Architecture world, including the relevant parts of our application portfolio, and I’m trying to find the most appropriate way to model the Citrix relationships.  You see, in Troux’s world an application runs on software modules, which are instantiations of specific versions of software products on infrastructure components.  So as an example, an application (which is a business construct, referenced as a whole, different from the component pieces of software like MS SQL) called “HR Payroll Processing” runs on MS SQL 2005 on Server1.  In that case, the Application is “HR Payroll Processing”, the software module is “MS SQL 2005 on Server1”, which is an object that links the app to the server, and the infrastructure component is Server1.

We have never traditionally used any software modules for the clients.  In fact, we’ve never represented the clients (fat or thin) in the EA repository: never really needed to.  I’m trying to figure out whether it makes sense for us to model the clients that are installed on Citrix servers using software modules, or whether it actually makes more sense to use a different object type. Has anyone out there dealt with this?  Not necessarily on Troux, I’d be interested in hearing about experiences on any EA modeling tools.  My idea is that we would create, for the cases above:

  1. Software modules called “HR Payroll Processing Client – CitrixServer1”.  These would link to the application and the server objects.
  2. Software modules called “Browser Client – CitrixServer1”.  These would link to the application and the server objects.
  3. Software modules called “<Software name> – CitrixServer1” (e.g., “Outlook 2007 – CitrixServer110”).  These would link only to the server objects.

One of the complexities of this model is that we need to produce reports that show the owners for everything installed on the Citrix servers for DR and general operational purposes.  But now this means I have to assign owners to the Citrix installations of standalone software like Office, and the report will have to show the union of owners of the first two types (which is a secondary link to the application owner relationships) and the standalone software.

In the alternate approach, I create Application objects for each of the Citrix installations of standalone software, maybe make them children applications of a parent “Citrix implementation” app. This seems duplicative to me (an app called “Citrix Outlook 2007” that runs on a software module called “Outlook 2007 – CitrixServer1” that runs on “CitrixServer1”), in addition to going against our standard application definition which states that software <> application.  There are pros and cons to each approach, but we’re early enough in our EA gathering that we don’t know how this information will be updated and used in the future, so it’s hard to understand where the additional work will be least annoying: in the work needed to create additional application objects, or in the need to create output reports.  Either way, someone’s going to be doing more work than strictly necessary in order to make the output look consistent, and I know that if I have to create additional application objects that person will be me.  I’ll do it if I have to, but I want to make sure I’m doing the right thing.

Probably just open musing at this point: I know that if someone asked me this question my answer would be “it depends”.   Anyone else been working on EA modeling?  Anyone modeled their Citrix apps?

I’ve been burned by this one so many times it’s not funny.  Although if you like seeing me get frustrated, then I guess it is funny.

Here’s the issue: you have a SQL SELECT statement that you’re using in a Cognos Report Studio report, and you’ve verified that it is syntactically correct. It runs fine in Studio Express, for example.  But then you try to add a filter (also one that is syntactically correct) in Report Studio, and you get an error.  The SQL SELECT statement is correct, the filter is correct, but enable the filter and it fails.  What…?

Short answer: Don’t put “ORDER BY” sort statements in the SQL SELECT command. Your order statements should only occur in Report Studio.

Long answer: 

The reason this errors out is that when you put in a filter in the report (not in the original SQL SELECT), Report Studio adds that filter to the end of the SELECT statement it constructs.  So if the SQL is:  

SELECT ApplicationName from R_Applications

And you add a filter like “[ApplicationName]='something'”, then Cognos bundles them together and sends this request to the SQL server:

SELECT ApplicationName from R_Applications
WHERE [ApplicationName]='something'

If the statement in the original SQL is

SELECT ApplicationName from R_Applications
ORDER BY ApplicationName

Then when Cognos sends the statement with the filter enabled it sends:

SELECT ApplicationName from R_Applications
ORDER BY ApplicationName
WHERE [ApplicationName]='something'

Which is a syntax error: WHERE cannot come after ORDER BY.

Ta da!

Caveat: running v8 of Report Studio, I hear rumors that this behavior changes slightly in later .x revisions.

When you create burst reports or run any Report Studio report so that it sends the results out as an email, you have a few options on the format in which the file goes out. Some of them don’t work so well in our environment because of restrictions on file types that we have in our email systems, so .htm or .html files will never make it past our filters: that one is obvious. What other types can you send reports out as?

  • CSV: success!
  • XML: success!
  • PDF: success!
  • Excel: FAIL!

That last one is a little confusing, to say the least. I would expect XML to fail before XLS. There’s a reason it does fail, though: no matter how you attempt to send an attached Excel file, Cognos actually sends out an .mht file instead, albeit with a MIME type of application/vnd.ms-excel. In my mind this is astonishingly backwards, especially considering there is no indication of what it’s going to do and why.

In any case, here’s how to fix it: you must add a server parameter so that the .mht file is sent with an .xls extension. This means you’re still sending an .mht file, but it at least looks like, and behaves like, an Excel file.


  1. Click the Tools menu in the Cognos portal and select “Server Administration”
  2. Select “Set Properties” for ReportService
  3. Select the “Settings” tab
  4. In “Advanced Settings” (usually the first option), click the ‘Edit…’ link
  5. Select “Override”
  6. In the first empty set of boxes, type in the parameter name RSVP.FILE.EXTENSION.XLS and set the value to TRUE

Repeat the above steps to Set Properties for BatchReportService, and when you send the reports send them as Excel 2002 (NOT as Excel 2000 or Excel single sheet).

Not very smart, misleading, and the cause of the error you get when you try to open an “Excel” file that came from Cognos, where it states that the file is not in the format that the extension indicates. An error that you get every. time. you. open. the. file.

A topic obviously near and dear to my heart.  It’s an article in CIO magazine, which by definition kind of means it will be superficial (it’s the same magazine that produced the lightweight “Apple as the enterprise desktop” I linked to some weeks ago).

To make it worse, they do the usual “post only two paragraphs per web page so you have to visit more pages and get more ads” trick which annoys me to no end, so here’s the list of things you should know:

  1. Telecommuting Saves Money. Truly.
  2. Telecommuters Really Can Be More Productive
  3. Telecommuting Doesn’t Work for Every Individual
  4. Trust Your People
  5. Hone Management Skills for Telecommuting
  6. Keep the Telecommuter in the Loop
  7. Tools and Technology Make a Big Difference

It also has an interesting sidebar that asks questions about who should pay for the tools the telecommuter uses.  It doesn’t give any answers, but it does mention the things that need to be considered.

I have my opinions about #7, specifically on who should pay for a 30″ Apple monitor that would fill a very empty space on my desktop.  Plus the 8-core Mac Pro that would be attached to it, of course.

Here are a few more that the CIO needs to know about:

  • Telecommuting doesn’t work for every job.  While the article mentions that it doesn’t work for every individual, it neglects the fact that there are some jobs for which telecommuting isn’t appropriate.  If the employee needs to be physically present to power systems on or off, that’s an obvious misfit, but there are also many jobs where most of the work has to do with relating to people directly (for organization, coordination or influence), and which may be poor candidates.
  • Telecommuting has significant benefits for the organization and the employee, but it has drawbacks for the employee in job and promotion opportunities.  As I’ve said many times in the past: it’s a tradeoff.  But the employer should not fall into the trap of considering telecommuting a “reward” that is nothing but an opportunity for the employee to goof off.
  • There’s more travel involved than you’d think.  When I started telecommuting, we did some calculations about how often I would have to travel back to the office locations, and I believe we said something along the lines of “once every six weeks or so”.  There’s been far more travel than that, and potential full-time telecommuters need to understand that this is vital for them to remain on the agenda and in other people’s minds.
  • Don’t underestimate the benefit of a good desk and a comfortable chair: you’ll be using them more often than you would in the office (you never have to get up to go to a conference room, since you’ll take all conferences on your phone).  A good keyboard is also vital, but OSHA regulations tend to be downplayed or ignored.

Something I learned when using Attensa, but that works very well in our current (test) implementation of NGES, and should work in any RSS/Atom feedreader that dumps blog entries into subfolders in Outlook. I've been using what I've found to be a very efficient way to read Atom/RSS feeds within the folders in Outlook, but it requires Outlook 2003: use a very simple search folder.

NGES, by default, puts each feed into a separate folder under a top-level "Feeds" folder. Normally, you'd have to open each folder individually to read it, which doesn't lend itself well to the type of "skimming" reading that many feeds require (I'm looking at you, del.icio.us/popular).

Here are the steps:

  • Right-click on "Search Folders" in Outlook 2003 and select "New Search Folder"
  • Select "Custom – Create a custom Search Folder", then click "Choose" to specify search criteria
  • Call the new folder whatever you want (e.g. "All Feeds")
  • For the folders that will be included in the search folder, select the NGES "Feeds" folder only, and leave "Search in Subfolders" turned on
  • Click "OK" on the warning that you have not specified any criteria
  • When the search folder populates, make sure that the view is arranged by folder (top of the view)

Voila! A single folder with all of your unread RSS/Atom feed items. You can select a "feed"/"folder" by clicking on the sorting group title (which has the feed name), and actions performed against that title are performed against all the feed entries: you can catch up on a feed and delete all items, for example, by selecting the folder and hitting "Del". You can go from item to item by using the space bar. Since the search folder is just a view, whatever you do to the entries in that folder is done to the original items.

Advanced capabilities:

  • hitting the space bar will go through the items and to the next unread entry, but depending on how you have Outlook configured, the item may or may not be marked as "read" automatically (mine is configured to not "mark as read").
  • Because this is a search folder capability, you don't need to limit yourself to just one view: the filters can be customized even further, and you can have separate, independent views of your feeds. You can aggregate from all your feeds only entries with specific keywords (I have an "enterprise architecture" and an "XML" view) into a single view, or categorize your feeds into groups by using search folders that only view specific subfolders under "Feeds".
  • You can categorize and search by date, subject, author, anything you want, and the view is populated automatically by Outlook's quite powerful search folder capabilities.
  • You can see how many unread items are in your entire set of feeds (the NGES "Feeds" folder only allows you to see how many unread items there are in each feed)
  • If you're really geeky and are using the GTD Outlook add-in, you can even create tasks and events off blog entries ("Read this later", "Comment on this blog")

Important note: remember, as you're investigating possibilities here, that each entry in an NGES feed is a "Post" item, not an email item. I found this out the hard way after trying to troubleshoot a search folder that relied on an email-specific property, when the "post" icon on each entry should have tipped me off. In my defense, I had the icons turned off at the time.

Forgive me, I’m slow. I’ve finally figured out what benefit large executive groups bring to the company: they actually get everything done! They prioritize, they schedule, they raise management awareness, they communicate… they GET THINGS DONE!

We have roughly… oh, I don’t know, a few here, a few there, carry the one… seven squazillion “executive councils” of one form or another. Boards, Councils, Leadership Committees, etc. I have always understood their value in theory (getting people to communicate, collaborate across disciplines, the ever desirable “synergy”), but the practice always seems to leave much to be desired. The group launches with laudable goals and much showering of PowerPoint visions and 12-, 18- and 24-month delivery charts. Six months later, there hasn’t been a meeting in three months, and the last one was only attended by half of the representatives, half of whom were actually just lower-level employees subbing for their very busy executives.

Bear in mind that I have nothing against lower-level employees. But they will only show up for one meeting, not understand the context of the discussion, and–because they are not executives–anything they say (no matter how insightful) will be ignored by those who are. After all, if you’re so smart, how come you’re not an executive like us?

Not that it’s a whole lot better when the executives do show up. Nothing discussed in the meetings will ever make it back to their respective groups or cascade down the information staircase. That means that everything that’s agreed to in the meetings will always take everyone else in the company by surprise when the (lower-level) employee tasked with implementation calls to ask for the impossible-to-gather information required to complete their impossible-to-complete chore.

[ Sidebar: Well, there’s one exception to this: vision statements. These are so grand and meaningless that no one will be surprised by them, and because they’re so non-controversial execs have no problem communicating them down the chain of command. And the best thing is, you don’t actually have to change anything you’re doing in order to comply with the vision.

A vision statement has to be, to some degree, controversial: there has to be something in there with which a reasonable person might disagree, otherwise it’s pointless. Who could disagree with: “We will strive to excel at providing service to our customers in all our interactions, and do so in the most efficient and cost-effective way possible”? If there’s anyone in your company who is surprised or challenged by that “vision”, your HR group is asleep. If your vision statement is so vague and buzzword-laden that it doesn’t cause even the merest of doubletakes in the people reading it, then it’s meaningless and isn’t worth the PowerPoint slide it’s ignored on.

A vision statement or motto has to be something that you can compare your everyday tasks to, to see if you’re meeting its challenge. “Is this evil?” is a great daily question to ask when your company’s vision statement is “don’t be evil”. End Sidebar.]

But here’s how these executive groups are so remarkably effective: they prioritize all work, ensure that everything has a deliverable date, serve as the only bodies that make all of your management aware of the projects you’re working on, and communicate everything to everyone.

How do they accomplish this amazing amount of work? By scheduling meetings, and putting your boss (or your boss’ boss) on the agenda. Here’s how it works:

Executives read a magazine where some new technology is lauded. Executives demand to see a presentation on how company is implementing this technology (i.e. the “strategy”). Your boss is identified as the person in the company who kinda sorta has responsibility for that technology area, especially if you squint at the org chart and look at it sideways. Your boss remembers that you mentioned this technology in passing at some point in the past. And so the work is:

  • Prioritized. Since the presentation will be next week, your work on this technology has the highest priority (at least until five minutes after the presentation ends).
  • Has a deliverable date: the next deliverable will *always* (and I can’t stress this enough), *ALWAYS* be two weeks after the date of the executive meeting. This is close enough that it feels like the company is truly making progress in this area, yet far enough away that the execs will (a) believe that there is substantial work involved (otherwise it would only take a day or two), and (b) forget about it and not follow up. Note: that doesn’t mean that the entire project has to be completed by then, just the next deliverable. The project completion date should always be 12 or 18 months away. This is called “setting a workable timeframe.”
  • Has management awareness: since everyone is in a panic to get the presentation done, it’s everyone’s highest priority, from the exec all the way down to you. Now everyone knows what you’re working on, since they’re up at night sweating that you complete the presentation before the executive board meets. This is called the “status report”.
  • Communicated: this is accomplished by sticking your PowerPoint presentation up on the Portal in the home page for the executive group. Which no one ever reads… not even the execs who attended (why would they? They were there! They know what was discussed! No need to read the minutes!). This is called “collaboration”.

And that’s how things get done.

Details here and here, in case you haven’t already seen it.

According to F-Secure, they were the ones who found this and informed Symantec about it last year… and they claim it’s “not nearly as bad” as the Sony issue.

Interesting comment, coming from a competitor.

I disagree with this excerpt, though. They claim it’s not as bad because:

“The main difference between the Symantec rootkit and Sony rootkit is not technical. It’s ideological. Symantec’s rootkit is part of a documented, useful feature; it could be turned on or off and it could easily be uninstalled by the user. Unlike Sony’s rootkit.”

Regardless of whether Symantec had better “intentions” than Sony, was there ever an expectation that the average user would truly understand how this “documented, useful feature” worked, or how it exposed them to potential exploits? Do you think the average Symantec user truly understands what the risks are of leaving the feature enabled? Both companies implemented a technology that they thought would be useful, and they did it in a way that exposed the user. In fact, you could make the claim that the Symantec rootkit is even worse, since users were given a false sense of security about their systems because they had software installed that was supposed to protect them from exactly this kind of exposure. And as a security company, they should have known better than to implement it the way they did.
From a security perspective, the Symantec issue is just as bad as the Sony issue, since both expose the user in the same way, regardless of what the ultimate intention was. Whether you believe that Sony’s rootkit is ideologically worse probably depends on your opinion on DRM: personally, I don’t like it, and I fall on Cory Doctorow’s side of that debate. But if you’re in favor of DRM, then both companies did the exact same thing: they tried to implement functionality, and they unfortunately did so in a way that was profoundly broken.

…and in many cases, things I don’t currently have. Info based on the 14 months I’ve been working from home so far.

Some of these are very specific to working from home long-term, not just the occasional day here and there, but they are points to consider anyway.

  • A comfortable chair: get an Aeron, you’ll be sitting down a lot more when you work from home than when you’re at the office. If for no other reason, you don’t have to stand up and walk over to a conference room, since you’ll be on a phone conference. Plus, the coffee machine is closer to my desk at home than it is at work, so I’m back sitting down faster. STATUS: got one of these, probably the cheapest Aeron model, but it’s well worth the money. I’ve tried several other models from Office Max and the like, but by the time you find one you like that doesn’t cut off circulation to your legs over the long run, you’ll have spent the equivalent of the Aeron anyway. And by the way: buying Aerons for $100 on eBay from failed dot-com liquidators? May have happened in 2001. It doesn’t any more.
  • A good desk, with plenty of space. This one is obvious and is true for the office as well, but since you’ll have more personal stuff (mail, magazines) hanging around at home, you’ll need an extra corner. Don’t get the IKEA cheapies, get something solid that won’t drop the sliding keyboard tray into your lap every third time you open it.
  • An espresso maker. If you’re a coffee drinker, you don’t want to have to get into your car every time to get to the local shop. I’m not a coffee snob, I don’t want to spend $800 for the latest Gaggia, I’m not that good at telling the difference. I use a Starbucks Barista, and it works for me. If you want regular coffee, get one of those and make Americanos instead: they taste better. STATUS: I got the Barista as a gift from someone who bought it and never used it. Score! If you can get a burr grinder, you’ll get better/fresher results by buying whole beans, but that will cost you $150 for a good grinder. Instead, go to your local coffee shop, get espresso-ground coffee, and store it in an air-sealable container. Remember, I’m not a coffee snob, so there are probably coffee connoisseurs out there fainting away as they read this, but it tastes fine as long as you use it within 5 days or so.
  • As a counter to the previous item: a local coffee shop with free Wi-Fi. You’ll need to change your environment every so often to avoid going crazy, especially if you live in a very cold climate where you can’t leave the house for months at a time. In Minnesota, this means Dunn Bros, since Starbucks and Caribou both charge for WiFi (a business principle equivalent to charging separately for sugar these days), or–even better–your real, locally-owned coffeeshop (e.g. the Riverview). STATUS: got this, although I don’t make it out there that much these days since it’s too cold to bike. Yes, I’m a wimp.
  • A real speaker phone. I have an AT&T model phone that has a simple speaker phone in it (this one), but it’s not really suited for all-day use. I didn’t think I’d need one, but home phones do not work well for long conference calls, even with an earpiece.  STATUS: don’t have one, want one if someone else will pay for the $600+ that the decent Polycom models seem to require.
  • A bluetooth/wireless earpiece that can connect to a landline. The Jabra Bluetooth 6210 seems to fit this bill, but I don’t know what the battery life is like. STATUS: don’t have one, want one, not willing to pay $150 for one.
  • Separate systems for home and office use. Don’t mix the two, EVER! The temptation will be strong to put your software, your documents or your digital pictures on the office machine, especially if it’s the faster system or the bigger hard drive, but don’t do it. Seriously. There are so many reasons why mixing the two is a bad, bad idea.  STATUS: well, I have about 7 systems at home, and the office computer is the least powerful of them all, so this isn’t a big problem for me.
  • File/data sync software: use this to regularly copy the remote office files you need to your local system, otherwise you’ll be frustrated every time you open that 20MB Word doc across your cable modem, and you’ll kick yourself when your cable modem or the VPN are down. I use Groove, Outlook (in cached mode) and FileSync from FileWare, amongst other things. I don’t recommend the Microsoft SyncToy across a VPN connection: it’s awfully slow. To be honest I hesitate a little in recommending FileSync, since I’ve never heard back from them on 5 attempts to recover my registration info: I’m not sure they’re even still in business. But it’s a great app. I do love FolderShare (free!), and I use them for personal docs, just not on the office system.
  • Corollary #1: the copy on your home computer should NEVER be the only copy of a document. If it is, you are standing on a hill in a thunderstorm wearing chainmail and waving a golf club in the air, asking the Gods of Computer Catastrophe to pay more attention to you instead of trying to find more words that mean the opposite of “serendipity”.
  • Local backup. Just copy all the files you need to back up to the office computers, you say? Not such a good idea. If your office computer system (that one with the VPN software loaded) fails, how do you access them? I do regular backups of the docs on all my systems to DVD, so when (not if!) one of them fails I can access the docs from another one. Of course, the office DVDs need to be protected in some way: they may contain company confidential information, so it’s not sufficient to just copy all the files over: they must be encrypted and/or physically protected. STATUS: got a DVD burner on each machine. They’re cheap these days: I got Sony models for $39 after rebates. Just don’t buy too much media at once: the price on DVD+/-R media drops faster than you’ll use that 100-pack that looked like such a good deal at $80 last month.
  • Random collaboration tools: whatever you need. NetMeeting, some form of IM, video conferencing if your company supports it. You’ll need to do something to make up for the loss in face time, and in my case that means constantly reminding people of my presence via email, IM, internal blog postings… it’ll be different depending on what you do and what tools your company uses. STATUS: Outlook, Exchange IM, AIM, Trillian, iChat/AV with iSight, Polycom ViaVideo, WebEx… even PC remote control tools. I use ’em all, because I want to be available to anyone who wants to connect. Remote control tools are especially important because sometimes you just need to see what someone else is clicking on that gives that weird error, or they need to see your desktop so you can walk them through something.
  • Scanner and printer. You usually don’t need a fax machine if you have a scanner, and I forward scanned copies of bills and invoices to our Finance department using the scanner and email. They actually like it more than faxes, which are just an analog copy of something that started out in digital and will go back to being digital in the end after about seven conversions. STATUS: have ’em. Got three scanners, in fact: one just for slides, but that’s for personal use.
  • A decent knowledge of computers. You’re remote: from the Help Desk’s perspective you might as well be on another planet. Learning how to troubleshoot basic computer problems will save you hours of frustration and waiting while the Help Desk tries to get their remote support tools to work over the VPN (which they were never designed or architected to do in the first place). STATUS: computers are my life. I’m OK.
  • A door. You need to be able to separate work and home life, and that means sometimes being able to shut the door. If for nothing else, to keep your recently toilet-trained son from running in and proudly shouting what he just did in the potty… right as you start a video conference with the SVP of IT. STATUS: yes, that did actually happen to me. I’m not very good at keeping the door closed, since it starts to feel a little claustrophobic, but my family is very good at understanding that when I’m in the office, I’m at work.
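
The file/data sync habit in the list above boils down to something simple: regularly mirror anything newer on the remote (office) share down to the local disk, so you’re never opening that 20MB Word doc across the cable modem. Here’s a minimal sketch of that idea; it isn’t Groove, FileSync or FolderShare, just an illustration, and the one-way, newest-wins logic and the paths are simplifying assumptions.

```python
import os
import shutil

def pull_newer(remote_root: str, local_root: str) -> list[str]:
    """Copy files from remote_root to local_root when the remote copy
    is newer than (or missing from) the local one. Returns the paths copied."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(remote_root):
        rel = os.path.relpath(dirpath, remote_root)
        dest_dir = os.path.join(local_root, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dest_dir, name)
            # copy2 preserves timestamps, so an unchanged file
            # won't be copied again on the next run
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)
                copied.append(dst)
    return copied

# Hypothetical usage: run this on a schedule while the VPN is up.
# pull_newer(r"V:\TeamDocs", r"C:\LocalMirror")
```

Note that this is strictly a read-side cache: per Corollary #1, the local copy should never become the only copy.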

I’m not including the standard software you’ll need as part of maintaining a healthy and usable PC/Mac (antivirus, anti-spyware, email, productivity suites) since that will be dependent on what your company uses, and is not specific to telecommuting. But you do need this stuff on your personal computers too: that’ll be another post.

This is probably the best-written review and preview of the functionality in Office 12 I’ve seen so far. Lots of screenshots.  I like the Ribbon, but it’s going to be a major interface change for users: how many will go right in and click the “view old-style menus” on first launch?

I went to the local SharePoint user group meeting yesterday and was pretty impressed with the integration with the next version of SharePoint (which is not fully addressed here), but the workflow and wiki/blog/RSS capabilities for SP that integrate right into Office are very nice. I’m not in love with the current version of SP as a blog tool: the permissions alone to allow comments without allowing full posting rights are a nightmare that no one has been able to implement correctly (without it being a maintenance nightmare), and you can’t really blog from Outlook or any other app in Office in any way that has meaning for the average user. That doesn’t stop people from trying to use it for blogging: give someone a hammer, and all problems start to look like nails.  But the next version… ahhhh, the next version.

Isn’t that always the case?

Strangely enough, the presentation mentioned that it was only supposed to be shown under NDA, but no one there signed anything.  I know I didn’t.  So how much can I talk about it?  Can I mention the Deleted Items folder?  The one we’ve been asking for since, oh, the Mesozoic era?

…and a warm welcome to 2006! Not for any particular reason, 2005 was a perfectly good year. But new starts are always appreciated.

Speaking of new starts, I’ll join everyone else in the world in recommending Merlin Mann’s 43 Folders blog. His “email DMZ” post is brilliant, and a variation on what I do every so often: grab all of the emails in the Inbox that are older than X (I use 2-3 weeks) and DO SOMETHING ABOUT THEM. Delete them as a last resort, but follow up on them, create a task, create a calendar item to set aside some time to work on them, but do something. Get them out of the bottom of the pile and deal with them… I’m always surprised about how many emails are sitting down there at the bottom, that I couldn’t answer at the time but I can now, or that have become irrelevant in the meantime.

It takes care of a lot of emails in one fell swoop, which makes you feel better about yourself (as long as you were honest and didn’t just delete the whole lot), and gives you some extra incentive to get other work done once you’re in the groove and feeling like you’ve accomplished something today.

And of course, the biggest question so far has been… “But will it run Windows? Or Linux?”

Apparently there’s no reason they won’t, in spite of the original protection that kept the beta versions from running on non-Mac hardware.

(Warning: annoying Javascript-based interface)


“Phil Schiller, Apple’s senior vice president of worldwide product marketing, said in an interview Tuesday that the company will not sell or support Windows itself, but it also has not done anything to preclude people from loading Windows onto the machines themselves.

“That’s fine with us. We don’t mind,” Schiller said. “If there are people who love our hardware but are forced to put up with a Windows world, then that’s OK.”

So how about a dual-booting MacBook running OS X and RedHat with OpenOffice?   Or triple-booting running the Vista beta and Office 12?  Joy!
Now, there’s no reason to believe this won’t change in the future (similar to how Apple has limited certain functionality of iTunes with each update), but at least it’s not forbidden out of the gate.  I assume that the shared underlying hardware architecture will mean that virtualization of Windows apps under OS X should run faster, but there are probably caveats to that, and running Virtual PC under Rosetta is going to be painful for the time being (multiple layers of virtualization, translating from Intel to PowerPC back to Intel again).

So far we do know that the MacBook won’t have FireWire 800 (only 400), and there is still no word on the battery life anywhere.  That’s troubling, and would be a large compromise to that 4-5x speed increase (on SPEC numbers that Apple has long derided as irrelevant) if you can only get an hour’s worth of power on the plane.  That speed increase applies to the computer, but it doesn’t mean I can get my work done 4-5x faster in order to finish before the battery dies.

I had mentioned del.icio.us in some meetings last week at work, because I am a strong believer in their “ad hoc taxonomy” approach (which allows end users to think about classification after the data has already been entered, not before, where it would raise the bar for data entry). As it turns out, Yahoo! seems to agree: they just bought ’em. Genius move for them, as del.icio.us will integrate nicely (philosophically as well as technically, one hopes) with their previous purchases Flickr and My Web.

Note that I don’t believe that this means formal taxonomies are useless or pointless or in any way inadvisable: quite the opposite. They are necessary on one end of the spectrum (e.g. the Enterprise Portals of the world) where structured information is a must. However, they are generally too complicated for the average user who just wants to send an email or post a document, which means that a rigorous, structured taxonomy is actually a significant barrier for data classification. Users will prefer to use a collaboration mechanism that doesn’t require a taxonomy, and also unfortunately doesn’t have any public way to perform searches on the data across users: Outlook.

I believe that there is a standard bell curve on this: along one extreme, rigorous taxonomies with strict data classification that requires its adherents to fully grok both the data they’re putting in and the *whole* taxonomy (not just the little bit they are using, otherwise how would they know it’s in the right place?). Along the other extreme, completely unclassified data with no taxonomy, no useful metadata, and no search/indexing capabilities. The problem? Unfortunately, because we don’t currently implement any tools that hit the middle of that bell curve, almost *all* of our data is ending up on this extreme: un-indexed, un-searchable, un-reachable by anyone save the original data creator, in an Inbox, a home folder or a SharePoint site only a handful have access to.

In the middle, there are less rigorous taxonomies that are user-defined in an ad hoc fashion, similar to the way del.icio.us does it. The user defines the tags that are useful and significant to them, selecting not only from their own classifications but from the classifications that the masses have associated with the same or similar data. This “mob-developed” tagging definition (call it “mogging” or “mobtagging” to give it a nice trendy neologism) does two things: (a) it reduces the amount of work required to tag/classify data, which makes it more palatable to the user, and (b) it actually demonstrates to the user the benefits of a taxonomy or tagging system because they are using it directly on their own data. They participate in the taxonomy and the data classification without thinking about it, because (and here’s the important part) the tags are public knowledge.
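
The “mobtagging” mechanism above can be sketched in a few lines: when a user goes to tag an item, suggest the tags the crowd has already applied to it, most popular first. This is a hypothetical illustration, not del.icio.us’s actual implementation; the URL and tag data are made up, and a real system would keep per-user tag assignments in a store rather than a dict.

```python
from collections import Counter

# Maps an item (e.g. a URL) to every tag any user has applied to it,
# one entry per user per tag -- made-up sample data.
crowd_tags = {
    "http://example.com/article": ["folksonomy", "tagging", "folksonomy",
                                   "web2.0", "tagging", "folksonomy"],
}

def suggest_tags(item: str, limit: int = 3) -> list[str]:
    """Suggest the crowd's most popular tags for this item, so the user
    can classify data by picking rather than inventing a taxonomy."""
    counts = Counter(crowd_tags.get(item, []))
    return [tag for tag, _n in counts.most_common(limit)]
```

The point of the sketch is (a) above: tagging becomes a one-click pick from popular suggestions instead of a classification chore, which is exactly what lowers the barrier for the average user.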

However, even with del.icio.us I believe there’s something missing: a human eye above the morass, gently nudging the tags in one direction or another. I’m not talking about just fixing typos: it’s about noticing that particular links and particular content and particular tags are associated, so it would behoove the company to tag other, related links with the same tags, suddenly making them available via the search and tagging terms that the users are already using. This is something that is not feasible at the internet level, but is definitely achievable at the enterprise level.

Of course, there’s no one tool to get there immediately, and I don’t really believe that this “human eye” concept is automatable using today’s technology anyway: maybe Google has something in the works (and in fact, one could argue that Google Base is a step in this direction). However, the key to all of this is collaboration, indexing and search, and integrating these things across all the tools that end-users use to publish their information. It’s why you’ll constantly find me ranting and raving about collaboration and publishing information much further than we do today, in ways that the users (not the I.T. people) find easy to manage.

Indexing: X1 Enterprise Edition (indexes and searches across file stores, SharePoint, email)
Collaboration: SharePoint Portal, del.icio.us, Groove, Outlook, blogs, Flickr

Rant over. For now.

Edumification wants to be free!

Mountain Motion: The Adventure of Physics

Seems to be a pretty complete text: can’t wait to get to the chapter where they explain the whole unification theory! Ah, here it is, chapter XII… not yet available?


Here’s my favorite quote out of context: “The limit speed for Olympic walking is thus only one third of the speed of light.” Nice mix of serious, complex subject with some easy to understand examples.

Ray Ozzie does it again: he’s proposed (under a CC license, no less) a new standard that leverages RSS to allow multi-master sharing of the info that you need replicated, with appropriate filters. As an example, think of having all of your calendars (private, public, home, work, shared with spouse, shared with study group) managed through one interface, with the updates only going where you want them to go, thus keeping your worlds as separate or as together as you want them to be. And this applies to your contacts, your files, any list you have anywhere of stuff that needs to be replicated elsewhere.
It’s marvelous.
Here’s the draft spec for SSE, and a FAQ.
Ray has a blog: only two items so far, but it will be one to watch.
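
To make the multi-master idea concrete, here’s a rough sketch of the core merge logic. This is not the actual SSE wire format (the draft spec extends RSS/OPML with sync metadata); the dict-of-tuples data model, the version numbers, and the item names are all simplifying assumptions for illustration. Each replica tracks a per-item version, and when two replicas merge, the higher version wins.

```python
def merge_replicas(a: dict, b: dict) -> dict:
    """Merge two replicas of a shared list (calendar, contacts, etc.).
    Each replica maps an item id to a (version, payload) tuple; the
    copy with the higher version wins, item by item, so either side
    can make updates -- that's the multi-master part."""
    merged = dict(a)
    for item_id, (version, payload) in b.items():
        if item_id not in merged or version > merged[item_id][0]:
            merged[item_id] = (version, payload)
    return merged

# Hypothetical usage: your home calendar and work calendar each edited
# independently, then merged without either one being "the master".
home = {"appt1": (2, "dentist 3pm"), "appt2": (1, "lunch")}
work = {"appt1": (1, "dentist 2pm"), "appt3": (1, "status meeting")}
combined = merge_replicas(home, work)
```

The filtering Ray describes would sit on top of this: you’d merge only the items whose tags or categories you’ve chosen to share with that replica, which is what keeps your worlds as separate or as together as you want.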

Just some thoughts from a conversation on parenting, that I had over the weekend. Nothing in here is a final statement of belief.

Over thousands/millions of years of evolution, humans (and in fact, one could argue, most animals) have acquired a sense of need to fulfill basic requirements for survival. This need was relatively constant, since these basic requirements weren’t always in full supply. Specifically:

– Physical nourishment (food, water, as much fat as could be acquired in order to sustain through lean times or when the gazelle had too much caffeine that morning)
– Shelter (a roof over one’s head, whether that roof be shingled or the inside of a cave)

So we are programmed to always be in pursuit of those basic requirements. All the time… we don’t really have an “off” switch for it. We have an “I’m full” switch, which only fires way past the point when we’re full, and that is why people recommend eating your food slowly: the triggers that say “I’m full” only reach your brain about ten to fifteen minutes after the fact. And even then, the switch stays on only briefly, turning itself off after about an hour.

This serves as a possible (at least partial) explanation about obesity and consumerism in societies in which basic requirements (food, shelter) have been fulfilled. Even though you have enough nourishment and a roof over your head, that doesn’t stop the programmed “need to pursue”. Even though you do not want for anything right now, your body/mind is constantly pushing you to prepare for the time when you will, which as far as it knows (from thousands of years of evolutionary training) will most probably be very soon.

A pat little explanation for ennui as a whole. And as with most broadly oversimplified arguments, there’s some truth in it as well as some analogies that have been stretched too thin. But we have been trained for millennia to always be in pursuit, and to feel only basic, temporary satisfaction when we achieve the goal. What we have now is the spilling out of that “need to pursue” beyond the realm of basic requirements, since these have been met (at least in the society in which I live). Since we have no other defined triggers and requirements to meet, it gets messy: we pursue those things that give us a quick shot of that feeling of contentment that arises when you’re sitting at home around a crackling fire, surrounded by family and the remains of a really good meal: the Rockwell Thanksgiving feeling. The easiest route for that pursuit leads to more food, an obvious choice: it’s just more of what you already acquired. Otherwise, we end up with consumerism, monomania, addiction, etc.

Makes sense to me.