What I Worry About When Web Publishing

In a word: Keyloggers. The Client Menace.

A particularly sharp chap at Tech.Ed asked me what I worried about when publishing OWA. Here’s my answer, in more general web publishing terms:

I’m an ISA guy (I know, ISA-related posts have been a little thin recently), so I’m used to publishing OWA (Outlook Web Access – that’s web-based email, folks) and web applications in general, from various vendors. I know you can do certain things to improve the protocol-level security of a web application, and that preauthentication is good, content inspection is great, and all that jazz. ISA is a really useful technology to implement in the defense of a web server.

Providing anywhere access to corporate resources is a Good Thing from a mobility and agility perspective, but from a security perspective, I’m hesitant to “just publish something” using domain credentials for preauthentication – even with Forms-based authentication and its nice expiring cookie system. The problem? I don’t like any situation in which I’m potentially typing my account credentials into an untrusted machine.

And that’s the key issue – you have to trust the endpoints in a given web transaction, not just the security “on the wire”. Security on the wire is important – SSL is how you ensure that none of the myriad networks your little packet might traverse between you and the bank gets an easy opportunity to steal your account details without even needing to be present – but it’s only part of the end-to-end security story. With on-the-wire security generally accepted to be “good enough” to stop the casual attacker, my gut tells me the local endpoint – and that’s typically the client – is the most frequent point of compromise.

Tristank’s Gut: “It’s normally much easier to compromise the local endpoint! BTW, feed me.”

Two-factor authentication using, say, SecurID tokens goes some way towards addressing part of the issue (as an attacker, you now need to steal the token as well as log the domain credentials), but if you have to type your domain credentials on an untrusted device at all, you’ve given the crown jewels away – those credentials can be stored and re-used at the attacker’s leisure. Perhaps they’re even able to wander into your building, find a quiet, uninhabited cubicle, sit down and start typing. Yes, I’m that paranoid.
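To make the expiring-code idea concrete: SecurID itself is proprietary, but the openly specified time-based one-time password scheme (TOTP, RFC 6238) works on the same principle, and a few lines of Python show why a keylogged *code* goes stale in seconds while a keylogged *password* does not. This is an illustrative sketch, not how SecurID is implemented.

```python
# Sketch of a time-based one-time password (TOTP, RFC 6238).  The code
# changes every 30 seconds, so a logged code is useless to an attacker
# minutes later -- unlike a logged static domain password.
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Return the current one-time code for a shared secret (bytes)."""
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Note the limitation the paragraph above describes: the one-time code rotates, but in most deployments the user still types a static PIN or domain password alongside it, and *that* part is logged and reusable.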

So how do you try to manage the keylogging risk?

 – user re-education: users need to know that an Internet café (or the spare computer of their mate Bob, who’s a bit of an IT genius) is a less trustworthy keyboard than their corporate laptop, assuming you’re taking steps to protect the laptop. If at all possible, train users to see unfamiliar keyboards as dirty: imply that many unpleasant and ooze-inducing communicable diseases are regularly caught from unclean keyboards, and that only regularly scrubbed keyboards tested by your IT department are immune (be prepared to spend some time distributing the ISOWipes to panicked users after this gets around). Depending on your level of paranoia, you might allow the use of an Internet café in an emergency… but is it ever really that much of an emergency?

 – use client certificates to authenticate users, so that they (typically) can’t connect from an untrusted device, and don’t get the opportunity to type in their domain credentials anywhere until the client cert is verified. If the certificate’s associated private key isn’t marked as exportable, all but the most gifted users won’t be able to move it. Requiring a client certificate means that at some point the device the user is connecting from was *probably* in a trustable state, assuming your certificate issuance policies are reasonably strict – so it’s probably one of your corporate laptops. (Unless the haxx0r has worked out how to get the user’s client certificate and associated non-exportable private key off the box – but at that point you’ve already allowed the haxx0r access to the box, and you may have already lost.) Cost: running and managing a PKI.

 – use ActiveSync with a mobile device. Smartphones and PocketPC devices are typically less susceptible to keylogging because they stay with the owner and are at least somewhat protected by another PIN, but have a different set of challenges associated with their management (for example, how to physically chain one to the employee so they can’t lose the thing, but still permit them to take a shower occasionally without damaging the device). The Remote Wipe stuff in Exchange 2003 SP2 looks really good, too.
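On the client-certificate option above: ISA and IIS configure this through their own UIs, but the underlying TLS behaviour can be sketched in a few lines of modern Python – the server demands a certificate signed by your corporate CA before any HTTP (and therefore any logon form) is exchanged. The file names here are placeholders for illustration, not real paths.

```python
# Minimal sketch of "no client cert, no conversation": a TLS server
# context that rejects the handshake unless the client presents a
# certificate chaining to our corporate CA.  The user never gets as far
# as typing domain credentials from an unenrolled machine.
import ssl

def make_context(ca_file, server_cert, server_key):
    """Build a server-side TLS context that requires client certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
    ctx.load_verify_locations(cafile=ca_file)   # trust only the corporate CA
    ctx.verify_mode = ssl.CERT_REQUIRED         # handshake fails without a cert
    return ctx
```

The important line is `verify_mode = ssl.CERT_REQUIRED`: authentication of the *device* happens at the TLS layer, before the application ever sees a request, which is exactly the property that keeps credentials off untrusted keyboards.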

Each solution has an associated management overhead or technology cost. At the end of the day, it’s a business decision what level of value/risk is attached to a remote web access scenario, and which methods of mitigating that risk are acceptable. The tyranny of management is that the business will scream at IT to implement something simple, easy to use, and accessible from anywhere, so the risks need to be clearly articulated back to them (to avoid the louder screaming when you implement what they asked for, and it’s your fault that XYZ happened/was compromised/went wrong)…

If you want to help me sleep at night – and stop my gut talking to me – let me know how you manage this risk, especially if there are other interesting products and/or techniques I haven’t covered above…

Comments (7)

  1. Anonymous says:

    A keylogging comment dissected. And hopefully addressed. But mainly dissected.

  2. Anonymous says:

    I had the unthinkable yesterday, I couldn’t find my phone!  This was slightly complicated by the…

  3. Anonymous says:

    If elected I solemnly promise no further technical content until FY07.

  4. Andrew Dugdell says:

    My first thought is to amend your security policy (if one exists) to have users change their password after public conferences/road trips. If you need some incentive to convince them, the Defcon wall of shame (www.techfreakz.org/defcon10/?slide=38) does wonders. And I totally agree with "user re-education/awareness" – training users to change their password at *trusted* locations. But these are just my personal thoughts. I’m keen to see what others think as well.

    PS: do you trust this keyboard: http://dennisjudd.com/albums/funpics/tastatur.sized.jpg

  5. Tristan K says:

    That keyboard is about right for the mental image of keyboard dirtiness I’m trying to conjure up!

  6. Doofusdan says:

    Very much on point.

    Re the certificate solution you mention – yes, maybe that means the machine was ONCE trusted. But given the current state of Windows in particular, if a machine is not patched, does not have current AV software running, does not have current antispyware protection running, and does not have a good software firewall protecting it, I am not sure I can trust it NOW.

    At the same time, I don’t want to lock out access from clients that are not necessarily my company’s to control. For maximum usefulness, people should be able to securely access web services from the widest range of machines, from potentially trustable ones (employees’ personal computers for example) to completely untrustworthy ones – for example, those at a net cafe which pays no attention to security, because it doesn’t seem to enhance the bottom line. In the middle of the spectrum are public internet access terminals at places which should be reasonably secured – like at a conference or university – but unfortunately it is so much work to keep clients secure that many such well-intentioned places do not necessarily cover all the bases.

    That’s my problem. How do I address it?

  7. Tristan K says:

    Hi Dan – the response ballooned, so I posted a new entry for it here:


    Hope there’s something useful in the mess that helps clarify it all.
