Comments, administrivia, and the future of the “infosec professional”

Back when the spam was spiraling out of control, I configured my blog to close comments after 90 days. I’ve removed the limitation now, for two reasons: the spam is under control, and I wanted to reply to a comment made to my post on IPsec/IPv6 direct connect.

On 13 August, jcorey asked about how to deal with those who firmly believe that the only answer to any security problem is to inspect everything at the edge. This is an important question, and I wanted to give Joe an answer. (You might have to scroll down when you click the previous link, it seems that linking to individual comments is broken.)

Today, 15 October, I wrote a little thesis as an answer to his question. I’m calling it out in a separate post because I want to make sure those of you with aggregators that don’t update when posts receive new comments still have a chance to reply with your thoughts. I’ll also repost it here:

jcorey-- You've nailed the biggest obstacle to deploying something like direct connect. Many security professionals have been taught that there is not, and never will be, a process or technology that allows you to trust anything originating from outside your corpnet. These professionals cling to this belief, and it has allowed the whole “detection” market to bloom.

Let me be clear: this assumption of total untrustworthiness is no longer absolutely true. Of course there will be times when unknown machines are used by known and unknown people to access your information. But what about one particular subset -- known humans, with known portable computers -- can't we do something better than treat them as toxic invaders?

Indeed we can. And that's what I'm proposing with direct connect. The technology -- managed, of course, with the right processes -- exists so that you can extend the trust to known computers even though you don't trust the network they're connected to. This is because you have mechanisms that:

1. Allow you to configure the machine according to your requirements (domain join, group policy)

2. Dictate computer and user authentication requirements (IPsec policies, smart cards)

3. Limit what the users of these machines can do (UAC, non-admin, Forefront Client Security, Windows Firewall, even software restriction policies)

4. Validate the health of machines initiating incoming connections and remediate if necessary (NAP, System Center Configuration Manager)

5. Limit the threat of attacks against stolen computers (domain logon, smart cards, BitLocker with TPM)
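The way these five mechanisms combine at connection time can be sketched as a layered admission decision. This is a hypothetical illustration only -- in a real deployment the checks are enforced by domain policy, IPsec, and NAP, not by application code, and the function and flag names here are invented for the example:

```python
# Hypothetical sketch of the layered admission decision described above.
# Each check corresponds to one of the numbered mechanisms in the post.

def admission_decision(machine_known: bool,
                       user_authenticated: bool,
                       healthy: bool) -> str:
    """Decide what to do with a connection from a remote client."""
    if not machine_known:
        return "deny"        # unknown machine: no basis for trust (mechanism 1)
    if not user_authenticated:
        return "deny"        # user failed IPsec/smart-card authentication (2)
    if not healthy:
        return "remediate"   # known machine and user, bad state: fix first (4)
    return "allow"           # all layers satisfied: trust extends across any network
```

For example, a known machine with an authenticated user but a failed health check yields `admission_decision(True, True, False)`, which returns `"remediate"` rather than an outright denial -- the point of the post being that known computers need not be treated as toxic.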

With the robust authentication, validation, configuration, and control mechanisms available to you, I simply don't see any need to fall back to “detection” now. Detection technologies were -- and remain -- necessary for the times when we have no clue about the health of client computers and no way to gauge the intent of the users. But it is truly reflective of a head-in-the-sand mentality to assume that this is a complete description of what's possible today.

You know, someone once asked me what it takes to be a security professional. I answered that there are two primary elements: become a networking/packet wonk, and be willing to change your opinions when the right evidence comes along. Indeed, I suspect that many security folk have forgotten the need to keep their wonkiness updated, which in turn makes them resist new ideas regardless of the strength of the evidence. I'm not very proud of what I just wrote, because I loathe generalities, but I'm not sure what else to think here. Sigh.

Joe’s question is important and strikes at the foundation of what it means to be a security professional today. I’m eager to continue this conversation, because it’s reflective of what I sense to be a radical shift in our jobs—we are, or should be, no longer the wolf-crying propeller-head who sits in the basement and twiddles with the firewall. Instead, our job should be defined as one who’s charged with protecting the organization’s information from attack, while maximizing its utility to authorized users, according to the principle of least privilege. Your thoughts?

Comments (14)
  1. Anonymous says:

MikeS– true, direct connect works best when the clients are Windows. However, we can still support heterogeneous environments through third-party NAP and group policy support provided by partner-created add-ons. And for instances like the one you mentioned — where you simply can’t make configuration decisions about computers you don’t own — there’s always Terminal Server. In the next security newsletter, I’ll have an article that covers this briefly.

Mikko– again, I’m not discounting Terminal Server; indeed, it’s a critical part of the complete design. In the detailed documentation I intend to start writing later, I will include that.

  2. Anonymous says:

    Marta– I’m sorry but there’s not a whole lot I can do to help you here. How do you know your passwords have been stolen? What evidence can you describe that supports this? What harassment are you receiving?

Perhaps the best thing to do is simply close those accounts down. Log into them, change their passwords, log out, and never check them again. Eventually (I think after 90 days) they will deactivate themselves.

  3. Anonymous says:

gbromage– No, the validation isn’t against threats; rather, it’s validating that the computer is configured the way you want before a connection is made. NAP gives you some of this; SCCM is more thorough (mostly through inventorying).
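One way to picture “validation as configuration compliance, not threat detection” is a simple comparison of a client’s reported settings against required policy. This is a hypothetical sketch -- the setting names are invented for illustration and are not a real NAP statement-of-health format:

```python
# Hypothetical sketch: health validation checks configuration against
# policy; it does not scan for threats. Setting names are illustrative.

REQUIRED = {
    "firewall_enabled": True,
    "antimalware_signatures_current": True,
    "auto_update_enabled": True,
}

def noncompliant(reported: dict) -> list:
    """Return the settings that don't match policy (empty list = healthy)."""
    return [key for key, want in REQUIRED.items() if reported.get(key) != want]

# A client that needs remediation before it may connect:
client = {"firewall_enabled": True,
          "antimalware_signatures_current": False,
          "auto_update_enabled": True}
print(noncompliant(client))  # -> ['antimalware_signatures_current']
```

Note that a fully compliant client produces an empty list, which is exactly why such a check says nothing about zero-days or rootkits: it confirms desired configuration, not the absence of compromise.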

    No configuration can completely protect you against zero-day exploits and rootkits. Most of these have to run as local admin to spread beyond themselves; that’s why it’s important that people run as standard user and that UAC remain enabled.

  4. Anonymous says:

    Thanks much … And yeah, I know, the blog spam is getting bad again. I can’t believe that there’s any kind of economic gain from it, I just don’t get it.

  5. ColonelBlinky says:

Totally agree on the last paragraph. I’d add that, based on sound strategies, policies, and procedures supported by the business (i.e., C-level), we are no longer part of a break/fix department living in a place with no sign of daylight, but an integral business unit.


  6. Orin Thomas says:

I was thinking about your comment about opinion revision in relation to the philosophy of science. Scientific theories map against observations of a static reality – that is, reality doesn’t change even though the theories we use to describe it do. Information security is even more complex because the things that we are modelling in our heads do not remain static – unlike the speed of light, which is the same today as it was 100 years ago, the way that computers and networks interoperate does change over time. The modelling in our heads needs constant revision because the thing that we are modelling does not remain static (and even if it did, the history of science shows that things a lot of smart people once believed were fundamental truths turned out not to be so fundamentally true when someone else came along with a better explanation). Of course, new evidence can suggest that one doesn’t understand something as well as one thought one did, or it could mean that the thing that you did understand was actually flawed in some way.

  7. MikeS says:

    Steve:  I agree with you completely, but unfortunately I think there’s one element you didn’t include in your "thesis":  the mixed-client corpnet and extranet.

I understand that if all the correct (Microsoft) pieces are in place, then we can start to calm down the "enemy at the gates" approach to security.  However, even though we are thoroughly Microsoft on the inside of our company, many of the people who use our systems are NOT Microsofties, and aren’t very tolerant of us telling them how they should/shouldn’t configure their networks, PCs, firewalls, etc. to use our connected services.

Overall, I really, really like where Microsoft is going with their security technologies and emphasis, but I fear that in the near future the only people who will truly and completely benefit from this "new era" will be the folks with a self-contained corpnet and roaming users who venture beyond the gates.

    Those of us who have to deal with B2B service delivery to diverse customers will still see limited benefit from these new technologies.  The main reason?  As with any larger company, it’s nearly impossible to push through such sweeping changes unless the ROI can be demonstrated.  And despite the obvious benefits to our OWN people, reconfiguring our network in such a manner would NOT generate a high-enough ROI for the beancounters to approve such a major change.

    So, I guess we’re stuck with evolutionary (not revolutionary) baby steps in this direction.  I look forward to it, though, and to your future posts.


  8. Paul Y says:

I gave up worrying about the device and network several years ago. Today I worry about only two things.

    1. Who are you, and what should you be allowed to access.

    2. How do I manage the bandwidth.

Devices don’t access data; the most they can damage is point 2. Bandwidth impacts are irritating, but not critical long term. Data loss is the problem, and it’s device and network agnostic.

    Of course – there is the problem of sensitive data accessed on an untrusted device, and that device using those credentials or storing that data. I haven’t seen ANY good answers for this space yet.


  9. mikko says:

    Hi Steve

I saw your demo at TechEd US and it was interesting and cool, and as MikeS says, it works if you have all the MS parts in place… this shouldn’t, however, be the only way of access. You can have the opposite as well with terminal server technology, where no data ever leaves the data center and no client ever has a straight connection to it. So for your trusted MS clients you could use the "new" way, and for the rest you could use the TS way, or Citrix if you prefer that. One doesn’t have to rule out the other, which is a bit of the feeling you get when you’re so excited about this "new" thing, Steve 🙂



  10. Marta Guillen says:

    Mr. Steve,

I have to apologize for using this way of communicating with you, but after hours and hours of searching the Web for technical support in order to get help and solutions to my problem, I bumped into your blog, which I found very interesting.

    I am facing a security problem, where somebody has stolen passwords for old hotmail accounts of mine and is using them to harass me and harm me in many ways.

I don’t seem to be able to find answers anywhere and don’t know how to stop it. Would you be so kind as to help me with this problem if you could?

    Thank you very much, Best Regards.

  11. Greg Bromage says:

    Steve, I did enjoy your presentation at TechEd EMEA in Barcelona on this.

I still have a concern over the portion "Validate the health of machines initiating incoming connections and remediate if necessary".

Validating the health would mean validating against known threats, surely?  It’s the unknown ones that concern me more.

I would think that there is a risk that people might make a basic assumption that a "trusted" client is automatically trustworthy.

A zero-day exploit would not be picked up by a health certificate, and once the client is compromised, that negates the BitLocker and client-side firewall mitigations.

Further, if a client were rootkitted, it would not be detected by server-side validation -- how would the server tell, if the client (at the kernel level) is unaware that it has been compromised?  This could lead to an administrator assuming a trusted client is safe (for a given definition of "safe") and exposing more information than they should.

    It might be better to still consider these clients as manageable, but inherently untrusted.

I do realise that this is more of an implementation/assumption risk than an inherent design flaw, but it does still need to be considered.

    P.S. – there’s comment spam above…..

  12. Sanjay Tandon says:

    What the mind doesn’t know, the eyes can’t see.

    Corollary: If you don’t know what attack surface you’re exposed to, how can you adequately defend yourself?

    Further: If your people don’t know what they’re protecting your organization against, they can’t adequately protect you.

    In other words: …

    Know what I mean? 🙂

  13. .NET Library Developer says:

    You are doing a great job. Thank you.

P.S. Are you checking your comments? There are several ad comments posted in the last two days.

Comments are closed.
