A comment from my diatribe about keyloggers from DoofusDan inspired more words (and unfortunately for this time of night, ideas) than I was comfortable putting in the comments section, so a new post was warranted. Numbers are key points I’d like to try to address.
Re the certificate solution you mention – yes, maybe that means the machine was ONCE trusted. But given the current state of Windows in particular, if a machine is not patched, does not have current AV software running, does not have current antispyware protection running, and does not have a good software firewall protecting it, I am not sure I can trust it NOW.
At the same time, I don’t want to lock out access from clients that are not necessarily my company’s to control. 
For maximum usefulness, people should be able to securely access web services from the widest possible range of machines: from potentially trustworthy ones (employees’ personal computers, for example) to completely untrustworthy ones – say, those at a net café that pays no attention to security because it doesn’t seem to enhance the bottom line.
In the middle of the spectrum are public Internet access terminals at places that should be reasonably secure – a conference or a university, say – but keeping clients secure is so much work that many such well-intentioned places don’t cover all the bases.
That’s my problem. How do I address it?
Glad you asked: I’m here to sell you a solution*! Actually, I’m not, which is probably why I worry about it. Let me preface the rest of this with: Dan, I wish I had better answers for you, but between the Ten Laws and a rock, you have a very hard place to inhabit. Of the Laws, almost all (perhaps not 4, 8, 9) apply to this scenario.
The key problem is: what do you trust, why do you trust it, and how do you mitigate the risk from untrusted items, weighed against their utility or value?
1. On certificate-based authentication (CBA): I agree, and you’ve nailed what I was trying to convey – you’re essentially making the claim that at the point the certificate was installed, the machine/user was trustworthy. What happens once that machine/user goes out into the real world is the ongoing threat-management piece of security: maintaining patch compliance, keeping AV software up to date, educating users on the dangers of malware and phishing, and so on.
There’s a secondary benefit to CBA: for pre-authentication, no username or password needs to be typed – the certificate provides the credential. However, using out-of-the-box software on Windows, that credential isn’t then delegable (if that’s a word) – to my knowledge – to the back-end server.
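To make the pre-authentication point concrete, here’s a minimal sketch of what requiring a certificate as the credential looks like on a TLS endpoint, using Python’s standard ssl module – the function name and file paths are hypothetical placeholders, and a real deployment would of course also map the presented certificate to an identity:

```python
import ssl

def make_server_context(server_cert=None, server_key=None, client_ca=None):
    """Build a TLS server context that demands a client certificate.

    The certificate *is* the credential: no username or password is
    typed, so there's nothing for a keylogger on the client to capture.
    The path arguments are illustrative placeholders.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if server_cert:
        # The server's own identity.
        ctx.load_cert_chain(server_cert, server_key)
    if client_ca:
        # Only certificates issued by this CA are acceptable -- i.e. only
        # machines/users that were trusted at enrollment time.
        ctx.load_verify_locations(client_ca)
    # Reject any client that cannot present a valid certificate.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The handshake itself then does the authenticating; the open question from above – whether the machine is still trustworthy *now* – is exactly what this can’t tell you.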
Two-factor auth (a token or smartcard) can also address the core issue, as long as it avoids the fundamental problem of typing a reusable username and password while remaining accessible from a remote machine. But there are sometimes other drawbacks to such an approach, particularly around interop – perhaps your One Time Password (OTP) doesn’t actually let you sign on to your mailbox without also requiring your domain credentials… in which case it’s preventing remote access to that particular system, but what else is left open?
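For a feel of why an OTP blunts the keylogger threat, here’s a sketch of the counter-based HOTP algorithm from RFC 4226 (the scheme many hardware tokens use) – the code a keylogger captures is a one-shot derivative of a shared secret, not the secret itself:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an RFC 4226 one-time password.

    The token and the server share `secret`; each advances `counter`
    per use, so a logged code is worthless once it has been accepted.
    """
    # HMAC-SHA1 over the 8-byte big-endian counter.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = mac[-1] & 0x0F
    # Take 31 bits from that offset and reduce to the requested digits.
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# With the RFC 4226 test secret b"12345678901234567890",
# counter 0 yields "755224".
```

The catch is exactly the cost mentioned later: you now have tokens to provision, secrets to protect server-side, and counters to keep in sync.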
2. On the “clients that aren’t mine to control” front, I think you’re describing a subtly different problem from the one I’m ranting about: you’re describing generally offering some type of web-borne service to clients, and the question for me is then whether any such service can be offered securely.
If you’re protecting the service with Windows domain credentials – or, to be more general, with any generally useful username and password for your environment (let’s not leave out NetWare or NIS shops here) – then by requiring the user to type their credentials to access the service, you’re already (implicitly) trusting the endpoint on which they’re typing those credentials.
But going a step further back, you’re already trusting that user not to pass on, duplicate or publicize those credentials, and in extreme cases, you might not even know who that user is ahead of time, or there might be a shared username and password! So how much do you trust the user? Enough to let them authenticate with your production forest (or tree, or realm, etc)? Are you giving them an account with permission to sit down in your building, and start using one of your computers? Or would an “extranet” forest or domain be more acceptable for managing risk? Could you require a one-time password for access to the resource and still make it useful enough to be used? What’s the cost of implementing a more secure solution? What’s your service worth?
After all is said and done, I’d argue that the level of security required by the service dictates the endpoints at which the service will be available, and the mechanisms that are required to access the service.
3. In short: I think the problem with this assertion is that we’re assuming the staff at the conference/university can be trusted. If they set the machine up, that’s the primary risk, to my mind. The risk of others abusing the machine is somewhat secondary to the owner doing so – but if everyone owns the machine but you, you’re likely to be the victim. (Note: I’m not advocating 0wning every machine you sit down at, just making a point!)
4. And just as a note – Windows is called out above, but the device-trust problem I’m describing isn’t specific to Windows. To provide an obscure and contrived evil example: say you used an OpenBSD-based system at an Internet café that had a big red sticker on it: “OpenBSD – the most secure *nix variant! Set up to be secure!”
The problem is that you’re choosing to trust the Internet café staff. You have to trust that they’re not the ones who installed the keylogger on this machine, enticing high-value credentials with the promise of better security! Can you trust a client device based on the OS alone? It’s certainly part of the determination process, but it’s not the whole story; there’s a ton of other context that applies.
Epilogue: To try to come back to some sort of belated central theme that answers your question (not with an answer, but with a question) – the objective is really to define the service you want to offer, work out who needs to use it, then work out how you can offer that service only to those entities.
Keylogging as a risk makes it harder to trust the client device, and it’s not safe to assume that a client actually needs to be pwned in order to be evil – nasty folk in positions of authority are at least as capable of building them bad as the hax0rs are of retrofitting the evil later. OTPs can be used to mitigate some of that risk, but again, you need to balance the utility against the capabilities and the cost…
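On the mitigation side, the reason an OTP limits what a keylogger (or an evil-by-design terminal) gets is that verification is one-shot: once the server accepts a code, it advances its stored counter, and the same code can never be replayed. A sketch, again assuming the RFC 4226 counter-based scheme – the `verify_hotp` name and the `window` look-ahead (for tokens whose counters have drifted) are my illustrative choices:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 one-time password for a given counter value."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_hotp(secret: bytes, submitted: str, last_counter: int, window: int = 10):
    """Check `submitted` against counters just past the last accepted one.

    Returns the new counter to store on success, or None on failure.
    Because the stored counter only moves forward, a captured code is
    useless once the legitimate submission has been accepted.
    """
    for c in range(last_counter + 1, last_counter + 1 + window):
        if hmac.compare_digest(hotp(secret, c), submitted):
            return c
    return None
```

So a keylogged code has to race its owner to the server, and loses the moment it has been used once – which is precisely the property a reusable domain password lacks.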
So yes, it is a problem, but it’s not a new problem per se – there’s just no simple solution. 2-factor auth and OTPs go some way to addressing the issue, and are certainly worth considering, but there’s an associated cost, both in technology and management terms.
As an aside: If you’re interested in digital identity and to some extent the future of Microsoft’s identity technology, I find Kim Cameron’s Identity blog to be a fascinating source of insight into identity, privacy and security (not just authentication) – definitely worth subscribing to.