Going through UAG traces can be a tedious task, especially when the server is busy (a busy server can generate over 10,000 lines of trace in a single second). If you have a failed request that you need to trace, the log can be overwhelming even if the server is otherwise completely idle, but understanding the UAG engine can make this easier.
The engine handles application publishing as a four-step request process. First, the client sends a request to UAG, which looks like this in the trace*:
You can see how the request specifies what it’s doing (“request from client to filter”), and carries a unique PFC code that follows the request until its conclusion. That’s useful because the server may handle several requests at once, and the upcoming trace blocks may refer to any of them, so the PFC lets you tell them apart. The info:URL line shows which URL was requested, and it’s followed by the headers, the cookies and the hostname of the server. After that, you start to see the various functions that the filter runs, but I won’t get into that here.
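To make the idea concrete, here’s a small Python sketch (not UAG code – the trace lines and the `PFC=...` format are invented for illustration) that groups trace lines by their PFC value, so you can follow a single request among many interleaved ones:

```python
import re
from collections import defaultdict

# Invented trace lines for illustration; a real UAG trace is far more
# verbose, and the exact way the PFC appears on each line differs.
trace = [
    "Info:request from client to filter PFC=038CA2D0",
    "Info:URL=/sharepoint/default.aspx PFC=038CA2D0",
    "Info:request from client to filter PFC=04119BD8",
    "Info:URL=/owa/ PFC=04119BD8",
]

# Group every line under its PFC code, so one request can be read end to
# end even when the server is juggling several requests at once.
by_pfc = defaultdict(list)
for line in trace:
    m = re.search(r"PFC=([0-9A-F]+)", line)
    if m:
        by_pfc[m.group(1)].append(line)

for pfc, lines in by_pfc.items():
    print(pfc, "->", len(lines), "lines")
```

This is the same trick you do mentally when reading the real trace: pick the PFC from the first request block, then skip every block that carries a different one.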
Later on, after UAG has finished processing the data (working with the cookies, the headers, the application matching, etc.), it requests the URL on behalf of the user, which would typically look like this*:
You can see here how the PFC code remains. The rest of this data isn’t very useful, but the next step is the response received by UAG from the back-end server*:
The PFC is beyond the edge of the screenshot, but it’s still there on the first line, and then you can see important info about the back-end. You can see, for example, that the response is served by IIS 7 running SharePoint 14 (which you may know as SharePoint 2010). It’s interesting to see here that the content-length is ZERO, which means that this is not an actual response, but an authentication negotiation. A little later in the log, we will see this*:
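That “zero-length response plus an authentication header” pattern is easy to express in code. A hypothetical Python sketch (the header names are standard HTTP, but the values are made up for illustration):

```python
# Made-up response headers, modeled on what the screenshot shows: an
# empty body plus a WWW-Authenticate header means this is an
# authentication negotiation, not the real content.
headers = {
    "Status": "401 Unauthorized",
    "Content-Length": "0",
    "WWW-Authenticate": "NTLM",
}

is_auth_negotiation = (
    headers.get("Content-Length") == "0"
    and "WWW-Authenticate" in headers
)
print(is_auth_negotiation)
```

When you spot this combination in a trace, keep reading – the real response comes a few exchanges later.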
This is almost identical, but this time, the WWW-Auth shows an encoded string, which is an NTLM header. A few milliseconds later, we finally see the actual response*:
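If you’re curious what that encoded string is, it’s a base64-encoded NTLM message. Here’s an illustrative Python sketch using a fabricated token (real tokens from a WWW-Authenticate: NTLM header are much longer) that shows the fixed parts every NTLM message starts with:

```python
import base64

# A fabricated, minimal NTLM-style token for illustration only; a real
# header value carries flags, domain and workstation fields after this.
token = base64.b64encode(b"NTLMSSP\x00\x01\x00\x00\x00").decode()

raw = base64.b64decode(token)
# Every NTLM message begins with the signature "NTLMSSP\0"; the next
# DWORD (little-endian) is the message type:
# 1 = negotiate, 2 = challenge, 3 = authenticate.
signature = raw[:8]
msg_type = int.from_bytes(raw[8:12], "little")
print(signature, msg_type)
```

A handy side effect: any base64 blob in a trace that starts with “TlRMTVNT” decodes to “NTLMSSP”, so you can recognize NTLM negotiation at a glance.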
This multi-stage process is common. Now UAG does its parsing of the file’s content, and finally delivers the file to the client*:
Following the headers is just a small piece of the tracing process, and you shouldn’t expect them to immediately jump out with a solution, but understanding the breakdown will often allow you to discover where the issue is. For example, you may notice a 500 error somewhere in the process. Here are two examples*:
Can you tell the difference between them?
The top screenshot shows UAG trying to fetch a file for the client, but the back-end server responds with a 500 error, which UAG delivers to the client. On the other hand, the bottom screenshot shows UAG trying to process a file, but UAG itself generates the error because of a buffer exception, and it is delivered at the third stage as an error page (error500.html transmitted to client).
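That distinction fits in a tiny classifier. This is just an illustrative Python sketch – the marker strings are modeled on what the screenshots show (error500.html, an HTTP 500 status line), not on an exact UAG trace format:

```python
def classify_500(line):
    """Classify a trace line: did UAG generate the error, or the back-end?"""
    if "error500.html" in line:
        # UAG built the error page itself (e.g. after a buffer exception)
        return "UAG-generated"
    if "500" in line:
        # The back-end returned the 500, and UAG just relayed it
        return "back-end"
    return "no error"

print(classify_500("HTTP/1.1 500 Internal Server Error"))
print(classify_500("error500.html transmitted to client"))
```

The order of the checks matters: a line mentioning error500.html also contains “500”, so the UAG-generated case has to be tested first.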
Back to the topic of the day….
A tool I often employ when analyzing big traces is a script that parses the trace, looks for requests, and shows them as a list with the line number that corresponds to each. I can then open the trace and jump directly to where the action is. It’s not a very complicated parser – it just looks for the string, and moves on. Here it is, for your enjoyment:
' Lists every client request in a decoded UAG trace, with its line number
Set objArgs = WScript.Arguments
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oFile = oFSO.OpenTextFile(objArgs(0), 1)
n = 0
Do Until oFile.AtEndOfStream
    sLine = oFile.ReadLine
    n = n + 1
    If InStr(sLine, "request from client to filter") > 0 Then WScript.Echo n & vbTab & sLine
Loop
oFile.Close
To use this, save it as a .VBS file, and run it from a command prompt using CSCRIPT, passing the name of a decoded UAG log as the argument (for example: cscript parser.vbs trace1.log – use whatever file names you saved with).
*The images may be a bit hard to read in the blog post. If you are having problems, open the image in a new window. If still unclear, contact me via the “contact me” form on the blog, and I’ll send you the original file, with the high-res images!