Do you dislike having to collect diagnostic logs manually from each server in order to troubleshoot an issue or establish a server performance baseline? Have you ever collected the logs, only to discover that you needed more data from the time the issue was happening, and by the time you went back to collect it, the logs had already been overwritten?
I have been working on a script that collects all the Exchange logs you need and copies them over to a different location. The script can even zip up the files for you, so all you have to do is run it on each server and upload the data once it is done. By default, the script only collects the Application and System event logs from the server; everything else is specified through switches (please see the download page for a list of them all), so you collect only the data relevant to the issue at hand. If you don't know which logs you need, you can just use the AllPossibleLogs switch and the script will collect everything relevant based on the version of Exchange and the role(s) installed on the server.
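For example, if you are unsure what is needed, a catch-all run (using the script name from the examples later in this post) would look like this:

.\CollectLogsScript.ps1 -AllPossibleLogs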
Currently the script only supports Exchange 2010 and 2013, as Exchange 2007 doesn't do much logging by default. Starting with Exchange 2010 we began logging more information by default to help troubleshoot issues (and increased that logging again in Exchange 2013). All the common logs that Exchange Support looks at when troubleshooting are collected based on the version of Exchange you run the script on. One of the major differences between a 2010 run and a 2013 run is how the logs get zipped up. Exchange 2010 servers don't have .NET Framework 4.5 installed, so the script relies on the 7za.exe utility to zip up your files (or you can specify that the files not be zipped at all). To take advantage of this, place the 7za.exe file in the same directory as the script and use the SevenZip switch when executing it. The download location for this utility is listed in the header of the script if you need it.
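On an Exchange 2010 server, for example, a run that uses 7-Zip for compression might look like this (assuming 7za.exe is sitting next to the script):

.\CollectLogsScript.ps1 -AllPossibleLogs -SevenZip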
Why was this created?
I created this script while troubleshooting a performance issue with a customer. We were collecting GBs of data from all relevant servers every day in order to establish a baseline of the servers' performance and compare it against the times when we were having issues. Because the customer had to collect the data manually, some of the data I requested was sometimes missing, or had already been overwritten by the time they got to collecting it. With the script, they could just run it each day and it would collect all the correct information, move it over to another drive on the server, and zip it up, so we didn't have to worry about logs being overwritten. All the customer needed to do was upload the data to me once it finished.
How to run the script
Download the script from here. Place it on your Exchange server and open up an EMS or PowerShell session as an Administrator. If you don’t run it as an Administrator, it will error out:
Then you just need to determine the location you would like the data saved to, and what logs you would like collected.
If you don’t specify a location, the script automatically uses “C:\MS_Logs_Collection” as the destination for the copied data. The script also verifies that the target drive has at least 15 GB of free space (25 GB if you are not zipping up the data), as we don’t want to fill up any drives on the server.
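For reference, a free-space check along these lines can be done in PowerShell like this (an illustrative sketch, not the script's actual code):

# Illustrative sketch only - not the script's actual implementation
$target = "C:\MS_Logs_Collection"
$driveLetter = (Split-Path -Qualifier $target).TrimEnd(':')   # "C"
$freeGB = (Get-PSDrive -Name $driveLetter).Free / 1GB
if ($freeGB -lt 15) {
    Write-Warning "Only $([math]::Round($freeGB, 1)) GB free on drive $driveLetter - not enough space to collect."
}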
The data you want to collect really depends on the issue you are having. If you don’t know, it is best to collect it all with the AllPossibleLogs switch; this is recommended if you ran into an issue and would like to investigate the root cause at a later time. If you do know which logs you want, you can use the switch for each set of logs. For example, to collect the IIS logs there is a single switch called IISLogs. This switch collects the IIS and HTTPErr logs based on the role(s) installed on the server. A common mistake is to collect only one of the two, or to mix the FE and BE IIS logs together, and those things can cause delays in getting the data reviewed. Since we don’t need the whole directory for the IIS and HTTPErr logs, the script only goes back a set number of days (3 by default); you can change how far back it goes with the DaysWorth switch. To learn more about the available switches in the script and what they do, please look over the script download page.
Here are a couple of examples of running the script and how it collects the data:
.\CollectLogsScript.ps1 -FilePath C:\MyLogs -IISLogs -EWSLogs -DaysWorth 1
As you can see, a progress bar shows where the script currently is. Keep in mind that for large amounts of data, a compression step can take a while to complete before the script moves on to the next set of logs. After one type of log is collected, the script zips up that folder to save drive space and then removes the original folder. This is only done for the subfolders of the main root folder, which is named after your server. The root folder itself also gets zipped up, with the M/dd date of the run appended, so you end up with a single .zip file; the original root folder is not removed from this location.
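The compress-and-remove step for each subfolder is conceptually similar to the following (illustrative only; on Exchange 2010 the script shells out to 7za.exe instead of using .NET 4.5):

# Illustrative sketch only - not the script's actual implementation
Add-Type -AssemblyName System.IO.Compression.FileSystem      # needs .NET Framework 4.5
foreach ($folder in Get-ChildItem -Path $rootFolder -Directory) {
    [System.IO.Compression.ZipFile]::CreateFromDirectory($folder.FullName, "$($folder.FullName).zip")
    Remove-Item -Path $folder.FullName -Recurse -Force       # free up the drive space
}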
Once the run is complete you will have something like the following:
Looking inside of the folder:
If you are collecting additional data like ExPerfWiz output, Netmon captures, ExTRA traces, etc., and that data is in the same directory, you can include it in the collection as well by using the CustomData switch:
.\CollectLogsScript.ps1 -CustomDataDirectory C:\PerfCollection -CustomData
That example collects everything in the C:\PerfCollection folder and its subdirectories. Use this when you want to collect an additional directory of data that Exchange doesn’t log by default, or other types of logs that this script doesn’t collect. However, I would not recommend using it for very large files like process or memory dumps, as larger files can take a very long time to copy, compress, and then upload. Experiment with what will work for you!
I also included a switch called IISLogDirectory for environments where the default IIS log location (C:\inetpub\logs\LogFiles) has been moved. All you need to pass in this parameter is the parent directory of “W3SVC1” or “W3SVC2”, and the script will automatically append the correct IIS subdirectory to collect. Here is an example of how to use it:
.\CollectLogsScript.ps1 -FilePath D:\MS_Logs -IISLogs -IISLogDirectory E:\IISLogs\LogFiles
That would collect the IIS logs from E:\IISLogs\LogFiles\W3SVC1 and/or E:\IISLogs\LogFiles\W3SVC2 if you are on an Exchange 2013 server with both roles installed.
The script also has a switch called DatabaseFailoverIssue, which collects the correct data from a server that recently had a database failover. On Exchange 2013 it collects the performance data, the clustering event logs, the Managed Availability logs, and the Application and System logs. On Exchange 2010 it collects just the Application, System, and clustering logs. If you are using this switch because you did have a failover issue recently, I highly recommend running the script on all servers within the DAG. A database can fail over for multiple reasons, so collecting this information from every member covers all possible areas of default logging and should give you enough information to determine the cause of the issue.
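For instance, after a failover you could run the following on each DAG member (the FilePath value here is just an example):

.\CollectLogsScript.ps1 -FilePath D:\MS_Logs -DatabaseFailoverIssue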
Hopefully a common way to collect data from your servers will reduce the amount of manual work needed to gather the information for troubleshooting various issues, or for periodically collecting logs to establish a baseline of server behavior. It should also reduce the number of times the correct information isn’t collected the first time, only to have been overwritten by the time we go back for it.
Keep an eye on the download location for newer versions of the script. I will continue to fix issues that are reported to me, and to improve and add features that help streamline the process of collecting data from the servers.