Realistically this process should work for other AV clients as well, but I'm doing it in the context of the one I support. Although it isn't extremely common, we do run into scenarios where a customer has issues with the FCS client taking up large amounts of CPU on a system. Sometimes it is constant and other times it is more intermittent. Either way, the key thing to remember is that this typically means the FCS engine is actively scanning files, and quite possibly either scanning a file that is complex or having to scan the same file over and over again. I say complex because certain file types have much more complexity to their internal structures than others; for example, a notepad .txt file is extremely non-complex, as opposed to say a .pdf or a .xlsx file, which can be extremely complex. You may have a 100 MB text file that scans quickly but a 1 MB .pdf that takes much longer. The key point here is that file size does not determine how long a file takes to scan; the internal complexity of the file is what drives scan times.
The first thing to check when you see performance issues with the client is for "expensive" files. Browse to C:\Documents and Settings\All Users\Application Data\Microsoft\Microsoft Forefront\Client Security\Client\Antimalware\Support on your client system and look for a file called MPLog-########-######.log (the #'s are a date/time stamp, I believe from when the log was first started). Open this file in the text editor of your choice and search upward from the bottom for the word "expensive".
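If you'd rather pull these entries out quickly than scroll by hand, a small script can do the bottom-up search for you. This is just a sketch: the sample log line below is hypothetical, and the search is a plain case-insensitive match on the word "expensive", not a parser for any exact MPLog format.

```python
def find_expensive_entries(log_text):
    """Return log lines mentioning 'expensive', most recent first."""
    return [line for line in reversed(log_text.splitlines())
            if "expensive" in line.lower()]

# Hypothetical log lines -- the real MPLog format may differ.
sample = (
    "Scan started\n"
    "Expensive file: C:\\setup\\SQLSP2.exe, scan time 14s\n"
    "Scan finished\n"
)
for entry in find_expensive_entries(sample):
    print(entry)
```

Since the newest entries are at the bottom of the log, reversing the lines gives you the most recent "expensive" hits first.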
An example is shown above; in this case it's the Service Pack 2 file for SQL 2005, and at about 290 MB, yes, it probably does take a while to scan. You could consider an exclusion for this file specifically, but I probably wouldn't recommend it due to PE reinfector viruses such as Sality that infect legitimate .exe files. On the other hand, if the expensive file is a .log or .txt or some other normally non-executable file, and you see it repeatedly in the MPLog, then it is a definite candidate to add to your exclusions list via FCS policy.
The other situation that causes performance issues is where FCS is required to repeatedly scan the same file(s) on the file system. As is typical with most AV systems, FCS uses a type of fingerprinting of the file system to know which files it has already scanned since the service was last restarted, and it does not scan those files again as long as they are not written to. For example, if I create a new .exe on a file share, FCS will scan it on the original write; if 50 other people then run that .exe, the FCS client will not scan it again for their reads of the file. If something attempts to write to the file, however, FCS is forced to scan it again to verify the new contents are not malicious. Typically, what we see with multiple scans of the same file are processes writing to log files that open the file, write to it, close it, and then repeat the cycle in very short order. Because they close the file rather than holding a handle open, FCS sees each write as a new file modification and is thus forced to rescan the contents. Unfortunately these do not always show up in the MPLog: each individual scan may be fast enough that it never hits the threshold (3 seconds of scan time) for us to flag the file as "expensive".
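The two write patterns described above can be sketched in a few lines. This is an illustration of the I/O behavior, not of FCS itself: the "chatty" pattern closes the file on every message, generating a fresh modification event (and a potential rescan) each time, while the second holds the handle open for the whole batch, producing a single open/close pair.

```python
import os
import tempfile

def log_open_write_close(path, messages):
    # Chatty pattern: every iteration opens and closes the file,
    # so each write is seen as a new file modification.
    for msg in messages:
        with open(path, "a") as f:
            f.write(msg + "\n")

def log_hold_handle(path, messages):
    # One open/close pair for the whole batch: one modification event.
    with open(path, "a") as f:
        for msg in messages:
            f.write(msg + "\n")

# Quick demo: both produce identical file contents; the difference is
# only in how many open/close events (and thus rescans) they generate.
tmp = tempfile.mkdtemp()
chatty = os.path.join(tmp, "chatty.log")
quiet = os.path.join(tmp, "quiet.log")
log_open_write_close(chatty, ["start", "stop"])
log_hold_handle(quiet, ["start", "stop"])
```

If you own the application doing the logging, switching from the first pattern to the second is often the real fix; an AV exclusion is the workaround when you don't.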
What we typically use to find these issues is Procmon. Get a copy of Procmon loaded on the system and start it up. When the Filter window comes up, you will want to remove the default "Process Name is System, Exclude" filter, as we do want to see activity from the System process as well.
Once you remove that filter, let Procmon run for a couple of minutes while the high CPU for msmpeng.exe is occurring on the system. Stop the capture in Procmon and then click Tools > Cross Reference Summary. This generates a list of file locations that are being read from and written to by different processes. We are looking for paths that are being written to by various processes but also being read by msmpeng.exe.
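If you save the Procmon capture out as CSV, you can also reproduce this cross-reference check in a script. The column and operation names here ("Process Name", "Operation", "Path", ReadFile/WriteFile) match a typical Procmon CSV export, but treat them as assumptions and check the header row of your own export before relying on them.

```python
import csv
import io
from collections import defaultdict

def cross_reference(csv_text, scanner="msmpeng.exe"):
    """Return paths written by some other process and read by the scanner."""
    readers = defaultdict(set)
    writers = defaultdict(set)
    for row in csv.DictReader(io.StringIO(csv_text)):
        proc = row["Process Name"].lower()  # case-insensitive process match
        if row["Operation"] == "ReadFile":
            readers[row["Path"]].add(proc)
        elif row["Operation"] == "WriteFile":
            writers[row["Path"]].add(proc)
    return [p for p in writers
            if scanner in readers.get(p, set())
            and writers[p] - {scanner}]

# Minimal made-up export: app.log is written by app.exe and read by the
# AV engine; quiet.log is written but never read by the engine.
sample = (
    "Process Name,Operation,Path\n"
    "app.exe,WriteFile,C:\\logs\\app.log\n"
    "MsMpEng.exe,ReadFile,C:\\logs\\app.log\n"
    "app.exe,WriteFile,C:\\logs\\quiet.log\n"
)
```

Running `cross_reference(sample)` flags only `C:\logs\app.log`, which is exactly the write-then-rescan pattern we're hunting for.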
In my case I only had one entry on my system, where Outlook is writing to my .ost file and msmpeng.exe appears to have tried reading it. If I double-click on this entry, Process Monitor applies a filter back in the main window showing all operations where the path is my .ost file. When I actually looked at it, however, there were only about six times that msmpeng.exe tried to read it, and I don't believe it actually did, so it's probably not a good candidate for exclusions (although I may do it anyway). What we typically see, though, are log files where Process X is continuously writing to the file, followed by msmpeng.exe reading it, then Process X writing again, and so on. In those cases it is a good idea to add that file location to your exclusions to prevent the increased CPU load of rescanning what is probably not an executable file, and one that you may be able to determine does not need AV scanning.
One item to note in regards to Procmon and FCS v1, however: even once you have put exclusions in place, you will unfortunately still see msmpeng.exe as a reader in the Cross Reference Summary. This is due to the way the FCS mini-filter and client-side exclusions work together. When file I/O happens, the minifilter passes a memory-mapped portion of it to the AV engine (msmpeng.exe); keep in mind this is not the whole file but rather information about the file. The engine in turn checks that against the exclusion lists and, if it determines the file is excluded, will not request the whole file for scanning. From what I understand, the v2 client will operate somewhat differently so that we will not see this behavior, which will make it slightly easier to verify that exclusions are working properly.
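As a toy model of that flow (the names here are illustrative only, not the real minifilter interface): the engine always receives the notification about the file, which is why it still shows up as a reader, but it only requests the full contents when the path isn't covered by an exclusion.

```python
def on_file_io(path, exclusions, scan):
    """Toy model of the v1 flow: notification first, exclusion check second."""
    # The metadata notification always reaches the engine, so Procmon
    # records msmpeng.exe as a reader even for excluded paths...
    if any(path.lower().startswith(e.lower()) for e in exclusions):
        return "skipped"   # excluded: the full file is never requested
    return scan(path)      # not excluded: request the file and scan it

result = on_file_io("C:\\logs\\app.log", ["C:\\logs"],
                    lambda p: "scanned")
```

The point of the model is just that the exclusion check happens after the engine has already touched the I/O, which explains why exclusions don't make msmpeng.exe disappear from the v1 cross-reference view.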
Another item to look for in your Procmon logs :) Add a filter for Result is "FILE LOCKED WITH WRITERS" and take a look at which processes are getting this. If msmpeng.exe is getting it frequently, look at which files it's getting it for. In my case I had two AV clients on the system (I didn't know about the other one), and they were competing with each other for file access and locking each other out .. bad mojo. In another case I saw recently, it was due to an app that apparently didn't release files properly (or something; we never totally figured it out), but we made an exclusion for the log file and the performance issue went away.
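The same CSV export can be used to tally those results per process. As before, the column names ("Process Name", "Result") are assumptions based on a typical Procmon export; verify them against your own file's header row.

```python
import csv
import io
from collections import Counter

def locked_with_writers(csv_text):
    """Count FILE LOCKED WITH WRITERS results per process."""
    hits = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["Result"] == "FILE LOCKED WITH WRITERS":
            hits[row["Process Name"]] += 1
    return hits

# Made-up export: the AV engine is repeatedly locked out of a log file.
sample = (
    "Process Name,Operation,Path,Result\n"
    "MsMpEng.exe,CreateFile,C:\\logs\\app.log,FILE LOCKED WITH WRITERS\n"
    "MsMpEng.exe,CreateFile,C:\\logs\\app.log,FILE LOCKED WITH WRITERS\n"
    "app.exe,WriteFile,C:\\logs\\app.log,SUCCESS\n"
)
hits = locked_with_writers(sample)
```

A high count for the AV engine against one path is the same signal as the filter in Procmon: something else is holding that file open for writing while the engine is trying to get at it.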