Today, let’s talk about the Audit Collection Services (ACS) component. If you have SCOM up and running for monitoring, you already:
· have an infrastructure for security audit collection;
· have monitoring for that audit collection.
But the ACS component is designed for short-term event collection, so if you configure it to store events for more than 30 days, you have to provide enough storage, which can be a serious problem. For example, suppose I want to collect and store security logs from 10 domain controllers for a year (an average task; some security teams require storing information for 3 or more years). If I assume that one domain controller generates 40 events per second, the database will grow by about 13 GB per day, and the total database size will exceed 4 TB! That amount of storage is not cheap to acquire and support.
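The numbers above can be checked with a quick back-of-the-envelope calculation. The average on-disk size per event used below (~400 bytes) is an assumption chosen to match the growth figures quoted; real event sizes vary with audit policy and event type:

```python
# Rough ACS storage sizing: 10 domain controllers, 40 events/sec each,
# retained for one year. BYTES_PER_EVENT is an assumed average.
DCS = 10
EVENTS_PER_SEC_PER_DC = 40
BYTES_PER_EVENT = 400          # assumption: average on-disk size per event
SECONDS_PER_DAY = 86_400

events_per_day = DCS * EVENTS_PER_SEC_PER_DC * SECONDS_PER_DAY
daily_growth_gb = events_per_day * BYTES_PER_EVENT / 1024**3
yearly_size_tb = daily_growth_gb * 365 / 1024

print(f"events per day: {events_per_day:,}")
print(f"daily growth:   {daily_growth_gb:.1f} GB")   # about 13 GB
print(f"yearly size:    {yearly_size_tb:.1f} TB")    # more than 4 TB
```

Even with a conservative per-event size, a year of retention lands in the multi-terabyte range.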
What can a SCOM administrator do to avoid generating so much data while still fulfilling the security team’s requirements?
Enterprise organizations usually manage audit policy across all their servers. The easiest way to do this is to use Group Policy. The default audit policy on Windows Server defines which events should and should not be logged. If events are logged that have no use for you, it is best to adjust the policy so that no log entries are generated for them. To read about Audit Policy, go here.
Audit Server Filter
The Audit Collection Server receives all events sent by Audit Collection Forwarders, but you can control which events are stored in the database. You can administer your Audit Collection Server with the AdtAdmin.exe utility, located under %SystemRoot%\System32\Security\AdtServer.
To get the current query:
AdtAdmin.exe -getquery
To set your custom query:
AdtAdmin.exe -setquery -query:<YOUR_QUERY>
The query is written in WQL and should return all events you want to store in the database; other events will be dropped. For example, to drop events about network share access to the IPC$, NETLOGON and SYSVOL shares, use:
SELECT * FROM AdtsEvent WHERE NOT (EventId=5140 AND (String04='' OR String04='\\\\*\\IPC$' OR String04='\\\\*\\NETLOGON' OR String04='\\\\*\\SYSVOL'))
To store “Audit log was cleared” events only:
SELECT * FROM AdtsEvent WHERE EventId=517 OR EventId=1102
By default, the database queue query is:
SELECT * FROM AdtsEvent
which means that all events sent by Audit Collection Forwarders are stored. Adjusting this query is the easiest way to significantly reduce the ACS database size.
If you use SQL Server Enterprise Edition to host the ACS database, you can use the Data Compression feature to reduce its size. I should warn you that this is not tested by Microsoft and may cause an issue with support. Nonetheless, because of its simplicity and effectiveness, this approach is worth considering. So, if you are running the Enterprise or Developer edition of SQL Server, you can use the sp_estimate_data_compression_savings procedure to estimate how much storage space can be saved.
Hint: you can back up the ACS database and restore it on an Enterprise or Developer edition SQL Server instance in order to use this procedure.
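As a sketch, the call looks like this (the object name below is a placeholder – ACS creates its daily event tables with a generated suffix, so substitute a real table name from your database):

```sql
-- Estimate page-compression savings for a single ACS event table.
-- Replace the placeholder object name with an actual dtEvent_* table.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'dtEvent_<partition suffix>',
    @index_id = NULL,            -- all indexes
    @partition_number = NULL,    -- all partitions
    @data_compression = 'PAGE';
```

The result set reports the current size and the estimated compressed size, so you can judge whether compression is worth it before touching production data.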
Compression is applied to tables, and no built-in method exists to compress a whole database. There are ways to create such a method, but that is overkill for our ACS database. In fact, the ACS database contains small, rarely used configuration tables and the tables that store event data. The biggest of these are the “dtEvent” and “dtString” tables – they hold the majority of the data. By default, the ACS partition duration is 86400 seconds, which means that every day new tables are created to store that day’s events, and old tables that exceed the event retention period are dropped. How can we manage compression in such circumstances? The trick is to use database triggers: we can define a trigger that runs after every table creation and compresses the new table. We can use:
· Row compression for moderate compression with a small CPU overhead;
· Page compression for better compression but a bigger CPU overhead;
· Both types of compression for the best storage savings. Consider this option only if you plan to store the information and rarely access it, or if you have excessive CPU resources.
An example trigger for page compression:
CREATE TRIGGER CompressEventTablesAfterCreation
ON DATABASE AFTER CREATE_TABLE
AS BEGIN
    DECLARE @TableSchema sysname, @TableName sysname
    SELECT @TableSchema = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]', 'sysname'),
           @TableName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname')
    IF (@TableName LIKE 'dtEvent!_%' ESCAPE '!' OR @TableName LIKE 'dtString%')
    BEGIN
        DECLARE @SQL NVARCHAR(2000)
        SET @SQL = 'ALTER TABLE ' + QUOTENAME(@TableSchema) + '.' + QUOTENAME(@TableName) + ' REBUILD WITH (DATA_COMPRESSION = PAGE)'
        EXEC (@SQL)
    END
END
So, you compress the existing “dtEvent” and “dtString” tables once, and the trigger takes care of compressing future tables. In my lab environment this saves about 70% of storage space overall. Again, consider all the consequences this method may bring before using it.
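To verify the result, a catalog query (a sketch; run it in your ACS database) shows the compression setting of every event table:

```sql
-- List ACS event tables and their current compression setting
SELECT t.name AS TableName,
       p.data_compression_desc AS Compression
FROM sys.tables AS t
JOIN sys.partitions AS p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)        -- heap or clustered index only
WHERE t.name LIKE 'dtEvent!_%' ESCAPE '!'
   OR t.name LIKE 'dtString%'
ORDER BY t.name;
```

Tables created after the trigger was installed should report PAGE; any that still report NONE were created earlier and need a one-time manual REBUILD.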
The methods discussed above help reduce the ACS database size but do not address the performance of accessing this data. Nonetheless, they can improve performance too, because, generally, less IO means faster operation. There may be other ways to work around the issue of a big ACS database. For example:
· Use a retention period of 1 month and keep a backup of the ACS database made each month. This way information can be stored indefinitely, but accessing specific data becomes complicated.
· Aggregate information from the ACS database into another database that is optimized for large size. This database can also be optimized for data access and becomes a kind of “OperationsManagerACDW”. This also helps if you have multiple Audit Collection Servers with separate ACS databases.
Please tell us how you work around ACS database size. Do you use one of the mentioned methods, or do you have your own? All feedback is appreciated.