One of the clients I worked for has a fully transactional core database. It is accessed by a web application through an intermediate layer of application servers that connect directly to the databases.
Here we can distinguish three tiers: web, application, and database. Good programming practice would lead us to keep most of the code within the database, obtaining better performance by reducing the amount of information that travels over the network across these three tiers.
However, a legacy of poor programming practices has left over 70% of the source code routines on the application servers.
This did not become a real problem until the transaction volume, along with the size and complexity of the database, had grown over time.
One of the company's policies is not to make code changes to production databases outside of "service packs" or application version upgrades, so you can assume that each service pack carries a large number of changes that are released into the production environment at the same time.
Having clarified the environment, programming practices, and deployment policy, I can describe in this blog the case I had to face at that time. One Monday, the day after a service pack release, customers began contacting customer service complaining that no one could get into any of the applications: after the LOGIN screen appeared, nothing happened until a time-out popped up in response.
I immediately checked the database servers and got the unpleasant surprise that one server was at 100% CPU usage, leaving me wondering which change in the service pack was causing this strange behavior!
After a while, we captured some SQL traces and, using information from the DMVs and other tools, concluded that the compilation rate was much higher than it had historically been before the upgrade. However, among the many changes pushed with the service pack, finding the exact statement(s) responsible for such horror would have kept the application down for too long.
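As a sketch of the kind of DMV check involved, the compilation rate can be read from the SQL Statistics counters exposed by sys.dm_os_performance_counters (these counters are cumulative, so you sample them twice and compute the delta to get a per-second rate):

```
-- Cumulative compilation counters; sample twice over a known interval
-- and divide the difference by the elapsed seconds.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');
```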
Two things were done to solve this issue. First, by analyzing the traces we concluded that a couple of tables desperately needed two new indexes. Second, knowing that most of the executions were of BATCH type, we came up with the possibility of changing the PARAMETERIZATION database option from SIMPLE to FORCED:
ALTER DATABASE <db_name> SET PARAMETERIZATION FORCED
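To confirm the option took effect, the current setting can be checked in sys.databases:

```
-- is_parameterization_forced = 1 means FORCED, 0 means SIMPLE
SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = '<db_name>';
```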
What does this option do to a database?
It parameterizes the literal values that come with SELECT, UPDATE, DELETE, and INSERT statements submitted as batches, so they do not need to be recompiled each time they are executed with different values; the execution plan is kept in memory and reused.
Naturally, this option does not apply to statements within stored procedures, as those are already compiled and cached by the database engine.
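To illustrate the effect, consider a hypothetical ad-hoc batch (the table and column names below are made up for the example). Under SIMPLE parameterization, a query like this with a join is typically compiled once per distinct literal; under FORCED, the literal is replaced by a parameter so that executions with different values share one cached plan:

```
-- Hypothetical ad-hoc batch sent by the application servers.
-- Under FORCED parameterization, the literal 500 is replaced by a
-- parameter, so runs with 500, 750, 1000, ... reuse the same plan.
SELECT o.OrderId, c.Name
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
WHERE o.Amount > 500;
```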
There are other situations where forced parameterization does not apply; you can see all the cases by following the link below.
After implementing these changes, the amount of compilation decreased significantly and CPU usage returned to normal levels, between 20% and 30%; all users could happily access the applications again.
Before concluding this post, I want to recommend that developers keep as much of the code as possible within the database engine, and if you come across an environment like the one described here, consider changing the PARAMETERIZATION database option to FORCED before disaster strikes.
“The opinions and views expressed in this blog are those of the author and do not necessarily state or reflect those of Microsoft”