Tom Hawthorn, Karthik Mahesh – Windows Server Performance Team
A significant percentage of web sites utilize PHP as a platform for dynamic content. During the development of Windows 2008, Microsoft included improvements that enable PHP to run more efficiently than it did on previous Windows releases. This article describes how to tune Windows 2008, IIS 7.0 and PHP for environments with a single site and high concurrent user traffic.
Windows 2008 and IIS 7.0 have new features and optimizations that allow PHP to run more efficiently and robustly. The most significant improvement Microsoft made was to update the CGI support in IIS 7.0 to conform to the FastCGI standard. FastCGI saves a process create/destroy operation per request by pooling and reusing multiple worker processes. This translates into significant performance improvements for PHP applications on Windows.
By default, the FastCGI support in IIS 7.0 is configured to be conservative when allocating resources in order to best support the scenario where hundreds of web sites are running on a single physical server. This was thought to be the most important deployment environment for PHP on Windows. This article describes the differences between tuning a Windows server for a multi-site scenario versus a single site scenario assuming high overall traffic.
Multi-site versus Single-site Web Servers
Let’s talk about two typical environments for web servers where resource management and performance become top concerns: the “web hosting” environment and the “enterprise” environment.
In web hosting environments servers host hundreds of sites on a single physical machine. There are dozens of companies that sell pre-packaged web sites for under $20 per month. They achieve cost-efficiency by deploying hundreds of sites per single server machine and they place limits on the amount of traffic that each site can service. Perhaps it is not much of a surprise that a huge percentage of web sites on the internet run on shared hardware. Individually, hosted sites are low traffic but the aggregate traffic adds up to some serious load. Administrators must take care to isolate the sites from each other for security reasons and must ensure that no single site can consume all the resources on the machine. The default configuration values in Windows 2008/IIS 7.0 are optimized for the web hosting environment.
The enterprise environment is virtually the polar opposite from the hosted environment from the perspective of how software should manage resources. Instead of strictly limiting and aggressively reclaiming the resources per-site, an administrator wants to give all of the machine’s resources to a single site. Administrators achieve scale and robustness by load balancing the internet traffic across multiple machines serving the same web site content. This article describes how to tune Windows 2008 and IIS 7.0 for the enterprise environment.
Web requests are made from the user’s web browser to a web server. The server receives the request, processes it and sends back some data. An HTTP request is usually small, maybe a few hundred bytes. However, the response may be large and the ephemeral memory required to generate the response can be even larger. During the period in which the web page is executing code or waiting for a response from somewhere else (e.g. a database, disk, another web site, etc.) the memory associated with the web request cannot be released. Once the request completes, its memory can usually be released, with the exception of cached items.
I use the term “request concurrency” to describe the number of requests being processed on a web server at any given moment in time. As average request concurrency grows on a web server, so does the average memory utilization. In order to conserve memory a web server can limit request concurrency by queuing new incoming requests rather than servicing them immediately if the number of in-flight requests exceeds some limit. This approach has the side effect of increasing latency because user requests may need to wait for in-flight requests to complete before they are handled.
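Little’s law gives a quick way to estimate how request concurrency relates to traffic and memory. The figures below (1000 requests/second, 200 ms average latency, 2 MB of working memory per in-flight request) are illustrative assumptions, not measurements from any particular server:

```python
def concurrent_requests(arrival_rate_per_sec, avg_latency_sec):
    """Little's law: average in-flight requests under steady load."""
    return arrival_rate_per_sec * avg_latency_sec

def memory_for_requests(concurrency, mb_per_request):
    """Approximate ephemeral memory tied up by in-flight requests."""
    return concurrency * mb_per_request

# 1000 req/s at 200 ms average latency -> ~200 requests in flight.
inflight = concurrent_requests(1000, 0.2)

# At an assumed 2 MB of working memory per request -> ~400 MB.
mb = memory_for_requests(inflight, 2)
print(inflight, mb)  # 200.0 400.0
```

The same arithmetic shows why queuing trades latency for memory: holding concurrency fixed while traffic doubles simply doubles the average latency instead of the memory footprint.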
Increasing the Default Concurrency Limits
On Windows 2008, an HTTP request will be handled by multiple software component layers beginning with the network stack and travelling up into IIS and then sometimes into third party technologies such as PHP. Each layer will perform some work on the HTTP request before handing the request on to the next layer. Each time a layer receives a new HTTP request, it has the option of queuing it or processing it. Therefore, increasing concurrency limits involves modifying configuration associated with multiple layers. This section describes configuration parameters in http.sys, IIS 7.0, FastCGI and PHP.
The application pool’s queueLength parameter controls the maximum number of requests that IIS 7.0 will allow to be queued simultaneously. It allows the system to be more robust in handling spikes in request concurrency beyond the configured limits.
Normally, if a web request is received by a web server and its queue is full the web server will return an HTTP error 503 (service unavailable). Increasing the queue limit value has no impact on a web server that does not exceed its queue limits under normal conditions. On web servers that experience occasional bursts of requests that would exceed the default queue limits, increasing the limit may allow the server to satisfy all requests without error but with a higher latency. On web servers that are overloaded during steady-state operation increasing this value may have a detrimental effect.
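The admission behaviour just described can be modelled in a few lines. This is an illustrative sketch of the decision, not IIS source code; the numbers in the usage example assume the defaults of 5000 concurrent requests and a queueLength of 1000:

```python
def admit(in_flight, queued, concurrency_limit, queue_length):
    """Fate of a newly arrived request: process it now, queue it,
    or reject it with HTTP 503 (service unavailable)."""
    if in_flight < concurrency_limit:
        return "process"
    if queued < queue_length:
        return "queue"   # served later -> higher latency
    return "503"         # both limits exhausted

# At the defaults, a burst that fills the queue gets rejected:
print(admit(in_flight=5000, queued=1000,
            concurrency_limit=5000, queue_length=1000))   # 503

# Raising queueLength lets the same burst wait instead of failing:
print(admit(in_flight=5000, queued=1000,
            concurrency_limit=5000, queue_length=65535))  # queue
```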
appcmd.exe set apppool "DefaultAppPool" /queueLength:65535
The appConcurrentRequestLimit parameter controls the maximum number of in-flight requests in the IIS 7.0 layer. This includes requests that are being processed or are queued by the CGI layer.
Increasing this value on a web server that never experiences more than 5000 concurrent requests should have no impact. On web servers that receive very large numbers of concurrent requests and that have available resources during steady state load, increasing this setting will allow the server to fully utilize its memory and CPU. Servers that are already 100% utilized may be negatively impacted by increasing concurrent request limits.
appcmd.exe set config /section:serverRuntime /appConcurrentRequestLimit:100000
The MaxConnections registry value controls the maximum number of concurrent TCP/IP connections that the HTTP.sys driver will allow.
By default, only 5000 concurrent TCP/IP connections are allowed by the HTTP driver in Windows. There is typically only one outstanding HTTP request per connection, therefore increasing any other concurrency limit is pointless unless the maximum number of concurrent connections is also increased. Each connection maintained by Windows will use some kernel memory and requires some CPU to maintain state. I don’t recommend increasing this limit on 32 bit machines because of the limited kernel address space. Note that changes under HTTP\Parameters generally take effect only after the HTTP service is restarted (or the machine is rebooted).
reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v MaxConnections /t REG_DWORD /d 1000000
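Because each connection carries roughly one outstanding request, the lowest configured limit is the one that actually binds. A quick sketch makes the point; the values below describe a hypothetical server where only the IIS-level limits were raised:

```python
def bottleneck(limits):
    """Return the (name, value) of the smallest configured limit,
    which is what caps effective request concurrency."""
    name = min(limits, key=limits.get)
    return name, limits[name]

limits = {
    "http.sys MaxConnections": 5000,      # still at the default
    "appConcurrentRequestLimit": 100000,  # raised
    "app pool queueLength": 65535,        # raised
}
print(bottleneck(limits))  # ('http.sys MaxConnections', 5000)
```

In other words, until MaxConnections is raised, the other two changes buy nothing.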
This is actually two parameters: maxInstances, the maximum number of concurrent FastCGI worker processes, and instanceMaxRequests, the number of requests that each FastCGI process can execute before the process is recycled.
The CGI model allows only a single concurrent request per pooled process, so the maxInstances parameter tells IIS how many processes to start up and also caps request concurrency in the FastCGI layer. Each process consumes significant resources on the server, so the initial recommendation of 32 is somewhat conservative. Increasing the number of requests that each process can handle before being recycled decreases the rate of process creation/destruction and reduces the average CPU required to process each request.
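Before raising maxInstances it is worth estimating the memory footprint of the process pool. The per-process figure below is a hypothetical assumption; measure the working set of php-cgi.exe under your own workload and substitute the real number:

```python
def pool_memory_mb(max_instances, mb_per_process):
    """Approximate steady-state memory used by the php-cgi pool.

    Each pooled process serves one request at a time, so
    max_instances also caps FastCGI request concurrency.
    """
    return max_instances * mb_per_process

# Assuming ~25 MB per php-cgi worker (illustrative figure only):
print(pool_memory_mb(32, 25))   # 800  -> the conservative default
print(pool_memory_mb(256, 25))  # 6400 -> only viable with ample RAM
```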
1. notepad %windir%\system32\inetsrv\config\applicationhost.config
2. find the "fastCgi" element and change it to the following (assuming php-cgi.exe is installed in C:\PHP):
<fastCgi>
    <application fullPath="C:\PHP\php-cgi.exe" instanceMaxRequests="10000" maxInstances="32">
        <environmentVariables>
            <environmentVariable name="PHP_FCGI_MAX_REQUESTS" value="10000" />
        </environmentVariables>
    </application>
</fastCgi>
Note that PHP_FCGI_MAX_REQUESTS should be at least as large as instanceMaxRequests; if PHP’s internal limit is lower, the php-cgi process will exit before IIS expects it to and requests can fail.
Tuning a Windows 2008 machine for PHP performance in enterprise environments is all about increasing the default concurrency limits. Remember: if you try any of the tunings in this article, test the effects of the changes in a controlled environment before deploying them to your front-line servers. Increasing the concurrency limits will generally increase steady-state memory utilization and CPU if concurrency is a bottleneck on your system. If you don’t have enough memory or your CPU is already fully utilized, don’t increase the concurrency limits! Finally, the tuned values in this article are values that I found empirically in my own test environment. They may or may not be the right values for your environment, so experiment to find out what works for you. Happy tuning!