Pushing the Limits of Windows: Virtual Memory


In my first Pushing the Limits of Windows post, I discussed physical memory limits, including the limits imposed by licensing, implementation, and driver compatibility. Here’s the index of the entire Pushing the Limits series. While they can stand on their own, they assume that you read them in order.

Pushing the Limits of Windows: Physical Memory

Pushing the Limits of Windows: Virtual Memory

Pushing the Limits of Windows: Paged and Nonpaged Pool

Pushing the Limits of Windows: Processes and Threads

Pushing the Limits of Windows: Handles

Pushing the Limits of Windows: USER and GDI Objects – Part 1

Pushing the Limits of Windows: USER and GDI Objects – Part 2

This time I’m turning my attention to another fundamental resource, virtual memory. Virtual memory separates a program’s view of memory from the system’s physical memory, so an operating system decides when and if to store the program’s code and data in physical memory and when to store it in a file. The major advantage of virtual memory is that it allows more processes to execute concurrently than might otherwise fit in physical memory.

While virtual memory limits are related to physical memory limits, they derive from different sources and differ depending on the consumer. For example, there are virtual memory limits that apply to individual processes that run applications, limits that apply to the operating system, and limits that apply to the system as a whole. It’s important to remember as you read this that virtual memory, as the name implies, has no direct connection with physical memory. Windows assigning the file cache a certain amount of virtual memory does not dictate how much file data it actually caches in physical memory; it can be any amount from none to more than the amount that’s addressable via virtual memory.

Process Address Spaces

Each process has its own virtual memory, called an address space, into which it maps the code that it executes and the data that the code references and manipulates. A 32-bit process uses 32-bit virtual memory address pointers, which creates an absolute upper limit of 4GB (2^32) for the amount of virtual memory that a 32-bit process can address. However, so that the operating system can reference its own code and data and the code and data of the currently-executing process without changing address spaces, the operating system makes its virtual memory visible in the address space of every process. By default, 32-bit versions of Windows split the process address space evenly between the system and the active process, creating a limit of 2GB for each:

 image
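If you want to see where the split falls on your own system, here’s a minimal C sketch (my own illustration, not part of the original tools) that asks Windows for the user-mode address range via GetSystemInfo:

    /* Minimal sketch: print the user-mode address range for this process.
       With the default 2GB/2GB split the maximum application address is
       just under 0x80000000; with /3GB and a large-address-aware image
       it approaches 0xC0000000. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        printf("User address space: %p - %p\n",
               si.lpMinimumApplicationAddress,
               si.lpMaximumApplicationAddress);
        return 0;
    }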

Applications might use Heap APIs, the .NET garbage collector, or the C runtime malloc library to allocate virtual memory, but under the hood all of these rely on the VirtualAlloc API. When an application runs out of address space, VirtualAlloc, and therefore the memory managers layered on top of it, returns an error (represented by a NULL address). The Testlimit utility, which I wrote for the 4th Edition of Windows Internals to demonstrate various Windows limits, calls VirtualAlloc repeatedly until it gets an error when you specify the -r switch. Thus, when you run the 32-bit version of Testlimit on 32-bit Windows, it will consume the entire 2GB of its address space:

image

2010 MB isn’t quite 2GB, but Testlimit’s other code and data, including its executable and system DLLs, account for the difference. You can see the total amount of address space it’s consumed by looking at its Virtual Size in Process Explorer:

image
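The core of what the -r switch does can be approximated in a few lines of C. This is only a rough sketch of the idea, not the real tool, but it shows the reserve-until-failure loop:

    /* Rough sketch of the reserve-until-failure loop: keep reserving
       64 KB blocks (the allocation granularity) until VirtualAlloc
       returns NULL, then report how much address space was reserved. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T block = 64 * 1024;
        SIZE_T total = 0;

        /* MEM_RESERVE claims address space without committing storage */
        while (VirtualAlloc(NULL, block, MEM_RESERVE, PAGE_NOACCESS) != NULL)
            total += block;

        printf("Reserved %llu MB of address space\n",
               (unsigned long long)(total / (1024 * 1024)));
        return 0;
    }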

Some applications, like SQL Server and Active Directory, manage large data structures and perform better the more that they can load into their address space at the same time. Windows NT 4 SP3 therefore introduced a boot option, /3GB, that gives a process 3GB of its 4GB address space by reducing the size of the system address space to 1GB, and Windows XP and Windows Server 2003 introduced the /userva option that moves the split anywhere between 2GB and 3GB:

 image

To take advantage of the address space above the 2GB line, however, a process must have the ‘large address space aware’ flag set in its executable image. Access to the additional virtual memory is opt-in because some applications have assumed that they’d be given at most 2GB of the address space. Since the high bit of a pointer referencing an address below 2GB is always zero, they would use the high bit in their pointers as a flag for their own data, clearing it of course before referencing the data. If they ran with a 3GB address space they would inadvertently truncate pointers that have values greater than 2GB, causing program errors including possible data corruption.

All Microsoft server products and data intensive executables in Windows are marked with the large address space awareness flag, including Chkdsk.exe, Lsass.exe (which hosts Active Directory services on a domain controller), Smss.exe (the session manager), and Esentutl.exe (the Active Directory Jet database repair tool). You can see whether an image has the flag with the Dumpbin utility, which comes with Visual Studio:

image
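You can also check the flag programmatically. The sketch below is a hand-rolled check that reads the PE headers of the running executable itself and tests the IMAGE_FILE_LARGE_ADDRESS_AWARE bit in the file header’s Characteristics field:

    /* Sketch: test whether the running executable is marked
       large-address aware by walking its in-memory PE headers. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* For an EXE, the module handle is its load address, where the
           DOS and PE headers live. */
        BYTE *base = (BYTE *)GetModuleHandle(NULL);
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
        IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);

        if (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
            printf("Large address aware\n");
        else
            printf("Limited to 2GB of address space\n");
        return 0;
    }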

Testlimit is also marked large-address aware, so if you run it with the -r switch when booted with 3GB of user address space, you’ll see something like this:

image

Because the address space on 64-bit Windows is much larger than 4GB, something I’ll describe shortly, Windows can give 32-bit processes the maximum 4GB that they can address and use the rest for the operating system’s virtual memory. If you run Testlimit on 64-bit Windows, you’ll see it consume the entire 32-bit addressable address space:

image

64-bit processes use 64-bit pointers, so their theoretical maximum address space is 16 exabytes (2^64). However, Windows doesn’t divide the address space evenly between the active process and the system, but instead defines a region in the address space for the process and others for various system memory resources, like system page table entries (PTEs), the file cache, and paged and non-paged pools.

The size of the process address space is different on IA64 and x64 versions of Windows where the sizes were chosen by balancing what applications need against the memory costs of the overhead (page table pages and translation lookaside buffer – TLB – entries) needed to support the address space. On x64, that’s 8192GB (8TB) and on IA64 it’s 7168GB (7TB – the 1TB difference from x64 comes from the fact that the top level page directory on IA64 reserves slots for Wow64 mappings). On both IA64 and x64 versions of Windows, the size of the various resource address space regions is 128GB (e.g. non-paged pool is assigned 128GB of the address space), with the exception of the file cache, which is assigned 1TB. The address space of a 64-bit process therefore looks something like this:

image

The figure isn’t drawn to scale, because even 8TB, much less 128GB, would be a small sliver. Suffice it to say that like our universe, there’s a lot of emptiness in the address space of a 64-bit process.

When you run the 64-bit version of Testlimit (Testlimit64) on 64-bit Windows with the -r switch, you’ll see it consume 8TB, which is the size of the part of the address space it can manage:

image

image 

Committed Memory

Testlimit’s –r switch has it reserve virtual memory, but not actually commit it. Reserved virtual memory can’t actually store data or code, but applications sometimes use a reservation to create a large block of virtual memory and then commit it as needed to ensure that the committed memory is contiguous in the address space. When a process commits a region of virtual memory, the operating system guarantees that it can maintain all the data the process stores in the memory either in physical memory or on disk.  That means that a process can run up against another limit: the commit limit.
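Here’s a small sketch of the reserve-then-commit pattern just described (the 1GB region size is an arbitrary choice for illustration): the reservation claims contiguous address space up front, and only the pages that are later committed count against the commit limit.

    /* Sketch: reserve a large contiguous region, then commit pages only
       as they are needed. Reservation consumes address space; commit
       consumes commit charge. */
    #include <windows.h>

    #define REGION_SIZE ((SIZE_T)1024 * 1024 * 1024)   /* 1 GB, arbitrary */
    #define CHUNK_SIZE  (64 * 1024)

    int main(void)
    {
        BYTE *region = (BYTE *)VirtualAlloc(NULL, REGION_SIZE,
                                            MEM_RESERVE, PAGE_NOACCESS);
        if (region == NULL)
            return 1;                      /* out of address space */

        /* Commit the first chunk only when data actually needs to go there */
        BYTE *chunk = (BYTE *)VirtualAlloc(region, CHUNK_SIZE,
                                           MEM_COMMIT, PAGE_READWRITE);
        if (chunk == NULL)
            return 2;                      /* commit limit reached */

        chunk[0] = 42;                     /* committed pages are usable */
        VirtualFree(region, 0, MEM_RELEASE);
        return 0;
    }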

As you’d expect from the description of the commit guarantee, the commit limit is the sum of physical memory and the sizes of the paging files. In reality, not quite all of physical memory counts toward the commit limit since the operating system reserves part of physical memory for its own use. The amount of committed virtual memory for all the active processes, called the current commit charge, cannot exceed the system commit limit. When the commit limit is reached, virtual allocations that commit memory fail. That means that even a standard 32-bit process may get virtual memory allocation failures before it hits the 2GB address space limit.

The current commit charge and commit limit are tracked by Process Explorer in its System Information window in the Commit Charge section and in the Commit History bar chart and graph:

image  image
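If you want to read the same values programmatically, here’s a minimal sketch that uses the GetPerformanceInfo API, which reports them in pages:

    /* Sketch: print the current commit charge, peak and limit.
       Link with psapi.lib on older SDKs. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PERFORMANCE_INFORMATION pi = { sizeof(pi) };
        if (!GetPerformanceInfo(&pi, sizeof(pi)))
            return 1;

        unsigned long long page = pi.PageSize;
        printf("Commit charge: %llu MB\n", (pi.CommitTotal * page) >> 20);
        printf("Commit peak:   %llu MB\n", (pi.CommitPeak  * page) >> 20);
        printf("Commit limit:  %llu MB\n", (pi.CommitLimit * page) >> 20);
        return 0;
    }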

Task Manager prior to Vista and Windows Server 2008 shows the current commit charge and limit similarly, but calls the current commit charge "PF Usage" in its graph:

image

On Vista and Server 2008, Task Manager doesn’t show the commit charge graph and labels the current commit charge and limit values with "Page File" (despite the fact that they will be non-zero values even if you have no paging file):

image

You can stress the commit limit by running Testlimit with the -m switch, which directs it to allocate committed memory. The 32-bit version of Testlimit may or may not hit its address space limit before hitting the commit limit, depending on the size of physical memory, the size of the paging files and the current commit charge when you run it. If you’re running 32-bit Windows and want to see how the system behaves when you hit the commit limit, simply run multiple instances of Testlimit until one hits the commit limit before exhausting its address space.

Note that, by default, the paging file is configured to grow, which means that the commit limit will grow when the commit charge nears it. And even when the paging file hits its maximum size, Windows holds back some memory, and its internal tuning, as well as that of applications that cache data, might free up more. Testlimit anticipates this and, when it reaches the commit limit, sleeps for a few seconds and then tries to allocate more memory, repeating this indefinitely until you terminate it.
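In outline, a commit-stress loop with that retry behavior looks something like this sketch (the 100MB chunk size is an arbitrary choice; terminate it with Ctrl+C):

    /* Sketch of a commit-stress loop: commit memory until the commit
       limit is hit, then pause so a growing paging file can raise the
       limit, and try again. Runs until you terminate it. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T chunk = 100 * 1024 * 1024;   /* 100 MB, arbitrary */
        unsigned long long totalMB = 0;

        for (;;) {
            void *p = VirtualAlloc(NULL, chunk, MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE);
            if (p == NULL) {
                printf("Commit failed after %llu MB, retrying in 5s...\n",
                       totalMB);
                Sleep(5000);                /* let the paging file grow */
                continue;
            }
            totalMB += chunk >> 20;
        }
    }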

If you run the 64-bit version of Testlimit, it will almost certainly hit the commit limit before exhausting its address space, unless physical memory and the paging files sum to more than 8TB, which as described previously is the size of the 64-bit application-accessible address space. Here’s the partial output of the 64-bit Testlimit running on my 8GB system (I specified an allocation size of 100MB to make it leak more quickly):

 image

And here’s the commit history graph with steps when Testlimit paused to allow the paging file to grow:

image

When system virtual memory runs low, applications may fail and you might get strange error messages when attempting routine operations. In most cases, though, Windows will be able to present you with the low-memory resolution dialog, as it did for me when I ran this test:

image

After you exit Testlimit, the commit limit will likely drop again when the memory manager truncates the tail of the paging file that it created to accommodate Testlimit’s extreme commit requests. Here, Process Explorer shows that the current limit is well below the peak that was achieved when Testlimit was running:

image

Process Committed Memory

Because the commit limit is a global resource whose consumption can lead to poor performance, application failures and even system failure, a natural question is ‘how much are processes contributing to the commit charge?’ To answer that question accurately, you need to understand the different types of virtual memory that an application can allocate.

Not all the virtual memory that a process allocates counts toward the commit limit. As you’ve seen, reserved virtual memory doesn’t. Virtual memory that represents a file on disk, called a file mapping view, also doesn’t count toward the limit unless the application asks for copy-on-write semantics, because Windows can discard any data associated with the view from physical memory and then retrieve it from the file. The virtual memory in Testlimit’s address space where its executable and system DLL images are mapped therefore doesn’t count toward the commit limit. There are two types of process virtual memory that do count toward the commit limit: private and pagefile-backed.
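To make the file-mapping case concrete, here’s a small sketch of a read-only mapped view that doesn’t charge commit (it maps the running executable itself purely for convenience); because the pages are backed by the file, Windows can discard and re-read them at will:

    /* Sketch: a read-only view of a file on disk. The mapped pages are
       backed by the file, not the paging file, so they don't count
       toward the commit limit. */
    #include <windows.h>

    int main(void)
    {
        char path[MAX_PATH];
        GetModuleFileNameA(NULL, path, MAX_PATH);

        HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY,
                                            0, 0, NULL);
        void *view = mapping ? MapViewOfFile(mapping, FILE_MAP_READ,
                                             0, 0, 0) : NULL;

        /* ... read through 'view' here; no commit charge for the view ... */

        if (view)    UnmapViewOfFile(view);
        if (mapping) CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }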

Private virtual memory is the kind that underlies the garbage collector heap, native heap and language allocators. It’s called private because by definition it can’t be shared between processes. For that reason, it’s easy to attribute to a process, and Windows tracks its usage with the Private Bytes performance counter. Process Explorer displays a process’s private bytes usage in the Private Bytes column, in the Virtual Memory section of the Performance page of the process properties dialog, and in graphical form on the Performance Graph page of that dialog. Here’s what Testlimit64 looked like when it hit the commit limit:

image

image
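To read a process’s private commit from code, a sketch using GetProcessMemoryInfo and the PROCESS_MEMORY_COUNTERS_EX structure (whose PrivateUsage field corresponds to the Private Bytes counter) would look like this:

    /* Sketch: query the current process's private bytes. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PROCESS_MEMORY_COUNTERS_EX pmc;
        if (!GetProcessMemoryInfo(GetCurrentProcess(),
                                  (PROCESS_MEMORY_COUNTERS *)&pmc,
                                  sizeof(pmc)))
            return 1;

        printf("Private bytes: %llu KB\n",
               (unsigned long long)(pmc.PrivateUsage / 1024));
        return 0;
    }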

Pagefile-backed virtual memory is harder to attribute, because it can be shared between processes. In fact, there’s no process-specific counter you can look at to see how much a process has allocated or is referencing. When you run Testlimit with the -s switch, it allocates pagefile-backed virtual memory until it hits the commit limit, but even after consuming over 29GB of commit, the virtual memory statistics for the process don’t provide any indication that it’s the one responsible:

image

For that reason, I added the -l switch to Handle a while ago. A process must open a pagefile-backed virtual memory object, called a section, for it to create a mapping of pagefile-backed virtual memory in its address space. While Windows preserves existing virtual memory even if an application closes the handle to the section that it was made from, most applications keep the handle open.  The -l switch prints the size of the allocation for pagefile-backed sections that processes have open. Here’s partial output for the handles open by Testlimit after it has run with the -s switch:

image

You can see that Testlimit is allocating pagefile-backed memory in 1MB blocks, and if you summed the size of all the sections it had opened, you’d see that it was at least one of the processes contributing large amounts to the commit charge.
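For reference, allocating pagefile-backed virtual memory boils down to creating a section backed by the paging file and mapping a view of it. Here’s a minimal sketch (the 1MB size is chosen to match the blocks above):

    /* Sketch: create a 1 MB pagefile-backed section and map a view of it.
       Passing INVALID_HANDLE_VALUE instead of a file handle makes the
       section pagefile-backed, and its committed size counts toward the
       commit charge. */
    #include <windows.h>

    int main(void)
    {
        DWORD size = 1024 * 1024;

        HANDLE section = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                           PAGE_READWRITE, 0, size, NULL);
        if (section == NULL)
            return 1;               /* e.g. commit limit reached */

        void *view = MapViewOfFile(section, FILE_MAP_WRITE, 0, 0, size);
        if (view == NULL)
            return 2;

        ((BYTE *)view)[0] = 1;      /* touch a shared, pagefile-backed page */

        UnmapViewOfFile(view);
        CloseHandle(section);
        return 0;
    }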

How Big Should I Make the Paging File?

Perhaps one of the most commonly asked questions related to virtual memory is, how big should I make the paging file? There’s no end of ridiculous advice out on the web and in the newsstand magazines that cover Windows, and even Microsoft has published misleading recommendations. Almost all the suggestions are based on multiplying RAM size by some factor, with common values being 1.2, 1.5 and 2. Now that you understand the role that the paging file plays in defining a system’s commit limit and how processes contribute to the commit charge, you’re well positioned to see how useless such formulas truly are.

Since the commit limit sets an upper bound on how much private and pagefile-backed virtual memory can be allocated concurrently by running processes, the only way to reasonably size the paging file is to know the maximum total commit charge for the programs you like to have running at the same time. If the commit limit is smaller than that number, your programs won’t be able to allocate the virtual memory they want and will fail to run properly.

So how do you know how much commit charge your workloads require? You might have noticed in the screenshots that Windows tracks that number and Process Explorer shows it: Peak Commit Charge. To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
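Expressed as code, the sizing rule above is just arithmetic on two numbers Windows already tracks. This sketch pulls them from GetPerformanceInfo (if the difference is negative it prints zero, in which case you’d fall back to a crash-dump-sized minimum as noted above):

    /* Sketch of the sizing rule: minimum = peak commit charge - RAM,
       maximum = 2x minimum. Values from GetPerformanceInfo are in pages. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PERFORMANCE_INFORMATION pi = { sizeof(pi) };
        if (!GetPerformanceInfo(&pi, sizeof(pi)))
            return 1;

        unsigned long long page   = pi.PageSize;
        unsigned long long peakMB = (pi.CommitPeak    * page) >> 20;
        unsigned long long ramMB  = (pi.PhysicalTotal * page) >> 20;
        unsigned long long minMB  = (peakMB > ramMB) ? peakMB - ramMB : 0;

        printf("Suggested paging file: minimum %llu MB, maximum %llu MB\n",
               minMB, 2 * minMB);
        return 0;
    }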

Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).

Paging file configuration is in the System properties, which you can get to by typing “sysdm.cpl” into the Run dialog, clicking on the Advanced tab, clicking on the Performance Options button, clicking on the Advanced tab (this is really advanced), and then clicking on the Change button:

image

You’ll notice that the default configuration is for Windows to automatically manage the page file size. When that option is set on Windows XP and Server 2003, Windows creates a single paging file whose minimum size is 1.5 times RAM if RAM is less than 1GB, and equal to RAM if it’s greater than 1GB, and whose maximum size is three times RAM. On Windows Vista and Server 2008, the minimum is intended to be large enough to hold a kernel-memory crash dump and is RAM plus 300MB or 1GB, whichever is larger. The maximum is either three times the size of RAM or 4GB, whichever is larger. That explains why the peak commit on my 8GB 64-bit system that’s visible in one of the screenshots is 32GB. I guess whoever wrote that code got their guidance from one of those magazines I mentioned!

A couple of final limits related to virtual memory are the maximum size and number of paging files supported by Windows. 32-bit Windows has a maximum paging file size of 16TB (4GB if you for some reason run in non-PAE mode), and 64-bit Windows can have paging files that are up to 16TB in size on x64 and 32TB on IA64. Windows 8 ARM’s maximum paging file size is 4GB. For all versions, Windows supports up to 16 paging files, where each must be on a separate volume.

Version                | Limit on x86 w/o PAE | Limit on x86 w/PAE | Limit on ARM | Limit on x64 | Limit on IA64
Windows 7              | 4 GB                 | 16 TB              |              | 16 TB        |
Windows 8              |                      | 16 TB              | 4 GB         | 16 TB        |
Windows Server 2008 R2 |                      |                    |              | 16 TB        | 32 TB
Windows Server 2012    |                      |                    |              | 16 TB        |

Comments (103)

  1. Anonymous says:

    In the Virtual Memory dialog box, how is the Recommended value calculated?  I have read that it is total physical RAM * 1.5, but I have 8192 MB installed and the recommended value is 12141 MB, which is one hundred and forty some odd percent.  Is there a variable of the formula that I am missing?

  2. Anonymous says:

    Love your blog! The technical detail is mind blowing. I am not sure if this is the place to ask a question.

    I am running Vista Ultimate 64-bit edition on an Intel Core 2 Duo 2.53 GHz with 4GB RAM and an Nvidia GeForce 9600GT graphics card.

    How can I squeeze maximum performance out of Vista Ultimate on my hardware ie. speeding up boot time, response time of system services?

  3. Anonymous says:

    "I do think that setting the pagefile to a fixed size is better because of fragmentation (and therefor performance) reasons. "

    Just defrag the pagefile and then you don’t have to worry about it…

  4. Anonymous says:

    Hi,

    According to this post, when running a large address aware 32-bit application in a 64-bit windows 7 the application could use the full 32-bit address space (4GB). This makes sense to me, however when running Visual Studio 2010 the limit is reached around 2GB.

    Why is this happening? I checked with dumpbin /headers and devenv.exe can handle large addresses (I ran "editbin /largeaddressaware devenv.exe", so I don't know how it was set before).

    I would like to understand why.

    Thanks,

    Nuno Pereira

  5. Anonymous says:

    @DW: You could push it to 3300 with the /UserVA=3300 boot.ini switch (or bcdedit equivalent). The extra 300Mb isn't much more though (compared to /3Gb), and you'll put a lot of pressure of NPP and PP to fit in the remaining 700Mb.  Note, if you use an x64 OS, you'll still only get 4Gb (with the x86 program).  You need to get an x64 app on an x64 OS to really benefit from extra (>4Gb) memory.

  6. Anonymous says:

    I don't understand why people say that, in conditions where analysis shows you don't need a pagefile (you fully operate in RAM with room to spare) you should still create a pagefile to free up RAM for other purposes ("making that memory available for more useful purposes "). If all of my purposes fit in RAM, what exactly am I freeing memory for?

  7. Anonymous says:

    Large address awareness is specified with the IMAGE_FILE_LARGE_ADDRESS_AWARE flag in the IMAGE_FILE_HEADER structure of the PE header:

    http://msdn.microsoft.com/en-us/library/ms680313.aspx

  8. Anonymous says:

    Robert: You can do that, but you need to set the DedicatedDumpFile option to redirect the kernel dump from c: to the pagefile on the other disk.

    support.microsoft.com

    New behavior in Windows Vista and Windows Server 2008

    In Windows Vista and Windows Server 2008, the paging file does not have to be on the same partition as the partition on which the operating system is installed. To put a paging file on another partition, you must create a new registry entry named DedicatedDumpFile. You can also define the size of the paging file by using a new registry entry that is named DumpFileSize. But we cannot dump to a spanned volume like a stripeset or RAID5.

    To create the DedicatedDumpFile and DumpFileSize registry entries, follow these steps:

    Click Start, click Run, type Regedit, and then click OK.

    Locate and then click the following registry subkey:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl

    On the Edit menu, point to New, and then click String Value.

    In the details pane, type DedicatedDumpFile, and then press ENTER.

    Right-click DedicatedDumpFile, and then click Modify.

    In the Value data box, type <drive>:\<dedicateddumpfile.sys>, and then click OK.

    Note <drive> is a placeholder for a drive that has enough disk space for the paging file, and <dedicateddumpfile.sys> is a placeholder for the dedicated file and the full path.

    On the Edit menu, point to New, and then click DWORD Value.

    Type DumpFileSize , and then press ENTER.

    Right-click DumpFileSize, and then click Modify.

    In the Edit DWORD Value dialog box, click Decimal under Base.

    In the Value data box, type the appropriate value, and then click OK.

    Note The size of the dump file is in megabytes.

    Right-click DumpFile, and then click Modify.

    In the Value data box, type <drive>:\<path>\Memory.dmp, and then click OK.

    Note When the system crashes, this is the location where the memory dump file is created by using the dedicated file instead of by using the Pagefile.sys file.

    Exit Registry Editor.

    Restart Windows in order for your changes to take effect.

  9. Anonymous says:

    @Brian:

    The limit is per process and not per computer. Windows itself (the Kernel) seems to use AWE.

  10. Matthew Reynolds says:

    Brilliant post Mark. Thanks for writing this.

    As an Active Directory guy I wanted to remind readers to beware of configuring DCs without any page file, even on DCs with lots of RAM and where the commit charge peak is low relative to the commit limit. The database engine depends on the page file per KB 889654.

    Cheers

  11. Michael Sainz says:

    Damn

    Love reading your posts, keep it up!

  12. Pavel Lebedinsky says:

    I was involved in choosing the default min/max sizes for system managed pagefiles in Vista, and I’m pretty sure those numbers were not just copied from some magazine :)

    The 1 GB minimum was chosen based on the actual commit charge observed on small machines (512 MB of RAM). The 3*RAM maximum might seem excessive on machines with lots of RAM, but remember that pagefile will only grow this large if there is actual demand. Also, running out of commit (for example, because of a leak in some app) can bring the entire system to a halt, and a higher maximum size can make the difference between a system that does not respond and has to be rebooted and a system that can be recovered by restarting a process.

    I will admit that scaling the maximum size linearly with the size of RAM is somewhat arbitrary. Perhaps it should have been a fixed constant instead.

  13. Hugo Peeters says:

    Great post (as always)! I love the amount of detail in your posts.

    Everyone looks at task manager and tries to draw some conclusions, but thanks to you I am starting to really understand what the numbers mean.

    Hugo

  14. Pavel Lebedinsky says:

    A few more points:

    1. Reserved memory does contribute to commit charge, because the memory manager charges commit for pagetable space necessary to map the entire reserved range. On 64 bit this can be a significant number (reserving 1 TB of memory will consume approximately 2 GB of commit).

    2. The Private Bytes counter is named a bit misleadingly, because it includes more than just committed MEM_PRIVATE regions. It’s better to think about it as the process commit charge. Besides private committed pages, it includes things like copy-on-write views and pagetable pages.

    3. Configuring a system with lots of RAM to run without pagefile may have either negative or positive perf impact depending on what the system is doing. The general recommendation in this case is to create a reasonably sized pagefile (for example, 4 GB) and increase it if the Paging File\% Usage counter gets close to 100%.

    Note that this counter is completely different from what task manager calls "pagefile usage" (which is actually the system commit charge). Paging File\% Usage of 100% would mean that some unused pagefile-backed pages are sitting on the modified page list, unnecessarily taking up RAM. If pagefile was larger, those pages could have been written to disk, resulting in more RAM available for other purposes.

  15. Osvaldo Tabasco says:

    I think I never read a more fundamental article about managing the Windows Virtual Memory. Thank you very much for this, I think many people will now be able to reasonably set the size of the Virtual Memory.

    "I guess whoever wrote that code got their guidance from one of those magazines I mentioned!" made me giggle! :-)

  16. David Rawling says:

    It’s worth noting that previous Windows versions (as late as Windows 2000) permitted the creation of multiple page files on a single volume with a registry edit.

    This was primarily used when the formal recommendation from Microsoft was for 1.5x RAM (Exchange Server anyone?) and the system had more than 4GB of RAM. Using the "one page per volume" strategy required 2 or more drive letters.

    Instead, each pagefile could be 4GB and placed in a separate directory.

    I think it was "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PagingFiles" but my memory is not perfect.

  17. robad says:

    nice one!

    but what do i do with the pagefile on a 64 bit server 2003 system with 32gb ram, with a memory-intensive application? … let’s call the application … exchange 2k7 :-)

  18. TekGems says:

    I just bought a Gigabyte i-RAM RAM DISK to put my "virtual memory" on a RAM DISK. I didn’t realize accessing virtual memory is application specific, so this is great information! Thank you.

  19. o.s. says:

    "To take advantage of the address space above the 2GB line, however, a process must have the ‘large address space aware’ flag set in its executable image."

    Hmmm, I looked around and the closest I could find in the PE executable file format was from

    this:

    http://msdn.microsoft.com/en-us/library/ms809762.aspx

    "1. The PE Header

    1c. IMAGE_OPTIONAL_HEADER Fields

    1c-27. DWORD SizeOfHeapReserve

    The amount of virtual memory to reserve for the initial process heap. This heap’s handle can be obtained by calling GetProcessHeap. Not all of this memory is committed (see the next field)."

    So windows uses something like the SizeOfHeapReserve field in the executable file to be able to tell whether the process should be allocated more than 2GB of virtual memory to the usermode process? Interesting post as usual Mark, I learned quite a bit.

  20. asf says:

    o.s.: no, its a bit flag in DllCharacteristics IIRC

  21. Lester Eliasquevitch says:

    Another fantastic post!! The detailed level of your articles is completely awesome!

  22. o.s says:

    Thanks Mark for the reply. I was so busy researching asf’s comment that I hadn’t checked back on the latest comments. Thanks everyone.

  23. Ben Voigt [C++ MVP] says:

    <quote>I was involved in choosing the default min/max sizes for system managed pagefiles in Vista, and I’m pretty sure those numbers were not just copied from some magazine :)

    [snip]

    Also, running out of commit (for example, because of a leak in some app) can bring the entire system to a halt, and a higher maximum size can make the difference between a system that does not respond and has to be rebooted and a system that can be recovered by restarting a process.</quote>

    I think you got that backwards.  Running out of physical RAM without running out of commit is what brings the system to a halt, because the system is paging in and out continually and thrashing the disk I/O subsystem, and may be unrecoverable without a reboot.  OTOH running out of commit generally causes a fatal error to the application performing the more allocations, allowing other applications to continue normally and at full speed.

    And it scares me that the person in charge of setting pagefile limits at Microsoft has such a fundamental misunderstanding of the problem.

    Pagefiles are a hack to permit applications written to process data in batches but without regard to memory usage (i.e. data once processed should be freed immediately, but isn’t until the task is completed) to run on larger datasets which don’t fit fully in RAM but the working set does.  If the working set doesn’t fit in RAM, theoretically the pagefile allows processing to continue but practically the 1000000X slowdown from paging means this isn’t really enabling anything.  And an application which is designed to handle large problems can handle data sets bigger than memory as long as the working set fits without any pagefile whatsoever (allocate only the working set).  The OS disk cache is likely to make the "read the data from disk on each (non-localized) use" perform better than slurping the whole thing into committed memory.

    An application that wants to be really fancy could determine whether the data will fit and use cached or non-cached I/O accordingly.  But that’s something that only server-class software like SQL server would want to worry about.

    Or at least that’s my understanding.

  24. Joe Butler says:

    Is this a fair summary of why the 3GB is an option that is off by default:

    Either lots of 2GB processes at the same time with a 2GB system virutual memory to manage all those resources, or a limited number of 3GB processes with a smaller system virtual memory of 1GB to manage them?

    And what is wrong with just letting Windows manage the page file?

    Won’t it choose a reasonable size and grow it if necessary?  I always just assumed it would never shrink, but Mark seemed to suggest the page file will be truncated when the commit charge drops. Is setting a fixed size to avoid the stall as the swap file is grown or shrunk in size, or to avoid the on-screen message that, ‘Windows will increase the size of the virtual memory file’ – message that cannot be clicked if running on an unattended server?

    On a workstation, is there really any point?  Will people notice a difference?  

  25. m^22 says:

    Great article, but I there’s one important thing that should be mentioned: address space fragmentation. So I recommend reading also http://forall.ru-board.com/egor23/online/FAQ/Virtual_Memory/Limits_Virtual_Memory.html

  26. Adi R says:

    three things

    1. In the last paragraph, the maximum page file size on 32 Bit windows and 64 Bit windows seem to be the same, at 16 TB. Is this really right?

    2. How is memory managed for DirectX (aka Games!). Obviously they still go through Kernel and such, but I know games often allocate memory to try to force only physical memory usage. Will having paging file help at all? How does OS behave?

    3. On Vista 32-bit, is 3GB option enabled or disabled? And if disabled, how do I get it going on my 4gb PC?

  27. Louis says:

    Hello and thank you for this excellent post.

    I have a question and comment about virtualized server environments

    To avoid swapping within a virtual W2K3 VM, I configure them with a chockfull (1,2,4GB) of RAM and let the hypervisor manage the pseudo physical allocation.

    With your post, I am thinking I might still need to provide some paging within the VM.

    Any comments about either statements?

  28. Miral says:

    Thank you for this excellent post; very informative.

    Are you going to go on to discuss other topics (eg. the paged and nonpaged pools of kernel memory and how to figure out what’s going on with them)?  I ask because my PC currently seems to have a nonpaged pool leak (which I’ve traced to pool "TdxA", but I’m not sure where to go from there).  My computer (Vista32) gets dangerously flaky once the non-paged pool gets to around 1GB.  (It’s also not happy when the total handle count gets above 45,000 or so, but that’s less serious.)

  29. PeteR says:

    Great article! Maybe a stupid question but what is the disadvantage of setting a too big paging file?

  30. vadim says:

    Pavel, you said that:

    3. Configuring a system with lots of RAM to run without pagefile may have either negative or positive perf impact depending on what the system is doing.

    Could you please give me some examples of negative performance impact, if I have 4GB XP32 w/o pagefile?

    And similar question: my colleagues was once set and experiment with 2GB pagefile on the 4GB RAMDrive on the 8GB XP64. I think that this doesn’t make sense at all. What do you think?

    Sorry for probably bad English =)

  31. Pieter says:

    I do think that setting the pagefile to a fixed size is better because of fragmentation (and therefore performance) reasons.

  32. Clay says:

    Excellent post, can’t wait for the next edition of Windows Internals to hit the shelves.

  33. Eran says:

    Great article!

    May I suggest a topic for a future article? I would love to see discussion about Windows memory management policies. E.g., when pages get swapped out and how that is decided; if and how those policies are customizable; etc.

    I often see situations where recently-inactive pages are swapped out too soon, when there is plenty of available physical memory. This is especially problematic with Java (or any other garbage-collected environment for that matter) applications, where most of the pages have to be swapped in for each garbage collection.

  34. Hofi says:

    Thank you for the great article.

    Just a small typo,

    Testlimit is also marked large-address aware, so if you run it with the –m switch…

    should be -r switch (or the screenshot command should have been -m :)

  35. mrogi says:

    I am running Vista with 4GB of physical memory. I also have a USB drive with Readyboost enabled. Is Mark saying it is foolish to allow Windows to manage my virtual memory? Am I supposed to manually tweak my pagefile size for maximum performance?

  36. Frank Wilhoit says:

    Is it [still] true that, when the low-memory resolution dialog appears, some random thread of some random process has already been commandeered for that purpose and therefore the system must axiomatically be regarded as unstable?

  37. Jamie Hanrahan says:

    To Ben Voight:

    > Pagefiles are a hack to permit applications

    > written to process data in batches but

    > without regard to memory usage

    I don’t think of it that way. Demand-paged virtual memory OSs take advantage of the "80-20 rule" (or 90-10, or whatever): Most workloads spend most of their time accessing only a small portion of their address space. There’s no particular need to keep things whose access is not in your performance path in RAM all the time.

    Evidence for this can be found by increasing RAM while keeping the workload constant. You will reach a point of diminishing returns after which adding more RAM won’t improve performance. And you’ll find that this point occurs well before everything is in RAM.

    (Of course, a lot of code from the exe’s and dll’s you’re running will *never* be in RAM, unless you happen to execute *all* of it.)

    This is also the argument against disabling the pagefile completely. Doing so will force the OS (any virtual memory OS) to keep all pagefile-backed v.m…. or rather, everything that’s supposed to be pagefile-backed… in RAM all the time, no matter how long ago it was referenced. This of course cuts down on the RAM available for code, the file cache, etc. It’s usually a net loss. Not absolutely always, but usually. (And then of course there are the apps that simply won’t work at all without pagefile space.)

  38. Wayne Robinson says:

    Brilliant read as always Mark. I adore the work you do, the blogs you write, the apps you create and the events you speak at. I am waiting in anticipation for the 5th edition of Windows Internals to drop through my door. Sadly it’s 2 months away yet :'(

    Cheers!

  39. Ben Voigt [C++ MVP] says:

    <quote>(Of course, a lot of code from the exe’s and dll’s you’re running will *never* be in RAM, unless you happen to execute *all* of it.)</quote>

    Of course this is true, and you don’t need a pagefile for the VM system to page these out, because they are discardable sections which can simply be reread from disk.

    The 80-20 rule may hold, but the simple matter is that reading the input file into memory to process is at best as good performance as reading memory-mapped or on an as-needed basis, and usually worse.  Why?  Because the file cache will satisfy your I/O from memory anyway for workloads that fit, and for workloads that don’t fit the initial loading step reduces to a disk to disk (file to pagefile) copy of your data, after which the OS still has to read the data back in from the pagefile on demand.

    Do you agree that the "bringing the system to a halt" is not due to running out of commit, but due to excessive paging in the process of allocating up to the commit limit after RAM is exhausted?

  40. AlanH says:

    Even though my earlier comment never got posted, here’s an update for whoever is reading/moderating…

    I had mentioned that 32-bit testlimit on my 4GB XP64 system was reporting something just a bit shy of 2GB when it should’ve been reporting just shy of 4GB. I finally discovered that I was using an ancient version of testlimit that was not marked large-address aware.

    I downloaded the latest version of testlimit (v5) and it is now reporting 4GB as expected.

    Alan

  41. Pavel Lebedinsky says:

    @Ben: I’m not in charge of setting pagefile limits, I was just one of the people who reviewed proposed changes for Vista.

    I do however have some experience with debugging machines that can’t make forward progress because of resource allocation failures, so I’ll comment on this:

    > running out of commit generally causes a

    > fatal error to the application performing

    > the more allocations, allowing other

    > applications to continue normally and at full speed.

    This might be true for a short period of time, but if the leaking application keeps allocating more memory (especially if it does this by committing one page at a time), you’ll quickly get to a point where you can’t start new processes anymore. So unless you happened to have task manager running, you won’t even be able to kill the misbehaving app.

    Cleanly exiting an existing process (for example, saving a modified document in Word) will also be impossible because doing this would require allocating more memory.

    If you wait even longer, you’ll see more problems, such as screen repaint issues, not being able to switch desktops, crashes or weird behavior from apps that do not handle errors correctly, etc. Code that doesn’t allocate memory or call any external APIs might be able to continue running at full speed, but for all practical purposes the machine will be unusable.

  42. Pavel Lebedinsky says:

    By the way, there are actually 2 separate reasons why pagefiles are necessary.

    The first reason is to allow dirty pages that are never (or very rarely) referenced to be moved to disk, freeing up more RAM for other purposes.

    The other reason is to enable better use of *virtual* memory, given that physical memory is allocated on demand. Remember that when a process calls VirtualAlloc(MEM_COMMIT) there are no physical pages allocated at this time. Physical pages are only allocated when the app accesses virtual pages for the first time. This is good because it makes committing pages a relatively cheap operation, so apps can commit memory in bigger chunks, without having to worry about each page they may or may not use.

    Now, even though committing memory does not allocate physical pages, it still guarantees to the application that reading from/writing to the committed pages will never fail (or deadlock). It might be slow if other physical pages have to be moved to disk in order to make room, but it will eventually succeed.

    In order to make that guarantee the memory manager has to assume that every committed page in the system might eventually be written to. And that in turn means that there has to be enough space in the physical memory and all the pagefiles combined to hold all the resulting data. In other words, the total commit charge has to be less than the system commit limit. Once the limit is reached, the memory manager will refuse to commit any more memory, even if there is still plenty of unused (free+zeroed) physical pages, or plenty of unused space in the pagefile.

    In a sense, pagefiles are like stormwater ponds. Most of the time they are (almost) empty, but they have to be large enough in case a big storm happens.

  43. dave says:

    Good article – thanks.

    And now a carping criticism for which you personally are not responsible: why is it that Task Manager seems to want to mislabel important system metrics?

    Like, for example in XP, displaying the commit charge as ‘PF Usage’ (was ‘Mem’ in Windows 2000).

    I suppose whoever’s in charge of these things has decided that the average user can’t understand ‘commit charge’, but is lying to them going to help?  I’ve lost count of the number of times I’ve had to point out that just because Task Manager says you’re using 1GB of pagefile, it doesn’t mean you’re using 1GB of pagefile (on your system that has no pagefiles configured, even).

    dave

  44. Greg says:

    Mark – could you talk about observing the size of applications using AWE?

  45. Marc Brooks says:

    In the last paragraph, the phrase "32-bit Windows has a maximum paging file size of 16TB" surely should have been 32GB, not 32TB, right?

  46. Rik Mayell says:

    I concur with Mark regarding judging the size of the page file.  Measure the maximum commit charge and set accordingly.

    This would support the somewhat vague scientific maxim that judgement without observation or measurement is, in effect, meaningless (unless, of course, you’re Einstein.)

  47. Luke Skywalker says:

    Everyone downloaded the 5.0 Version of Testlimit?

    It seems to have a bug….

    I am running on a 32Bit WinXP SP3 Machine and when I use Testlimit with the -s or -m switch and so on it doesn’t take the MB Parameter and allocates the maximum instantly….

    Solutions?

  48. stephc_msft says:

    Regarding the comment/question above

    32-bit Windows, running in PAE mode, does have a maximum paging file size of 16TB.

    (or in practice the size of the underlying disk)

    Most 32bit systems run in PAE mode by default these days.

    Without PAE mode, the limit, for any single pagefile, is 4GB

  49. JCAB says:

    Pavel: "enable better use of *virtual* memory, given that physical memory is allocated on demand" and then "reading from/writing to the committed pages […] might be slow if other physical pages have to be moved to disk in order to make room"… and/or if the page had been swapped out so it needs to be brought back from disk.

    The crux of the problem with virtual memory is in there. What I see happen often is a system that is working fine… but where some components are chuggish to respond. I open Windows Explorer and it takes 15 seconds to show up. Or I click on the start button and it takes 5 seconds to respond. Or I interact with any application and it takes "too long" to do what I ask of it.

    It’s like part of the application’s memory is swapped out, and now Windows has to bring in the needed pages on demand, as it sees them being accessed, and oftentimes this can take seconds to complete. Running without a page file or with a small one minimizes the occurrences of this sort of issue. At least, it certainly did on Windows 2000, which was the last time I tried it, I must confess. And this sort of issue is very common in my experience, even today on Windows Vista. I think I’m leaning towards Ben on this one, with the caveat that having Pavel’s "stormwater ponds" sounds like a good thing. Then again, I’m on videogames, where responsiveness is paramount, so I might be a trifle biased. Paradoxically, most videogames tend to constantly exercise the same memory in a tight loop, so this sort of issue doesn’t crop up so often, there.

    In any case, I’ve always suspected that swapping out application memory to make space for a bigger file cache is not such a good idea in many scenarios, although it definitely improves performance for certain tasks (mostly background and batched tasks, IMHO). The question is: is the current strategy used by Windows the best strategy to provide the best user-interaction responsiveness on consumer machines? Somehow, I don’t think so. The problem is certainly very complex, so maybe I’m just deluding myself in thinking that there must be a better way.

  50. Todd Bandrowsky says:

    I seem to recall the reason for the 3x paging file recommendation was made by Intel long back in the days when the 80386 was first introduced.

  51. John Westher says:

    Typo I believe:

    The figure isn’t drawn to scale, because even 8TB (should be 8GB instead of TB) , much less 128GB, would be a small sliver. Suffice it to say that like our universe, there’s a lot of emptiness in the address space of a 64-bit process.

    Great article!

  52. Kerry C says:

    I have a couple of minor nitpicks:

    LSASS runs on every Windows-based machine, not just domain controllers (go ahead and kill the LSASS process on your computer and enjoy your reboot).

    Esentutl.exe is a database repair tool that isn’t specific to Active Directory. Its file description says "Server Database Storage Utilities".

    These are fairly common things one should know about Windows, IMO. As a result of finding those inaccuracies I have to fully test everything written here before I can trust any of it. Your conclusions are probably right, but you can see how including incorrect information can tarnish an otherwise perfect article, right?

  53. John Cairns says:

    Intel’s 32 bit processors since the 80386 all address 48 bits of logical memory (2^48) via a 16 bit segment register and 32bit offset pointer.

    IMHO, the modern generation of operating systems does not exploit this because 32 bits of addressable memory is "more than anyone would ever need" and dealing with "NEAR" and "FAR" pointers was a major fiasco in the early days of Windows coding.

    Thought I would point it out since you failed to mention this.

  54. tygrus says:

    The blurb "virtual-memory-related limits in Windows, that includes information on how to track down virtual memory hogs and how to size the paging file." is not fully covered by the article. I still can’t track down memory hogs as claimed.

    I installed Virtual Server and created a VM with 256MB RAM. My system now seems to use an extra 1GB RAM (eg. 1.4GB after vs 400MB before) that is not hidden/un-accounted for. I would have thought it would be closer to 256MB + VM server/helper.

    The two articles still don’t help to calculate the memory used by numerous OS structures, where the disk cache fits in or the problem with only 1GB for OS.

    32bit OS system calls use 32bit pointers and the memory windowing of PAE I think works for apps not OS or drivers. It’s very messy to copy data between windows of ?GB because they can’t be addressed at the same time or shared between programs (eg. OS and app).

    Other system resources (eg. handles, desktop heap) are often exhausted by loading multiple small programs (eg. IE windows) before 2GB VM is used.

    Some people are confused by "Available Physical memory" which is also used by the "system cache" and is not the total addressable RAM. You can have physical memory available but cannot allocate additional memory for OS/app because it’s in the other half (eg. 1GB OS, 1GB apps split of 2GB).

  55. Doug Piercy says:

    The section on how to size the pagefile is the most concise explanation I’ve seen on the subject and easily worth the price of admission.

    I was charged with figuring this out awhile ago for a system that consisted exclusively of boot-from-SAN iSCSI storage with dedicated pagefile volumes (to avoid SAN replication) so we didn’t want to waste expensive SAN storage space on large pagefile volumes. Most web articles on the subject are years behind the times, when everything was 32-bit and 1GB of RAM was a luxury and 4GB was unheard of.

    After much reading and observation, I basically came to the same conclusion as Mark (Peak Commit Charge + Fudge Room) with a 1GB minimum, since I remember reading that the OS generally wants at least SOME pagefile space for basic housekeeping. Can anyone explain what that housekeeping is?

    The hardest part of all this research was convincing IT monkeys, who apparently read the same magazines as the MS engineers and are firmly convinced that pagefiles must always be 1X-3X RAM, that it doesn’t work that way when you have an x64 SQL Server with 32-64GB of physical RAM!

    Thanks for a great read Mark. I’m looking forward to Windows Internals 5th Edition.

  56. Michael S. says:

    @Kerry C,

    I don’t think he even implied that LSASS was AD specific.  It’s pretty clear that it’s not to even a novice user.  That’s the same with Esentutl.  

    Giving an instance of how these things matter for the given context is right way to write a paper.  There would be no reason for Mark to talk about how these things are not useful to the discussion to make his point.

    I thought this was extremely informative.  Just my 2 cents.

  57. rakesh says:

    Thanks Mark for an informative article on Windows virtual memory. Would u please provide some information about the virtual memory limits of Windows 7

  58. Aurea Walker says:

    I simply can’t find any resources to extend or update my virtual memory and the system is well up to date with a new Windows XP Home edition. We have absolutely scoured the entire network for the fix, to absolutely no resolve. We have to have more virtual memory; we have three 128 memory sticks so that is ok, but this virtual memory being too low is driving us up a huge hill. Microsoft should have it but if they do they will not provide us any downloads for the fix. Thanks for being there for us and for responding; we do not expect to have the funds for the fix.

  59. Paul says:

    This really helps with setting the pagefile size on netbooks that use Solid State Drives with limited space.  Typically we can’t afford to waste space on an overly large page file created on a system with 2GB RAM and an 8 or 16GB drive.

  60. Rich S says:

    I am so glad this article is out, along with the comments.  It clarifies much while illustrating how complex this subject is.  I think this needs to be republished every year to counter the utter drivel on the web and magazines.

    For my part, I have been charged with configuring and supporting servers of various sizes, needs, and flavors the better part of 30 years (remember the pdp-11/70?).  I have simplified support 1000% and DR with this rule:  No paging file unless proven first that it is needed.  Paging to a disk file is almost always transient data; there is no need for OSes these days to page programs/libraries/data that it has read from disk already.  Like getting an error in your program, just one is already too many; having your application write to a page file just once is very expensive, so just don’t do it.

    To those many, MANY thousands of so called professionals who believe a server reboot should be avoided at all costs, sing to the window patching group.  A server that can automatically reboot in 20 seconds is almost always more cost effective than 1 or 3 technicians standing around a console for a couple of hours trying to get a handle on a errant application (just how do you quickly prove killing a process does not have unintended consequences).  

    I had one vendor scream at me they had to have 6x the size of physical RAM.  I asked them to write to me how long would it take for them to write and then read 48GB of paging file, and then explain how that would affect their guarantee of 500 transactions per second.  I set no paging file and we did 3000 transactions per second for life of the application (78 months).

    I have played with both 32-bit (2GB RAM) and 64-bit (4GB) Vista for more than a year; they have minimum page file set — mostly to avoid the nag pop up.  I have yet to find anything that fails or slows when it does not exist.

  61. Leon Orlov says:

    Great read Mark.  Thank you bunches.

  62. rakesh says:

    Thanks Mark for an informative article on Windows virtual memory. Would u please provide some information about the virtual memory limits of Windows 7. (This is the second time i am posting this, pls comment on it)

  63. DarekS says:

    @ John Cairns,

    Windows would gain nothing by using non-flat memory model (a model with 48-bit segment:offset addresses), because segment:offset address is always translated to 32-bit virtual address – if it’s larger than 4 GB, MMU truncates it to 32 bits, so you are still limited to 32-bit address space.

  64. Alessandro Angeli says:

    <<<I’ve always suspected that swapping out application memory to make space for a bigger file cache is not such a good idea in many scenarios […] maybe I’m just deluding myself in thinking that there must be a better way.>>>

    I work with digital video instead of videogames, but the issues are mostly the same you describe. And there used to be a better way, aka Win9X, where you were able to configure a maximum disk cache size and prevent ahead-of-time swap out. NT4 seemed to have similar settings (Mark even created a cache config utility for NT4).

    The current policy (swap out early and cache all you can) works well enough when you randomly access many small files (servers, batches, office workstations) or the application used to access very large files manages its own cache (databases).

    But when you sequentially scan a file that’s several GB in size, which takes a while, the system will end up swapping out practically every other process, including most of explorer.exe, and fill up the physical memory with useless disk cache (the same sector is never access twice). After that, using your system requires great patience. Even simply watching a DVD or looking at your holiday pictures makes it hard to multitask, and those are not niche scenarios.

    On Win9X, enabling the conservative swap usage and limiting the cache to a reasonable size for most office-like activities made all of the above disappear and the system was always responsive (even the MSDN Library viewer and VisualStudio were much faster than on Win2K/XP/2K3).

    Unfortunately, in its infinite wisdom (aka the "one size fits all" policy), MS dropped those settings in NT5/6, which is practically the only reason why I kept a Win98SE system as my main personal and dev machine until 2004.

  65. kostas says:

    Great thanks for this great guide.

    It seems though that I’ve been doing something wrong. I’ve got XP SP3 with 2 GB of RAM. So, following what Mark suggests, I opened all the programs that I use, then checked the Peak Commit Charge value, and it was about 1.8 GB. So I set the virtual memory size to min: 200, max: 1000.

    But then, after some time, with FEWER programs open, I kept getting a popup saying that the system was running out of memory and that performance may degrade. So I tried to increase its max limit in steps of 100, but even when I got to, e.g., 1400 (with peak commit charge always equal to or less than 2 GB), I still got that warning about memory. So I had to set virtual memory to Auto to make these popups stop.

    Any suggestions?

  66. Darryl Miles says:

    Hi Mark,

    Do the same guidelines apply for a Win2K8 system running a Hyper-V role?  Likewise if it’s a server-core install?

    I am wondering whether the pagefile on a Hyper-V host would have any value (other than the standard paging maintenance purposes you mention)

    Darryl

  67. Genetti says:

    Any ideas on this issue…

    I noticed that once I opened a number of applications on my Windows XP SP2 (32-bit) PC (latest updates from MS, including IE7), explorer began to choke… I could not open any more windows, and menus and options would not render anymore, even though there was plenty of free memory available: Windows Task Manager reported that I was still using less than 50% of the RAM / virtual memory that was available. After some additional investigation, I found that this sort of issue occurs on a variety of machines.

    The first system I tried had 2GB of RAM and a 1GB pagefile. Internet Explorer / File Explorer / "filling in" of prompts will STOP working when your total RAM usage is at or near 1.1GB.

    HP 7800 Dual Core PC // 2GB RAM / 1GB PF (winXP (32bit) SP2 / IE7)

    – Issue Occurs at about 1.1 GB Commit

    The main problem on machines with 2GB of system RAM is that explorer stops working at or around 1GB of RAM used, topping out at 1.1GB. You can use programs up to and past the 1GB mark, but once you open a few explorer windows you can open no more. Once this issue occurs, even some completely unrelated programs stop working. The strangest one occurs if you open a cmd window, then type ipconfig: ipconfig fails and returns no output… unless you close some IE windows first.

    Additional tests administered: (2GB RAM / 1GB PF // HP 7700)

    A) Opened Photoshop, Dreamweaver, plus a few other programs to get memory usage to 1GB. Then opened an Internet Explorer instance, and c:\program files through File Explorer. These should open fine; now try opening another 1 or 2+ Internet Explorer windows. It won’t let you open any more windows after 2 or 3. Photoshop also will not fill in forms or open new images; it does not produce an error, it just doesn’t do anything. And you’ll probably top out at 1.1GB of memory used.

    B) Opened as many Internet Explorer windows as I could. When it hit the 1.1GB mark (commit), I could open no more.

    C) Opened some programs and 33 Firefox windows; after opening c:\program files and c:\windows, then trying to open another c:\ window, I get a partially filled window (i.e. menus and other objects not rendering). So this is not an Internet Explorer issue.

    I have also tested scenarios B and C on the following machines/configurations:

    Lenovo P4 Single Core PC // 1GB RAM / 1GB PF (winXP (32bit) SP2 / IE6 and IE7)

    – Issue Occurs at about 800 MB Commit

    HP 7800 Dual Core PC // 2GB RAM / 1GB PF (winXP (32bit) SP2 / IE7)

    – Issue Occurs at about 1.1 GB Commit

    HP 7700 Dual Core PC // 4GB RAM / 1GB PF (winXP (32bit) SP2 / IE7)

    – Issue Occurs at about 1.4 GB Commit

    Generic (Intel Motherboard / Kingston RAM) INTEL DUAL CORE PC // 8GB RAM / 2GB PF (Win2003 Enterprise R2 (32bit) SP2 / IE7)

    – Issue Occurs at about 2.7 GB Commit

    Steps already tried:

    Pagefile: Various sizes, including no pagefile at all

    Boot.ini: adding the /3GB option (this did not change my results at all; I still hit a barrier)

    If anyone can shed some light on this issue or has a solution, it would be much appreciated!

  68. Larry S. says:

    @Michael s. "I don’t think he even implied that LSASS was AD specific."

    I have to agree with Kerry C. here. The statement: "Lsass.exe (which hosts Active Directory services on a domain controller)" is somewhat ambiguous. It might have been better written as: "Lsass.exe (which <among other functions> hosts Active Directory services …"

    I’m definitely a novice and the statement confused me because I know I have lsass running and I do not have AD on a single user XP Pro box. It’s a truly minor point in an excellent article but a legitimate one IMO.

  69. Will says:

    I noticed the same problem of not being able to open more IE windows after a certain number of windows are open, or after a certain period of time, on my newer PCs. I had 2 Dells running Intel C2D/2GB that have this problem, and I thought it was the PC itself. Then I built another 2 systems using AMD 64 X2 with an Abit MB and 4GB of RAM and they do the same thing. But I have 2 other older PCs running AMD Athlon XP, with 1GB on one and 768MB on the other, that do not do this. These systems are all running XP Pro SP2 (updated to SP3 lately). Still trying to figure out why it does that.

  70. Darren says:

    So I have just upgraded my laptop from 2GB RAM to 4GB on a 32-bit Vista installation.  I may move to 64-bit at some point, possibly as part of an upgrade to windows 7, but my maximum memory use is closer to 3GB than 3 1/2 GB, so there’s just no pressing need at the moment.

    Given that I now have more memory than my computer can possibly use (after subtracting various graphics, driver, and legacy sections) given a 32-bit address space, what cost is there to me of disabling the page file entirely?  Swapping in this case does not increase the total amount of memory I can use, and should not make my system more stable.  

    As far as I can see, it should only serve to make my system slower, by aggressively swapping out stuff I still want in memory.

  71. Virtigo says:

    Hi MR, thanks for the wonderful post as per usual.  I have a production SQL Dell 1950 server with 16 GB RAM and I am upgrading it to a capacity of 32 GB RAM.  It currently has the following:

    Peak Bytes: 33,325 MB

    Commit Limit: 37,062 MB

    Commit Peak: 34,269 MB

    Paging files as follows:

    C: Min: 2,048 MB

    C: Max: 4,096 MB

    D: Min: 6,144 MB

    D: Max: 10,240 MB

    P: Min: 6,144 MB (Dell MD 3000 SAN)

    P: Max: 10,240 MB (Dell MD 3000 SAN)

    Paging file current allocation: 14,336 MB

    I would like to know how I can tweak my paging files to make sure I am using the RAM optimally once I install the additional 16 GB

    Regards in advance

    Virtigo

  72. Darren says:

    Just curious, I have also noticed this limit with IE. I don’t think it is IE itself, just an interface that it uses. However, I also do not have this issue on an older machine. Is it possible that this is related to 64-bit processors? Even if the OS is 32-bit, most modern processors are 64-bit capable. Could there be something screwy happening here?

    Just as an aside: get to your limit with IE, now close the last window and open Calculator; you can open this 3 or 4 times. I have looked at handles, threads, processes, memory; nothing is coming close to its limit. (Okay, physical memory was at 1.7-1.74 GB, but that is too arbitrary for me. If it were always 1.73 then that would be a data point, but random values say something else to me.)

  73. Darren says:

    Okay, maybe my previous post was a little misleading. It looks like it is reaching the maximum number of USER handles/objects and this is causing the strange behaviour. Calculator requires a lot fewer USER handles than IE 😀

    I am interested, though, in why I get more USER handles on a machine with more RAM. I thought this was a set limit? Can there be something in the way Windows handles USER handles when more RAM is available? (Paging or not reading or whatever?)

  74. Darren says:

    😀

    Okay, this IE thing, it ain’t an IE thing.

    Correct me where I am wrong and please fill in the blanks:

    Every process in Windows (since NT) has a hard limit of 10,000 handles. Wonderful.

    Every Windows system (XP+) has a system-wide limit of around 1.3 million handles. Brilliant.

    Somewhere there is a "user" limit, and I cannot find it. If I open IE windows until nothing more runs, then close 3, start a CMD and run testlimit -h, I get 10,000. Now open another IE window and run it again: you get around 6,000. Why?! Do it again and you get maybe 3k. If these are process limits, why are they getting capped by another process?! There is another limit that I am missing. Sorry if I can’t read and it is described here already.

    Is this a limit of the desktop heap or a USER handle limit? I am experiencing this problem on a Citrix server and am trying to troubleshoot it. Headache.

    Maybe I am running in the wrong direction, but I got a warm feeling when I saw that handle limit suddenly drop.
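    (A quick way to see whether it is USER objects rather than kernel handles that are running out: the sketch below, which is illustrative and not from the article, prints the current process’s USER and GDI object counts via the documented GetGuiResources API. The default per-process quotas for both are typically 10,000, configurable in the registry, and that is a different limit from the kernel handle count that Testlimit -h exercises.)

        /* Illustrative sketch: print this process's USER and GDI object counts.
           GetGuiResources lives in user32; link with user32.lib. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE self = GetCurrentProcess();
            DWORD userObjects = GetGuiResources(self, GR_USEROBJECTS); /* windows, menus, hooks, ... */
            DWORD gdiObjects  = GetGuiResources(self, GR_GDIOBJECTS);  /* DCs, bitmaps, fonts, ...   */

            printf("USER objects: %lu\n", userObjects);
            printf("GDI objects:  %lu\n", gdiObjects);
            return 0;
        }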

  75. Richard says:

    I have a Windows 2003 R2 Enterprise server with 32 GB of RAM currently running a 32 GB pagefile.  The Peak Commit Charge is around 5 GB.  What size should I make the pagefile?

  76. Jamie Hanrahan says:

    MarkR: Regarding setting the pagefile size, I must disagree with the advice in the article.

    You suggest subtracting RAM size from the peak observed commit charge and setting the pagefile size to the difference.

    Result: your pagefile plus RAM have enough space to accommodate your peak commit charge.

    However this does not leave any room in RAM for code! Or any other mapped files. Or for the operating system’s various nonpageable allocations. Remember, "commit charge" does not include these.

    My recommendation is to set up your maximum workload, then use the very convenient performance monitor counter, Page file / %usage peak.

    Your pagefile should be large enough to keep this under 25%, 50% at worst.  Reason: lots of breathing room helps avoid internal fragmentation of the pagefile.

    You really don’t want to run the pagefile close to 100% full, especially with Vista and its much larger pagefile writes.
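    (For anyone who wants to watch that counter programmatically rather than in the Performance Monitor UI, here is a minimal sketch using the documented PDH API; the counter path and error handling are kept deliberately simple.)

        /* Minimal sketch (Vista/Server 2008 or later for PdhAddEnglishCounterW):
           read the "\Paging File(_Total)\% Usage Peak" counter. Link with pdh.lib. */
        #include <windows.h>
        #include <pdh.h>
        #include <stdio.h>

        int main(void)
        {
            PDH_HQUERY query;
            PDH_HCOUNTER counter;
            PDH_FMT_COUNTERVALUE value;

            if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
                return 1;
            if (PdhAddEnglishCounterW(query, L"\\Paging File(_Total)\\% Usage Peak",
                                      0, &counter) != ERROR_SUCCESS)
                return 1;
            if (PdhCollectQueryData(query) != ERROR_SUCCESS)
                return 1;
            if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value) != ERROR_SUCCESS)
                return 1;

            printf("Paging file %% usage peak: %.1f%%\n", value.doubleValue);
            PdhCloseQuery(query);
            return 0;
        }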

  77. Dick DeFer says:

    We recently got new Dell workstation PCs running 32-bit XP SP3 with 4 GB of RAM (dual quad-core processors).  Due to frequent memory error messages, we set virtual memory to 12280-12280 (MB).  The problem occurred when using MS Word, MS Outlook, Adobe Acrobat Professional (reading a 40 MB PDF), ArcMap, and Microstation.

    No memory problems since.  Users can’t run defrag or make system changes, so this was the administrator solution.

    Note: I was previously using a Dell Optiplex 620, same operating system, with 4 GB ram and never had virtual memory error messages.

  78. JamesW says:

    As a relative newcomer to computers I am curious,

    Given that 32-bit Windows XP/Vista/Windows 7 cannot address much more than 3GB of RAM, and that hardware considerations may reduce this, if a machine has 4GB of memory installed (e.g. 2x2GB), is it possible for the remaining memory (if any), up to 1GB, to be used as a RAMDRIVE?

    External constraints prevent me from using a 64-bit OS for now, and I feel it would be beneficial to make use of any unused portion of memory. It could be used to speed up the system, perhaps by caching commonly used files following system startup, or used for virtual machines as a container for virtual page files rather than storing these on disk.

    Thanks for any constructive help and guidance.

  79. SuperGumby says:

    No James, XP/Vista/Win7 (x86) all happily address 4GB RAM, I think you are confused about the 2/2 or 1/3 address allocation available to each process.

    If instead you are questioning that part of 4GB that cannot be used by the OS due to memory mapped devices, there’s no help here either. The device can (or cannot) be mapped above 4GB to make memory within the 4GB available but there’s not much a ‘user’ can do about this.

    /out on a limb a little

    People have crazy ideas about ramdisks, like putting their paging files on a ramdisk. What this actually does is cause the Windows memory manager to page more often, to (admittedly) faster ‘space’. It may be beneficial in some circumstances, but I don’t think anyone should generalise a rule about it.

  80. Bob says:

    I’ve seen a couple of things here on SQL, but I am still questioning the recommended pagefile size for a MS SQL server (SQL 2005, 64-bit) with 32 GB of physical RAM.  1.5 x says it should be 48 GB, which seems huge to me.  I’ve looked at the perfmon stats and the pagefile useage is usually less than 10% of my current 4 GB pagefile, so why would I need a bigger pagefile than that?

  81. David M says:

    Hello,

    The way I look at it is that a long time ago, when x64 did not exist, applying the 1.5 figure was necessary. Nowadays this is no longer needed. With that amount of memory, and with x64 (which is the most important part), 4GB of page file is more than enough.

    I strongly recommend you take a look at:

    http://support.microsoft.com/kb/889654

    Hope this clears up your question.

    Cheers.

  82. JamesW says:

    Thankyou SuperGumby,

    You are right, I am somewhat confused by the 2/2 and 1/3 rules, but this is complicated by the fact that many (though of course not all) so-called "authorities", including magazines and related authors, and manufacturers of computers and computer accessories (laptops and smaller computers in particular), state that 32-bit Windows on most if not all computers cannot address much more than 3GB of physical memory.

    There may be genuine reasons, such as chipset limitations or limitations imposed by drivers or the kernel in relation to hardware, why certain hardware may not access or address all memory above around 3GB.

    It would be helpful if the article could be updated to give examples where there might be hardware, kernel, driver or other limitations rather than just a general Operating Systems limitation.

    In my case I intend to install and use more than 4GB of RAM on hardware which supports it, as I would be dual-booting between 32-bit XP and 64-bit Linux and running virtual machines from either. The only reason for using XP is that certain software, particularly hardware support tools from manufacturers and certain applications for playback of copyrighted materials such as some DVDs and Blu-ray discs, cannot be run from Linux or within a virtual machine (as far as I am aware).

    As I can apparently address the full 4GB from within 32-bit XP, I will increase the memory allocation to the virtual machine and reduce or remove the dependency on page files/swap space for virtual machines.

  83. Raptor007 says:

    Hey, just wanted to thank you for an excellent guide!  You really cleared up some of the mystery of how Windows handles its virtual memory addressing.

    I had another question that I couldn’t quite figure out.  In 32-bit Windows, is there a maximum of 2GB (or higher with 4GT) of user address space PER process, or TOTAL for all processes?  More simply, could I run two memory-hungry applications and let them each allocate 2GB, provided I have sufficient physical and pagefile memory available?

    The reason I ask is that it would help me pick the ideal pagefile size and the 4GT setting.  If the user address space is a total for ALL applications, that would suggest 4GT would assist with any amount of physical memory and that there’s no point to having more than 4GB of physical + pagefile memory.  Conversely, if EACH process can have a full-sized user address space of its own, 4GT would be pointless on a system with 2GB or less of physical memory, and pagefiles could conceivably be well-utilized at any size (depending on how many processes you are running).

    I’m guessing based on your article that 32-bit Windows memory management is able to keep more than 4GB of total virtual address space, separated into chunks that each fit into a 32-bit space.  But I’d like a definitive answer from someone more knowledgeable in this area.

  84. Jamie E Hanrahan says:

    > In 32-bit Windows, is there a maximum of 2GB (or higher with 4GT) of user address space PER process, or TOTAL for all processes?

    The former. Hence "user address space" is also called "per-process" address space. You get 2GB of it per process. Or 3GB with the /3GB option.

    > More simply, could I run two memory-hungry applications and let them each allocate 2GB, provided I have sufficient physical and pagefile memory available?

    Yep. In fact you would not need physical + pagefile space for 2GB x nProcesses if they’re doing a lot of file mapping instead of VirtualAlloc and similar – see below.
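    (If you want to see that for yourself, here is a rough sketch in the spirit of the Testlimit -r test, not the actual tool: it reserves address space in 1 MB chunks until VirtualAlloc fails. Run two copies side by side on 32-bit Windows and each one gets its own roughly 2GB, or about 3GB for a large-address-aware build under 4GT.)

        /* Rough sketch: reserve 1 MB regions of this process's own address
           space until VirtualAlloc fails, then report the total. MEM_RESERVE
           consumes address space but no physical memory or pagefile. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            SIZE_T chunk = 1 << 20;   /* 1 MB per reservation */
            SIZE_T total = 0;

            while (VirtualAlloc(NULL, chunk, MEM_RESERVE, PAGE_NOACCESS) != NULL)
                total += chunk;

            printf("Reserved %lu MB of address space before VirtualAlloc failed\n",
                   (unsigned long)(total >> 20));
            return 0;
        }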

    > The reason I ask is that it would help me pick the ideal pagefile size and the 4GT setting.

    The ideal thing is to stop worrying about "ideal" pagefile size – just make it big enough to keep actual pagefile usage fairly low. I like to see it below 50% absolute worst case, ideally below 25%. With modern hard drive prices this should not be difficult.

    > Conversely, if EACH process can have a full-sized user address space of its own, 4GT would be pointless on a system with 2GB or less of physical memory,

    No, not at all. Not everything has to be paged in (that is, in physical memory) just because it’s defined.

    Remember that the whole point of paging and virtual memory systems (well, one of the main points) is something akin to a 90/10 rule: most programs spend 90% of their time referencing 10% of the address space they have defined. The rest can be kept out on disk. Or 80/20, or whatever – but for almost all programs the ratio is not much lower than that.

    For example, if you never use the "spell check" function in Word, that stuff never gets paged in. But if it was linked with the app at link time, it occupies virtual address space whether it’s used or not.

    So a process might take advantage of the (strangely named) "4GT" option and use up to 3GB of v.a.s., but to run efficiently might need only 300 MB of RAM ("working set" in the Vista Task Manager). You might have several processes like that and they would likely be just fine in 2GB of RAM.

    Another point re. pagefile size is that code (as opposed to process-private writeable data) normally never gets paged out to the paging file – but of course when it’s being executed it does have to live in RAM and it does occupy process virtual address space. The total virtual address space defined by a process includes both "pagefile-backed regions" (also called "process-private", "private bytes", and "committed" memory) and "memory mapped regions". The latter is how program code is accessed, and it can also be used for data files.

    Thus the total v.a.s. possible on a system is NOT limited to RAM + pagefile size; it may be much larger. The "extra" is in all the mapped files, which include every exe and dll currently in use.
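    (To make the "mapped regions" point concrete, a small illustrative sketch with a placeholder file name: the mapped view below occupies the process’s virtual address space, but its backing store is the file itself, so a read-only view like this adds nothing to the commit charge against RAM + pagefile.)

        /* Sketch (file name is a placeholder): map a data file into the address
           space. Pages are faulted in from the file on first touch. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE file = CreateFileW(L"C:\\temp\\big.dat", GENERIC_READ, FILE_SHARE_READ,
                                      NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (file == INVALID_HANDLE_VALUE) return 1;

            HANDLE section = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
            if (section == NULL) return 1;

            /* Map the whole file as a read-only view. */
            const unsigned char *view = MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0);
            if (view == NULL) return 1;

            printf("First byte: 0x%02x\n", view[0]);

            UnmapViewOfFile(view);
            CloseHandle(section);
            CloseHandle(file);
            return 0;
        }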

    > I’m guessing based on your article that 32-bit Windows memory management is able to keep more than 4GB of total virtual address space, separated into chunks that each fit into a 32-bit space.  But I’d like a definitive answer from someone more knowledgeable in this area.

    Yes. You can demonstrate this easily with the Performance Monitor tool. Click the big Plus sign (Add counter to chart), select the Process class of objects, select the "virtual bytes" counter (this includes both pagefile-backed and mapped address regions), and select the "Total" instance. On any reasonably busy system you will likely get a number quite a bit larger than 4GB.

    The URL I’ve given here is not my URL, but is a link to a forum post at arstechnica.com that illustrates how the per-process address spaces look in a memory map. Also read the next post by the same author (two posts down).

  85. Raptor007 says:

    Thanks for the great post Jamie!

    You bring up an excellent point about applications allocating more memory than they will use frequently, so I think I will try setting the 4GT on my desktop PC to 2.5/1.5, even though it has only 2GB RAM.

    I’ve given both of my machines fixed 4095MB contiguous pagefiles, so regardless of how much or little paging is necessary at any time, there should be no fragmentation caused by the pagefile changing sizes.

    Now, one last question… is Windows smart enough to avoid paging out when there’s plenty of physical memory to go around?  In other words, is there any disadvantage to having a large pagefile?

  86. Jamie Hanrahan says:

    > is Windows smart enough to avoid paging out when there’s plenty of physical memory to go around?

    In general yes. If free RAM is plentiful processes will be allowed to grow their working set (roughly "what they have paged in") above the usual limits. If RAM later becomes scarce, one of the first reclamation mechanisms is that these "extended" processes are shrunk back down.

    However that little experiment with the virtual bytes counter should show you that you can’t expect to keep everything in RAM all the time, or even everything that’s useful in RAM.

    The page I/O rates, also visible in Performance Monitor, will tell you how much paging is happening, but it’s very difficult to tell how much paging is due to low memory conditions and how much is due to the fact that, in a virtual memory OS, paging happens. All code and pre-initialized data is brought in via paging, for example. So are the contents of all data files that are opened without bypassing the file cache.

    Even the page write rate is not directly useful, because you don’t know how many of the pages written are going to be needed again soon, or ever.

    A "page re-read rate" would be a really useful counter to have: How many pages are being faulted in from disk that were previously pushed out of RAM? Alas this counter does not exist and I know of no way to calculate or infer it from existing data.

    One other hint: Your paging I/O rates do not reflect only the pagefile, because all mapped files (exe’s, dll’s, data files accessed via the file cache as well as through direct file mapping) are read and, if appropriate, written by the pager. If you want to know the page I/O rates to just the pagefile, the only way I know of is to put the pagefile by itself on a partition and then use the partition (logical disk) I/O counters. And this is about the only good reason for putting the pagefile in a partition of its own on a multi-partition disk.

    > In other words, is there any disadvantage to having a large pagefile?

    This is really a larger question, but no, not to my knowledge. In particular, increasing the size of the pagefile will not "attract" more page-out activity than would otherwise occur, all things being equal… and it will help keep the internal fragmentation of the pagefile low.

  87. Gordon says:

    Utmost respect to Mark, but I’d like to invite him round to my house where I’ll demonstrate two scenarios:

    1) Vista Home Premium used as a living room Media Center (2GB, page file enabled).

    See the sluggish response!

    Listen to the almost constant thrashing of the hard disk!

    Hear the wailing of my wife who can’t understand why the system is so slow sometimes!

    Witness the grimace on my face as I’m forced to make excuses for the damn thing!

    2) Vista Home Premium used as a living room Media Center (2GB, page file disabled).

    Watch as new browser tabs open in an instant!

    Marvel at the lack of noise from the box in the corner!

    See the contented smile on my wife’s face!

    Observe the peace and love in my home!

    Now this box only ever runs Media Center, Chrome, LogMeIn, and anti-virus, so the memory demand doesn’t really vary much and as such may be a bit of a marginal case, but the fact is that running without a page file on this machine makes the whole thing far more pleasant to use.

    For a normal machine I see the point, but disabling the page file on my media center was like swapping in an SSD; it instantly became quieter and more responsive.

    Note that I had to change a couple of registry entries to make this work without constant dire warnings, details here http://superuser.com/questions/41789/how-to-suppress-low-memory-warnings-in-vista-home-premium-when-running-without

  88. Chris Allen says:

    > but what do i do with the pagefile on a 64-bit server 2003 system with 32gb ram, with a memory-intensive application? … let's call the application … exchange 2k7 :-)

    > Wednesday, November 19, 2008 7:57 AM by robad

    Was this ever answered? I do not see the answer in the thread. My scenario would need this answered as well. What we are trying to do is get a full Ctrl-ScrollLock memory dump due to some system hangs we have been experiencing. With 32GB of RAM on the system, for a full dump to be possible, besides setting CrashDumpEnabled in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl and CrashOnCtrlScroll in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters in the registry, we would also need a page file equal to RAM + 100MB, per Microsoft's own performance team. Is this accurate? Are there any pitfalls to setting a page file to this size?

  89. Another Voice says:

    One question. I have 4GB installed, the BIOS detects them, and Windows XP 32-bit still has the 2GB user space limitation. OK. I know about virtual address space, I am a C++ programmer. Assuming I have 4 apps using 1GB of memory each (let's assume a huge working set of unpaged memory), shouldn't the computer be able to profit from 4GB of physical RAM vs 2GB of physical RAM?

    thanks for answering

  90. Cuong says:

    Replying to "Another Voice".

    We need to keep things straight between virtual address space, physical memory limit, process size limit, etc.

    Your computer fully benefits from the 4G of physical mem (2G is allocated to apps, and 2G to the kernel).  Although at the default, the 2G allocated to the kernel may not be fully used. This is a question of efficient use of physical memory when using the default settings of an OS, such as XP, and hence this type of discussion.

    We have a default 2G user space (for physical mem), but you can push it up to 3G so that you can make use of the additional 1G of physical mem for app processes. The kernel uses the remaining physical mem, which is bounded between 3G and 4G.

    The four apps each use 1G of address space in the virtual address range.  The computer still benefits from the available 4G of physical memory. The kernel has the last 1G and the four apps share the use of the first 3G (minus some for BIOS and device drivers) of physical mem.  The only difference is that all 4x1G apps cannot be loaded into physical memory at the same time, because all your apps can only be loaded into the first 3G of mem.  So if there are no shared resources (libs, etc.), the round-robin and LRU algorithms would control which libs and files will be paged out at any instant of time.  If each of the 4 apps has only a single process of size 1G and none of the files are shared, then only 3 apps can be in physical memory at any instant of time (this does not necessarily mean that only 3 apps can be 'running' at the same time; in fact all 4 can be running at the same time, but only 3 can be in physical memory at any time).

  91. Shawnb says:

    I am using Process Explorer and having an issue.  I am trying to calculate the current commit charge.  When I go into Add Columns I do not see "commit charge".  Which values do I add to get the commit charge?

  92. Victor says:

    Very, very great post. But Mark, there's a question concerning this section: "In reality, not quite all of physical memory counts toward the commit limit since the operating system reserves part of physical memory for its own use". For this reason, the system commit limit should be a little less than the amount of physical memory plus page files.

    But in my Windows 7 system, I have 1,833,392KB of physical memory (from Process Explorer) as well as a page file of also 1,833,392KB. At the same time, Process Explorer shows a commit limit of 3,666,784KB, exactly double 1,833,392.

    And another implicit question: if the commit limit is equal to physical memory plus page files, how could it be that "Not all the virtual memory that a process allocates counts toward the commit limit"? Where does that other virtual memory come from, e.g. reserved virtual memory?
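    (A quick way to check these numbers outside Process Explorer, as an illustrative sketch rather than anything from the article: the documented GetPerformanceInfo API reports the commit total, commit limit, and physical memory, all in pages.)

        /* Sketch: print the system commit numbers being compared above.
           Values are in pages, so multiply by PageSize. Link with psapi.lib. */
        #include <windows.h>
        #include <psapi.h>
        #include <stdio.h>

        int main(void)
        {
            PERFORMANCE_INFORMATION pi = { sizeof(pi) };
            if (!GetPerformanceInfo(&pi, sizeof(pi)))
                return 1;

            double mb = pi.PageSize / (1024.0 * 1024.0);
            printf("Physical total: %.0f MB\n", pi.PhysicalTotal * mb);
            printf("Commit total:   %.0f MB\n", pi.CommitTotal  * mb);
            printf("Commit limit:   %.0f MB\n", pi.CommitLimit  * mb);
            printf("Commit peak:    %.0f MB\n", pi.CommitPeak   * mb);
            return 0;
        }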

  93. Ramazan Can says:

    as always – thanks for your time and the very helpful article 😉

    Best Regards from Germany

    Ramazan

  94. Ralph Finch says:

    Great article, but what about multiple pagefiles, which Windows allows?  How are those used?  I have this question because I use an SSD as the boot drive. Apparently it's recommended not to have a pagefile on an SSD, which has a write-limited life. Alternatives are RAM disks or another spinner (traditional HDD). But I have a netbook with no drive other than the SSD, and only 2GB of RAM. On that I have 2 pagefiles, one in a RAM disk, the other on the SSD.  I'd like to know which one will be used first and most.

  95. Josef Makower says:

    Practical comment: Photoshop CS3 refuses to run without a PF.

  96. Cr1ms0n says:

    So if I have 1.5 GB of RAM in my HP laptop, what should the custom initial size and maximum size be, considering I want to get the best I can out of my computer?

    Thanks in advance..

  97. Whatnow says:

    As a non-enterprise user:

    If I have 8GB RAM, the 1.5 "rule" isn't necessary so I won't have to waste 12GB just for the pagefile, right?

    I'd be better off setting it to 4GB (or 6GB if I run into problems)?

    Also what about SSDs?

    An MSDN blog advises to leave it on your SSD, as it is a perfect match:

    blogs.msdn.com/…/support-and-q-a-for-solid-state-drives-and.aspx

    Corsair however advises against it because of flash wear:

    http://www.corsair.com/…/How_to_Optimize_Your_SSD_Boot_Drive.pdf

    There it states you should move it to a spinning HDD, as the performance decrease is negligible but the SSD will be spared (which in turn should mean higher performance over a longer period of time, as it isn't worn down as quickly).

    On the other hand, this is probably meant for 60 and 120GB drives that run out of usable flash earlier. I'd like to go with a 256GB drive, so shouldn't wear be less of a problem here?

    So…8GB RAM+256GB SSD = ?

    Leave a 4 or 6GB pagefile on the SSD, move it to the HDD (and possibly increase it to 12GB since enough space is available), or disable it completely (probably not so hot, this idea)?

    Any advice is welcome.

  98. Master D. Homie says:

    After reading this article, I have been running a machine with Windows 7 Ultimate x64 with 8GB of RAM and a 16MB paging file. The virtual lack of a paging file has shown me that it's not always necessary. I'm now back reading this article again, contemplating the use of a reasonably small paging file for a Windows Server 2008 R2 machine with SQL Server 2008 R2 and 24GB of RAM running under Hyper-V Server 2008 R2.

  99. Robert Carnegie says:

    On my last machine, I put page file into a separate partition so that it wasn't included in my system partition backups, and I used FAT32 because I wanted to.  I also decided that the file might as well be 4 gigabytes, because that was plenty, I could afford to give up the disk space, and on FAT32 it couldn't be any bigger.

    Now I have a new machine to set up…  and someone is telling me that I should still have at least 300 MB of pagefile on volume C.  Is there a justification for that?  Something to do with successful system crash dumps, maybe?  I previously did what I liked and it seemed to work OK.

  100. DW says:

    We are using software that is tied to a 32-bit Windows operating system, and we are running into major issues as we hit the PVM limits of the system.  We already have large address aware enabled to allocate a 3GB address space for our application.  Is there anything we can do to work around these limits (from the OS perspective), or are we stuck with our only options being an upgrade to a 64-bit OS or breaking up the application into multiple processes?

    Thanks in advance!

    DW

  101. TomKan says:

    @DW: If you want to use more than 3GB (3.3 with the /USERVA= switch) from within an x86 application, you should EITHER use multiple processes OR use some mechanism, e.g. AWE, to indirectly manage additional memory pages.

    (Note: I'm not Mark Russinovich, so I can be wrong :-)
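    (To make the AWE suggestion concrete, here is a rough editorial sketch of the mechanism, not production code: the process reserves a window of virtual address space with MEM_PHYSICAL, allocates physical pages that live outside its 2-3GB address space, and maps them into the window on demand. It requires the "Lock pages in memory" privilege, and the sizes here are kept tiny for brevity.)

        /* Rough AWE sketch (fails without SeLockMemoryPrivilege; error handling
           abbreviated). */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);

            ULONG_PTR pageCount = (16 * 1024 * 1024) / si.dwPageSize;  /* 16 MB of physical pages */
            ULONG_PTR *pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                                     pageCount * sizeof(ULONG_PTR));

            /* 1. Allocate physical pages outside the normal commit path. */
            if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns)) {
                printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
                return 1;
            }

            /* 2. Reserve a window of virtual address space to view them through. */
            void *window = VirtualAlloc(NULL, pageCount * si.dwPageSize,
                                        MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

            /* 3. Map the physical pages into the window; remap different page
                  sets as needed to reach more physical memory than fits in the
                  2-3GB address space. */
            if (window && MapUserPhysicalPages(window, pageCount, pfns)) {
                ((char *)window)[0] = 42;                        /* touch the mapped memory */
                MapUserPhysicalPages(window, pageCount, NULL);   /* unmap */
            }

            FreeUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns);
            return 0;
        }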

  102. Brian says:

    Mark,

    I've read this series of articles and the Microsoft Windows Internals book. Thank you very much. I haven't found anything this informative since Peter Norton's books (way back when).

    I have an issue I can't let go.

    By design, 32-bit Windows' total virtual address space is 4GB. Thus, for me, pagefile size + physical RAM must be less than or equal to 4GB total. If the physical RAM is 4GB and the pagefile is 2GB, then nothing in the pagefile is used by the system during normal operations. The pagefile would only be used for a crash dump. Of course, this "might" change if using an application that is AWE aware.

    By design, the x64 total virtual address space is 16TB (8TB user / 8TB system). As you say in this article, "Windows can have paging files that are up to 16TB in size". KB article 294418 lists the maximum paging file size as 256TB. They assume the 16TB limit and allow for 16 pagefiles. This doesn't make any sense to me, as the virtual space is limited by the operating system to 16TB. I've seen this 256TB pagefile notation several times. I can only think that you would need a pagefile over 16TB if x64 supported some sort of AWE mechanism. So far, I haven't heard about one.

    If you are following up at this late date, I want to understand. Again Thanks.