Inside Vista SP1 File Copy Improvements

Windows Vista SP1 includes a number of enhancements over the original Vista release in the areas of application compatibility, device support, power management, security and reliability. You can see a detailed list of the changes in the Notable Changes in Windows Vista Service Pack 1 whitepaper that you can download here. One of the improvements highlighted in the document is the increased performance of file copying for multiple scenarios, including local copies on the same disk, copying files from remote non-Windows Vista systems, and copying files between SP1 systems. How were these gains achieved? The answer is a complex one and lies in the changes to the file copy engine between Windows XP and Vista and further changes in SP1. Everyone copies files, so I thought it would be worth taking a break from the “Case of…” posts and diving deep into the evolution of the copy engine to show how SP1 improves its performance.


Copying a file seems like a relatively straightforward operation: open the source file, create the destination, and then read from the source and write to the destination. In reality, however, the performance of copying files is measured along several dimensions: accurate progress indication, CPU usage, memory usage, and throughput. In general, optimizing one area causes degradation in another. Further, there is semantic information not available to copy engines that could help them make better tradeoffs. For example, if they knew that you weren’t planning on accessing the target of the copy operation, they could avoid caching the file’s data in memory; if they knew that the file was going to be immediately consumed by another application, or, in the case of a file server, shared with client systems, they would aggressively cache the data on the destination system.


File Copy in Previous Versions of Windows

In light of all the tradeoffs and imperfect information available to it, the Windows file copy engine tries to handle all scenarios well. Prior to Windows Vista, it took the straightforward approach of opening both the source and destination files in cached mode and marching sequentially through the source file reading 64KB (60KB for network copies because of an SMB1.0 protocol limit on individual read sizes) at a time and writing out the data to the destination as it went. When a file is accessed with cached I/O, as opposed to memory-mapped I/O or I/O with the no-buffering flag, the data read or written is stored in memory, at least until the Memory Manager decides that the memory should be repurposed for other uses, including caching the data of other files.
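The pre-Vista loop can be reduced to a few lines. This is an illustrative Python sketch, not the actual shell code (which calls ReadFile/WriteFile on cached file handles from BaseCopyStream); the function name and file-object interface are mine:

```python
import io

CHUNK = 64 * 1024  # 60KB for network copies, due to the SMB 1.0 read-size limit

def copy_sequential(src, dst, chunk=CHUNK):
    """March sequentially through the source, writing each chunk as it's read.

    With cached handles, reads may be satisfied from memory by the Cache
    Manager's read-ahead, and writes are flushed to disk later by its
    write-behind thread.
    """
    copied = 0
    while True:
        data = src.read(chunk)
        if not data:
            break
        dst.write(data)
        copied += len(data)
    return copied
```

The interesting behavior is in what this loop doesn't do: it never waits for data to reach the disk, because the cached writes complete as soon as the data is in memory.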


The copy engine relied on the Windows Cache Manager to perform asynchronous read-ahead, which essentially reads the source file in the background while Explorer is busy writing data to a different disk or a remote system. It also relied on the Cache Manager’s write-behind mechanism to flush the copied file’s contents from memory back to disk in a timely manner so that the memory could be quickly repurposed if necessary, and so that data loss is minimized in the face of a disk or system failure. You can see the algorithm at work in this Process Monitor trace of a 256KB file being copied on Windows XP from one directory to another with filters applied to focus on the data reads and writes:



Explorer’s first read operation at event 0 of data that’s not present in memory causes the Cache Manager to perform a non-cached I/O, which is an I/O that reads or writes data directly to the disk without caching it in memory, to fetch the data from disk at event 1, as seen in the stack trace for event 1:



In the stack trace, Explorer’s call to ReadFile is at frame 22 in its BaseCopyStream function and the Cache Manager invokes the non-cached read indirectly by touching the memory mapping of the file and causing a page fault at frame 8.


Because Explorer opens the file with the sequential-access hint (not visible in trace), the Cache Manager’s read-ahead thread, running in the System process, starts to aggressively read the file on behalf of Explorer at events 2 and 3. You can see the read-ahead functions in the stack for event 2:



You may have noticed that the read-ahead reads are initially out of order with respect to the original non-cached read caused by the first Explorer read, which can cause disk head seeks and slow performance. However, Explorer stops causing non-cached I/Os once it catches up with the data already read by the Cache Manager, and from then on its reads are satisfied from memory. The Cache Manager generally stays 128KB ahead of Explorer during file copies.


At event 4 in the trace, Explorer issues the first write and then you see a sequence of interleaved reads and writes. At the end of the trace the Cache Manager’s write-behind thread, also running in the System process, flushes the target file’s data from memory to disk with non-cached writes.


Vista Improvements to File Copy

During Windows Vista development, the product team revisited the copy engine to improve it for several key scenarios. One of the biggest problems with the engine’s implementation is that for copies involving lots of data, the Cache Manager write-behind thread on the target system often can’t keep up with the rate at which data is written and cached in memory. That causes the data to fill up memory, possibly forcing other useful code and data out, and eventually the target system’s memory becomes a tunnel through which all the copied data flows at a rate limited by the disk.


Another problem they noted was that when copying from a remote system, the file’s contents are cached twice on the local system: once as the source file is read and a second time as the target file is written. Besides causing memory pressure on the client system for files that likely won’t be accessed again, involving the Cache Manager introduces the CPU overhead of managing its file mappings of the source and destination files.


A limitation of the relatively small and interleaved file operations is that the SMB file system driver, the driver that implements the Windows remote file sharing protocol, doesn’t have opportunities to pipeline data across high-bandwidth, high-latency networks like WANs. Every time the local system waits for the remote system to receive data, the data flowing across the network drains and the copy pays the latency cost as the two systems wait for each other’s acknowledgement and next block of data.


After studying various alternatives, the team decided to implement a copy engine that tended to issue large asynchronous non-cached I/Os, addressing all the problems they had identified. With non-cached I/Os, copied file data doesn’t consume memory on the local system, preserving memory’s existing contents. Large asynchronous I/Os allow for the pipelining of data across high-latency network connections, and CPU usage is decreased because the Cache Manager doesn’t have to manage its memory mappings; inefficiencies in the original Vista Cache Manager’s handling of large I/Os also contributed to the decision to use non-cached I/Os. They couldn’t make I/Os arbitrarily large, however, because the copy engine needs to read data before writing it, and performing reads and writes concurrently is desirable, especially for copies to different disks or systems. Large I/Os also pose challenges for providing accurate time estimates to the user because there are fewer points at which to measure progress and update the estimate. The team did note a significant downside of non-cached I/Os, though: during a copy of many small files, the disk head constantly moves around the disk, first to a source file, then to the destination, back to another source, and so on.


After much analysis, benchmarking and tuning, the team implemented an algorithm that uses cached I/O for files smaller than 256KB in size. For files larger than 256KB, the engine relies on an internal matrix to determine the number and size of non-cached I/Os it will have in flight at once. The number ranges from 2 for files smaller than 2MB to 8 for files larger than 8MB. The size of the I/O is the file size for files smaller than 1MB, 1MB for files up to 2MB, and 2MB for anything larger.
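The matrix itself is internal to Windows, but the thresholds quoted above can be captured in a small function. This is a sketch of the stated rules only; the in-flight counts for files between 2MB and 8MB aren't given here, so the mid-range value below is a placeholder:

```python
MB = 1024 * 1024

def vista_io_plan(file_size):
    """Return (in-flight I/O count, I/O size) per the thresholds described above.

    Files of 256KB or less use cached I/O instead of this non-cached scheme.
    """
    if file_size <= 256 * 1024:
        return ("cached", None)
    # I/O size: whole file below 1MB, 1MB up to 2MB, 2MB beyond that
    if file_size < 1 * MB:
        io_size = file_size
    elif file_size <= 2 * MB:
        io_size = 1 * MB
    else:
        io_size = 2 * MB
    # In-flight count: 2 below 2MB, 8 above 8MB; mid-range value is a guess
    if file_size < 2 * MB:
        count = 2
    elif file_size > 8 * MB:
        count = 8
    else:
        count = 4  # assumption: the article doesn't specify this range
    return (count, io_size)
```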


To copy a 16MB file, for example, the engine issues eight 2MB asynchronous non-cached reads of the source file, waits for the I/Os to complete, issues eight 2MB asynchronous non-cached writes of the destination, waits again for the writes to complete, and then repeats the cycle. You can see that pattern in this Process Monitor trace of a 16MB file copy from a local system to a remote one:
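The batch cycle just described can be simulated by recording the order of operations rather than performing real I/O (the function and event format here are invented for illustration):

```python
MB = 1024 * 1024

def copy_cycle_events(file_size, batch=8, io_size=2 * MB):
    """Record the (operation, offset) sequence of the batched copy cycle:
    issue a batch of reads, wait for all of them, issue the matching
    writes, wait again, and repeat until the file is exhausted."""
    events = []
    offset = 0
    while offset < file_size:
        batch_offsets = []
        for _ in range(batch):
            if offset >= file_size:
                break
            events.append(("read", offset))
            batch_offsets.append(offset)
            offset += io_size
        # ...wait for all reads to complete...
        for off in batch_offsets:
            events.append(("write", off))
        # ...wait for all writes to complete...
    return events
```

For a 16MB file this yields exactly one cycle of eight reads followed by eight writes, matching the trace pattern described above.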



While this algorithm is an improvement over the previous one in many ways, it does have some drawbacks. One that occurs sporadically on network file copies is out-of-order write operations, one of which is visible in this trace of the receive side of a copy:



Note how the write operation offsets jump from 327,680 to 458,752, skipping the block at offset 393,216. That skip causes a disk head seek and forces NTFS to issue an unnecessary write operation to the skipped region to zero that part of the file, which is why there are two writes to offset 393,216. You can see NTFS calling the Cache Manager’s CcZeroData function to zero the skipped block in the stack trace for the highlighted event:



A bigger problem with using non-cached I/O is that performance can suffer in publishing scenarios. If you copy a group of files to a file share that represents the contents of a Web site for example, the Web server must read the files from disk when it first accesses them. This obviously applies to servers, but most copy operations are publishing scenarios even on client systems, because the appearance of new files causes desktop search indexing, triggers antivirus and antispyware scans, and queues Explorer to generate thumbnails for display on the parent directory’s folder icon.


Perhaps the biggest drawback of the algorithm, and the one that has caused many Vista users to complain, is that for copies involving a large group of files between 256KB and tens of MB in size, the perceived performance of the copy can be significantly worse than on Windows XP. That’s because the previous algorithm’s use of cached file I/O lets Explorer finish writing destination files to memory and dismiss the copy dialog long before the Cache Manager’s write-behind thread has actually committed the data to disk; with Vista’s non-cached implementation, Explorer is forced to wait for each write operation to complete before issuing more, and ultimately for all copied data to be on disk before indicating a copy’s completion. In Vista, Explorer also waits 12 seconds before making an estimate of the copy’s duration and the estimation algorithm is sensitive to fluctuations in the copy speed, both of which exacerbate user frustration with slower copies.


SP1 Improvements

During Vista SP1’s development, the product team decided to revisit the copy engine to explore ways to improve both the real and perceived performance of copy operations for the cases that suffered in the new implementation. The biggest change they made was to go back to using cached file I/O again for all file copies, both local and remote, with one exception that I’ll describe shortly. With caching, perceived copy time and the publishing scenario both improve. However, several significant changes in both the file copy algorithm and the platform were required to address the shortcomings of cached I/O I’ve already noted.


The one case where the SP1 file copy engine doesn't use caching is for remote file copies, where it prevents the double-caching problem by leveraging support in the Windows client-side remote file system driver, Rdbss.sys. It does so by issuing a command to the driver that tells it not to cache a remote file on the local system as it is being read or written. You can see the command being issued by Explorer in the following Process Monitor capture:



Another enhancement for remote copies is pipelined I/O in the SMB2 file system driver, srv2.sys, which is new to Windows Vista and Windows Server 2008. Instead of being limited to 60KB I/Os across the network like the original SMB implementation, SMB2 pipelines 64KB I/Os: when it receives a large I/O from an application, it issues multiple 64KB I/Os concurrently, allowing the data to stream to or from the remote system with fewer latency stalls.
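The effect of pipelining can be illustrated by splitting one large application I/O into the 64KB wire requests that can all be in flight at once (a sketch only; actual SMB2 request sizing is negotiated and credit-based):

```python
SMB2_CHUNK = 64 * 1024  # per-request wire size described above

def pipeline_requests(offset, length, chunk=SMB2_CHUNK):
    """Break one large I/O into chunk-sized (offset, length) wire requests.

    Because the requests are issued concurrently rather than one at a
    time, a high-latency link stays full instead of draining between
    each request/acknowledgement round trip.
    """
    reqs = []
    end = offset + length
    while offset < end:
        reqs.append((offset, min(chunk, end - offset)))
        offset += chunk
    return reqs
```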


The copy engine also issues four initial I/Os of sizes ranging from 128KB to 1MB, depending on the size of the file being copied, which triggers the Cache Manager read-ahead thread to issue large I/Os. A platform change made to the Cache Manager in SP1 has it perform larger I/Os for both read-ahead and write-behind. The larger I/Os are only possible because of work done in the original Vista I/O system to support I/Os larger than 64KB, which was the limit in previous versions of Windows. Larger I/Os also improve performance on local copies because there are fewer disk accesses and seeks, and they enable the Cache Manager write-behind thread to better keep up with the rate at which memory fills with copied file data. That reduces, though doesn’t necessarily eliminate, the memory pressure that causes active memory contents to be discarded during a copy. Finally, for remote copies the large I/Os let the SMB2 driver use pipelining. The Cache Manager issues read I/Os that are twice the size of the I/O issued by the application, up to a maximum of 2MB on Vista and 16MB on Server 2008, and write I/Os of up to 1MB in size on Vista and up to 32MB on Server 2008.
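The read-ahead sizing rule in that last sentence reduces to a one-liner (a sketch of the stated rule only; the real Cache Manager logic has more inputs than this):

```python
MB = 1024 * 1024

def read_ahead_size(app_io_size, server=False):
    """Double the application's read size, capped per SKU:
    2MB on Vista, 16MB on Server 2008."""
    cap = 16 * MB if server else 2 * MB
    return min(2 * app_io_size, cap)
```

So Explorer's 1MB reads in the trace below produce exactly the 2MB read-ahead shown.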


This trace excerpt of a 16MB file copy from one SP1 system to another shows 1MB I/Os issued by Explorer and a 2MB Cache Manager read-ahead, which is distinguished by its non-cached I/O flag:



Unfortunately, the SP1 changes, while delivering consistently better performance than previous versions of Windows, can be slower than the original Vista release in a couple of specific cases. The first is when copying to or from a Server 2003 system over a slow network. The original Vista copy engine would deliver a high-speed copy, but, because of the out-of-order I/O problem I mentioned earlier, trigger pathological behavior in the Server 2003 Cache Manager that could cause all of the server’s memory to be filled with copied file data. The SP1 copy engine changes avoid that, but because the engine issues 32KB I/Os instead of 60KB I/Os, the throughput it achieves on high-latency connections can be roughly half of what the original Vista release achieved.


The other case where SP1 might not perform as well as original Vista is for large file copies on the same volume. Since SP1 issues smaller I/Os, primarily to allow the rest of the system to have better access to the disk and hence better responsiveness during a copy, the number of disk head seeks between reads from the source and writes to the destination files can be higher, especially on disks that don’t avoid seeks with efficient internal queuing algorithms.


One final SP1 change worth mentioning is that Explorer makes copy duration estimates much sooner than the original Vista release and the estimation algorithm is more accurate.



File copying is not as easy as it might first appear. The product team took the feedback they got from Vista customers very seriously and spent hundreds of hours evaluating different approaches and tuning the final implementation to restore most copy scenarios to at least the performance of previous versions of Windows and to drastically improve some key scenarios. The changes apply both to Explorer copies and to ones initiated by applications using the CopyFileEx API. You’ll see the biggest improvements over older versions of Windows when copying files on high-latency, high-bandwidth networks, where the large I/Os, SMB2’s I/O pipelining, and Vista’s TCP/IP stack receive-window auto-tuning can literally turn what would be a ten-minute copy on Windows XP or Server 2003 into a one-minute copy. Pretty cool.

Comments (195)
  1. The improvements apply to the Shell and the CopyFileEx engine. I’ve updated the conclusion to indicate that.

  2. Anonymous says:

    Apologies Eric.  That was for Norman Diamond.

  3. Anonymous says:

    Regarding Previous Versions, I agree, that’s a major selling point of Vista, and something that sets Microsoft ahead of its most significant competitors.  I’ve never understood why versioning hasn’t caught on more on other OSes and consider Microsoft’s steps in this direction an instance of real leadership.  With that said, it could be improved, of course.  In fact, Microsoft could do worse than to clone VMS’s versioning behavior as precisely as possible for Seven and worry about more innovative improvements subsequently.  VMS has had this since, approximately, the stone age, and it’s really nice, but other systems (which in pretty much all other ways are much more up-to-date) for some odd reason have never really adopted it, until Vista.  OTOH, unless I have misunderstood badly, it’s not really an Explorer function (at least, I should hope not) but part of the filesystem driver or somesuch, so there may not be a lot that the Explorer team can do about it, other than handling how versions are represented in the GUI.

    Stephan, that’s great to hear.  Not stopping in the middle of a large copy operation just because one file (usually one you don’t actually need; e.g., when copying Documents and Settings for backup purposes on XP you run into locked system housekeeping files that you don’t actually need) is uncopyable is a huge improvement.  If that’s true, I am really looking forward to seeing Vista replace XP.  (No, I haven’t deployed Vista yet.  I’ll address that below.)  I do have a question, though.  Does it give you a separate "Can’t do this one, skipping" dialog box for each file that fails, make a list and present you with a single dialog box at the end of the whole operation, or asynchronously populate a visible "skipped file" list in the progress dialog?

    As far as being able to actually go ahead and copy the file, that would probably mean moving to an inode-oriented filesystem, which would have a *lot* of implications beyond just being able to make copies of open files.  I’m not saying it would ultimately be bad in the long run (indeed, there would be other advantages, e.g., not having to reboot when system files other than the kernel are updated), but it would be a major change for Windows, and major changes always mean bugs and incompatibilities in the short-term, so it’s not something that would be done lightly.

    We’re holding off Vista deployment where I work until at least SP1 is available, because we have had bad experiences in the past with brand-new Microsoft OSes until the service packs start coming out.  (XP for instance was not really a viable replacement for 2K until SP1 came out, and it wasn’t a really marked improvement until SP2, in my estimation.)  But that does not mean there aren’t things about Vista that I’m really looking forward to.  There are.  (UAC, for all the flack it has received, is one of the biggies.  Other systems have had something like it for a long time, and while Microsoft’s first implementation no doubt needed some adjustments, it is a huge step in the right direction.  I am very much looking forward to not having to log in as Administrator periodically just to get LiveUpdate to work correctly.)  And holding off deployment for a few months doesn’t mean we think Vista is the plague (though I do have some coworkers who think upgrades are the plague).  It just means we aren’t early adopters.  We understand that you never find all the bugs until you deploy to a bunch of real users, and we’d like to let other real users be the guinea pigs, as it were.  SP1 is something I consider to be very important, even though I haven’t actually used Vista yet (nor seen it, really, except for a few minutes in a Microsoft travelling vehicle at a tech rally), because it brings Vista closer to the point where we *will* want to start deploying it.  Microsoft needs to understand that service packs are a key factor in broader uptake of new versions (though they are not the only factor; time is also important).

  4. Anonymous says:

    Mark Anon:

    How can any zip extraction implementation be *slower* than the one in XP?  That beggars the imagination.  I expect some overhead in GUI tools like Explorer, because they do more than the CLI equivalent (my basis for comparison for zip extraction being info-zip).  For instance, they attempt to actually calculate progress, rather than just saying when each file is completed or whatever like a CLI app would do.  That’s something many users want, and so it’s worth a little overhead.  There are other things too, so like I said I expect some overhead.  It wouldn’t bother me (much) if Explorer took twice or even four times as long to extract as info-zip.  But 50+ times as long (for a zipfile containing a large number of small-to-medium files in multiple nested directories) is just completely unreasonable, I don’t care *what* extra tasks it thinks it’s accomplishing.  And you say Vista’s zip extraction implementation is even *slower*?  What’s it doing, stopping after every 60KB of extracted data to recalculate the whole zipfile’s SHA256 checksum?

  5. Mike Dimmick says:

    @Mark Anon: I think that’s got a lot to do with the ‘Attachment Security’ behaviour – if the ZIP was downloaded from the web and has the attachment security enabled (‘This file came from another computer’ in Properties), the extract engine applies attachment security to all the contents. This typically breaks CHM files.

    WinZip 11 understands attachment security and if the ZIP is marked, will do the same, which is very slow. Clicking Unblock in Properties causes it not to apply attachment security, and extraction goes back to the expected speed.

    WinZip 11 is still faster than the built-in ZIP feature, however.

  6. anony.muos says:

    Mark, what about the copy (command prompt), xcopy and robocopy in Vista? Which one of them use CopyFileEx API? Which command still copies using pre-Vista methods? Please answer this.

    Also, for Windows 7 maybe you can add pausing and resuming and even better would be allowing priorities, like copying in the background using low priority I/O or a mission-critical copy etc.

    As for the shell/Explorer some more things I don’t like are (This is if the shell team is reading this)

    1. In views like "List view", clicking on a white space between two horizontal columns, selects the item. Unselecting an item after doing whatever you have done and before doing another thing on it is very painful.

    2. The Load/Save dialogs still show URLs!! Wow! Can I save my notepad text file to What is this? Windows Live Workspace?

    3. Autosort/Autorefresh behavior for Explorer needs getting used to, however if I paste 100+ files in a folder containing 100+ files, those files are scattered all over alphabetically! Give us an option to modify this behavior.

    4. The size and free space is not shown on the status bar without selecting the files. Clearly this is a forgotten issue? Why not give us a TweakUI?

    5. When sorting by any criteria, Vista first sorts in Descending order compared to XP which first sorted in Ascending order. Many times, (My)Computer and Recycle Bin turn up at the end of the list after sorting because of this!!

  7. John Röthlisberger says:

    Thanks Mark for an excellent excellent article. I only wish you’d write more often!

  8. David says:

    Next step: Ability to copy read-exclusive locked files in Explorer. If the user tries to copy a locked file, ask if they’d like to make a temporary snapshot and copy from that, if they have the necessary permissions to create one. Like HoboCopy does, but integrated into Explorer.

    It beats failing halfway, giving the user the false impression that Windows is simply incapable of doing what every other OS can. Unreliable file copying has always been one of my biggest headaches in Windows. Something so basic shouldn’t fail so easily and hopelessly.

  9. tglascock says:

    When you speak of "the Windows file copy engine" is this in the shell only?

    Do applications that use CopyFileEx benefit from any of these changes?  Has this API changed since XP?

  10. Xepol says:

    It would be interesting to see if the long standing File move on the same partition when the destination already exists has been fixed.

    Normally, when you move a file on a partition, the OS just changes the directory links – it is quick and efficient, certainly much MUCH faster than a file copy (and file size has no impact on how long it takes).

    However, if you copy file.txt from directory a to directory B, and directory B already contains file.txt something very different happens.  Yes, Vista looks at B’s copy of file.txt, and yes, it can be slow if its video, but that isn’t what I mean.  If you confirm that you want to overwrite the destination, you get a slow block by block copy INSTEAD of the system just deleting B’s copy and moving A’s copy in the normal fashion.  Attempts to move multi-mb files can go from lightning fast to dog slow (worse than from hd to hd, as one hd is getting nailed with TWICE the traffic as it reads and writes each block – caching probably just makes it worse)

    Any hope they fixed that?

  11. zzz says:

    Biggest performance improvement you can make for same volume copies is hybrid differential copy. At first the copy is link to the original with different acl, then the modifications are stored (differential copy) and when there’s enough modifications in the copy to affect performance then a true copy is created.

    Another huge Vista issues have been

    1. the time it takes to open explorer (clean install). The more drives you have in the computer the longer it takes. In 2003/XP Explorer SNAPS open (<100 ms); in Vista it’s clearly slower, 200-300 ms at least, with some users reporting 1000 ms or more.

    2. the inability to remember the explorer settings such as if you want to have every folder to have a detailed view akin to XP/2003 (columns: name,size,type,date modified).

    You can set this from the customize folder however it doesn’t apply it to all folder types globally AND it even forgets the setting quite soon.

    3. Moving files in same volume is also perceivably slower (maybe 0.3 seconds or so but easy to notice)

  12. As always, a very interesting article. It was sounding like all the file copy problems of Vista had been resolved until the end of the article where you gave two examples where Vista SP1 performance would be less than what the original Vista was: file copies from Windows 2003 servers over a slow network, and large file copies on a Vista client. Unfortunately these are two very common scenarios, so it would seem that despite all the changes, users are still going to experience slow copies on Vista SP1 and probably even slower than on Vista RTM. Is this something that will be resolved, or is it just accepted that users will have to put up with it?

  13. Neil Prestemon says:

    <i>After much analysis, benchmarking and tuning. . . </i>

    One would hope, that after what, 30+ years of OS-writing experience? . .  Microsoft would have this kind of thing down cold. . . 🙂 – a little rigorous test process, and documentation, and they wouldn’t have to keep re-inventing their own wheel every 6 years.

    Anyway – another great article.  Once again, Process Monitor Pwns Vista.

  14. Devin says:

    How does the cut/paste algorithm differ from copying?

  15. Mark Farina, Jr. says:

    Next add the ability to pause and resume copies :). Then merge that into a crash protection scenario…if explorer crashes while moving files…just resume no data loss :).

  16. Norman Diamond says:

    "In 2003/XP Explorer SNAPS open (<100 ms)"

    In my experience that is true in 2003 but not in XP.  Explorer in XP goes zzz for a while before displaying its window in the first place, zzz again before displaying the list of drives in My Computer, zzz again before expanding the C drive (if I have a C drive), etc.

    "the inability to remember the explorer settings such as if you want to have every folder to have a detailed view akin to XP/2003 (columns: name,size,type,date modified)."

    For me that’s a pain too.  But Microsoft already answered that this behaviour is BY DESIGN.  As Microsoft pointed out, wording that used to say "apply to all folders" now says "apply to folder" and it is not supposed to apply the same view to other folders.  So the new bug is that Vista Explorer sometimes does part of what you and I want, for a short time, when it shouldn’t be doing that at all.

    XP’s bug is what you and I thought it is, i.e. that it’s supposed to apply the same view to all folders, but about twice a day it forgets its settings.

  17. Norman Diamond says:

    "That’s because the previous algorithm’s use of cached file I/O lets Explorer finish writing destination files to memory and dismiss the copy dialog long before the Cache Manager’s write-behind thread has actually committed the data to disk"

    In that case I’m sometimes enormously glad that XP or 2003 sometimes refuses permission to disconnect a USB hard drive.  When we think writing has finished, writing really hasn’t finished.  But why doesn’t this finish after, say, another 3 minutes?  Sometimes XP or 2003 never relinquishes, it just always keeps refusing permission to disconnect, so it’s necessary to shut down.  Can the cache manager be told to finish writing its cache?

    Also by the way, what happens in Vista when a user tells Vista to shut down?  Vista no longer gives applications a chance to delay the shutdown.  The user thinks writing has finished but the cache manager still needs 3 more minutes.  What happens when Vista forces the shutdown to complete within 20 seconds?

  18. Pavel Lebedinsky says:

    To zzz: The problem with explorer view settings changing on their own has been fixed in SP1.

  19. dvy says:

    Thank you very much for your analysis!! Vista SP1 is very good!!!

  20. phx says:

    Next add queueing per local drives/destination in explorer. Number of times my girlfriend is copying music off a USB drive and has 20 odd file copy windows open…

  21. Mark Anon says:

    1. Good article as ever. The frustrating thing about this is that many Vista beta testers including myself pointed out the sluggishness of file copying, through multiple bug reports with lots of votes.

    2. Also, what about zip file extraction, which is also painfully slow on Vista RTM, and still slower than XP on Vista SP1. It is no surprise that the channel 9 videos often show MS Engineers with Winzip, though maybe that’s for other features.

  22. El Guapo says:

    The Vista file copy has got to be the most frustrating and annoying part of Vista, especially over a network. How many times have I stared at the thing saying "0%… Calculating Time Remaining" and it just sits there. Horrible, horrible, horrible, how could they screw this up this badly?

  23. tom says:

    Thank you for being honest about the memory pressure issue.  This has always been a source of great frustration for me in Windows.  Why can’t the cache manager cap the amount of memory it uses to a reasonable (and preferably tweakable) value?  I think it’s really disgraceful for a file copy operation to "force other useful code and data out" of memory, especially to the extent that it does.  Valuable memory contents are discarded just to cache hundreds of MB of copied file data that is almost never accessed before it leaves the cache.  Why…

  24. ryan says:

    Excellent article, as always!  This is vaguely related to file copying and performance, but one of the features I like most about Vista is Previous Versions.  I was quite surprised–and thrilled–to find out that this was accessible remotely via the admin share.  What a great admin tool–it has already saved the day several times!  

    If I could make a request, it would be for a continual file change monitoring system (rather than preset intervals), at least for user files.  Maybe this would be better implemented within the application itself, but it would be awesome if Windows took this to the next level and had some kind of continual, point-in-time restore feature for user files.

  25. Ger Ger says:

    Slow file access/copy speeds with SP1 are only slightly better and still exist! 4-16 MB/s MAX with any HD (disk to disk, disk to all external USB 2.0 disks ever tried). All patches and latest drivers installed, for instance with all Sony VAIO SZ 1-7 notebooks so far. Read all the issues:

  26. Joseph Moore says:

    "In Vista, Explorer also waits 12 seconds before making an estimate of the copy’s duration"

    Question on this. Until SP1 for Vista comes out, is there a Registry modification we can make on Vista that would decrease this value? Thanks!

  27. boe says:

    Vista sucks – plain and simple.   While file performance is one of the worst parts of Vista, network throughput is pathetic and gaming performance sucks quite a bit as well.   Sure you can buy faster hardware but then why would you throttle it with Vista – why not use a faster OS such as XP or Linux?

    Yes, I truly hate Vista but I don’t hate MS and I haven’t given up on MS either.  I certainly don’t think Vista can be fixed any more than a house full of mold should be fixed – far better to tear it down and start again.  I really hope MS learns SOMETHING from the spectacular failure that is Vista.

    I bombarded them with feedback from numerous clients about how badly Vista performed, and MS kept insisting: it must be your drivers, your hardware, your antivirus, your antispam; indexing should be turned off, Aero should be turned off. Pretty much: make it even simpler than XP and you still won’t get the performance of XP. I don’t know if MS needs to hire only MIT people, but I think they should stop hiring community college dropouts if they can’t figure out, with a ton of feedback, that Vista is SLOW.

  28. Guti says:

    Excellent article Mark.

    I was quite interested in the copy algorithm improvements in SP1, and now I have the clue.

    That’s very interesting, because some cynical minds think the slowness during copying is a bug in Vista.

  29. stephen says:

    One of the biggest improvements I noticed with Vista’s copy operations (at least in Explorer) is the fact that if it encounters an error during the process, it doesn’t cancel the entire operation as it would in previous versions of Windows. You may have to skip the file, but I’m glad it doesn’t kill the entire transfer. I’ll take the slower performance if I get a more stable copy operation.

    It’s the little improvements that nobody ever mentions that make Vista better than XP. Still, people continue to treat Vista like the plague. I’m not a big fan of Microsoft, but I have to admit that *gasp* I actually LIKE Vista. That’s a first for a Microsoft OS in my case.

  30. Johan says:

    I’m running a Dutch Vista Ultimate version. I wonder if I could install SP1 if I change the language to English. As usual, Mark’s article is good. Somehow I managed to fix 80% of the copy/move problems back in August with some tweaking and pre-SP1 updates.

  31. Brandon Clinger says:

    While Vista SP1 may copy faster, it seems like one cannot do anything while the system is copying. IE takes ages to load, things just seem slow. I hope that is addressed and fixed later, along with the Windows rot over time.

  32. BBuGG says:

    Vista’s Windows Explorer user experience is truly the most notable improvement over other Windows versions as far as everyday workflow is concerned. So many detailed and small improvements that I cannot live without it anymore. Readjusting the folder view positioning automatically, etc. Brilliant.

    performance, though, needs to be improved.

  33. Vabokner says:

    It’s possible that a XX-billion dollar company can somehow miss and not realize the simplest of truths that a single individual home or business user knows with his/her pocketbook.

    This is called missing the mark.

    I’ll make this very clear.

    My statement is the single most correct, absolutely pure, direct, straightforward possible wording that can be realized.

    Due to the strangest of chaotic influences, it’s possible that the next release of Windows can actually manage to *at the very least* not meet these simple executive orders from a lowly end-user.

    Here’s the rules:

    Do *NOT* ship us a new OS unless:

    1) File copies and general IO ops are faster than or equal to the previous release

    2) Application loading times are faster than or equal to the previous release

    3) Frames per second in games/3d is faster than or equal to the previous release

    4) CPU usage on 2d drawing is lower than or equal to previous release

    5) UI responsiveness is faster than or equal to the previous release

    6) Boot time is lower than or equal to previous release

    7) General performance (ie, heap manager) is faster than or equal to previous release

    Before shipping windows, go through a list of these.

    If each point is not met,

    Do *NOT* ship Windows.

    I repeat.

    If fps in games is lower, do *NOT* ship Windows. There’s nothing else to analyze, consider, or figure out. We do not want the product if FPS has lowered.

    How can it be any more clear?

    It doesn’t get any simpler.

  34. Kebabbert says:

    I wonder, is it really necessary to have a copy engine in an OS? I mean, Linux doesn’t have one and fares well in benchmarks. Isn’t a copy engine overkill? What’s the point?

  35. Bluvg says:

    Vabokner: maybe *you* should write the next version of Windows, then.  I think you’d quickly see why that is far, far from the level of simplicity you suggest.  

  36. charles says:

    Re: Vabokner

    I disagree that EVERYTHING has to be better in every aspect.  Sometimes, a cleaner, more extensible implementation will not be included due to a slight performance penalty which is incurred on older, but not noticed on newer, hardware.  Which is better, and why is one implemented over the other?  What are the prospects of bringing the more abstracted implementation up to performance par with the original implementation?  These are all questions which must be answered well, and I don’t think you’re really the person to answer them.  If you can intelligently express why the new version of Windows does not meet your needs, and express it to Microsoft, and your need seems weighty enough or common enough to make a difference in revenue to MS, then I think you’ll find them QUITE responsive to your needs.  Don’t be so presumptuous to think that Microsoft exists to meet your needs.  If your needs can be reasonably met, they’ll do everything they can to do so.

  37. WigF1 says:

    An interesting insight, Mark. Once SP1 hits TechNet I’ll be rolling it out to test, to see if we can finally deploy en masse…

  38. Vabokner says:

    Sorry to those who responded, if my points were unclear.

    Obviously the points I mentioned aren’t all that comprises windows. I didn’t mean to say that.

    I didn’t mean to say that it’s complete.

    But they’re a small slice of *bare minimum* of requirements.

    The points are RELEASE BLOCKERS.

    Microsoft should not even consider it vaguely possible, professional, or profitable to ship a release which misses these and many more points.

    I didn’t want to spam the blog with 100 points, so I gave a few examples that are transgressions against an OS release.

    Vista broke various of the simple rules.

    MS should fill the list out to the proper 100-point extent, and follow it tooth and nail.

    "_we_ _cannot_ _ship_ _if_ _fps_ _is_ _lower_"

  39. Norman Diamond says:

    "If each point is not met, Do *NOT* ship Windows."

    That’s excessive.  It’s like saying that no planes should be allowed to fly until Biman/Garuda/USAir/whoever starts operating properly.

    Let Vista ship.  Let XP ship.  Let Linux ship.  Let those who want to buy a computer with XP buy a computer with XP without paying a Vista monopoly tax.  Let those who want to buy a computer with Linux buy a computer with Linux without paying a Vista monopoly tax.

    Imagine if Microsoft had to pay for a Canadian postage stamp every time Microsoft sends e-mail, and suppose the rate of loss of Microsoft’s e-mail would be the same as letters that pass through Canada’s post office.  Microsoft would be clamouring to end the monopoly.

  40. Norman Diamond says:

    "While Vista SP1 may copy faster, it seems like one cannot do anything while the system is copying."

    That is true.  When the OS finally gets fast enough to keep up with disk speeds, the new bottleneck is disk speeds.  Perhaps you’re saying you want to specify which disk operations should be low priority (this time request that these Windows Explorer file operations should run slowly) and which disk operations should be high priority (this time request that these Internet Explorer file operations should run quickly … together with these Windows Explorer file operations that load iexplore.exe into RAM …).

  41. Erik says:

    Re: Vabokner

    Agree completely, it’s that simple.


    "I disagree that EVERYTHING has to be better in every aspect."

    It doesn’t have to be better, just not that much worse. 1% worse might be OK, but a 10-30% drop in framerate for games scares off a huge market.

  42. Trevor Morris says:

    Mark, thanks so much for another timely and most enlightening article.

    Have SP1 changes done anything to stop MMCSS from decimating throughput so severely on gigabit LANs?

  43. manta says:

    Game speeds seem a little off topic, but since they’ve been brought up… isn’t this very similar to what was experienced when XP was released? Gamers stayed with 2000 et al for quite a while as game developers learnt to optimise for the new OS. It amazes me how quickly people forget the issues that always exist with the release of a new OS. We would all still be using DOS if initial speed were the sole criterion for releasing updates.

  44. indeed356 says:

    Well game speeds are really off topic as they depend largely on graphics driver performance. It will take some more time until Nvidia and ATI/AMD have optimized their Vista drivers up to XP level.  

  45. Martin says:

    Do you know if the "Out of memory" error when copying large numbers of files has been fixed too?

  46. Anon says:

    "Vista Improvements to File Copy"

    The thing is that from many users’ perspective, that statement is a lie; their experience showed them that file move and copy operations were seriously degraded in Vista.

    Maybe if you expressed it some other way to highlight the improved theory without implying an improvement to the end-user’s experience.

    To the end user, what you call "the perceived performance" is the performance.

    SP1 "…can be slower than the original Vista release in a couple of specific cases." Who cares, people are comparing the speeds to non Vista OS not to pre-SP1 Vista.

  47. Bryan says:

    Wow – quite a few people seem to fail at reading comprehension.  It’s sad when he addresses the specific point a majority of people are deriding Vista about in his original blog.

    At any rate, fantastic insight 🙂  It’s good to know why Vista seemed a lot slower in the File Copying aspect.

  48. n0rt says:

    First things first… THERE SHOULDN’T BE ANY PROBLEMS WITH FILE COPYING. This article is just an excuse for the problem. No commercial product should have this many flaws, especially an operating system. I would expect flaws like these in an independent freeware OS such as ReactOS. I still cannot understand how this product can be marketed… or even sold!

  49. Rune3 says:

    I agree with indeed356. The major bottleneck is nVidia/ATI drivers. Some games do run faster under Vista, others don’t. There are some known issues with Vista itself, but the most relevant bottlenecks seems to be drivers. nVidia dragged their feet on Vista drivers, and delaying Vista’s release by a year would have allowed nVidia to postpone their Vista driver similarly. Better to release early so that the hardware OEMs are forced into coughing up some drivers. (I am still waiting for a decent 32-bit XP/2003 nVidia driver — one that properly supports PAE again)

  50. davidwr says:

    Nice writeup.  Now if only there were a way to 1) tune the behavior and 2) provide an override so a specific copy operation used specific copying behavior.

  51. slorb says:

    Have any of these improvements resulted in any regressions in other parts of the system? The copy engine sounds like some DRM stuff to protect streams. Why is it necessary?

  52. Martyn Byll says:

    So much for the "commit all memory to caching" (or SuperFetch/PreFetch stuff also): Guys @ Microsoft – "it’s NOT working" fellas. People are NOT going with VISTA for reasons like this one, problems galore.

    The only sales MS is generating are those from NEW PCs with VISTA pre-installed, and unless the person has NEVER used a PC before (a rarity today), they will also run into things like the rather foolish changes to, say, Control Panel and UAC, which NO ONE REALLY LIKES, period!

    (Why on EARTH Microsoft ever changed from "CLASSIC VIEW" which everyone & their brother know by now, from Windows 3.x onwards basically)

    VISTA is a failure guys.

  53. colinnwn says:

    <quote>That’s because the previous algorithm’s use of cached file I/O lets Explorer finish writing destination files to memory and dismiss the copy dialog long before the Cache Manager’s write-behind thread has actually committed the data to disk</quote>

    Are you saying XP dismisses the copy dialog before the file is copied to the disk? If that is true, it seems to be a HUGE violation of dialog intention.  A dialog shouldn’t dismiss until the subject action is totally complete.

    Something else is going on here anyway.  At MOST, XP for me would continue writing to the disk for 1 second after the copy file dialog dismissed.  Vista is tens of seconds or more slower than XP at doing file copies.  The perceived speed due to I/O caching is NOT the biggest difference in faster file copies for XP vs. Vista.

  54. terry says:

    Linux and OSX releases get…

    dun, dun, dun,

    cue music…


    Each release. How unheard of?

    Microsoft has lost its genius and they’re just a bunch of newbies relative to the Cutler days of NT. They have 50 thousand employees or whatever, and they can’t even manage to make an OS release faster than the previous one. This boggles the mind. Severe mismanagement.

    The most ABSURD thing about the whole Waterworld of an OS situation that is Windows Vista is that Microsoft employees ACTUALLY think we don’t care about performance. I swear.

    I am not kidding. That’s really what they think. Straight from the horse’s mouth. They think that we won’t notice, that general consumers are clueless. But what MS doesn’t realize is that the informed users who are looking hard at performance are the ones planting the seed of opinion in the rest of the market.

    Simple point: If the market didn’t care about performance, then why does Intel have a strong business?

    Microsoft vastly underestimates the complete domination of perception of their product by hard-numbers/sheer performance. They’re being fed kool-aid from a bunch of marketing and analysis idiots. But it’s not reality…

    MS forgot speed.

    Vista is too slow to be taken seriously.

    I’m skipping Vista for my company of 60.

    We’ll see if a shining star can rise out of Microsoft to speed up their future with Windows 7.

  55. alex says:

    It’s amazing how most people are totally missing the point of the original post – local file copy in Vista RTM was never really broken! Developers of Vista made a brave choice of displaying actual copy speed to the users by disabling caching.

    In XP and (I presume) in Vista SP1 the process worked like that:

    1. file was read into memory cache

    2. some time later (ideally when no other applications are writing to disk) memory cache was written to disk

    Only step 1 was shown to the user as the file copy. This made the file copy appear somewhat faster (as it actually was spread out in time), but it had two serious drawbacks:

    a. if the PC suddenly lost power before completion of step 2, the file would not be completely copied and there would be data loss (and since step 2 wasn’t shown to the user and could be postponed, that was a frequent source of problems)

    b. if you were copying a huge file (larger than the amount of free RAM your PC had), the entire system would start swapping and slow down significantly (because the memory cache takes as much RAM as possible)

    Vista solved both problems by actually displaying entire file copy to the user and not using memory cache. I guess people at Redmond hoped that modern hard drives are fast enough…

    I’m not sure which approach is best for most people, but I liked Vista’s much more. Too bad that it will be disabled in SP1 🙁
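    The two-step process alex describes can be sketched in a few lines of Python. This is a simplified illustration of the concept, not the actual Explorer or kernel code; the function names are invented, and `os.fsync` stands in for the Cache Manager's eventual write-behind commit. A cached copy returns as soon as the data is in the OS buffers (step 1), while a flush-through copy only returns once the data has been forced to disk (steps 1 and 2 together):

    ```python
    import os
    import shutil
    import tempfile

    def copy_cached(src, dst, chunk=64 * 1024):
        """Step 1 only: write through the OS cache and return as soon as the
        data is buffered. The OS commits it to disk some time later
        (write-behind), which is the part the XP dialog never showed."""
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while block := fin.read(chunk):
                fout.write(block)

    def copy_flush_through(src, dst, chunk=64 * 1024):
        """Steps 1 and 2 together: force the data to disk before returning,
        so 'copy finished' means the bytes are actually on the platters."""
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while block := fin.read(chunk):
                fout.write(block)
            fout.flush()
            os.fsync(fout.fileno())  # block until the OS has committed the data

    # tiny demo: both variants produce identical data; only flush timing differs
    tmp = tempfile.mkdtemp()
    src = os.path.join(tmp, "src.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(256 * 1024))
    copy_cached(src, os.path.join(tmp, "cached.bin"))
    copy_flush_through(src, os.path.join(tmp, "flushed.bin"))
    with open(src, "rb") as f:
        original = f.read()
    identical = all(
        open(os.path.join(tmp, name), "rb").read() == original
        for name in ("cached.bin", "flushed.bin")
    )
    shutil.rmtree(tmp)
    ```

    The trade-off alex names falls directly out of this sketch: the first variant lets the progress dialog close early but leaves the data vulnerable until the deferred flush happens; the second is honest about duration but appears slower.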


    As for the rest of the comments, people are typically deluding themselves… h-m-m, where do I start…

    – a copy engine is part of every modern OS, including Linux and MacOS X (and "engine" is just a nice word for a set of functions 😉)

    – and no DRM in sight

    – neither MacOS X nor Linux are getting faster with new releases

    – and FPS in Vista is mostly the fault of the driver writers, and not really Microsoft’s issue

    – etc

  56. Joh says:

    Oh come on… watching with Filemon you can see that Vista touches every file before copying a directory, to create an estimate of how long the copy process will take and to count the files it has to copy.

    On larger codebases (50,000+ small files) this created a real situation here. We’ve actually measured it: booting into Windows XP to make a backup copy and then booting back into Vista to continue development is actually FASTER than waiting for Vista to get the job done.

    Matrices?! Caching?! A TEAM of people wrote this code?

    But whatever, the thing that bugs ME most about Vista is that Microsoft wrote a huge lump of code degrading performance to implement encryption technology that actually does nothing but STOP the user from doing things with his very own computer at the request of Hollywood… but that’s not the topic here either :-).

  57. sk says:


    MS’s "brave decision" resulted in my pc being damn near unusable any time I’d move/copy/unzip any medium sized file for 5 to 15 minutes. Something I do dozens of times a day.

    The fact that I’m less likely to lose that data if the power goes off during that operation, something that has only happened to me twice in 15 years of computing, impresses me not at all…

    And the trade off of slowing down all these operations to speed up those rare occasions when I move a file 2 gigs or larger? Please…

    I put up with it for one week in the hope that MS would put out a hotfix, then made the brave decision to delete Vista and switch to Linux, and will NEVER go back.

  58. just me1 says:

    but my network isn’t high latency or "slow", it’s a GigE network between two machines that are about 12 inches apart

    So if I copy a file to a hard disc not in my pool, the transfer rate over my GigE network to WHS has shot up with Vista Ultimate and SP1. This used to be much slower pre-SP1, and I was lucky if I got over 10% utilisation; I now see around 30% utilisation!

    Wow, great, I thought, SP1 has sorted that annoying problem. Well, that was until I tried copying a file from the hard disc (not in the pool) back to Vista: I now get around 7% utilisation, whereas pre-SP1 this was higher, approximately 25-30% of GigE.

    I’m just about ready to throw in the towel!

  59. Anonymous Coward says:

    This article lacks the uncompromising edge of all your other material, and instead comes across as a "famous IT guy gives support for Vista SP1 using technical lingo" piece.

    The reason for this is that everything in the article is purely *theoretical*. Caching *should* improve local copies… Unbuffered copies *should* improve network copies… We don’t come to your blog for Microsoft sponsored guesswork! We come for the hard unfiltered facts.

    Please take a minute to reply to the technical questions posed in the comments to the article.

  60. mgb says:


    No, Linux and MacOS don’t have a copy engine. They have copy programs. Well, a lot of them… that can do (in fact, they do) all of those optimization tricks at will…

    The thing is, is that a kernel operation?

    I don’t know the Windows kernel, but if that’s inside the kernel it seems overkill to me.

    Anyway, anyone can write a "SuperCopy" app that does the work better, but it’s pretty pathetic for a company when such a simple operation can be done better.

    My real question is why the kernel allows such abusive use of memory simply for file copying. I think that’s the real root of the problem. In all the systems I know, cache is a "surplus": something you have, but you don’t have to really count on it.

  61. Mark Anon says:

    For those saying that the only difference is in perceived copy speed, I am not sure that is correct.

    On multiple occasions, I have run across the situation where moving a folder with a large number of items (500-1000) produces an estimated copy time of ~3 million hours. This has been mentioned by others.

    Since installing the SP1 beta, I have not had any estimates that ridiculous, but I have had at least one instance of over 1000 hours.

    When you consider that the increased actual copy speed is partly due to the copy time estimation at the beginning, this just adds to the annoyance!

  62. JS says:

    I’m confused.  First you say SMB2 does 64KB I/O’s instead of 60KB, and then you show a picture of SP1 doing 1MB I/Os "from one SP1 system to another", with the cache manager doubling that in read-ahead.  Then a few paragraphs later, you say SP1 has smaller I/O’s–just 32KB.  

  63. StuartR says:

    Thanks Mark, for the great technical explanation. The number of passionate comments on this blog entry is very telling. I’ve been using Vista for 14 months now on 4 different machines (both x86 and x64), and while the file copy issues remain quite an annoyance, it’s the overall lack of OS quality that really disappoints me. I’ve seen this in other recently released Microsoft products too. I’ll stand up here and suggest that something may be fundamentally broken at Microsoft. I noticed it personally when I was on campus recently for several weeks, as compared to a previous extended visit there 5 years ago.

  64. VistaUser says:

    Something does not sound right.  The file-copy problems I experienced were usually copies of a handful of very small files being copied from one directory to another on the same partition taking 10-20 minutes.  Usually the copy file dialog box would not show the files starting to copy for 7-8 minutes — it would just sit and hang.  Once the files did start copying, it still took a painfully long time.

    When I tested the same copies on the command line (using 4NT, which could make a difference), the files would copy in ~1 second.

    The problem was *not* perception.  There was a real, serious, problem.  I’ve had to recommend to numerous people that they wait for SP1 before upgrading to Vista.

    I’ve installed SP1 RC1 and I have not experienced those problems, so that’s good.  But passing them off as simply perceived problems concerns me.  Were they actually fixed or not?  It seems so, but what was the cause?

  65. Colin Hodgson says:

    I hope the comment regarding ‘a slower server 2003 file transfer experience’ is changed. Windows Home Server users are already complaining bitterly regarding the abysmal file transfer speeds from Vista. It sounds as though their experience is going to get worse, not better. Not a good way to try and sell what is already a flawed system. File corruption and even slower files isn’t a plus in my book.


  66. Mark L. Ferguson says:

    Copy is slow in some configurations. I think the same thing causing slow router performance on some systems is at fault. I call it the ‘long wire to router’ problem. When the Cat5 cable is short, there is no packet loss and no speed problem. When a longer wire (say 70 meters) is used, the delay caused by the natural two-thirds-speed-of-light propagation in a copper wire causes the router to lose packets and makes for very poor communication (say 300 bps vs 100 Mbps). When set to ‘half-duplex’, the router loses no packets.

    I think there is a possibility that the packet loss in the router is an example of some kind of over-run in the copy process. The destination may have USB, or some other connectors, in the path.

  67. Sarb says:

    Who gives a rats ass if they improved the copy performance of Vista by 10% or even 20%? Why does anybody care?

    Vista is a SLOW os.

    So it’s like getting a 15% better fuel-delivery-line installed on your crappy economy family wagon.

    It really makes no difference. It won’t convince us to drive around in a smoldering heap of junk.  

  68. Apc says:

    Thanks Mark!

    This article pretty much explains why my WinXP system is soooo slow at copying large files (700MB+) between volumes of the same disk, and what is behind all the clicking noise from HDD seeks that immediately fills the room 🙂

    Furthermore, it explained why, even in this situation, other tasks can access the HDD relatively easily.

    But I want to put the accent on another point:

    From the end-user standpoint, despite all the effort being made to improve the situation, good old Norton Commander and its derivatives perform 1,000,000 percent better than the default XP/Vista copy engine.

    Why you say?

    Because end-user doesn’t really care about all these kernel optimisations. End user cares about comfort, control and predictable results. And what we see for XP/Vista?

    1. When calculating the time-to-copy, Explorer just hangs there displaying nothing to the user. Not very comfy. Couldn’t the whole TEAM think of FIRST displaying "calculating the time" and THEN starting the routine?

    2. After having calculated the total size of all the data to be moved, the system doesn’t warn you if you have too little space on the destination drive; it just begins the process. Another frustration. And when the free space runs out, it just tells you "Not enough space" and aborts the operation. See next paragraph.

    3. When the copy process is aborted (for example, when copying multiple files and one of them becomes locked), it just breaks, and you have to manually compare directories to see what was copied and what was not. And what do you do then?

    4. After the last bit of a huge file goes to the disk, there is some small amount of time when Explorer is unable to access the file because it’s being checked by antivirus/antispyware/etc. What’s the reason to check a file that was already checked when it was accessed by Explorer for reading?

    Good old NC overcame most of this with just persistent file selection (non-processed files were not deselected) and smartly designed user dialogs in the middle of the process. And more modern file managers like FAR or Total Commander allow you to use a cached copy where you can set the cache size (for example, 15% of free memory), which makes copying dozens of small files, as well as single large files, much easier and faster, as you do sequential reads and writes instead of the HDD head hopping back and forth.
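    The batched, cache-capped copy Apc credits to FAR and Total Commander can be sketched roughly as follows. This is a hypothetical simplification in Python, not the real programs' code; `batched_copy` and its `cap` parameter are invented for illustration. The idea is to fill a memory buffer with sequential reads of many small files, then drain it with sequential writes, so the disk head does not bounce between source and destination for every single file:

    ```python
    import os
    import shutil
    import tempfile

    def batched_copy(files, dst_dir, cap=8 * 1024 * 1024):
        """Copy many small files: fill a memory buffer with up to `cap` bytes of
        sequential reads, then drain it with a sequential write pass."""
        buffer, used = [], 0

        def drain():
            nonlocal used
            for name, data in buffer:
                with open(os.path.join(dst_dir, name), "wb") as out:
                    out.write(data)
            buffer.clear()
            used = 0

        for path in files:
            with open(path, "rb") as f:
                data = f.read()
            buffer.append((os.path.basename(path), data))
            used += len(data)
            if used >= cap:
                drain()  # buffer full: switch to a sequential write pass
        drain()          # write out whatever is left

    # tiny demo with a handful of small files
    tmp = tempfile.mkdtemp()
    src_dir = os.path.join(tmp, "src"); os.mkdir(src_dir)
    dst_dir = os.path.join(tmp, "dst"); os.mkdir(dst_dir)
    for i in range(20):
        with open(os.path.join(src_dir, f"f{i}.bin"), "wb") as f:
            f.write(os.urandom(4096))
    batched_copy(sorted(os.path.join(src_dir, n) for n in os.listdir(src_dir)), dst_dir)
    copied_ok = all(
        open(os.path.join(src_dir, n), "rb").read()
        == open(os.path.join(dst_dir, n), "rb").read()
        for n in os.listdir(src_dir)
    )
    shutil.rmtree(tmp)
    ```

    Capping the buffer (Apc's "15% of free memory" setting) is what keeps this approach from degenerating into the unbounded caching criticized elsewhere in the thread.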

    What’s my point? No matter how good a kernel is, it’s of no use to the user without a proper, usable interface.

    This is why Windows beat Unix, OS/2 and other rivals in userland: it was prettier (while being pretty basic in comparison to those OSes).

    Now we see that, while Windows internals are improving, the externals of the OS are still pretty rudimentary. And most UI development is directed now not towards comfort & control, but to "more important" features, like 3D Window Flipping and a Sidebar slideshow from My Pictures.

    So, if a whole TEAM works on File Copy, please pass the message to them that they should spend the next week improving the user interface (or notify the appropriate team). That would increase sales much more than another cache tweak that would anyway go unnoticed by the user because of the many other problems.

  69. Ex VistaUser says:

    $20,000+ invested in so-called Microsoft Certified equipment… and I can’t copy a file from a local drive to a network drive at more than 100 KILOBYTES per second.

    XP copies/loads at over 20 MEGABYTES per second.

    Upgrading effectively renders the computer unusable for business purposes.

    If Microsoft can’t/won’t fix this, then I expect a refund on all the Vista software our firm has purchased.

    Rumor has it that several suits under the federal deceptive trades act are being considered. At least they offer triple damage awards.

    Our lost productivity installing, debugging, troubleshooting, backing up, restoring, and re-installing XP or Linux… has cost well over $10,000.

    I’d bet hundreds of millions have been wasted by Vista users in similar ways.

  70. Lawrence D'Oliveiro says:

    There seems to be an awful lot of effort put into avoiding filling up the system’s RAM with cached filesystem data. But why is this necessary? Linux systems will happily cache stuff in RAM, but those cache buffers have lower priority than application memory allocations. That way, you never have app code and data forced out of memory just to make room for caches.

    With this simple design decision, you no longer need to worry about using too much RAM for cache. Linux has been doing this for years, decades. How hard would it be for Windows to do the same?

  71. Emkay1001 says:

    It’s curious that 3rd party programs like Total Commander are capable of achieving higher speeds when copying a file than explorer.exe (at least in my case). I have a laptop with Windows Vista Business 32-bit and a laptop with Windows XP Professional. When copying a single 700 MB file from XP (FAT32 partition) to Vista using Total Commander (run under Windows Vista), the Task Manager reports network usage to reach about 80-90% (100 Mbps network). When using explorer.exe, the network usage drops to 60% (plus it occasionally "spikes" from 2% to 60%) and the copy process is slower. Additionally, the hard disk heads on the XP laptop "go ape" (it might be the "out-of-order write operations" mentioned in the post). This does not occur when using Total Commander. Thumbnails on network folders are set to off. Setting autotuninglevel to disable under Vista had no effect as well as removing the RDC in Programs and Features.

    I would assume that 3rd-party programs use the CopyFileEx API mentioned in the post to copy the file. So how come Total Commander appears to be faster over the network, even putting aside the fact that it calculates the remaining time faster?

  72. B. Goodman says:

    It’s great to read some of the thought process behind the design… to a point.  The bottom line, though, is that Vista copy performance may be the single most complained about "feature".  How about, just for internal MS use, you guys do some benchmarks using different OSes.  Compare copying 5,000 files of mixed content and sizes between Vista, Vista SP1, XP SP2, XP SP3, Mac OS X, Win Server 2003, Win Server 2008, etc.  See if Vista isn’t involved in most of the worst performances!

    And Mark, while you’re at it, could you please lend a hand to the WHS developers who are STILL unable to keep WHS from corrupting data????

  73. S. Ramirez says:


    Thanks for this interesting article.

    Where can I find more about the increased I/O sizes in Vista?

    It’s off topic, but I’m hoping larger I/O transfer lengths might translate into being able to access larger record sizes on magnetic tape. Currently, in XP/2003 by default we’re limited to 64 KB records. Modifying MaximumSGList in the SCSI HBA service parameters gets us to 1 MB minus 4 KB (254 4 KB pages). I’m hoping larger I/O transfers mean we can break this limit.

    Thanks again for the article.
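For context, both the 64 KB default and the roughly 1 MB ceiling the commenter mentions fall out of simple page arithmetic; a sketch assuming one 4 KB page per scatter/gather entry (the exact relationship between MaximumSGList and transfer length is the commenter's premise, not verified here):

```python
PAGE = 4096  # 4 KB page, assumed one scatter/gather entry per page

def transfer_bytes(sg_entries: int) -> int:
    """Bytes covered by a scatter/gather list of `sg_entries` 4 KB pages."""
    return sg_entries * PAGE

# 16 entries give the classic 64 KB transfer size:
assert transfer_bytes(16) == 64 * 1024

# 254 entries (the commenter's figure) come to 1016 KB;
# "1 MB - 4 KB" would actually correspond to 255 entries:
assert transfer_bytes(254) == 1016 * 1024
assert transfer_bytes(255) == 1024 * 1024 - 4 * 1024
```

The arithmetic suggests the two numbers in the comment ("1 MB − 4 KB" and "254 pages") are off by one entry from each other, which is consistent with one entry being consumed by an unaligned buffer start.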

  74. Firdaus says:

    Thanks, Mark, for the news as well as the explanation regarding the improved copy in SP1.

    On the subject of copying, I noticed that when I do transfers from my SD or Compact Flash card, there is a possibility of at least one of my photos being corrupted. Initially, I thought it was due to simultaneous usage, but it happens even if it’s just doing the copy. Is this an isolated problem, or has it been reported by others too?


  75. zzz says:

    "biggest change they made was to go back to using cached file I/O again for all file copies, both local"

    If this applies also to internally connected SATA HDD’s and large (50+ MB) moves/copies there will be a data loss/corruption risk:

    Using eSATA PCI/front-port brackets is an increasingly popular way to connect up all the excess internal SATA ports. Who really has 6 internal HDD’s? I have 0 internal HDD’s personally; all my HDD’s are connected through eSATA using the internal SATA headers.

    In tests I did on 2003, it could take up to 15 seconds before all the data was flushed to disk after the copying was "finished" according to Explorer.

    There is no "Safely Remove Hardware" icon at all, even though I have a few external HDD’s connected to the internal SATA headers, as there’s no way for the OS to tell whether the HDDs are externally or internally connected.

    The option to change HDDs to "Optimize for quick removal" is grayed out too, even if hot plugged.

    Now, certainly I could use USB to get around this; however, that would drop the transfer rate from 120 MB/s (eSATA + single modern HDD) to 20 MB/s (USB 2.0).

  76. zzz says:

    Norman wrote:

    "So the new bug is that Vista Explorer sometimes does part of what you and I want, for a short time, when it shouldn’t be doing that at all."

    So the SP1 doesn’t have any UI setting to make all folders in explorer look like they do in 2003 by default?

    Is there any way to fix this besides modifying the explorer.exe? A registry fix would be fine.

  77. reordering I/O to avoid seeks says:

    I just wonder why the double writes to the same "forgotten" gap occur. If data is missing, don’t write it; maintain a pointer for the missing area and put it with the next written area within the same transaction checkpoint.

    And anyway: why isn’t ALL I/O (both reads and writes) to be performed on the same volume reordered by position on disk, so that the disk head just needs to scan the whole I/O list in several successive passes from the beginning to the end before restarting from the head of a new list? As long as the current list of pending I/O is not finished, all other I/Os are accumulated and ordered. When the list is finished, restart with the next ordered list of I/O in reverse order. Use the transactional model of NTFS for managing consistency. It would give an equal chance to all concurrent processes or threads, whether from applications or services, to make parallel reads/writes, and the reordering would maximize the throughput. Some areas are probably better served with a higher priority, notably the MFT area: perform linear read-writes for half the data in the area before the MFT, then linearly process the list in the MFT, then half the area after it, then the MFT, then go back to the first area, finish the data in the MFT, then finish the area after it. Maybe the whole MFT area is not the best zone, but the USN journal should be given higher priority in order to complete the validation of NTFS checkpoints.

    What I mean here is that checkpoints should be performed in several stages: one for the application level, one for the system level, one for NTFS consistency, and a final one for the completion of pending I/O. Ordered I/O would also limit the fragmentation of memory (and of I/O performed for the associated paging file, if cached data must be paged out; probably this cache can be retrieved later, but if the cached data is paged out while working in the area of the pagefile, it would still allow consistency of data at the NTFS level even if the file is not in its final position: fragments can be reread from the paged area and consolidated later in the middle of files without changing data consistency).

    To me this seems to be a strategy for minimizing latency: each volume device manager has a list of pending I/O with different priorities; not all of them have dependencies, and some are associated with a list of checkpoint completions. However, NTFS just performs a checkpoint every 8 seconds, and that’s too long in the case of massive data copies. More frequent checkpoints at the volume level (instead of the filesystem level) could boost NTFS, if those checkpoints are spaced not arbitrarily in time but in terms of maximum accumulated pending data size. We can see the bad side of NTFS when working with applications that are constantly writing many small files: they are very frequently fragmented and dispersed everywhere, and even though the I/O will complete, the next time you need to use the same set of files you’ll have to perform many random accesses to retrieve them again. Anticipated reads can’t resolve the problem, but performing ordered writes and reads within the same area before the MFT, or in the MFT (where most small files reside), or after the MFT or the paging file should limit the total number of seeks.

    Is there research in progress to allow reordering reads/writes, even though they accumulated from distinct concurrent threads?

    If so, the file manager’s copy operation could start several threads when performing copies of multiple files, with one thread trying to read the directories (names and basic attributes, security attributes, or file location pointer maps) in the MFT, and other threads from a pool of worker threads performing the actual data reads/writes.

    Note also that directories are organized as B-trees: you don’t have to read them completely from start to end to predict their total size and the number of clusters to reserve for the copies in the target MFT. This allows placing the pending data in a preordered buffer, preserving the existing B-tree structure and filling it to more efficient fill levels, near 100% in each B-tree node except possibly some final leaves or the root node.

    The target B-tree would then be even more compact than the source one, minimizing the number of writes really needed and allowing faster passes through the ordered queue of pending I/Os due to shorter distances.

    All I/Os would then be cached by default, including memory faults and pages marked dirty by concurrent use. Also, all I/Os that are performed after an NTFS checkpoint could get a higher priority in order to maximize completion, up to some maximum size, after which they should be paced down. There’s no point in accumulating too much pending I/O after a checkpoint, given that it will be severely slowed down by pending I/O for paging out.

    When I look at the list of low-level I/O actually performed, all I can see is that it is not ordered as it should be: it starts reading some data but does not finish, stopping at a place where continuing the read would not require any seek and would take a few dozen nanoseconds to complete, instead of milliseconds for each seek (because seeks occur between too-distant areas, back and forth again).
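The reordering this commenter describes is essentially the classic elevator (SCAN) disk-scheduling algorithm; nothing here reflects what NTFS or the Windows I/O manager actually does, but a minimal sketch of the idea, with hypothetical block numbers, looks like this:

```python
def elevator_order(requests, head):
    """Order pending I/O requests (given as starting block numbers) so the
    head sweeps upward from its current position, then services the
    remaining lower requests on the way back down (SCAN)."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

# Pending requests accumulated from several threads, in arrival order:
pending = [98, 183, 37, 122, 14, 124, 65, 67]
print(elevator_order(pending, head=53))
# -> [65, 67, 98, 122, 124, 183, 37, 14]
```

Servicing requests in this order replaces the back-and-forth seeks the commenter complains about with two monotonic sweeps across the platter.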

  78. Pras Anand says:

    I was wondering, couldn’t they employ a queued file transfer? If I am copying a large file and then want to copy another, I could have two options:

    1. Copy (putting the file into the queue and completing the operation as soon as the rest in the queue are done). Default.

    2. Force copy (copying the file the way it does now, instantaneously).

    I used Teracopy for a while and it worked like this, creating a queue of files to copy and proceeding through the list. Also, the buffer was manageable up to 16 MB I think, which really helped a lot in some cases.

    Anyway, for those, like me, who don’t have any issues with that, try Teracopy. It made my life a lot easier as I was wanting a queued file-copy system and hey presto someone made it!!
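The queued behavior described above can be approximated with a single worker draining a queue, so concurrent copy requests are serialized instead of competing for the disk. A minimal sketch (file names and sizes are made up for the demo; this is not how Teracopy is actually implemented):

```python
import os
import queue
import shutil
import tempfile
import threading

def start_copy_worker(q):
    """Start a worker that drains the queue one (src, dst) job at a time,
    so copies never run concurrently; None is the shutdown sentinel."""
    def worker():
        while True:
            job = q.get()
            if job is None:
                q.task_done()
                break
            src, dst = job
            shutil.copyfile(src, dst)
            q.task_done()
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

# Demo with a throwaway source file:
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "a.bin")
with open(src, "wb") as f:
    f.write(b"x" * 1024)

q = queue.Queue()
start_copy_worker(q)
q.put((src, os.path.join(tmp, "copy1.bin")))   # "Copy": returns immediately
q.put((src, os.path.join(tmp, "copy2.bin")))   # queued behind the first
q.put(None)
q.join()                                       # wait for the queue to drain
```

A "force copy" option would simply bypass the queue and call the copy routine directly.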

  79. james says:

    I hate to be skeptical, but why do I feel like 10% of the file copy problem had to do with the smart-sounding technical jargon, and 90% had to do with a stupid Explorer estimation bug which was glossed over in a sentence?

  80. Craig K says:


    Not to take this post too off topic, but I struggled with your number 2 item from your Feb 4th post. It really, really annoyed me, so I searched and hacked around and fixed it, and now I get name, size, time, and date modified in every Explorer view.

    The gotcha to all of this is that the window size is a per-video-resolution registry value. I wrote a batch file to automate it as much as possible. I wrote this batch file this morning and it works fine on all the systems I tested it on. It assumes you still have ‘reg.exe’ and ‘shutdown.exe’ in the path. Please examine the batch file before you run it so you feel comfortable with the changes it makes.

    Make a backup of the following registry key before you run it, just in case:

    HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell

    Batch file:

  81. Sean says:

    Craig K: zzz complains about this without knowing that Vista recognizes folders differently than XP. I can’t remember the details now, but if one wants Detailed (or List) view for all folders, this can be done, but one must go to 2 or even 3 places.

  82. Mike Morrison says:

    It’s interesting to see how many people will use this article to post their "Vista suxor" rants. As a long-time Vista user, since Beta 2 if I remember right, the one thing that really annoyed me was the long delay between the appearance of the "Copying File…" dialog and the time when it actually starts to copy the file(s). It seems like the time spent estimating the time to copy could be better used to actually copy the damned file. It looks like SP1 will bring a lot of improvement in this area. I’m really stoked about it after watching Dave Zipkin’s presentation on TechNet. That Vista/Vista SP1 file copy comparison was simply amazing. Let’s hope that the finished product will actually match the presentation!

  83. Craig K says:

    Sean: Yes, like you said, it’s several places where you have to change the detail layout. Even doing that, Vista forgets Explorer’s window size and position even if you hold the CTRL key down while closing. My batch file will make changes that make Explorer behave like the pre-Vista Explorer: details for all folders, and the size and position of the window will be saved. I created this because I run Vista at 1920×1200 resolution, and when I hit the Alt-E key I didn’t want Explorer to pop up in a little bitty window.

  84. David says:


    The time that it says it’s estimating the time to copy IS spent copying the damn file. They just don’t want to give an early estimate which is likely to be way off.

  85. Ilya says:


    I would like to ask you a question using someone else’s benchmarks (which I do not like as a concept), because they raise an eyebrow for me. Adrian at ZDNet (and let me mention that I find him to be technically lacking most of the time) did some benchmarking that supposedly indicates that things like file copy and, most notably, built-in compression are 35% to 75% (!) slower on Vista SP1 vs. XP SP2. This seems very, very weird to me because I didn’t think the storage subsystem was so drastically rewritten for Vista, particularly not with such dramatic moves in the definitely wrong direction, if these benchmarks are to be believed. Is there a technical reason for what is going on, or is there a problem with the benchmark?

    Again, I do not like the form of this question in that I am relying on someone else’s data (someone who has at times been wrong on the technical side of things), but this great difference just raises a big question.

  86. jim says:

    "It’s curious that 3rd party programs like Total Commander are capable of achieving higher speeds when copying a file than explorer.exe"

    I get the exact opposite result, at least with Vista SP1 installed. I have Vista x64 though, and Total Commander is 32 bit. That may explain some of it.

    This picture illustrates the difference when I copy a large file over the local network. Explorer is actually reaching the disk capacity here.

  87. zzz says:

    The key issue/bug in the Explorer detail column implementation *is not* that it now remembers settings for folder (types). I actually would like the ability *to opt to* mark a certain folder and all its subfolders in one go as "music folders".

    The bug is that I do not have photos/music in every folder on the computer, and at least pre-SP1 Vista Explorer thinks I do, therefore I get the wrong column details.

    If I had implemented this algorithm, I’d have set all folders to detail view globally like they are in 2003; then, only if *most* of the files in the folder actually are, say, photos, show the appropriate attributes without removing critical file-related attributes. The way it seems to work now is you need just one photo amongst a hundred files of random type and it decides it’s a photo folder, and you get no useful info and a bunch of empty columns for every file.

    And fixing the speed at which Explorer "snaps" open is trivial. If it can’t be made as fast as it used to be, it can at least be kept open in a hidden window, so that if you only want to open a single Explorer window (Windows + E key once) it just unhides this. This kind of thing makes a big impact on the perceived OVERALL PERFORMANCE OF THE SYSTEM. And Microsoft entirely missed it! Just unbelievable.
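The majority-vote heuristic zzz proposes is easy to express; a sketch with an illustrative extension list (not Explorer's actual detection rules):

```python
import os

PHOTO_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".bmp"}  # illustrative set

def folder_view(filenames):
    """Return 'photos' only when a strict majority of files are images;
    otherwise fall back to a generic details view."""
    if not filenames:
        return "generic"
    photos = sum(1 for f in filenames
                 if os.path.splitext(f)[1].lower() in PHOTO_EXTS)
    return "photos" if photos * 2 > len(filenames) else "generic"

# One stray photo among mixed files no longer flips the whole folder:
print(folder_view(["report.doc", "data.csv", "holiday.jpg"]))  # generic
print(folder_view(["a.jpg", "b.jpg", "notes.txt"]))            # photos
```

The contrast with the behavior being complained about is the `photos * 2 > len(filenames)` majority test, versus triggering on the presence of a single matching file.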

  88. Triangle says:

    Thursday, February 14, 2008 6:23 PM by David

    "The time that it says it’s estimating the time to copy IS spent copying the damn file. They just don’t want to give an early estimate which is likely to be way off."

    Well then the problem is that the UI doesn’t match the implementation. It’s still a bug.

  89. Nick Whittome - MVP SBS says:

    Well, I have installed and since removed Vista SP1 RTM.

    My real world test shows me that SP1 makes file copying that I do on a network MUCH slower.  Instead of 1 – 2 MB per second with RTM code, I am now getting 200 – 300K per second copying to my Windows Home Server.

    I don’t understand it, and I really don’t have time any more to try to understand it.   I’ll leave that up to you clever folks 🙂

    In the meantime, I am sticking with Vista RTM code, and I may even install XP again.

    What I recommend is a Vista Network performance tool of some kind, that can help users who simply want to use the operating system figure out how they can improve copying speeds.

    Moan over…


  90. Norman Diamond says:

    "The bug is that I do not have photos/music in every folder in the computer and atleast pre-SP1 Vista Explorer thinks I do therefore I get the wrong column details."

    zzz, that is a behaviour which you and I dislike, but Microsoft already answered me during SP1 beta testing to say that this is by design.

    On occasions when Explorer takes a setting that we applied to "folder" and applies the same setting to other folders of the same type, Explorer violates what Microsoft said its design is, so that makes it a bug.

    zzz, I think you and I both dislike the design of Me2.  I estimated that Me2 will become usable sometime around SP18, but if misdesigns aren’t fixed then Me2 will never be usable.

  91. Nathanael Jones says:

    Sorry, my e-mail is nathanael DOT jones AT won’t reach me.

  92. Does this also affect the deletion of large folders (in content)? Recycling a directory with some 10,000 files takes ages…

  93. Nathanael Jones says:

    Funny. My second comment appeared, but my first one hasn’t yet. Probably because it has a URL.

  94. Toni says:

    I want a REAL use of copy/move file operations, on the HARD DISK. I (like a lot of other people) get bored every time I have to copy or move several files in Vista on a hard disk (IDE, or USB). In that scenario Vista is very slow vs. XP. That is a FACT.

    "Normal" people are not copying terabytes from one machine to another machine using Ethernet every day. This is not real use.

    Other facts:

    – A non-cached disk implementation is a very idiotic thing. Did Bill think it up? xD

    – Windows copy estimation has always been a disaster.

  95. james says:

    Well so far I’m unimpressed with the so called improvements.

    Copying a 3 GB file to a USB hard drive took 58 minutes at an average speed of 1.30 MB/second.

  96. Adam Graham says:

    Please don’t state your unhappiness at Vista File Copying here!

    I’ve noticed MR’s blogs becoming less frequent, and it wouldn’t surprise me if this is because of all the bitching about the subject he is writing about. Take your "Vista’s file copying is s@#t" somewhere else.

    This blog is generally about insights into the internals of application problems and lower level OS functionality, NOT a debate about them!

    Thanks for the article Mark! appreciate it!

  97. David says:

    If you really want to impress people when copying files on a single volume, you could add copy-on-write support to NTFS. They’ll be able to copy as fast as they can move, the copies will be indistinguishable from physical copies, and it’ll save space. It’ll be one of those nice "do what I want, not what I said" kinds of features that raises the bar for everyone else.

  98. Deja Vidor says:

    Having heard and read about Vista’s hardware requirements, I went out and bought a new machine that came with Vista pre-loaded and, compared to my previous XP machine, had 4x the RAM, 4x the CPU speed, and 4x the graphics power.

    I love the UI, great eye-candy effects.

    But the system is SLOW, slower than a dead dog lying in the middle of a Mississippi country road.

    Filecopy takes TEN TIMES LONGER than on my previous machine (i.e., what took 20 seconds before now takes about 3 minutes).

    Loading a Word document is about FIVE TIMES SLOWER.

    I have put up with this for about six weeks. I have about had it. I would give up the eye-candy and load XP, but I don’t want to go down the nasty road of not having the appropriate drivers for graphics, Wi-Fi, built-in camera, trackpad, etc. Been there, done that, with the previous version of Windows (XP and Win2k; my trackpad still does not work).

    I am hosed and I don’t like it.

  99. Deja Vidor says:

    PS: Good article. I am glad someone at Microsoft is worrying about this issue.

  100. John says:

    "During Windows Vista development, the product team revisited the copy engine to improve it for several key scenarios. One of the biggest problems with the engine’s implementation is that for copies involving lots of data, the Cache Manager write-behind thread on the target system often can’t keep up with the rate at which data is written and cached in memory. That causes the data to fill up memory, possibly forcing other useful code and data out, and eventually, the target’s system’s memory to become a tunnel through which all the copied data flows at a rate limited by the disk."

    Here is where the old saying, "If it ain’t broke, don’t fix it" comes in. XP used RAM to buffer large file movements, speeding things up by sparing some disk time, but the geniuses over at M$ decided that it might use too much RAM and CPU time… Are you kidding me? AFTER YOU FORCE EVERYONE TO UPGRADE TO 2 GD GBs OF RAM FOR YOUR CRAPPY SUPERFETCH, YOU ARE SO WORRIED ABOUT THE PRECIOUS PREFETCH DATA, THAT DOESN’T REALLY OPEN FILES ANY FASTER THAN XP, THAT YOU TURN OFF CACHING FOR FILE TRANSFERS, PUTTING IT ALL ON THE SLOW A$$ HDD?!?!? IDIOTS! THIS SHOULD HAVE BEEN FIXED IN BETA!

    "Perhaps the biggest drawback of the algorithm, and the one that has caused many Vista users to complain, is that for copies involving a large group of files between 256KB and tens of MB in size, the perceived performance of the copy can be significantly worse than on Windows XP. That’s because the previous algorithm’s use of cached file I/O lets Explorer finish writing destination files to memory and dismiss the copy dialog long before the Cache Manager’s write-behind thread has actually committed the data to disk; with Vista’s non-cached implementation, Explorer is forced to wait for each write operation to complete before issuing more, and ultimately for all copied data to be on disk before indicating a copy’s completion."


    "In Vista, Explorer also waits 12 seconds before making an estimate of the copy’s duration and the estimation algorithm is sensitive to fluctuations in the copy speed, both of which exacerbate user frustration with slower copies."

    OH WE KNOW! EVER NOTICE HOW YOUR TRANSFER SPEED ALWAYS GOES DOWN NEVER UP? EVEN DOWN TO B/S?!? M$ TRANSLATION = "estimation algorithm is sensitive to fluctuations in the copy speed"

    "During Vista SP1’s development, the product team decided to revisit the copy engine to explore ways to improve both the real and perceived performance of copy operations for the cases that suffered in the new implementation. The biggest change they made was to go back to using cached file I/O again for all file copies, both local and remote"


    "Unfortunately, the SP1 changes, while delivering consistently better performance than previous versions of Windows, can be slower than the original Vista release"


    "The original Vista copy engine would deliver a high-speed copy, but, because of the out-of-order I/O problem I mentioned earlier, trigger pathologic behavior in the Server 2003 Cache Manager that could cause all of the server’s memory to be filled with copied file data. The SP1 copy engine changes avoid that, but because the engine issues 32KB I/Os instead of 60KB I/Os, the throughput it achieves on high-latency connections can approach half of what the original Vista release achieved."

    NETWORK TRANSFERS ARE GOING TO BE HALF AS FAST (or twice as slow depending on how you look at it) ?!?!?!?!?! ITS GETTING BETTER AND BETTER!

    "The other case where SP1 might not perform as well as original Vista is for large file copies on the same volume."


    "One final SP1 change worth mentioning is that Explorer makes copy duration estimates much sooner than the original Vista release and the estimation algorithm is more accurate."


    "File copying is not as easy as it might first appear, but the product team took feedback they got from Vista customers very seriously and spent hundreds of hours evaluating different approaches and tuning the final implementation to restore most copy scenarios to at least the performance of previous versions of Windows"


  101. Craig Baker says:

    Interesting read. One year after Vista’s release, I’m still unable to consistently copy files from remote servers without encountering failures. It amazes me that such a significant defect can remain unfixed for such a period of time.

  102. PrettySad says:

    It’s all a pretty sad state of affairs at this time. We can’t even install SP1 here at our offices because the first test machine gets the endless reboot problem (which we fixed thanks to the hardworking bloggers out there… not ONE bit of help from Microsoft. Not ONE. Totally unacceptable behavior for a company of this size). Our second test was on a laptop, which had the touchpad software stop functioning due to the IntelliMouse driver trumping the touchpad driver (got that one fixed).

    We’re frankly scared to deploy it. It did NOT install well at all. It’s already caused numerous hours of finding solutions on newsgroups. Now we are finding that the network speed issues are somewhat solved, but mostly not really. Somehow in 5 years Microsoft spent hundreds of hours poring over the previous method of moving files, only to spend even more hundreds of hours trying to fix it? That’s almost unbelievable. Really, NOBODY does that in the real world without getting a serious butt-kicking over it.

    In any case, at this point we can’t really think of a good reason why we should upgrade… even with SP1. It appears to be more of the same… with one compromise after another.


  103. Norman Diamond says:

    zzz: "So the SP1 doesn’t have any UI setting to make all folders in explorer look like they do in 2003 by default?"

    I don’t understand the "by default" part, but can confirm that SP1 still doesn’t have a setting that will apply to all folders.

    Pavel Lebedinsky: "To zzz: The problem with explorer view settings changing on their own has been fixed in SP1."

    No it has not been fixed in SP1.

  104. Todd in L.A. says:


    LOL. Dude, you got it right on target.

  105. David says:

    What was the rationale behind reducing the I/O size from 60 KB to 32 KB? Did it just look cleaner rounding it down to a power of 2?

    And does network latency really cap the maximum transfer rate, even if your bandwidth is much higher? Imagine if TCP had this problem. It’s not reasonable to use a fixed queue size for network transfers, especially one small enough to prevent full bandwidth utilization. If it’s slow now, it’ll be much worse in the future, because as network speeds improve, the speed of light will always be constant.
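David's point is the bandwidth-delay problem: with a bounded amount of data in flight, round-trip latency alone caps throughput regardless of link speed. A back-of-the-envelope calculation (the single-outstanding-request assumption is a simplification for illustration):

```python
def max_throughput(bytes_in_flight: int, rtt_seconds: float) -> float:
    """Upper bound on throughput, in bytes per second, when at most
    `bytes_in_flight` bytes can be outstanding per round trip."""
    return bytes_in_flight / rtt_seconds

# One outstanding 32 KB request on a 10 ms round-trip link caps out
# around 3.3 MB/s no matter how fast the link is; 60 KB requests
# nearly double that, which matches the "approach half" figure in the post.
print(max_throughput(32 * 1024, 0.010) / 1e6)   # ~3.3 MB/s
print(max_throughput(60 * 1024, 0.010) / 1e6)   # ~6.1 MB/s
```

This is exactly why TCP uses a window that grows with the bandwidth-delay product rather than a fixed queue size.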

  106. Howard says:

    I first tried Vista back in Nov 2007, and it seemed ok. However I soon discovered the copy bug – a previously solid VPN link (from my DSL router to a remote firewall) now timed out consistently.

    I tried numerous suggestions relating to this bug, including the hotfixes; nothing worked. I even reinstalled Vista. It only went away when I upgraded my DSL router to a Netgear DGFV338. Now I’ve installed SP1 and it’s broken again. Seriously considering XP again… this is just not acceptable.

  107. Andersson says:

    I’m going back to XP also. I’ve had it; I’ve been using Vista for 6 months now and it leaves the same bad taste in my mouth as Windows ME did in its day. Vista is just a gap filler, worthless.

  108. zzz says:

    >>zzz: "So the SP1 doesn’t have any UI setting to make all folders in explorer look like they do in 2003 by default?"

    >I don’t understand the "by default" part, but can confirm that SP1 still doesn’t have a setting that will apply to all folders

    I meant that:

    "make all folders in explorer look like they do in 2003"

    would mean that all folders have the detail view whether I want it or not.


    "make all folders in explorer look like they do in 2003 by default"

    means that all folders have the detail view applied globally, but the user can go and change settings, similar to ACLs.

    How this would work:

    Go to the music folder and add some columns; then it remembers those for that folder and every folder under it. However, there could be a checkbox in folder options to not apply the column settings recursively, but I doubt that’d see a whole lot of use.

    So by default everything is a generic data folder with detail view and if you want to make entire external HDD a music drive you only change the columns for the root to show album details and you’re done. Just like ACL’s.

    Now *IF* you wanted to "dumb it down" or "make it smart", I’d make it learn the user preference, e.g.: if a folder is full of audio files, then save the column preferences and later apply those preferences to other folders full of only audio files, ignoring hidden files in this process.

    This stuff is nothing new; the ACL way of doing things works quite fine, and MS needs to drop some of these "desktop.ini" things they brought from Windows 95, as they simply do not work.
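The ACL-style inheritance zzz describes amounts to walking up the folder tree until an explicitly configured ancestor is found; a minimal sketch with hypothetical paths and setting names:

```python
import posixpath

# Explicit per-folder settings; everything else inherits from its nearest
# configured ancestor, with a global "details" default at the root.
explicit = {
    "/": "details",
    "/music": "album-columns",
}

def effective_view(folder: str) -> str:
    """Walk toward the root until an explicitly configured folder is found."""
    while True:
        if folder in explicit:
            return explicit[folder]
        parent = posixpath.dirname(folder)
        if parent == folder:       # reached the root without a match
            return "details"
        folder = parent

print(effective_view("/music/jazz/1959"))  # album-columns, inherited
print(effective_view("/docs/reports"))     # details, the global default
```

As with ACL inheritance, marking a single root (say, an external drive) is enough to cover everything beneath it, with no per-folder desktop.ini files.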

  109. zzz says:

    File copy speed on the network: I just compared RTM and SP1 performance, and in the best case the speed was around 45 MB/s. Now, at least consumer desktop computers will soon have 10 Gbit (1000 MB/s) LAN interfaces. Thus Vista Explorer network performance is capped at ~5% of the speed? I’m having a hard time believing this!

    In contrast, a purely usermode FTP client + FTP daemon, regardless of the Windows OS combination used, usually reaches at least 90% of the theoretical speed advertised for the NIC solution.

    So did Anandtech use bad hw/drivers or is the performance really this bad?

    Has MS tested SMB2 with 10 Gbit NICs? Results, please.

  110. David says:

    You’re confusing MB and Mb. 45 MB/s on gigabit is pretty typical for any OS if you’re not using jumbo frames. Just 2-3 years ago, you wouldn’t expect a hard disk to read any faster than that.
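The distinction matters by a factor of eight; a quick unit check on raw line rate, ignoring Ethernet/IP/TCP/SMB framing overhead:

```python
def gbit_to_mbyte_per_s(gbit: float) -> float:
    """Convert a line rate in gigabits/s to megabytes/s (8 bits per byte)."""
    return gbit * 1000 / 8

print(gbit_to_mbyte_per_s(1))    # 125.0 -> 45 MB/s is ~36% of gigabit
print(gbit_to_mbyte_per_s(10))   # 1250.0, not the 1000 MB/s quoted above
```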

  111. John Simpson says:

    It is a pretty sad state of affairs that MS can have a big team working on copying files, and that the result is slower performance than XP. And then SP1 comes out and the performance is still not up to XP. Why not simply go back to the way XP used to copy files?

    While I very much appreciate the insight given in this article, all this theory is of little use if the end result is worse than what we had previously.

  112. ryan says:

    zzz… regarding the comparison, did they actually *use* a 10 Gbit NIC?  If not, that’s a pretty ridiculous conclusion to make.  

    What I found on their site was this: first, information about MMCSS throttling. For max performance, you can disable MMCSS and get 940 Mbps on a Gb connection.

    Regarding actual file copy performance however, it appears you’re looking at their results for a particular client computer–i.e., a desktop with a single hard drive (and from what I can tell, a Gb NIC, not 10 Gb).  Most modern hard drives still have difficulty maintaining 45 MB/s transfer rate consistently over the entire platter.  You’d really need to re-run the test with something that can read/write faster to find out.  I don’t see anything that indicates that 45 MB/s is an SMB2 limit.  Even in the article, they state "We do need to note however that these results are extremely variable on a system-to-system basis due to factors such as hard drive speed and the network controller used."

  113. zzz says:

    Alright, I didn’t even consider that they’d botch the tests by forgoing the use of a RAM drive in case they didn’t have a fast enough HDD to saturate the NIC at hand. Should’ve read it more thoroughly.

  114. Guest1 says:

    Unless queued I/O combined with separate read/write threads and asynchronous I/O is used, I do not expect significant performance improvements.

    BTW: *rofl* no "secret internal tricks" – Just jumbo frames.

    Microsoft can invent new IP stacks (that cause more compatibility problems than they bring improvements), but the standard Ethernet frame size performance problems won’t be solved by this. Ever tried to use jumbo frames in a big environment? Have fun!

  115. jim says:

    "The time that it says it’s estimating the time to copy IS spent copying the damn file. They just don’t want to give an early estimate which is likely to be way off."

    I just moved a bunch of files (411544 items to be exact. About 23 GB of data) from one disk to another. It spent about a minute calculating the speed and counting items before it even created the first destination item.

    So it does not spend time moving files during that first phase. Perhaps copying is different…

  116. Dave says:

    Step 1:

    Implement an inode based FS.

    Step 2:

    Experience the joy and bliss of not having to worry about in-use files, reboots after upgrades, and all the other crap that Windows subjects us to on a daily basis. See why Unix admins always seem to have so much free time on their hands.

  117. Alexandre Grigoriev says:

    Wow! It took 7 years to realize that it doesn’t make sense to use the file cache while copying large files! Who could guess that it’s better to read and write files in large chunks rather than jerk the disk heads back and forth for each 64K of data! I’m afraid to check whether the file cache is still able to bloat uncontrollably, discarding all executable pages, as happens in XP while reading large files. It’s no fun to watch the continuous page-in of executables.

    Instead of all artificial schemes of application startup acceleration, why not just page-in the whole image (up to a reasonable limit) on load? Just touch all its pages, and it’s in! Nothing beats a sequential read.

    At this pace, who cares about fixing bugs! Just create new ones so that people forget about the old ones.

    I thought "Copying files, 0% done" was more meaningful than that dreadful "Calculating time remaining". Who cares that you’re calculating time? I’m copying files, not entertaining myself with a time-remaining gauge. Vista/Longhorn are full of such crap. Was there adult supervision, or was it all offshored?

  118. Craig says:

    Thanks for the article, I learned a few things.

    Why does Robocopy seem to be better than Windows copy?  I have done many data transfers moving files/folders from one server to another or from a workstation to a server and Robocopy is by far the best and most efficient.

    It does not cause the machine to slow down, it just goes to town.

  119. Alexander Trauzzi says:

    Keep it simple guys, where’s your heads in all this?!

    How many blunders does it take Microsoft to admit Linux does things better and faster?

    I would really like to know!

  120. Chris Lees says:

    I’m glad there were virtually no people here proposing unbuffered writes. Over in Linux land at the moment, there are Windows refugees who want "sync" by default so they can pull out their USB thumbdrives without having to go through the effort of right-clicking on the icon and choosing "Eject".

    @Norman: Don’t worry, I’m sure Vista syncs data onto disk before shutting down.

  121. G says:

    As a long-term beta tester of Vista, I was against the release of the product because so many of the code errors you describe in your blog were in the final product.

    I have noted that many still exist post-SP1 in Vista.

    As such, I still do not recommend this product, as it is too unstable for a primary machine.

    I did not see it mentioned, but sharing screenshots from most of the games mentioned is a nightmare. Often the user is presented with a notice that they do not have Administrator rights, even in single-user environments, by default in Vista. Further, Microsoft has yet to post a workaround in the knowledge base. Redirecting images taken inside other applications and copy-protecting them is a big oversight on Windows’ part.

    Thanks for the great blog Mark.

    Keep up the good work.

  122. Bob Bobson says:

    So that’s why I’ve seen a bunch of screenshots of fileop dialogs saying it will take 2,300 years to complete. 🙂

  123. Girish Pandit says:

    Woo, I didn’t know about this! I really appreciate you sharing this article.

  124. verdy_p says:

    Unpacking the Eclipse for Java ZIP (needed for install) on disk D with Vista:

    a bit less than 80 MB total size. Three days to complete, but it never ended: still running at a ridiculous SIX BYTES PER SECOND (after two minutes), and still slowing down (now just 2 bytes per second)!

    I’ve tried all sorts of tricks, but all I can say is that the cached copy based on the memory cache manager is DAMNED BOGUS, as it has absolutely NO guarantee of ever completing in a reasonable time: the larger the number of files in the writing queue, the slower the Cache Manager is.

    Really, it does not matter much whether I/O is cached or not for small I/O; the problem is that the cache manager takes too much time to complete, most probably because it almost never gets CPU. This is even worse on single-core systems, or if there is some small foreground activity, for example the animations Explorer draws in the progress bar: you need to hide the progress bar by placing it outside the visible screen area (just placing another window in front of it does not work, due to Vista’s implicit rendering into a virtual buffer, unless the window is really off screen and has no icon, in which case redraws are not performed).

    Note: CPU usage during the copy is very low (under 5%), as is I/O performed (no massive page faults); there’s a LOT of free memory available (as seen in Process Monitor’s performance display); and the hard disk LED does not blink (only a very tiny blink about every 8 seconds).

    I’ve tried various unzip tools, and there’s always the same problem, except when I unzip using a console-based application (like jar.exe in the Sun Java JDK tools, or unzip.exe in Cygwin).

    Vista is really bogus, and still depends on things inherited from XP that were already bogus, but this has become even more dramatic in Vista due to the heavier involvement of the Cache Manager in performing copies, or in moving files from a temporary directory to a final location that is not on the same volume, which requires a background copy.

    To avoid this you have to make sure that the temporary folder used is on the same volume as the target directory; otherwise slow background copies occur. The Cache Manager is bogus and does not synchronize correctly when working with multiple volumes.

    This is in fact the SAME PROBLEM as slow copies over a remote network, but it is even more critical there with multiple volumes that are not part of the same filesystem: CD-ROM, DVD, USB key, second disk, second partition, junctions, or directories virtualized by UAC when the user’s home directory is not on the same volume as the protected (non-virtual) Programs/Windows/system directories.

    Something is completely wrong in the way Vista schedules the Cache Manager working threads or its I/O completion event queues.

  125. Alexander B. says:

    Shame that the network performance issues remain, and that large files on the same partition are still slow…

    One thing they could have done with Vista was not to settle for the i915, and to go exclusively 64-bit to upgrade the entire "IT industry" to the new platform. I’m sure there is a "for the children" / "fighting the terrorists" angle one could add to 64-bit and unique CPU IDs as well, not to mention the memory protection you get on the 64-bit CPUs.

  126. Norman Diamond says:

    "now just 2 bytes per second"

    That’s because you were lucky and Windows Explorer didn’t crash in the middle of doing the unpack.  If you decide to try to do something else while waiting for those 2 bytes per second, you can watch Windows Explorer stop working and have to start unpacking again from the beginning.

  127. Jan Herder says:

    What really amazes me is that all this was discussed and solved when I last did OS programming, and that was 25 years ago…

    I totally agree with Lawrence D’Oliveiro that it is easily solved.

    Too bad the people at Microsoft seem to suffer from the biggest NIH syndrome in the business.

    Nice though to have gotten an explanation why my Windows Server 2003 takes 5-10 minutes to recover from a Gigabyte copy operation.

  128. Nooobody says:

    Why don’t they put in a feedback system? You know, a negative feedback loop. Try changing parameters on the fly, see how the copy speed responds. E.g. change I/O buffer size from 32KB to 60KB in response to high latency networks, etc.
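    [Ed.: a toy version of that negative-feedback loop might look like the following. This is purely illustrative Python with a made-up grow/shrink policy and arbitrary bounds, not anything a real copy engine is known to use.]

```python
import io
import time

def adaptive_copy(read, write, min_chunk=32 * 1024, max_chunk=1024 * 1024):
    """Copy with a chunk size steered by measured throughput:
    grow the chunk while the rate improves, shrink it when it regresses."""
    chunk = min_chunk
    best_rate = 0.0
    while True:
        start = time.perf_counter()
        data = read(chunk)
        if not data:
            break
        write(data)
        elapsed = time.perf_counter() - start
        rate = len(data) / elapsed if elapsed > 0 else float("inf")
        if rate >= best_rate:
            best_rate = rate
            chunk = min(chunk * 2, max_chunk)   # keep growing while it helps
        else:
            chunk = max(chunk // 2, min_chunk)  # back off when it hurts

# In-memory demonstration; real use would pass file or socket read/write:
src = io.BytesIO(b"y" * 300_000)
dst = io.BytesIO()
adaptive_copy(src.read, dst.write)
```

    Whatever the tuning policy, the copy is correct for any sequence of chunk sizes; only the throughput changes.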

  129. n00b says:


    I have been trying to drag-drop (copy) files into a folder already containing lots of subfolders.  

    I work using the "details" view – and I just CAN NOT find any free screen area to drop the files so they always fall through to the subfolder.

    From memory, I think XP only dropped into subfolders if released over the *filename*.

  130. Norman Diamond says:

    "just CAN NOT find any free screen area to drop the files"

    If you waste a bit of space horizontally, so that the last column that you’re displaying will end somewhat before the scroll bar, then there will be free (expensive, wasted) screen space between the details and the scroll bar.

    "From memory, I think XP only dropped into subfolders if released over the *filename*."

    Yes, along with all other Windows versions of the previous 12 years.  People complained because they couldn’t double-click just anywhere on the line, they had to double-click the folder name.  So now double-clicking has been made easier but dropping has been made harder.

  131. Chris C. says:

    This is a great informative blog, for one. For two, if some people have so much hate for Vista, then don’t run it. This is a blog, not a Vista "bashing" forum. Want to see Vista complaints? Go back to the beta version. Vista RTM with SP1 won’t get any complaints from me: copying files, playing games, or otherwise. markrussinovich looks to have done a LOT of research and put in a LOT of effort to create this informative blog. Keep the comments to the blog itself and not to how much "Vista sucks". It’s getting old. MS drowned OS X and Linux in the "OS wars" and will continue to do so.

  132. Tired of Vista (where's SP2 already?) says:

    I’ve almost had enough of Vista; if it wasn’t for all my software, I think I’d revert back to XP. It was 1000x faster (especially the shell, never mind networking). Hmmmm… I think I’m going to image Vista and reinstall XP.

  133. Emkay1001 says:

    I strongly agree with Chris C. We have a unique opportunity to read about the internals of Windows operating system written by one of the experts in a comprehensible way. Let’s not waste it on "OMG Vista sucks" and "Linux does it better" comments as the point of this blog is not to prove Windows’ superiority (the free market will judge). Plus it doesn’t really provide any valuable insights on the subject.

  134. Eric says:

    I stumbled onto this blog when researching a recurring problem. Does Vista (or SP1) handle very large (millions of files) copies?

    I get "insufficient resources" errors after a day or so of copying with XP Pro. Must reboot to clear. Have tried many machines; SAS, eSATA, and other drives.

    Is Vista better in this regard?


  135. Norman Diamond says:

    "We have a unique opportunity to read about the internals of Windows operating system written by one of the experts in a comprehensible way."

    Well, sort of.  The sellout was good for Mr. Russinovich and who can blame him.  It wasn’t so good for the rest of us.

    Anyway, if you want to read what Microsoft said about Vista, try this:

    "They Criticized Vista. And They Should Know."

  136. Tim Bolton says:

    You know, Eric, sometimes the more you give people, the madder they get…

    As far as commenting on your remarks about Mark – who doesn’t need me to stand up for him whatsoever – it’s like winning an event in the Special Olympics. I may have won, but I’m still retarded for replying to your remark.

    It’s the VENDORS! Since Microsoft doesn’t build the PCs, how can you blame them for bad equipment or a lack of drivers?

  137. n00b says:

    Thank you Norman for providing some reasoning behind this change, and a work around.

    Seemed like the place to ask WHY it was done this way, and if there were any registry tweaks.


  138. David Schwartz says:

    Actually, Linux does allow caches to evict things like resident code pages from memory. The defaults have changed over time and Linux has been tuned recently (in the past four years or so) to do this much less, but it will do it. Google ‘swappiness’ for more information.

    Quoting Andrew Morton: "My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don’t want hundreds of megabytes of BloatyApp’s untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful."

    FWIW, I have never seen Linux mess this up. It generally swaps out code that hasn’t been used in ages and only in cases where the enlarged cache will actually help. It is not wrong to do this, and it is not wrong to allow the cache to be a huge fraction of physical memory. You just have to not screw it up, and it’s easy to screw it up.

  139. SRS says:

    If you look at the source code for GNU cp, the algorithm is really just: open the source file, read in chunks, and write them to the destination file. Some versions take special care with sparse files (to preserve the sparse regions), but that’s it. There’s no copy engine, no modification of paging algorithms, and no optimisation for large files – the kernel is just expected to handle the workload, both memory and CPU, just like it would for any other process.

    I’m not trying to start a flamewar here (I promise) – I’m just interested in the different approaches and why the Vista team decided there was a need for a Copy Engine at all.
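    [Ed.: for reference, the cp-style loop described above really is that small. Here is a schematic Python rendering, with the sparse-file handling the comment mentions left out.]

```python
import os
import tempfile

def simple_copy(src_path, dst_path, chunk=64 * 1024):
    """Open the source, read fixed-size chunks, write them to the
    destination. That is the whole algorithm; sparse-region detection
    (which real GNU cp adds) is omitted here."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            data = src.read(chunk)
            if not data:
                break
            dst.write(data)

# Round-trip a temporary file to show the loop at work:
with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "a"), os.path.join(d, "b")
    with open(a, "wb") as f:
        f.write(b"z" * 150_000)
    simple_copy(a, b)
    with open(b, "rb") as f:
        copied = f.read()
```

    Everything beyond this loop (caching policy, read-ahead, progress) is left to the kernel, which is exactly the design difference the comment is asking about.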

  140. Norman Diamond says:

    OK SRS, I’m not trying to start a flamewar either, but since you spoke of gnu cp…  I have to avoid looking at gnu source these days so that none accidentally sneaks into my work, but I can use gnu cp as a user.

    Several years ago Microsoft admitted to a problem copying directories full of short filenames and long filenames.  For example, suppose a directory contains these files:

    REALLY~1.TXT

    really long filename.txt

    and the short name of the second file is REALLY~2.TXT

    If Windows Explorer copied really long filename.txt first, then in the destination directory it would get short name REALLY~1.TXT, and then when Windows Explorer copied the original REALLY~1.TXT, it would overwrite the destination’s copy of really long filename.txt.  The result would be one miscopy and one lost copy.

    If I recall correctly, after Microsoft figured out what their problem was, they fixed Windows Explorer.  Several years ago.  First copy files that have a single name, such as REALLY~1.TXT in this case, and then copy files that have long filenames.

    About two weeks ago I used a one-year-old copy of gnu cp, with the -r flag.  Fortunately I checked the results.  The target drive was missing a file.  It took several hours to track down which directory was missing a file, and then a few minutes more to guess which file had been miscopied in the same directory.

    Short filenames suck, but that’s no excuse this time.  gnu cp should have been fixed several years ago.
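    [Ed.: the ordering fix described above – copy files whose long name already is their short name first – can be sketched as below. Note that `fits_8_3` is a simplified approximation of the real 8.3 rules, not the actual Win32 logic, which would use something like GetShortPathName.]

```python
import re

# Rough heuristic: a name that already fits the 8.3 pattern is a
# "single name" file, so the destination will not generate a ~N alias
# for it that could collide with a long-named file copied later.
_SHORT_NAME = re.compile(
    r"^[A-Za-z0-9_~!#$%&(){}\-@^'`]{1,8}"
    r"(\.[A-Za-z0-9_~!#$%&(){}\-@^'`]{1,3})?$"
)

def fits_8_3(name):
    return bool(_SHORT_NAME.match(name))

def copy_order(names):
    """Single-name (8.3) files first, long-named files second."""
    return sorted(names, key=lambda n: not fits_8_3(n))

order = copy_order(["really long filename.txt", "REALLY~1.TXT"])
```

    With this ordering, REALLY~1.TXT already exists in the destination before "really long filename.txt" is copied, so the alias generated for the long name cannot overwrite it.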

  141. PJ Katz says:

    Vista SP1 is still slow, and whatever the logic or reason, in 14 years of using Windows PCs and servers nothing ever went wrong with the old, lightning-fast file copy in use prior to Vista.

    So it was a solution in search of a problem that now makes life hell for our Vista users. Need to move 100 large TIF files? Take a lunch break… better still, take the day off.

  142. Norman Diamond says:

    "in 14 years of using Windows PCs and servers, nothing ever went wrong with the old, lightning fast file copy in use prior to Vista"

    Wrong.  And wrong for more reasons than one that was mentioned in the comment immediately preceding yours.

    Of course that doesn’t mean different bugs are better, it means different bugs are different, and still bugs.

  143. SRS says:

    Norman Diamond: My point was not that gnu cp was bug-free, but why MS saw the need for a complex engine: I don’t know anyone who complains that *nix file copying is too slow using cp, but many many people have complained about Vista in this regard.

    Although I agree with you that shortnames do indeed suck, I’d argue that the bug you describe is a problem with Windows shortname preservation and filename enumeration – i.e. it should’ve been fixed at a much lower level than explorer. Then everyone gets the benefit.

  144. Benetnach says:

    No matter how long the programmers spend making the copy engine "better", in Windows XP I move files of 4 or more GB in much less time than in Vista. For example, yesterday I moved a 3.92 GB image from Vista to Server 2003 R2 and it took 52 minutes.

    So for me the Vista copy engine (most of Vista, I think) must die.

    this is my personal opinion.

  145. Norman Diamond says:

    "Although I agree with you that shortnames do indeed suck, I’d argue that the bug you describe is a problem with Windows shortname preservation and filename enumeration – i.e. it should’ve been fixed at a much lower level than explorer."

    It can’t be.  Some callers wish that enumerations would come back sorted in some order.  (For simple languages you can wish for alphabetical order, for less simple languages you can wish for orderings that might have been set by the Library of Congress or various other standards committees, etc.  But no matter what you wish for, there are still cases where you’re not going to get it.)

    Anyway, even if a lower level spec change can please some callers, it can’t please all.  The app is still going to need its own engine.

    Also suppose I had done cp * dest instead of using the -r flag.  Then the glob would deliver filenames as command line parameters in some order and cp wouldn’t even do enumeration.  That would lose again.  In order to produce correct results, cp will need its own engine.

  146. SRS says:

    "It can’t be.  Some callers wish that enumerations would come back sorted in some order." – not the point at all. If you implement something as twisted as short filenames (and I know MS needed to do this for compatibility reasons – but a problem of their own making), then it has to work consistently and seamlessly at the lowest possible api level. And your excellent explanation of the gnu cp bug shows that it doesn’t. Regardless of glob order, or sort order be it alphabetical or Library of Congress, low level file system operations between windows filesystems have to preserve short filenames consistently. Putting a hack in explorer is just bad design.

  147. Norman Diamond says:

    "Regardless of glob order, or sort order be it alphabetical or Library of Congress, low level file system operations between windows filesystems have to preserve short filenames consistently."

    That still isn’t possible.

    The number of possible longnames is somewhere around

    50000 to the 256th power

    (this is a moderately wild estimate, based on a wild guess at the number of UTF16 codepoints and an approximation of the maximum length of a longname component in a directory in NTFS or FAT, and not compensating for the fact that a high half of a surrogate pair must always be followed by a low half, not counting names shorter than the maximum, etc.)

    The number of possible shortnames depends on the OEM code page of the currently active system locale (or options to the mount command in Linux etc.), but again a wild estimate would be somewhere around

    200 to the 11th power.

    You’re not going to get a consistent mapping from the larger set to the smaller set.  The mapping that you’re going to get is going to depend on the order you create the filenames in.  When each filename is created, it will get a shortname that doesn’t conflict with other names that exist at the time.

  148. Aaron Langley says:

    I’d like to have seen some more functional enhancements to the Vista Explorer interface. In a superficial sense, that is the OS to most users.

    It’s about giving the user just enough choice, and Windows Explorer in Vista has, if anything, removed the user even further from the decision process. In fact, everything done since XP sounds like bug fixes and optimization.

    Why not just have a bulk copy command?

    Vista added no pause/resume. And if an error occurs when copying a large number of files, it just cuts out. That’s why most devs still use XCopy.

    What bugs me is that the Explorer progress window looks like it’s only been glammed up with Vista’s new theme. Why isn’t there a "File Copy Manager" that pops up when I begin a transfer, where I can add files to the copy queue when I realise I missed some? Instead there are two copy windows competing for the same resources, and all I can do is still cancel. Windows 3.11 let me do that.

    And what would be powerful for a Business edition OS, and make it feel like one (rather than the one without Chess), is the ability to queue Explorer commands: just a simple visual way, like FTP programs have. (I don’t use FTP any more.) Then I press Start, go and have a coffee, and let it run.

  149. Markku Valkonen says:

    I installed SP1 on my laptop and copy operations really sped up. But the problem is that opening files from a network drive slowed down considerably.

    I have no domain, just a Win 2k3 server and network shares on it. I created an account on the server with the same name and password that I have on my laptop, so it doesn’t ask anything when I access the shares.

    When I copy a file to a local drive, or from a local disk to a share, it is almost instantaneous… but when I open the same file, it takes almost 20 seconds to open. First it takes about 5 seconds for the program to open (for example, WordPad when I’m opening a txt file). But the program is unresponsive and the file itself doesn’t open for another 15 seconds. The size of the file hardly seems to matter at all.

    I fired up Wireshark and looked at what is going on. It seems Vista first retrieves the file, then waits 5 seconds, then asks the server for the file oleacc.dll. The server replies that there is no such file. Then Vista waits another 5 seconds and asks for the file rsaenh.dll. Again the server responds that there is no such file. Vista waits 5 seconds again and then finally opens the file. I had no such behavior prior to SP1 – all files opened quickly in an identical environment.

    Can anyone help me with this problem as it is quite annoying.

  150. SRS says:

    rsaenh.dll – the Microsoft Enhanced Cryptographic Provider library.

    I wonder why this is needed to open your text file.

  151. Graduate Student says:


    I have read the blog and the comments.

    I am thinking of doing a small term project in my OS course that will give me insight into the problem.

    I thought of:

    1) either benchmarking the speed of copy/move/… in Vista (with & w/o SP1) in a couple of scenarios (e.g., many small files vs. a few big ones),

    2) OR making/adapting a basic (probably to be made more sophisticated later) Remaining Time Estimator.

    The second choice is more related to my pattern recognition (prediction) idea.

    I’ll not go into details until I hear comments from you, so that I don’t bias you early.


  152. Graduate Student says:


    If I am to do a small research project (10 working days, 4 hours each) on something related, where can I find more references? I am thinking of a small "Remaining Time Estimator", or of benchmarking and comparing copies and moves…


  153. Ian Murphy says:

    I’ve always wondered why programs like Robocopy are so much faster than Explorer – the difference is more noticeable the more files there are. This may partly be the estimated-time calculation, but even copying 10 x 100 MB files is faster using copy/xcopy or Robocopy than using Explorer. GNU cp also performs impressively.

    It may make an interesting subject for a future posting.

  154. SRS says:

    Ian Murphy: A number of us have wondered just the same (see earlier comments). Then you read the comment posted by Markku Valkonen about multiple attempts to load a cryptographic library (rsaenh.dll) for a simple text file copy and you start to get a bit paranoid about just what ‘added value’ Vista is giving.

    Unfortunately, MS will just sit on this one, so I wouldn’t wait for any useful postings on the subject in this blog.

  155. HR says:

    The history of the Windows OS speaks:

    The performance experience on every Windows version is the same. The only things that changed along the way are the hardware and the "functions". (I’m not talking about the GUI here.)

    The Windows OS could have learned from the *nix world, where speed, simple design, and security are the main goals. But now it’s too late; that will never happen. The Windows OS is a "package", not a fast engine.

    So do not expect anything else from the Windows OS in the future.

    The Vista copy process will perform nicely with a better CPU, a grand graphics card, and more memory.

  156. justapassenger says:

    Finally, an article that explains it all…

    Thank you very much 🙂

  157. Matt says:

    Apparently Microsoft didn’t get the memo on the file copy improvements. I have SP1 installed and it acts just as bad, if not worse.

    I miss you XP.

  158. Obliterator says:

    What I want to know is why so little information is provided in the copy progress dialog. The initial summary is fine, but click "More details" and what do you gain? Transfer speed, that’s it. Where is the indication of which file is being copied? It simply says copying from C: to C:, with no indication of which file or even folder it’s working on. A list of the processed and pending files would be far more useful!

  159. barrie says:

    Fascinating article, but I wonder if you can explain what is happening in this case: since upgrading to SP1 I can’t copy large files (a few MB) to my network drive over a VPN. Another machine on the same home network, not upgraded to SP1, works OK, but at the usual slow pace of Vista file copying. After a few minutes the error message "can’t access c:\directory\filename" comes up. If you are overwriting a file on the network drive, it is deleted. All this arose after installing SP1.

  160. BjornJ says:

    Amazing that Vista has been out this long and it can’t properly copy a file between two computers. I just tried to copy a 700 MB file and it was telling me six hours to perform the operation. So I mapped a drive and, using the Copy command in a DOS window, did it in a few minutes.

  161. ClaesD says:

    Has anyone noticed getting different SHA-1 checksums after copying a large file? Occasionally on SP0, frequently on SP1. It happens after copying over Ethernet, from disk to disk, or even when making a copy on the same disk. I first noticed this when the file transfer manager used for MSDN downloads consistently failed the consistency check after the download. I’ve just tested and verified this after a clean install of both SP0 and SP1. By a large file I mean a typical DVD ISO, 2.87 GB in my test case.
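    [Ed.: one way to catch this kind of corruption is to hash both files in chunks and compare. A small Python sketch follows; the helper name and sizes are illustrative, and the second write simply stands in for a copy operation.]

```python
import hashlib
import os
import tempfile

def sha1_of(path, chunk=1024 * 1024):
    """SHA-1 of a file, read in fixed-size chunks so a multi-GB ISO
    never needs to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

# A copy is good only if the digests agree:
with tempfile.TemporaryDirectory() as d:
    src, dst = os.path.join(d, "src.iso"), os.path.join(d, "dst.iso")
    payload = os.urandom(2_000_000)
    with open(src, "wb") as f:
        f.write(payload)
    with open(dst, "wb") as f:  # stand-in for a successful copy
        f.write(payload)
    match = sha1_of(src) == sha1_of(dst)
```

    Comparing digests before and after the copy is exactly the check the MSDN transfer manager was reportedly failing.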

  162. Carey Holzman says:

    Want to fix Vista, fix these annoyances:

    XP will show you what file it is copying/moving, whereas Vista doesn’t tell you ANYTHING other than estimated time remaining. You can’t easily tell if it’s stuck, nor can you tell which file may be causing the problem, because the display never shows you what is actually happening, only an estimate of when this mystery SHOULD be done.

    You call that details? Really?

    Whatever happened to FREEDOM OF CHOICE?

    It’s called a PC – Personal Computer. I want to make mine personal, but Vista forces me to be a sheep.

    Try putting a shortcut to a DOS program on your Vista desktop and assign an icon to it.

    Go on, I dare you. Don’t give me any crap about DOS programs being old and obsolete, as I have many customers forced into buying a new PC with Vista on it who rely on some DOS-based apps. These DOS-based apps work perfectly, but you cannot assign an icon to them no matter what you do. If Microsoft doesn’t want DOS apps running in Vista, why make Vista backwards compatible? The fix for this is to create a batch file that calls the DOS .EXE file and put the .BAT file on the desktop. Only then can you assign an icon to it.

    What if you want all of your .MP3 files to be represented by a different icon? I prefer WinAmp’s lightning bolt icon. I easily set my XP machine that way. How do you do it in Vista?

    How do I resize the amount of space System Restore uses in Vista? Why does Vista create megabytes of volume shadow copies in versions of Vista where restores are not enabled, thereby just wasting what little hard drive space a user of that minimal version of Vista will most likely have?

    Vista wants me to conform and I refuse. I want Vista to conform to me and my customers’ usage habits.

    Why does UAC assume ANY action is dangerous? Why does it not have a database of known non-dangerous things, such as changing your screen resolution? It’s clearly the simplest application in the world, and its effectiveness is equally simple and ineffective.

    In XP, you can load the pictures from your digital camera, view them as thumbnails, arrange those thumbnails in any order you want, hit CTRL-A, then right-click on the first one and select RENAME. Choose a name and XP will number them with that name in the order you have placed them. Try that in Vista! For example, IMG_8012.jpg, IMG_8015.jpg, and IMG_7822.jpg, displayed as thumbnails in that order, can be renamed to HAWAII VACATION 100, and XP will rename them, in that order, to HAWAII VACATION 1001, HAWAII VACATION 1002, HAWAII VACATION 1003, etc. This is XP’s best feature for digital photography hobbyists. Where is it in Vista?

    I could go on with at LEAST another half-dozen problems, but I’ll save you the speech.

    Let’s hope Windows 7 is out soon and they give the users back control.

  163. Jonny Boy says:

    "I have to avoid looking at gnu source these days so that none accidentally sneaks into my work"

    Isn’t that bizarre? I mean stop and think – isn’t that truly bizarre?

    "I have to avoid looking at someone else’s freely available good work, because if I learn from them and make something better myself, there will be legal and professional implications. Even if I could take a look, I wouldn’t be able to share the results with you like they do. Please buy my product."

    What a pathetic approach to software.

  164. Gregg Wonderly says:

    One of the biggest problems with Windows is that it started as an application task switcher, not as an operating system. It had APIs embedded in application code, and didn’t make it easy to centralize the things this article discusses.

    In the end, there should never, ever be a "copy file API". It’s just the wrong answer.

    It doesn’t matter whether I am reading a file to use it in an application, or to copy it, that read should be as efficient as possible given the current load on the machine.  

    It doesn’t matter why I am writing to a file, it should be as efficient as possible given the current load on the machine.

    What does this mean?  It means that the file I/O stack should include layers that understand that a final "device" is a "network" destination vs "disk" destination and it should then be able to manage caching, seek scheduling and order of operations overall, to maximize throughput.

    The stupid thing about windows copy progress indication is that if you copy a tree of 100,000 files, it has to count them and get their sizes before it starts copying.  This means a huge amount of worthless file I/O that only serves to delay the copy.

    In some cases, I’ve done things like use the cygwin toolset to do

    tar cf - . | (cd /cygdrive/e/dest/dir; tar xvf -)

    and had it complete in about 1/2 the time it took the "copy file" processing to do it.

    There really is no need for this "progress crap" to be around as much as just a listing of which file/directory is being processed next.

    People can see those names and trees going by and get an idea of what the progress is.

    At some point, Windows developers might pay attention to real OS principles. However, the current numbers seem to show that it takes them between 15 and 20 years to finally see and understand what the benefits of computer science are. For example, running protected-mode OSes: the PC was released in 1984, and I was using protected-mode Unix at that time. It wasn’t until the late 1990s that we saw any sign of DOS disappearing as a purported OS.

  165. Gregg5 says:

    "Norman Diamond: My point was not that gnu cp was bug-free, but why MS saw the need for a complex engine: I don’t know anyone who complains that *nix file copying is too slow using cp, but many many people have complained about Vista in this regard.

    Although I agree with you that shortnames do indeed suck, I’d argue that the bug you describe is a problem with Windows shortname preservation and filename enumeration – i.e. it should’ve been fixed at a much lower level than explorer. Then everyone gets the benefit."

    This comment is indeed what should be going on.  Because MS software developers (they aren’t software engineers in my book) are still fixing "applications" to make "OS issues" moot, we see problems like this manifest everywhere.

    The directory scanner should have been changed to return files with short, single names first, and then long filenames.  That way, a program scanning a directory for the purpose of "copying" files out of it would get them in an order that would create predictable results.

  166. Matt Miller says:

    After reading this lengthy thread……..this reminds me of the infamous Richard Nixon quote from the movie "All the President’s Men":

    " If you’ve got em by the Balls, their hearts and minds will follow " 🙂  🙁

    I briefly was a math major in college 40 years ago………but now just appreciate a ‘decent car’

    IT’S STUNNING to me that this is the ‘state of Denmark’ and that many are ‘eating this s–t’!

    I came upon this thread, because I have a ‘simple problem’……..I want to copy and paste files and Vista WITH SP1 won’t let me, because ‘there is not enough room’ to do so :-(((((((((((((    When all I want to do is backup a few missing files into a folder!

    With WXP it was ‘Shift "NO" ‘ and Voila!

    OUTRAGEOUS  :-(((((((((((((((((((((

  167. Bronislav Gabrhelik says:

    Great article Mark! It’s nice to see what goes on behind the scenes and what the evolution of this API has been.

    It looks like CopyFileEx() handles the case when the remote source and destination files lie on the same server, and it performs the copy remotely.  There is no write; instead CopyFileEx() sends several IOCTL commands for the source and/or destination file.  The IOCTL with function code 262 is sent only to the destination file, so I think it is a REMOTE_COPY command.

    The undocumented IOCTL commands are the following:

    The stack proves that they are sent from the CopyFile API:

    0 fltmgr.sys FltpPerformPreCallbacks + 0x2e5
    1 fltmgr.sys FltpPassThroughInternal + 0x32
    2 fltmgr.sys FltpPassThrough + 0x199
    3 fltmgr.sys FltpDispatch + 0xb1
    4 ntkrnlpa.exe IovCallDriver + 0x252
    5 ntkrnlpa.exe IofCallDriver + 0x1b
    6 ntkrnlpa.exe IopSynchronousServiceTail + 0x1e0
    7 ntkrnlpa.exe IopXxxControlFile + 0x6b7
    8 ntkrnlpa.exe NtDeviceIoControlFile + 0x2a
    9 ntkrnlpa.exe KiFastCallEntry + 0x12a
    10 ntdll.dll ZwDeviceIoControlFile + 0xc
    11 kernel32.dll DeviceIoControl + 0xd2
    12 kernel32.dll BaseCopyStream + 0x1cf9
    13 kernel32.dll BasepCopyFileExW + 0x740
    14 kernel32.dll CopyFileExW + 0x4a
    15 cmd.exe do_normal_copy + 0x7fe
    16 cmd.exe copy + 0xb2
    17 cmd.exe eCopy + 0x10
    18 cmd.exe FindFixAndRun + 0x1de
    19 cmd.exe Dispatch + 0x14a
    20 cmd.exe main + 0x21a

  168. Bronislav Gabrhelik says:

    >IOCTL with function code 262 is sent only to destination file,

    >so I think it is REMOTE_COPY command.

    I just found out that IOCTL 262 is named IOCTL_COPYCHUNK and can be found in the MSDN documentation.

  169. Larry says:

    I noticed a question on corrupted data when copying from an SD card or Compact Flash card to a folder on Vista. I have lost many photos, thinking it was a camera problem. Trouble is, when I looked at the photos in the camera, they were fine.

    No one replied to that item, though. Is this a known problem, is there a fix for it, and also does SP1 fix it?

    It was pretty heartbreaking to look at the number of photos over the past few months that had been corrupted by what should have been a safe and routine copy operation. So I would appreciate advice from those of you who know about this. Thanks!

  170. SRS says:

    Look at all the inconsistent function naming conventions in cmd.exe shown in the stack trace above: do_normal_copy, copy, eCopy, FindFixAndRun, Dispatch.

    Don’t MS coders follow MS standards?

  171. Shane Creamer says:

    I have Q6600 with 4GB of ram and 4 Western Digital Raptors that are striped for a 2xHD C: drive and a 2xHD D:.

    HD Tach and other software show these capable of 125MB per second, and in XP I can copy 6GB of data in well under a minute (8 files, about 750MB each).

    With Vista SP1 Ultimate with A/V disabled, Diskeeper (background defrag) disabled and no Windows Indexing occurring it still takes over 10 minutes to copy the same data with all SP1 + all required and optional fixes installed from Windows Update.  

    Is there some way Xperf or Process Monitor can determine why?  I mean it’s pretty obvious it’s Vista, but what function call is choking on the copy operation?

  172. Peter Goodwin says:

    Post SP1 – Better than it was, but sending a 7MB file from Vista via FTP completes in a few seconds, while the same file via VPN to a remote XP box takes 10–15 mins or falls over (the VPN works fine from XP).  Seems like although some areas have improved, there are still subtle TCP-related issues. <sigh>

  173. Auto Carr says:

    I have an issue with Vista that does not occur with XP.  I create a folder on a shared drive where the user only has rights to Create files / write data and Create folders / append data.  With XP I get an error if I try to copy the same file a second time into the "dropbox" folder; with Vista I do not get an error and the user is not informed that the file name already exists.  Vista should also return an error, but instead it silently fails to copy the file.

    Can anyone else duplicate this issue?

  174. HSV says:

    Another thing to remember: don’t click More Info on the file transfer window, as this will gradually slow your transfer speed, even in SP1, from 60–80MB/s down to about 20MB/s.  Even if you then hide More Info during the transfer it won’t speed back up, so forget about watching the speed; if you’re not curious it’ll be finished before you know it.

  175. shoreke says:

    Turn off Vista Network Throttling:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile

    In the right hand pane double click NetworkThrottlingIndex and type in FFFFFFFF as Hex.
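    The same tweak can be expressed as a .reg file (this assumes the standard SystemProfile key path given above; whether the change is advisable is a separate question):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile]
"NetworkThrottlingIndex"=dword:ffffffff
```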

    I don’t know if this is good or bad advice. What say you?

  176. Tim Bolton says:

    "Thanks to Mark Minasi and his forum for this info."

    To check the status run: netsh interface tcp show global

    To disable it, run: netsh interface tcp set global autotuning=disabled

    To enable it, run: netsh interface tcp set global autotuning=normal

    The last two should be run from an Administrator Command Prompt, not a “normal” command prompt.

    You can also try netsh int tcp set global autotuning=experimental

  177. Realist says:

    The solution to this file copy performance problem in Windows Vista is simple: do it how Linux does it.  Linux does not suffer from any performance issues in file operations.  I’m truly surprised that Microsoft has BILLIONS of dollars for research and development and can’t get it right, yet Linux is written by a bunch of rag-tag volunteers and is getting close to becoming a superior product over the current Windows consumer OS.

  178. Dale says:

    There’s no improvement in Vista SP1 file transfers to XP.  It’s been 20 minutes so far with, according to the copy dialog, 23 more minutes to go to copy 20MB from Vista to XP.  This is on two machines with gigabit network cards that copy perfectly well XP to XP or Vista to Vista.

    I’m generally a big Vista fan but this is totally absurd.  Have they even tested this stuff?

  179. Jeff S. says:

    It’s December 2008 – Last day of the year tomorrow. This thread was started 10 months ago.

    I have Vista Ultimate 64 with SP1 installed. Hotfixes / registry edits applied. etc.

    Simple: Copy two 440MB files from external DVD Bluray (USB 2.0) to hard disk.

    Explorer – takes more than 10 mins (never finishes, I’m not that patient)

    Tera Copy – takes 5.5 minutes.

    Directory Opus – takes 50 secs. About the same speed as an XP box we have here.

    It’s hard for me to conceive that a whole team created a copy engine that’s far slower, and that nobody at MS noticed.

    And more than a year later, it’s still not fixed. MS Support is telling me to use a 3rd party file copy program as a work around.

    This is really sad.

  180. Mark Jacobs says:

    This is interesting.  I had to copy about 60GB of AVI files from one external USB drive to another through a Vista laptop.  I used a Win32 utility program I had written in C++ which calls CopyFile (not the CopyFileEx version, the standard version instead) to copy the files, and it copied all 60GB in about 20 minutes (bombing along at about the 480Mbps a USB 2.0 connection can handle).  The old techniques are the crudest and the best.

  181. Peter Glasmacher says:

    Well, from a purely technical viewpoint, all the explanations are just fine.

    However, from a user’s viewpoint, the copy/move behaviour of Vista is just plain not acceptable.  You set up an operation to back up a bunch of data and the system stops (yes, completely stops!) a number of times for about 30 seconds.  I thought Vista was considered a multitasking system.

    In my opinion, it’s just design without the user in mind……


  182. Mark Jonson says:

    Over a year later, I still believe the original Vista copy method was better.  Since installing SP1 on my computers, just about every copy operation whose completion time I’m familiar with has slowed down.

  183. bikeman says:

    "During Windows Vista development, the product team revisited the copy engine to improve it for several key scenarios."

    They should’ve left it alone – file copy/move in XP is way faster than Vista

  184. dirbase says:

    Hi Mark,

    Referring to the file copy under XP described in the first part of your article above( ref: first process monitor screenshot), I have tried to interpret the sequence of events:

    event 0: IRP_MJ_READ; maps the source file into the cache, tries to read bytes 0-65,535

    event 1: IRP_MJ_READ; reads bytes 0-65,535 faulting-in from source file

    event 2: IRP_MJ_READ; read-ahead bytes 131,072-196,607 into the cache

    event 3: IRP_MJ_READ; read-ahead bytes 196,608-262,143 into the cache

    event 4: IRP_MJ_WRITE; maps the destination file into the cache and writes to dest file bytes 0-65,535

    event 5: FASTIO_READ; tries to read from the cache in fast I/O mode bytes 65,536-131,071

    event 6: IRP_MJ_READ; reads bytes 65,536-131,071 faulting-in from source file (fast I/O was unsuccessful)

    event 15: FASTIO_WRITE; writes in fast I/O mode to the cached dest. file bytes 65,536-131,071

    event 16: FASTIO_READ; reads from the cached source file in fast I/O mode bytes 131,072-196,607

    event 17:  FASTIO_WRITE; writes in fast I/O mode to the cached dest. file bytes 131,072-196,607

    event 18: FASTIO_READ; reads from the cached source file in fast I/O mode bytes 196,608-262,143

    event 19: FASTIO_WRITE; writes in fast I/O mode to the cached dest. file bytes 196,608-262,143

    event 22: FASTIO_READ; reads from the cached source file in fast I/O mode; end_of_file detected

    events 811,1175,1213,1233: IRP_MJ_WRITE; writes behind  and cache flushes for bytes 0-262,143

    Is this interpretation correct?

  185. dirbase says:

    Now, a small question: since "at a file’s first I/O (read or write)operation the cache manager maps a *256-KB* view of the file that contains the requested data into a free slot in the system cache address space"(quote from Windows Internals 4th edition page 660) [AFAIU, using mapped file I/O], why didn’t XP use these cached views of 256KB (even for read-ahead) as its transfer units for file copy instead of 64KB? It seems to me that the data is there, in the cached views, so that the copying process could be faster..

    (a correction to my previous comment: "maps the source file into the cache" is part of event 1 rather than event 0.)

  186. dirbase says:

    Sorry to comment again, but it seems to me that the following sentence in the paragraph "File Copy in Previous Versions of Windows" may need some clarification:

    "Explorer’s first read operation at event 0 of data that’s not present in memory causes the Cache Manager to perform a non-cached I/O, which is an I/O that reads or writes data directly to the disk *without caching it in memory*, to fetch the data from disk at event 1, as seen in the stack trace for event 1. In the stack trace, Explorer’s call to ReadFile is at frame 22 in its BaseCopyStream function and the Cache Manager invokes the non-cached read indirectly by touching the memory mapping of the file and causing a page fault at frame 8."

    Looking at the CcCopyRead function, AFAIU, the missing page in physical memory is read from disk and copied *both* to the cache buffer and the user buffer (so it’s indeed read directly from the disk into the user buffer but it also ends up in the system cache).

    This is different from the case of a read-ahead, where the function CcPerformReadAhead, for a missing page in physical memory, will only copy the data from the disk to the cache buffer and not to the user buffer.

  187. zzzy says:

    It so goes that my Vista (client) would crash the entire XP (server) TCP/IP stack when accessing these 3GB+ files.  It would happen randomly, but consistently enough to undermine any attempt at completing one task or another.

    Having tried all possible tricks on both sides, including:

    (XP/server side)

    – driver updates,

    – increasing the Lanman IRPStackSize,

    – fine tuning the NdisWanMTU,

    – patching TCPIP.SYS to allow 50 concurrent half-open connects,

    (Vista/client side)

    – driver updates

    – disabling TCP autotune (see above comment from Tim Bolton from Dec. 2 2008)

    – fine tuning NdisWanMTU

    – tweaking the NetworkThrottlingIndex,

    I had to concede and consider I may be facing a network driver issue.  

    All until I had the great idea of installing Lighttpd on the XP/server side, and using Firefox as download manager on the Vista/client side.  

    Suddenly, not only did my transferred files become restartable, but no more crashes/errors/disconnects occurred.  At the same time transfer rates improved from ~2200 kbps (Vista file copy between crashes) to ~2.1 MBps (please note capital B).  

    I have to conclude now that all the "fine tuning" of Vista file copy Microsoft folks have been fussing over turned up c**p (sorry Mark R., I really respect *your* work).  It could very much be that file copy in both XP *and* Vista is irrevocably broken which is equally fine by me.  

    A decade of "fine tuning" by a well-endowed R&D team couldn’t beat work done by volunteer enthusiasts, both with respect to stability and performance.



  188. Clovis says:

    As someone said over a year ago, a file copy engine shouldn’t be any more complex than open file, read, write, [read, write, […]], close file. The system should handle the workload. That’s what it’s there for. Linux manages this, Mac OSX manages this, heck, DOS managed it.

    So either the Vista algorithm is so over-complex and can’t be simplified because of the egos involved (and I don’t mean Mark R. here), or the Vista copy does something additional with the file contents during the copy.

    Hopefully for Windows users this will be fixed in Windows 7.

  189. Chris Quirke says:

    When there’s a difficult trade-off decision to make, the best thing to do may be to leave users the opportunity to control this.

    A visible way would be a slider that appears on the file operations dialog when a copy or move operation will take over (say) 5 seconds.  Pull the slider towards File Operations, and more memory is allocated to caching these; pull the slider towards Running Programs, and cache memory is reduced to favor other running tasks.  The effect would end with that operation (so that small copy ops don’t "inherit" these settings with no UI to change them).

    A less visible "power user" approach would be similar to holding down Shift when clicking No to file operation stalls (which has the "always No" effect) or when deleting, to bypass the ‘bin.  Hold Shift down when clicking Copy, Move or Paste, and more memory will be allocated to that operation to speed it up, XP style.

    Perhaps add as an enhancement to W7 SP1, or as a Power Toy if the plumbing doesn’t go too deep?

  190. Chris Quirke says:

    If the predicted theory isn’t working out after studying exactly what happens during file operations, then perhaps there are other overheads at work.

    The obvious one is resident av scanning, but there may be background processes that are triggered whenever the contents of a folder are changed; thumbnailers, indexers, SR/Previous Versions, etc.

    Are those optimised to back off for long enough that they are not triggered into rebuilding their data every time a new file is added to a destination, or deleted from a source?

  191. MichaelG says:

    First of all, thanks to Mark.  It’s interesting and useful.


    I see a lot of undocumented DeviceIoControls inside a Vista trace of a copy file operation:

    1. Device:0x14, Functions:260-262, Method:0

    The only information I found about them is IOCTL_COPYCHUNK in MSDN, which is only sparsely documented.

    I’d ask Mark, if possible, to describe how it works…

  192. Rob says:

    Just to let you know Mark this post helped me. Thanks.

  193. copy says:

    This is PATHETIC.  File copying is one of the most basic operations of an OS and MS can’t get it right.  Have they heard of the KISS principle?  KISS + let the user have a choice.

    I’m trying to copy 13 pictures from a Vista Home Premium PC to a Vista Home Premium laptop over a wired network.  It freezes during "calculating".  Canceling causes the entire laptop to become nonresponsive.  It can’t even shut down gracefully.  All the fancy sh*t sounds great, but it’s a typical programming pitfall.  "It’s not as simple as it sounds" only because you’re doing it wrong!

    Seriously, does the MS team doing all this fancy research on copying algorithms realize how very pathetic this is?

    The millisecond another OS has decent UI and device support, I’m dumping MS.  Been waiting for this day for 20 years now, unfortunately.

  194. evan says:

    I don’t understand why Vista SP2 will not save the file settings when an attribute is changed by an administrative user.  It also appears that an Administrator is not really an administrator.  Windows Vista, for me and my business, has been the worst operating system ever: countless headaches, non-functional, un-user-friendly, and totally useless.  We and 18 other businesses in Australia are getting together to ensure Microsoft pays for the damage that has been done.  A class action with mountains of evidence will get media attention.  You may be able to get away with this deception in other countries, but in Australia, businesses together targeting the cause, rectification and results of what Microsoft has done will have a massive adverse effect on Microsoft’s smug attitude.

  195. chaunam says:

    Does anybody know what Microsoft applications are using IOCTL_COPYCHUNK?

Comments are closed.
