Content Deployment – Best Practices

In the past I have published several blog posts about the various problems that can occur with Content Deployment. As it is often hard to find the right resource, I have now decided to compile the different known issues into one large article:

  

Avoiding common problems

Problem 1: Mixing deployments with and without retaining object identity

First of all: if possible you should avoid this! Importing with different settings for this property into the same database can lead to serious problems during future deployments, and such databases become hard to maintain.

Be aware that this also means that you should not use STSADM -o import against a database that should be used as the destination of a content deployment job!

STSADM -o import will not retain the object identity while content deployment does.

So why is there a problem when mixing imports with different RetainObjectIdentity settings?

The reason is that with RetainObjectIdentity enabled, the imported object has to be created with the same name and the same GUID at the same location in the destination database as in the source database. If the item already exists it will be updated; if not, it will be created.

Problems occur if there is an item with the same name but a different GUID in the destination database. This can happen if someone has authored on the destination server and created items with the same name, or if content was imported from the source server WITHOUT the RetainObjectIdentity setting set to true.

If items with the same name but different GUIDs are allowed for the affected item type, you will end up with two items with the same name on the destination server. This will be the case, e.g., for ordinary list items.

If items with the same name but different GUIDs are not allowed for the affected item type, the import will run into an exception similar to the one below and stop:

Failed to create the 'Pages' library. OriginalException: There can only be one instance of this list type in a web.

=> In order to avoid this problem you have to guarantee that content added to the destination database will not have any name/GUID conflicts with the source database – even if new content is added to the source database in the future!
 

Problem 2: Running multiple imports without retaining object identity for updates of the same content

When doing an export and import without retaining the object identity, you can end up with duplicate items in lists on the destination server, as each import tries to create the same list item again with a different GUID. The import is not able to decide whether you would like to overwrite an existing item with the same name or whether you would like to have multiple list items with the same name. Without retaining the object identity you will end up with multiple list items with potentially the same content. To force an overwrite of list items you have to retain the object identity.

That means you cannot use STSADM -o export/import as a replacement for content deployment! If you need to deploy content to a remote destination server without connectivity, you need to write a custom tool that retains object identity, based on the code samples provided in Part 3 of this article series, rather than using STSADM -o import.

=> STSADM -o export and import should only be used if the content being imported does not already exist in the destination database and if the database will not be used as the destination database for content deployment (see Problem 1 above).
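
For reference, here is a minimal sketch of what such a custom tool could look like, based on the content deployment and migration API (see Part 3 of this article series for the complete approach). The URLs and the package path are placeholders:

using System;
using Microsoft.SharePoint.Deployment;

namespace ContentDeploymentSamples
{
    class ExportImportWithIdentity
    {
        static void Main(string[] args)
        {
            // Export the source site collection to the file system.
            // In a real tool the export runs on the source farm and the
            // import on the destination farm, with a file transfer between.
            SPExportSettings exportSettings = new SPExportSettings();
            exportSettings.SiteUrl = "http://source/sites/site1";   // placeholder
            exportSettings.ExportMethod = SPExportMethodType.ExportAll;
            exportSettings.FileLocation = @"C:\ExportPackage";      // placeholder
            exportSettings.FileCompression = false;                 // uncompressed for simplicity

            using (SPExport export = new SPExport(exportSettings))
            {
                export.Run();
            }

            // Import with object identity retained, so that repeated imports
            // update existing items instead of duplicating them.
            SPImportSettings importSettings = new SPImportSettings();
            importSettings.SiteUrl = "http://destination/sites/site1"; // placeholder
            importSettings.FileLocation = @"C:\ExportPackage";
            importSettings.FileCompression = false;
            importSettings.RetainObjectIdentity = true;                // the crucial setting

            using (SPImport import = new SPImport(importSettings))
            {
                import.Run();
            }
        }
    }
}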
 

Problem 3: Deleting an item from the source site that belonged to the site definition and recreating it

This is a different flavor of the problem discussed as Problem 1.

During provisioning of a site, items defined in the site definition template are added to the site. Problems can occur when changes are made to these provisioned items – especially if they are deleted and replaced with items of the same name. That approach works well on a single-server installation, but it causes problems when using content deployment.

The reason is that during content deployment the site is provisioned on the destination server using the site definition template, which also causes all items defined in this template to be created. When content deployment then tries to import the updated or replaced items, there will be a conflict and you will end up with an exception similar to the one in Problem 1.

=> You should never modify or delete one of the items created through the site definition in your site. If the site definition does not suit your needs, create a custom site definition that does and use it instead, to avoid the need to customize some of the provisioned items.
 

Problem 4: Deploying from destination back to source

This is something that can theoretically be done, but only if the source hasn't changed since it was last deployed to the destination. Otherwise the same issues as in Problem 1 can occur.

Also be aware that it will not be an incremental deployment – meaning you cannot just deploy the changes made since you last deployed from source to destination. The reason is that the timestamp information about what to deploy is stored with the deployment job. As this information only exists on the source system, the first deployment from destination back to source will deploy everything! So the result would be the same as deploying into an empty site collection on the source system – and actually deploying into an empty site collection would be better, to avoid problems in case changes have been made on the source system.
 

Problem 5: Deploying partial content without exporting the parent items

When deploying with retained object identity (as content deployment in Central Administration does) it is not possible to reparent items. Deployment with retained object identity requires that the identity of the object is the same on the destination server, and the identity is defined by ID, name and URL.

So the parent of each deployed object has to exist on the destination server in order to successfully import the package there. We have seen customers try to export a specific subtree of the site without exporting the parents – e.g. only a specific variation label without the variation root.

If the parent of the exported objects does not exist on the importing site, the items cannot be imported and the deployment will fail.

=> Ensure that all parents of all content being exported exist on the destination server.
=> Or create a custom export tool that does not retain object identity and changes the parent during import, as discussed in Part 3 of my content deployment and migration API article series – but be aware of the limitations discussed as Problem 2. A minimal sketch follows below.
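
Here is a minimal sketch of the import side of such a tool (URLs and package path are placeholders; the complete approach is described in Part 3). Because the object identity is not retained, the imported objects can be attached under a different parent web:

using System;
using Microsoft.SharePoint.Deployment;

namespace ContentDeploymentSamples
{
    class ReparentingImport
    {
        static void Main(string[] args)
        {
            SPImportSettings settings = new SPImportSettings();
            settings.SiteUrl = "http://destination/sites/site1";          // placeholder
            settings.WebUrl = "http://destination/sites/site1/newparent"; // new parent web
            settings.FileLocation = @"C:\ExportPackage";                  // placeholder
            settings.FileCompression = false;

            // Default value, shown here for clarity: identities are NOT
            // retained, which is what allows reparenting
            settings.RetainObjectIdentity = false;

            using (SPImport import = new SPImport(settings))
            {
                import.Run();
            }
        }
    }
}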
 

Problem 6: Deploying partial content with references outside the selected scope

This is similar to Problem 5, except that we assume the parent objects of the selected objects exist in the destination database. In this situation you might assume that no problems should occur. That is not correct: when exporting items in a subtree, by default all referenced objects (like images or documents) are exported as well, even if these objects are outside the selected scope. In this situation the export package will contain objects which might not have a parent in the destination database.

If the parent of such an image or document does not exist on the importing site, the item cannot be imported and the deployment will fail.

=> Ensure that authors do not use resources from other parts of the site collection that are not being exported.
=> Or create a custom export tool that uses the ExcludeDependencies setting to exclude objects outside the selected export scope. See Part 2 of my content deployment and migration API article series for more details, and the sketch below.
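
Here is a minimal sketch of such an export (URLs and package path are placeholders):

using System;
using Microsoft.SharePoint.Deployment;

namespace ContentDeploymentSamples
{
    class ExportWithoutDependencies
    {
        static void Main(string[] args)
        {
            SPExportSettings settings = new SPExportSettings();
            settings.SiteUrl = "http://source/sites/site1"; // placeholder
            settings.FileLocation = @"C:\ExportPackage";    // placeholder
            settings.FileCompression = false;
            settings.ExportMethod = SPExportMethodType.ExportAll;

            // Do not follow references (images, documents, ...) that live
            // outside the selected export scope
            settings.ExcludeDependencies = true;

            // Restrict the export to a specific sub web
            SPExportObject webToExport = new SPExportObject();
            webToExport.Type = SPDeploymentObjectType.Web;
            webToExport.Url = "http://source/sites/site1/subweb"; // placeholder
            settings.ExportObjects.Add(webToExport);

            using (SPExport export = new SPExport(settings))
            {
                export.Run();
            }
        }
    }
}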
  

Problem 7: Mixing full and incremental deployment

A common error I often see is that customers mix incremental and full deployment when deploying a site collection – e.g. incremental deployment every day and a full deployment once a week, just to be sure (maybe because they do not trust incremental?).

The problem is that incremental and full deployment do not deploy the same type of content, so the data on source and destination might no longer be the same.

Why can this happen? The reason is that only incremental deployment exports information about deleted items, which is required to ensure that the items also get deleted on the target server.

Consider the following scenario:

  1. Site A has been created
  2. Site A gets deployed to the target server
  3. Site A is deleted
  4. A full deployment is done
  5. Further actions happen
  6. An incremental deployment is done

In step 4 the deleted site will not be exported. The incremental deployment in step 6 will only deploy changes made between steps 4 and 6. That means that after step 6, site A still exists on the target server!

You might not even notice this if you are not actively monitoring whether the content on source and target is in sync. Only when content with the same name gets recreated later will you notice it, as you will get errors similar to the following:

  • "The Web site address "/A" is already in use." or
  • "The specified name is already in use. A document cannot have the same name as another document or folder in this document library or folder."

This affects all types of objects (e.g. sites, pages, lists).

To ensure that the target and the source site collection stay in sync, it is mandatory to use full deployment only once: for the initial deployment from source to target. Afterwards you should only use incremental deployment and not add additional full deployment steps. That also means you should avoid situations where content deployment fails to run an incremental deployment because the timespan between incremental deployments exceeds the configured retention of the change log (see here for details).

On the other hand that also means: whenever you see a need to do a full deployment, you should do this full deployment into an empty site collection rather than into the previously used site collection that has already received content from earlier deployments.
 

Problem 8: The CAB file size exceeds the maximum content length configured in IIS 7

Some background info: Content Deployment in MOSS 2007 first exports all content to the file system as XML and binary files, and afterwards packages these files into CAB files, which are then uploaded over HTTP to the target MOSS server, where they get extracted and imported.

The preconfigured maximum size of the CAB files generated by content deployment is 10 MB, as discussed in this article. IIS 7, on the other hand, has a preconfigured upload limit of 29 MB, as discussed in KB article 925083.

So with these two limits (maximum size of a CAB file = 10 MB, maximum upload size in IIS 7 = 29 MB) we would not expect any problems.

The problem here is that MOSS does not split single exported files across multiple CAB files. So if the MOSS site contains single documents which cannot be compressed to less than 10 MB, the CAB file size can become bigger than 10 MB.

This becomes critical as soon as the CAB file size exceeds 29 MB.

As soon as content deployment tries to upload a CAB file larger than 29 MB, the content deployment job will fail and you will find the following entries in the application event log of the exporting server:

Event ID: 5323 
Source: Content Deployment
Description: Failed to transfer files to destination server for content deployment job 'test 1'.  Exception was 'System.Net.WebException: The remote server returned an error: (404) Not Found.

Event ID: 4958
Source: Content Deployment
Description: Publishing: Content deployment job failed. Error: 'System.Net.WebException: The remote server returned an error: (404) Not Found.

Event ID: 6398
Source: Windows SharePoint Services
Description: The Execute method of job definition Microsoft.SharePoint.Publishing.Administration.ContentDeploymentJobDefinition threw an exception. The remote server returned an error (404) Not Found.

In addition you will find similar entries in the ULS log on the source server:

OWSTIMER.EXE (0x0778)                            0x079C CMS                                      Content Deployment                            0             Unexpected      ContentDeploymentJob.DoServerToServer: Remote connection failed while uploading files for source-Job 'Test' with exception 'The remote server returned an error: (404) Not Found.' 

To verify that the problem really is the upload size, you need to check the IIS log on the target server to see if the response is indeed a 404.13:

2008-09-01 08:55:08 10.10.10.2 POST /_admin/Content+Deployment/DeploymentUpload.aspx filename=%22ExportedFiles13.cab%22&remoteJobId=%11456fa7ed-ddcdedcdd-9aae-a1adsf5re1db%22 1976 – 10.10.10.3 – 404 13 0 62

To solve this issue you need to modify the web.config of the Central Admin site and add an entry similar to the following:

<system.webServer>
    <security>
        <requestFiltering>
            <requestLimits maxAllowedContentLength="52428800"/>
        </requestFiltering>
    </security>
</system.webServer>

52428800 = 50 MB in this sample. You might need to adjust this to your specific needs.

Problem 9: Features used in the site collection are missing on the exporting server

A common problem we see with content deployment and with STSADM -o export is the error message below:

[4/11/2008 9:25:01 AM]: FatalError: Failed to compare two elements in the array.
   at System.Collections.Generic.ArraySortHelper`1.QuickSort[TValue](T[] keys, TValue[] values, Int32 left, Int32 right, IComparer`1 comparer)
   at System.Collections.Generic.ArraySortHelper`1.QuickSort[TValue](T[] keys, TValue[] values, Int32 left, Int32 right, IComparer`1 comparer)
   at System.Collections.Generic.ArraySortHelper`1.QuickSort[TValue](T[] keys, TValue[] values, Int32 left, Int32 right, IComparer`1 comparer)
   at System.Collections.Generic.ArraySortHelper`1.Sort[TValue](T[] keys, TValue[] values, Int32 index, Int32 length, IComparer`1 comparer)
   at System.Collections.Generic.ArraySortHelper`1.Sort(T[] items, Int32 index, Int32 length, IComparer`1 comparer)
   at System.Array.Sort[T](T[] array, Int32 index, Int32 length, IComparer`1 comparer)

*** Inner exception:
Object reference not set to an instance of an object.
   at Microsoft.SharePoint.SPFeature.EnsureProperties()
   at Microsoft.SharePoint.SPFeature.get_TimeActivated()
   at Microsoft.SharePoint.Deployment.WebSerializer.ExportFeatureComparer.System.Collections.Generic.IComparer<Microsoft.SharePoint.Deployment.ExportObject>.Compare(ExportObject exportObject1, ExportObject exportObject2)
   at System.Collections.Generic.ArraySortHelper`1.QuickSort[TValue](T[] keys, TValue[] values, Int32 left, Int32 right, IComparer`1 comparer)

The usual reason for this problem is that some sites in the site collection have features assigned which are not installed in the server farm. The main problem here is that you cannot easily identify which features are missing, as the missing features are not reported in the error message.

To overcome this problem I have written a tool which allows you to identify all features used in a site collection which are missing on the server: WssAnalyzeFeatures.

To resolve the problem you then have to install the identified missing features on the exporting server. In case this is not possible, you have to remove the features from the site collection or the affected sites. Usually this can be done using STSADM -o deactivatefeature, but it sometimes fails if the feature definition is not installed on the server. In this case you can use WssRemoveFeatureFromSite.
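
The core check such a tool performs is simple: a feature that is activated but whose definition is not installed returns null from SPFeature.Definition. Here is a simplified sketch of that check (not the actual tool source; the site URL is a placeholder):

using System;
using Microsoft.SharePoint;

namespace ContentDeploymentSamples
{
    class FindMissingFeatures
    {
        static void Main(string[] args)
        {
            // Placeholder URL for the site collection to analyze
            using (SPSite site = new SPSite("http://source/sites/site1"))
            {
                // SPFeature.Definition is null if the feature is activated
                // but its definition is not installed in the farm
                foreach (SPFeature feature in site.Features)
                {
                    if (feature.Definition == null)
                        Console.WriteLine("Missing site collection scoped feature: {0}", feature.DefinitionId);
                }

                foreach (SPWeb web in site.AllWebs)
                {
                    foreach (SPFeature feature in web.Features)
                    {
                        if (feature.Definition == null)
                            Console.WriteLine("Missing web scoped feature in {0}: {1}", web.Url, feature.DefinitionId);
                    }
                    web.Dispose();
                }
            }
        }
    }
}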

Problem 10: SQL Deadlocks during STSADM import operations

In the last couple of weeks I have seen several cases where STSADM import operations failed with random exceptions. In other words: when performing the same import into an empty site collection multiple times, the import operation failed at a different point each time. Checking the ULS logs showed errors like the following:

10/20/2008 12:47:26.59 STSADM.EXE (0x78BC)                         0x4FF4 Windows SharePoint Services            Database                        6f8g      Unexpected       Unexpected query execution failure, error code 1205. Additional error information from SQL Server is included below. "Transaction (Process ID 123) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." Query text (if available): "…"         

Such behavior is usually an indication that asynchronous actions are interacting with the import operation and causing a deadlock in SQL Server.

Another interesting tidbit is that this only affects STSADM import, but not content deployment.

Isolating these issues is not easy, as SQL Server immediately kills one of the deadlocking queries, and the client processes (in our case STSADM.EXE and potentially a second process) continue to run until they finally fail due to the fact that the SQL query did not succeed.

In a test environment it is possible to isolate the issue by attaching a debugger to the SQL Server and setting a breakpoint right before the deadlock victim is killed. That causes the deadlock to persist and allows you to take memory dumps of STSADM and the other involved processes.

In the cases I have worked on, the problem was always caused by a custom event receiver firing when the items were imported. That also explains why only STSADM -o import is affected but not content deployment: with STSADM -o import the After event handlers fire, while content deployment suppresses After events through the import setting SPImportSettings.SuppressAfterEvents.

Unfortunately STSADM does not provide an option to suppress the After events. So there are two possible ways to resolve the issue:

  1. Disable the event receivers in the features on the target machine when performing the import
  2. Create a custom import application which uses the content deployment and migration API, as discussed in Part 3 of my Deep Dive into the content deployment and migration API series, and sets the SuppressAfterEvents property to true (see the sketch below)
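
A minimal sketch of option 2 (URL and package path are placeholders):

using System;
using Microsoft.SharePoint.Deployment;

namespace ContentDeploymentSamples
{
    class ImportWithoutAfterEvents
    {
        static void Main(string[] args)
        {
            SPImportSettings settings = new SPImportSettings();
            settings.SiteUrl = "http://destination/sites/site1"; // placeholder
            settings.FileLocation = @"C:\ExportPackage";         // placeholder
            settings.FileCompression = false;

            // This is the setting content deployment uses and STSADM -o import
            // does not: After events are not fired during the import
            settings.SuppressAfterEvents = true;

            using (SPImport import = new SPImport(settings))
            {
                import.Run();
            }
        }
    }
}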
     

Problem 11: Insufficient disk space on the target server

Another issue which I have seen a couple of times in the past when using STSADM -o export or content deployment is a failure during the export phase with the following error message:

Failed to create package file
Unknown compression type in a cabinet folder

In all cases I have seen, this problem was caused by insufficient disk space. Please monitor the disk space during the export/compression phase and ensure that sufficient disk space is available to perform this operation.

Problem 12: Incremental deployment fails with "The changeToken refers to a time before the start of the current change log."

I have seen this problem a couple of times in the past: sometimes, when running incremental content deployment, the deployment job fails with the following error message:

The changeToken refers to a time before the start of the current change log.

To make a long story short: to resolve the problem you should do a full deployment into an empty database. It is not recommended to run a full deployment into the previously used database, as full deployment does not perform delete operations for items that have been deleted in the source database but still exist in the destination database. It also will not work reliably if items have been moved to other places in the source database after the last successful incremental deployment.

Why does this problem occur? It can happen mainly for four different reasons:

A) The timespan since the last incremental deployment job is too long

MOSS stores the change token of the last successful incremental deployment in the properties of the incremental content deployment job. When a new incremental deployment is run, it compares the change token in these settings with the entries in the change log.

By default the change log is configured to keep changes for 15 days. If the timespan between two incremental deployment jobs exceeds this, the change log does not contain entries from before the change token, and incremental deployment will not start, to prevent deploying only part of the required content.

A solution for this is to increase the timespan for which the change log is preserved. This can be done using the following steps:

Central Administration – Application Management – Web Application General Settings – (choose the desired website) – Change Log
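
Alternatively, the retention period can be adjusted through the object model. Here is a small sketch, assuming a placeholder web application URL – point it at the web application hosting the source site collection:

using System;
using Microsoft.SharePoint.Administration;

namespace ContentDeploymentSamples
{
    class ExtendChangeLogRetention
    {
        static void Main(string[] args)
        {
            // Placeholder URL: the web application hosting the source site collection
            SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://source"));

            // Keep change log entries for 60 days instead of the default 15
            webApp.ChangeLogRetentionPeriod = TimeSpan.FromDays(60);
            webApp.Update();
        }
    }
}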

B) The source database has been overwritten with a backup

When a database is restored through STSADM -o restore, or using SQL backup and STSADM -o addcontentdb, the change log is cleared.

Incremental deployment will not work in this case and you have to do a full deployment to synchronize the content with the destination database.

C) No changes have happened on the source server for a long time

As mentioned in A), the change log is used to determine the items that need to be deployed. By default, changes are preserved in the change log for 15 days. So if no changes have been made for 15 days, the change log will become empty.

Two possible solutions exist: 

  1. Increase the timespan for which the change log is preserved. This can be done in Central Administration as described in A) above, or via the object model sketch shown there.
  2. Ensure that at least one item is modified within the configured timespan.

D) The source database has been merged with a different site collection

After merging two content databases using STSADM -o mergecontentdbs, the EventCache table will be empty. This is similar to problem B) listed above.

Incremental deployment will not work in this case and you have to do a full deployment to synchronize the content with the destination database.

  
Problem 13: Timeouts during content deployment

During the import, the exporting server that hosts the content deployment job definition polls the destination server at specific intervals to get the status of the import operation. The import goes through different phases, like extracting the package and then performing the actual import. The remote timeout defines the maximum time that is allowed between changes to the status report. During the actual import the report changes quite often, but there are some phases – e.g. the decompression – which can take some time. If this time exceeds the remote timeout value, a timeout is reported back to the exporting machine. You might have seen this status in the content deployment jobs. Such a timeout does not mean that the import has failed! It just means that no status update arrived within the configured time. The default here is 600 seconds, which means 10 minutes.

To see whether the deployment has succeeded or not, you need to hover over the job that reported the timeout and choose "Check Status" in the drop-down box. This option will only show up if the job is in the status "Timed out". The Check Status option allows you to see whether the job succeeded or failed.

If you have a large content deployment running, with several GB of content, it can happen that the decompression phase takes longer than 600 seconds – and then you will see a timeout. Adjusting this parameter allows you to avoid such timeout messages.

Here is some sample code which configures the RemoteTimeout to one hour. Run it on the exporting farm if required:

using System;
using Microsoft.SharePoint.Publishing.Administration;

namespace StefanG.Tools
{
    class AdjustContentDeploymentSettings
    {
        static void Main(string[] args)
        {
            // Read the farm-wide content deployment configuration
            ContentDeploymentConfiguration config = ContentDeploymentConfiguration.GetInstance();

            // RemoteTimeout is specified in seconds: 3600 = 1 hour
            config.RemoteTimeout = 3600;
            config.Update();
        }
    }
}
  
   

Problem 14: Columns with same name defined on sub site and root site

Consider the following scenario: you create a new column named "MyCustomColumn" in a sub site of a site collection. Later you create a column with the same name in the root site of the site collection. If you now check the sub site again, you will notice that two columns with the same name are listed.

Such duplicate column names cannot be handled by the content deployment import. You will get an error message similar to this:

A duplicate name "MyCustomField" was found. 
   at Microsoft.SharePoint.SPFieldCollection.AddFieldToWeb(String strXml, Boolean checkDisplayName) 
   at Microsoft.SharePoint.SPFieldCollection.AddFieldAsXmlInternal(String schemaXml, Boolean addToDefaultView, SPAddFieldOptions op) 
   at Microsoft.SharePoint.Deployment.FieldTemplateSerializer.CreateField(SPWeb web, SerializationInfoHelper infoHelper) 
   at Microsoft.SharePoint.Deployment.FieldTemplateSerializer.SetObjectData(Object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector) 
   at Microsoft.SharePoint.Deployment.XmlFormatter.ParseObject(Type objectType, Boolean isChildObject) 
   at Microsoft.SharePoint.Deployment.XmlFormatter.DeserializeObject(Type objectType, Boolean isChildObject, DeploymentObject envelope) 
   at Microsoft.SharePoint.Deployment.XmlFormatter.Deserialize(Stream serializationStream) 
   at Microsoft.SharePoint.Deployment.ObjectSerializer.Deserialize(Stream serializationStream) 
   at Microsoft.SharePoint.Deployment.ImportObjectManager.ProcessObject(XmlReader xmlReader) 
   at Microsoft.SharePoint.Deployment.SPImport.DeserializeObjects() 
   at Microsoft.SharePoint.Deployment.SPImport.Run()

To resolve this issue you need to ensure that the column names are unique – by deleting and recreating the column with a new name in either the sub site or the root site. Such a configuration is not a good idea anyway, as it will be hard to identify which of the columns is the one you would like to pick when creating a new content type or when adding a column to a list.
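
If you are unsure whether such duplicates exist, the following small sketch lists duplicate column names on a given web. The URLs are placeholders, and it assumes the duplicates show up among the web's available fields:

using System;
using System.Collections.Generic;
using Microsoft.SharePoint;

namespace ContentDeploymentSamples
{
    class FindDuplicateColumns
    {
        static void Main(string[] args)
        {
            // Placeholder URLs for the site collection and sub site
            using (SPSite site = new SPSite("http://source/sites/site1"))
            using (SPWeb web = site.OpenWeb("subweb"))
            {
                // AvailableFields contains the columns visible on the sub site,
                // including the columns inherited from the root site
                Dictionary<string, int> counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
                foreach (SPField field in web.AvailableFields)
                {
                    int count;
                    counts.TryGetValue(field.Title, out count);
                    counts[field.Title] = count + 1;
                }

                foreach (KeyValuePair<string, int> entry in counts)
                {
                    if (entry.Value > 1)
                        Console.WriteLine("Duplicate column name: {0} ({1} occurrences)", entry.Key, entry.Value);
                }
            }
        }
    }
}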
     

Problem 15: Conflicting content on source and target

Content deployment expects that the items it is trying to import either do not exist on the target or are older revisions of the items it is currently deploying. If it finds an item on the target with the same URL as an item that gets imported, it checks the GUID to verify that it is indeed the same item as the one being imported.

If the GUID is the same, the import will create a new revision (if versioning is enabled) or replace the item on the target site.

If the GUID is different, you will get an error similar to the following:

  • "The Web site address "/A" is already in use." or
  • "The specified name is already in use. A document cannot have the same name as another document or folder in this document library or folder."

To avoid the problem it is required to ensure that no content that also exists on the source is added on the target through, e.g., authoring activities. Best is to avoid changes on the target server altogether.

  
Problem 16: Maximum upload size on target web application smaller than files to be deployed 

Often the web application on the target is created with the default values – which means a maximum upload size of 50 MB. If the source site collection contains files which are bigger than this upload limit, the deployment of these files will fail with an error message similar to the following:

The form submission cannot be processed because it exceeded the maximum length allowed by the Web administrator. Please resubmit the form with less data.

The content deployment job itself will still report success! So be aware that you need to review the content deployment logs even if the status of the job is Succeeded.

To avoid this problem, ensure that the maximum upload size on the target is at least as big as the largest item in the source site collection.
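
The upload limit can be raised through Central Administration (Web Application General Settings – Maximum Upload Size) or through the object model. A small sketch, assuming a placeholder URL for the target web application:

using System;
using Microsoft.SharePoint.Administration;

namespace ContentDeploymentSamples
{
    class RaiseUploadLimit
    {
        static void Main(string[] args)
        {
            // Placeholder URL: the target web application
            SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://destination"));

            // MaximumFileSize is specified in MB; make sure it covers
            // the largest file in the source site collection
            webApp.MaximumFileSize = 250;
            webApp.Update();
        }
    }
}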
 

Problem 17: Document in source library is marked as virus infected

I have seen a couple of cases of this. Often the document is not really virus infected, but something went wrong with the virus scanning during the upload. A common reason is that the signature files were in the process of being updated while the document was uploaded. This can cause the uploaded files to be incorrectly marked as virus infected.

Regardless of whether the file is really infected or not: items marked as virus infected cannot be exported!

Unfortunately the error message that occurs when items marked as virus infected are exported is not self-explanatory:

Exception from HRESULT: 0x80041050
at Microsoft.SharePoint.Library.SPRequest.GetFileAsByteArray(String bstrUrl, String bstrWebRelativeUrl, Boolean bHonorLevel, Byte iLevel, OpenBinaryFlags grfob)
at Microsoft.SharePoint.Deployment.FileSerializer.SaveFile(SerializationInfo info, ExportObjectManager objectManager, ExportDataFileManager fileManager, SPExportSettings settings, SPWeb parentWeb, Boolean isGhosted, String setupPath, String setupPathUser, Byte setupPathVersion, String webRelativeFileUrl, Int32 size, Byte level)

The error code 0x80041050 means that the item is virus infected.

To resolve the problem, either run another full scan to remove the incorrect virus-infection flags, or delete the infected items.

  

Requirements for a successful content deployment

Be aware that this article cannot provide a complete list of all requirements; I will update it from time to time if required. Below I list all the requirements I have seen in the past which – when missed – can cause content deployment jobs to fail.
 

1) The servers configured as export and import server need to host an instance of the Central Administration website

When configuring the "Content Deployment Settings" for your farm, you have the chance to select different servers in your farm to act as export and import servers for content deployment. This allows you to offload this task to a dedicated server and reduce the load on your web front-end servers.

The description of this configuration option looks as follows (shown here for the import server):

Import Server
Specify the server you want to receive incoming content deployment jobs. This server must have enough available disk space to store the incoming jobs, and it must be running an administration Web application for the farm.

So this requirement is actually listed – but a little bit hidden in the second half of a sentence.

If you configure a server that does not host the Central Administration website, you will not get a warning here!

Impact: The effect you will see is that the content deployment export or import phase will not start.

How to resolve:
1) Provision the Central Administration website on the desired server
2) Change the configuration of the export and/or import server
 

2) Ensure that sufficient disk space is available

This sounds like a simple prerequisite, but it isn't. Content deployment uses different places to store the extracted and compressed files. The compressed files are stored in the location you can configure on the "Content Deployment Settings" page in the "Temporary Files" row.

But before export creates the compressed files, it first exports everything into a temporary directory. This directory is placed inside the directory configured in the TMP environment variable of the user the Windows SharePoint Services Timer service (OWSTIMER.EXE) runs as (usually referred to as the farm credentials). By default this variable has the following value: "%USERPROFILE%\Local Settings\Temp", which is usually on your system drive.

Impact: By default, MOSS content deployment requires the disk space for the uncompressed exported files on your system drive!

How to resolve: The easiest way to resolve this is to log on to the machine with the farm credentials and adjust the TMP variable to point to a different location. Afterwards you need to restart the OWSTIMER service.
 

3) Use an empty site collection as the destination of your content deployment job

As already discussed in Part 5, content deployment will fail if the destination database contains conflicting content. To avoid this, it is required that the initial deployment is done into a site collection that has been created without assigning a template. (The template would actually be applied to the implicitly created root site of the site collection, not the site collection itself, as only sites can have templates, not site collections.)

Be aware that the only way to create an empty site collection is to use the following STSADM command:

STSADM.EXE -o createsite -url <url-to-site-collection> -ownerlogin domain\user -owneremail <email-address>

Using the "Blank Site" template will NOT create an empty site collection! It will actually create a site collection with content. You can see the difference if you create a site collection using both methods and then inspect the content of the created sites using SharePoint designer.

I personally recommed to always use the STSADM command with the syntax above to ensure that you really have an empty site collection as destination.

Impact: If the site collection has been created using a different method or already contains data the content deployment job will fail.

How to resolve: Deploy into an empty site collection
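
If you prefer the object model over STSADM, an empty site collection can be created the same way. A small sketch with placeholder URL and owner – omitting the template parameter is what keeps the root site template-free:

using System;
using Microsoft.SharePoint.Administration;

namespace ContentDeploymentSamples
{
    class CreateEmptySiteCollection
    {
        static void Main(string[] args)
        {
            // Placeholder URL and owner information
            SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://destination"));

            // No template is passed, so the root site stays template-free:
            // the equivalent of the STSADM command shown above
            webApp.Sites.Add("/sites/site1", "domain\\user", "user@example.com");
        }
    }
}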
  

4) Install all required features for your site on the destination server

If your site requires custom features ensure that the features are installed on the destination server before running content deployment.

Impact: If they are missing, the import phase will fail.

How to identify this: I have written the tool WssAnalyzeFeatures which allows you to identify such problems.

How to resolve: Copy the features to the destination server and install them using STSADM -o installfeature. 

5) Do not activate custom features for your site collection on the destination server manually

You should not activate custom features on the destination server if the activation creates content in the destination database, as this can cause conflicts, as outlined in "Problem 1" of Part 5 of this article series. Instead, run content deployment and let the import process activate the features on your destination server: this ensures that all items get created using the same IDs as on the source server, which is otherwise not guaranteed.

Impact: If the features have been activated, the content deployment import can fail with error messages similar to those in "Problem 1" of Part 5.

How to resolve: Deactivate the feature in the destination site and ensure that all items created by the feature are removed. Alternatively do a full deployment into an empty site collection instead.
 

6) Do not expect that incremental deployment will deactivate features in the destination server site collection

The content deployment and migration API was not designed to deactivate features on the destination server. If a feature needs to be deactivated on the destination server you need to manually perform this deactivation.

7) Ensure that all feature definitions of features activated on the site collection exist on the source server

This actually generates a high volume of calls to Microsoft Support Services: during development a feature becomes obsolete and is removed, or is replaced with a different version with a different GUID, but on some sites or site collections the old feature is still activated.

Impact: The affected sites can no longer be exported. You will get the errors listed in this article.

How to identify this: I have written the tool WssAnalyzeFeatures which allows you to identify such problems.

How to resolve: Either copy the missing feature files to the required location, or uninstall the feature using STSADM -o deactivatefeature/uninstallfeature. In case STSADM -o deactivatefeature fails to deactivate the feature, you can use my tool WssRemoveFeatureFromSite.
 

8) Configure the retention period of the change log to be long enough for incremental deployment

See here for details.
 

9) Ensure that content deployment jobs do not run in parallel

The current implementation of the content deployment and migration API does not allow parallel execution. There are plans to change this behavior in the near future, but as it stands you need to ensure that only one deployment is running at a time.

So if you have multiple deployment paths and jobs for the same site collection/content database, you need to ensure that they are scheduled in a way that they don't overlap.

But this is not the only place to look! Sometimes it is nearly impossible to prevent parallel execution: the content deployment and migration API is not only used to deploy content between different web farms – the same API is used by the copy/move implementation in the site manager and by the variations feature.

In other words: you can experience problems with content deployment if an author copies a page at the same time, or creates a new page in the source variation label which is then replicated to the destination. And vice versa: a copy operation can fail because a Quick Deploy job was running at the same time.

Impact: parallel execution of deployment jobs can lead to failing content deployments.

How to resolve: you need to restart the failed deployment job.

  
10) Ensure that the patch level on source and destination farm is identical

Content deployment is only supported if the WSS and MOSS patch level is identical on source and destination. Some hotfixes have changed the schema of the export packages slightly which can cause deployments between different patch levels to fail.

Impact: content deployment and STSADM -o export/import can fail if the patch level is not identical

How to resolve: ensure that both farms are on the same WSS and MOSS patch level

  
11) Required Language Packs need to be available

Language Packs used in the source site collection have to be installed on the target farm as well. 

Impact: if a required language is missing content deployment will fail

How to resolve: install the required language packs on the target farm 


12) Avoid authoring activities on the target farm

Authoring activities on the target can cause conflicting content between source and target – e.g. if an author creates a page named "A" in site "B" on the target and the same is done on the source server, you will have conflicting content.

Impact: If conflicting content exists on the target server, the import process will fail with an exception.

How to resolve: Delete the conflicting item on the target server.
 

13) Avoid authoring activities on the source farm while content deployment is running

Authoring activities on the source farm can modify collections while content deployment is trying to export them – e.g. if a list is being exported and a new list item is added to the list in the middle of the export, content deployment can fail to export the list. So if possible you should ensure that content deployment jobs only run during times when no authoring activities are expected.

Impact: Content deployment export can fail if authoring activities happen in parallel.

How to resolve: Rerun the content deployment job. You can also configure an automatic retry to cover such problems – see the retry options mentioned in the following article: Pimp my Content Deployment Job.
 

Optimize Content Deployment

For this specific topic, please have a look at the following article: Pimp my Content Deployment Job

Comments


  2. Thanks! This is timely and important info for all who are involved with WCM in SharePoint!

  4. Hi Stefan,

    great post!

    Just one question concerning problem 3:

    if I modify BlueBand.master (the one provisioned with the out of the box site definition) to fit my needs and then I make a full deploy to an empty site, what happens? Will I have my modified BlueBand.master or the original one?

  5. Hi Francesco,

    I haven’t tested this. The reason is that modifying the items coming with the site definition is highly discouraged. You should instead create a copy of the item and then modify the copy. In general it can cause problems with content deployment when items coming from the site definition are modified.

    Cheers,

    Stefan

  7. Stefan, thanx for this great compilation.

    I have two things to add that may be worth mentioning. First of all, people tend to forget to change the output location of the content deployment files. By default that is C:\windows\temp\contentdeployment. But if for some reason that fails, the content deployment files will appear under %tmp%, which is often %userprofile%\Local Settings\Temp\contentdeployment. If that is the case, either change the content deployment output path through Central Admin or change the environment path variable of the export and import servers.

    Second thing is that although you should pre-create the web application and create the site collection through STSADM without applying the template, people forget to deploy solutions to the web application. Just a minor thing.

    The third thing is about the message ‘Web Part or Web Form Control type could not be found, or is not registered as safe. The web part will still be exported’. In that case check if you have a Content By Query web part which has an empty List reference. You can check that by exporting the web part properties and checking the list property of the DWP file.

  8. Hi Serve,

    with content deployment both locations are used: the extracted files end up in the %tmp% directory, and the compressed packages end up in \windows\temp. So you always have both locations.

    Regarding solution deployment: usually you should not do this before the deployment is done. Reason: solutions usually activate features on the site collection which can cause conflicts.

    Cheers,

    Stefan

  9. Hi Stefan,

    Could you explain more about your last comment, "Regarding solution deployment: usually you should not do this before the deployment is done. Reason: solutions usually activate features on the site collection which can cause conflicts."

    I understand the reason, but are you saying that deploying solutions should be done after content deployment? I thought they needed to be done before… what if they contain a feature that isn’t installed yet, won’t this cause content deployment to fail, or am I missing something here?

  10. Hi Brent,

    the solution needs to be available on the target server – and modules have to be in the GAC and feature definitions need to be available in the features folder.

    But you are not allowed to deploy the solution to the target site if this deployment activates the features on the target or adds items to the target.

    In some cases it might be required to split the solution into two parts: the actual installation on the server and the feature activation part.

    Cheers,

    Stefan

  11. Hello Stefan,

    concerning the problem case 12 A:

    Would there be a downside to just choose "Never delete the changelog"?

    Would the logs grow uncontrollably? If yes: are there ways to access/delete them manually, or automatically delete the oldest?

    I’m at a position right now where content deployments happen *very* sporadically. Usually there are some weeks in between, but it can also be months.

  12. Hi Stefan,

    RE: Problem 13: Timeouts during content deployment.

    I am getting the timeout and when doing a Check Status, see that the job has failed.

    Drilling into this shows the following.

    "The connection to the destination server was lost after the remote import job was started. The import job might still succeed. Content deployment will continue to attempt to reestablish a connection with the destination server."

    Nothing is being deployed to the target server.

    Does the error description refer to another "timeout" issue unrelated to the polling for the report? The job appears to be "running" for about 10 min 30 sec but the following suggests otherwise.

    Objects Exported 1387

    Objects Imported 0

    Content Size (in MB) 7.86

    Deployment Time 00:00:25

    Start Time 2/23/2009 5:14 PM

    End Time 2/23/2009 5:15 PM

    Any help would be appreciated.

  13. Hi John,

    this comment is for the timeout you resolved with "check status".

    Usually when you look at the report afterwards you should see the real problem here – if it can be determined.

    Cheers,

    Stefan

  14. Hi Helmut,

    they would grow uncontrollably. You cannot delete manually or delete the oldest.

    Cheers,

    Stefan

  15. Hi Stefan, further to my timeout enquiry above, here are 2 Application errors from the Event Viewer on the Source Server.

    The content deployment job ‘XXXXX’ lost its connection with the destination server after the remote import job was started. The import job might still succeed. Content deployment will continue to attempt to reestablish a connection with the destination server.

    Failed to transfer files to destination server for Content Deployment job ‘XXXXX’. Exception was: ‘(null)’

    Any ideas?

    Thanks

    john

  16. Hi John,

    please see the "RemoteTimeout" section in the above referenced "Pimp my content deployment job" article.

    Cheers,

    Stefan

  19. Hi Stefan,

    I checked out your reference to the remote timeout, but from what I can see that is not related, as it just controls whether or not the status appears as timed out when it actually has failed.

    I did eventually reboot both the source and destination servers and with no other changes the problem has resolved itself and now everything works…it remains a mystery.

    Thanks for a great resource btw.

  20. Hi John,

    that is not correct. It controls how long it waits for a status update. Usually the import has not failed when the "timed out" status shows up.

    Cheers,

    Stefan

  21. Hi Stefan, for us problem 10 (the deadlock issue) happened when using content deployment as well.

    What I’ve observed is that it always happens when a content deployment job fails and we try to execute another one within a short span of time. I’m debugging to get more details on that now.

  22. Hi Stephan,

    I am attempting to deploy content from a large (7 GB) site collection (with many sub-webs) using the Content Deployment configs thru Central Admin.  I have been having problems with incremental deployments, so am trying a few variations.  

    I have a question that I think is related to Problems 5 & 6 you have outlined.

    I tried last night to deploy just the root web to an empty site collection and received a new error. I have not done a full deployment of the entire site collection yet; I just tried to deploy the "/" site but not the child webs. The error I received is that an SPFolder object could not be imported because its parent web does not exist. The parent web listed in the manifest.xml is a subsite (/sites/cities/london). This SPFolder appears to be either a document library using a content type we created, or the content type itself (the content type was created through the admin web page, not through a feature). The content type was created at the top site. I’m confused by the reference to the subweb.

    I think I have two questions: Am I allowed to break up my deployment into multiple jobs, so that I deploy just the root web first and then create separate jobs to deploy the child webs? And… is there a way to change the parent of this SPFolder object in the source site collection so that the parent is actually in the root web?

    Thank you!!

  23. Hi Kai,

    in general that is allowed. I assume that you have a dependency here between the root web and the sub web – e.g. your root web references an item in a library in the sub web.

    Then it will not work, as all dependent items are always exported as well. But if the web these items are in does not exist, because you did not export it, the import will fail.

    So ensure that you set the granularity in a way that you always pick all dependent objects as well.

    Cheers,

    Stefan

  24. Hi Stefan,

    Thank you for the post!

    I am hoping you might be able to help us. We seem to be having problems with features being activated on our destination servers when running incremental deployment. Just recently we activated the Reporting OOB feature on the source farm and assumed it would be activated on the destination farms during the next incremental deployment. Unfortunately, that didn’t happen. Are you aware of a limitation with features and incremental deployments — do features only get activated on the destination servers during full deployments? Or do you know of an issue specific to the Reporting feature and content deployment?

    Thanks and take care,

    Terri

  25. Hi Terri,

    site collection and site scoped features should automatically be activated. Server and farm scoped features will not be activated.

    Cheers,

    Stefan

  26. Hi Stefan,

    Do you have any advice on the following?

    We have a scheduled incremental CD job that runs once/day at off-peak hours.

    – The last scheduled run aborted with Status of this run: failed and 2 error messages.

    – The export completed successfully with a package containing 56259 objects and a content size of 65.04 MB.

    – Import failed reporting nothing but the following 2 error messages:

    4/1/2009 3:13 AM

    Value does not fall within the expected range.
    at Microsoft.SharePoint.SPWeb.GetWebRelativeUrlFromUrl(String strUrl, Boolean includeQueryString, Boolean canonicalizeUrl)
    at Microsoft.SharePoint.SPWeb.GetWebRelativeUrlFromUrl(String strUrl)
    at Microsoft.SharePoint.Deployment.ImportObjectManager.FixLinkInFile(String fileUrl, Guid webId, String oldTargetUrl, String newTargetUrl)
    at Microsoft.SharePoint.Deployment.ImportObjectManager.FixBrokenLinks()
    at Microsoft.SharePoint.Deployment.SPImport.DeserializeObjects()
    at Microsoft.SharePoint.Deployment.SPImport.Run()

    4/1/2009 3:13 AM

    Content deployment job ‘Remote import job for job with sourceID = ‘guid’ failed. The exception thrown was ‘System.ArgumentException’ : ‘Value does not fall within the expected range.’

    Would appreciate it if you have any ideas as to what goes wrong here,

    thanks and regards,

  27. Hi John,

    that needs to be investigated in more detail.

    Please open a support case for this.

    Cheers,

    Stefan

  28. Hi Stefan,

    We cannot get incremental deployments to run between our "authoring" and production web applications (these applications are on the same domain, in the same farm, same SSP) if we choose to not deploy user names. We are not deploying security information. If we choose to deploy user names, the job fails with a Save Conflict – Your changes conflict with those made concurrently by another user.

    My question is can we deploy user names and not deploy security information?  I am getting conflicting information about these settings.  Can’t we just deploy both usernames and security and set the target (production) application as read only?

    Thanks for your time.

  29. Hi Kyle,

    you should raise a support case to get this analyzed.

    Cheers,

    Stefan

  31. Hi Stefan,

    Thank you for your article.

    Concerning problem 14: there is a case where two individual sites in different web applications both have site columns named xyz with the same data types, and we want to copy one site to under the other one. It will fail because of the duplicate column names. These columns were created through the SharePoint UI. I know columns should be created by a feature if they are used in multiple sites, but I have a client in this situation.

    I have two questions:

    1) If I create a new site column from the content type settings page, it creates the column and also references it from the content type. If I then use this content type in lists in both sites, the import does not fail when I copy site1 to under site2, because each site column is referenced by a different content type. But if I create the site column from the Site Settings > Site Columns page and then go to the content type and add this column, the import fails. Why?

    2) If I create a site column from the Site Settings > Site Columns page and add it directly to a list, it creates a list column, and deleting the site column (for a successful copy operation) does not cause any problem for editing the list or inserting items. But if a content type is used in the list (site column created through the UI, then manually referenced from the content type), then in order to delete the custom site column I have to remove it from the content type first. When I remove it from the content type and delete it from the site, after the import operation I am no longer able to edit or insert the deleted custom site column in the list. My list is completely useless. What is the workaround for this issue?

    Thanks and Best Regards

    I

    (WSS 3.0 with Service Pack 2)

  32. Hi Riza,

    sorry I haven’t looked into this. Please open a support case with Microsoft to get this analyzed in more detail.

    Cheers,

    Stefan

  39. Hi Stefan,

    Your posts have helped me huge amounts in the past with content deployment jobs, but I have hit something that I can’t find any reference to anywhere.

    A content import job is failing with the following error message. Any pointers would be most appreciated!

    Cheers

    Gavin

    You cannot delete a hidden column.
    at Microsoft.SharePoint.Library.SPRequest.RemoveField(String bstrUrl, String bstrListName, String bstrFieldName)
    at Microsoft.SharePoint.SPFieldCollection.Delete(String strName)
    at Microsoft.SharePoint.SPField.Delete()
    at Microsoft.SharePoint.Deployment.ListSerializer.CreateOrUpdateField(SPList list, String fieldName, XmlNode fieldNode)
    at Microsoft.SharePoint.Deployment.ListSerializer.UpdateListFields(SPList list, Dictionary`2 listMetaData)
    at Microsoft.SharePoint.Deployment.ListSerializer.SetObjectData(Object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector)
    at Microsoft.SharePoint.Deployment.XmlFormatter.ParseObject(Type objectType, Boolean isChildObject)
    at Microsoft.SharePoint.Deployment.XmlFormatter.DeserializeObject(Type objectType, Boolean isChildObject, DeploymentObject envelope)
    at Microsoft.SharePoint.Deployment.XmlFormatter.Deserialize(Stream serializationStream)
    at Microsoft.SharePoint.Deployment.ObjectSerializer.Deserialize(Stream serializationStream)
    at Microsoft.SharePoint.Deployment.ImportObjectManager.ProcessObject(XmlReader xmlReader)
    at Microsoft.SharePoint.Deployment.SPImport.DeserializeObjects()
    at Microsoft.SharePoint.Deployment.SPImport.Run()

    Content deployment job ‘Remote import job for job with sourceID = ae8ad253-69df-4629-bae6-941645045228’ failed. The exception thrown was ‘Microsoft.SharePoint.SPException’ : ‘You cannot delete a hidden column.’

  40. Hi Gavin,

    it seems the schema for the list has changed in an unexpected way. The object model of SharePoint does not allow deleting a hidden field.
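
    For illustration, the constraint looks roughly like this in the object model (URL, list and column names are placeholders):

        using Microsoft.SharePoint;

        // Sketch: why a delete on a hidden field fails during import.
        using (SPSite site = new SPSite("http://target"))               // hypothetical URL
        using (SPWeb web = site.OpenWeb())
        {
            SPField field = web.Lists["Documents"].Fields["MyColumn"];  // hypothetical names
            if (field.Hidden && field.CanToggleHidden)
            {
                field.Hidden = false;   // Delete() on a hidden field throws, so unhide first
                field.Update();
            }
            field.Delete();
        }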

    It seems that someone has unhidden the field on the source system and then deleted it.

    Incremental deployment has caught the fact that the column was deleted and is now trying to deploy the delete action to the target. As this is a delete action, it does not deploy the previously changed state, namely that the field is no longer hidden.

    That sounds as if the issue is by design. If this is a frequent issue for you, I would suggest opening a support case with Microsoft to request a design change.

    Cheers,

    Stefan

  41. Thanks Stefan,

    I will debug the export cabs to figure out exactly which list this is and see if I can see what is happening.

    The only strange aspect of it is that we are deploying to a blank site collection! So there is no existing list to update; it must be that within the package some dependency is applying a modification to a list that has already been imported as part of the same package!?

    Cheers

    Gavin

  42. Hi Stefan,

    Just recently our content deployment in the production environment, from our source server (within our domain) to our WFE farm (in the DMZ), started to throw an error about the change token.

    Deployments had been running daily, so I don't see how they could have gotten out of sync.

    Thanks,

    Simon

  43. Hi Gavin,

    then it is a list that is part of the site definition. These lists are created when the site is provisioned during the import.

    If the list itself is then imported on top, you get the result you saw.

    As outlined above: don't change items that are part of the site definition.

    Cheers,

    Stefan

  44. Hi Simon,

    did someone restore the source DB from an older state, or use STSADM -o mergecontentdatabase?

    Or did the last update on the source happen a long time ago?

    Cheers,

    Stefan

  45. Thanks for the quick reply.

    The site has been live for 2 months with content deployment running every hour.

    There is no access to the WFE servers by the content editors and the initial deployment was done to a blank site some 4 months ago.

    Simon

  46. Hi Simon,

    has the SOURCE farm been updated regularly between the initial deployment and the time when the problem occurred?

    Cheers,

    Stefan

  47. Hi Stefan,

    The source farm, which consists of only one server, currently has people editing content for our internet-facing publishing site.

    Both the source server and the destination (two NLB'd servers) are on the same version, with no WSS/MOSS updates applied since SP1 and the December '08 Infrastructure Update. As regards Windows updates, as far as I know none have been applied to either source or destination, but I'll have to check on that as I have to request access to the production servers' file systems.

    Simon

Hi Stefan, thank you in advance for all the help you can give me. I am new to SharePoint, and I am currently facing the task of deploying the content of our development server to the production server for validation and later use. Searching the web I found Microsoft's guide for deploying sites. After following the steps as described there I still got the error below; I honestly have no idea what it could be and would greatly appreciate your help. Regards and thanks.

    Error

    Failed to compare two elements in the array. at System.Collections.Generic.ArraySortHelper`1.SwapIfGreaterWithItems[TValue](T[] keys, TValue[] values, IComparer`1 comparer, Int32 a, Int32 b) at System.Collections.Generic.ArraySortHelper`1.QuickSort[TValue](T[] keys, TValue[] values, Int32 left, Int32 right, IComparer`1 comparer) at System.Collections.Generic.ArraySortHelper`1.QuickSort[TValue](T[] keys, TValue[] values, Int32 left, Int32 right, IComparer`1 comparer) at System.Collections.Generic.ArraySortHelper`1.Sort[TValue](T[] keys, TValue[] values, Int32 index, Int32 length, IComparer`1 comparer) at System.Collections.Generic.ArraySortHelper`1.Sort(T[] items, Int32 index, Int32 length, IComparer`1 comparer) at System.Array.Sort[T](T[] array, Int32 index, Int32 length, IComparer`1 comparer) at System.Collections.Generic.List`1.Sort(Int32 index, Int32 count, IComparer`1 comparer) at System.Collections.Generic.List`1.Sort(IComparer`1 comparer) at Microsoft.SharePoint.Deployment.WebSerializer.GetDataFromObjectModel(Object obj, SerializationInfo info, StreamingContext context) at Microsoft.SharePoint.Deployment.DeploymentSerializationSurrogate.GetObjectData(Object obj, SerializationInfo info, StreamingContext context) at Microsoft.SharePoint.Deployment.XmlFormatter.SerializeObject(Object obj, ISerializationSurrogate surrogate, String elementName, Boolean bNeedEnvelope) at Microsoft.SharePoint.Deployment.XmlFormatter.Serialize(Stream serializationStream, Object topLevelObject) at Microsoft.SharePoint.Deployment.ObjectSerializer.Serialize(DeploymentObject deployObject, Stream serializationStream) at Microsoft.SharePoint.Deployment.SPExport.SerializeObjects() at Microsoft.SharePoint.Deployment.SPExport.Run() *** Inner exception: Object reference not set to an instance of an object. at Microsoft.SharePoint.SPFeature.EnsureProperties() at Microsoft.SharePoint.SPFeature.get_TimeActivated() at Microsoft.SharePoint.Deployment.WebSerializer.ExportFeatureComparer.System.Collections.Generic.IComparer<Microsoft.SharePoint.Deployment.ExportObject>.Compare(ExportObject exportObject1, ExportObject exportObject2) at System.Collections.Generic.ArraySortHelper`1.SwapIfGreaterWithItems[TValue](T[] keys, TValue[] values, IComparer`1 comparer, Int32 a, Int32 b)

  49. Hi Stefan,

    I’m facing this challenge where I have to provide a method to keep a custom list on source and target servers in sync.

    There is a custom list that is maintained on source and is content deployed to the target.

    Items in this custom list are changed/added on the target as well – therefore a very good candidate for the "problem 15" you mentioned in this article. 🙂

    Is there a way to exclude this custom list from content deployment? Then one solution could be developing a custom synchronization routine via a web service: the source would get the changed content via a web service (residing on the target) and update the source custom list accordingly.

    If we cannot exclude the custom list from content deployment… then is there a way to avoid content deployment failures/duplicate records on the target?

    I could still use the web service approach to get the content from the target. BUT then the sync has to happen immediately before the content deployment, otherwise there would be a greater risk of overwriting changes on the target.

    Any suggestion/ help to achieve a stable solution for this problem would be highly appreciated.

    Thanks!

  50. Hi Mridul,

    no, it is not possible to exclude a list from content deployment. If changes are done on source and target, you should do them in different lists. You could use custom web parts to show the content of two different lists as if it were one list.
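
    A rough sketch of such a web part's data access, merging two lists with a cross-list query (URL and list names are placeholders):

        using System.Data;
        using Microsoft.SharePoint;

        using (SPSite site = new SPSite("http://target"))                    // hypothetical URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList authoredOnTarget = web.Lists["Local Announcements"];      // hypothetical lists
            SPList deployedFromSource = web.Lists["Deployed Announcements"];

            // One query over both lists returns a single merged result set.
            SPSiteDataQuery query = new SPSiteDataQuery();
            query.Lists = "<Lists><List ID=\"" + authoredOnTarget.ID + "\" />" +
                          "<List ID=\"" + deployedFromSource.ID + "\" /></Lists>";
            query.ViewFields = "<FieldRef Name=\"Title\" />";
            query.Query = "<OrderBy><FieldRef Name=\"Title\" /></OrderBy>";
            DataTable merged = web.GetSiteData(query);
        }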

    Cheers,

    Stefan

  51. Got a small question:

    to get more complex scheduling (for example at 8 AM and 8 PM), can I create two different incremental jobs, or are the increments tracked per job rather than per path?

  52. Thanks for the great post and FYI we're seeing Problem 10 randomly with Content Deployments…

    That is, SQL deadlock errors during the import phase of a Content Deployment.

    We have it running every 15 minutes and might get one run a day that fails due to a SQL deadlock… go figure…

  53. Hi Craig,

    you might want to consider an upgrade to SharePoint 2010. In SP2010 the WSS layer has been hardened against deadlocks: they are detected, and a retry is done automatically.

    As content deployment uses WSS deserialization during import, this means that content deployment should be much more stable.

    Cheers,

    Stefan

  54. I have enabled anonymous access on our internet-facing site. After a content deployment job runs, anonymous access is turned off automatically on the target site, even though the job status is "succeeded": the anonymous access setting is reset to "nothing". This happens only when more than 500 objects are exported. I would appreciate your help.

  55. Please verify on the latest patch level (SP2 + August 2010 CU). If you still see the issue, please open a support case with Microsoft, as we will have to analyze the issue in more detail.
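
    As a stopgap, re-applying the setting after each job run might work around it; a minimal sketch (URL is a placeholder, assuming web-level anonymous access):

        using Microsoft.SharePoint;

        // Re-enable anonymous access on the target site after a deployment run.
        using (SPSite site = new SPSite("http://www.contoso.com"))   // hypothetical URL
        using (SPWeb web = site.RootWeb)
        {
            web.AnonymousState = SPWeb.WebAnonymousState.On;   // anonymous access for the entire web
            web.Update();
        }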

  56. Hi Stefan,

    I am getting the following error while deploying content…

    Content deployment job 'Remote import job for job with sourceID = bacadfb2-b0e2-4309-9ccb-7dfe6995a37d': Import in progress. The exception thrown was 'System.Xml.Schema.XmlSchemaValidationException' : 'The element 'Field' in namespace 'urn:deployment-manifest-schema' cannot contain text. List of possible elements expected: any element in namespace '##any'.'

    Any pointers, please?

    Thanks,

    Sarav

  57. Hi Stefan,

    I am using the Content Deployment API to export/import sites. I am repeatedly getting the errors "Failed to create package file" and "Cabinets in a set do not have the same RESERVE sizes". You suggested that this happens due to insufficient disk space, but my disk had enough space to store the resulting package file at the time of the export.

    I am exporting a site with a total size of nearly 12 GB. I can migrate this site if I set the compression property to false, but I need it as ".cmp".
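
    For reference, I am driving the export with settings along these lines (paths and URL are placeholders):

        using Microsoft.SharePoint.Deployment;

        // Export with cabinet compression enabled, producing a .cmp file.
        SPExportSettings settings = new SPExportSettings();
        settings.SiteUrl = "http://source/sites/pub";   // hypothetical URL
        settings.ExportMethod = SPExportMethodType.ExportAll;
        settings.FileLocation = @"e:\export";           // hypothetical path
        settings.BaseFileName = "site.cmp";
        settings.FileCompression = true;                // false avoids the error but emits loose files
        new SPExport(settings).Run();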

    Any ideas?

    Thanks,

    Gore…

  58. Hi Gore,

    you need to check the free space on the drives defined by your TMP and TEMP variables, and on the one where the CMP file will reside.
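
    A quick way to check both; a small sketch (the export path below is a placeholder):

        using System;
        using System.IO;

        // Report free space on the drive backing %TMP% and on the export target drive.
        string tmp = Environment.GetEnvironmentVariable("TMP") ?? Path.GetTempPath();
        foreach (string path in new[] { tmp, @"e:\export" })   // hypothetical export path
        {
            DriveInfo drive = new DriveInfo(Path.GetPathRoot(path));
            Console.WriteLine("{0}: {1:N0} bytes free", drive.Name, drive.AvailableFreeSpace);
        }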

    Cheers,

    Stefan

  59. Hey Stefan,

    well, as it happens, last night we migrated to SP2010, so here's hoping we don't see the deadlocks anymore…

    Later'ish

    Craig

  60. Great post Stefan, loved reading through all of it; it has helped me with my content deployment.  I've been using content deployment in SharePoint 2007 successfully for the past 5 months.  However, recently we uploaded a very large file, greater than the allowed 98 MB.  After this, and not knowing we couldn't deploy files larger than 98 MB, we had issues with content deployment.  In that time we also deleted the content deployment path we had been using all along, and now we need to create it again.  My question is: can we successfully deploy content between the two sites again?  Or is it now cut off, never to return?  Is content deployment smart enough to realize what has changed on the site?  We are only using 'Deploy only new, changed, or deleted content'…

    Thanks

  61. Hi Ryan,

    Content Deployment does not have a file size limitation. I assume there was a limitation on the http upload to the target server.

    Content Deployment cannot recover if the path has been deleted.

    The only fully supported way forward is to delete the target site collection and run a full deployment into an empty site collection.

    Cheers,

    Stefan

  62. Hi Stefan,

    is the problem described in "9) Ensure that content deployment jobs do not run in parallel" still an issue, or has there been a resolution for it in one of the service packs or cumulative updates?

    Thanks,

    Jörg

  63. Hi Jörg,

    For deployments from one source site collection to different target site collections, the issue is resolved in SharePoint 2010 when using the snapshot functionality.

    For deployments to the same target site collection there is no solution other than not running the deployments in parallel.

    Cheers,

    Stefan

  64. Hi Stefan

    We have pages with custom web parts on the source site. These pages are pushed to the target site using a content deployment job (CDJ). Sometimes the order of the web parts on the pages changes on the target site, which also breaks the web part connections we have established.

    This does not happen every time, only sometimes, but it makes our solution unreliable.

    Any suggestions on what could be the root cause of the web part ordering getting disturbed during the CDJ?

    Thanks

    Anshuman

  65. Hi Anshuman,

    do you have the latest CUs installed?

    I ask because such a problem was fixed in the August 2009 CU, roughly 2 years ago.

    Cheers,

    Stefan

  66. Hi Stefan

    I have a Project Server 2007 PWA site collection with multiple sub-sites, and I am trying to export the PWA site to my test environment. I am able to export the site, all 45 GB of it, with no errors, but I've run into several problems when trying to import. I have now decided to export only the parent site first and then export the workspaces one by one. Therefore my question is: is it possible to export only the parent site?

    I am using the following command to export:

    stsadm -o export -url http://myserver.com/PWA -filename e:\PWA -includeusersecurity -nofilecompression -versions 2

    Unfortunately when I run the above command, it picks up all the sub-sites and workspaces under PWA as well.
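
    Looking at the Content Deployment API, there seems to be a per-object ExcludeChildren flag that might do this; a rough, untested sketch (the export path is a placeholder):

        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Deployment;

        // Export only the PWA root web, skipping sub-sites and workspaces.
        SPExportSettings settings = new SPExportSettings();
        settings.SiteUrl = "http://myserver.com/PWA";
        settings.FileLocation = @"e:\export";      // hypothetical path
        settings.BaseFileName = "PWA";
        settings.FileCompression = false;

        using (SPSite site = new SPSite("http://myserver.com/PWA"))
        using (SPWeb web = site.OpenWeb())
        {
            SPExportObject root = new SPExportObject();
            root.Id = web.ID;
            root.Type = SPDeploymentObjectType.Web;
            root.ExcludeChildren = true;           // do not descend into sub-webs
            settings.ExportObjects.Add(root);
        }

        new SPExport(settings).Run();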

    Thank you for any recommendations.

    Regards,

    Derek

  67. Hi Stefan,

    Thanks a lot for the detailed explanation of content deployment.

    However, I have one question: how can we delete unused/orphaned pages from the destination server?

    Thanks,

    Ravi

  68. hi,

    I'm trying to deploy a subfolder (source library -> Folder -> subFolder) to a destination which already has a folder with the same name, but the deployment succeeds by creating (destination library -> subFolder), i.e. without the parent folder. Can you suggest why it picks that target and where I can fix this?

  69. Hi Vijay,

    are you using the content deployment and migration API directly?

    Cheers,

    Stefan

  70. Hi Stefan, is there a way to overwrite an item in the destination with the one from the source even though they have different GUIDs? Users delete items from the source which still exist on the destination, and we then get the "file already exists on destination" error, which is becoming a show stopper for using this content deployment feature.

    Thanks

  71. Hi Sally,

    no, this is not possible.

    You have to manually remove the conflicting item and redo the deployment.

    Cheers,

    Stefan

  72. Hi Stefan,

    Can copying a site from one variation to another using Manage Content and Structure on the authoring server cause GUID conflicts and subsequently affect content deployment to the destination server? E.g. copying marketing from http://server/sites/abc/en-us/marketing to http://server/sites/abc/en-ap/

    Do you suggest manually creating the sites on the authoring server instead of copying them from another site on the same portal?

    thanks and regards,

    Praveen

  73. Hi Prav33n1x,

    copying between variation labels using Manage Content and Structure will not harm content deployment – but it will harm variations.

    The copied items will not belong to the variation hierarchy.

    You should create the sites and pages in the source label and let them propagate to the target labels using variations, rather than copying them with Manage Content and Structure.

    Cheers,

    Stefan

  74. Hi,

    Thanks a lot for the quick reply Stefan.

    Cheers,

    Praveen

  75. Exception information: Exception type: InvalidOperationException. Exception message: You may not create more than 200 content deployment jobs.

    Can we create more than 200 content deployment jobs? We have the SP 2010 Enterprise edition.

  76. Hi Amit,

    no, that is not possible. This limit is hardcoded.
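
    You can check how close you are to the limit with the publishing administration API (a small sketch):

        using System;
        using Microsoft.SharePoint.Publishing.Administration;

        // Count the content deployment jobs that already exist in the farm.
        int jobCount = ContentDeploymentJob.GetAllJobs().Count;
        Console.WriteLine("Existing content deployment jobs: {0}", jobCount);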

    Cheers,

  77. Hi Stefan,

    I was facing an issue with content deployment in MOSS 2007. Error details below:

    Destination list already exists.

    I managed to resolve it by doing a manual export and import from the authoring site to a new site in the authoring site collection, running a deployment job to deploy the new site, and then deleting the old site which was throwing errors on content deployment.

    Not sure if it's a recommended approach, but it worked nevertheless!!

    Regards

    Laxmi

    laxmiprasadng123@gmail.com

  78. Hey Stefan,

    it's a great article.

    However, we encountered one issue with incremental content deployment.

    When we ran the incremental content deployment it failed with the error:

    'The file ************* cannot be imported because its parent web/site does not exist'

    We checked the manifest file and found that there is no reference to that particular file within the scope of the content deployment. (We checked every SPObject in the manifest file to find out whether we have a reference to that URL, maybe as a link or picture, inside the scope of the content deployment.)

    The reference we are getting is from outside the scope of the content deployment.

    We would like to know why this reference would come up when we don't have any reference inside the scope of the content deployment, the previous incremental content deployment was successful, and nothing has been changed.

    Thank you,

    SUSHD

  79. Hi Sushd,

    it is hard to answer this question without more details.

    I would recommend opening a support case with Microsoft Support Services to get this analyzed.

    Cheers,

    Stefan

  80. Hello Stefan,

    We also have the 98 MB upload limit on the target farm, but the way to cope with this is not clear to me.
    http://support.microsoft.com/kb/969565
    I don’t really understand where in my external farm I need to change something.

    Regards

    JL

  81. Hi JL,

    see "Problem 8: the cab file size exceeds the configured maximum content length configured in IIS 7" in the article above.

    Cheers,
    Stefan

  82. Hi Stefan,
    I am going to implement a content deployment job for my SharePoint 2013 site. We are currently changing content directly on the target site [Prod]. Can we implement a content deployment job now? I have the following doubts:
    We created the new site in Prod from the Publishing template – is that OK?
    We already have the content in Prod, so can we use an incremental deployment the first time?
    If we accidentally move content [a page] to Prod, can we revert only that particular content in Prod?

  83. Hi Abdul,

    to use content deployment you need to create the target site collection without selecting a site template.

    You cannot use the current target site. You need to delete your current target site collection and start with a full deployment to prod.

    Reverting a page in prod to an earlier version is possible.

    Cheers,
    Stefan

  84. SharePoint 2010 and Internet Web Site. Some Best Practices

  85. Hi Stefan,
    Please tell me how to revert a page in Prod.

  86. We are going to implement a content deployment job for a SharePoint 2013 site. We don't have a separate STAGING environment to act as the source of the content deployment.

    My proposed design is: A, B and C are my web applications. Webapp-A (test site) and Webapp-B (staging) are in the same farm; staging will act as the source of the content deployment.
    WebApp-C is production and acts as the destination of the content deployment.
    We have custom code deployed at the web application level and only one feature at FARM level.

    Will the content deployment job work without failing if we deploy and activate the new feature only in Webapp-A, not in Webapp-B, since both are in the same FARM?
    We have around 2000 pages in PROD. Can we continue with the existing site, or does content deployment only work against an empty site collection for the first deployment from source to destination?

  87. Hi Abdul,

    Content Deployment will activate site and site collection scoped features during the deployment. Web application scoped features have to be activated manually.
    Content Deployment requires an empty site collection at the target for the initial deployment. If there is content in the site collection, Content Deployment will fail.
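
    For the web application scoped features, a one-off manual activation on the target could look like this (URL and feature GUID are placeholders):

        using System;
        using Microsoft.SharePoint.Administration;

        // Activate a web application scoped feature on the production web application.
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://prod"));   // hypothetical URL
        webApp.Features.Add(new Guid("11111111-2222-3333-4444-555555555555"));       // hypothetical feature id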

    Cheers,
    Stefan

  88. Thanks Stefan, now I am clear on the destination side. We have two web applications in the source farm: one acts as staging and the other is needed for testing. We will install the new feature in the testing web app, not in the staging web app. Will content deployment work without failing, given that the farm is the same for the staging and testing web apps? Please share your email ID and I will share the farm topology structure with you.

  89. Hi Abdul,
    Content Deployment works fine between site collections in the same farm.
    Cheers,
    Stefan

  90. Hi Stefan,

    I have content deployment running between two SP2010 farms. The source contains two site collections in a single content database, and so does the destination.
    The content deployment path was not set up to deploy the entire site collection (it only includes some of the webs) because the destination farm is running FAST Search while the source farm is not.

    The content database sizes differ hugely between the farms (source: 32 GB, destination: 220 GB).
    I don't have any idea what can cause this.
    If you have any idea, can you please let me know?

    Thanks,

    Dongwook Kim

  91. Hi Dongwook,
    it is normal that the destination database grows more than the source. The reason is versioning and the fact that content deployment always deploys dependent items. So if a page is changed in the source site collection, content deployment deploys this page and all referenced items like images, the page layout, …
    In other words: you will also get a new version for each of the referenced items in the target, while in the source you only got a new version for the page you changed.
    Cheers,
    Stefan

  92. Hi Stefan,

    thank you for the info. I would like to ask one more question: is there any way to bring the content database size of the destination database down?

    Thanks,
    Dongwook

  93. Hi Dongwook,
    you can configure the number of versions to keep in each document library. That would purge older versions of the documents.
    After the versions are purged you would need to reduce the database size using some SQL methods to shrink it.
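
    A minimal sketch of setting such a limit on the Pages library (URL is a placeholder):

        using Microsoft.SharePoint;

        // Keep only the last 5 major versions in the Pages library.
        using (SPSite site = new SPSite("http://target/sites/pub"))   // hypothetical URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList pages = web.Lists["Pages"];
            pages.EnableVersioning = true;
            pages.MajorVersionLimit = 5;   // older major versions are purged as items are updated
            pages.Update();
        }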
    Cheers,
    Stefan

  94. Hi Stefan,
    we have a problem with CD on custom search pages. After CD copies all content to the target site we get "The context has expired and can no longer be used" (0x80090317). We disabled "Web Page Security Validation" in the Web Application General Settings, and this was in the event viewer:
    Failed import operation for Content Deployment job 'Remote import job for job with sourceID =…. Exception was: 'Microsoft.SharePoint.SPException: The context has expired and can no longer be used. (Exception from HRESULT: 0x80090317) ---> System.Runtime.InteropServices.COMException: The context has expired and can no longer be used. (Exception from HRESULT: 0x80090317)'

    Thanks,
    Me

  95. Hi Me,
    sounds as if some custom event receivers are being fired during the import, which can cause such problems. If you can’t figure it out yourself, I would recommend opening a support case with Microsoft.
    Cheers,
    Stefan

  96. Hi Stefan,
    I’m having an issue where the web site slows down or becomes unavailable during content deployment.
    The source farm is running flawlessly, but the target farm keeps slowing down; this started a couple of weeks ago.
    The target farm consists of 2 WFEs (Win 2008 R2), 2 APP servers and a SQL cluster (SQL 2008 R2). The content DB is getting bigger and is now a little more than 200 GB on the target farm (40 GB on the source).
    My first concern is the 200 GB content DB: there seems to be a lot of activity on SQL during the content deployment.
    If that is the case, what is the best way to split a content database? Unfortunately, the site is a single site collection.
    My second concern is the TaxonomyHiddenList: the list contains almost 12,000 items (enterprise keywords) and the majority of content types use managed metadata and enterprise keywords.
    Can you tell me what the issue might be?
    Thanks,
    Dongwook

    1. Hi Dongwook,
      performance is a complex topic and it is not possible to answer that just from the information you provided.
      I would recommend opening a support case with Microsoft to get your environment analyzed and to understand where the bottlenecks come from.
      Cheers,
      Stefan

  97. The content deployment job got this error. Do you have any idea?
    9:05 AM Violation of PRIMARY KEY constraint ‘PK__#Increme__3214EC066614E12A’. Cannot insert duplicate key in object ‘dbo.#IncrementalSearchScope’. The duplicate key value is (91140da0-3811-4013-b57f-3f2310f86e10). Violation of PRIMARY KEY constraint ‘PK__#Increme__3214EC066614E12A’. Cannot insert duplicate key in object ‘dbo.#IncrementalSearchScope’. The duplicate key value is (91140da0-3811-4013-b57f-3f2310f86e10). The statement has been terminated. The statement has been terminated.
    Information 2/4/2016 9:05 AM Snapshot of source content database deleted.
    Error 2/4/2016 9:05 AM Content deployment job ‘Deployment Job Primary Path’ failed. The exception thrown was ‘System.Data.SqlClient.SqlException’ : ‘Violation of PRIMARY KEY constraint ‘PK__#Increme__3214EC066614E12A’. Cannot insert duplicate key in object ‘dbo.#IncrementalSearchScope’. The duplicate key value is (91140da0-3811-4013-b57f-3f2310f86e10). Violation of PRIMARY KEY constraint ‘PK__#Increme__3214EC066614E12A’. Cannot insert duplicate key in object ‘dbo.#IncrementalSearchScope’. The duplicate key value is (91140da0-3811-4013-b57f-3f2310f86e10). The statement has been terminated. The statement has been terminated.’
    Thanks,
    Dongwook

    1. Hi Dongwook,
      please verify whether this error still occurs with the latest CU.
      If it does, I would recommend opening a support case, as this type of error indicates either a problem with SharePoint or a database inconsistency.
      Cheers,
      Stefan
