Automating Index Fragmentation Reports

Summary: Guest blogger Laerte Junior discusses how to use Windows PowerShell to automate an index fragmentation report and generate an Excel file.

One of the day-to-day tasks of a DBA is maintaining the environment, which includes checking the fragmentation of indexes. In this post, I will show how to automate this collection and generate an Excel file.

Environment considerations

For this particular script, I am assuming that, like me, you use a central “collector” machine with SQL Server 2012 or later, Excel, and Windows PowerShell 4.0 or later installed. Yes! I will use some coding features that only work in Windows PowerShell 4.0 and later. So you are good to go if your checklist looks exactly like this one:

  • Windows PowerShell 4.0 or later
  • SQL Server 2012 or later
  • Coffee and chocolate donuts
  • WooHoo ! Watched the new Star Wars trailer

Coding considerations

Here are some things to consider about running this script:

  • $PSItem and the PipelineVariable common parameter are being used. This is why you need PowerShell 4.0 or later. $PSItem is a new alias for the $_ placeholder, introduced in Windows PowerShell 3.0, and PipelineVariable is an awesome new common parameter supported in Windows PowerShell 4.0. For more information, see about_CommonParameters.
  • Invoke-SQLCMD is used. It is a cmdlet delivered with the SQLPS provider. In versions of SQL Server before 2012, SQLPS is not a module. In this code, I am importing it as a module, which is why you need a central repository machine with SQL Server 2012 or later to collect the data.
  • This script automates the collection of the data, so it will be scheduled and called by the Windows Task Scheduler. That means I need to log everything to check whether something bad happened. I am using Start-Transcript and Stop-Transcript for that operation.
  • A list of the SQL Server instance names is required. The script reads it from a text file called InstanceNames.txt. It can easily be changed to query a table in the central repository with all the SQL Server instance names.
  • If a SQL Server instance cannot be connected to, the script must not stop. It needs to skip that instance, log the failure, and move on to the next instance name. To do this, I am using an error-handling plan with Try/Catch/Finally and the ErrorVariable common parameter.
  • I am not using SMO to retrieve the index fragmentation information. SMO has a known bug, and I want to keep my code as simple as I can: not writing tons of conditions to check which version of SQL Server I am connected to before using SMO. For more information, see SMO and Index Fragmentation.
  • I am using T-SQL directly with Invoke-SQLCMD to gather the index fragmentation, and SMO to connect to the instances and loop through the databases.
  • KISS–L (keep it simple, stupid Laerte). We are members of a team, and as a team, I need to remember that other people will maintain my code. There are very good best practices documented for coding in shared and corporate environments, but essentially I like to say, “Use common sense.”
  • Tons of verbose output and code comments. Yes. Everything needs to be logged in depth, and people need to understand what I did.
  • Do not reinvent the wheel. Check the Windows PowerShell community channels and you will find a lot of very good functions, scripts, and ideas already posted by the top PowerShell heroes. Believe me, I live (virtually) with them. They are the best. Literally, the Jedi Council.
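Because $PSItem and the PipelineVariable common parameter do the heavy lifting in this script, here is a minimal sketch of how they behave, independent of SQL Server. The instance names are made up for illustration:

```powershell
# Minimal sketch of $PSItem and -PipelineVariable (requires Windows PowerShell 4.0).
# The instance names below are made up; they are not part of the final script.
'SERVER1','SERVER2\INST01' |
Where-Object { $PSItem -ne '' } -PipelineVariable Instance |
ForEach-Object {
    # $Instance still holds the value captured upstream by -PipelineVariable,
    # even inside this nested script block.
    "Processing instance: $Instance"
}
```

Without -PipelineVariable, you would have to save $_ into your own variable at every pipeline stage; with it, the value captured at the Where-Object stage stays available downstream.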

Coding flow

The idea of this script is to automate the report of index fragmentation. The control flow is:

  • Loop through the SQL Server instance names
  • Connect to each instance
  • Loop through the databases
  • Retrieve the index fragmentation for each table
  • Export the results to CSV files, one per database
  • Generate an Excel file with the databases from the instance as worksheets. The Excel file will be named after the instance.

To generate the Excel file from the CSV files, I am using the awesome function from my fellow MVP, Luc Dekens. Check it out: Beyond Export-Csv: Export-Xls.

Here’s the code

Note  You can download this code from GitHub: LaerteJunior/ScriptingGuysNov2015.

Let’s get deep into the code…

Check and configure the location of the script and initial configurations

#Getting the location of the script to create the log and instance directories.

#If this fails, the script needs to stop.

$scriptRoot = Split-Path -Parent $MyInvocation.MyCommand.Definition

$PSScriptRoot = $scriptRoot

Set-Location $PSScriptRoot

Create the paths

try {

            $Creator = @{

                        $True = { $null }

                        $False = { $null = New-Item -Path $Path -ItemType Directory }
            }

            $Path = "$($PSScriptRoot)\Log"

            & $Creator[(Test-Path $Path)]

} catch {

            throw "Could not create the LOG directory"
}


Tests before starting the script

#Trying to import the SQLPS module. If it cannot be imported, the script needs to stop.

try {

            Import-Module SQLPS -DisableNameChecking -ErrorAction Stop

} catch {

            throw "Could not import the SQLPS module"
}


#Checking for SMO. If it is not available, the script needs to stop.

if ([Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") -eq $null -or ([System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") -eq $null)) {

            Throw "SMO not available"
}


Define some variables

#T-SQL to retrieve the index fragmentation

$SQLIndex = "

SELECT object_name(IPS.object_id) AS [TableName],

   SI.name AS [IndexName],

   IPS.index_type_desc,

   IPS.avg_fragmentation_in_percent

FROM sys.dm_db_index_physical_stats(db_id(), NULL, NULL, NULL , 'DETAILED') IPS

   JOIN sys.tables ST WITH (nolock) ON IPS.object_id = ST.object_id

   JOIN sys.indexes SI WITH (nolock) ON IPS.object_id = SI.object_id AND IPS.index_id = SI.index_id

WHERE ST.is_ms_shipped = 0 " #and IPS.page_count > 1000

#Variable to store the per-instance CSV paths that are created. It will be used to generate the Excel files.

$PathCSV = @()

#Variable with the path of the instance names file

$InputInstances = "$($PSScriptRoot)\InstanceNames.txt"

#Verbose preference to show everything

$VerbosePreference = 'Continue'

Load the instance names and check the connection

Try {

            #Loading the instance names and storing them in the InstanceName variable with the awesome
            #PipelineVariable common parameter

            Get-Content $InputInstances -PipelineVariable InstanceName -ErrorAction Stop  |

            #Loop through the instances

            ForEach-Object {

                        Write-Verbose "Connecting to instance $($InstanceName)"

                        try {

                                    try {

                                                #Clean the InstanceSkipped variable. It stores the Write-Error output
                                                #that is used to skip the instances with errors.
                                                $InstanceSkipped = $null

                                                $Connection = New-Object ("Microsoft.SqlServer.Management.Common.ServerConnection") $InstanceName


                                    } catch {

                                                Write-Error "Could not connect to instance $($InstanceName). Error: $($_.Exception.Message)" -ErrorVariable InstanceSkipped


                                    } finally {

                                                #if the instance was not skipped by any error

                                                if (!($InstanceSkipped)) {

Define some variables and collect the data

#Replacing "\" with "_" to create the paths and CSV file names

$InstanceNamePath = $InstanceName -replace "\\","_"

#Accumulating the paths in $PathCSV to use when generating the Excel files and

#storing the current path in the $Path variable

$PathCSV += $Path = "$($PSScriptRoot)\Instances\$($InstanceNamePath)"

#Removing the path and everything inside it if it exists. I am not handling errors here or checking whether it exists before removing; I don't need to.

#If it does not exist, the removal attempt generates an error that is suppressed; if it exists, it is removed.

Remove-Item -Recurse -Path $Path -Force -ErrorAction SilentlyContinue

New-Item -Path $Path -ItemType Directory -Force

#Connecting to the instance. At this point, I know that the instance is online.

#I am also storing the database enumeration in the DatabaseName variable using the awesome

#PipelineVariable common parameter.

$SQLServer = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Connection

$SQLServer.Databases  |

Where-Object {!($PSItem.IsSystemObject)} -PipelineVariable DatabaseName |

foreach-object {

            #Running the T-SQL and saving the output to a CSV file

            Invoke-Sqlcmd -ServerInstance $InstanceName -Query $SQLIndex -Database $DatabaseName.Name -Verbose:$false |

            Select-Object @{N='Table Name';E={$_.TableName}},

                          @{N='Index Name';E={$_.IndexName}},

                          @{N='Index Type';E={$_.Index_type_desc}},

                          @{N='Fragmentation';E={$_.avg_fragmentation_in_percent}} |

            Export-Csv "$($Path)\$($DatabaseName.Name).csv" -NoClobber -NoTypeInformation

            } #end of the database ForEach-Object

                                                } #end of the InstanceSkipped check

                                    } #end of the finally block

                        } #end of the outer try

            } #end of the instance ForEach-Object

} Catch {

            Throw "Could not read the instance names file. Error: $($_.Exception.Message)"
}

Generate the Excel file

try {

            if ($PathCSV) {

                        $PathCSV |

                        ForEach-Object {

                                    $PathToSave = "$($PSItem)\$(($PSItem -split '\\')[-1]).xlsx"

                                    dir "$($PSItem)\*.csv" |

                                    ForEach-Object {

                                                try {

                                                            $data = Import-Csv $PSItem.FullName

                                                            Export-Xls $data $PathToSave -WorksheetName $PSItem.BaseName -AppendWorksheet

                                                            Write-Verbose "Excel file created at $($PathToSave)"

                                                } catch {

                                                            Write-Error "Oh boy...something bad happened. Error: $($_.Exception.Message)"
                                                }
                                    }
                        }
            }
} catch {

            Write-Error "Oh boy...something bad happened. Error: $($_.Exception.Message)"
}


Schedule and register the script

I am using the scheduled jobs feature that was introduced in Windows PowerShell 3.0, so I just need to create the trigger:

$trigger = New-JobTrigger -Daily -At 1am

Register the job in the Windows Task Scheduler by pointing to the .ps1 file:

Register-ScheduledJob -Name EnergyAnalysisJob -Trigger $trigger -FilePath 'C:\laerte\scripting guys\indexes.ps1'
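As a quick sanity check (my addition, not part of the original script), you can confirm the registration and its trigger with the PSScheduledJob cmdlets:

```powershell
# Verify the registered job and its daily 1:00 AM trigger.
# EnergyAnalysisJob is the job name used in the registration above.
Get-ScheduledJob -Name EnergyAnalysisJob |
Get-JobTrigger |
Select-Object Frequency, At, Enabled
```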

For more information, see Scheduling Background Jobs in Windows PowerShell 3.0.

That is it. Thanks to my good friend, Ed Wilson, for allowing me to talk a little bit about my passion: automation!


Thanks, Laerte. Awesome post.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy
