Recommended optimizations, monitoring and maintenance for the Accelerator environment

Article: 100019120
Last Published: 2023-09-19
Product(s): Enterprise Vault

Problem

Recommended optimizations, monitoring and maintenance for the Accelerator environment

Cause

This document assists the Enterprise Vault Accelerator administrator in properly optimizing, monitoring and maintaining the Accelerator environment.

Solution

Article sections:
Optimizations
Monitoring
Maintenance

Optimizations


Item 1 - Antivirus exclusions

Please review the following articles to exclude the proper extensions and folders when using anti-virus software on the SQL server.

- Recommended list of antivirus exclusions for SQL Server when used for Veritas Enterprise Vault, Compliance Accelerator and Discovery Accelerator (under Related Articles).
- Antivirus folder-level exclusions for Enterprise Vault and McAfee Antivirus version 8.8 (under Related Articles).

- How to choose antivirus software to run on computers that are running SQL Server (under the section: Directories and file-name extensions to exclude from virus scanning).
https://support.microsoft.com/kb/309422/

It is also imperative that the Vault Service Account (VSA) temporary folder on all EV and Accelerator servers, and the Export folder on all Accelerator servers, be excluded from antivirus scanning, per the article 'Recommended list of antivirus exclusions for Veritas Enterprise Vault' (under Related Articles). The following query, run against the Accelerator Customer database, lists all of the Export locations:

SELECT DISTINCT ExportOutputDirectory AS [Export Location]
FROM tblCase WHERE Type IN (101, 102)
UNION SELECT DISTINCT Location FROM tblProduction
ORDER BY [Export Location];



Item 2 - Network settings on the Enterprise Vault, Accelerator and SQL servers

Please review and follow the steps under the sections for optimizing TCP/IP communications and for useful Net Shell commands in the article 'Recommended steps to optimize performance on Enterprise Vault, Compliance Accelerator, Discovery Accelerator and SQL Servers' under Related Articles.

Notes:

- On servers running Windows 2008 and higher, the operating system by default allows socket connections to be established on ports 49152 through 65535, which equates to 16,384 dynamic (user) ports. This default range can be inadequate for an Enterprise Vault and Accelerator environment. The article describes how to increase the number of available ports by using the netsh int ipv4 set dynamicport tcp command.

- Microsoft has identified an issue when running the netsh command to set global TCP parameters on Windows Server 2008 and Vista machines. Some global parameters, such as TCPTimedWaitDelay, can be changed from their default or manually set values to 0xffffffff. Before running any netsh command, it is recommended to review the following Microsoft article: https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/troubleshoot-network-performance-features-rss-netdma. After running any netsh command noted in this article, it is also recommended to review the TCP parameters noted in that Microsoft article and to apply the hotfix from that article if needed.

- When increasing the MaxUserPort, Microsoft recommends that port 1434 be reserved for use by the SQL Server Browser service (sqlbrowser.exe).

Warning: Incorrect use of the Windows registry editor may prevent the operating system from functioning properly. Great care should be taken when making changes to the Windows registry. Registry modifications should only be carried out by persons experienced in the use of the registry editor application. It is recommended that a complete backup of the registry and server be made prior to making any registry changes.


Item 3 - Disable TCP/IP Offload Engine (TOE), TCP Segmentation Offloading (TSO) and TCP Chimney on the Enterprise Vault, Accelerator and SQL servers

Important Note For Servers In A Clustered Environment Or That Use Network Interface Card Teaming

It is of vital importance to determine if a server is a member of a cluster BEFORE making any changes to the TCP Offload Engine settings described in this article. Examples include Windows Server Failover Cluster nodes and SQL Always On Availability Groups replicas. Some cluster applications require TCP Offload Engine to be enabled on each cluster node or replica for proper functionality. Disabling any TCP Offload Engine settings on cluster nodes or replicas could adversely affect network performance for cluster-aware applications and/or operating systems. As such, it is recommended not to edit any TCP Offload Engine settings for servers that are nodes or replicas in a clustered environment without first consulting the cluster application documentation. If the cluster documentation clearly confirms TCP Offload Engine settings can be changed without any negative effects, then proceed with the changes after creating a plan to roll back the changes if needed. When in doubt, do NOT make any changes to the TCP Offload Engine settings.

Similar consideration should be given to servers using Network Interface Card (NIC) teaming. Some NIC teaming applications require TCP Offload Engine to be enabled on each NIC for proper functionality. Disabling any TCP Offload Engine settings on teamed NICs could adversely affect network performance for cluster-aware applications and/or operating systems. As such, it is recommended not to edit any TCP Offload Engine settings for servers that use NIC teaming without first consulting the NIC teaming documentation. If the NIC teaming documentation clearly confirms TCP Offload Engine settings can be changed without any negative effects, then proceed with the changes after creating a plan to roll back the changes if needed. When in doubt, do NOT make any changes to the TCP Offload Engine settings.

 

TCP Offload Engine (TOE) is a networking technology that enables the Windows networking subsystem to offload the processing of a TCP/IP connection to a network adapter that includes special support for TCP/IP offload processing. More information on TOE may be found in the following Microsoft article:

- Information about the TCP Chimney Offload, Receive Side Scaling, and Network Direct Memory Access features in Windows Server 2008
https://support.microsoft.com/en-us/kb/951037

TOE also has the following associated components:

- TCP Chimney.
- Receive Side Scaling.
- TCP Segmentation Offloading (TSO).

While TOE does work for most applications, Enterprise Vault and the Accelerators perform some functions using large batches of items, also known as batch processing. For example, an Accelerator Search will cause the Enterprise Vault Indexing engine to return Search hits in large batches; similarly, an Accelerator Export will request items from EV Storage in large batches. The TOE engine can be slow when performing batch processing, sometimes causing delays and latency. These delays could cause the active function (Search, Export, etc.) to time out and fail.

To view the current Offload state, run the following from a command prompt:

- Windows 2003 Server:

netsh int ip show offload

- Windows 2008 Server or higher:

netsh int tcp show global

Steps to disable TOE and its associated components can be found in article 'TCP Chimney, TCPIP Offload Engine (TOE) or TCP Segmentation Offload (TSO) will cause a transport-level error to be logged resulting in inaccurate hit counts for Accelerator searches' under Related Articles.

The article also mentions keeping the network card drivers updated. When .NET 2.0 programming with XML batch transfers (as used by Enterprise Vault and the Accelerators) was introduced, it was a relatively new addition to the programming world, and the age of a network card driver could make a difference in successful communications between the servers. For network cards to handle the new and ever-changing demands of the .NET world, the network drivers should be less than two years old.

To locate the network card driver date:

- From the Control Panel, open Network Connections.
- Open the properties page for the network connection being used.
- Click Configure.
- On the Driver tab, verify the date of the driver is less than two years old.


Item 4 - Vault Service Account Temporary folders size and additional Registry optimizations

All functions that involve an exchange of data between the Accelerators and Enterprise Vault (EV) utilize the Vault Service Account (VSA) temporary folders on the respective servers for temporary storage. These folders need to be large enough to accommodate the data being processed on the Accelerator server, as well as on the EV Indexing server(s) for Searches and the EV Storage server(s) for Export requests.

At a minimum, the VSA temporary folders should have 20 GB of available drive space. It is recommended the VSA temporary folders have 40 GB of available drive space. For certain Exports where there are large numbers of items or there are large item sizes, the VSA temporary folders may require 80 GB or more of available drive space. It is also recommended to move the VSA temporary folders off the system drive and on to a drive with fast disk I/O access. This applies to the Accelerator servers as well as the EV servers.

See 'Recommended steps to optimize performance on Enterprise Vault, Compliance Accelerator, Discovery Accelerator and SQL Servers' under Related Articles. This article also contains Registry and other optimizations that should be implemented. In addition, it is also recommended to implement the optimizations in article 'Enterprise Vault Best Practice Registry Keys and Boot.ini Settings' under Related Articles.


Item 5 - SQL file growth

The tempdb and Accelerator Customer databases and their transaction logs can grow rapidly in an active Accelerator environment. Running regular maintenance can avoid issues involving the transaction logs, and the following preemptive steps can be taken to accompany the regular maintenance:

- Set the Autogrowth for database files and transaction log files to 'File Growth: In Megabytes' with a value of between 250 MB and 500 MB. Set the Maximum File Size to Unrestricted File Growth.
- Verify there is ample space for the database files and transaction log files to grow.
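As an illustrative sketch, the Autogrowth settings above can also be applied with T-SQL. The database name EVCustomerDB and the logical file names below are placeholders and must be replaced with the actual names (which can be found with sp_helpfile):

```sql
-- Hypothetical sketch; database name and logical file names are placeholders.
ALTER DATABASE [EVCustomerDB]
MODIFY FILE (NAME = N'EVCustomerDB_Data', FILEGROWTH = 256MB, MAXSIZE = UNLIMITED);
ALTER DATABASE [EVCustomerDB]
MODIFY FILE (NAME = N'EVCustomerDB_Log', FILEGROWTH = 256MB, MAXSIZE = UNLIMITED);
```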



Monitoring


Item 1 - Accelerator Activity

1. Active Accelerator Searches

Monitoring of active Accelerator Searches is essential to Accelerator health. The following SQL query can be run against each of the Accelerator Customer databases to show active Searches:

SET NOCOUNT ON;
SELECT tis.CaseID
     , tc.Name AS 'Case/Dept Name'
     , tp1.PrincipalName AS 'Case/Dept Owner'
     , CASE WHEN (tc.FolderType = 330 AND tc.Type = 101) THEN 'Case'
       WHEN (tc.FolderType = 330 AND tc.Type = 102) THEN 'Department'
       WHEN tc.FolderType = 331 THEN 'Folder'
       WHEN tc.FolderType = 332 THEN 'Hidden Folder'
       ELSE 'Other' END AS 'FolderType'
     , tc.StatusID AS 'Case StatusID', ts2.Name AS 'Case Status'
     , tis.SearchID , tis.Name AS 'Search Name'
     , ISNULL(tp2.PrincipalName,'Not available') AS 'Search Submitter'
     , tis.CreateDate, tis.RunDate
     , ts.StatusID AS 'Search StatusID' , ts.Name AS 'Search Status'
     , tis.Type AS 'Search Type ID', ts3.Name AS 'Search Type'
     , tis.NumHits AS '# of Hits'
     , tis.SampleSize AS 'Sample Percentage'
     , tis.SampleResultSize AS '# of Hits Sampled'
     , tis.NumIndexVolsSearch, tis.NumIndexVolsTotal
FROM tblIntSearches AS tis
JOIN tblCase AS tc ON tis.CaseID = tc.CaseID
JOIN tblStatus AS ts2 ON ts2.StatusID = tc.StatusID
LEFT JOIN tblPrincipal AS tp1 ON tc.OwnerPrincipalID = tp1.PrincipalID
LEFT JOIN tblPrincipal AS tp2 ON tis.CreatedByID = tp2.PrincipalID
LEFT JOIN tblStatus AS ts ON tis.StatusID = ts.StatusID
JOIN tblStatus AS ts3 ON tis.Type = ts3.StatusID
WHERE tc.Type IN (101, 102)
AND tis.StatusID IN (2, 5, 6)
ORDER BY tis.CaseID, tis.SearchID;


Inspect the RunDate entries for any results. If the RunDate is over 24 hours old, there may be an issue that needs to be investigated and resolved.

2. Paused Accelerator Searches

Paused Accelerator Searches can prevent the release of worker threads for use by active Searches, thus causing slow or no Search processing. The following SQL query can be run against each of the Accelerator Customer databases to show Searches in the pausing process:

SET NOCOUNT ON;
SELECT tis.CaseID
     , tc.Name AS 'Case/Dept Name'
     , tp1.PrincipalName AS 'Case/Dept Owner'
     , CASE WHEN (tc.FolderType = 330 AND tc.Type = 101) THEN 'Case'
       WHEN (tc.FolderType = 330 AND tc.Type = 102) THEN 'Department'
       WHEN tc.FolderType = 331 THEN 'Folder'
       WHEN tc.FolderType = 332 THEN 'Hidden Folder'
       ELSE 'Other' END AS 'FolderType'
     , tc.StatusID AS 'Case StatusID', ts2.Name AS 'Case Status'
     , tis.SearchID , tis.Name AS 'Search Name'
     , ISNULL(tp2.PrincipalName,'Not available') AS 'Search Submitter'
     , tis.CreateDate, tis.RunDate
     , ts.StatusID AS 'Search StatusID' , ts.Name AS 'Search Status'
     , tis.Type AS 'Search Type ID', ts3.Name AS 'Search Type'
     , tis.NumHits AS '# of Hits'
     , tis.SampleSize AS 'Sample Percentage'
     , tis.SampleResultSize AS '# of Hits Sampled'
     , tis.NumIndexVolsSearch, tis.NumIndexVolsTotal
FROM tblIntSearches AS tis
JOIN tblCase AS tc ON tis.CaseID = tc.CaseID
JOIN tblStatus AS ts2 ON ts2.StatusID = tc.StatusID
LEFT JOIN tblPrincipal AS tp1 ON tc.OwnerPrincipalID = tp1.PrincipalID
LEFT JOIN tblPrincipal AS tp2 ON tis.CreatedByID = tp2.PrincipalID
LEFT JOIN tblStatus AS ts ON tis.StatusID = ts.StatusID
JOIN tblStatus AS ts3 ON tis.Type = ts3.StatusID
WHERE tc.Type IN (101, 102)
AND tis.StatusID IN (175, 176, 177, 178, 179)
ORDER BY tis.CaseID, tis.SearchID;


Each result needs to be investigated and resolved.

3. Accelerator Searches in Pending Status

Neglect by Department/Case owners can cause millions of items from Searches to be backlogged in the Customer database awaiting some action to be performed by the end user. Run the following SQL query against the Accelerator Customer database to determine if Searches are awaiting action:  

SET NOCOUNT ON;
SELECT tis.CaseID
     , tc.Name AS 'Case/Dept Name'
     , tp1.PrincipalName AS 'Case/Dept Owner'
     , CASE WHEN (tc.FolderType = 330 AND tc.Type = 101) THEN 'Case'
       WHEN (tc.FolderType = 330 AND tc.Type = 102) THEN 'Department'
       WHEN tc.FolderType = 331 THEN 'Folder'
       WHEN tc.FolderType = 332 THEN 'Hidden Folder'
       ELSE 'Other' END AS 'FolderType'
     , tc.StatusID AS 'Case StatusID', ts2.Name AS 'Case Status'
     , tis.SearchID , tis.Name AS 'Search Name'
     , ISNULL(tp2.PrincipalName,'Not available') AS 'Search Submitter'
     , tis.CreateDate, tis.RunDate
     , ts.StatusID AS 'Search StatusID' , ts.Name AS 'Search Status'
     , tis.Type AS 'Search Type ID', ts3.Name AS 'Search Type'
     , tis.NumHits AS '# of Hits'
     , tis.SampleSize AS 'Sample Percentage'
     , tis.SampleResultSize AS '# of Hits Sampled'
     , tis.NumIndexVolsSearch, tis.NumIndexVolsTotal
FROM tblIntSearches AS tis
JOIN tblCase AS tc ON tis.CaseID = tc.CaseID
JOIN tblStatus AS ts2 ON ts2.StatusID = tc.StatusID
LEFT JOIN tblPrincipal AS tp1 ON tc.OwnerPrincipalID = tp1.PrincipalID
LEFT JOIN tblPrincipal AS tp2 ON tis.CreatedByID = tp2.PrincipalID
LEFT JOIN tblStatus AS ts ON tis.StatusID = ts.StatusID
JOIN tblStatus AS ts3 ON tis.Type = ts3.StatusID
WHERE tc.Type IN (101, 102)
AND tis.StatusID IN (3, 10)
ORDER BY tis.CaseID, tis.SearchID;


Any results should be reviewed and either Accepted or Rejected.

4. Export/Production Runs

Monitoring of active Accelerator Exports is essential to Accelerator health. Run the following SQL query against each of the Accelerator customer databases.

SET NOCOUNT ON;
SELECT tc.Name AS 'Case/Department Name'
    , tpr.PrincipalName AS 'Case Owner'
    , tp.Name AS 'Export/Production Name'
    , tp.ProductionID
    , ts0.Name AS 'Export or Production'
    , tp.NumItemsToProduce AS '# of Items to Produce'
    , tp.NumItemsProduced AS '# of Items Produced'
    , SUM(CAST(tidi.size AS bigint)) AS 'Export Size (in KB)'
    , tp.Location AS 'Output Folder'
    , CASE WHEN tp.ExportToHTML = 1 THEN 'HTML'
    WHEN tp.ExportToPST = 1 THEN 'PST'
    WHEN tp.ExportToNSF = 1 THEN 'Lotus NSF' ELSE 'MSG' END AS 'Export Type'
    , ts1.Name AS 'Current Status'
    , ts1.StatusID
    , CASE WHEN tp.ExportToPST = 1 THEN (SELECT ts3.Name)
    ELSE 'N/A' END AS 'PST Export Status'
    , CASE WHEN tp.ExportToPST = 1 THEN (SELECT tp.ExportPSTStatus) ELSE '0' END AS 'ExportPSTStatus'
    , ts2.Name AS 'Report Status'
    , tp.ReportProducedStatus
    , tp.CreateDate AS 'Create Date'
    , tp.CompletionDate AS 'Completion Date'
    , tp.Info AS 'Export Information'
FROM tblProduction tp
INNER JOIN tblCase tc ON tp.CaseID = tc.CaseID
INNER JOIN tblIntDiscoveredItems tidi ON tp.CaseID = tidi.CaseID
LEFT JOIN tblPrincipal tpr ON tc.OwnerPrincipalID = tpr.PrincipalID
LEFT JOIN tblStatus ts0 ON tp.Type = ts0.StatusID
LEFT JOIN tblStatus ts1 ON tp.StatusID = ts1.StatusID
LEFT JOIN tblStatus ts2 ON tp.ReportProducedStatus = ts2.StatusID
LEFT JOIN tblStatus ts3 ON tp.ExportPSTStatus = ts3.StatusID
WHERE tp.StatusID IN (50, 51, 56, 57, 59)
GROUP BY tc.Name, tpr.PrincipalName, tp.Name, tp.ProductionID, ts0.Name, tp.NumItemsToProduce, tp.NumItemsProduced, tp.Location, tp.ExportToHTML, tp.ExportToPST, tp.ExportToNSF, ts1.Name, ts1.StatusID, tp.ExportToPST, ts3.Name, tp.ExportPSTStatus, ts2.Name, tp.ReportProducedStatus, tp.CreateDate, tp.CompletionDate, tp.Info
ORDER BY tc.Name, tp.ProductionID;


Too many concurrent active Exports can cause delays when processing other Accelerator functions; this is perceived in the Accelerator Client as slow responsiveness. If any Exports listed by the above query have a Create Date over 24 hours old, a service call to Veritas Technical Support is suggested. Other status codes may indicate issues that need to be investigated and resolved by the Accelerator administrator. The problem might be a temporary communication issue and may be resolved by restarting the Enterprise Vault Accelerator Manager Service (EVAMS) and then retrying the Export.


Item 2 - Quick SQL checks for possible performance bottlenecks

1. Use Task Manager to examine the amount of Virtual Memory consumed by the sqlservr.exe service.
Record the amount of memory paged to Virtual Memory (VM) after a reboot of the server, while the system is performing at normal speed, to establish a benchmark value. If the VM usage consistently exceeds the benchmark value, consider moving some databases to a new SQL server or adding more memory.

2. Use Performance Monitor and review the CPU, memory and hard drive counters for any bottlenecks. See article 'Analyzing SQL Performance using Performance Monitor Counters' under Related Articles for more information on how to use Performance Monitor for this operation.


Item 3 - SQL utilization

Some Accelerator functions such as Searching, Reviewing, Productions and Legal Holds require a considerable amount of SQL resources. When a SQL server is over-utilized, deadlocks on the database resources may occur and lead to SQL time-outs. If SQL time-outs or deadlocks are an issue and the steps in this document are being followed, it is recommended to move the Accelerator databases to a dedicated SQL server.

To monitor the SQL server for deadlocking issues, start and monitor a SQL Profiler (SQL 2008 R2 or earlier) or Extended Events session (SQL 2012 or higher) while performing daily activities.

SQL Profiler steps:
 
- Open SQL Server Profiler.
- Select File then New Trace.
- Use a Blank Template and expand Locks.
- Select Lock:Deadlocks and Lock:Deadlock Chains.
- Run.

Extended Events steps:

- Open SQL Server Management Studio | Management | Extended Events | Right-click on Sessions | New Session Wizard. Follow the Wizard prompts.
- Templates: Do not use a template.
- Select Event To Capture: lock_deadlock, lock_deadlock_chain.
- Include the following Global Fields: event_sequence, sql_text, session_id.
- Session Event Filters: add a filter for sqlserver.database.name and set to equal the Accelerator Customer database name (without quotes).
- Session Data Storage: set to 'Save data to a file for later analysis (event_file_target)'.
- Complete the wizard.
- To start the session: Right-click the session | Start Session.
- To monitor the session: Right-click the session | Watch Live Data.
- To stop the session: Right-click the session | Stop Session.
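The wizard steps above can also be expressed as a T-SQL script. This is an illustrative sketch: the session name AcceleratorDeadlocks, the database name EVCustomerDB and the output file name are placeholders to be adjusted for the environment:

```sql
-- Hypothetical sketch; session, database and file names are placeholders.
CREATE EVENT SESSION [AcceleratorDeadlocks] ON SERVER
ADD EVENT sqlserver.lock_deadlock(
    ACTION(package0.event_sequence, sqlserver.session_id, sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'EVCustomerDB')),
ADD EVENT sqlserver.lock_deadlock_chain(
    ACTION(package0.event_sequence, sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.event_file(SET filename = N'AcceleratorDeadlocks.xel');

ALTER EVENT SESSION [AcceleratorDeadlocks] ON SERVER STATE = START;
-- When finished monitoring:
-- ALTER EVENT SESSION [AcceleratorDeadlocks] ON SERVER STATE = STOP;
```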

The Profiler Trace or Extended Events Session activity should remain empty or show very limited activity.



Maintenance


Item 1 - SQL Maintenance

In order to maintain a healthy Accelerator environment, it is imperative that the databases be maintained regularly. The maintenance interval will vary depending on the environment and the usage of the Accelerator product(s). A good benchmark is to run the Statistics health and Indexes health queries below once a week and review/monitor the results over a period of time, then determine a maintenance interval based on the results and continue to monitor the databases' health. The recommended maintenance steps are discussed in article 'How to automatically back up and perform recommended maintenance for Enterprise Vault SQL databases' under Related Articles. Here is a summary of the basic maintenance steps:

1. Backup the databases.
2. [OPTIONAL STEP] Shrink the databases (see the Shrink Database option discussion below).
3. Backup and truncate the transaction logs.
4. Rebuild Indexes.
5. Update Statistics.
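As an illustrative T-SQL sketch of steps 1 and 3 through 5 (the database name, backup paths and table name below are placeholders; the referenced article contains the full recommended procedure):

```sql
-- Hypothetical sketch; names and paths are placeholders.
BACKUP DATABASE [EVCustomerDB] TO DISK = N'D:\SQLBackups\EVCustomerDB.bak' WITH INIT;
-- In the FULL recovery model, backing up the log also truncates it:
BACKUP LOG [EVCustomerDB] TO DISK = N'D:\SQLBackups\EVCustomerDB.trn' WITH INIT;
-- Rebuild the indexes (repeat per table, or script across all tables):
ALTER INDEX ALL ON dbo.tblIntDiscoveredItems REBUILD;
-- Update statistics database-wide:
EXEC sp_updatestats;
```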

If SQL Maintenance is not effective or is not performed, the Indexes can remain highly fragmented and the Statistics can be outdated. This can reduce overall performance as the SQL Query Optimizer is forced to use outdated (stale) Statistics and slow Indexes. Consider the following example scenarios:

- Outdated Statistics and highly fragmented Indexes
If the Query Optimizer determines from the outdated Statistics of previously run queries and their results that it should skip the use of a particular Index, it will instruct the Execution Plan to use an index scan (or table scan), the execution of which could take considerably longer and consume more resources.

- Up-to-date Statistics and highly fragmented Indexes
If the Statistics are up to date, the Execution Plan may determine that an index seek (which uses the Index to navigate directly to the required rows, as determined by the predicate in the query, stored procedure or function being processed) is more efficient than an index scan. However, if the Index fragmentation is high, the query execution could still take longer and consume more resources.

- Outdated Statistics and minimally fragmented Indexes
If the Execution Plan determined from the outdated Statistics that an index scan was needed, the query execution could take longer and consume more resources, even though the Indexes had minimal fragmentation (remember - fragmentation will rarely remain at 0%). Had the Statistics been updated, the Execution Plan could have chosen a more efficient index seek.

- Up-to-date Statistics and minimally fragmented Indexes
If Index fragmentation was low (remember - fragmentation will rarely remain at 0%) and the Statistics were up to date, the Execution Plan could decide on an index seek and use an Index that returns results quickly and efficiently, reducing the overall time taken and resources consumed.

How to review the Statistics health:

Run the following query against the database and review the percent_change since the last_stat_update:

SELECT object_name(i.object_id) as 'table_name', i.name as 'index_name', STATS_DATE(i.object_id, i.index_id) as 'last_stat_update',
(case WHEN rowcnt>rowmodctr THEN abs((1-(cast(rowmodctr AS float)/cast(rowcnt AS float)))*100)
WHEN rowcnt<rowmodctr THEN abs((1-(cast(rowcnt AS float)/cast(rowmodctr AS float)))*100)
ELSE 0.0 END) as 'percent_change'
FROM sys.indexes i
LEFT JOIN sys.objects o ON o.object_id = i.object_id
LEFT JOIN sys.sysindexes si ON o.object_id = si.id
WHERE o.type='U' AND rowmodctr>0 AND rowmodctr<rowcnt
AND (case WHEN rowcnt>rowmodctr THEN abs((1-(cast(rowmodctr AS float)/cast(rowcnt AS float)))*100)
WHEN rowcnt<rowmodctr THEN abs((1-(cast(rowcnt AS float)/cast(rowmodctr AS float)))*100) ELSE 0.0 END) > 5
ORDER BY 'last_stat_update', 'percent_change' DESC;
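A table flagged by the query above can also have its statistics refreshed individually rather than database-wide. A minimal sketch (the table name below is a placeholder):

```sql
-- Hypothetical sketch; table name is a placeholder.
UPDATE STATISTICS dbo.tblIntSearches WITH FULLSCAN;
```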


How to review the Indexes health using sys.dm_db_index_physical_stats:

Run the following query against the database in question:

SELECT '[' + DB_NAME() + '].[' + OBJECT_SCHEMA_NAME(ddips.[object_id], DB_ID()) + '].[' + OBJECT_NAME(ddips.[object_id], DB_ID()) + ']' AS [Object],
     i.[name] AS [Index],
     ddips.[index_type_desc] AS [Index Type],
     ddips.[index_id] AS [Index ID],
     ddips.avg_fragmentation_in_percent AS [Average Fragmentation (%)],
     CAST(ddips.[avg_fragment_size_in_pages] AS INT) AS [Average Fragment Size (pages)],
     ddips.[fragment_count] AS [Fragments],
     ddips.[page_count] AS [Pages]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') ddips
INNER JOIN sys.[indexes] i ON ddips.[object_id] = i.[object_id] AND ddips.[index_id] = i.[index_id]
WHERE ddips.[avg_fragmentation_in_percent] > 10 AND ddips.[page_count] > 1000
ORDER BY ddips.avg_fragmentation_in_percent DESC;


The query lists all Indexes with more than 1000 pages and more than 10% average fragmentation. The average fragmentation reported by dm_db_index_physical_stats represents logical fragmentation for clustered/non-clustered indexes and extent fragmentation for heaps. Review the results for any rows with average fragmentation higher than 20%.
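The fragmentation results can also be used to generate the corresponding maintenance statements. The sketch below uses the commonly cited (not Veritas-specific) thresholds of REORGANIZE for 10-30% fragmentation and REBUILD above 30%; review the generated statements before running them:

```sql
-- Hypothetical sketch: generate REORGANIZE/REBUILD statements from fragmentation levels.
SELECT 'ALTER INDEX [' + i.[name] + '] ON ['
     + OBJECT_SCHEMA_NAME(ps.[object_id]) + '].[' + OBJECT_NAME(ps.[object_id]) + '] '
     + CASE WHEN ps.avg_fragmentation_in_percent > 30
            THEN 'REBUILD;' ELSE 'REORGANIZE;' END AS [Suggested Maintenance]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON ps.[object_id] = i.[object_id] AND ps.index_id = i.index_id
WHERE ps.avg_fragmentation_in_percent > 10
  AND ps.page_count > 1000
  AND i.[name] IS NOT NULL;  -- exclude heaps
```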

How to review the Indexes health using DBCC SHOWCONTIG:

Run the following query against the database in question:

DBCC SHOWCONTIG WITH ALL_INDEXES, TABLERESULTS;

Review the output for any lines with more than 1000 pages:
- Logical Scan Fragmentation - should be under 1% (Latency occurs once logical scan fragmentation exceeds 10%)
- Extent Scan Fragmentation - should be under 50%
- Avg. Page density (full) - should be over 90%

Note - If Statistics become outdated and/or Indexes become fragmented faster than expected, it may be necessary to run SQL Maintenance more often. It is also recommended to run SQL Maintenance with the Accelerator service stopped - for more information, see the article 'How to automatically back up and perform recommended maintenance for Enterprise Vault SQL databases' under Related Articles.

Important Note - Shrink Database option discussion:

Shrinking the database is an optional step that could be done to reorganize the pages within the database toward the beginning of the database file which MAY allow for improved database performance. Note this is NOT a guaranteed result. One example of when a shrink database operation could (but not necessarily should) be performed is after an upgrade of EV, CA or DA, as this might involve migrating and dropping one or more tables. However, the need for running a shrink operation should be carefully reviewed by the SQL DBA responsible for the databases.

Running a shrink on a regular basis is not recommended, and there are numerous online references stating as much; the following examples are from a trusted SQL resource:
- https://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/
- https://www.brentozar.com/archive/2017/12/whats-bad-shrinking-databases-dbcc-shrinkdatabase/

Shrinking a database should not include the option to return file space back to the operating system unless there is a critical need for that space to be returned, as noted in the SQL Best Practices Guide (see the Related Articles section below):

Note: The database maintenance plan should not include a data file shrink, to avoid unnecessary file growths. However the database log files may need to be shrunk after backing up.

Shrinking the database file size can and will cause requests to autogrow the file more frequently than before the shrink operation. When an autogrowth request is submitted, data processing for the database is paused until the new space is available for use.

A shrink operation increases the fragmentation of table indexes; therefore, a shrink operation MUST be followed by the index maintenance operation of reorganizing or rebuilding the indexes. Note that if an index maintenance operation does not reduce fragmentation levels, even after multiple runs with the EV/CA/DA services stopped, and there is a large amount of free space in the database, a shrink operation with the option to return file space to the operating system may need to be run, followed by the index maintenance operation to rebuild the indexes. Additionally, a shrink operation can also cause the database statistics to become stale; therefore, an update of the statistics must also follow a shrink operation. Updating the statistics and rebuilding/reorganizing the indexes can be done in any order after the shrink operation.
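If a shrink is deemed necessary, the required sequence can be sketched as follows (the database and table names are placeholders; the shrink must always be followed by index and statistics maintenance as described above):

```sql
-- Hypothetical sketch; database and table names are placeholders.
DBCC SHRINKDATABASE ([EVCustomerDB], 10);  -- shrink, leaving 10 percent free space
ALTER INDEX ALL ON dbo.tblIntDiscoveredItems REBUILD;  -- repeat per table
EXEC sp_updatestats;  -- refresh the statistics made stale by the shrink
```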

For additional information on this topic, see the Microsoft article https://learn.microsoft.com/en-us/sql/relational-databases/databases/shrink-a-database?view=sql-server-ver16.



