This troubleshooting article describes common issues encountered when working with deduplication storage, how to resolve them, and what information to collect so that Veritas Technical Support can resolve them faster for you.
Note: Backup Exec versions 2014, 15, and 16 use PostgreSQL as the backend database in the deduplication storage folder. Backup Exec 20 does not use PostgreSQL, so some commands suited to Backup Exec 2014, 15, and 16 may not apply to Backup Exec 20 deduplication storage.
Deduplication Storage Folder Issues:
- Deduplication folder is offline
- Backup fails when targeted to a deduplication device but works on a regular disk storage.
- Restore from a deduplication device does not work.
- Deduplication storage folder is full
1. Deduplication folder is offline
Dedupe service not starting, Dedupe credentials not correct, Dedupe folder corruption
It is important to start your checks by making sure the disk is healthy.
- Run a CHKDSK to ensure the drive on which the Deduplication Folder is hosted is free from any corruption.
- Ensure Windows indexing is unchecked for this drive.
- Ensure an AV exclusion is set for the drive on which Deduplication Folder is created.
Deduplication folder in Backup Exec may show offline due to any of the following:
- Deduplication service not starting
- Deduplication folder credentials in Backup Exec (BE) have changed and were not updated in the Deduplication SPA database
- Corruption inside the dedupe folder.
Ensure all dedupe services can be started. The Deduplication Engine, Deduplication Manager, and PostgreSQL services all run under the Local System account. If any of them does not start, check the following:
- PostgreSQL or any other deduplication service startup issues due to registry set incorrectly. Confirm ImagePath is accurate. This can be compared with another working system if available:
- Apart from errors in the Windows Event Viewer, there are logs that can be referred to for each dedupe service startup issue
Known Issue with Postgres startup:
Deduplication Service Startup -
There can be various reasons why the Deduplication Engine does not start. Running spoold.exe --test from the BE install path tests the contentrouter.cfg file in Dedupe\etc\Puredisk. If this file has been tampered with, the Deduplication Engine will not start (call support to rectify the file).
- If the services start up, it is also important to verify that queue processing in dedupe is working. If tlogs (inside dedupe\queue) are older than a couple of days and are not going away, this could mean something is wrong with the internal queue process.
Tlogs should commit automatically and under no circumstances should they be deleted by anyone (manually deleting tlog files can badly affect the dedupe folder and backup sets).
Note: Any errors for queue process will be recorded in Dedupe\log\spoold\storaged.log
See https://www.veritas.com/docs/000087645 - A known case where queue processing does not run.
Queue processing can be manually triggered by running crcontrol.exe --processqueue from a command prompt (run it from the BE install path, and run the command twice) to see if the tlogs inside the queue folder get cleared.
Note: Read the log from the bottom as the latest entries are added at the very end.
If errors are still being reported in storaged.log then contact Veritas Technical Support.
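The tlog-age check described above can be scripted. The sketch below is a minimal, hedged example: the queue folder path and the two-day threshold are illustrative, and it only lists stale files — it never deletes them, for the reasons stated above.

```python
import os
import time


def stale_tlogs(queue_dir, max_age_days=2):
    """Return names of files in queue_dir older than max_age_days.

    A non-empty result suggests queue processing may be stuck.
    Files are only inspected, never deleted: manual deletion of
    tlogs can badly affect the dedupe folder and backup sets.
    """
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for name in os.listdir(queue_dir):
        path = os.path.join(queue_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            stale.append(name)
    return sorted(stale)
```

Point `stale_tlogs` at your dedupe\queue folder; if the list stays non-empty after triggering queue processing, involve support rather than touching the files.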
- If the dedupe folder is still offline even though all deduplication services are running, restart the Backup Exec services. If the deduplication folder remains offline, refer to adamm.log in the Backup Exec install path\logs folder and review it from the bottom.
Example: See adamm log snippet below. In this case, the dedupe user's password was not correctly updated.
See: How to change the password for the logon account used for the dedupe storage folder
Here's how to look for a similar section in your adamm.log.
- Open adamm.log,
- Go to the bottom of the file
- Search for the following string "adamm log started".
- When you find it, start reading the log downwards from there. Review these sections and note any errors:
 02/24/16 03:09:24.300 Read OST Server Records - start
 02/24/16 03:09:24.345 OST Server: PureDisk:BE-CAS:PureDiskVolume
 02/24/16 03:09:24.345 Read OST Server Records - end
 02/24/16 03:09:24.345 DeviceIo Discovery - start
 02/24/16 03:09:26.431 DeviceIo: STS: Critical: (Storage server: PureDisk:BE-CAS) PdvfsRegisterOST: Failed to register with SPA on storage server BE-CAS. Check to make sure the server is on and that the services are running. (Permission denied) V-454-25
 02/24/16 03:09:26.432 DeviceIo: sts_open_server PureDisk:BE-CAS dedup 2060029
 02/24/16 03:09:26.432 DeviceIo: ostaspi: sts_open_server PureDisk:BE-CAS as dedup error 2060029
 02/24/16 03:09:26.432 DeviceIo: ostaspi: authorization with server PureDisk:BE-CAS has failed
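The reading pattern above (locate the last "adamm log started" marker, then read downward for errors) can be sketched as a small script. The marker string follows the article; the error keywords ("Critical", "Failed", "error") are assumptions based on the snippet shown, so adjust them to what your log actually contains.

```python
def adamm_errors(log_text, keywords=("Critical", "Failed", "error")):
    """Scan the most recent adamm session for likely error lines.

    Finds the LAST occurrence of "adamm log started", then returns
    every subsequent line containing one of the keywords.
    """
    lines = log_text.splitlines()
    start = 0
    for i, line in enumerate(lines):
        if "adamm log started" in line:
            start = i  # keep updating so we land on the last session
    return [l for l in lines[start:] if any(k in l for k in keywords)]
```

Run it over the adamm.log contents; an empty result for the latest session means the offline cause is likely elsewhere.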
SGMON.exe can also be used to debug a Deduplication Folder offline issue. If all deduplication-related services start without any problem, then shut down only the Backup Exec services, launch SGMON with Device and Media verbose (verbose can be enabled from SGMON settings), and start all Backup Exec services.
Note: SGMON.exe is present in Backup Exec Install Path
Filter SGMON or any other log with the string "ERR" as shown below. The SGMON log file may be named differently, so check the log location to confirm the name of the log file.
C:\Program Files\veritas\Backup Exec\Logs>findstr /C:"ERR" BE-CAS-SGMon.log > SGMON_ERR.log
PVLSVR: [02/24/16 03:18:32]  DeviceIo: STS: Error: [ERROR] PDSTS: pd_register: PdvfsRegisterOST(BE-CAS) failed (13:Permission denied)
PVLSVR: [02/24/16 03:18:32]  DeviceIo: STS: Error: [ERROR] PDSTS: add_mount: PdvfsMount() failed for mount point:<BE-CAS#1> (13:Permission denied)
PVLSVR: [02/24/16 03:18:32]  DeviceIo: STS: Error: [ERROR] PDSTS: open_server: pd_mount() failed (2060029:authorization failure)
PVLSVR: [02/24/16 03:18:32]  DeviceIo: STS: Error: [ERROR] PDSTS: impl_open_server: open_server(PureDisk:BE-CAS) failed (2060029:authorization failure)
PVLSVR: [02/24/16 03:18:32]  DeviceIo: STS: Error: [ERROR] PDSTS: pi_open_server_v7: impl_open_server(PureDisk:BE-CAS) failed (2060029:authorization failure)
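The findstr filter shown above has a simple script equivalent if you prefer one. A minimal sketch: the "ERR" marker follows the article, and the file names are whatever your SGMON log is actually called.

```python
def filter_log(src_path, dst_path, marker="ERR"):
    """Copy only the lines containing marker from src_path to dst_path.

    Roughly equivalent to: findstr /C:"ERR" <src> > <dst>
    Returns the number of matching lines written.
    """
    count = 0
    with open(src_path, "r", errors="ignore") as src, \
         open(dst_path, "w") as dst:
        for line in src:
            if marker in line:
                dst.write(line)
                count += 1
    return count
```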
There could be multiple reasons for dedupe being offline even if the services are started; this was just one example. One thing to note is that deleting the dedupe folder from the UI and re-importing it will not bring the dedupe folder online, for the reasons discussed above.
Note: While creating or re-importing a Deduplication Storage Folder in Backup Exec, refer to pdde-config.log if an error is seen during creation/re-import. The log file is present in DedupeFolder\log.
See Deduplication Folder creation or recreation fails with "An Error Occurred while Creating the deduplication storage folder".
Sometimes it may be required to delete the EtcPath and ConfigFilePath String Value from the following Windows Registry location:
Note: This only applies to re-import cases.
Incorrect use of the Windows registry editor may prevent the operating system from functioning properly. Great care should be taken when making changes to the Windows registry. Registry modifications should only be carried out by persons experienced in the use of the registry editor application. It is recommended that a complete backup of the registry and workstation be made prior to making any registry changes.
- After upgrading to Backup Exec 2014, the Deduplication Manager (SPAD) is seen to crash on startup, See https://www.veritas.com/docs/000022713.
2. Backup fails when targeted to Deduplication folder and works when targeted to normal disk storage
It is important to narrow down the issue if the backup problem only occurs when directed to a deduplication folder. It may be caused by the kind of resource being backed up, by something outside the deduplication storage that affects the backup, or by something within dedupe.
- If the deduplication folder is online and the backup is queued.
There could also be other discrepancies; refer to another issue: https://www.veritas.com/docs/000088005
- Backup to dedupe may not run and shows the status "Ready; No idle devices are available" even when it has no active jobs.
Note: Nothing should be deleted or touched in the above path. Just view the content and attempt to restart Backup Exec services to dismount the logical drives if there are any. The server name in the path should be replaced by the hostname of the server where deduplication folder is located.
- Client Side Backups failing with Read Write Errors.
On the Backup Exec server, a KeepAliveTime registry DWORD can be created and set to decimal 5000 (note: reboot the server after making the change) to test if it helps.
If client-side backup still fails, capture an SGMON log with Job Engine, Backup Exec Server, and Device and Media selected (from settings, select verbose for Device and Media) while the backup is running. Also collect the SGMON log from the client server. More details about logging can be found here - https://www.veritas.com/docs/000005927.
There are some additional logs which can also be collected for analysis by Veritas Technical Support staff, at DedupeLocation\log\spoold\remoteclientserver\beremote\store
The above log location can be interpreted as spoold connecting to the remote server's process (the beremote process), with beremote making a store connection (i.e. a backup) with the dedupe server. The logs inside this folder can be checked for connection reset errors, in case the connection is being aborted. Antivirus software can sometimes cause these aborts, so to isolate the issue, if possible, uninstall the antivirus software and retest. It is sometimes observed that merely disabling the antivirus application does not help, hence the recommendation to test by uninstalling it (note: this is not a solution, but doing this helps isolate the problem).
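Checking those store logs for aborted connections can be automated. The sketch below walks a log directory and reports which files mention a marker string; the "connection reset" marker and the directory layout are assumptions for illustration, so adjust them to the actual error text in your logs.

```python
import os


def files_mentioning(root, marker="connection reset"):
    """Walk root and return relative paths of files containing marker."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    if marker in f.read():
                        hits.append(os.path.relpath(path, root))
            except OSError:
                continue  # unreadable file: skip it, don't fail the scan
    return sorted(hits)
```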
- Network issues may cause optimized duplication between the backup servers or to an LSU to fail with these read/write errors - see https://www.veritas.com/docs/000103306
SGMON.exe is a utility which helps to debug many issues in Backup Exec. We can use SGMON to debug this issue while the optimized duplication is running. This needs to be done on the Backup Exec server which is running the optimized duplication operation.
Additionally, look at the replication.log located at DedupeFolderlocation\log\spad on the primary deduplication backup (i.e. 1st copy). On the target side, i.e. where the optimized duplication is targeted, check DedupeDrive:\log\spoold\primarydedupeBEservername\spad.exe\Store\ and review the largest logs there for clues.
Also review Windows Event Viewer and ADAMM log from the Backup Exec server for more information.
For optimized duplication between hardware OST appliances, the SGMON log and the vendor's OST plugin logs captured during the opt dupe failure will be needed to narrow down the issue.
- Client-side deduplication is enabled for this job, but it could not be used (https://www.veritas.com/docs/000041175). Review adamm.log in BE install path\logs: open the log, search upward from the bottom for the string "DeviceIo Discovery - start", then read downward from there. For the remote server for which the client-side exception is seen, check for any errors reported. For example: 04/21/16 09:26:51.801 DeviceIo: Discover: configure remote OST server PureDisk:BE-CAS for dbclient1.lab.symc failed, error=7. More about the error can be found by running net helpmsg 7 in a command prompt; it shows "The storage control blocks were destroyed." This could be related to DNS, or to a firewall disrupting the connections that the Backup Exec remote agent makes to the Deduplication Engine and Deduplication Manager service ports. Updating the firewall rules, or adding correct IP and hostname entries to each server's hosts file (on both the BE server and the remote server for which client-side is not working), may do the trick. After making the corrective change, restart the Backup Exec Device and Media service and check adamm.log again to see whether the same error appears for the same client. If the error no longer appears for the remote client, test the client-side backup again.
- For client-side backups or opt dupe jobs, always edit the verify option so that it runs as a separate job. This way it runs locally on the BE server hosting the deduplication folder. Choose from Backup Exec job settings -> Verify -> After the job finishes, as a separate job.
3. Restore from a deduplication device does not work
- It is important that the backup set is verified, i.e. from within the deduplication store in the Backup Exec console -> Storage -> Dedupe Folder details -> Backup Set -> highlight the backup set that is being restored -> right-click on it and select Verify.
- If the Verify job fails: There may be an issue with that backup set, in which case check whether the backup was successful for that resource.
- If the Verify job is successful: Duplicate the backup set on a normal disk storage (B2D) and then test the restore.
- Test the restore from other backup sets of the same resource to narrow down if this is an issue with specific backup set.
- Perform a backup of the same resource to a normal disk (i.e. new backup to B2D) and then perform a test restore.
- One other thing that can be tested is disabling Client-Side Deduplication from the Deduplication Folder properties in the Backup Exec console -> Storage (again, this is just for testing, to narrow down and work around the issue). This will prompt a restart of the Backup Exec services. Once the services are restarted, attempt the restore and see how it goes. If this does not help, contact Veritas Technical Support to investigate further.
4. Deduplication storage folder is full
a. Check if the deduplication device is nearing full capacity:
The capacity column available in the Storage tab of the Backup Exec console shows the usage of a disk storage. If it is RED, it is a warning that the deduplication device may be nearing its full capacity.
Note: 95% usage is the highest we have seen a deduplication folder get to. This is the time to reclaim space within deduplication storage so that newer backups can run. It is recommended that the "percentage of disk space to reserve for non-Backup Exec operations" value never be lowered below 5%.
b. Check the deduplication statistics:
The following command can be run from a command prompt in the BE install path to check the real deduplication statistics: crcontrol.exe --dsstat
The parameters to look at are as follows:
- Use Rate - Should be less than 95, preferably in the 70-80 percent range, so that future backups can run without worrying about space on dedupe
- Catalog Logical Size - This is the front-end data (the uncompressed, original size) that is currently residing in the deduplication folder.
- Space Allocated For Containers - This is the space taken by the Dedupe containers i.e. the content inside the Dedupe\data folder.
- Space Used Within Containers - If this is near or equal to Space Allocated For Containers, then for newer backups (if more unique data is to be backed up) new containers will be created, hence more space will be needed. At this point, if the use rate is high and there is no space available within containers, the dedupe disk volume may need to be extended or space needs to be reclaimed by deleting existing backup sets.
- Space Available Within Containers - Each container within the dedupe data folder is 256 MB. Some containers might fill up completely and some may not. This leftover space from all containers is called space available within containers. If there is ample space within containers then backups can run, but again be cautious of the use rate.
- Space Need Compaction - This is the deleted/dereferenced space from within space used within containers which is still taking up space within dedupe containers (i.e. the bin/bhd files within Dedupe\data). This is not counted while calculating the used percentage, but if it is high (in TBs), run crcontrol.exe --compactstart 100 0 1 (from a command prompt in the BE install path; it may take a while to complete). Always open the command prompt in elevated mode. To monitor the status, you can run crcontrol.exe --dsstat to monitor the Deduplication Folder statistics.
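The statistics above can also be pulled out of the `crcontrol.exe --dsstat` output programmatically. The sketch below parses simple "Name : value" lines into a dict and flags a high use rate; the exact --dsstat output format is an assumption here, so adjust the parsing to the real output on your system.

```python
def parse_dsstat(text):
    """Parse simple 'Name : value' lines into a dict of strings.

    Assumes one statistic per line with a single ':' separator;
    the real --dsstat output may need richer parsing.
    """
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats


def use_rate_warning(stats, threshold=95.0):
    """True if 'Use Rate' meets or exceeds threshold percent (keep it under 95)."""
    rate = stats.get("Use Rate", "0").rstrip("%")
    return float(rate) >= threshold
```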
c. Manual space reclamation:
If space needs to be reclaimed, follow this technical article: Manual space reclamation for Deduplication Storage Folder in Backup Exec 2012 and above.
Note: It is recommended to stop backups (i.e. back up to another storage) while attempting to manually reclaim space within the dedupe folder, since it may be difficult to identify how much space has been reclaimed if backups keep adding data while the reclaim process is being carried out.
Points to remember when reclaiming space manually from deduplication storage:
- From the dedupe stats, review the deduplication ratio. If it is high, i.e. you are getting a very good dedupe ratio, you may need to expire more sets to release some space; that is because dedupe does not release the space for a chunk until the last reference to that chunk is gone (refer to the reclaim article for more details).
- When backup sets are manually expired from the Backup Exec console, core BE services delete the media and catalog references of that backup set from within BE (BEDB, catalogs, etc.). This activity is recorded in the BE audit log (go to Backup Exec console -> Configuration and Settings -> Audit Log -> choose the "Backup Set Retention" category from the drop-down to check whether the media used for that backup set was deleted). For assistance in identifying which media was used by the backup set, see point no. 2 under the solution in article number 000107956.
- If point 2 has worked (if not, contact Technical Support), the BE Deduplication Manager is notified that these media references need to be deleted from within the deduplication folder. Deduplication Manager then delegates to the Deduplication Engine service the responsibility to delete these references and update the Deduplication Engine database (to update the dedupe engine database, tlogs are created; tlogs are a way to perform updates to the deduplication database). To commit the tlogs, queue processing (as the manual reclaim article mentions) needs to be performed a couple of times, and that is when the dedupe stats need to be checked to decide whether more backup sets need to be expired to lower the use rate.
- Space available within containers increases after the manual reclaim process is followed. Space needs compaction may also be high after this process. crcontrol.exe --compactstart 100 0 1 can be run to return available space within containers and space needing compaction to the file system. This command may need to be run a few times (use a command prompt and run it from the BE install path).
Smaller than expected deduplication ratio
Within the job log the deduplication ratio is displayed, in this example it is 72.9%
Deduplication Stats::PDDO Stats for (Servername): scanned: 543241 KB, CR sent: 147034 KB, CR sent over FC: 0 KB, dedup: 72.9%
A smaller than expected deduplication ratio can have multiple root causes like the type of backup (AVVI or Remote Agent backup), the data itself like file system data versus a database or even a virtual machine (VM).
Before identifying whether it is related to a specific type of backup or type of data, clarify whether the issue exists for every backup job or just a few jobs.
If it affects every backup job then it is most likely a more general issue, please follow these steps:
Please use DCSCAN to validate the contents of a Deduplication Storage Folder as a whole.
For more info about DCSCAN please see:
From a command line window:
- cd "\Program Files\veritas\Backup Exec"
- crcontrol --compactoff (Note: two dashes before compactoff)
- dcscan --verify -q -H -a > crcerrors.txt 2>&1 (Note: two dashes before verify)
- crcontrol --compacton (Note: two dashes before compacton)
If the crcerrors.txt file is zero bytes, then the dcscan program did not find any corruption inside the deduplication folder.
If the crcerrors.txt file reports an error, start researching the error reported.
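The zero-byte check on crcerrors.txt can be done by hand or scripted. A minimal sketch (the report file name follows the dcscan command above):

```python
import os


def dcscan_found_corruption(report_path="crcerrors.txt"):
    """Return True if the dcscan report is non-empty (errors were reported).

    A zero-byte report means dcscan found no corruption inside the
    deduplication folder; a missing report also returns False.
    """
    return os.path.exists(report_path) and os.path.getsize(report_path) > 0
```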
Example of a possible error and solution:
25017: FileCopyA: could not open destination file D:\Dedupe\data/journal/delstat_0.log
This points to a problem with delstat.log
For more information about this particular error and its solution see:
- DO NOT delete any files from within the deduplication storage folder unless it has been validated by Veritas Technical Support.
- Backup Exec 2014, 15, and 16 use deduplication version 7 and run a lot faster and smoother than previous older BE versions.
- Backup Exec 20 uses dedupe version 10, which improves further on the older dedupe version 7 that ships with Backup Exec 2014, 15, and 16.
- Upgrading Backup Exec to version 20 involves a dedupe storage conversion: the PostgreSQL service is removed and the dedupe storage folder is converted to the new dedupe version format.
UMI Code: V-275-550