BUG REPORT: PDDO Duplications with PureDisk 6.6 fail with Status 84 "sts_copy_extent failed: error 2060014 operation aborted"

  • Article ID:100022473

Problem

BUG REPORT: PDDO Duplications with PureDisk 6.6 fail with Status 84 "sts_copy_extent failed: error 2060014 operation aborted"

Error Message

Status 84 - media write error

Solution

Overview:
PDDO Duplications with PureDisk 6.6 fail with Status 84 "sts_copy_extent failed: error 2060014 operation aborted"

PDDO replication jobs fail with:
Error: 2 : CRReplicate: Could not receive DO b8a61d743b47de956e4d1ca9d469d3fc to replicate: no such object.
This occurs when a file is expired by PDDO data removal and PDDO replication runs before Metabase (MB) garbage collection, so the replication job references a data object that has already been de-referenced.

Log Files:
bpdm:
16:13:36.612 [5352.5048] <2> Puredisk: check_pdvfs_job id=5863 state=FAILED
16:13:36.612 [5352.5048] <16> Puredisk: impl_copy_extent check_pdvfs_job jobId 5863 unexpected state 4
16:13:36.612 [5352.5048] <2> Puredisk: impl_copy_extent exit 2060014 operation aborted
16:13:36.612 [5352.5048] <2> Puredisk: pi_copy_extent_v9 exit (2060014:operation aborted)
16:13:36.612 [5352.5048] <2> set_job_details: LOG 1257455616 32 bpdm 5352 sts_copy_extent failed: error 2060014 operation aborted
16:13:36.612 [5352.5048] <32> bp_sts_copy_extent: sts_copy_extent failed: error 2060014
16:13:36.612 [5352.5048] <2> set_job_details: LOG 1257455616 32 bpdm 5352 image copy failed: error 2060014: operation aborted
16:13:36.612 [5352.5048] <32> copy_data: image copy failed: error 2060014:

Source SPA:
[2009-Nov-05 16:03:46 EST][stream0] Forwarding data (NUMBER OF FINGERPRINTS in this batch:8)
[2009-Nov-05 16:03:46 EST][stream0] Error: 2 : CRReplicateReceiveDO: DO download failed do fingerprint b8a61d743b47de956e4d1ca9d469d3fc
[2009-Nov-05 16:03:46 EST][stream0] Error: 2 : CRReplicate: Could not receive DO b8a61d743b47de956e4d1ca9d469d3fc to replicate: no such object

On Source SPA:
# grep -r b8a61d743b47de956e4d1ca9d469d3fc /Storage/history/*
/Storage/history/dataobjects/2009-11-01:1257050791,0,4342,Cb8a61d743b47de956e4d1ca9d469d3fc
/Storage/history/dataobjects/2009-11-02:1257181412,0,4342,b8a61d743b47de956e4d1ca9d469d3fc,DB_DEL
/Storage/history/dataobjects/2009-11-02:1257181579,0,4342,b8a61d743b47de956e4d1ca9d469d3fc,ST_DEL
/Storage/history/segments/2009-11-01:1257050791,1751,4342,0,b8a61d743b47de956e4d1ca9d469d3fc,922638539,c3473c960edfa78a6e67cf4dcfb3e098,1
/Storage/history/segments/2009-11-03:1257225234,1751,4342,0,b8a61d743b47de956e4d1ca9d469d3fc,DB_DEL
/Storage/history/segments/2009-11-03:1257225627,1751,4342,0,b8a61d743b47de956e4d1ca9d469d3fc,ST_DEL
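The history entries above show the fingerprint recorded with DB_DEL (database delete) and ST_DEL (storage delete) markers. A minimal sketch of that check as a script is shown below; the fingerprint and record layout are taken from the example above, and sample history lines are inlined so the snippet is self-contained (in production you would grep /Storage/history/* directly, as shown above):

```shell
# Check whether a data object fingerprint carries deletion markers in the
# PDDO history records. Sample lines mirror the grep output above.
FP=b8a61d743b47de956e4d1ca9d469d3fc
HIST=$(mktemp)
cat > "$HIST" <<'EOF'
1257050791,0,4342,Cb8a61d743b47de956e4d1ca9d469d3fc
1257181412,0,4342,b8a61d743b47de956e4d1ca9d469d3fc,DB_DEL
1257181579,0,4342,b8a61d743b47de956e4d1ca9d469d3fc,ST_DEL
EOF
# In production, replace "$HIST" with: grep -r "$FP" /Storage/history/*
if grep "$FP" "$HIST" | grep -qE 'DB_DEL|ST_DEL'; then
    STATUS=DELETED    # the DO was removed; replication of it will fail
else
    STATUS=PRESENT
fi
echo "$FP: $STATUS"
rm -f "$HIST"
```

A fingerprint flagged DELETED here is exactly the "no such object" case reported by CRReplicate in the SPA log.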

# /opt/pddb/bin/psql -U pddb mb -c "SELECT dirname,basename,cdoref,type,modtype,registertime,statusid FROM ds_2 WHERE cdoref='3ad5fc65269163799266e9186165f4d4' ORDER BY registertime;"
dirname | basename | cdoref | type | modtype | registertime | statusid
-------------------------+----------------------------------------+----------------------------------+------+---------+--------------+----------
/svr1/New_FS_Test | svr1_1257035535_C1_HDR.info[R_1] | 3ad5fc65269163799266e9186165f4d4 | 0 | M | 1257035553 | 5
(1 row)

From the above output, statusid = 5 indicates that this data has been de-referenced. The optimized duplication is therefore querying de-referenced data, which causes the failure.
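The statusid check can also be scripted. The sketch below filters metabase rows for statusid = 5; the row is the sample from the query above, inlined with the padding spaces removed so the snippet is self-contained (against a live metabase you would feed it unaligned psql output, e.g. `psql -A -t`):

```shell
# Flag de-referenced metabase entries (statusid = 5) in pipe-delimited rows.
# Column order follows the ds_2 query above:
#   dirname|basename|cdoref|type|modtype|registertime|statusid
ROWS=$(mktemp)
cat > "$ROWS" <<'EOF'
/svr1/New_FS_Test|svr1_1257035535_C1_HDR.info[R_1]|3ad5fc65269163799266e9186165f4d4|0|M|1257035553|5
EOF
# Print the cdoref of every row whose statusid (field 7) is 5.
DEREFERENCED=$(awk -F'|' '$7 == 5 { print $3 }' "$ROWS")
echo "De-referenced cdorefs: $DEREFERENCED"
rm -f "$ROWS"
```

Any cdoref reported here belongs to de-referenced data, and an optimized duplication that queries it will fail as described.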

Workaround:
Run MB garbage collection before running optimized duplications, so that expired data objects are fully removed before replication attempts to reference them.

Formal Resolution:
This issue is addressed in the PureDisk 6.6.0.2 hotfix. See Related Articles below for a download link.



Related Articles

Hotfix NB_PDE_6.6.0.2_339244.tar provides critical fixes to Veritas NetBackup (tm) PureDisk 6.6
