BE interaction with OpenDedupe components during normal operation

Article: 100034234
Last Published: 2017-11-02
Product(s): Backup Exec

Problem

This article describes how to identify the configuration files, log files, and log entries associated with OpenDedupe.

Solution

BE interaction with OpenDedupe components during normal operation

After the OpenDedupe components have been installed and configured, the following sequence of events occurs within BE when the BE services are started:
  • The pvlsvr.exe process (also known as "ADAMM") discovers all devices.  It discovers OpenDedupe devices in the same way as other OST devices.  The act of making an OST connection invokes APIs in the OpenDedupe OST plugin.
  • The OpenDedupe OST plugin gets its configuration information from a file named "ostconfig.xml" in the OpenDedupe installation directory (by default "C:\Program Files\SDFS\etc\ostconfig.xml").  Within that configuration file is an entry that causes AUTOMOUNT activity.
  • If the OpenDedupe file system volume has not already been mounted, the OST plugin mounts it by invoking the "mountsdfs" command.
  • If OpenDedupe OST logging has been configured for Debug output, you should see the following entries in a file named "C:\Program Files\SDFS\logs\OpenDedupe.log":
2017-06-23 13:33:18,145538 | DEBUG| int __cdecl stspi_open_server(const sts_session_def_v7_t *,const char [],const sts_cred_v7_t *,const char [],struct bstsp_server_handle_s **) | In Function
2017-06-23 13:33:18,223700 | DEBUG| __cdecl EventServer::EventServer(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | init
. . .
2017-06-23 13:33:18,645534 | DEBUG| void __cdecl CreateChildProcess(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) |executing
2017-06-23 13:34:19,807813 | DEBUG| void __cdecl CreateChildProcess(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) |

If the mount fails, there will be other error messages in that log.

Additional information about the mount process is shown in a volume-specific log file.  For an OpenDedupe volume named "AWSDev1", the log file is named "C:\Program Files\SDFS\logs\AWSDev1-volume-cfg.log". Entries at the start of the mount process look like this, as the java.exe process starts and mounts the volume:
 
2017-06-23 13:33:48,006 [sdfs] [org.opendedup.sdfs.Config] [286] [main]  - ############ Running SDFS Version 3.4.2
2017-06-23 13:33:48,069 [sdfs] [org.opendedup.sdfs.Config] [294] [main]  - Parsing volume subsystem-config version 3.4.2
2017-06-23 13:33:48,069 [sdfs] [org.opendedup.sdfs.Config] [299] [main]  - parsing folder locations
2017-06-23 13:33:48,069 [sdfs] [org.opendedup.sdfs.Config] [328] [main]  - Setting hash engine to VARIABLE_MURMUR3
2017-06-23 13:33:48,225 [sdfs] [org.opendedup.hashing.HashFunctionPool] [55] [main]  - Set hashtype to VARIABLE_MURMUR3
2017-06-23 13:33:48,256 [sdfs] [org.opendedup.sdfs.io.Volume] [219] [main]  - Mounting volume F:\sdfs\volumes\AWSDev1\files
. . .
2017-06-23 13:34:18,307 [sdfs] [org.opendedup.sdfs.mgmt.Io] [57] [main]  - mounting F:\sdfs\volumes\AWSDev1\files to Q:\
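
If you want to confirm from the BE server that the mount completed, the following minimal sketch checks whether the drive letter is present and reports its capacity.  It assumes Python is available on the BE server and uses the mount point "Q:\" shown in the log excerpt above; substitute your own drive letter.

    import os
    import shutil

    # Mount point for the OpenDedupe volume, taken from ostconfig.xml.
    # "Q:\\" matches the example log above; substitute your own value.
    MOUNT_POINT = "Q:\\"

    if os.path.exists(MOUNT_POINT):
        usage = shutil.disk_usage(MOUNT_POINT)
        print(f"{MOUNT_POINT} is mounted: {usage.free:,} bytes free of {usage.total:,}")
    else:
        print(f"{MOUNT_POINT} is not mounted; check the volume-cfg log for errors")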

The adamm.log file in BE shows entries like the following:
 
[08120] 06/21/17 09:55:59.309 OST Server: OpenDedupe:AWSServer1:AWSDev1


When a backup job runs, the bengine.exe process uses the OST plugin to create and write a backup image.  If the ost-logging-global.conf file has been set up to collect Debug output, there are entries in the OpenDedupe.log file like this:

2017-06-23 13:40:10,698981 | DEBUG| class Image *__cdecl SampleLSU::createImage(struct ImageDefinition) | /BEOST_00000012.img fd=1082
2017-06-23 13:40:10,698981 | DEBUG| unsigned __int64 __cdecl writeODDFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,int,void *,void *,unsigned __int64,unsigned __int64) | https://localhost:6442/io/write/1082/0/6352?password=passwd


There are always corresponding entries in the volume-specific log "AWSDev1-volume-cfg.log":
 
2017-06-23 13:40:10,605 [sdfs] [org.opendedup.sdfs.mgmt.Io] [568] [Worker: RequestDispatcher: Thread-101]  - creating file F:\sdfs\volumes\AWSDev1\files\BEOST_00000012.img
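
The URLs in these entries show that the OST plugin communicates with the SDFS engine through a local HTTPS management interface.  If you need to confirm that the engine is listening, the following minimal sketch issues the volume-info request that also appears in the plugin log excerpts in this article.  It assumes Python is available on the BE server, the default port 6442, and the placeholder password "passwd" from the log excerpts; certificate verification is skipped on the assumption that the engine presents a self-signed certificate on localhost.

    import ssl
    import urllib.request

    # URL pattern taken from the OpenDedupe.log excerpts in this article;
    # substitute the port and password from your volume configuration.
    URL = "https://localhost:6442/?file=null&cmd=volume-info&password=passwd"

    # Skip certificate verification for this local diagnostic call, since the
    # management interface is assumed to use a self-signed certificate.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen(URL, context=ctx, timeout=30) as resp:
        print(resp.status, resp.read().decode("utf-8", errors="replace"))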
 

Configuration files associated with OpenDedupe

OpenDedupe uses three kinds of configuration files:
  • Volume-specific configuration file.  For example, for an OpenDedupe volume named "AWSDev1", the configuration file is named "AWSDev1-volume-cfg.xml".  It contains the location of the data cache files, the HTTP port used to communicate with the java.exe OpenDedupe engine process, the credentials for the cloud storage provider, and the name of the cloud container where the data is stored.  These configuration files are located in a subdirectory of the SDFS installation directory, by default "C:\Program Files\SDFS\etc".  They are created by the "mksdfs.exe" command installed with the OpenDedupe components.  The Veritas Backup Exec™ Configuration Tool for OpenDedupe invokes "mksdfs.exe" when you use it to configure a new OpenDedupe volume.
  • OST configuration file named "ostconfig.xml", which is used by the OpenDedupe OST plugin connector to associate an OST device with the corresponding OpenDedupe volume.  This file also contains the Windows drive letter where the volume will be mounted.  It can be created and modified with a text editor, or by the Veritas Backup Exec™ Configuration Tool for OpenDedupe (a short sketch after this list shows one way to review its contents).
  • OST logging configuration file named "ost-logging-global.conf" in the "etc" folder of the OpenDedupe installation directory.  If it doesn't exist, this file is automatically created by the Veritas Backup Exec™ Configuration Tool for OpenDedupe using standard values.  By default the tool creates a logging configuration that enables debug logging:
       *DEBUG:
           ENABLED = true
    For production use, you may want to change the value to "false" so the log does not grow too large.
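
To review which connections, volumes, and mount drive letters are currently registered in "ostconfig.xml" (the second file in the list above), the following minimal sketch dumps the file's contents.  It assumes Python is available on the BE server and uses the default path named earlier in this article; it simply prints whatever elements the file contains rather than assuming particular element names.

    import xml.etree.ElementTree as ET

    # Default location of the OST plugin configuration named earlier in this
    # article; adjust if SDFS was installed elsewhere.
    OSTCONFIG = r"C:\Program Files\SDFS\etc\ostconfig.xml"

    # Print every element that carries a value, without assuming any
    # particular element names.
    for elem in ET.parse(OSTCONFIG).iter():
        text = (elem.text or "").strip()
        if text:
            print(f"{elem.tag}: {text}")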
 

Data files associated with OpenDedupe

OpenDedupe uses two groups of files to contain data.  Their exact layout and format are controlled by the OpenDedupe software, and details about the exact structure are not readily available.
  • Volume-specific local disk cache files hold a default of 10 GB of frequently accessed data and index information for an OpenDedupe volume.  These files are located by default in a directory named after the volume.  The default location for the cache files for a volume named "AWSDev1" is "C:\Program Files\SDFS\volumes\AWSDev1".  The location can be changed when the volume is created if you want the cache data stored somewhere else; the configuration tool makes this easy to do.
  • Volume-specific cloud container, or "bucket".  This is the ultimate destination for the data.  You must use a cloud vendor console to create such a bucket or container before using it with OpenDedupe. The bucket is arranged into a series of sub-folders used by OpenDedupe.  You must use a separate cloud container/bucket for each OpenDedupe volume, and the container/bucket must not be shared with any other Backup Exec server.  The container/bucket should be used only for OpenDedupe data.  It should not also contain data for any BE cloud connector devices.
  • There is a third virtual data location: the mounted drive letter associated with an OpenDedupe volume.  Through this drive letter, you get a virtual view of the OST images.  Here is an example of the drive letter view for an OpenDedupe volume containing several backup images; a short sketch after the listing shows one way to enumerate these images.  (Images with a length of 131,072 bytes are empty images created by Backup Exec.)

    C:\Program Files\Veritas\Backup Exec>dir q:
     Volume in drive Q is Dedup Filesystem
     Volume Serial Number is 5AA2-46C6

     Directory of Q:\

    06/21/2017  12:37 PM        70,656,000 BEOST_00000001.img
    06/21/2017  12:30 PM    15,678,185,472 BEOST_00000002.img
    06/21/2017  01:45 PM     9,325,584,384 BEOST_00000003.img
    06/21/2017  10:47 AM        70,656,000 BEOST_00000004.img
    06/21/2017  12:35 PM     1,535,778,816 BEOST_00000005.img
    06/21/2017  10:45 AM     1,525,489,664 BEOST_00000006.img
    06/21/2017  11:54 AM     9,325,584,384 BEOST_00000007.img
    06/21/2017  10:51 AM        43,327,488 BEOST_00000008.img
    06/21/2017  12:42 PM        43,327,488 BEOST_00000009.img
    06/21/2017  10:40 AM    15,304,826,880 BEOST_00000010.img
    06/22/2017  12:15 PM           131,072 BEOST_00000011.img
    06/23/2017  02:29 PM     3,366,133,760 BEOST_00000012.img
    06/23/2017  02:32 PM           131,072 BEOST_00000013.img
    06/22/2017  12:15 PM           131,072 BEOST_00000014.img
    06/22/2017  12:15 PM           131,072 BEOST_00000015.img
    06/22/2017  12:15 PM           131,072 BEOST_00000016.img
    06/22/2017  12:15 PM           131,072 BEOST_00000017.img
    06/22/2017  12:15 PM           131,072 BEOST_00000018.img
    06/23/2017  02:27 PM         3,350,528 BEOST_00000019.img
    06/22/2017  12:15 PM           131,072 BEOST_00000020.img
    06/21/2017  09:56 AM           131,584 BEOST_SInf-Bak.img
    06/21/2017  09:56 AM           131,584 BEOST_SInf.img
    06/23/2017  02:29 PM           655,360 BEOST_SMap.img
                  23 File(s) 56,294,867,968 bytes
                   0 Dir(s)  1,085,192,222,208 bytes free
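
If you want to script a similar check, the following minimal sketch lists the OST images through the mounted drive letter and flags the empty 131,072-byte images created by Backup Exec.  It assumes Python is available on the BE server and uses the mount point "Q:\" from the example above.

    from pathlib import Path

    MOUNT = Path("Q:/")          # mount point from the example above
    EMPTY_IMAGE_SIZE = 131_072   # size of the empty images created by BE

    for img in sorted(MOUNT.glob("BEOST_*.img")):
        size = img.stat().st_size
        note = "  (empty image)" if size == EMPTY_IMAGE_SIZE else ""
        print(f"{img.name:30} {size:18,}{note}")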
 

Benign Error Messages

Several kinds of messages commonly appear in the volume-cfg.log and OpenDedupe.log files that are not signs of errors; they are just internal messages generated by SDFS because of the way Backup Exec interacts with some OST image files.  The most common are these messages:
  • Can't open BEOST_SDrvN.img - this happens when Backup Exec looks for an OpenDedupe "drive" while starting a backup or restore job.  Images with this kind of name don't exist until BE creates one for the job, so an error for a non-existent image is expected.  You can ignore these messages; a short filter sketch after the log excerpts below shows one way to screen them out.

    In OpenDedupe.log:
2017-07-03 16:07:49,552367 | ERROR| int __cdecl openODDFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,void *) | SampleStorageServer.cpp:287 | unable to open file/BEOST_SDrv2.img fuse.FuseException http_code = 500 opcode= -2
2017-07-03 16:07:49,552367 | ERROR| class Image *__cdecl SampleLSU::openImage(struct ImageDefinition,int) | SampleStorageServer.cpp:1836 | /BEOST_SDrv2.img could not be found
2017-07-03 16:07:49,552367 | ERROR| int __cdecl stspi_open_image_v10(const stsp_lsu_v7_t *,const sts_image_def_v10_t *,int,struct stsp_image_handle_s **) | sampledisk.cpp:1878 | /


From the -volume-cfg.log file:

2017-07-03 16:07:49,552 [sdfs] [org.opendedup.sdfs.mgmt.Io] [419] [Worker: RequestDispatcher: Thread-70]- unable to process get request /io/openfile/%.img?password=passwd

at org.opendedup.sdfs.mgmt.Io.resolvePath(Io.java:81)
at org.opendedup.sdfs.mgmt.Io.getFileChannel(Io.java:89)
at org.opendedup.sdfs.mgmt.Io.open(Io.java:608)
at org.opendedup.sdfs.mgmt.Io.handleGet(Io.java:405)
at org.opendedup.sdfs.mgmt.Io.processIo(Io.java:129)
at org.opendedup.sdfs.mgmt.MgmtWebServer.handle(MgmtWebServer.java:1021)
at org.simpleframework.http.socket.service.RouterContainer.handle(RouterContainer.java:106)
at org.simpleframework.http.core.RequestDispatcher.dispatch(RequestDispatcher.java:121)
at org.simpleframework.http.core.RequestDispatcher.run(RequestDispatcher.java:103)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
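
When triaging OpenDedupe.log it can help to screen these benign entries out so that real failures stand out.  The following minimal sketch does that; it assumes Python is available on the BE server, uses the default log path named in this article, and matches on the message text shown in the excerpts above.

    # Print ERROR lines from OpenDedupe.log, skipping the benign
    # "BEOST_SDrvN.img ... could not be found" entries described above.
    LOG = r"C:\Program Files\SDFS\logs\OpenDedupe.log"

    def is_benign(line: str) -> bool:
        return "BEOST_SDrv" in line and (
            "could not be found" in line or "unable to open file" in line
        )

    with open(LOG, errors="replace") as f:
        for line in f:
            if "ERROR" in line and not is_benign(line):
                print(line.rstrip())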
 
 

Error log for stalled or failed restore jobs

In cases where the network response to cloud requests is slow, or latency is high, restore jobs may fail, especially when restoring or staging large amounts of data, such as during a granular restore job based on a VM backup set.  If a restore job seems to be stuck and making no progress, examine the -volume-cfg.log file for the OpenDedupe device being used for the restore.  A typical pattern in the logs looks like this:

... followed by large stack dump…
java.io.IOException: java.io.FileNotFoundException: F:\sdfs\volumes\AWSDev1\chunkstore\chunks\269\2692630782013113328 (Access is denied)

2017-06-28 15:36:13,949 [sdfs] [org.opendedup.sdfs.filestore.cloud.BatchAwsS3ChunkStore] [1211] [Worker: RequestDispatcher: Thread-107]  - getnewblob unable to get block
… repeated several times …
2017-06-28 15:36:13,377 [com.amazonaws.services.s3.internal.S3AbortableInputStream] [com.amazonaws.services.s3.internal.S3AbortableInputStream] [159] [Worker: RequestDispatcher: Thread-107]  - Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
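
To confirm that a stalled restore matches this pattern, you can count how often the "getnewblob unable to get block" entry repeats and note the time range it covers.  The following minimal sketch does that; it assumes Python is available on the BE server and uses the volume name and log location from the example above.

    # Count the repeated "getnewblob unable to get block" entries and report
    # when they started and stopped. The path matches the example volume above.
    LOG = r"C:\Program Files\SDFS\logs\AWSDev1-volume-cfg.log"
    PATTERN = "getnewblob unable to get block"

    timestamps = []
    with open(LOG, errors="replace") as f:
        for line in f:
            if PATTERN in line:
                timestamps.append(line[:23])   # "YYYY-MM-DD HH:MM:SS,mmm"

    if timestamps:
        print(f"{len(timestamps)} occurrences between {timestamps[0]} and {timestamps[-1]}")
    else:
        print("no occurrences found")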

Authentication Errors

The -volume-cfg.log file also shows errors if the access key or secret is incorrect during a mount.  Here's one example:
 
2017-07-04 12:30:11,768 [sdfs] [org.opendedup.sdfs.filestore.cloud.BatchAwsS3ChunkStore] [714] [main]  - unable to start service

com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 2F5AB82812F84DF5), S3 Extended Request ID: +52omXrheeac3+dlfxWOjtvC02bz9/VKbceg3UmXaLSK50x4tTvLh15H5MhEUJRFfAdr85WWORs=


The corresponding error appears in the OpenDedupe.log file:
 
2017-07-04 12:30:13,082145 | ERROR| __cdecl SampleLSU::SampleLSU(const char *,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,bool,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | SampleStorageServer.cpp:1545 | Connection to LSU Failedhttps://localhost:6442/?file=null&cmd=volume-info&password=passwd
 
 
Another example:
 
2017-07-04 12:37:16,839 [sdfs] [org.opendedup.sdfs.filestore.cloud.BatchAwsS3ChunkStore] [714] [main]  - unable to start service

com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 37D5D3C00085D5ED), S3 Extended Request ID: EUFeuOazvsMpefo3Evenl09h/eVuC495EkifN7834C1ecxS+5W/27Vi/q0ZycNTox1fp0MBVhuY=
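
If you need to confirm the credentials independently of OpenDedupe, a quick check directly against the cloud provider can rule the keys in or out.  The following minimal sketch covers the AWS S3 case shown above; it assumes the boto3 SDK is installed, and the access key, secret key, and bucket name are placeholders to replace with the values configured for the OpenDedupe volume.

    import boto3
    from botocore.exceptions import ClientError

    # Placeholder values; substitute the access key, secret key, and bucket
    # configured for the OpenDedupe volume.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",
        aws_secret_access_key="your-secret-key",
    )

    try:
        s3.head_bucket(Bucket="your-opendedupe-bucket")
        print("credentials and bucket look valid")
    except ClientError as err:
        print("S3 rejected the request:", err.response["Error"].get("Code"))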
 

Messages from other cloud providers may be somewhat different.
