Veritas Data Insight User's Guide
- Section I. Introduction
- Section II. Data Insight Workspace
- Navigating the Workspace tab
- Analyzing data using the Workspace views
- Viewing access information for files and folders
- About viewing file or folder summary
- Viewing the overview of a data source
- Managing data custodian for paths
- Viewing the summary of user activity on a file or folder
- Viewing user activity on files or folders
- Viewing file and folder activity
- Viewing CIFS permissions on folders
- Viewing NFS permissions on folders
- Viewing SharePoint permissions for folders
- Viewing Box permissions on folders
- Viewing audit logs for files and folders
- About visualizing collaboration on a share
- Viewing access information for users and user groups
- Viewing the overview of a user
- Viewing the overview of a group
- Managing custodian assignments for users
- Viewing folder activity by users
- Viewing CIFS permissions for users
- Viewing CIFS permissions for user groups
- Viewing NFS permissions for users and user groups
- Viewing SharePoint permissions for users and user groups
- Viewing Box permissions for users and user groups
- Viewing audit logs for users
- Section III. Data Insight reports
- Using Data Insight reports
- About Data Insight reports
- How Data Insight reporting works
- Creating a report
- About Data Insight security reports
- Activity Details report
- Permissions reports
- Inactive Users
- Path Permissions
- Permissions Search report
- About Permissions Query templates
- Creating a Permissions Query Template
- Creating custom rules
- Permissions Query Template actions
- Using Permissions Search report output to remediate permissions
- Entitlement Review
- User/Group Permissions
- Group Change Impact Analysis
- Ownership Reports
- Create/Edit security report options
- Data Insight limitations for Box permissions
- About Data Insight storage reports
- Create/Edit storage report options
- About Data Insight custom reports
- Considerations for importing paths using a CSV file
- Managing reports
- About managing Data Insight reports
- Viewing reports
- Filtering a report
- Editing a report
- About sharing reports
- Copying a report
- Running a report
- Viewing the progress of a report
- Customizing a report output
- Configuring a report to generate a truncated output
- Sending a report by email
- Automatically archiving reports
- Canceling a report run
- Deleting a report
- Considerations for viewing reports
- Organizing reports using labels
- Section IV. Remediation
- Configuring remediation workflows
- About remediation workflows
- Prerequisites for configuring remediation workflows
- Configuring Self-Service Portal settings
- About workflow templates
- Managing workflow templates
- Creating a workflow using a template
- Managing workflows
- Auditing workflow paths
- Monitoring the progress of a workflow
- Remediating workflow paths
- Using the Self-Service Portal
- About the Self-Service Portal
- Logging in to the Self-Service Portal
- Using the Self-Service Portal to review user entitlements
- Using the Self-Service Portal to manage Data Loss Prevention (DLP) incidents
- Using the Self-Service Portal to confirm ownership of resources
- Using the Self-Service Portal to classify sensitive data
- Managing data
- About managing data using Enterprise Vault and custom scripts
- Managing data from the Shares list view
- Managing inactive data from the Folder Activity tab
- Managing inactive data by using a report
- Archiving workflow paths using Enterprise Vault
- Using custom scripts to manage data
- Pushing classification tags while archiving files into Enterprise Vault
- About adding tags to files, folders, and shares
- Managing permissions
- Appendix A. Command Line Reference
Using the metadata framework for classification and remediation
To apply tags to files, folders, and shares, you must create a CSV file with the metadata key-value pairs. You can either create the CSV file manually or use a third-party tool or script to generate the CSV with tagging information for paths.
To apply metadata tags:
- Create a CSV file with the tagging information. You can create more than one CSV file with tagging information for paths.
  To assign tags to files, ensure that the CSV file name starts with File_ (for example, File_tags.csv). Enter the paths for different files with the tag name and tag values. CSV files with any other name are considered to contain folder paths.
  Note: i18n and special characters are not supported in tag names.
- Save the CSV files in the data/console/tags folder in the Data Insight installation directory on the Management Server.
- A scheduled job, TagsConsumerJob, parses the CSV files and creates a Tags database for each share. The job imports the tags for the paths into Data Insight. By default, the job runs once a day.
  If the job is executed manually by using the configcli command, it forcefully consumes all the CSV files in the tags folder.
  Whenever the job runs, it checks whether the modified time of any CSV file in the tags folder is later than the time of the job's previous execution. If the job finds any such CSV, it processes all the CSV files in the folder. If no CSV file has been modified since the job last ran, the job takes no action.
  The job does not accept any tag name that starts with mx_, because such names are reserved for Data Insight's internal tags. Whenever the job processes the CSV files, Data Insight deletes all existing tags (except tags starting with "mx_") from all files and folders and attaches the new tags.
  Note: If a path is tagged in two different CSV files with the same tag name but with different values, the value in the last CSV file that is processed is applied.
- To replace existing tags, update the CSV with the new tags. The scheduled job replaces the existing tags with the new ones. Any paths that are discarded during the last run of the job are logged in $DATADIR/console/generictags_scan_status_5.0.db.
  To remove all tags, delete the CSV files from the tags folder.
- Create a DQL report to retrieve the tags from the database.
Here are a few example queries that you can use:
To fetch all paths in your storage environment along with the tags (my_tag) assigned to them:
FROM path GET name, TAG my_tag
To get all paths owned by the user Joe Camel that are tagged with the needs_assessment tag:
FROM owner GET TAG owner.path.needs_assessment, owner.path.name IF user.name="joe_camel"
Note:
The DQL report output does not return any tag if the content does not match any predefined classification tag.
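The scheduled job's change-detection and tag-name rules described above can be sketched roughly as follows. This is a hypothetical illustration, not Data Insight code; the function names and the directory layout are assumptions made for the example.

```python
import glob
import os

RESERVED_PREFIX = "mx_"  # tag names with this prefix are reserved for internal use


def csvs_to_process(tags_dir, last_run_epoch):
    """Return the CSV files the job would consume on this run.

    Per the rule above: if ANY CSV in the tags folder was modified after
    the previous run, ALL CSVs are reprocessed; otherwise none are.
    """
    paths = glob.glob(os.path.join(tags_dir, "*.csv"))
    if any(os.path.getmtime(p) > last_run_epoch for p in paths):
        return sorted(paths)
    return []


def is_accepted_tag_name(name):
    # The job rejects tag names that start with the reserved prefix.
    return not name.startswith(RESERVED_PREFIX)
```

A manual run through configcli would bypass the timestamp check entirely and consume every CSV unconditionally.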
- To verify the names of tags that are stored for a share, run the idxreader command on the indexer node.
idxreader -i $MATRIX_DATA_DIR/indexer/default/99/99 -gettags all
The CSV file with the metadata tags should be in the following format:
File/folder path, tag name, tag value
For example: \\filer\share\foo,tname,tvalue
where tname is the name of the tag and tvalue is the tag value.
Note:
Multiple values for the same tag are not supported.
If the path or the tag name contains a comma, enclose the text in double quotes ("). For example, if the folder name is foo, bar, you can add the path in the CSV as follows:
"\\filer\share\foo,bar",t_name,t_value
For shares, the path should be present in the CSV file containing folder paths. The following are examples of share-level paths:

| Data source | Share-level path |
| --- | --- |
| CIFS/DFS | \\filer\share |
| SharePoint | URL of the site collection |
| NFS | <export path> (for example, /data/finance/docs) |
| Box | \\Box\<box name in Data Insight> |