Using s3cmd to remove buckets created in Access
From s3tools.org:
S3cmd is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.
S3cmd is written in Python. It's an open source project available under GNU Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage.
First download s3cmd:
http://s3tools.org/download
Move the .tar.gz (if downloaded from SourceForge or this technote) or the .zip (if downloaded from GitHub) to one of the cluster nodes using FileZilla, SCP, etc. You can connect directly over SSH as the admin user, which has read/write access to the /admin directory.
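For example, a copy over SCP might look like the following sketch (the node address is a placeholder; substitute your first node's hostname or IP):
scp s3cmd-2.0.1.tar.gz admin@<node-01-address>:/admin/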
Extract it in the /admin directory - it will create a subdirectory:
hroaccess_01:/admin # pwd
/admin
hroaccess_01:/admin # tar xzf s3cmd*
hroaccess_01:/admin # ll
total 124
drwxr-xr-x 4 tomcat admin 4096 Jun 13 09:44 s3cmd-2.0.1
-rw-rw-r-- 1 admin admin 121926 Jun 13 09:30 s3cmd-2.0.1.tar.gz
In a separate SSH/terminal session, connect to your Console CLISH and run "objectaccess show" to get your S3 URL. You will use it as the S3 endpoint in the s3cmd configuration.
hroaccess> objectaccess show
Name Value
============= ===========================
Server Status Enabled
Admin_URL http://admin.hroaccess:8144
S3_URL http://s3.hroaccess:8143
admin_port 8144
s3_port 8143
ssl no
pools danny_pool1,mattpool
fs_size 10g
fs_type largefs-simple
fs_blksize 8192
fs_pdirenable yes
Back on the 01 node, change directory to the newly created folder and run the s3cmd Python script with the "--configure" option.
hroaccess_01:/admin/s3cmd-2.0.1 # ./s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: <s3 access key here>
Secret Key: <s3 secret key here>
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: s3.hroaccess:8143
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: s3.hroaccess:8143
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: <access key displayed here>
Secret Key: <secret key displayed here>
Default Region: US
S3 Endpoint: s3.hroaccess:8143
DNS-style bucket+hostname:port template for accessing a bucket: s3.hroaccess:8143
Encryption password:
Path to GPG program: /bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
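The answers are saved to the .s3cfg file in the home directory of the user running the script. A trimmed sketch of the entries relevant here (the key names are standard s3cmd configuration options; the values mirror the session above):
access_key = <s3 access key here>
secret_key = <s3 secret key here>
host_base = s3.hroaccess:8143
host_bucket = s3.hroaccess:8143
use_https = False
This is the same file edited later in this technote when switching to a different user's keys.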
Now you can use s3cmd to list the buckets associated with that access key:
hroaccess_01:/admin/s3cmd-2.0.1 # ./s3cmd ls
2018-06-12 18:31 s3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket
From the Console CLISH you can see the buckets as well. Note that in this instance there are two buckets, but each is associated with a different owner. The access key and secret key used in the configuration above were for the "admin" user.
hroaccess> objectaccess bucket show
Bucket Name FileSystem Pool(s) Owner
============================================ ============== ==================== =========
29e2ccd1-f6ec-4c90-8447-2e8398f0fe0fs3bucket S3fs1528405712 mattpool,danny_pool1 vxsupport
4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket S3fs1528828194 mattpool,danny_pool1 admin
From s3cmd's "--help" output:
Commands:
Make bucket
s3cmd mb s3://BUCKET
Remove bucket
s3cmd rb s3://BUCKET
List objects or buckets
s3cmd ls [s3://BUCKET[/PREFIX]]
List all object in all buckets
s3cmd la
Put file into bucket
s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
Get file from bucket
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
Delete file from bucket
s3cmd del s3://BUCKET/OBJECT
Delete file from bucket (alias for del)
s3cmd rm s3://BUCKET/OBJECT
Delete the bucket that matches the file system you wish to remove, as seen in the Access GUI.
A bucket must be empty before it can be deleted. Depending on the bucket's content and the Access version, the content can be deleted as follows.
First, try to delete the content with a single command:
s3cmd del --recursive --force s3://bucket-to-delete
The command may return the following error. If it succeeds instead, skip the manual steps below and remove the bucket itself.
ERROR: S3 error: 501 (NotImplemented): A header you provided implies functionality that is not implemented.
If the command errors, delete the content of the bucket manually as follows:
1. List the content of the bucket
./s3cmd ls s3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket
2. Delete objects in the top level of the bucket
./s3cmd del s3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket
3. If there were any DIR entries at step 1, delete their contents. For example, in this case a DIR called 'S3' is deleted
./s3cmd del s3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket/S3/*
4. List the content of the bucket
./s3cmd ls s3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket
5. Repeat steps 3 and 4 until the bucket is completely empty
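If the bucket contains many prefixes, a rough shell loop can automate this repetition. This is only a sketch: it assumes "s3cmd ls" prints each object or DIR URL in its last column and that "del" accepts a trailing wildcard, as used in step 3 above. Interrupt it if the listing stops shrinking.
BUCKET=s3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket
while ./s3cmd ls "$BUCKET" | grep -q . ; do
    # delete every URL 'ls' reports; the trailing '*' also catches objects under a DIR prefix
    ./s3cmd ls "$BUCKET" | awk '{print $NF}' | while read -r url; do
        ./s3cmd del "${url}*"
    done
done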
Remove the bucket itself:
hroaccess_01:/admin/s3cmd-2.0.1 # ./s3cmd rb s3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket
Bucket 's3://4db34bd7-ad15-44d5-ad6c-1465c5987ba5s3bucket/' removed
List the buckets again from the CLISH:
hroaccess> objectaccess bucket show
Bucket Name FileSystem Pool(s) Owner
============================================ ============== ==================== =========
29e2ccd1-f6ec-4c90-8447-2e8398f0fe0fs3bucket S3fs1528405712 mattpool,danny_pool1 vxsupport
hroaccess>
Since the "vxsupport" user has a completely different s3 access key and secret key, we'll need to edit the configuration file in ~/ to replace those values:
hroaccess_01:/admin/s3cmd-2.0.1 # vi ~/.s3cfg
(Replace the access key and secret key with the appropriate values, then save the file and quit vi.)
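The two entries to change look like this (placeholders shown; substitute the vxsupport user's actual keys):
access_key = <vxsupport access key here>
secret_key = <vxsupport secret key here>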
Then, list the buckets - you'll now see the bucket associated with the other user, "vxsupport":
hroaccess_01:/admin/s3cmd-2.0.1 # ./s3cmd ls
1969-12-31 19:00 s3://29e2ccd1-f6ec-4c90-8447-2e8398f0fe0fs3bucket
An attempt to remove the bucket failed:
hroaccess_01:/admin/s3cmd-2.0.1 # ./s3cmd rb s3://29e2ccd1-f6ec-4c90-8447-2e8398f0fe0fs3bucket
ERROR: S3 error: 404 (NoSuchBucket): The specified bucket does not exist.
(The removal failed because the bucket was offline. After bringing the bucket/file system online from the GUI, the removal was attempted again:)
hroaccess_01:/admin/s3cmd-2.0.1 # ./s3cmd rb s3://29e2ccd1-f6ec-4c90-8447-2e8398f0fe0fs3bucket
Bucket 's3://29e2ccd1-f6ec-4c90-8447-2e8398f0fe0fs3bucket/' removed
hroaccess> objectaccess bucket show
Bucket Name FileSystem Pool(s) Owner
============ =========== ======== ======
hroaccess>
You can then use the Access GUI to recreate any buckets as needed with the Provision Storage wizard under the Share perspective.