Running Veritas InfoScale Storage Foundation in OpenStack instances with Cinder Volumes created on SAN based storage & backed by LVM

  • Article ID: 100034274

Problem

Running Veritas InfoScale Storage Foundation in OpenStack instances with Cinder Volumes created on SAN based storage & backed by LVM

Solution

Prerequisites: Export the Cinder Volumes to the guest as virtio-scsi to avoid UDID mismatch
Note:
The underlying Cinder Volumes exported to guests are claimed under VirtIO by InfoScale Storage Foundation. Such devices can come up with different names (vdc, vdd, etc.) across reboots, which changes the UDID of the device on each reboot. Exporting these devices as VirtIO-SCSI avoids this issue.
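One common way to have Nova attach disks over a virtio-scsi controller is to set the relevant metadata properties on the Glance image used to boot the instance. The sketch below assumes this approach; "rhel7-image" is a placeholder image name, and the exact behavior should be verified for your OpenStack release:

```shell
# Hypothetical sketch: instances booted from this image get a
# virtio-scsi controller and use the scsi bus for their disks
# ("rhel7-image" is a placeholder image name)
openstack image set \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    rhel7-image
```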
 
Software configuration:
Red Hat OpenStack Platform 10
InfoScale Storage Foundation 7.3
 
 
[On the OpenStack host]

1) Create Physical Volumes and Volume Group on the OpenStack host using SAN storage:  
Note: The example below uses 5 SAN disks attached to the OpenStack host at OS paths /dev/sds, /dev/sdt, /dev/sdu, /dev/sdv and /dev/sdw
 
# pvcreate /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw
  Physical volume "/dev/sds" successfully created.
  Physical volume "/dev/sdt" successfully created.
  Physical volume "/dev/sdu" successfully created.
  Physical volume "/dev/sdv" successfully created.
  Physical volume "/dev/sdw" successfully created.
#
 
# vgcreate emc_vnx_vg /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw
  Volume group "emc_vnx_vg" successfully created
#
 
# vgs
  VG                 #PV #LV #SN Attr   VSize  VFree
  cinder-volumes       1   1   0 wz--n- 20.60g 612.00m
  emc_vnx_vg           5   0   0 wz--n- 99.84g  99.84g
  rhel_os_server1      5   2   0 wz--n-  1.47t      0
#

2) Configure the “volume_group” attribute in /etc/cinder/cinder.conf  
After the change, the [lvm] section should look like:


[lvm]
iscsi_protocol = iscsi
iscsi_helper=lioadm
iscsi_ip_address=192.168.25.42
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = emc_vnx_vg
volumes_dir=/var/lib/cinder/volumes
volume_backend_name=lvm
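The edit can also be scripted. A minimal sketch is shown below; it operates on a copy of the file under a hypothetical /tmp path so it is safe to experiment with, and the path should be adjusted for a real deployment:

```shell
# Work on a copy of cinder.conf (hypothetical path) rather than the live file
conf=/tmp/cinder.conf.example
cat > "$conf" <<'EOF'
[lvm]
iscsi_protocol = iscsi
volume_group = cinder-volumes
EOF

# Point the [lvm] backend at the SAN-backed volume group created in step 1
sed -i 's/^volume_group *=.*/volume_group = emc_vnx_vg/' "$conf"
grep '^volume_group' "$conf"
```

After editing the live file, restart the Cinder volume service (step 3) for the change to take effect.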



3) Restart Cinder service  
# systemctl enable openstack-cinder-volume.service target.service
# systemctl restart openstack-cinder-volume.service target.service
#

4) Log in using the admin keystone file and verify that the @lvm storage option appears in the Cinder service list  
# source ~/keystonerc_admin
(keystone_admin)# cinder service-list
+------------------+--------------------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |                    Host                    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------------------------------------+------+---------+-------+----------------------------+-----------------+
|  cinder-backup   |     os_server1.example.com      | nova | enabled |   up  | 2017-06-16T05:56:40.000000 |        -        |
| cinder-scheduler |     os_server1.example.com      | nova | enabled |   up  | 2017-06-16T05:56:40.000000 |        -        |
|  cinder-volume   | os_server1.example.com@access-1 | nova | enabled |   up  | 2017-06-16T05:56:41.000000 |        -        |
|  cinder-volume   |   os_server1.example.com@lvm    | nova | enabled |   up  | 2017-06-16T05:56:43.000000 |        -        |
+------------------+--------------------------------------------+------+---------+-------+----------------------------+-----------------+
(keystone_admin)#

5) Disable the other “cinder-volume” backends to make sure only the LVM storage option is used for Cinder Volume creation  

(keystone_admin)# cinder service-disable os_server1.example.com@access-1 cinder-volume
+--------------------------------------------+---------------+----------+
|                    Host                    |     Binary    |  Status  |
+--------------------------------------------+---------------+----------+
| os_server1.example.com@access-1 | cinder-volume | disabled |
+--------------------------------------------+---------------+----------+
 
(keystone_admin)# cinder service-list
+------------------+--------------------------------------------+------+----------+-------+----------------------------+-----------------+
|      Binary      |                    Host                    | Zone |  Status  | State |         Updated_at         | Disabled Reason |
+------------------+--------------------------------------------+------+----------+-------+----------------------------+-----------------+
|  cinder-backup   |     os_server1.example.com      | nova | enabled  |   up  | 2017-06-16T06:07:00.000000 |        -        |
| cinder-scheduler |     os_server1.example.com      | nova | enabled  |   up  | 2017-06-16T06:07:00.000000 |        -        |
|  cinder-volume   | os_server1.example.com@access-1 | nova | disabled |   up  | 2017-06-16T06:07:02.000000 |        -        |
|  cinder-volume   |   os_server1.example.com@lvm    | nova | enabled  |   up  | 2017-06-16T06:07:03.000000 |        -        |
+------------------+--------------------------------------------+------+----------+-------+----------------------------+-----------------+
(keystone_admin)#

6) Now we can start creating Cinder Volumes; by default, they will be created on the volume group we configured, i.e. “emc_vnx_vg”:  
(keystone_admin)# cinder create  --display_name emc_vnx_1 40
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2017-06-16T06:58:08.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | a93b84b9-4e0d-4333-8382-ce0418c34a26 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |              emc_vnx_1               |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   9c7aa8dd4fae43f0936582575c4e93c5   |
|       replication_status       |               disabled               |
|              size              |                  40                  |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   d0a1b0e83e334258a912957654c64ba2   |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
(keystone_admin)#
 
(keystone_admin)# cinder create  --display_name emc_vnx_2 40
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2017-06-16T06:58:14.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 100f9632-a299-4667-a9da-0af3cb39a593 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |              emc_vnx_2               |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   9c7aa8dd4fae43f0936582575c4e93c5   |
|       replication_status       |               disabled               |
|              size              |                  40                  |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   d0a1b0e83e334258a912957654c64ba2   |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
(keystone_admin)#

7) Verify that the new Cinder Volumes reside on the volume group “emc_vnx_vg” by checking “LV Name” and matching it with the Cinder Volume “id”:  

(keystone_admin)# vgdisplay -v emc_vnx_vg
  --- Volume group ---
  VG Name               emc_vnx_vg
  System ID
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               99.84 GiB
  PE Size               4.00 MiB
  Total PE              25560
  Alloc PE / Size       20480 / 80.00 GiB
  Free  PE / Size       5080 / 19.84 GiB
  VG UUID               lW183c-NDOU-hsID-ZmgG-jFH3-Wz79-EjAaCy
 
  --- Logical volume ---
  LV Path                /dev/emc_vnx_vg/volume-a93b84b9-4e0d-4333-8382-ce0418c34a26
  LV Name                volume-a93b84b9-4e0d-4333-8382-ce0418c34a26
  VG Name                emc_vnx_vg
  LV UUID                gARzGs-KnF4-D560-AAx5-IIwS-7TKm-fkHOrm
  LV Write Access        read/write
  LV Creation host, time os_server1.example.com, 2017-06-16 12:28:09 +0530
  LV Status              available
  # open                 0
  LV Size                40.00 GiB
  Current LE             10240
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
 
  --- Logical volume ---
  LV Path                /dev/emc_vnx_vg/volume-100f9632-a299-4667-a9da-0af3cb39a593
  LV Name                volume-100f9632-a299-4667-a9da-0af3cb39a593
  VG Name                emc_vnx_vg
  LV UUID                KGCLC7-Sabp-3xs9-FtFT-44Yb-vnnF-eTb0Rw
  LV Write Access        read/write
  LV Creation host, time os_server1.example.com, 2017-06-16 12:28:15 +0530
  LV Status              available
  # open                 0
  LV Size                40.00 GiB
  Current LE             10240
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
 
  --- Physical volumes ---
  PV Name               /dev/sds
  PV UUID               Zv0MVE-eikL-qnIa-Puez-T38c-VQGb-MUQllZ
  PV Status             allocatable
  Total PE / Free PE    5112 / 0
 
  PV Name               /dev/sdt
  PV UUID               jGySO8-tX7l-Tpwj-mQpa-IWHg-Hxwg-eGTfUp
  PV Status             allocatable
  Total PE / Free PE    5112 / 0
 
  PV Name               /dev/sdu
  PV UUID               cz2F8m-GeLF-I3Kk-L3ot-dVnA-u8hH-iYK6o7
  PV Status             allocatable
  Total PE / Free PE    5112 / 5080
 
  PV Name               /dev/sdv
  PV UUID               6fOZMS-fzmy-Ub0K-sii7-LOBl-O7Ir-TuBf0R
  PV Status             allocatable
  Total PE / Free PE    5112 / 0
 
  PV Name               /dev/sdw
  PV UUID               bOgPaD-Eh3m-21sj-mmxV-pHVu-gSG1-3PUmOE
  PV Status             allocatable
  Total PE / Free PE    5112 / 0
 
(keystone_admin)#
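The match can also be checked from the shell: the Cinder LVM driver names each logical volume "volume-" followed by the Cinder volume ID. A small sketch using one of the IDs from step 6:

```shell
# The Cinder LVM driver names LVs "volume-<cinder volume id>"
vol_id=a93b84b9-4e0d-4333-8382-ce0418c34a26   # ID of emc_vnx_1 from step 6
lv_name="volume-${vol_id}"
echo "$lv_name"

# On the OpenStack host, confirm the LV exists in the volume group:
#   lvs --noheadings -o lv_name emc_vnx_vg | grep -w "$lv_name"
```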

8) Use the Dashboard to attach these 2 Volumes to any desired instance (not covered here), then check the Volume status on the host:  

(keystone_admin)# cinder list
+--------------------------------------+-----------+-----------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |    Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+-----------+------+-------------+----------+--------------------------------------+
| 100f9632-a299-4667-a9da-0af3cb39a593 |   in-use  | emc_vnx_2 |  40  |      -      |  false   | d8fda1a8-fe6b-415e-a036-bdc0fc0fd103 |
| a93b84b9-4e0d-4333-8382-ce0418c34a26 |   in-use  | emc_vnx_1 |  40  |      -      |  false   | d8fda1a8-fe6b-415e-a036-bdc0fc0fd103 |
| adca78db-084e-4a1c-9429-393742ae7cdc |   in-use  |   vol20g  |  20  |    iscsi    |  false   | d8fda1a8-fe6b-415e-a036-bdc0fc0fd103 |
| cd6a1c4d-1872-4464-a3a1-4be31551ccb7 | available |    vol1   |  2   |   vrts_nas  |  false   |                                      |
+--------------------------------------+-----------+-----------+------+-------------+----------+--------------------------------------+
(keystone_admin)#
 
[On the OpenStack instance]

9) Log in to the OpenStack instance, install Veritas InfoScale Storage Foundation, and initialize the newly attached Cinder Volumes  
Note: The OpenStack instance discovers such Cinder Volumes as VirtIO disks
 
# vxdisk list
DEVICE          TYPE            DISK         GROUP        STATUS
vda          auto:none       -            -            online invalid
virtio0_2    auto:none       -            -            online invalid
virtio0_3    auto:none       -            -            online invalid
#
 
# /etc/vx/bin/vxdisksetup -ivf virtio0_2
! vxdisk define virtio0_2 type=auto format=cdsdisk
! vxdisk online virtio0_2
! vxdisk -f init virtio0_2 type=auto format=cdsdisk privlen=65536
#
 
# /etc/vx/bin/vxdisksetup -ivf virtio0_3
! vxdisk define virtio0_3 type=auto format=cdsdisk
! vxdisk online virtio0_3
! vxdisk -f init virtio0_3 type=auto format=cdsdisk privlen=65536
#
 
# vxdisk list
DEVICE          TYPE            DISK         GROUP        STATUS
vda          auto:none       -            -            online invalid
virtio0_1    auto:cdsdisk    virtio0_1    dg1          online
virtio0_2    auto:cdsdisk    -            -            online
virtio0_3    auto:cdsdisk    -            -            online
#
 
# vxdisk -p list virtio0_2
DISK                : virtio0_2
VID                 : QEMU
UDID                : QEMU%5FVIRTIO%5FVirtIO%5F%2Fdev%2Fvdc
SCSI_VERSION        : 0
PID                 : VIRTIO
PHYS_CTLR_NAME      : c509
MEDIA_TYPE          : hdd
LUN_TYPE            : std
LUN_SNO_ORDER       : 2
LUN_SERIAL_NO       : /dev/vdc
LIBNAME             : libvxvirtio.so
DMP_DEVICE          : virtio0_2
CAB_SERIAL_NO       : VirtIO
ATYPE               : A/A
ANAME               : VirtIO
TRANSPORT           : SCSI
ENCLOSURE_NAME      : virtio0
DMP_SINGLE_PATH     : /dev/vdc
LUN_SIZE            : 83882304
NUM_PATHS           : 1
STATE               : online
DISK_TYPE           : auto
FORMAT              : cdsdisk
DA_INFO             : format=cdsdisk,privoffset=256,pubslice=3,privslice=3
PRIV_OFF            : 256
PRIV_LEN            : 65536
PUB_OFF             : 65792
PUB_LEN             : 83816512
PRIV_UDID           : QEMU%5FVIRTIO%5FVirtIO%5F%2Fdev%2Fvdc
DISKID              : 1497599706.17.rhel72-1.novalocal
DISK_TIMESTAMP      : Fri Jun 16 03:55:06 AM 2017
#

10) Create a disk group, volume, and file system on these 2 disks and mount it  
# vxdg init emc_vnx_dg virtio0_2 virtio0_3
#
 
# vxassist -g emc_vnx_dg make vol1 maxsize
#
 
# mkfs -t vxfs /dev/vx/rdsk/emc_vnx_dg/vol1
    version 12 layout
    167632896 sectors, 83816448 blocks of size 1024, log size 65536 blocks
    rcq size 4096 blocks
    largefiles supported
    maxlink supported
#
 
# mount -t vxfs /dev/vx/dsk/emc_vnx_dg/vol1 /vol1
#

11) To enable this VxFS mount at boot time (auto-mount on every reboot), append the following entry to /etc/fstab:  
/dev/vx/dsk/emc_vnx_dg/vol1 /vol1               vxfs    _netdev         0 0
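As a quick sanity check of the entry's six fields before the next reboot, the line can be parsed as in the sketch below (this only inspects the line; it is not a substitute for testing the mount itself):

```shell
# Parse the fstab entry into its six fields (device, mount point,
# fs type, options, dump, pass) and print them for review
entry='/dev/vx/dsk/emc_vnx_dg/vol1 /vol1 vxfs _netdev 0 0'
set -- $entry
printf 'device=%s mountpoint=%s fstype=%s options=%s dump=%s pass=%s\n' \
    "$1" "$2" "$3" "$4" "$5" "$6"
```

Running `mount -a` afterwards attempts any fstab entries not yet mounted, which confirms the line is well-formed.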
 
 
