Create and manage RAID0/RAID10 using EBS volumes in an AWS EC2 Ubuntu instance

by jagbir on July 20, 2012

This quick howto guide explains the commands to create a RAID device from EBS volumes attached to your EC2 instance, along with the related configuration. I have created a RAID level 0 (RAID0) array here with 2 EBS volumes (50 GB each), but the procedure is the same for other RAID levels as well; you just need to supply the proper parameters to the relevant commands, e.g. mdadm.

One major issue I faced while doing this exercise is that the RAID device needs to be re-assembled properly after an instance reboot/restart. If you don't take enough care, you may face different issues: the instance may get stuck while booting, the RAID may not appear at all, or it might get renamed to something else. I have explained the fix for that as well.

I am using an m1.medium instance created from the Ubuntu 12.04 server image, which can be found here. You need at least 2 EBS volumes attached to your instance. In my case, I used 2 volumes of 50 GB each, referred to as /dev/xvdf and /dev/xvdg. Most of the commands need to be run as root, so you can either become root or use sudo (which I have used here) to execute them.

Step 1. Login into your instance and verify you can see EBS volumes attached:

$ sudo fdisk -l  | grep xvd
Disk /dev/xvda1 doesn't contain a valid partition table
Disk /dev/xvdb doesn't contain a valid partition table
Disk /dev/xvda3 doesn't contain a valid partition table
Disk /dev/xvdf doesn't contain a valid partition table
Disk /dev/xvdg doesn't contain a valid partition table
Disk /dev/xvda1: 8589 MB, 8589934592 bytes
Disk /dev/xvdb: 160.1 GB, 160104972288 bytes
Disk /dev/xvda3: 939 MB, 939524096 bytes
Disk /dev/xvdf: 53.7 GB, 53687091200 bytes
Disk /dev/xvdg: 53.7 GB, 53687091200 bytes

We can see /dev/xvdf and /dev/xvdg here, so we are good to move forward.
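If your image ships with lsblk from util-linux (I believe the 12.04 server image does, though treat that as an assumption), it gives a more compact view of the attached volumes:

$ lsblk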

Step 2. Install necessary tools and format EBS volumes:
I am installing mdadm for managing RAID and xfsprogs for the XFS file system. You can also use another filesystem such as ext4, in which case you don't need to install the xfsprogs package here.

$ sudo apt-get install xfsprogs mdadm

Now format the volumes with the XFS file system:

$ sudo mkfs.xfs /dev/xvdf
meta-data=/dev/xvdf              isize=256    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
 
$ sudo mkfs.xfs /dev/xvdg
meta-data=/dev/xvdg              isize=256    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
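
If you opted for ext4 instead (in which case xfsprogs is not needed), the equivalent format commands would be:

$ sudo mkfs.ext4 /dev/xvdf
$ sudo mkfs.ext4 /dev/xvdg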

Step 3. Create RAID0 device and format it with XFS filesystem:

$ sudo mdadm --create -l0 -n2 /dev/md0 /dev/xvdf /dev/xvdg
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
 
$ sudo mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0               isize=256    agcount=16, agsize=1638272 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=26212352, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

RAID device /dev/md0 is ready now.
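
As mentioned earlier, other RAID levels only differ in the parameters passed to mdadm. For example, RAID10 needs at least 4 volumes; assuming they were attached as /dev/xvdf through /dev/xvdi (hypothetical names, adjust to your setup), the create command would look like this:

$ sudo mdadm --create -l10 -n4 /dev/md0 /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi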

Step 4. Mount RAID device and make a test file there:

$ sudo mkdir /ebsraid0
$ sudo mount -t xfs -o rw,nobarrier,noatime,nodiratime /dev/md0 /ebsraid0 
$ df -h | grep ebs
/dev/md0        100G   33M  100G   1% /ebsraid0
$ echo "just a sample file." | sudo tee /ebsraid0/samplefile.txt
just a sample file.
$ cat /ebsraid0/samplefile.txt
just a sample file.

We just created a directory /ebsraid0 and mounted our RAID device /dev/md0 on it. We also created an optional sample text file.
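
You can also check the array's state at any time via /proc/mdstat. The output below is illustrative, reconstructed from the array created above; yours should look similar:

$ cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 xvdg[1] xvdf[0]
      104855552 blocks super 1.2 512k chunks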

Step 5. Make our RAID device reboot/restart safe
One challenge is to make sure that our RAID device comes back intact after a server reboot/restart. We need to save its configuration for proper re-assembly and then mount it:

$ echo 'DEVICE /dev/xvdf /dev/xvdg' | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf

Here, we saved the settings in the mdadm config file, which the tool will read upon reboot to rebuild the RAID device.
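
The appended entry should look roughly like the one below; the name and UUID come from the array created in this walkthrough (yours will differ), and the device path may appear as /dev/md0 or /dev/md/0 depending on the mdadm version:

ARRAY /dev/md/0 metadata=1.2 UUID=1f364c8e:0045cedc:b63b558f:d5ab78ba name=ip-10-131-65-235:0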

$ echo '/dev/md0   /ebsraid0 xfs   rw,nobarrier,noatime,nodiratime,noauto   0 0' | sudo tee -a  /etc/fstab
$ sudo update-initramfs -u -v -k `uname -r`

Here we have created an entry in /etc/fstab to try mounting our RAID device upon reboot, but it's possible that the RAID device won't be available/ready for mounting when /etc/fstab is processed; hence it's recommended to put the 'noauto' keyword in the options so it gets skipped in that case. We have also updated our ramdisk to make sure our RAID device won't get renamed or lost upon reboot.
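
With 'noauto' in place, the device can still be mounted manually by its mount point alone, since mount picks up the device and options from /etc/fstab:

$ sudo mount /ebsraid0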

In my tests, I found that the RAID device never got mounted via /etc/fstab and I had to do it manually; therefore, I removed the entry from /etc/fstab and put the mount command in /etc/rc.local instead:

$ sudo vim /etc/rc.local
..
mount -t xfs -o rw,nobarrier,noatime,nodiratime /dev/md0 /ebsraid0
exit 0
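
A slightly more defensive variant of the same /etc/rc.local snippet (just a sketch; adjust the device names to your setup) re-assembles the array first in case the initramfs didn't, and mounts only if needed:

# re-assemble the array in case the initramfs didn't do it
[ -e /dev/md0 ] || mdadm --assemble /dev/md0 /dev/xvdf /dev/xvdg
# mount only if not already mounted
mountpoint -q /ebsraid0 || mount -t xfs -o rw,nobarrier,noatime,nodiratime /dev/md0 /ebsraid0
exit 0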

Step 6. Reboot your server and verify everything is good.
Time to reboot the instance:

$ sudo reboot

After the instance comes back online, check the details of your RAID device:

$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jul 20 07:32:46 2012
     Raid Level : raid0
     Array Size : 104855552 (100.00 GiB 107.37 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
 
    Update Time : Fri Jul 20 07:32:46 2012
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 
     Chunk Size : 512K
 
           Name : ip-10-131-65-235:0  (local to host ip-10-131-65-235)
           UUID : 1f364c8e:0045cedc:b63b558f:d5ab78ba
         Events : 0
 
    Number   Major   Minor   RaidDevice State
       0     202       80        0      active sync   /dev/xvdf
       1     202       96        1      active sync   /dev/xvdg
 
$ df -h | grep ebs
/dev/md0        100G   33M  100G   1% /ebsraid0
$ cat /ebsraid0/samplefile.txt
just a sample file.

All looks good, so start making use of it. In forthcoming articles, I will explore how to take a snapshot and use it on another instance if an incident/failure happens. Please leave a comment below if you are facing issues or have suggestions.

  • Jeffrey

    Thanks for your demo. One question: do you see much performance gain using this config? I am considering a RAID5 configuration with 6 EBS volumes.

    • jagbirs

      Hi Jeffrey, yes, I have seen much improved performance; in fact, I am considering using only RAID0 or RAID10. A single EBS volume or the default storage performed poorly in my performance tests. I am going to post the results here soon.

  • Raghu Addanki

    You Rock Jagbir!!! AMS – Amazon Made Simple

    • jagbirs

      Thanks a lot Raghu for your comment and words of appreciation.

  • Justin Meltzer

    Hi Jagbir, this is really great stuff. Wish I had seen your article earlier, because I never updated my ramdisk and now it seems that my raid array has been renamed (from md0 to md/0) and is lost. Any advice on how to deal with this? Should I just start over?

    Also, any advice on how to deal with creating physical and logical volumes from the RAID device and making sure these are created on reboot?

  • Shekhar Tiwatne

    Thank you. Very well written.

  • gnuyoga

    Works great !!

  • Keshav Patidar

    Hello,

    I have set up raid1 on AWS EC2, and on testing found that it's working fine.

    My concern is that I am doing this for the first time; I don't know how to host my site to work with raid1.

    Thanks in advance.
