Question

I've configured an OpsWorks stack and my layer is set up so that a 50 GB volume will be attached to each instance I launch.

A new EBS volume gets correctly created, attached and mounted to newly launched instances. Here's what I see immediately after the first boot:

[root@biscotti ec2-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  1.9G  5.9G  25% /
tmpfs           298M     0  298M   0% /dev/shm
/dev/xvdi        50G   33M   50G   1% /srv/www          <---------
[root@biscotti ec2-user]# mount
/dev/xvda1 on / type ext4 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/xvdi on /srv/www type xfs (rw,noatime)             <---------

However, if I reboot the instance, the volume will not be automatically re-mounted:

[root@biscotti ec2-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  1.9G  5.9G  25% /
tmpfs           298M     0  298M   0% /dev/shm
[root@biscotti ec2-user]# mount
/dev/xvda1 on / type ext4 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

If I type

sudo mount -a

everything will get back to normal:

[root@biscotti ec2-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  1.9G  5.9G  25% /
tmpfs           298M     0  298M   0% /dev/shm
/dev/xvdi        50G   33M   50G   1% /srv/www
[root@biscotti ec2-user]# mount
/dev/xvda1 on / type ext4 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/xvdi on /srv/www type xfs (rw,noatime)

How do I get OpsWorks to re-mount the EBS volume automatically for me?

Thanks in advance.


Solution

I raised the issue with AWS support, and it turned out to be a bug in OpsWorks. The crucial detail for reproducing it is that I was rebooting the instance by typing sudo reboot on the command line. If I instead stop and restart the instance through the API or the AWS console, the volume gets mounted correctly. Quoting AWS support verbatim:

OpsWorks is a very procedural service. It likes to manage its own resources without manual intervention if at all possible. Perhaps the manual OS reboot is overriding some part of the OpsWorks process.

I was told that the OpsWorks dev team is addressing the issue. In the meantime, the problem can also be fixed by adding the auto option to the EBS device's entry in /etc/fstab with a custom Chef recipe.
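As an illustration of that fstab fix, here is a minimal Ruby sketch (the helper name is mine, not part of any OpsWorks cookbook; in a real recipe this logic could run inside a ruby_block resource that rewrites /etc/fstab):

```ruby
# Hypothetical helper: appends the "auto" mount option to a device's
# /etc/fstab entry so the filesystem is remounted at boot. Takes the
# fstab file contents as a string and returns the rewritten contents.
def add_auto_option(fstab_text, device)
  fstab_text.lines.map do |line|
    fields = line.split
    # fstab fields: device, mount point, fs type, options, dump, pass
    if fields[0] == device && fields[3] && !fields[3].split(",").include?("auto")
      fields[3] += ",auto"
      fields.join(" ") + "\n"
    else
      line
    end
  end.join
end
```

A Chef ruby_block would then read /etc/fstab, pass it through this helper for the EBS device, and write the result back.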

I hope this will help those who bump into the same issue.

OTHER TIPS

A workaround we had to use was to override the default attributes of the opsworks_initial_setup cookbook.

With the following customize.rb:

default[:opsworks_initial_setup][:bind_mounts][:mounts] = {}

But of course the Amazon recommendation is to use a symlink.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow