Question
The s3fs instruction wiki says we can auto-mount s3fs buckets by adding the following line to /etc/fstab:
s3fs#mybucket /mnt/mybucket fuse allow_other,use_cache=/tmp,url=https://s3.amazonaws.com 0 0
This works fine for one bucket, but when I try to mount multiple buckets on one EC2 instance with two lines:
s3fs#mybucket /mnt/mybucket fuse allow_other,use_cache=/tmp 0 0
s3fs#mybucket2 /mnt/mybucket2 fuse allow_other,use_cache=/tmp 0 0
only the second line works. I tried renaming a copy of s3fs to s3fs2 and changing the entries to:
s3fs#mybucket /mnt/mybucket fuse allow_other,use_cache=/tmp 0 0
s3fs2#mybucket2 /mnt/mybucket2 fuse allow_other,use_cache=/tmp 0 0
but this still does not work; only the second one gets mounted.
How do I automatically mount multiple S3 buckets via s3fs in /etc/fstab, without manually running:
s3fs mybucket2 /mnt/mybucket2 -o use_cache=/tmp
Solution 2
You may try a startup script. This is how I got around issues I was having mounting my s3fs at boot time with /etc/fstab.
How to make startup scripts varies with distributions, but there is a lot of information out there on the subject.
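As a sketch of that approach (the bucket names, mount points, and options below are illustrative, not from the original answer), a boot-time mount script could look like:

```shell
#!/bin/sh
# Example boot-time mount script. When run as root, s3fs reads
# credentials from /root/.passwd-s3fs by default.
# Bucket names and mount points are placeholders.
s3fs mybucket  /mnt/mybucket  -o allow_other -o use_cache=/tmp -o url=https://s3.amazonaws.com
s3fs mybucket2 /mnt/mybucket2 -o allow_other -o use_cache=/tmp -o url=https://s3.amazonaws.com
```

How you hook this in varies by distribution (rc.local, an init script, or a systemd unit).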
Other suggestions
Perhaps your network wasn't up?
Minimal entry, with only one option (_netdev = mount after the network is up):
<bucket name> <mount point> fuse.s3fs _netdev 0 0
I am running Ubuntu 16.04 and multiple mounts work fine in /etc/fstab.
Example similar to what I use for ftp image uploads (tested with extra bucket mount point):
mybucket1.mydomain.org /mnt/mybucket1 fuse.s3fs _netdev,allow_other,passwd_file=/home/ftpuser/.passwd-aws-s3fs,default_acl=public-read,uid=1001,gid=65534 0 0
mybucket2.mydomain.org /mnt/mybucket2 fuse.s3fs _netdev,allow_other,passwd_file=/home/ftpuser/.passwd-aws-s3fs,default_acl=public-read,uid=1001,gid=65534 0 0
Run sudo mount -a to test the new entries and mount them (then do a reboot test).
If you wish to mount as non-root, look into the uid and gid options as per above. This isn't absolutely necessary if you use the fuse option allow_other, as the permissions are '0777' on mounting.
WARNING: updatedb (which the locate command uses) indexes your system. Check that either PRUNEFS or PRUNEPATHS in /etc/updatedb.conf covers your s3fs filesystem or the s3fs mount point. The default is to 'prune' any s3fs filesystems, but it's worth checking. Otherwise, not only will your system slow down if you have many files in the bucket, but your AWS bill will increase. See the FAQ link for more.
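A quick way to perform that check, sketched here against a sample PRUNEFS line so the snippet is self-contained (in practice, read the line from /etc/updatedb.conf as the comment shows):

```shell
# Sample PRUNEFS line; in practice read it from the real config with:
#   line=$(grep '^PRUNEFS' /etc/updatedb.conf)
line='PRUNEFS="NFS nfs nfs4 fuse.s3fs"'

# Warn if fuse.s3fs is not covered by PRUNEFS.
case "$line" in
  *fuse.s3fs*) echo "OK: updatedb prunes fuse.s3fs" ;;
  *)           echo "WARNING: add fuse.s3fs to PRUNEFS in /etc/updatedb.conf" ;;
esac
```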
Reference:
https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon
https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ
This may not be the cleanest way, but I had the same problem and solved it this way:
1. Create a mount script
Simple enough: just create a .sh file in the home directory of the user that needs the buckets mounted (in my case it was /home/webuser, and I named the script mountme.sh).
The content of the file was one line per bucket to be mounted:
s3fs bucket_one /home/webuser/app/www/bucket_one -o url=https://nyc3.digitaloceanspaces.com -o allow_other
s3fs bucket_two /home/webuser/app/www/bucket_two -o url=https://nyc3.digitaloceanspaces.com -o allow_other
(yes, I'm using DigitalOcean spaces, but they work exactly like S3 Buckets with s3fs)
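A slightly more defensive variant of the same script (same bucket names, paths, and endpoint as above; the mountpoint check is an addition, not part of the original answer) skips buckets that are already mounted, so re-running it is harmless:

```shell
#!/bin/sh
# mountme.sh: mount each bucket only if its mount point is not already in use.
# mountpoint -q exits 0 when the path is a mounted filesystem.
for b in bucket_one bucket_two; do
  mp="/home/webuser/app/www/$b"
  if ! mountpoint -q "$mp"; then
    s3fs "$b" "$mp" -o url=https://nyc3.digitaloceanspaces.com -o allow_other
  fi
done
```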
2. Cron your way into running the mount script upon reboot
I set a cron job for the same webuser user with:
@reboot /bin/sh /home/webuser/mountme.sh
(yes, you can predefine the /bin/sh path and whatnot, but I was feeling lazy that day)
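If you'd rather not open an editor, the same @reboot entry can be appended non-interactively (a sketch, run as root; webuser and the script path are the examples from above):

```shell
# Append the @reboot line to webuser's crontab, preserving existing entries.
# 'crontab -l' exits non-zero when the user has no crontab yet, hence '|| true'.
( crontab -l -u webuser 2>/dev/null || true
  echo '@reboot /bin/sh /home/webuser/mountme.sh' ) | crontab -u webuser -
```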
I know this is more a workaround than a solution but I became frustrated with fstab very quickly so I fell back to good old cron, where I feel much more comfortable :)
This is what I am doing with Ubuntu 18.04 and DigitalOcean Spaces.
In /etc/fstab:
s3fs#<space> /<dir> fuse _netdev,allow_other,use_cache=/tmp/cache,uid=<usr>,gid=<grp>,url=https://<url> 0 0
.passwd-s3fs is in root's home directory, with the appropriate credentials in it.
Please note that the mount runs as root, so the credentials file .passwd-s3fs has to be in root's home directory, not in a user's folder.
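For completeness, a sketch of setting up that credentials file (the key values are placeholders); s3fs refuses password files that are readable by other users:

```shell
# Create /root/.passwd-s3fs in the ACCESS_KEY_ID:SECRET_ACCESS_KEY
# format s3fs expects. The key values below are placeholders.
printf '%s\n' 'AKIAXXXXXXXXXXXX:SECRETKEYXXXXXXXXXXXX' > /root/.passwd-s3fs
chmod 600 /root/.passwd-s3fs   # s3fs requires restrictive permissions
```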