Question

I am trying to create an FTP server (on Windows, Linux, or macOS; the platform doesn't matter) that uses Amazon S3 as its storage. Note that S3 does not support FTP natively, so this will need some kind of workaround.

I have researched the topic and found various solutions, but I am not really convinced by any of them:

  1. Amazon EC2 + TntDrive
  2. Using SME
  3. Creating an EC2 instance, installing an FTP server, and mounting S3 as a local filesystem.

I am trying to find the best solution in terms of security and flexibility/smoothness. Which solution do you think is best, and how would I achieve it?

Edit 1:

I am very interested in the following solution. Here is what I gather: you can attach an EBS volume to an EC2 instance and run an FTP server on that instance. Point the FTP server at the attached EBS volume, then just FTP up your files; they will be written directly to the EBS volume. You would want to use an FTP server and client that support resuming interrupted transfers, for example FileZilla. Am I correct in assuming all of the above?

Also, can anyone give a step-by-step procedure for achieving this?


Solution

The answer really depends.

First, let me say that FTP is a terrible, insecure protocol. Make sure you have a good reason before going down this route; there are plenty of user-friendly S3 tools.

Second, please note that none of these solutions will scale the way S3 does. Each solution has arbitrary limits on how many files it can support, how large the files can be, and what happens if a file is updated frequently (i.e. it may save the wrong version). S3 filesystems look neat at first, but when they have problems they are hard to troubleshoot (they can only return generic filesystem error messages) and even harder to fix.

Some ideas:

  • If you really just want cloud backup, consider using EBS instead of S3. Either attach an EBS volume to an EC2 box, or run Storage Gateway on your local box (see the first sketch after this list).

  • Depending on the read/write patterns, the acceptable delays, the size of the files, etc., you might use something like s3sync instead. Have it download all your files, then do a bi-directional re-sync to S3 periodically to pick up any new files and delete any files that have been removed in S3 (see the second sketch after this list).

  • If you only need to support uploads, just have a cron job that uploads new files to S3 periodically, then deletes them locally (the second sketch below includes an upload-only variant).
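A minimal sketch of the EBS route, assuming a Linux instance and that the attached volume shows up as /dev/xvdf (the device name and the /srv/ftp mount point are illustrative, not fixed):

    # one-time setup after attaching the volume in the AWS console
    sudo mkfs -t ext4 /dev/xvdf       # format the new volume (destroys any existing data)
    sudo mkdir -p /srv/ftp
    sudo mount /dev/xvdf /srv/ftp     # mount it where the FTP server will serve files
    # add an /etc/fstab entry so the mount survives reboots, e.g.:
    #   /dev/xvdf  /srv/ftp  ext4  defaults,nofail  0  2

Point your FTP server's root at /srv/ftp and uploads land directly on the EBS volume, which is essentially the setup described in Edit 1 of the question.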
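And a sketch of the periodic-sync idea. s3sync itself is an older Ruby tool, so this uses the AWS CLI's aws s3 sync to the same effect; the bucket name and paths are placeholders:

    #!/bin/sh
    # sync-s3.sh - rough bi-directional re-sync between a local directory and S3
    aws s3 sync /srv/ftp s3://my-bucket/ftp            # push anything new up to S3
    aws s3 sync s3://my-bucket/ftp /srv/ftp --delete   # pull remote changes, drop local files deleted in S3

    # upload-only variant for the cron-job idea: move files up, removing them locally
    # aws s3 mv /srv/ftp s3://my-bucket/ftp --recursive

    # run it every 5 minutes from cron:
    #   */5 * * * * /usr/local/bin/sync-s3.sh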

OTHER TIPS

What you could try: using s3fs, mount your S3 bucket to a directory within your Amazon EC2 instance with something like:

    sudo s3fs -o allow_other,uid=12345,gid=12345 my-bucket my-ftp-directory/

Then set up vsftpd or any other FTP server, create a user, and assign their home directory to my-ftp-directory. Chroot this user to that directory, then try to FTP in using the user's credentials and the IP of the EC2 instance. I haven't tried it end to end yet, but after mounting a bucket to my public files directory in Drupal using this technique, it has worked fine!
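If you want to try the rest of that setup, a rough sequence might look like this, assuming vsftpd on a systemd-based distro (the user name and config values are examples, not something I have verified against this exact stack):

    # create the FTP user whose home directory is the s3fs mount point
    sudo useradd -d /home/ftpuser/my-ftp-directory ftpuser
    sudo passwd ftpuser

    # minimal /etc/vsftpd.conf settings to allow writes and chroot local users:
    #   local_enable=YES
    #   write_enable=YES
    #   chroot_local_user=YES
    #   allow_writeable_chroot=YES   # newer vsftpd refuses a writable chroot root without this

    sudo systemctl restart vsftpd

Then connect with an FTP client to the instance's IP as ftpuser; anything uploaded should appear in the my-bucket S3 bucket via the s3fs mount.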

You can also use FTP 2 Cloud.

While FTP 2 Cloud is in beta:

  • it's free.
  • there are no copy limits.
  • each account has 100MB storage space.
  • it supports FTP to Amazon S3 copy.
  • it supports FTP to Rackspace copy.
  • you use it at your own risk.
  • it needs your love to get the word out.