Using systemd to mount S3 buckets into your home directory


If you do web development or use cloud storage for backups, you may have heard of Amazon S3 which offers dirt-cheap storage for both private and public use. Data in S3 is split into "buckets", which can be configured individually and may contain folder structures.
 
Then there's S3FS (S3 File System), which lets you access S3 almost as easily as local files. The few tutorials I found used /etc/fstab to auto-mount buckets, but that's not a great fit for a desktop computer with multiple users, each managing buckets of their own. Besides, editing /etc/fstab can be risky for newcomers. That's why I looked for a way to achieve the same and more without touching any system-wide configuration. The only part that even needs sudo privileges is installing the S3FS package.

If you've used Linux servers or desktops for some time, you've probably heard of systemd, which makes it relatively easy to create your own background services (daemons). The real beauty of it is that adding user-specific services is just as easy, and in my experience they are more reliable than processes started with the classic "Startup Applications" tool.

I'll walk you through combining these three building blocks (S3, S3FS and systemd) into an automatic service that brings your S3 files right under your home directory on a Linux desktop computer. This should work as-is on any recent Debian, Ubuntu or derivative operating system, and with some changes on others, too (forget about Windows, though).

Don't get scared away by the length of this guide. It's really quite simple, but I'll take the time to explain many details to make this how-to both educational and beginner-friendly.

Don't forget to replace all occurrences of YOUR_BUCKET with the name of your actual S3 bucket. Also, instead of gedit you can use your favorite desktop or terminal text editor.

1. Get yourself a bucket

If you already have a bucket and access credentials for it, you can skip this step.

Go to aws.amazon.com/s3, sign up and follow instructions to create a bucket. Write down the AWS root credentials created for you and keep them in a safe place. You can either use those with S3FS or, to enhance account security, create a separate IAM user with a policy limited to accessing your S3 bucket(s). That's beyond the scope of this tutorial, though.
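
If you do want to go the IAM route, here's a rough sketch of what it could look like with the AWS CLI. This assumes the CLI is installed and configured with your root credentials; "s3fs-user" and "s3fs-bucket-access" are just made-up names, so pick your own:

# Create a dedicated user and allow it to access only YOUR_BUCKET
aws iam create-user --user-name s3fs-user
cat > /tmp/s3fs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::YOUR_BUCKET" },
    { "Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"], "Resource": "arn:aws:s3:::YOUR_BUCKET/*" }
  ]
}
EOF
aws iam put-user-policy --user-name s3fs-user --policy-name s3fs-bucket-access --policy-document file:///tmp/s3fs-policy.json
# Generate access keys for the new user and use those with S3FS instead of the root keys
aws iam create-access-key --user-name s3fs-user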

Note which AWS region you choose and write it down along with the name of your bucket, because you'll need them later.

2. Create a mount point for the bucket

Let's create a directory named S3 in your home folder to hold configuration files and mount points for your buckets. This command does both (don't forget to replace YOUR_BUCKET with the actual name of your S3 bucket):

mkdir -p ~/S3/YOUR_BUCKET

3. Install S3FS and store credentials

S3FS does all the heavy lifting here, communicating with an S3 server to make remote files behave much like local ones. Let's install it first (if you don't have sudo privileges, ask your administrator to do this):

sudo apt install s3fs

Now fire up a text editor to create a hidden file for the AWS credentials you got in step 1:

gedit ~/S3/.credentials

Place your access key ID and secret access key on a single line, separated with a colon. The line should look something like this:

AKIXXXXXXXXXXXXXXXXX:abcdefghijklmnopqrstuvwxyz0123456789

Save the file and close the editor. Let's add a little security by ensuring that no other user on your computer (except a sneaky administrator) can read your credentials:

chmod 0600 ~/S3/.credentials
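
If you prefer doing everything in the terminal, you could also create the file and set its permissions in one go. A quick sketch, using the same placeholder keys as above:

( umask 077; echo 'AKIXXXXXXXXXXXXXXXXX:abcdefghijklmnopqrstuvwxyz0123456789' > ~/S3/.credentials )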

4. Testing

Now we're ready to test mounting your bucket. Here's the command (note that it's a single long line), but let me explain it before you run it:

s3fs YOUR_BUCKET ~/S3/YOUR_BUCKET -o passwd_file=~/S3/.credentials,endpoint=eu-west-1,default_acl=public-read

endpoint=eu-west-1

This part defines the AWS region your bucket resides in. Change it if yours is elsewhere.

default_acl=public-read

This means that anyone with the URL to one of your S3 files can download it. If you don't want that and only use S3 for private data such as backups, change this to default_acl=private. Note that if you decide to change this later, you must save or copy all the files in the bucket again for the change to take effect. That may be easier in the S3 console, if you have many files.

S3FS has a built-in caching mechanism to reduce bandwidth usage. If your S3 files are somewhat large and you access them frequently, you may want to activate the cache by appending this to the command line, without a space:

,use_cache=/tmp
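
So with caching enabled, the whole mount command would look roughly like this (still a single line):

s3fs YOUR_BUCKET ~/S3/YOUR_BUCKET -o passwd_file=~/S3/.credentials,endpoint=eu-west-1,default_acl=public-read,use_cache=/tmp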

There are many more S3FS and FUSE options you can tinker (and fail) with. See their man pages for more info.

Once you've made sure you've added your bucket name to the command, run it. If you've done everything as instructed, you won't see any error messages, and you can now point your file browser to the bucket directory (S3/YOUR_BUCKET) under your home folder. Copy or create a file there, so you can check if it gets transferred to S3 correctly. Try with a small file first, so it won't take ages to transfer.
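
If you prefer the terminal over a file browser, a quick check could look something like this (test.txt is just an arbitrary example file name):

# Verify the bucket shows up as a FUSE mount
mount | grep s3fs
# Create a small test file in the bucket
echo "hello from s3fs" > ~/S3/YOUR_BUCKET/test.txt
ls -l ~/S3/YOUR_BUCKET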

Open the S3 console and open the bucket. If your file is shown there, you win! If not, check your credentials, mount point (step 2), log information (journalctl -xe), etc. and try again. Note that there have been some region-specific problems in S3FS, so it's wise to do some googling, too.

If you chose to make your S3 files public, you should also be able to open (or download) the file in a web browser with a URL like this (replace FILE_NAME with the actual name and change "eu-west-1" to the region your bucket is in):

http://s3-eu-west-1.amazonaws.com/YOUR_BUCKET/FILE_NAME

This may work, too, but I'm not sure if subdomain access is available in all AWS regions:

http://YOUR_BUCKET.s3.amazonaws.com/FILE_NAME

To unmount the bucket, use this command:

fusermount -u ~/S3/YOUR_BUCKET

5. Creating and starting the systemd unit

Now we're ready to make all this fully automatic. Systemd service configuration files are called units. The directory that hosts user units may not exist by default, so let's create it first:

mkdir -p ~/.config/systemd/user

Then you can create a unit file:

gedit ~/.config/systemd/user/s3fs.service

Now copy the content below into the editor, once again replacing YOUR_BUCKET and the endpoint region with your own values, and don't forget to append any extra S3FS options you want, such as the one for caching:

[Unit]
Description=S3FS mounts
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/s3fs YOUR_BUCKET ${HOME}/S3/YOUR_BUCKET -o passwd_file=${HOME}/S3/.credentials,endpoint=eu-west-1,default_acl=public-read
ExecStop=/bin/fusermount -u ${HOME}/S3/YOUR_BUCKET

[Install]
WantedBy=default.target

Save the file and close the editor.

Note that systemd commands are not run in a shell (bash, dash, etc.), so some shell features such as the tilde (~, a shortcut to your home directory) don't work. That's why ${HOME} must be used instead. Also note that executables must be given with their full paths.
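
If you're not sure what the full paths are on your system, you can check them like this and adjust the unit file if they differ from the ones above:

command -v s3fs
command -v fusermount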

The network-online.target lines ensure that mounting is not attempted until there's a network connection available.

Since the s3fs command won't leave a process running, this unit needs Type=oneshot and RemainAfterExit=yes. This way systemd considers the service active after the mounting is done.

If you want to mount more buckets, simply add more ExecStart and ExecStop lines after the existing ones. Note that each can have different options, most importantly the public/private status. Don't forget to create a mount point for every bucket (see step 2)!
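
For example, the relevant lines for two buckets, one public and one private, could look something like this (SECOND_BUCKET is just a placeholder):

ExecStart=/usr/bin/s3fs YOUR_BUCKET ${HOME}/S3/YOUR_BUCKET -o passwd_file=${HOME}/S3/.credentials,endpoint=eu-west-1,default_acl=public-read
ExecStart=/usr/bin/s3fs SECOND_BUCKET ${HOME}/S3/SECOND_BUCKET -o passwd_file=${HOME}/S3/.credentials,endpoint=eu-west-1,default_acl=private
ExecStop=/bin/fusermount -u ${HOME}/S3/YOUR_BUCKET
ExecStop=/bin/fusermount -u ${HOME}/S3/SECOND_BUCKET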

Now let's enable and start the service. Make sure you've first unmounted your bucket(s) as instructed at the end of step 4. Note that in the systemctl commands below, "s3fs" refers to the s3fs.service unit file you created, not the s3fs executable.

systemctl --user enable s3fs
systemctl --user start s3fs
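
To check that the mount came up, you can ask systemd for the service status (it should be shown as active):

systemctl --user status s3fs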


That's it! From now on systemd will mount your buckets automatically when you log onto your computer.

If you ever want to unmount the buckets, simply stop the service:

systemctl --user stop s3fs

If you make changes to the unit file later on, run these commands to make sure systemd acknowledges the changes:

systemctl --user daemon-reload
systemctl --user restart s3fs

Wait, there's more loot!

The usefulness of S3FS isn't limited to backups and publishing files. For example, this can be an easy way to share files between your computer and mobile devices with apps like BucketAnywhere.

I'm also thinking of using Nautilus Actions (or actually Caja Actions, as nowadays I primarily use Ubuntu MATE) to add a context menu item for S3 files to copy their public URLs to clipboard (IIRC, the discontinued Ubuntu One service had something similar). That would make sharing files really quick, so stay tuned for more. Edit: Done!
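
Just to give an idea, here's a minimal sketch of the kind of script such a context menu action could call. It assumes xclip is installed and that the bucket lives in eu-west-1; copy-s3-url.sh is just a made-up name:

#!/bin/bash
# copy-s3-url.sh - hypothetical helper: turn a path under ~/S3 into a public URL and copy it
FILE="$1"
RELATIVE="${FILE#$HOME/S3/}"                       # strip the ~/S3/ prefix, leaving YOUR_BUCKET/FILE_NAME
URL="http://s3-eu-west-1.amazonaws.com/$RELATIVE"  # same URL form as shown in step 4
echo -n "$URL" | xclip -selection clipboard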

One slightly confusing thing is that mounted S3 buckets get listed like hard drive partitions in file managers, complete with hard drive icons. I couldn't figure out how to change the icons or hide the entries, so I'll have to dig deeper into that.

There's probably much room for improvement, so please leave a comment if you know something that could make this tutorial better, or if you spot a mistake. Unfortunately I don't have much time for answering questions, so please don't expect much support.

Comments:

  1. Thanks. Everything worked except for the last line. The test worked great; I then unmounted and ran systemctl --user enable s3fs with no error.

    Then on the last line I get the following. (I'm on a raspberry pi btw)

    Please help!

    pi@piSecurity:~ $ systemctl --user start s3fs
    Job for s3fs.service failed because the control process exited with error code.
    See "systemctl status s3fs.service" and "journalctl -xe" for details.
    pi@piSecurity:~ $ systemctl status s3fs.service
    Unit s3fs.service could not be found.

    Replies
    1. Never mind, it was a typo in my s3fs.service file. Working great, thanks!

  2. For mounting file systems with systemd, it is better to use mount units instead of relying on services. This lets systemd manage the mount paths automatically, create directories as needed, handle mount dependencies and even auto-mount on access.

    To create a mount unit, the unit name must end with `.mount` and begin with the path to the desired mount directory, with slashes (/) converted to dashes (-). It is easy to create FUSE mount units with FUSE's (built-in) subtype resolver helper.

    The main disadvantage of systemd mount units is that users cannot create them, even for FUSE file systems - if you try, it will fail due to limitations of the mount model in Linux (relevant bugs to track may be https://github.com/systemd/systemd/issues/13741 and https://bugs.freedesktop.org/show_bug.cgi?id=73809). So mount units must be system units.

    So to mount an s3fs-fuse file system under `/home/user/S3/YOUR_BUCKET`, one can create a unit file as `/etc/systemd/system/home-user-S3-YOUR_BUCKET.mount` with the following content:

    ```
    [Unit]
    Description=S3 Storage
    After=network.target

    [Mount]
    What=YOUR_BUCKET
    Where=/home/user/S3/YOUR_BUCKET
    Type=fuse.s3fs
    Options=passwd_file=/home/user/S3/.credentials,endpoint=eu-west-1,default_acl=public-read

    [Install]
    WantedBy=multi-user.target
    ```

    You can then enable and start this unit normally.

    Replies
    1. Thanks for mentioning this. I thought about discussing this alternative, but the post was already pretty long and I specifically wanted to introduce a method that doesn't involve messing with system-wide configurations.
