
Context menu additions for sharing files in S3 buckets


Some time ago I wrote about mounting Amazon S3 buckets into your home directory, and I wanted to get automatic public URLs for sharing some of the files I stuff in my cloud buckets. I got it working quite effortlessly by writing a small script and adding a context menu item to my file manager.

This tutorial works with both Nautilus Actions and Caja Actions on any GNU/Linux distribution, though I'm assuming Debian or Ubuntu here. Install the action tool for your file manager: sudo apt install nautilus-actions or sudo apt install caja-actions

You'll also need the small clipboard utility xsel, which may not be installed by default, so install it too: sudo apt install xsel
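To check that xsel works, you can round-trip a test string through the clipboard: the -bi flag writes standard input to the clipboard and -bo prints the clipboard back out.

echo "clipboard test" | xsel -bi
xsel -bo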

The script

Let's first create the helper script to handle converting and copying. We want to make it available for all users, so some sudoing is necessary (create file, make it executable and take ownership for editing):

sudo touch /usr/local/bin/s3url2clipboard
sudo chmod +x /usr/local/bin/s3url2clipboard
sudo chown ${USER} /usr/local/bin/s3url2clipboard
gedit /usr/local/bin/s3url2clipboard


Then copy-paste the below code into the editor, save the file and close:

#!/bin/sh
# Convert S3 file:// URIs into public HTTPS URLs, one per line, and copy them to the clipboard
echo "$*" | sed -Ee 's@file:///home/[^/]+/S3/([^/[:space:]]+)(/(\S*))?@https://\1.s3.amazonaws.com/\3@g' -e 's@\s+@\n@g' | xsel -bi

That script takes a file URI (or several) from Nautilus/Caja Actions ("$*" joins them into a single space-separated string), converts them into S3 HTTP(S) URLs with sed, splits them into separate lines and pipes them to xsel, which places them on your clipboard for easy pasting into emails, instant messages, etc. That's quite a lot for a one-liner!
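You can also test the script straight from a terminal before wiring up the file manager; the user and bucket names below are just placeholders:

s3url2clipboard 'file:///home/alice/S3/my-bucket/photos/cat.jpg'
xsel -bo

The second command should print https://my-bucket.s3.amazonaws.com/photos/cat.jpg, the same string that's now on your clipboard.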

Regular expressions can be confusing, so here's some explanation (@ is used as sed delimiter to avoid ugly escaped slashes like \/\/\/):
file:///home/[^/]+/S3/
Match any user's home directory and the S3 directory, which are ignored (change this if you've mounted your buckets elsewhere)
([^/[:space:]]+)
Capture the bucket name (anything that's not a slash or whitespace; GNU sed doesn't support \s inside a bracket expression, so the POSIX class is used instead)
(/(\S*))?
Capture the rest of the URI (a slash and anything but whitespace), if any
https://\1.s3.amazonaws.com/\3
Insert the captured bucket name (\1) and path (\3) into the final URL
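If you want to see the expression in action without touching the clipboard, feed a sample URI straight into sed (again, alice and my-bucket are placeholders):

echo 'file:///home/alice/S3/my-bucket/docs/report.pdf' | sed -E 's@file:///home/[^/]+/S3/([^/[:space:]]+)(/(\S*))?@https://\1.s3.amazonaws.com/\3@g'

This should print https://my-bucket.s3.amazonaws.com/docs/report.pdf.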

The action

Now you can proceed to create the action for your file manager. Start Nautilus/Caja Actions and click File → New Action or the equivalent toolbar button. Fill in information on the various tabs as follows, and leave the rest as they are:
Action
Context label: "Copy S3 URL" (change if you wish)
Icon: I chose the "applications-internet" icon from Categories, but select any that you see fit or create your own
Command
Path: /usr/local/bin/s3url2clipboard
Parameters: %O %U
Mimetypes
Keep the default "*" rule, but add a new one with "inode/directory" and tick the "Must not match any of" radio button. That excludes directories from this action.
Folders
By default actions are available for any path under the root "/". Change that to "/home/YOUR_NAME/S3/*", replacing YOUR_NAME with your actual username.
Schemes
I'm not sure if this is ever necessary, but I changed the default "*" to "file", so that the action is excluded from regular network mounts.

You may wonder why %U (list of URIs) is used instead of %F (list of files) in the parameters. I tried the latter first, but then I had to do URL encoding, which isn't simple in plain shell without calling PHP, Python, Perl or something else. That would've been wasteful and slow, since %U provides pre-encoded strings that need fewer changes.
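To illustrate: a file named my file.jpg arrives from %F as a raw path with a literal space, and encoding it yourself means spawning an interpreter on every invocation, for example like this (a hypothetical detour, shown only to make the point):

python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' 'photos/my file.jpg'

That prints photos/my%20file.jpg. With %U the same file already arrives as file:///home/.../my%20file.jpg, so sed can pass the encoding through untouched.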

Finally click File → Save to store your custom action, and you're done. Additionally you may want to get rid of the Actions submenu in your file browser. To do that, open Edit → Preferences and uncheck the boxes below Nautilus/Caja menu layout.

Now if you right-click one or more of the files in your S3 bucket directories, you should see the menu option "Copy S3 URL" or whatever you named it.

Improvements

This is probably good enough for most purposes, but I'd like to try something smarter, such as recursive handling of subfolders. That way complex file structures could be shared with a single action.
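A rough sketch of that idea, untested and assuming the same /home/USER/S3/BUCKET mount layout as above (file names containing whitespace would still need URL encoding, which this skips):

find "$HOME/S3/my-bucket/some/folder" -type f | sed -E 's@^/home/[^/]+/S3/([^/]+)/(.*)@https://\1.s3.amazonaws.com/\2@' | xsel -bi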

Also, another action to switch S3 file status between public and private could be useful.
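If the buckets are also configured with the AWS CLI, that toggle could presumably be built around aws s3api put-object-acl; a hypothetical pair of commands for a single object (bucket and key are placeholders) would be:

aws s3api put-object-acl --bucket my-bucket --key photos/cat.jpg --acl public-read
aws s3api put-object-acl --bucket my-bucket --key photos/cat.jpg --acl private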

Ideas are welcome, so leave a comment if you have some.
