# Efficient CLI file uploads with open-source tools

Uploading files from a command line interface (CLI) keeps hands on keyboards and fits neatly into automation scripts, CI/CD pipelines, and headless server management. This post surveys popular open-source tools, shows how to set them up, and shares practical tips for reliability and security when you need to handle uploads on the CLI.
## Why upload from the CLI?
There are several advantages to handling file uploads via the command line:
- Automation: Integrate file transfers seamlessly into build or deployment pipelines.
- Scheduling: Easily schedule tasks like off-peak backups using `cron` or `systemd` timers (see the example after this list).
- Scripting: Combine CLI tools within shell scripts for complex, reproducible workflows.
- Remote Access: Avoid GUI friction when working over SSH on remote servers.
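
For instance, the scheduling point can be as small as a single crontab entry that archives a directory overnight and pushes it to object storage. This is only a sketch: the paths and bucket name are placeholders, and `s3cmd` is covered in detail later in this post.

```bash
# Edit the crontab of the user that owns the credentials: crontab -e
# At 02:30 every night, archive /var/www/html and upload it with s3cmd
30 2 * * * tar -czf /tmp/site-backup.tar.gz -C /var/www html && s3cmd put /tmp/site-backup.tar.gz s3://your-unique-bucket-name/backups/
```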
## Compare popular CLI upload tools
Several excellent open-source tools cater to different CLI file upload needs:
| Tool | Primary Use Case | Stand-out Features | License |
| --- | --- | --- | --- |
| `s3cmd` | Amazon S3 & compatible | Bucket management, sync, server-side encryption | GPL-2.0 |
| `rclone` | 40+ cloud providers | FUSE mount, checksum sync, resumable uploads | MIT |
| `curl` | Raw HTTP/S uploads | Ubiquitous, scriptable, small footprint | MIT |
| `rsync` | Local ↔ remote sync | Delta transfer, compression, partial resume | GPL-3.0 |
| `lftp` | FTP/SFTP/HTTP transfers | Parallel queues, mirroring, scripting | GPL-3.0 |
The rest of this guide focuses on two common choices, `s3cmd` for robust cloud object storage uploads and `transfer.sh` for quick, ad-hoc file sharing, but the principles apply broadly.
## Use s3cmd for cloud storage uploads

`s3cmd` is a mature, feature-rich Python utility designed for interacting with Amazon S3 and S3-compatible object storage services like MinIO, Wasabi, Backblaze B2, and others.
### Install s3cmd

`s3cmd` requires Python 3.7 or newer.
```bash
# Recommended: install the latest release via pip
# This usually adds ~/.local/bin to your PATH on most Linux distributions
pip install --user s3cmd

# Or use system package managers (may lag behind the latest version)
# Debian / Ubuntu
sudo apt-get update && sudo apt-get install s3cmd

# macOS via Homebrew
brew install s3cmd
```
Verify the installation: `s3cmd --version` should display 2.4.0 or newer.
### Configure credentials
Run the interactive configuration wizard:

```bash
s3cmd --configure
```
You'll be prompted for your Access Key ID, Secret Access Key, default region, and preferences for encryption and HTTPS. The settings are saved to a `.s3cfg` file in your home directory. For automated environments, consider using environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`) or IAM roles instead of storing keys in the configuration file.
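
For example, a CI job might export those variables from its secret store at runtime instead of shipping a `.s3cfg` file. Here is a minimal sketch with placeholder values, assuming an `s3cmd` version recent enough to pick up the AWS environment variables:

```bash
# Placeholder credentials, typically injected by the CI system's secret store
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretAccessKeyValue"
export AWS_DEFAULT_REGION="us-east-1"

# Quick smoke test: list buckets without a .s3cfg on disk
s3cmd ls
```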
### Upload examples
```bash
# Define your target bucket (replace with your actual bucket name)
aws_bucket="s3://your-unique-bucket-name"

# Upload a single file to a specific path within the bucket
s3cmd put report.pdf "$aws_bucket/reports/"

# Synchronize a local directory with a path in the bucket
# --delete-removed ensures files deleted locally are also removed from the bucket
s3cmd sync --delete-removed ./local-data/ "$aws_bucket/data-backup/"

# Upload a file with server-side encryption enabled
s3cmd put --server-side-encryption sensitive-data.zip "$aws_bucket/private/"
```
### Tune for large files

For large file uploads, `s3cmd` supports multipart uploads, which break the file into smaller chunks.
```bash
# Upload a large file using 50 MiB chunks, retrying up to 5 times on failure
# with a 5-second delay between retries, and show a progress bar
s3cmd put \
  --multipart-chunk-size-mb=50 \
  --max-retries=5 \
  --retry-delay=5 \
  --progress \
  large-video.mp4 "$aws_bucket/videos/"
```
## Use transfer.sh for quick file sharing

`transfer.sh` provides a simple way to share files quickly from the command line. While public instances have existed, the project now primarily recommends self-hosting for reliability.
### Run your own transfer.sh instance

You can easily run `transfer.sh` using Docker. The following command starts a temporary instance storing files in `/tmp` inside the container:
```bash
# Start a disposable server on host port 8080
# Files will be stored in /tmp inside the container
docker run -d --rm -p 8080:8080 \
  --name transfersh \
  dutchcoders/transfer.sh:latest \
  --provider local --basedir /tmp/
```
For persistent storage, mount a host directory to `/tmp` inside the container (e.g., `-v /path/on/host:/tmp`). Consult the `transfer.sh` documentation for more advanced configurations.
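
As a concrete example, the following variant keeps uploads on the host so they survive container restarts; the host path here is a placeholder:

```bash
# Persistent variant: files survive container restarts because /tmp is a bind mount
docker run -d -p 8080:8080 \
  --name transfersh \
  -v /srv/transfersh-data:/tmp \
  dutchcoders/transfer.sh:latest \
  --provider local --basedir /tmp/
```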
### Share a file via your instance
Once your instance is running (replace `localhost:8080` if needed):
```bash
# Upload diagram.png to your self-hosted transfer.sh
# The command outputs the shareable URL upon successful upload
curl --upload-file ./diagram.png http://localhost:8080/diagram.png
```
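
The URL printed by the upload typically includes a short token generated by the server. Anyone with that URL can fetch the file, for example (the token below is a placeholder):

```bash
# Download the shared file using the URL returned by the upload
curl -fSL -o diagram-copy.png http://localhost:8080/abc123/diagram.png
```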
### Handy public alternatives
If self-hosting isn't practical for a quick, one-off share, several public services offer similar functionality:
```bash
# Upload to file.io (file expires after first download)
# Note the use of -F "file=@..." for a form-based upload
curl -fsSL -F "file=@backup.tar.gz" https://file.io

# Upload to temp.sh (retained for a default duration, often 24 hours)
# Note the use of -T for a direct file upload
# The '-o -' tells curl to write the response (containing the URL) to stdout
curl -fsSLo - -T notes.txt https://temp.sh/notes.txt
```
Always review the terms and privacy policies of public file-sharing services before uploading sensitive data.
## Secure your CLI uploads
When automating file transfers, security is paramount:
- Use HTTPS: Always prefer HTTPS endpoints (`s3cmd` uses HTTPS by default) to protect credentials and data in transit. Avoid plain HTTP.
- Encrypt Sensitive Data: Use server-side encryption options provided by your storage provider (e.g., `s3cmd --server-side-encryption`). For client-side encryption before upload, tools like `age` or `gpg` are excellent choices.
- Manage Credentials Securely: Avoid hardcoding access keys or secrets in scripts. Use environment variables, dedicated secrets management tools (like HashiCorp Vault), or IAM roles (for cloud environments like AWS EC2) that grant temporary, least-privilege credentials. Keep `.s3cfg` files out of version control.
- Apply Least Privilege: Configure IAM policies or bucket policies to grant only the necessary permissions (e.g., `s3:PutObject` for uploads, but not `s3:DeleteObject` if deletion isn't required). Restrict public access unless absolutely necessary. Consider bucket policies that enforce encryption on upload.
- Rate Limit: If performing bulk uploads, use rate-limiting options (`s3cmd --limit-rate=1M` for 1 MiB/s, `rclone --bwlimit 1M`) to prevent saturating network links or hitting API rate limits. Implement exponential backoff in scripts when encountering rate-limiting errors (like HTTP 429); a minimal sketch follows this list.
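
Here is that backoff sketch in Bash; the bucket path is a placeholder and the retry limits are arbitrary:

```bash
# Retry an s3cmd upload with exponential backoff (2s, 4s, 8s, ...)
attempt=1
max_attempts=5
delay=2
until s3cmd put backup.tar.gz s3://your-unique-bucket-name/backups/; do
  if (( attempt >= max_attempts )); then
    echo "Upload failed after ${max_attempts} attempts" >&2
    exit 1
  fi
  echo "Attempt ${attempt} failed; retrying in ${delay}s..." >&2
  sleep "$delay"
  delay=$(( delay * 2 ))
  attempt=$(( attempt + 1 ))
done
```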
## Automate with systemd timers

On modern Linux systems, `systemd` timers offer a robust alternative to `cron` for scheduling tasks. They handle missed runs (e.g., if the server was down) and provide better logging integration. Here's how to set up a daily backup upload using `s3cmd`:
- Create the service unit file `/etc/systemd/system/backup-upload.service`:
```ini
[Unit]
Description=Tar and upload nightly backup to S3
# Ensures the network is up before starting
Requires=network-online.target
After=network-online.target

[Service]
Type=oneshot
# Path to your backup script
ExecStart=/opt/scripts/backup-upload.sh
# Run the script as a specific user (create this user if needed)
User=backup
Group=backup
# Set environment variables for s3cmd if not using .s3cfg or IAM roles
# Environment="AWS_ACCESS_KEY_ID=your_key_id"
# Environment="AWS_SECRET_ACCESS_KEY=your_secret_key"
# Environment="AWS_DEFAULT_REGION=us-east-1"

[Install]
WantedBy=multi-user.target
```
- Create the timer unit file `/etc/systemd/system/backup-upload.timer`:
```ini
[Unit]
Description=Run backup-upload.service daily at 2 AM

[Timer]
# Run daily at 02:00
OnCalendar=*-*-* 02:00:00
# Run on boot if the last scheduled run was missed
Persistent=true
Unit=backup-upload.service

[Install]
WantedBy=timers.target
```
- Create the backup script `/opt/scripts/backup-upload.sh` (ensure it's executable: `chmod +x /opt/scripts/backup-upload.sh`, and owned by the `backup` user: `chown backup:backup /opt/scripts/backup-upload.sh`):
```bash
#!/usr/bin/env bash
# Exit immediately if a command exits with a non-zero status,
# treat unset variables as an error, and fail pipelines if any command fails.
set -euo pipefail

# Configuration
S3_BUCKET="s3://your-unique-bucket-name/backups"  # Replace with your bucket path
BACKUP_SOURCE_DIR="/var/www/my-app-data"          # Replace with the directory to back up
TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)
ARCHIVE_FILE="/tmp/backup-${TIMESTAMP}.tar.gz"
LOG_FILE="/var/log/backup-upload.log"             # Ensure the 'backup' user can write here

# Redirect stdout and stderr to the log file
exec >>"$LOG_FILE" 2>&1

echo "----------------------------------------"
echo "[$(date)] Starting backup process..."

# Create the compressed archive
echo "[$(date)] Creating archive: $ARCHIVE_FILE from $BACKUP_SOURCE_DIR"
# Use -C to change directory, avoiding leading paths in the archive
tar -czf "$ARCHIVE_FILE" -C "$(dirname "$BACKUP_SOURCE_DIR")" "$(basename "$BACKUP_SOURCE_DIR")"
echo "[$(date)] Archive created successfully."

# Upload to S3 using s3cmd:
# --storage-class=STANDARD_IA uses Infrequent Access for cost savings,
# --acl-private ensures the object is private,
# --progress shows progress (useful in logs).
echo "[$(date)] Uploading $ARCHIVE_FILE to $S3_BUCKET"
s3cmd put \
  --storage-class=STANDARD_IA \
  --acl-private \
  --progress \
  "$ARCHIVE_FILE" "$S3_BUCKET/"
echo "[$(date)] Upload completed."

# Clean up the local archive file
echo "[$(date)] Removing local archive $ARCHIVE_FILE"
rm -f "$ARCHIVE_FILE"
echo "[$(date)] Cleanup finished."

echo "[$(date)] Backup process finished successfully."
echo "----------------------------------------"
exit 0
```
- Enable and start the timer:
```bash
# Reload systemd to recognize the new files
sudo systemctl daemon-reload

# Enable the timer to start on boot
sudo systemctl enable backup-upload.timer

# Start the timer immediately (it will trigger based on OnCalendar)
sudo systemctl start backup-upload.timer

# Check the status
sudo systemctl status backup-upload.timer
sudo systemctl status backup-upload.service

# List active timers
sudo systemctl list-timers --all
```
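
To validate the setup without waiting for 2 AM, you can trigger the service once by hand and read its output from the journal:

```bash
# Trigger the service immediately, independent of the timer
sudo systemctl start backup-upload.service

# Review the unit's journal entries from today's runs
journalctl -u backup-upload.service --since today

# Or run the script directly as the backup user
sudo -u backup /opt/scripts/backup-upload.sh
```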
Remember to configure log rotation (e.g., using `logrotate`) for `/var/log/backup-upload.log`.
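
A minimal `logrotate` drop-in for that log could look like the following sketch; adjust the retention policy to your needs:

```
# /etc/logrotate.d/backup-upload
/var/log/backup-upload.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    create 0640 backup backup
}
```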
## Troubleshoot common issues
| Symptom | Possible Root Cause | Potential Fix |
| --- | --- | --- |
| Timeout on large uploads | Network instability, low timeout | Increase the timeout (`s3cmd --socket-timeout=300`), use multipart uploads |
| "Access Denied" errors | Incorrect IAM policy/ACLs | Verify permissions (`s3:PutObject`, bucket policy) for the key/role used |
| Partial uploads | Interrupted connection | Use tools with resume support (`rclone`, `s3cmd` multipart), ensure a stable connection |
| 429 Too Many Requests errors | Hitting API rate limits | Implement exponential backoff in scripts, use `--limit-rate` (`s3cmd`, `rclone`) |
| Configuration not found | `.s3cfg` missing or wrong path | Run `s3cmd --configure`, check permissions, use environment variables |
| Command not found | Tool not installed or not in `PATH` | Verify installation; check that `~/.local/bin` is in `$PATH` if installed via pip |
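
When the cause isn't obvious from the table above, re-running the failing command with `s3cmd`'s debug output usually reveals the exact request and response involved; the bucket path below is the placeholder used earlier:

```bash
# Print verbose request/response details for a failing upload
s3cmd --debug put report.pdf s3://your-unique-bucket-name/reports/
```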
## Bring in Transloadit for web & mobile uploads
While CLI tools excel in server-side and automation contexts, handling uploads directly from web browsers or mobile applications requires different solutions that manage network interruptions, provide progress feedback, and offer resumability.
This is where Transloadit's /upload/handle Robot comes in. It's designed specifically for robust client-side uploads, and the Assembly Instructions for it can be as small as this:
```json
{
  "steps": {
    ":original": {
      "robot": "/upload/handle",
      "result": true
    }
  }
}
```
You can integrate this Robot with front-end libraries like Uppy (our versatile file uploader) or any client supporting the tus resumable upload protocol. This combination provides features like pausing/resuming uploads, automatic retries, and real-time progress updates, enhancing the user experience for client-facing applications.
Happy uploading!