I recently needed to perform a bulk upload to Dropbox. I had around 390 GB of data, totalling around 70,000 files, on an external drive and wanted to upload the whole lot. I didn't, however, want to copy them all onto my internal drive and let Dropbox handle the syncing, so I broke out rclone to do the heavy lifting for me.
Once it was installed, configuration was very straightforward – simply drop into Terminal and type rclone config
This drops you into an interactive configuration where you can easily set up a connection to Dropbox (a remote in rclone parlance), including authentication via MFA in your web browser.
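Once the remote exists, it's worth a quick sanity check before committing to a long transfer. A minimal sketch, using Dropbox as the remote name:

# Confirm the new remote is configured
rclone listremotes

# List top-level folders on the remote to confirm authentication works
rclone lsd Dropbox: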
Once the remote was set up (I called my remote Dropbox) I was then able to initiate the bulk upload with the following rclone command:
rclone copy \
--transfers 24 \
--checkers 48 \
--dropbox-chunk-size 150M \
--tpslimit 15 \
--buffer-size 256M \
--retries 10 \
--retries-sleep 30s \
--low-level-retries 20 \
--ignore-existing \
--stats 10s \
--progress --progress-terminal-title \
--log-file ~/rclone-progress.log \
--log-level INFO \
"/Volumes/External SSD/Data" "Dropbox:Data Backup"
I wanted to make the most of my internet connection's 100 Mb/s upload and the multiple cores in my Mac mini. Here's a concise overview of each flag:
Transfer control
--transfers 24 – upload up to 24 files simultaneously (see the sizing note after this list)
--checkers 48 – run 48 parallel workers comparing source against destination to determine what needs uploading
--ignore-existing – skip files already present at the destination without checking whether they've changed; I wasn't trying to sync data with the cloud, just perform a one-off migration
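If you want to size --transfers and --checkers against your own machine rather than lifting my numbers, the logical core count is a quick reference point; on macOS:

# Number of logical CPU cores on macOS
sysctl -n hw.ncpu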
Dropbox-specific
--dropbox-chunk-size 150M – breaks large files up into 150 MB chunks for upload (150M is the largest chunk size rclone allows for Dropbox); larger chunks mean fewer API calls and better throughput
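Worth noting that chunks are held in memory while they upload, one per transfer, so the chunk size multiplies with --transfers. A rough back-of-envelope for the settings above:

24 transfers × 150 MB chunks ≈ 3.6 GB of memory for chunk buffers at peak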
Rate limiting
--tpslimit 15 – caps API transactions at no more than 15 per second to avoid triggering Dropbox's rate limiter
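If the cap still turns out to be too generous, Dropbox answers with HTTP 429 responses and rclone records the failures in the log. A rough way to look for them afterwards (assuming the 429 / too_many_requests wording appears in the log text, which may vary by rclone version):

grep -iE "429|too_many_requests" ~/rclone-progress.log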
Memory and buffering
--buffer-size 256M – each transfer can use up to 256 MB of memory as a read buffer, which smooths out disk I/O and keeps transfers feeding the network consistently
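The buffer is also per transfer, so it stacks on top of the chunk buffers above. A rough worst case for this run:

24 transfers × 256 MB read buffers ≈ 6 GB, plus ≈ 3.6 GB of chunk buffers ≈ 9.6 GB in total

That's an upper bound rather than typical usage, but worth scaling down on a machine with limited RAM.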
Reliability and retries
--retries 10 – retry the whole copy operation up to 10 times if any transfers fail (see the log check after this list)
--retries-sleep 30s – wait 30 seconds between those retry attempts, giving the remote API time to recover before rclone tries again
--low-level-retries 20 – retry individual low-level operations (such as a failed HTTP request for a chunk) up to 20 times before the error bubbles up to the higher-level --retries
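To get a feel for how much retrying actually happened, the transfer log can be searched once the run finishes; rclone's standard log format tags failures with ERROR:

# Count logged errors in the transfer log
grep -c "ERROR" ~/rclone-progress.log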
Logging and progress
--progress – displays live transfer statistics in the terminal
--progress-terminal-title – displays information like the ETA in the terminal window's title bar
--stats 10s – sets how frequently the progress statistics are refreshed; I wasn't going to sit there for the nearly 12 hours this took, so there was no need to update the status every second
--log-file – writes a full transfer log to the specified file
--log-level INFO – sets log verbosity to INFO, which captures completed transfers, errors, and retries without excessive noise
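If you want to verify the migration afterwards, rclone's check command can compare source and destination. Something along these lines, comparing sizes only and only checking that files on the source exist at the destination, so nothing needs downloading:

rclone check --size-only --one-way \
  "/Volumes/External SSD/Data" "Dropbox:Data Backup"

rclone size "Dropbox:Data Backup"

The rclone size at the end reports the total object count and bytes stored at the destination, which is a quick sanity check against the roughly 390 GB and 70,000 files that went up.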