How to resume FTPS uploads and downloads in Node.js
A sad fact of life is that large file transfers fail from time to time. Network drops, server timeouts, and connection resets can all cut a transfer short, usually at the worst possible moment.
We’re going to show you how to build a Node.js script that resumes interrupted FTPS uploads and downloads using basic-ftp and p-retry, so you can pick up from where you left off when a transfer fails.
What does “resume” mean for FTPS transfers?
You obviously know what resuming a file download or upload means in general; for FTPS transfers, it means a transfer continues from the last byte that was successfully written.
This lets your upload or download pick up from where it left off instead of starting from scratch, saving you the time a full restart would cost. The process is quite straightforward: check how many bytes have already been transferred at the destination, then continue from that offset.
FTP (and FTPS) handle this with the restart command (REST), which you can learn more about in the RFC 3659 documentation. The client tells the server “hey, start from byte N” before issuing the download (RETR) or upload (STOR/APPE) commands.
The server then has to seek to that position or byte in the file. Where SFTP is different is that it opens a file handle and uses seek() on the handle to jump to the correct offset. You get the same outcomes, but the mechanisms at work live at different layers of their respective protocols.
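In protocol terms, a resumed download boils down to a short control-channel exchange like this (simplified; the filename and byte offset are illustrative):

```
> TYPE I                       client: switch to binary mode
> REST 5242880                 client: restart from byte 5242880
< 350 Restarting at 5242880    server: acknowledged, send RETR or STOR next
> RETR bigfile.bin             client: start the (resumed) download
< 150 Opening BINARY mode data connection
```

The client library handles all of this for you; we never issue REST by hand in this article.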
If you’d like to get a better idea of how SFTP and FTPS compare to one another, then check out our FTPS vs. SFTP article.
Prerequisites
If you’d like to follow along, then you’ll need Node.js 22 or later (any current LTS release) and two packages:
npm install basic-ftp p-retry
basic-ftp is an FTP/FTPS client for Node.js with built-in support for TLS and passive mode, plus the resume support (via startAt and append options) that we will need. p-retry wraps any async function with configurable retry logic and exponential backoff.
At the time of writing, the latest basic-ftp release is 5.3.0. Older versions have known security vulnerabilities that were patched in 5.2.0 and 5.2.1, and running npm install basic-ftp will pull the latest by default. You’ll also need an FTPS server to test against. We’ll use SFTP To Go later in the article, but any server that supports explicit FTPS on port 21 will work as well.
One more thing worth noting is that p-retry v7+ is an ES module, so we’ll write our script as .mjs (or set "type": "module" in the package.json).
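If you take the package.json route, the relevant field looks like this (the version ranges are illustrative):

```json
{
  "type": "module",
  "dependencies": {
    "basic-ftp": "^5.3.0",
    "p-retry": "^7.0.0"
  }
}
```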
Connect to your FTPS server
The basic-ftp library connects to an FTPS server through its access() method. Here’s a minimal connection test that lists the remote directory:
import { Client } from "basic-ftp";
Setting secure: true enables explicit FTPS: the client connects on port 21, then upgrades to TLS through the AUTH TLS command. This is the mode that we’ll use for connecting to SFTP To Go. If you’re working with a legacy server that requires implicit FTPS (TLS from the first byte, usually on port 990), use secure: "implicit" instead.
For development with self-signed certificates, you can pass TLS options with:
await client.access({
  // ...host, user, password, and secure: true as before, plus:
  secureOptions: { rejectUnauthorized: false }, // dev only: trusts any certificate
});
Once you have a working connection, you can build the transfer logic.
Download with FTPS resume support
The download process goes like this: we save the incoming data to a .part file, check its size on each attempt, and pass that size as the startAt parameter to downloadTo(). When the download completes, we rename the .part to the final filename.
import { stat, rename } from "node:fs/promises";
When downloadTo() receives a file path and a startAt offset, it applies the offset to both the remote read position and the local file write position. So, if 5 MB of a 20 MB file already sits in the .part file, the library sends REST 5242880 to the server and appends the remaining bytes to the local file from that offset.
The trackProgress() callback fires at regular intervals during the transfer so that we know what it’s up to, and the info.bytes value tells us the byte transfer count in the current session (not to be confused with the total file progress), so we need to add startAt to get the real total.
Upload with FTPS resume support
Uploads follow the same pattern, but in reverse. We upload to a remote .part file, check its size with client.size(), and use appendFrom() with a localStart offset to skip the bytes that are already on the server. Once the upload finishes, we rename the remote file to its final filename.
async function uploadWithResume(client, localPath, remotePath) {
You probably noticed that the first upload uses uploadFrom() (a fresh STOR command) while subsequent attempts use appendFrom() (the APPE command with a localStart offset).
The reason is that appendFrom() tells the server to append data to the existing remote file, and localStart tells the library to skip that many bytes in the local file before reading. Using them together lets us resume from where the previous attempt stopped, which is exactly what we want.
Not every server supports APPE; the script catches the 502 response and restarts with a fresh upload instead.
Handle FTPS connection drops with retry logic
If at first you don’t succeed, wrap the entire connect-transfer-close cycle with p-retry (I think that’s how the old saying went). But seriously, implementing this logic allows us to reconnect and pick up from the .part file automatically, as you’ll see below:
import pRetry from "p-retry";
This implementation lets us wrap the entire connection lifecycle with retry. Each attempt creates a fresh FTP client, connects, runs the operation, and closes. The resume logic inside downloadWithResume and uploadWithResume checks the .part file size at the start of each call, which means that every retry picks up from the last successful byte, and not from zero.
The default configuration starts with a one second delay and doubles it on each failure until it reaches its cap of 30 seconds. For large files on unstable connections, you may want more retries and a longer maxTimeout.
If you are having a bad day, and you get errors that you don’t want to retry, like authentication failures or “file not found”, then you’ll be pleased to know that p-retry provides us with an AbortError class that we can throw inside the operation function to stop the loop right away.
The final script (don’t worry, it’s coming!) retries all errors by default, but feel free to change these settings to suit you if you want to have the script fail on certain error conditions.
Verify file size after FTPS transfer
After the transfer completes, it's worth checking that the file is indeed all there. In our case, the script compares the local file size against the remote file size to double check that everything ran smoothly:
import { stat } from "node:fs/promises";
Size comparison is the right approach here because FTP has no server-side command to work out checksums, so there’s no way to do a remote hash comparison over the protocol.
With SFTP, we can check modification timestamps to detect file changes between resume attempts. With FTP, the MDTM command returns the server-side timestamp rather than the original file’s, so it’s less useful for change detection. Size is a reliable method in this case because the storage layer handles data integrity itself.
If you have a situation where you suspect that files may change between resume attempts, you can add a tail-byte comparison for extra reassurance. This involves downloading the last X bytes by setting startAt to the file size minus X, and then hashing that range against the same bytes in the original file. With FTP, downloadTo() writes to a file on your disk instead of returning bytes directly, so it takes a bit more work to add this kind of check.
We recommend this step if data integrity and consistency are a priority for your workflows, especially if you work with files that are accessed and modified often. We haven't implemented this feature in our example for simplicity's sake, but you can totally build on it with your own custom features to cater for your specific requirements using this code as your starting point.
Monitor FTPS transfer progress
Our script relies on basic-ftp’s trackProgress method, which fires a callback at regular intervals during a file transfer. The callback receives an object with the current filename, the transfer type (upload or download), the bytes transferred in the current session, and the total bytes:
client.trackProgress((info) => {
The library handles buffering itself and streams data through Node’s standard readable/writable streams. There are no chunk-size configurations to deal with on our end, so that’s one less thing to worry about.
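As a sketch, progress reporting for a resumed transfer can be wired up like this. `overallProgress` and `reportProgress` are our names; `startAt` is the offset the resumed transfer began from, and `totalSize` would come from fs.stat or client.size():

```javascript
// Combine this session's bytes with the bytes a previous attempt already moved.
function overallProgress(info, startAt, totalSize) {
  const done = startAt + info.bytes; // bytes this session + bytes already there
  return { done, percent: totalSize > 0 ? (done / totalSize) * 100 : 0 };
}

// Hook the helper into basic-ftp's trackProgress callback.
function reportProgress(client, startAt, totalSize) {
  client.trackProgress((info) => {
    const { done, percent } = overallProgress(info, startAt, totalSize);
    console.log(
      `${info.name} (${info.type}): ${done}/${totalSize} bytes (${percent.toFixed(1)}%)`
    );
  });
}
```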
Use SFTP To Go as your FTPS server
SFTP To Go supports explicit FTPS on port 21, as well as SFTP and S3 access. To connect with the script that we've built, you’ll need to follow the steps below:
1. Navigate to your SFTP To Go Dashboard > Credentials Tab.
2. Copy the host, username, and password for your credential.
3. Connect on port 21 with secure: true (explicit FTPS).
node ftps_transfer.mjs upload \
SFTP To Go uses Amazon S3 as its storage backend, which means that your files are encrypted at rest with AES-256 and encrypted in transit with TLS. Transfer logs are accessible from the dashboard (ideal for auditing, and great news if you’re working under compliance frameworks like SOC 2, HIPAA, or GDPR).
If you need to schedule recurring transfers, Cron To Go can handle the scheduling, monitoring, and alerting for you.
Complete FTPS upload resume script
Now that we’re through some of the basic mechanics, it’s probably a good idea for you to look at everything together in the full ftps_transfer.mjs script. This will serve as a great starting point for more complex workflows that need reliable uploads and downloads.
We’ve used a class called FTPSTransfer that wraps both the upload and download functions together, along with the retry logic and size verification. We’ll show you some usage examples further on in the article, all of which can be fired off from the CLI.
#!/usr/bin/env node
Trying out the FTPS resume script
Now that you have the full script, it's time to take it for a spin. We’ll upload a file, download it, and then deliberately interrupt the transfer to see the resume logic kick in.
We need something inert to transfer, so let’s create a test file. Our example is a 10 MB file to keep things relatively light:
dd if=/dev/urandom of=testfile.bin bs=1M count=10
Your output should look something like this:
Upload to SFTP To Go with FTPS resume support
Let’s run an upload with the size verification enabled:
node ftps_transfer.mjs upload \
The file uploaded to SFTP To Go with no issues.
Download from SFTP To Go with FTPS resume support
Now we can download the same file that we just uploaded to confirm that everything is working as it should:
node ftps_transfer.mjs download \
Again, the script works great, and we downloaded our file with no issues.
Resuming FTPS transfers after an interruption
Now that we’ve verified the script works as intended, we can test the interruption and retry parts of our script.
Uploading test
Here, we connect to an FTPS server and upload an even larger file, 1 GB in size. This gives us plenty of time to interrupt the upload and to see how it reconnects and resumes. Notice that after we interrupt the upload and resume it again, the script finds the partial file and resumes from where we left off.
We interrupt it again and then continue, and we see that it reconnects and continues its upload until completion.
Downloading test
Again, we connect to our FTPS server and interrupt our file transfer, this time with a download.
A note on S3-backed FTPS servers
Things work a little differently on cloud storage backends like Amazon S3, which use immutable objects. This means you can't append bytes to an existing file on the server.
FTPS services built on S3 (including SFTP To Go) don't support the APPE command, so the upload resume functionality won't work on these servers. The script detects this and falls back to a fresh upload automatically. The .part pattern still protects you because an interrupted upload never overwrites your final file, and a rerun will always produce a clean result.
Download resume works on any FTPS server, including S3-backed ones, because REST + RETR reads from an offset instead of modifying the object.
If you absolutely need to have upload resume functionality that works with S3 storage, SFTP handles this differently. The SFTP protocol works with file handles with seek(), which lets the server manage the partial write internally. See our companion article on resuming interrupted SFTP downloads in Python for that approach.
Troubleshooting
Things don’t always go to plan, so here’s a list of some common issues with quick fixes that you can use to get back in business if you find that the script isn’t working straight out of the gates.
TLS certificate errors
If you see UNABLE_TO_VERIFY_LEAF_SIGNATURE or SELF_SIGNED_CERT_IN_CHAIN, the server’s TLS certificate isn’t trusted by Node’s default certificate authority list. This happens with self-signed certs in dev environments and isn’t a huge problem to solve.
For development, you can bypass verification by passing the following option to the access() method:
secureOptions: { rejectUnauthorized: false }
Don’t do this in production. Instead, you’ll want to install the server’s CA certificate into your system’s trust store, or point Node at it via the NODE_EXTRA_CA_CERTS environment variable, which you can do as follows:
NODE_EXTRA_CA_CERTS=/path/to/ca-cert.pem node ftps_transfer.mjs upload ...
Connection timeouts on large files
FTP control connections can time out if the server sees them as being idle during long data transfers. If your transfers fail before they’re completed, try increasing the client timeout when you create the Client instance:
const client = new Client(60_000); // 60-second timeout
Some servers also support keepalive commands on the control channel, so check your server’s documentation if you prefer that approach. If timeouts persist, check whether your server allows you to increase the idle timeout limit, or configure an intermediate firewall to allow longer connections to persist.
Transfer finished but the remote file size doesn’t match
If the size check fails immediately after a transfer completes, wait a moment and retry. Some servers can show stale size metadata for a little while after a write completes, but simply adding a short delay before verification usually solves the issue.
Also make sure that you’re transferring in binary mode (you should be if you haven’t modified the script as that is the default for basic-ftp). ASCII mode can sometimes alter line endings during transfer, which will change the file size. Binary mode transfers data verbatim, and that’s what you want when you’re transferring non-text files.
Passive mode and firewall issues
basic-ftp uses passive mode by default, and it's the right choice for most setups because in passive mode, the server tells the client which port to connect to for data transfers. This works great with Network Address Translation (NAT), and most firewalls because the client initiates all the connections.
If connections succeed, but the data transfer hangs or fails outright, it's usually a firewall blocking the passive data port range. The server needs a few ports open for incoming data connections (usually in the 49152-65535 range). If you control the server’s firewall, make sure this range is accessible for clients. SFTP To Go handles this automatically, so you’ll have no worries there.
Wrapping up
You now have your very own Node.js script that handles resumable FTPS uploads and downloads like a champ.
It has basic retry logic and size verification to help you keep the data flowing, even when your connection is not cooperating. The .part file pattern keeps your partial transfers safe, the retry wrapper reconnects and resumes automatically, and if the script itself is killed, simply rerunning it picks up from the last successful byte.
If your workflow uses SFTP alongside FTPS, the same resume patterns apply. We cover the SFTP approach with Paramiko in our previous articles for resuming SFTP uploads and downloads.
Frequently asked questions
How does FTPS resume an interrupted transfer?
FTPS uses the REST (Restart) command to tell the server to start reading or writing from a specific byte offset. For downloads, the client checks how many bytes the local .part file already has, and passes that as the starting position. For uploads, the client checks the remote .part file size and uses the APPE command to append from that offset. The transfer picks up from where it left off instead of starting all over again.
What is the .part file, and why is it important?
The .part file is a temporary partial file that exists only while the transfer is active. After the transfer completes, the .part file is renamed to its final filename, leaving you with a complete copy of the file you sent or received. The .part pattern stops an interrupted transfer from overwriting your existing file, so a failure mid-transfer never leaves a corrupted, incomplete copy under the final name.
What is the difference between uploadFrom and appendFrom in basic-ftp?
uploadFrom sends a STOR command that creates or overwrites a remote file from scratch. appendFrom sends an APPE command that adds data to the end of an existing remote file. For resume, appendFrom is used with a localStart offset so that the library skips the bytes already on the server and sends only the data that is left.
Why does the script use size comparison instead of checksums for verification?
FTP has no server-side command that can compute checksums, so there is no way to do a remote hash comparison over the protocol. Size comparison is practical and cheap, giving you a quick way to verify your file’s integrity.
What is the difference between FTPS and SFTP?
The main difference is that FTPS, or FTP over SSL/TLS, uses SSL/TLS for security and requires a separate channel for data. SFTP, or SSH File Transfer Protocol, was built on SSH, uses a single port for all communication, and is usually more firewall-friendly.