Chunked browser uploads to Google Cloud Storage with JavaScript

Uploading multi-gigabyte files straight from a browser to Google Cloud Storage (GCS) is tricky. Network hiccups, tab closures, and time-outs can ruin a near-finished transfer. Chunked uploads—with pause, resume, and retry—fix that.
This DevTip walks through a modern, dependency-free approach: short-lived V4 signed URLs generated on your back-end and a lightweight JavaScript uploader on the front-end.
Why chunked uploads?
- Resumable – continue after a flaky connection.
- Reliable – smaller payloads mean fewer complete restarts.
- User-friendly – pause or cancel without losing progress.
- Granular feedback – fine-grained progress bars.
Configure Google Cloud Storage
- Create a bucket in the Cloud Console or with `gcloud storage buckets create gs://your-bucket-name` (ensure your bucket name is globally unique).
- Add a CORS (Cross-Origin Resource Sharing) rule to your bucket to allow browsers from your domain to make `PUT` requests directly to GCS. You can configure this in the Google Cloud Console or using `gcloud`. Here's an example JSON configuration (a sketch for setting it programmatically follows this list):
```json
[
  {
    "origin": ["https://yourdomain.com"],
    "method": ["PUT"],
    "responseHeader": ["Content-Type", "Content-Length", "ETag"],
    "maxAgeSeconds": 3600
  }
]
```
- Create a service account and grant it the Storage Object Admin role (or a more narrowly scoped role such as Storage Object Creator plus Storage Object Viewer if it only needs to write and compose objects in a specific path). Download its JSON key file and store it securely; your back-end will use this key to sign URLs.
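If you prefer to apply the CORS rule from code rather than the console or `gcloud`, the Node.js client's `setCorsConfiguration` method can set the same rule. A minimal sketch, assuming the same bucket name used later in this post and an ES module context (for top-level `await`):

```js
import { Storage } from '@google-cloud/storage'

const storage = new Storage()

// Apply the CORS rule shown above (same fields as the JSON configuration).
await storage.bucket('your-bucket-name').setCorsConfiguration([
  {
    origin: ['https://yourdomain.com'],
    method: ['PUT'],
    responseHeader: ['Content-Type', 'Content-Length', 'ETag'],
    maxAgeSeconds: 3600,
  },
])
```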
Back-end: issue short-lived signed URLs
Your server-side code will handle two main tasks: generating signed URLs for each chunk and composing the chunks into a final file. Below is a Node.js (18+) example using Express and the `@google-cloud/storage` library.

`server.js`:
```js
import express from 'express'
import { Storage } from '@google-cloud/storage'

const app = express()
app.use(express.json())

// Initialize storage with credentials from your service account key file.
// Ensure the GOOGLE_APPLICATION_CREDENTIALS environment variable is set,
// or pass the keyFilename to the Storage constructor.
const storage = new Storage()
const bucket = storage.bucket('your-bucket-name') // Replace with your bucket name

// Endpoint to get a signed URL for uploading a chunk
app.post('/signed-url', async (req, res) => {
  const { name, type, idx, total } = req.body
  // Chunks are stored temporarily, e.g., in a 'tmp/' directory within the bucket
  const tempObjectName = `tmp/${name}-${idx}-of-${total}`
  const file = bucket.file(tempObjectName)
  const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: type,
  }
  try {
    const [url] = await file.getSignedUrl(options)
    res.json({ url, tempObjectName })
  } catch (error) {
    console.error('Failed to get signed URL:', error)
    res.status(500).json({ error: 'Failed to get signed URL' })
  }
})

// Endpoint to compose chunks into a single file and clean up
app.post('/compose', async (req, res) => {
  const { name, parts } = req.body // `parts` is an array of temporary object names
  const finalFile = bucket.file(`uploads/${name}`) // Destination for the final composed file
  const sourceFiles = parts.map((partName) => bucket.file(partName))
  try {
    await bucket.combine(sourceFiles, finalFile)
    // Optionally, delete the temporary chunk files after successful composition
    await Promise.all(sourceFiles.map((f) => f.delete().catch(console.error)))
    res.json({ ok: true, message: 'File composed successfully.' })
  } catch (error) {
    console.error('Failed to compose file:', error)
    // Consider attempting to delete parts even on compose failure if appropriate
    res.status(500).json({ error: 'Failed to compose file' })
  }
})

const port = process.env.PORT || 3000
app.listen(port, () => {
  console.log(`Server listening on port ${port}`)
})
```
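One caveat: a single compose request accepts at most 32 source objects, and a multi-gigabyte file cut into 5 MiB chunks produces far more parts than that. A rough sketch of composing in batches, assuming the `bucket` object from `server.js` (the helper name `composeInBatches` is made up for this example):

```js
const MAX_COMPOSE_SOURCES = 32 // GCS limit on source objects per compose request

// Repeatedly compose groups of up to 32 objects into intermediates
// until only the final object remains.
async function composeInBatches(partNames, finalName) {
  let sources = partNames.map((partName) => bucket.file(partName))
  let round = 0
  while (sources.length > MAX_COMPOSE_SOURCES) {
    const nextSources = []
    for (let i = 0; i < sources.length; i += MAX_COMPOSE_SOURCES) {
      const group = sources.slice(i, i + MAX_COMPOSE_SOURCES)
      const intermediate = bucket.file(`tmp/compose-${round}-${i / MAX_COMPOSE_SOURCES}`)
      await bucket.combine(group, intermediate)
      nextSources.push(intermediate)
    }
    // Best-effort cleanup of the objects that were just merged.
    await Promise.all(sources.map((f) => f.delete().catch(() => {})))
    sources = nextSources
    round++
  }
  const finalFile = bucket.file(finalName)
  await bucket.combine(sources, finalFile)
  await Promise.all(sources.map((f) => f.delete().catch(() => {})))
  return finalFile
}
```

With a helper like this, the `/compose` handler would call `composeInBatches(parts, 'uploads/' + name)` instead of a single `combine`.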
Front-end: a minimal chunked uploader
Below is a vanilla JavaScript ES module for the client side. Adjust `CHUNK_SIZE` based on your expected network conditions and file sizes; 5 MiB is a reasonable starting point.
```js
const CHUNK_SIZE = 5 * 1024 * 1024 // 5 MiB

export async function uploadFileWithChunks(file, { onProgress } = {}) {
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE)
  const temporaryObjectNames = []

  for (let idx = 0; idx < totalChunks; idx++) {
    const start = idx * CHUNK_SIZE
    const end = Math.min(start + CHUNK_SIZE, file.size)
    const chunk = file.slice(start, end)

    // 1. Get a signed URL for the current chunk
    const signedUrlPayload = { name: file.name, type: file.type, idx, total: totalChunks }
    let signedUrlResponse
    try {
      const response = await fetch('/signed-url', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(signedUrlPayload),
      })
      if (!response.ok) throw new Error(`Failed to get signed URL: ${response.statusText}`)
      signedUrlResponse = await response.json()
    } catch (error) {
      console.error('Error getting signed URL for chunk:', error)
      throw error // Propagate error to be handled by caller
    }

    // 2. Upload the chunk using the signed URL
    try {
      // Use the retry function for uploading the chunk
      await retry(async () => {
        const uploadResponse = await fetch(signedUrlResponse.url, {
          method: 'PUT',
          headers: { 'Content-Type': file.type }, // Must match the Content-Type the URL was signed with
          body: chunk,
        })
        if (!uploadResponse.ok)
          throw new Error(`Chunk ${idx + 1} upload failed: ${uploadResponse.statusText}`)
      })
    } catch (error) {
      console.error(`Error uploading chunk ${idx + 1}:`, error)
      throw error // Propagate error
    }

    temporaryObjectNames.push(signedUrlResponse.tempObjectName)
    onProgress?.(((idx + 1) / totalChunks) * 100)
  }

  // 3. Finalize the upload by composing all chunks
  try {
    const composeResponse = await fetch('/compose', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: file.name, parts: temporaryObjectNames }),
    })
    if (!composeResponse.ok)
      throw new Error(`Failed to compose file: ${composeResponse.statusText}`)
    const result = await composeResponse.json()
    console.log('Upload complete:', result)
  } catch (error) {
    console.error('Error composing file:', error)
    throw error // Propagate error
  }
}
```
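For reference, wiring this up to a file input might look something like the following. The element IDs and the `./uploader.js` path are placeholders:

```js
import { uploadFileWithChunks } from './uploader.js' // wherever the module above lives

const input = document.querySelector('#file-input') // <input type="file">
const progressBar = document.querySelector('#progress') // <progress max="100">

input.addEventListener('change', async () => {
  const file = input.files[0]
  if (!file) return
  try {
    await uploadFileWithChunks(file, {
      onProgress: (percent) => {
        progressBar.value = percent
      },
    })
    console.log('All chunks uploaded and composed.')
  } catch (error) {
    console.error('Upload failed:', error)
  }
})
```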
Retry with exponential back-off
Network requests can be flaky. A simple retry mechanism with exponential back-off can significantly improve reliability for chunk uploads.
```js
async function retry(fn, attempts = 5, delay = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (e) {
      if (i === attempts - 1) throw e // Rethrow last error
      console.log(`Attempt ${i + 1} failed, retrying...`)
      await new Promise((resolve) => setTimeout(resolve, delay * 2 ** i))
    }
  }
}
```
The `fetch` call for uploading each chunk (step 2 in `uploadFileWithChunks`) is already wrapped in this `retry` helper in the example above.
Persist progress across reloads
For true resumability across browser sessions or accidental tab closures, you'll need to persist upload progress. Store metadata (like the file's name and size, the successfully uploaded chunk indexes and their `tempObjectName`s, and the total chunk count) in `localStorage` after each successful chunk upload. When the page reloads, check `localStorage`; if an incomplete upload is found, prompt the user to resume. The File System Access API can help re-access the file if you persisted a handle (handles can be stored in IndexedDB, though not in `localStorage`), or you might need the user to re-select the file.
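A minimal sketch of that bookkeeping; the helper names and storage key format below are our own invention:

```js
// Hypothetical helpers for persisting chunk-upload progress in localStorage.
// The key combines name, size, and lastModified to recognize the same file after a reload.
function uploadStateKey(file) {
  return `upload-state:${file.name}:${file.size}:${file.lastModified}`
}

// Call after each successfully uploaded chunk.
function saveChunkProgress(file, totalChunks, idx, tempObjectName) {
  const key = uploadStateKey(file)
  const state = JSON.parse(localStorage.getItem(key) || 'null') || { totalChunks, parts: {} }
  state.parts[idx] = tempObjectName
  localStorage.setItem(key, JSON.stringify(state))
}

// Call on reload (after the user re-selects the file) to decide which chunks to skip.
function loadUploadState(file) {
  return JSON.parse(localStorage.getItem(uploadStateKey(file)) || 'null')
}

// Call after a successful compose.
function clearUploadState(file) {
  localStorage.removeItem(uploadStateKey(file))
}
```

Inside `uploadFileWithChunks`, you would skip any `idx` already present in `state.parts` and push its saved `tempObjectName` instead of re-uploading that chunk.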
Security and performance best practices
Implementing direct browser uploads requires careful attention to security and performance:
- Short-Lived Signed URLs: Generate signed URLs with the shortest practical expiration time, typically 15-60 minutes. The examples use 15 minutes. This limits the window of opportunity if a URL is compromised.
- Server-Side Validation: Before initiating the compose operation on the back-end, always validate metadata about the uploaded chunks. This includes verifying expected MIME types and file sizes (per chunk and total); see the sketch after this list. Client-side validation is a good UX addition but should not be relied upon for security.
- CORS Configuration: Configure Cross-Origin Resource Sharing (CORS) on your GCS bucket strictly. Only allow the origins (e.g., `https://yourdomain.com`), methods (`PUT` for chunks), and headers that are absolutely necessary for your application.
- Content Type: Specify the `Content-Type` when generating the signed URL and ensure the client sends it correctly with the `PUT` request. This helps GCS handle the file appropriately.
- Temporary Chunk Management:
  - Ensure temporary chunks are reliably deleted after a successful compose operation, as shown in the `/compose` endpoint.
  - Implement a strategy for cleaning up orphaned temporary chunks resulting from incomplete or abandoned uploads (e.g., a scheduled GCS Lifecycle Management rule or a server job that deletes files in the `tmp/` prefix older than a certain period, like 24 hours).
- Error Handling and Monitoring: Robustly handle potential errors during upload (network issues, GCS errors). Monitor GCS error codes (from `storage.googleapis.com` responses) and provide clear, actionable feedback to the user. The retry mechanism shown helps with transient network issues.
- Resource Limits: Consider implementing rate limiting on your back-end endpoints that generate signed URLs and initiate compose operations to prevent abuse.
- Object Versioning: If data integrity and the ability to recover from accidental overwrites or deletions are critical, consider enabling Object Versioning on your GCS bucket.
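As an illustration of the server-side validation point above, here is a rough sketch of checks the `/compose` endpoint could run before combining chunks. It assumes the client also sends the file's `type` to `/compose`; the size cap, allowed types, and the `validateParts` helper are placeholders for this example:

```js
// Hypothetical pre-compose checks: restrict part names to the temporary prefix,
// verify each chunk's content type, and cap the total size.
const MAX_TOTAL_BYTES = 10 * 1024 * 1024 * 1024 // e.g., 10 GiB cap (placeholder)
const ALLOWED_TYPES = ['video/mp4', 'application/zip'] // placeholder allow-list

async function validateParts(parts, expectedType) {
  if (!ALLOWED_TYPES.includes(expectedType)) {
    throw new Error(`Unexpected content type: ${expectedType}`)
  }
  let totalBytes = 0
  for (const partName of parts) {
    // Only compose objects under the temporary prefix, never arbitrary bucket objects.
    if (!partName.startsWith('tmp/')) throw new Error(`Unexpected part name: ${partName}`)
    const [metadata] = await bucket.file(partName).getMetadata()
    if (metadata.contentType !== expectedType) {
      throw new Error(`Chunk ${partName} has content type ${metadata.contentType}`)
    }
    totalBytes += Number(metadata.size)
  }
  if (totalBytes > MAX_TOTAL_BYTES) throw new Error('Upload exceeds the allowed size')
}
```

You would call something like `await validateParts(parts, type)` at the top of the `/compose` handler and return a 4xx response when it throws.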
Transloadit can help, too
For advanced needs like processing files from GCS or integrating a feature-rich upload UI, Transloadit offers solutions such as the 🤖 /google/import Robot with Template Credentials, and the versatile Uppy file uploader.
Happy uploading!