Export files to DigitalOcean Spaces in .NET (C#)

DigitalOcean Spaces offers a cost-effective, S3-compatible object storage service, so you can reach
for the same tooling you already use for Amazon S3. In this guide, you will learn three modern ways
to export files to DigitalOcean Spaces with .NET 8 (C#): using the official AWS SDK, using the
polycloud-friendly FluentStorage library, and using plain `HttpClient` for a zero-dependency
approach.
Understand DigitalOcean Spaces S3 compatibility
Because Spaces speaks the S3 protocol, every library that works with Amazon S3 also works with
DigitalOcean. The only real change is the endpoint, `https://<region>.digitaloceanspaces.com`,
plus a few minor configuration flags.
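As a quick illustration of how little is Spaces-specific, the regional endpoint is the one piece you swap in everywhere. The helper below is our own sketch, not part of any SDK:

```csharp
using System;

// Hypothetical helper: the only Spaces-specific detail is the regional endpoint.
static string BuildSpacesEndpoint(string region) =>
    $"https://{region}.digitaloceanspaces.com";

Console.WriteLine(BuildSpacesEndpoint("nyc3"));
// https://nyc3.digitaloceanspaces.com
```

Everything else, from credentials to multipart uploads, works exactly as it does against Amazon S3.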
Install required NuGet packages
```xml
<!-- AWS SDK for S3-compatible storage -->
<PackageReference Include="AWSSDK.S3" Version="3.7.300.1" />
<!-- Resilience & retry policies -->
<PackageReference Include="Polly" Version="8.5.2" />
<!-- Polycloud abstraction layer -->
<PackageReference Include="FluentStorage" Version="5.6.0" />
```
Use the AWS SDK for .NET with DigitalOcean Spaces
The AWS SDK is feature-rich and automatically handles multipart uploads, encryption headers, and other niceties.
```csharp
using Amazon.S3;
using Amazon.S3.Transfer;

var config = new AmazonS3Config
{
    ServiceURL = "https://nyc3.digitaloceanspaces.com", // change region as needed
    ForcePathStyle = false, // virtual-hosted-style URLs: <space>.<region>.digitaloceanspaces.com
};

// AmazonS3Client implements IDisposable (not IAsyncDisposable), so plain "using" is correct.
using var client = new AmazonS3Client(
    Environment.GetEnvironmentVariable("DO_SPACES_ACCESS_KEY"),
    Environment.GetEnvironmentVariable("DO_SPACES_SECRET_KEY"),
    config);

var transfer = new TransferUtility(client);
await transfer.UploadAsync("./local/file.txt", "my-space", "file.txt");
Console.WriteLine("✔ Uploaded with AWS SDK");
```
Async/await best practices
Every network call returns a `Task`, so `await` each one to avoid blocking threads. Wrapping upload
operations in `Task.Run` is unnecessary and actually degrades throughput.
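When you have several files to export, start the uploads and await them as a batch instead of queueing thread-pool work. A sketch, assuming the `transfer` instance from the AWS SDK section above and hypothetical local file names:

```csharp
using System.IO;
using System.Linq;

// Sketch: concurrent uploads with Task.WhenAll, reusing the TransferUtility
// instance ("transfer") created in the AWS SDK section.
var files = new[] { "./local/a.txt", "./local/b.txt", "./local/c.txt" };

// Start every upload without blocking, then await the whole batch.
var uploads = files.Select(f =>
    transfer.UploadAsync(f, "my-space", Path.GetFileName(f)));

await Task.WhenAll(uploads);
Console.WriteLine($"✔ Uploaded {files.Length} files concurrently");
```

The tasks run concurrently on I/O completion threads, so no extra `Task.Run` wrapping is needed.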
Leverage FluentStorage for a polycloud approach
FluentStorage abstracts multiple providers behind a single API, making it straightforward to swap between Spaces, S3, and others.
```csharp
using FluentStorage;
using FluentStorage.AWS;

var blobs = StorageFactory.Blobs.AwsS3(
    accessKey: Environment.GetEnvironmentVariable("DO_SPACES_ACCESS_KEY"),
    secretKey: Environment.GetEnvironmentVariable("DO_SPACES_SECRET_KEY"),
    serviceUrl: "nyc3.digitaloceanspaces.com",
    bucketName: "my-space",
    options: new AwsS3Options { UseHttp = false }); // UseHttp = false keeps HTTPS

await using var stream = File.OpenRead("./local/file.txt");
await blobs.WriteAsync("file.txt", stream);
Console.WriteLine("✔ Uploaded with FluentStorage");
```
Call the REST API directly with HttpClient
When you need complete control, or want to ship a single-file tool, you can sign your own requests
with AWS Signature V4 and send them via `HttpClient`.
```csharp
public sealed class SpacesClient
{
    private readonly HttpClient _http = new();
    private readonly string _key =
        Environment.GetEnvironmentVariable("DO_SPACES_ACCESS_KEY")!;
    private readonly string _secret =
        Environment.GetEnvironmentVariable("DO_SPACES_SECRET_KEY")!;
    private readonly string _region;
    private readonly string _space;

    public SpacesClient(string region, string space)
    {
        _region = region;
        _space = space;
    }

    public async Task UploadAsync(string path, string objectKey)
    {
        await using var fs = File.OpenRead(path);
        using var content = new StreamContent(fs);
        content.Headers.ContentType = new("application/octet-stream");

        var url = $"https://{_space}.{_region}.digitaloceanspaces.com/{objectKey}";
        using var req = new HttpRequestMessage(HttpMethod.Put, url) { Content = content };
        SignRequest(req); // omitted for brevity (see the AWS SigV4 docs)

        var res = await _http.SendAsync(req);
        res.EnsureSuccessStatusCode();
        Console.WriteLine("✔ Uploaded with HttpClient");
    }

    private void SignRequest(HttpRequestMessage req)
        => throw new NotImplementedException("AWS Signature V4 signing goes here.");
}
```
The SigV4 signing routine is about 40 lines and available in the AWS docs. Copy that in, or better
yet, reference `AWSSDK.S3` even when using `HttpClient` so you can reuse its `AWSSDKUtils` helper.
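If you already reference `AWSSDK.S3`, another way to avoid hand-rolling SigV4 is to let the SDK generate a pre-signed URL that a plain `HttpClient` (or a browser) can then `PUT` to directly. A sketch, reusing the `client` instance configured in the AWS SDK section:

```csharp
using Amazon.S3;
using Amazon.S3.Model;

// Sketch: pre-sign a PUT URL with the SDK, then upload with a bare HttpClient.
// Assumes "client" is the AmazonS3Client configured for Spaces earlier.
var presignedUrl = client.GetPreSignedURL(new GetPreSignedUrlRequest
{
    BucketName = "my-space",
    Key = "file.txt",
    Verb = HttpVerb.PUT,
    Expires = DateTime.UtcNow.AddMinutes(15), // URL stops working after this
});

using var http = new HttpClient();
using var body = new StreamContent(File.OpenRead("./local/file.txt"));
var res = await http.PutAsync(presignedUrl, body);
res.EnsureSuccessStatusCode();
```

This keeps the signing logic in the SDK while the actual transfer stays dependency-light.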
Add retry logic and error handling
Transient errors like `503 Slow Down` are common with large uploads. Polly v8 makes retries concise:
```csharp
using Amazon.S3;
using Polly;
using Polly.Retry;

// "transfer" is the TransferUtility instance from the AWS SDK section.
var retryPolicy = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions
    {
        MaxRetryAttempts = 3,
        BackoffType = DelayBackoffType.Exponential,
        Delay = TimeSpan.FromSeconds(2),
        UseJitter = true,
    })
    .Build();

try
{
    await retryPolicy.ExecuteAsync(async token =>
    {
        await transfer.UploadAsync("./local/file.txt", "my-space", "file.txt", token);
    });
    Console.WriteLine("✔ Uploaded with Polly retry logic");
}
catch (AmazonS3Exception ex) when (ex.ErrorCode == "NoSuchBucket")
{
    // Handle a missing bucket, e.g. log the error or inform the user.
    Console.WriteLine($"Error: the Space 'my-space' does not exist. {ex.Message}");
}
catch (AmazonS3Exception ex) when (ex.ErrorCode == "AccessDenied")
{
    // Handle authentication errors, e.g. prompt for new credentials.
    Console.WriteLine($"Error: access denied. Check your credentials and permissions. {ex.Message}");
}
catch (Exception ex)
{
    // Catch any other failure during the upload.
    Console.WriteLine($"An unexpected error occurred during upload: {ex.Message}");
}
```
Optimize performance with multipart uploads and streaming
The AWS SDK automatically switches to multipart uploads for files larger than 16 MiB. You can tune this threshold and concurrency:
```csharp
var transfer = new TransferUtility(client, new TransferUtilityConfig
{
    MinSizeBeforePartUpload = 16 * 1024 * 1024, // 16 MiB
    ConcurrentServiceRequests = 10,
});
```
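For per-file control, `TransferUtilityUploadRequest` lets you stream from any `Stream` and set the part size or ACL per upload. A sketch, reusing the `transfer` instance from above with a hypothetical file name:

```csharp
using Amazon.S3;
using Amazon.S3.Transfer;

// Sketch: per-upload tuning with TransferUtilityUploadRequest, reusing "transfer".
await using var source = File.OpenRead("./local/large-video.mp4");

await transfer.UploadAsync(new TransferUtilityUploadRequest
{
    InputStream = source,               // stream instead of buffering the whole file
    BucketName = "my-space",
    Key = "videos/large-video.mp4",
    PartSize = 32 * 1024 * 1024,        // 32 MiB parts for very large files
    CannedACL = S3CannedACL.PublicRead, // make the object publicly readable
});
```

Streaming from a `Stream` keeps memory use flat even for multi-gigabyte exports.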
Manage access keys and permissions securely
- Generate a dedicated Spaces key with the minimum permissions your app needs.
- Store the key as environment variables—never hard-code credentials.
- Rotate keys periodically and immediately if you suspect a leak.
```csharp
var key = Environment.GetEnvironmentVariable("DO_SPACES_ACCESS_KEY")
    ?? throw new InvalidOperationException("Missing DO_SPACES_ACCESS_KEY credential");
var secret = Environment.GetEnvironmentVariable("DO_SPACES_SECRET_KEY")
    ?? throw new InvalidOperationException("Missing DO_SPACES_SECRET_KEY credential");
```
Configure CORS for browser uploads
If your front-end talks directly to Spaces, add a CORS rule in the DigitalOcean dashboard:
```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```
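Because Spaces also accepts the S3 CORS API, you can apply the same rule from code instead of the dashboard. A sketch with the AWS SDK, reusing the `client` from earlier; if your setup behaves differently, fall back to the dashboard:

```csharp
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

// Sketch: apply the CORS rule above programmatically via the S3 API.
// Assumes "client" is the AmazonS3Client configured for Spaces earlier.
await client.PutCORSConfigurationAsync(new PutCORSConfigurationRequest
{
    BucketName = "my-space",
    Configuration = new CORSConfiguration
    {
        Rules = new List<CORSRule>
        {
            new()
            {
                AllowedOrigins = new List<string> { "https://example.com" },
                AllowedMethods = new List<string> { "PUT" },
                AllowedHeaders = new List<string> { "*" },
            },
        },
    },
});
```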
Compare the approaches
| Scenario | Best fit |
| --- | --- |
| Feature-rich uploads, server-side apps | AWS SDK |
| One codebase, many providers | FluentStorage |
| Single-file utility, no external deps | Raw HttpClient |
Troubleshoot common issues
- `403 Forbidden` → check access keys and permissions, and make sure the Space (bucket) exists and the name is correct.
- `NoSuchBucket` → the Space name is case-sensitive and must exist.
- CORS errors in the browser → review the CORS rule above and ensure `AllowedOrigin` matches your application's domain exactly; check for typos.
- Slow uploads → increase `ConcurrentServiceRequests` in the AWS SDK's `TransferUtilityConfig`, or switch to a DigitalOcean Spaces region closer to your users or application server.
- `InvalidOperationException: Missing DO_SPACES_ACCESS_KEY credential` (or the secret-key equivalent) → ensure the `DO_SPACES_ACCESS_KEY` and `DO_SPACES_SECRET_KEY` environment variables are set where your application is running.
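Several of these failures can be caught up front. As one sketch, the SDK's `AmazonS3Util.DoesS3BucketExistV2Async` helper (in `Amazon.S3.Util`) verifies the Space before you start uploading, reusing the `client` from earlier:

```csharp
using Amazon.S3.Util;

// Sketch: fail fast if the Space is missing or its name is wrong.
// Assumes "client" is the AmazonS3Client configured for Spaces earlier.
if (!await AmazonS3Util.DoesS3BucketExistV2Async(client, "my-space"))
{
    throw new InvalidOperationException(
        "Space 'my-space' was not found. Check the name and region.");
}
```

A pre-flight check like this turns a mid-upload `NoSuchBucket` error into an immediate, clear failure.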
Integrating DigitalOcean Spaces into .NET applications is quick and painless—choose the library that aligns with your project’s architecture and get shipping.
At Transloadit, we also offer a hassle-free way to export files to Spaces with our 🤖 /digitalocean/store Robot, part of our File Exporting service. The Robot accepts these key parameters:
- `credentials` – Template Credentials pointing to your Spaces access key and secret.
- `space` – the Space (bucket) name.
- `region` – Spaces region, such as `nyc3` or `ams3`.
- `path` – an upload path template. Default: `${unique_prefix}/${file.url_name}`.
- `acl` – object ACL, defaulting to `public-read`.
Give it a try for effortless, reliable exports straight from your encoding pipeline.