If you’re using Linode’s Object Storage, DigitalOcean Spaces, or any S3-compatible service, you might run into a puzzling situation: your dashboard shows gigabytes or terabytes of usage, yet when you inspect your files manually or compress your data with tar.gz, the size seems perfectly reasonable.
What gives?
Recently, I encountered this phenomenon while managing a WordPress site hosted on Linode. The bucket showed 7.9 GB in usage, but the entire wp-content/uploads directory, when compressed and checked locally, was only about 900 MB.
After digging into the problem, the answer was both technical and eye-opening.
Understanding the Layers of S3 Storage
S3 and S3-compatible services like Linode and DigitalOcean aren’t traditional file systems. They use object storage, meaning every file is treated as an object with associated metadata.
Even a 0-byte file has a cost. In our case, we found:
- Thousands of zero-byte objects simulating folder structures (e.g., 2024/05/)
- Orphaned plugin-generated folders (e.g., uploads/sucuri/) with excessive logs
- No multipart uploads or versioning (Linode doesn’t support versioning yet)
These objects added up, not necessarily in file size, but in object count and overhead. Linode and DigitalOcean both count this when calculating your usage.
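As a rough sketch of how to audit this yourself: the output of `aws s3 ls s3://BUCKET --recursive` is a plain-text listing (date, time, size, key), and a few lines of Python can tally object counts and flag the zero-byte folder markers. The bucket contents below are hypothetical sample data, not the real listing:

```python
# Tally object count, total bytes, and zero-byte "folder" markers from an
# `aws s3 ls s3://BUCKET --recursive` listing (columns: date time size key).
# The sample lines below are made up for illustration.

sample_listing = """\
2024-05-01 12:00:00          0 2024/05/
2024-05-01 12:00:01     104857 2024/05/photo.jpg
2024-05-02 09:30:00          0 uploads/sucuri/
2024-05-02 09:30:05       2048 uploads/sucuri/audit.log
"""

def summarize(listing: str):
    total_objects = 0
    total_bytes = 0
    folder_markers = []  # zero-byte keys ending in "/"
    for line in listing.splitlines():
        parts = line.split(None, 3)  # date, time, size, key
        if len(parts) != 4:
            continue
        size, key = int(parts[2]), parts[3]
        total_objects += 1
        total_bytes += size
        if size == 0 and key.endswith("/"):
            folder_markers.append(key)
    return total_objects, total_bytes, folder_markers

objects, size, markers = summarize(sample_listing)
print(f"{objects} objects, {size} bytes, {len(markers)} folder markers")
```

In a real audit you would pipe the CLI output into a file first (`aws s3 ls s3://BUCKET --recursive > listing.txt`) and feed that in.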
Why This Happens in WordPress
Plugins like Sucuri Security, WP Fastest Cache, and thumbnail regenerators can generate hundreds or thousands of small files that are saved inside wp-content/uploads.
If you’re syncing that entire directory to S3, especially via s3fs, all of these objects are stored as-is in the bucket.
Some folders, like uploads/sucuri, contained hundreds of MBs of logs and temporary files that weren’t useful long-term. WordPress itself doesn’t clean these up unless explicitly told to.
Zero-Byte Objects: Not Harmless
Using awscli to inspect the bucket revealed that many zero-byte objects were actually keys ending in slashes.
These simulate folder structures in S3 (which doesn’t have real directories) and were likely created automatically by tools or plugins.
While small, each one counts toward your object-count totals and storage calculations because of its metadata.
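Cleaning these up is straightforward once you can identify them. A minimal sketch, assuming an inventory of (key, size) pairs: filter out the zero-byte trailing-slash keys, then emit one exact-key delete per marker. The bucket name and inventory here are hypothetical:

```python
# Pick out zero-byte "folder" placeholder keys from an object inventory
# and emit exact-key delete commands. Bucket and keys are made-up examples.

def folder_marker_keys(objects):
    """Keys that look like folder placeholders: zero bytes, trailing slash."""
    return [key for key, size in objects if size == 0 and key.endswith("/")]

def delete_commands(keys, bucket):
    # `aws s3api delete-object` targets the exact key, so the trailing slash
    # is preserved (the high-level `aws s3 rm` may treat it as a prefix).
    return [f'aws s3api delete-object --bucket {bucket} --key "{k}"'
            for k in keys]

inventory = [
    ("2024/05/", 0),
    ("2024/05/image.png", 53200),
    ("uploads/sucuri/", 0),
    ("uploads/sucuri/audit.log", 1024),
]
for cmd in delete_commands(folder_marker_keys(inventory), "my-bucket"):
    print(cmd)
```

Generating the commands first, rather than deleting inline, lets you eyeball the list before running anything destructive.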
The Final Mystery: Phantom Storage That Won’t Go Away
Even after deleting all visible files using aws s3 rm --recursive, Linode’s dashboard still showed 7.1 GB in usage and 48 objects remaining, yet no files were listed via the CLI or S3 browser tools.
This symptom is not unique to Linode; DigitalOcean Spaces users have reported similar behavior.
This suggests that some storage layers (possibly orphaned internal metadata, system files, or failed low-level uploads) are not visible through standard user tools.
It may also point to a delay in accounting sync between object store and dashboard UI.
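One way to quantify the gap is to compare the CLI’s own totals against the dashboard figure. `aws s3 ls s3://BUCKET --recursive --summarize` ends its output with "Total Objects:" and "Total Size:" lines; this sketch parses that footer and subtracts it from a dashboard-reported number (the 7.1 GB figure below is the hypothetical dashboard value from this incident):

```python
# Parse the summary footer of `aws s3 ls s3://BUCKET --recursive --summarize`
# and compare it with what the provider dashboard reports.

def parse_summary(footer: str):
    """Return (total_objects, total_bytes) from the --summarize footer."""
    objects = size = None
    for line in footer.splitlines():
        line = line.strip()
        if line.startswith("Total Objects:"):
            objects = int(line.split(":")[1])
        elif line.startswith("Total Size:"):
            size = int(line.split(":")[1])
    return objects, size

footer = """\
Total Objects: 0
   Total Size: 0
"""
dashboard_bytes = 7_100_000_000  # hypothetical figure read off the dashboard

cli_objects, cli_bytes = parse_summary(footer)
phantom = dashboard_bytes - cli_bytes
print(f"phantom usage: {phantom / 1e9:.1f} GB beyond {cli_objects} visible objects")
```

If the gap persists for more than a billing-sync cycle, that number is what you hand to support.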
Final Thoughts
This experience highlighted how seemingly small or invisible objects can lead to surprisingly large storage bills and usage stats.
If you’re using object storage for WordPress or any CMS, audit it regularly.
More importantly, be aware that some usage may remain hidden even after all visible objects are gone.
For critical systems, pair your buckets with lifecycle automation to keep growth in check, and don’t hesitate to open a support ticket when phantom usage won’t clear.