It's the classic startup reflex.
You land your first Enterprise customer. Their security questionnaire arrives: "Do you maintain an immutable audit trail?"
You think: "Easy."
You open your IDE, write a quick migration, and 10 minutes later you have this:
```sql
CREATE TABLE audit_logs (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES users(id),
  action TEXT NOT NULL,
  payload JSONB,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- "Performance optimization"
CREATE INDEX idx_audit_user ON audit_logs(user_id);
```

You push to prod. The Security Officer checks the box. Everyone is happy.
It feels cheap, easy, and relational. Until you scale.
Six months later, this table isn't a solution anymore. It's your biggest technical liability. Here is why.
1. The Performance Tax (Vacuum Hell)
Audit logs have a unique IO profile: Write-Heavy, Read-Seldom.
In a modern B2B SaaS, a single user action (like "Update Team Settings") often cascades into 3 to 5 log events.
As your audit_logs table swells to 10M+ rows, it starts bullying your primary Postgres instance:
Cache Pollution: Postgres relies on its shared_buffers to keep hot data (users, active sessions) in RAM. A massive, cold log table eats up this precious space, forcing your actual application queries to hit the disk. Latency spikes.
The RTO Nightmare: Why are you backing up 50GB of static log data every night? If your DB crashes, your Recovery Time Objective (RTO) is now measured in hours, not minutes, because you're restoring gigabytes of history just to get the login screen back online.
Vacuum Hell: This is the silent killer. Even an append-only table isn't free: aborted inserts leave dead tuples, and autovacuum must still periodically scan the entire table for anti-wraparound freezing. That maintenance now runs against your biggest table, consuming CPU and IO you needed for API requests.
2. The Security Fallacy: "Root is God"
The definition of an audit log is that it cannot be changed. It is evidence.
A standard SQL database, by design, supports UPDATE and DELETE.
If your logs live next to your users, anyone with db_owner privileges—or a hacker with SQL injection access—can rewrite history.
Picture this: An attacker compromises an admin account. They exfiltrate data. Before they leave, they cover their tracks:
```sql
-- The "Evidence Shredder"
DELETE FROM audit_logs
WHERE user_id = 'compromised_admin_id'
  AND created_at > NOW() - INTERVAL '1 hour';
```

Gone. And because you don't have an external integrity anchor (like a hash chain), you will never know there is a gap in your timeline.
3. The Compliance Reality Check
Auditors (SOC 2, ISO 27001) are not technically illiterate. They know databases are mutable.
The question they will ask is:
"How do you guarantee that a rogue sysadmin didn't alter these logs to hide an outage?"
If your answer is "We have strict internal policies", you fail.
They want WORM (Write Once, Read Many) storage. They want cryptographic proof.
The "Hard Way" to Fix It: Partitioning
"But wait," I hear the senior engineers say. "I can just use Table Partitioning!"
You're right. You can set up pg_partman. You can partition by created_at, turning one giant table into manageable monthly chunks. You can then move old chunks to slower disks or drop them entirely.
But now you are maintaining:
- Complex partition management scripts.
- A separate database connection (to limit permissions for security).
- A custom API layer to query across partitions.
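To make the first bullet concrete, here is roughly what a hand-rolled partition-rotation helper looks like — a hypothetical sketch (pg_partman automates this, but someone still has to own, schedule, and monitor it):

```python
from datetime import date

def monthly_partition_ddl(parent: str, month: date) -> str:
    """Generate DDL for one monthly range partition of an audit table."""
    start = month.replace(day=1)
    # First day of the following month is the exclusive upper bound.
    end = (start.replace(year=start.year + 1, month=1)
           if start.month == 12
           else start.replace(month=start.month + 1))
    name = f"{parent}_{start:%Y_%m}"
    return (
        f"CREATE TABLE IF NOT EXISTS {name} PARTITION OF {parent}\n"
        f"  FOR VALUES FROM ('{start}') TO ('{end}');"
    )

print(monthly_partition_ddl("audit_logs", date(2024, 1, 15)))
```

And that is before retention: dropping old partitions, moving them to cheap storage, and handling permissions are all on you too.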
You are building infrastructure, not product.
The Solution: Separation of Concerns
Audit logs aren't "Data". They are "Evidence". They deserve their own silo.
The modern "Enterprise" architecture looks like this:
Async Ingestion: Logs are sent via a non-blocking queue (Fire & Forget). Your main app never waits for a log write.
Cryptographic Chaining: Every log entry contains the hash of the previous one. Deleting a row breaks the mathematical chain.
```
[Event A] <--- (Hash A) --- [Event B] <--- (Hash B) --- [Event C]
    ^                           ^                           ^
Tamper-Proof             Interdependent                 Immutable
```
Cold Storage: Data lands in S3 with Object Lock enabled, or a specialized column-store like ClickHouse.
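The "Async Ingestion" step above can be sketched with nothing but the standard library: the request thread drops the event on an in-memory queue and returns immediately, while a background worker drains it to the sink. (A real pipeline would use Kafka/SQS and handle backpressure and retries; `shipped` here is a stand-in for cold storage.)

```python
import queue
import threading

log_queue: "queue.Queue" = queue.Queue()
shipped = []  # stand-in for the real sink (S3, ClickHouse, ...)

def worker() -> None:
    """Drain events off the queue; the app thread never waits on this."""
    while True:
        event = log_queue.get()
        shipped.append(event)  # in reality: batch and ship to cold storage
        log_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def audit(action: str, actor: str) -> None:
    """Fire & forget: enqueue and return immediately."""
    log_queue.put({"action": action, "actor": actor})

audit("team.settings_updated", "user_123")
audit("team.member_invited", "user_123")
log_queue.join()  # only this demo waits; the app itself never does
```

The key property: `audit()` costs one in-memory enqueue, so a slow or unavailable log store never adds latency to a user request.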
Build vs. Buy
You can build this pipeline yourself. You'll need a queue (Kafka/SQS), a worker, a separate storage engine (ClickHouse/S3), and a custom signing mechanism using HMAC chains.
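The signing mechanism is the subtle part: a plain hash chain proves ordering, but anyone who can rewrite the store can also recompute the hashes. Keying each link with a secret (HMAC) means an attacker without the key cannot forge a consistent chain. A minimal stdlib sketch (key management deliberately omitted — the key must live outside the database):

```python
import hashlib
import hmac
import json

SECRET = b"keep-me-in-a-kms-not-in-code"  # illustrative only

def sign(prev_sig: str, entry: dict) -> str:
    """Chain each signature over the previous one plus canonical JSON."""
    msg = (prev_sig + json.dumps(entry, sort_keys=True)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

prev = "genesis"
signed = []
for event in [{"action": "login"}, {"action": "export"}]:
    sig = sign(prev, event)
    signed.append({"entry": event, "sig": sig})
    prev = sig

# Verification replays the chain with the same key.
prev = "genesis"
for item in signed:
    assert hmac.compare_digest(item["sig"], sign(prev, item["entry"]))
    prev = item["sig"]
```

Getting this right — key rotation, constant-time comparison, anchoring the chain head somewhere external — is exactly the undifferentiated heavy lifting the next option removes.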
Or you can use the LogVault SDK and treat compliance as a solved problem.
```python
import logvault

client = logvault.Client()

# Non-blocking, cryptographically signed, and off your main DB.
client.log(
    action="billing.subscription_updated",
    actor="user_123",
    resource="sub_999",
    metadata={"old_plan": "starter", "new_plan": "pro"},
)
```

Stop treating evidence like data. Keep your primary database clean for features. Keep your audit logs in a vault.