Tim Kang

RecordOps: What if Your Database Records Could Provision Infrastructure?

Here's a scenario you might recognize: You're building a multi-tenant SaaS platform. A new customer signs up, and their data gets inserted into your database. Perfect. Now you need to provision their infrastructure—namespace, deployment, service, ingress. So you:

  1. Write YAML manifests
  2. Commit to Git
  3. Wait for PR approval
  4. Wait for CI/CD
  5. Hope nothing breaks
  6. Update the customer record with their URL

It works, but doesn't it feel... disconnected? Your application already knows everything about this customer through the database. Why are you manually coordinating with a completely separate infrastructure system?

What if Infrastructure Just Read From the Same Database?

That's the idea behind RecordOps.

RecordOps (Record Operations) is a pattern where your database records define infrastructure state. Instead of maintaining YAML files or Terraform code, you define infrastructure parameters as columns in your database. (I'm coining this term to describe a pattern I've been using—maybe it resonates with you too.)

INSERT a row     ->  Infrastructure provisions
UPDATE a column  ->  Resources reconfigure
DELETE a record  ->  Everything cleans up

Every active row in your database represents a running stack in your cluster.

A Concrete Example

Let's say you have a customers table:

CREATE TABLE customers (
  customer_id VARCHAR(50) PRIMARY KEY,
  domain VARCHAR(255) NOT NULL,
  plan VARCHAR(20),
  active BOOLEAN DEFAULT TRUE,
  replicas INT DEFAULT 2
);

With RecordOps, you define a template once: "For each active customer, create a namespace, deployment (with N replicas), service, and ingress (pointing to their domain)."

Now when you onboard a customer:

INSERT INTO customers VALUES
  ('acme-corp', 'acme.example.com', 'enterprise', true, 5);

Within 30 seconds, infrastructure provisions automatically:

  • Namespace: acme-corp
  • Deployment: 5 replicas
  • Service: acme-corp-app
  • Ingress: acme.example.com -> service

No YAML. No Git. No manual steps. Just a database transaction.
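
Offboarding is the mirror image. Since the template only matches active customers, deactivating the row (or deleting it outright) tears the whole stack back down:

UPDATE customers SET active = FALSE WHERE customer_id = 'acme-corp';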

Why This Feels Different

Your Database Already Has All the Answers

Think about what information you need to provision infrastructure:

  • Customer ID
  • Domain name
  • Plan/tier
  • Region
  • Resource limits
  • Feature flags

All of this is already in your database. You're just duplicating it in YAML files or Terraform variables.
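
To make that concrete, the remaining inputs can simply become columns on the same table (the column names here are illustrative, not required by any tool):

-- Illustrative only: region and resource limits as plain columns
ALTER TABLE customers
  ADD COLUMN region VARCHAR(20) DEFAULT 'us-east-1',
  ADD COLUMN cpu_limit VARCHAR(10) DEFAULT '500m',
  ADD COLUMN memory_limit VARCHAR(10) DEFAULT '512Mi';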

Operations Become Data Changes

Common operational tasks are just database operations you already know:

Scale a customer:

UPDATE customers SET replicas = 10 WHERE customer_id = 'acme-corp';

Enable a feature flag:

INSERT INTO feature_flags VALUES ('acme-corp', 'ai-assistant', true);

Blue-green deployment:

UPDATE deployments SET active_version = 'green' WHERE customer_id = 'acme-corp';

No new tooling. No context switching. Just SQL.

Testing Becomes Trivial

Want to clone your production environment to staging? With traditional infrastructure, that's a project. You're exporting state, modifying variables, coordinating across systems.

With RecordOps, it's just cloning database rows (assuming the customers table also carries an environment column):

INSERT INTO customers (customer_id, domain, plan, active, replicas, environment)
SELECT CONCAT(customer_id, '-staging'), CONCAT(domain, '.staging'),
       plan, active, replicas, 'staging'
FROM customers WHERE environment = 'prod';

30 seconds later, you have a perfect staging environment. Every service, every configuration, every dependency—recreated automatically.
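
Tearing the copy back down when you're finished is just as small:

DELETE FROM customers WHERE environment = 'staging';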

How Does This Compare to GitOps?

GitOps is excellent for cluster-level infrastructure. Your operators, CRDs, system services—these absolutely should be in Git with proper review.

But for per-customer infrastructure? Git becomes tedious. You're creating YAML files for each customer, managing merge conflicts, waiting for pipelines. Meanwhile, your application already knows about these customers.

They work well together:

  • GitOps: Cluster-level config (changes rarely, needs review)
  • RecordOps: Customer-level stacks (changes frequently, follows data)

What About Infrastructure-as-Code?

Terraform and Pulumi are great for cloud infrastructure. If you're provisioning AWS resources or managing your cluster itself, absolutely use them.

But if you're provisioning the same pattern repeatedly—one stack per customer, one environment per project—you might not need infrastructure-as-code. You might just need infrastructure-as-data.

Instead of writing code to describe infrastructure, you're adding rows to describe state. It's a different mental model that maps naturally to database-driven applications.

Common Patterns

Feature Flags Control Infrastructure

Instead of deploying optional features for everyone:

CREATE TABLE feature_flags (
  customer_id VARCHAR(50),
  feature VARCHAR(50),
  enabled BOOLEAN
);

-- AI assistant appears only for this customer
INSERT INTO feature_flags VALUES ('acme-corp', 'ai-assistant', true);

Your template includes conditional logic: if the flag exists and is enabled, the AI service deploys; otherwise it's skipped.
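
One way to picture that condition is the lookup a reconciler could run against the two tables above (a sketch of the idea, not any tool's actual query):

-- Which active customers should get the optional AI service?
SELECT c.customer_id,
       COALESCE(f.enabled, FALSE) AS deploy_ai_assistant
FROM customers c
LEFT JOIN feature_flags f
  ON f.customer_id = c.customer_id
 AND f.feature = 'ai-assistant'
WHERE c.active = TRUE;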

Blue-Green as a Column

CREATE TABLE deployments (
  customer_id VARCHAR(50),
  active_version VARCHAR(10) -- 'blue' or 'green'
);

-- Switch traffic
UPDATE deployments SET active_version = 'green' WHERE customer_id = 'acme-corp';

Your service selector updates to point to green. Traffic switches in seconds. Roll back by changing it back to 'blue'.
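
A typo here would point the selector at a version that doesn't exist, so it can be worth letting the schema guard the value (MySQL 8.0.16+ enforces CHECK constraints):

-- Only 'blue' or 'green' are ever accepted
ALTER TABLE deployments
  ADD CONSTRAINT chk_active_version CHECK (active_version IN ('blue', 'green'));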

Ephemeral Environments with TTL

CREATE TABLE environments (
  id VARCHAR(50),
  domain VARCHAR(255),
  ttl TIMESTAMP
);

INSERT INTO environments VALUES
  ('demo-123', 'demo-123.example.com', NOW() + INTERVAL 7 DAY);

-- A scheduled job sweeps out expired rows (a MySQL event here); the operator then tears down their stacks
CREATE EVENT cleanup_expired
ON SCHEDULE EVERY 1 HOUR
DO DELETE FROM environments WHERE ttl < NOW();

Demo environments provision on insert and clean up automatically once their TTL expires.
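
Extending a demo before it expires is just pushing the timestamp out:

UPDATE environments SET ttl = NOW() + INTERVAL 7 DAY WHERE id = 'demo-123';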

When Does RecordOps Make Sense?

This pattern works well when:

  • You're building multi-tenant platforms where each customer needs isolated infrastructure
  • You provision infrastructure frequently (multiple times per day)
  • Your infrastructure closely follows your data model
  • You want less coordination between application and infrastructure

It's probably not right if:

  • You rarely provision infrastructure (once a month or less)
  • Every change requires manual approval
  • You need deep cloud provider integrations beyond Kubernetes

Honestly, you can mix approaches. GitOps for cluster-level, RecordOps for tenant-level, manual for critical changes. They complement each other.

Things to Keep in Mind

Your Database Becomes Critical Infrastructure

It's not just storing application data anymore—it's controlling infrastructure. This means:

  • Database availability matters more (though existing infrastructure keeps running if DB goes down)
  • Schema migrations affect infrastructure (test carefully)
  • Database permissions become infrastructure permissions
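
On that last point: if the reconciler only needs to read desired state, a read-only account narrows the blast radius (the account and schema names below are illustrative):

-- Illustrative: the operator reads rows; it never needs to write them
CREATE USER 'recordops_reader'@'%' IDENTIFIED BY 'change-me';
GRANT SELECT ON app.customers TO 'recordops_reader'@'%';
GRANT SELECT ON app.feature_flags TO 'recordops_reader'@'%';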

Security Model Shifts

SQL injection vulnerabilities can become infrastructure vulnerabilities. If user input can manipulate your queries, an attacker can now trigger unwanted infrastructure changes. Use parameterized queries and validate inputs.
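
In application code you'd normally do this through your driver's placeholders; the same idea expressed in plain MySQL looks like this (variable names are hypothetical):

-- Tenant-supplied values never become part of the SQL text
SET @replicas = 10, @customer = 'acme-corp';
PREPARE scale_stmt FROM 'UPDATE customers SET replicas = ? WHERE customer_id = ?';
EXECUTE scale_stmt USING @replicas, @customer;
DEALLOCATE PREPARE scale_stmt;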

Sync Delays Exist

With RecordOps, there's typically a sync interval (e.g., 30 seconds) between database changes and infrastructure updates. For most cases this is fine, but if you need instant provisioning, you'll need to tune this or reconsider.

Why I'm Exploring This

I kept running into the same problem: syncing my application's database state with infrastructure state. I'd add a customer to the database, then manually coordinate with my infrastructure tooling. Eventually I realized—they could just be the same thing.

RecordOps isn't revolutionary. It's actually pretty obvious once you see it. If your infrastructure maps to your data, why not let your data drive your infrastructure?

This pattern won't replace every tool in your stack. But for the specific problem of provisioning repeated patterns (per-customer stacks, per-project environments), it might simplify your life.

If You Want to Try It

I built Lynq, an open-source operator that implements RecordOps for Kubernetes. You point it at your database, define your templates, and it handles the rest.

But the pattern itself is tool-agnostic. You could build your own implementation, use a different tool, or just take the concepts and apply them however makes sense for your stack.


What do you think? Have you felt this pain before? How are you handling per-customer infrastructure provisioning today?

I'd love to hear your thoughts and experiences in the comments. 👇
