Vibe Coding is Technical Debt. Vibe Engineering is the Fix

TL;DR for the Busy Dev

  • Vibe Coding is "Single Player Mode": Prompting based on intuition, pasting code, and moving fast. It’s great for POCs but creates "Context Amnesia" and security risks in production.
  • Vibe Engineering is "Multiplayer Mode": Architecting the constraints, rules, and agents to produce reliable software at scale.
  • The Fix: Move context from your head to the Repo. Use Context Engineering Primitives (Instructions, Prompts, Agents) to enforce standards.
  • The Result: A workflow where the prompt remains the same, but the output shifts from "insecure" to "production-ready" automatically.

The "Vibe" Shift

We’ve all been there. You have a Lo-Fi playlist on, a fresh coffee, and an LLM chat window open. You ask for a React component, you paste it, it works.

This is Vibe Coding. The strategy is simple: "Prompt, Paste, and Pray."

At my last company (YGG), we Vibe Coded our way to an MVP at breakneck speed and kept up with major business pivots that required constant rework. It felt like magic: we shipped roughly 4x faster with fewer devs, saving seven figures in costs. But when we prepared for launch, we hit a wall that "vibes" couldn't fix.
We discovered that vibe coding can actually be vulnerability-as-a-service.

We had an external security audit, and the report came back: 163 pages of vulnerabilities, including 15 rated Severe. To name a few issues, we had SQL injection risks, SSRF threat vectors, and inconsistent authentication patterns.

The diagnosis wasn't that the AI was "bad." The diagnosis was Context Amnesia. We were prompting in a chat window that didn't know our security protocols, didn't know our auth patterns, and didn't know our infrastructure rules.

That is when we shifted to Vibe Engineering.

The Result?

By applying these methods, we didn't just fix the bugs. We shipped the product on time. We addressed every single High and Severe vulnerability before launch, and in our subsequent sprints, our security ticket volume dropped significantly compared to our "Vibe Coding" days.

Defining the Terms

To fix the problem, we first have to define the methodology. What exactly is the difference between just using AI and engineering with AI?

1. Vibe Coding

vibe-coding-meme

Definition: The practice of writing software using natural language, intuition, and heavy reliance on AI "vibes" rather than syntax.
E.g: "It looks correct, so it is correct."

  • The Strategy: "Prompt, Paste, and Pray."
  • The Vibe: Fast, magical, and chaotic.
  • The Trap: Single Player Mode. It relies entirely on your mental context. If you forget to tell the AI to secure the endpoint, it won't.

2. Vibe Engineering

Definition: The discipline of architecting context, constraints, and agents to produce reliable software at scale.
E.g: "Trust the Agent, but Verify the Spec."

  • The Strategy: "Plan, Orchestrate, and Verify."
  • The Vibe: Disciplined, context-aware, and consistent.
  • The Upgrade: Multiplayer Mode. It follows the team's rules and the repo's context, regardless of which developer is prompting.

The Comparison: Coding vs. Engineering

Here is the breakdown of how the workflow shifts when you move from individual usage to team-based orchestration.

| Feature | Vibe Coding (Individual) | Vibe Engineering (Team/Agent) |
| --- | --- | --- |
| The Human Role | The Typist / Prompter | The Architect / Orchestrator |
| The Context | Whatever is in your chat window | The entire Repo + `*.agents.md` + other `.md` ruleset files |
| The Quality Check | "Does it run?" (Eye test) | "Does it pass the test suite?" |
| The Danger | Breaking Production / Security | Over-engineering |
| The Tooling | Chatbots & Tab-Complete | Agents, Plan Mode, & MCP |

It’s Not a Boolean, It’s a Spectrum

Before we dive into the fix, let's be clear: Vibe Coding isn't "wrong." It’s a tool.

  • Vibe Coding (0-60% Maturity): Excellent for prototyping, hackathons, and exploring new APIs. If you need to test an idea in 30 minutes, Vibe Code it.
  • Vibe Engineering (60-100% Maturity): Critical for production, teams, and long-term maintenance.

The danger lies in staying in "Vibe Mode" when you move to production. Conversely, you don't want to Over-Engineer a weekend project with complex agent rules. It’s a gradient, and knowing when to switch gears is the skill of the future AI-native developer.

Context Engineering Primitives

Vibe Engineering relies on codifying your team's "vibes" into the repository. At GitHub & Microsoft, we call these Context Engineering Primitives:

| Feature | File Pattern | Purpose | Best For |
| --- | --- | --- | --- |
| Custom Instructions | `*.instructions.md` | Rules of Engagement. Always-on guidelines that influence all interactions. | 1. Coding Standards (No `any` types) <br> 2. Security Rules (No raw SQL) <br> 3. Tech Stack (Always use Tailwind) |
| Reusable Prompts | `*.prompts.md` | Executable Commands. Specific tasks you run frequently. | 1. Generating boilerplate components <br> 2. Writing Unit Tests <br> 3. Creating Atomic Commits |
| Custom Agents | `*.agents.md` | Personas & Workflows. Specialized contexts with specific tools. | 1. Security Review Agent <br> 2. Terraform/SRE Agent <br> 3. Migration Agent |
Context Engineering Primitives in a Repo

Here's what these primitives look like in an actual repo:

context-eng-primitives
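
A repo following this pattern typically ends up with a layout along these lines; the folder and file names below are illustrative assumptions based on the files referenced later in this article, not a required structure:

```
.github/
├── copilot-instructions.md        # always-on, repo-wide rules
├── instructions/
│   └── api.instructions.md        # scoped coding & security standards
├── prompts/
│   └── unit-tests.prompts.md      # reusable, runnable prompts
└── agents/
    ├── security.agent.md          # security review persona
    └── terraform.agent.md         # infra/SRE persona
```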

Real World Example 1: The App Layer (Next.js)

Let's look at how "Vibe Coding" vs. "Vibe Engineering" handles the exact same prompt.

The User Prompt:

"Write a quick endpoint to fetch user data by ID."

❌ Scenario A: Vibe Coding (No Context)

Without instructions, the AI optimizes for speed. It assumes you just want the data now.

// src/app/api/legacy-vibe/route.ts
import { NextResponse } from 'next/server';
import { db } from '@/lib/db'; 

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const id = searchParams.get('id');

  // 🚩 RISK 1: SQL Injection vulnerability via template literal
  // 🚩 RISK 2: No input validation (Zod is missing)
  const users = await db.query(`SELECT * FROM users WHERE id = ${id}`); 

  // 🚩 RISK 3: Returning full object including password_hash and salt
  return NextResponse.json(users[0]);
}

✅ Scenario B: Vibe Engineering (With agents.md)

We add a .github/agents/security.agent.md file to the repo with non-negotiables: 1. Always use ORM methods. 2. Validate inputs with Zod. 3. Never return sensitive fields.

Here's what the agent file looks like (it's 3x longer than what you see here):

agent-file
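
To give a sense of its shape, here's a condensed, hypothetical sketch built from the three non-negotiables above; the real file is much longer, and the exact format depends on your tooling:

```markdown
<!-- .github/agents/security.agent.md (condensed, hypothetical sketch) -->
# Security Review Agent

You are a security-focused reviewer for this repository. Non-negotiables:

1. All database access goes through ORM methods; never emit raw SQL or string-interpolated queries.
2. Validate every external input with Zod before it reaches business logic.
3. Never return sensitive fields (e.g. password_hash, salt, tokens); select explicit DTO fields only.
```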

Now here's the code generated from that same initial user prompt.

// src/app/api/secure-vibe/route.ts
// Context: .github/agents/security.agent.md active

import { NextResponse } from 'next/server';
import { z } from 'zod';
import { db } from '@/lib/db';

// ✅ SUCCESS 1: Zod Schema for validation
const QuerySchema = z.object({
  id: z.string().uuid(),
});

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);

  // ✅ SUCCESS 2: Validate input
  const result = QuerySchema.safeParse({ id: searchParams.get('id') });
  if (!result.success) {
    return NextResponse.json({ error: "Invalid ID" }, { status: 400 });
  }

  // ✅ SUCCESS 3: Use ORM method (No raw SQL)
  const user = await db.user.findUnique({
    where: { id: result.data.id },
    // ✅ SUCCESS 4: Return only safe fields (DTO)
    select: { id: true, name: true, email: true } 
  });

  return NextResponse.json(user);
}

Real World Example 2: The Infrastructure Layer (Terraform)

This is even more critical in Cloud Ops. A "Vibe Coded" infrastructure often leads to public buckets and leaked keys.

The User Prompt:

"Write Terraform for an Azure Function to process payments."

❌ Scenario A: Vibe Coding (The "It Works" Trap)

resource "azurerm_linux_function_app" "payment_api" {
  name                = "vibe-payment-api"
  location            = "East US"

  # 🚩 MISTAKE 1: Hardcoded Secrets (The Cardinal Sin)
  app_settings = {
    "STRIPE_API_KEY" = "sk_live_12345_DONOTCOMMIT" 
    "DB_CONN"        = "Server=tcp:db.windows.net;Pwd=Password123!"
  }

  site_config {
    # 🚩 MISTAKE 2: Public & Insecure
    http2_enabled = false
    minimum_tls_version = "1.0"
  }

  # 🚩 MISTAKE 3: Governance (Missing tags)
}

✅ Scenario B: Vibe Engineering (With terraform.agent.md)

We use a Terraform Agent context file that enforces: "Always use Key Vault references" and "Enforce Managed Identity."

Here's what the agent file looks like:

agent-file-eng
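
Again, as a condensed, hypothetical sketch, here are the rules such a file might encode; the bullets mirror the constraints described above, and the exact format depends on your tooling:

```markdown
<!-- .github/agents/terraform.agent.md (condensed, hypothetical sketch) -->
# Terraform / SRE Agent

You generate and review Terraform for this repository. Non-negotiables:

1. Never hardcode secrets or connection strings; reference them via Azure Key Vault.
2. Enforce Managed Identity (SystemAssigned) instead of shared keys or passwords.
3. Require modern transport security: minimum_tls_version = "1.2" and HTTP/2 enabled.
4. Every resource carries governance tags (e.g. CostCenter) for FinOps and compliance.
```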

Now here's the updated code, using that same prompt:

resource "azurerm_linux_function_app" "payment_api" {
  name                = "secure-payment-api"
  location            = "East US"

  # ✅ SUCCESS 1: Managed Identity (Identity as Perimeter)
  identity {
    type = "SystemAssigned"
  }

  # ✅ SUCCESS 2: Secrets via Key Vault References
  app_settings = {
    "STRIPE_API_KEY" = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.stripe.id})"
    "DB_CONN"        = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.db.id})"
  }

  site_config {
    # ✅ SUCCESS 3: Modern Security Standards
    http2_enabled = true
    minimum_tls_version = "1.2"
  }

  # ✅ SUCCESS 4: FinOps Happy + compliance success
  tags = {
    CostCenter = "Payments-Team"
  }
}

The Enterprise Win: True "Shift Left" Security

For Enterprise teams, this methodology solves a massive headache: The Shift Left.

Usually, "Shift Left" means catching security issues in the CI/CD pipeline or during a Pull Request review. While better than fixing it in production, it still creates friction and rework.

Vibe Engineering shifts security all the way to the Prompt.

By Codifying your constraints (e.g., "Always use Managed Identity") into the Repo Context:

  1. Prevention > Detection: You aren't catching a bad pattern in a scan; you are preventing the AI from suggesting it in the first place.
  2. Velocity: Developers don't have to rewrite code after a failed pipeline run.
  3. Governance: You ensure that every junior developer (and every AI agent) defaults to your organization's architectural standards without needing to memorize the wiki.

The Workflow: From GitHub Vulnerability Issue to Merged PR

How does this look in practice when fighting a real security fire? Here is an actual workflow to fix an SSRF vulnerability using a security Agent.

1. The Vulnerability:
We identified a CVE in our Next.js middleware handling. It needed a surgical fix.

issue-image

2. Assigning the Agent:
Instead of pulling a developer off their sprint, I opened a GitHub Issue and assigned it to our custom @security-agent (which is configured to focus solely on vulnerabilities).

assigning-the-agent

3. Orchestration & Execution:
The agent analyzed the repo, found the vulnerable middleware pattern, and proposed a fix. It didn't just guess - it traced the data flow.

agent-execution

4. Verification:
The agent ran the linting rules and CodeQL, ensured the fix didn't break existing routing, and produced a security impact report.

agent-verification

5. The Merge:
I reviewed the PR. Because the agent followed our copilot-instructions.md, the code style matched ours exactly, and the security.agent.md ensured the fix met the security best practices specific to our repo. I clicked merge.

agent-merge

Success!

Entering Agent HQ: Orchestration at Scale

We are moving beyond simple chat. We are entering the era of Agent Orchestration.

ghu-agent-hq

At GitHub Universe, we announced Agent HQ. It turns GitHub into an open ecosystem and a centralized agent orchestration platform where you can choose the right model for each specific job.

  • Need complex architectural reasoning? Route the agent to Claude 3.5 Sonnet.
  • Need massive context analysis? Route to Gemini.
  • Need fast execution? Route to OpenAI.

You don't just prompt a chatbot anymore - you act as the General Contractor, hiring the right specialized agent for the right task.

This is already being done at scale at GitHub itself!

ghu-agent-commmiter

Summary

vibe-code-vibe-eng-comparison

To start Vibe Engineering tomorrow:

  1. Avoid Single Player Mode: Don't rely on your mental context.
  2. Codify Your Vibes: Create a root .github/copilot-instructions.md file today (see the starter sketch below).
  3. Leverage Context Engineering Primitives: use *.agents.md, *.instructions.md, and *.prompts.md files.
  4. Orchestrate: Don't just generate code. Engineer the System that generates the code, and use Agents to support that system.
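
As a starting point for step 2, here's a minimal starter sketch of a root instructions file. The specific rules are placeholders drawn from the examples in this article; replace them with your own team's standards:

```markdown
<!-- .github/copilot-instructions.md (starter sketch; adapt the rules to your stack) -->
# Repository Instructions

- Use TypeScript; avoid `any` types.
- Style with Tailwind; no ad-hoc CSS.
- All database access goes through the ORM; no raw SQL.
- Validate external input with Zod.
- Never return or log sensitive fields (password_hash, salt, API keys).
```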

Vibe coding is fun for hackathons. Vibe Engineering is for production.

last-meme-whoosh


You can find the source code for this article here too:
https://github.com/VeVarunSharma/contoso-vibe-engineering


I’m Ve Sharma, a Solution Engineer at Microsoft focusing on Cloud & AI, working on GitHub Copilot. I help developers become AI-native and optimize the SDLC for their teams. I also make great memes. Find me on LinkedIn or GitHub.

Top comments (7)

CapeStart

Vibe coding is great for generating ideas, but the "prompt and pray" approach quickly breaks down once you get to real infrastructure and authentication patterns. The cheat code that most teams are still unaware of is essentially locking context into the repository.

Achnaf Adam

This was a chef’s kiss breakdown of why “Prompt, Paste, and Pray” only gets you as far as the demo day 😂.

The shift from Vibe Coding to Vibe Engineering honestly feels like going from solo-queue chaos to a fully buffed raid team, with actual rules instead of vibes and caffeine.

Loved the part about turning security from “oh no” into “oh… it’s already fixed?” by moving context into the repo.

Guess we’re finally admitting that our chat window is not a source of truth 😅.

Seriously though, this is the clearest explanation I’ve seen on how teams should evolve from just using AI to actually engineering with AI.

More of this. Fewer 163-page security reports, please. 🙏🔥

spO0q

Orchestrator!

You need to filter the results, and stop the machine, which can be hard, as it often hides its errors.

Very cool for prototyping, but dangerous for production code without proper supervision.

I honestly don't understand why big tech companies are kinda validating this storytelling.

Saving money today, fixing hard bugs tomorrow.

Ve Sharma

Definitely agree with you! Exactly - "vibe coding" can be cool for prototyping, but without supervision and the right guardrails and constraints (as seen via the .github AI files) it becomes dangerous for a production app!

uratmangun

I'm more into AI-assisted than vibing lol. Vibing is kinda like irresponsible behaviour, while AI-assisted is more like I'm getting assisted while I'm still in control of everything.

Ve Sharma

100%! Vibe coding can definitely be an irresponsible way to develop - by utilizing those guardrail files, at least the output is much more controlled and safe, per the shift-left security posture strategy!

EmberNoGlow

I agree, it’s a fascinating concept! However, from my personal journey, “vibe coding” often feels like a coin flip – you either hit the jackpot or face a significant loss.

I’ve had quite a few frustrating experiences, like using up my entire free Copilot quota because the AI just wouldn’t commit to writing the actual code. I’d try to guide it by saying “write code,” and it would respond with analysis phrases like “let me analyze the code” or “I’m ready to help write the code.” Even after explicit instructions to begin, it often failed to deliver. The ultimate outcome was usually code that didn’t work, and my available usage was exhausted.

It seems like this type of AI-assisted coding, where intuition plays a big role, has a substantial path ahead in terms of maturity and reliability!