Techjectory

Replit AI: a lesson in environment segregation

Published on
Authors
  • Name
    Dan Bradley

The recent news that Replit’s AI coding assistant wiped a user’s production database and changed code despite instructions to implement a code freeze has people worried about AI agents’ ability to destroy databases and codebases.

Replit is a ‘vibe-coding’ product where the AI builds and hosts your app in a closed ecosystem, but the incident highlights the need for every business to ensure the right level of segregation between its environments.

The risk isn't (just) AI

If your developers have direct access to production from their machines, what happens if an AI agent goes rogue? What happens if a virus gets onto one of their machines? Or if someone makes a mistake and forgets which database they are connected to?

This isn’t an AI agent problem, it’s an environment segregation problem. And it really is not new.

Segregation is a business issue

While security and segregation of environments can seem like a “technical issue” to non-technical founders and business leaders, this incident demonstrates that it can easily become a business continuity and risk issue.

When did you last check in on the level of risk that your tech team think your business is carrying with regards to its environments?

Overcoming objections

Getting segregation right takes effort and discipline, and if your current processes have been in place for a long time, you may face resistance to making changes.

“It’ll slow us down.” “It’ll take longer to deploy.” “How will I test if we don’t have production data?” “It’ll take too long.”

All of these objections are valid, but all of them can be overcome, given the will and a good understanding of the risks.
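The “how will I test without production data?” objection, in particular, usually has a straightforward answer: pseudonymise production records before they reach a test environment. A minimal sketch in Python, using only the standard library (the field names and key handling are illustrative assumptions, not a prescription):

```python
import hashlib
import hmac

# Illustrative key: in practice this would live in a secret store,
# not in the codebase.
PSEUDONYM_KEY = b"test-env-only-key"

def pseudonymise(value: str) -> str:
    """Replace a sensitive value with a stable, irreversible token.

    Using HMAC keeps the mapping consistent (the same email always maps
    to the same token, so joins between tables still work) without
    exposing the original value in the test environment.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict) -> dict:
    """Copy a record, pseudonymising fields that identify a person."""
    sensitive = {"email", "name", "phone"}
    return {
        key: pseudonymise(val) if key in sensitive else val
        for key, val in record.items()
    }

row = {"id": 42, "email": "jane@example.com", "plan": "pro"}
safe = scrub_record(row)
# 'id' and 'plan' survive unchanged; 'email' becomes an opaque token.
```

Developers get realistic volumes and shapes of data to test against, while the sensitive values never leave production.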

Red flags

  • Developers can force push to the main branch of your source code repositories. If you’re not yet using source code management, this is a critical first step.
Deployment to production is done from developers’ machines; worse still if it’s done manually.
Developers have direct access to the production database from their machines, which is especially risky if credentials are stored locally.

If any of these are true, you should be looking to mitigate immediately: implement branch protection policies in your source control provider, create CI/CD pipelines for deployment, and close production databases to workstations, as a minimum.
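Even before the network-level controls are in place, a cheap belt-and-braces step is to make your own tooling refuse to hand out production credentials outside the pipeline. A minimal sketch, assuming a `CI` environment variable set by your CI/CD provider and an `APP_ENV`/`PROD_DATABASE_URL` convention (all names here are illustrative):

```python
import os

class ProductionAccessError(RuntimeError):
    """Raised when code tries to reach production from a workstation."""

def get_database_url() -> str:
    """Return the database URL for the current environment.

    Production credentials are only handed out when the process is
    running inside the CI/CD pipeline (signalled here by the CI env
    var); everywhere else, you get the local development database.
    """
    env = os.environ.get("APP_ENV", "development")
    if env == "production":
        if os.environ.get("CI") != "true":
            raise ProductionAccessError(
                "Refusing to use production credentials outside the pipeline."
            )
        return os.environ["PROD_DATABASE_URL"]
    return "postgresql://localhost:5432/app_dev"
```

A guard like this won’t stop a determined attacker, but it turns “forgot which database I was connected to” (human or AI) into a loud error instead of a silent disaster.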

Other common patterns, such as a lack of segregated test and production environments, using production databases for testing, or publicly accessible infrastructure, might be less likely to be impacted by a developer’s AI agent, but they also create significant risks for your business.

Protect your business

So my message to you is this: don’t focus on the AI agents; look at how you are protecting your software and infrastructure estate regardless of the threat.

Would your systems survive a rogue AI agent? Or even simple human error?

If you’re unsure where your risks lie or need help with your DevOps and cloud infrastructure, let’s talk.

Ready to talk?

Contact us to see how we can help with your business's tech trajectory.