AI agents and software: You say you want a revolution?

You say you want a revolution (…)
But when you talk about destruction
Don’t you know that you can count me out

What we are witnessing now is a shift. Maybe a seismic one. It’s the agentic AI moment when it comes to writing software. Revolution — or…something nasty?

Let’s go back in time to find an analogy.

For most of human history, the art of writing was an exclusive privilege of elite classes across different societies. In Europe during the Middle Ages, writing and reading were largely reserved for monks in monasteries. In most countries in Western Europe, the church was the sole provider of education. It had almost total control over quality and output.

When Johannes Gutenberg made printing with movable type accessible to the masses, it was a liberating moment for human history. Suddenly it was possible to create and distribute revolutionary ideas — scientific, political, and theological. The ramifications cannot be overstated: Renaissance humanism, the Reformation, and the scientific revolution all were made possible thanks to the democratization of the printed word.

But here’s the crucial distinction: Gutenberg democratized reproduction, not creation. After 1450, most people still couldn’t read or write — that took centuries of education reform. What’s happening now with AI-assisted coding is something even more radical: it democratizes creation itself.

Until very recently, the development of software was limited to those who had invested years mastering programming languages and development tools. Unlike the monks of the Middle Ages, who actively guarded access to knowledge, developers didn't control access; the barrier to entry was simply high. Learning to code was open to anyone willing to make the journey, but it was a long and demanding one. Most people could not build the software they envisioned.

This all changed a couple of months ago. Now, thanks to tools like Claude Code, Codex, Mistral Vibe, or Google’s Antigravity, virtually everyone can write software. People are coding the tools and games they always wanted to build.

What a liberating moment.

But…

With great power comes great responsibility. And here lies the critical problem: while you can fully comprehend a written text, it is extraordinarily difficult to do the same for software written by AI tools. The people creating these applications often cannot read, audit, or verify the code running behind them. Gosh, even experts struggle to understand what the LLMs have produced (and not because it is so brilliant). A compromised token here, an exposed API key there: these aren't hypothetical scenarios. They're already happening.
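The failure mode described here is often startlingly mundane: a credential pasted straight into generated source. A minimal sketch of the anti-pattern and its common fix; the key, URL, and variable names below are invented placeholders, not real values or any vendor's actual output:

```python
import os
import urllib.request

# DANGEROUS: a secret hardcoded in source code. Anyone who can read
# this file, or the public repository it ends up in, now owns the key.
API_KEY = "sk-live-EXAMPLE-0000000000000000"

def fetch_report(url: str) -> bytes:
    # Ships the hardcoded key with every copy of the code.
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {API_KEY}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Safer: resolve the secret from the environment at runtime,
# so it never lands in the repository at all.
API_KEY_FROM_ENV = os.environ.get("REPORT_API_KEY", "")
```

A reader who cannot audit code will not spot the difference between these two lines, which is precisely the problem the essay is pointing at.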

This is where the Gutenberg analogy breaks down — and where the stakes get higher. A poorly printed pamphlet might spread bad ideas, but it doesn’t autonomously execute actions on your behalf. Bad software does. And when that software scales to millions of users who also don’t understand its internals, you’ve created an attack surface of unprecedented proportions. Malicious actors don’t need to compromise the creators — the creators have already done the hard work for them, shipping code they never truly understood.

Regulation? Technical guardrails? The answer is probably: all of the above, and more. Here are three paths forward — each with its own logic, and its own trade-offs.

Path One: A New Kind of Literacy. Just as Gutenberg's revolution eventually demanded mass literacy campaigns, the AI coding revolution demands a new form of education. Not teaching everyone to program — that ship has sailed — but teaching people to evaluate what AI-generated code does. Call it "code literacy for the vibe-coding era." The question is: who provides this education? Schools? The AI companies themselves? And how do you teach someone to assess risks in code they fundamentally cannot read?

Path Two: Platform Responsibility. If Anthropic, OpenAI, and Google are the new printing presses, they bear responsibility for what rolls off their machines. This means building automated security scanning into every code generation step, sandboxing applications by default, flagging dangerous patterns before they ship. The tools that make creation easy must also make recklessness hard. This isn’t about limiting creativity — it’s about building safety into the infrastructure itself.
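One concrete shape such infrastructure-level guardrails could take is a secret scan run over every batch of generated code before it ships. This is a toy sketch, not any platform's actual pipeline; the rule names and regexes are my own, and real scanners maintain far larger rule sets:

```python
import re

# Hypothetical patterns for common credential formats.
SECRET_PATTERNS = {
    "generic api key": re.compile(
        r"""(?i)api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]"""
    ),
    "aws access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(
        r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"
    ),
}

def flag_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for each suspicious line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

generated = 'API_KEY = "sk-live-0123456789abcdef"\nprint("hello")\n'
findings = flag_secrets(generated)  # line 1 should be flagged
```

The point is not that regexes solve the problem — they catch only the crudest mistakes — but that checks like this can run automatically, at generation time, without asking the creator to read a single line.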

Path Three: Trust the Process. When Gutenberg’s press arrived, the Catholic Church panicked about losing its monopoly on truth. Moral guardians warned that widespread printing would unleash chaos. And yes, it did unleash some chaos — but on balance, the democratization of knowledge produced far more good than harm. Perhaps the same will prove true here. Perhaps the security problems will be solved not by restricting access, but by building better tools — AI that audits AI, automated vulnerability detection that improves faster than the threats evolve.

Revolution or Destruction?

The answer is: probably both. Every democratization creates new vulnerabilities alongside new freedoms. The question isn’t whether to embrace the revolution. It’s already here. The question is whether we can build the institutions, tools, and literacy to ensure that when destruction comes knocking, we can count it out.

Don’t you know that you can count me out.