The 2025 DORA report says what everyone has been thinking

Reflecting on the latest DORA analysis of AI in engineering organizations:

I’ve been using AI coding tools daily for over a year now, and this DORA report is refreshingly honest about something most of us have been muttering in Slack channels: AI isn’t fixing anyone’s broken processes—it’s just making them break faster and louder.

The headline finding? AI adoption hit ninety percent, but organizations without solid foundations are seeing decreased performance and increased instability. Shocking absolutely no one who’s watched an AI-generated PR sit in review purgatory for a week because your team still hasn’t figured out async code reviews.

Here’s what actually resonates: the “spray and pray” approach of handing out Copilot licenses and calling it a strategy is hitting a dead end. I’ve seen this firsthand. You generate code thirty percent faster, then spend that saved time waiting for CI to fail, waiting for staging to deploy, waiting for someone to approve your infrastructure change. The bottleneck just moves downstream.

DORA’s pushing organizations to focus AI on system-level problems rather than individual task speedup. This makes sense theoretically, but let’s be real—most companies can barely get internal tooling prioritized without AI in the mix. Now we’re supposed to build AI-powered platform capabilities? The organizations that can pull this off are already high performers.

The measurement point hits hard though. If you’re still tracking lines of code generated or adoption rates, you’re measuring the wrong things. We need to know if AI is actually moving the needle on cycle time, stability, and developer satisfaction. But here’s the uncomfortable truth: proper measurement requires instrumentation most teams don’t have and a data culture many organizations are still fumbling toward.
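To make the measurement point concrete, here’s a minimal sketch of what tracking outcomes instead of adoption might look like. The record shape and values are entirely hypothetical—in practice you’d pull this from your Git host or deploy pipeline—but the idea is simple: compute median cycle time and change failure rate from actual delivery data, not lines of AI-generated code.

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records: (opened_at, merged_at, caused_incident).
# In reality these would come from your Git host's API or deploy logs.
prs = [
    (datetime(2025, 6, 1, 9), datetime(2025, 6, 2, 15), False),
    (datetime(2025, 6, 3, 10), datetime(2025, 6, 3, 16), False),
    (datetime(2025, 6, 5, 11), datetime(2025, 6, 9, 12), True),
]

# Cycle time: hours from opened to merged, per PR.
cycle_hours = [(merged - opened).total_seconds() / 3600
               for opened, merged, _ in prs]
median_cycle = median(cycle_hours)

# Change failure rate: share of changes that triggered an incident.
failure_rate = sum(1 for _, _, bad in prs if bad) / len(prs)

print(f"median cycle time: {median_cycle:.1f}h, "
      f"change failure rate: {failure_rate:.0%}")
# → median cycle time: 30.0h, change failure rate: 33%
```

Even a crude version of this tells you more about whether AI is helping than any adoption dashboard will.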

One thing DORA nails: the need for explicit AI policies isn’t about mandates, it’s about operational clarity so developers can experiment confidently. The current state of limbo where nobody knows what’s allowed is exhausting. I’ve personally avoided certain AI use cases not because they wouldn’t work, but because I had zero idea if they’d violate some unwritten policy.

The report’s central metaphor—that AI holds up a mirror to organizations, amplifying both good and bad practices—is dead-on. If your team ships quality code with good observability and small batches, AI makes you dangerous. If you ship garbage with manual deploys and no monitoring, AI just helps you ship garbage faster.

What worries me is that this creates a growing divide. Teams already winning with DevOps best practices will accelerate further. Everyone else will struggle with AI-generated technical debt and wonder why their velocity gains evaporated.

Still, I appreciate DORA dropping the elite/high/medium/low clusters. Those always felt like engineering team astrology. The new seven team types supposedly map to actionable improvements, which could be useful if organizations actually use them instead of just checking where they rank.

Bottom line: This report won’t tell you anything you didn’t suspect if you’ve been paying attention, but it’s valuable to have data backing up what we’ve been experiencing. AI coding tools are powerful multipliers, but they multiply whatever you’re already doing. Fix your processes first, then let AI amplify them.