87% Faster Code Delivery with AI? Here’s What They’re Not Telling You
AI-powered code assistants are no longer theoretical. Tools like Google’s Gemini Code Assist are starting to impact real engineering workflows—and organizations are beginning to see tangible results. But what should executives realistically expect? And what are the operational trade-offs?
At Google Cloud Next, I sat down with Ryan Salva, VP of Product at Google Cloud, to talk about where Gemini Code Assist is today, where it performs best, and what challenges still lie ahead for enterprise adoption.
After sharing a post about developers achieving 25% productivity gains, the feedback was clear: decision-makers want more context. Ryan and I went deep—breaking down real use cases, measuring impact, and discussing the skills gap that AI tools are starting to expose.
Key Takeaways from My Conversation with Google Cloud
✅ 25% Productivity Gains Are Real—But Context Matters
Private preview customers using Gemini Code Assist have seen up to 25% improvement in time to market along with measurable reductions in delivery cost. But these gains depend on teams having:
- A structured engineering workflow
- Strong documentation and test coverage
- Skilled engineers crafting and refining prompts
AI assistance isn’t plug-and-play. It amplifies well-run teams—it won’t fix broken ones.
⚙️ Where AI Shines: Modernization Projects
Ryan shared compelling examples of where AI drives the most value:
- COBOL to Java migration: A USPS application estimated at 150–160 hours was modernized in 20 hours by a developer with no COBOL experience, using Gemini and Google’s migration tooling.
- GraphQL to PostgreSQL conversion: A major retailer cut conversion time per query from two days to six hours by using a structured, AI-assisted process.
Modernization and refactoring projects are ideal starting points—especially when the source and target states are clearly defined.
👨🏽‍💻 Human-in-the-Loop Is Still Mandatory
Gemini isn’t replacing engineers—it’s augmenting them. Human guidance is still required for:
- Writing high-quality prompts
- Interpreting ambiguous results
- Validating architectural decisions
As Ryan put it: we’re not waiting for the models to improve—we’re improving how we work with them.
🧪 AI Output Needs to Be Measured by Quality, Not Quantity
Too much of the AI conversation has focused on code volume. The real opportunity is improving quality—test coverage, documentation, and resilience.
The 2024 DORA Report supports this pivot. It found that teams where 30%+ of developers adopted AI tools saw a 7.2% regression in delivery stability.
That’s a wake-up call: if you’re measuring AI’s impact by lines of code, you’re missing the point.
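To make the pivot from volume to quality concrete, here is a minimal sketch of tracking a DORA-style stability signal (change failure rate) per sprint instead of counting lines of code. The field names and numbers are illustrative, not from any specific team or tool.

```python
from dataclasses import dataclass

# Hypothetical per-sprint delivery metrics; the fields are illustrative.
@dataclass
class SprintMetrics:
    deployments: int
    failed_deployments: int   # deploys that needed a rollback or hotfix
    test_coverage: float      # fraction of lines covered, 0.0-1.0

def change_failure_rate(m: SprintMetrics) -> float:
    """DORA-style stability signal: share of deploys that caused a failure."""
    return m.failed_deployments / m.deployments if m.deployments else 0.0

before = SprintMetrics(deployments=40, failed_deployments=4, test_coverage=0.62)
after_ai = SprintMetrics(deployments=52, failed_deployments=9, test_coverage=0.58)

# More output after AI adoption, but stability regressed -- exactly the
# signal a lines-of-code dashboard would miss.
print(f"CFR before AI: {change_failure_rate(before):.1%}")   # 10.0%
print(f"CFR after AI:  {change_failure_rate(after_ai):.1%}") # 17.3%
```

A dashboard built on metrics like this surfaces the stability regression the DORA data warns about, even while raw throughput climbs.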
🧩 Prompting Is a Skill—and a Discipline
High-performing teams are starting to treat prompt creation like any other engineering practice:
- Versioning prompts alongside code
- Reviewing them during code reviews
- Measuring their impact over time
The gap isn’t in raw talent—it’s in prompt fluency. As Ryan noted, many engineers are close to becoming “AI engineers,” but few have crossed the bridge. It’s time to start building that competency.
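What "prompts as an engineering practice" can look like in a repo: a sketch of a prompt template stored as versioned source, with a regression check CI could run whenever the template changes. The template text, placeholders, and file layout are all assumptions for illustration.

```python
from string import Template

# A prompt template that would live in the repo (e.g. prompts/migrate.tmpl)
# and be diffed and reviewed like any other source file.
MIGRATE_PROMPT = Template(
    "You are migrating legacy code.\n"
    "Source language: $source_lang\n"
    "Target language: $target_lang\n"
    "Preserve behavior exactly; emit unit tests for every public routine.\n"
    "--- BEGIN SOURCE ---\n$source_code\n--- END SOURCE ---"
)

def render_prompt(source_lang: str, target_lang: str, source_code: str) -> str:
    """Render the versioned template; raises KeyError if a placeholder is missing."""
    return MIGRATE_PROMPT.substitute(
        source_lang=source_lang, target_lang=target_lang, source_code=source_code
    )

# A minimal "prompt test" CI could run on every change to the template:
# the rendered prompt must still carry the instructions reviewers approved.
prompt = render_prompt("COBOL", "Java", "IDENTIFICATION DIVISION.")
assert "Preserve behavior exactly" in prompt
assert "BEGIN SOURCE" in prompt
```

Because the template is ordinary source, a change to it shows up in a pull request, gets reviewed, and can be correlated with downstream quality metrics over time.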
What About Platform Engineering?
Beyond the developer conversation, there’s an operational elephant in the room: platform teams aren’t ready for this.
At Google Cloud Next, I raised the question directly: “How are customers handling the governance and pipeline implications of AI-assisted development?”
The answer: most aren’t. Today’s CI/CD pipelines aren’t built for:
- Non-deterministic outputs
- Prompt-as-code workflows
- Agent-generated changes requiring audit trails
- New testing models for LLM output validation
Google acknowledged this gap and sees a future where some orgs may leapfrog traditional DevOps—but most enterprises will need new constructs before that’s possible.
This isn’t just a toolchain change. It’s a governance, observability, and security challenge that platform teams haven’t solved yet.
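As a thought experiment, two of the missing pipeline constructs above — deterministic validation of non-deterministic output, plus an audit trail — can be sketched as a single CI gate. Everything here is a hypothetical illustration: real pipelines would layer linting, test execution, and human review on top of a structural check like this.

```python
import ast
import datetime
import hashlib

def gate_generated_code(code: str, prompt_id: str) -> dict:
    """Minimal CI gate for AI-generated Python: the output must at least
    parse, and every decision is captured in an audit record that ties the
    artifact back to the versioned prompt that produced it."""
    try:
        ast.parse(code)          # deterministic structural check
        accepted = True
    except SyntaxError:
        accepted = False
    return {
        "prompt_id": prompt_id,  # which versioned prompt produced this output
        "output_sha256": hashlib.sha256(code.encode()).hexdigest(),
        "accepted": accepted,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

good = gate_generated_code("def add(a, b):\n    return a + b\n", "migrate-v3")
bad = gate_generated_code("def add(a, b) return a + b", "migrate-v3")
print(good["accepted"], bad["accepted"])  # True False
```

The point is not the parser check itself but the shape: a gate that runs on every agent-generated change and leaves behind evidence of what was accepted, when, and from which prompt — the audit trail most CI/CD pipelines don't produce today.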
What Executives Still Need to Address
As your organization evaluates or pilots AI dev tools, here are the open questions to answer:
- Are our platform and DevOps teams equipped to manage AI-generated outputs?
- Do we have governance frameworks for prompt versioning and approval?
- What new test and quality gates do we need for LLM-powered pipelines?
- How do we evaluate “success” when productivity isn’t measured in lines of code?
- How do we train engineers to become AI-capable—not just AI consumers?
Final Thought
AI code assistants like Gemini aren’t a magic wand but a real opportunity. The value is there for teams with the discipline, structure, and engineering maturity to harness them.
As always, the organizations that move deliberately—investing in enablement, governance, and process—will be the ones that gain the most from this shift.
Let’s keep the conversation going. Are your teams experimenting with AI development? What’s working? What’s holding you back?
Keith Townsend is a seasoned technology leader and Chief Technology Advisor at Futurum Group, specializing in IT infrastructure, cloud technologies, and AI. With expertise spanning cloud, virtualization, networking, and storage, Keith has been a trusted partner in transforming IT operations across industries, including pharmaceuticals, manufacturing, government, software, and financial services.
Keith’s career highlights include leading global initiatives to consolidate multiple data centers, unify disparate IT operations, and modernize mission-critical platforms for “three-letter” federal agencies. His ability to align complex technology solutions with business objectives has made him a sought-after advisor for organizations navigating digital transformation.
A recognized voice in the industry, Keith combines his deep infrastructure knowledge with AI expertise to help enterprises integrate machine learning and AI-driven solutions into their IT strategies. His leadership has extended to designing scalable architectures that support advanced analytics and automation, empowering businesses to unlock new efficiencies and capabilities.
Whether guiding data center modernization, deploying AI solutions, or advising on cloud strategies, Keith brings a unique blend of technical depth and strategic insight to every project.