A new data point reveals the staggering pace of AI adoption within Google's own engineering ranks. According to a post by Kimmo Kärkkäinen, a Google software engineer, 75% of all new code at Google is now AI-generated and approved by engineers. This marks a rapid acceleration from the 50% reported in the fall of 2025: a 25-percentage-point increase in roughly six months.
The post projects the trend forward, asking "2027 90%, and 2028…?", which suggests an expectation that AI will be responsible for nearly all new code at Google within a few years.
Key Takeaways
- Google reports 75% of all new code is now AI-generated and engineer-approved, a sharp increase from 50% last fall.
- This indicates a massive, accelerating shift in software development practices at the tech giant.
What Happened
The core claim is straightforward but significant: three out of every four new lines of code written at Google now originate from an AI coding assistant, with a human engineer providing the prompt and final review. The metric implies a wholesale transformation of the developer workflow at one of the world's largest software engineering organizations.
The jump from 50% to 75% in a single half-year period is the most critical detail. It indicates that adoption has moved past early experimentation and pilot phases into a dominant, standard practice. The growth curve is not linear; it is accelerating.
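To make that concrete, here is a minimal back-of-the-envelope sketch, not anything from the source: it fits a logistic (saturating) curve to the two cited data points, 50% in fall 2025 and 75% roughly six months later, and extrapolates. A share of code is capped at 100%, so a saturating curve is a more plausible model than a straight line, and it happens to land near the post's "90% in 2027" guess.

```python
# Back-of-the-envelope projection of the cited adoption figures.
# All modeling choices here are illustrative, not from the source.
import math

# Observations: (years since fall 2025, AI-generated share of new code).
# The first point (0.0, 0.50) pins the logistic midpoint t0 at 0;
# solve the second point for the growth rate r.
t1, p1 = 0.5, 0.75

# Logistic model: p(t) = 1 / (1 + exp(-r * (t - t0))), with t0 = 0.
r = -math.log(1.0 / p1 - 1.0) / t1   # r = 2 * ln(3) ≈ 2.20 per year

def share(t: float) -> float:
    """Projected AI-generated share of new code, t years after fall 2025."""
    return 1.0 / (1.0 + math.exp(-r * t))

for years in (1.0, 2.0, 3.0):
    print(f"fall {2025 + int(years)}: {share(years):.0%}")
# -> fall 2026: 90%, fall 2027: 99%, fall 2028: 100% (rounded)
```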
Context: Google's Internal AI Tooling
While the post does not specify the tool, the context strongly points to Google's internal AI coding systems. Publicly, Google offers Gemini Code Assist (formerly Duet AI for Developers), an enterprise-grade coding assistant integrated into IDEs such as VS Code and the JetBrains family, as well as Cloud Workstations. Internally, Google engineers likely have access to more advanced, proprietary versions of these models, potentially fine-tuned on Google's massive internal codebase.
This internal adoption serves as the ultimate dogfooding test for Google's AI-for-coding products. The rapid scaling from 50% to 75% usage suggests the tools are meeting a critical bar for productivity, code quality, and reliability that justifies mandatory or near-mandatory use.
What This Means in Practice
A 75% AI-generated code rate fundamentally changes the engineering job. The role of a software engineer at Google is shifting from primarily writing code to:
- Articulating Problems: Precisely defining requirements and breaking them down into prompts.
- Reviewing & Curating: Critically evaluating AI-generated code, checking for logic errors, security flaws, and alignment with system architecture.
- Integrating & Debugging: Assembling AI-generated modules and troubleshooting complex, system-level issues.
This level of adoption suggests AI is handling a vast amount of boilerplate, routine implementation, and perhaps even complex algorithm translation, freeing engineers to focus on higher-level design and problem-solving.
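As a concrete illustration of that prompt-then-review loop, here is a minimal sketch using the public Gemini API via the google-generativeai Python SDK. This is an assumption-laden stand-in: the post does not name Google's internal tool, and the model name, prompt, and workflow below are illustrative only.

```python
# Illustrative prompt -> generate -> human-review loop. The public Gemini
# API stands in for Google's unnamed internal tooling (an assumption).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder credential
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

# 1. Articulate the problem: the engineer writes a precise specification.
spec = (
    "Write a Python function dedupe_preserve_order(items: list) -> list "
    "that removes duplicates while preserving order. Include type hints, "
    "a docstring, and no external dependencies."
)

# 2. Generate: the AI drafts the implementation.
draft = model.generate_content(spec).text
print(draft)

# 3. Review and approve: a human checks correctness, security, and style
#    before anything is committed -- the "approved by engineers" step.
```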
gentic.news Analysis
This data point is a seismic indicator for the software industry. When the company that builds and sells AI coding tools reaches 75% internal usage, it validates the technology's core value proposition at scale. This isn't a cherry-picked benchmark; it's a measure of daily, operational reality.
This follows a clear trend we've tracked of internal tooling driving external product strategy. Google's aggressive integration of AI across its products, from AI Overviews (formerly SGE) in Search to Workspace, is now mirrored in its own engineering halls. The velocity here (50% to 75% in roughly six months) suggests the productivity gains are substantial enough to overcome inevitable organizational inertia and skepticism. If the gains were marginal, adoption would plateau.
This also raises immediate questions for the competitive landscape. Microsoft, with its deep integration of GitHub Copilot across the developer stack, and Amazon, with Q Developer (formerly CodeWhisperer), will be watching closely. Google's internal metric sets a new benchmark for what's possible. It will pressure competitors to release similar adoption data and could accelerate enterprise sales cycles, as CTOs ask, "If it's good enough for 75% of Google's code, why not for ours?"
However, key unknowns remain. The metric "AI-generated" could encompass anything from a single-line completion to an entire generated function. The "approved by engineers" clause is crucial—it means this is augmented intelligence, not autonomous coding. The real test of quality will be long-term system stability and security. A high generation rate is impressive, but the next question is: what is the defect rate in that AI-generated code compared to human-written code?
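For what that last question would look like in practice, here is a tiny hypothetical sketch of the defect-rate comparison; every number in it is invented for illustration and none appears in the source.

```python
# Hypothetical defect-rate comparison by code origin. All figures are
# invented placeholders; the source reports no such data.
defects = {"ai_generated": 900, "human_written": 400}          # bugs filed
new_loc = {"ai_generated": 750_000, "human_written": 250_000}  # lines added

for origin, bug_count in defects.items():
    rate = bug_count / (new_loc[origin] / 1000)   # defects per KLOC
    print(f"{origin}: {rate:.2f} defects per KLOC")
# A fair comparison would also need to control for code complexity,
# review depth, and where in the stack each kind of code lands.
```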
Frequently Asked Questions
What AI tool does Google use to generate code?
While not explicitly named in the source, Google engineers are almost certainly using an internal, proprietary version of Google's AI coding technology. The public-facing counterpart is Gemini Code Assist, an enterprise coding assistant. The internal version is likely more advanced and deeply integrated with Google's unique internal development environment and codebase.
Does this mean Google engineers are just reviewing AI code now?
Essentially, yes, for the majority of new code. The role is shifting from author to editor and architect. Engineers provide the specification (via prompt), review the AI's output for correctness, security, and style, and integrate it into the larger system. They remain responsible for the final product.
Is 75% AI-generated code a good thing?
It indicates massive productivity gains, allowing engineers to focus on complex problems rather than routine syntax. The risks include over-reliance on AI, potential for generating subtle bugs or security vulnerabilities that are hard to spot, and the "deskilling" of engineers in lower-level implementation details. Google's high adoption rate suggests they believe the benefits significantly outweigh these risks.
How does this compare to other big tech companies?
No other major tech company has published a directly comparable internal metric. Microsoft promotes widespread use of GitHub Copilot but typically cites user counts and suggestion acceptance rates rather than the share of new code generated. Google's 75% figure sets a public benchmark that competitors will now be measured against.