Google builds elite team to close the coding gap with Anthropic

Key Points

- Google DeepMind has formed a dedicated team to strengthen the coding capabilities of its Gemini models, particularly for complex tasks like building new software from scratch.
- The move comes after an internal assessment concluded that Anthropic's programming tools currently outperform Google's own offerings.
- To close the gap, Google is increasingly training its AI models on internal code, while also tracking employee usage of internal coding tools and, in some teams, making AI training mandatory.

Google is doubling down on AI coding, using more AI internally and aiming for models that can eventually improve themselves. Google DeepMind has put together a specialized team of researchers and engineers to sharpen the programming chops of its Gemini models, The Information reports.

The group is led by DeepMind engineer Sebastian Borgeaud, who previously ran pre-training for the company's models. The team is focused on complex, long-horizon programming tasks like writing new software from scratch, work that requires models to read files and figure out what the user actually wants. Part of the motivation: Google researchers think Anthropic's coding tools are better. Coding has become the battleground for every major AI lab this year, with OpenAI and Google both scrambling to catch up to Anthropic. OpenAI recently pulled the plug on its Sora video generator to free up compute for training and running other AI models.

Brin pushes for self-improving AI

Google co-founder Sergey Brin and DeepMind CTO Koray Kavukcuoglu are directly involved in the effort.

"To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers" of code, Brin wrote in an internal memo. He also required every Gemini engineer to use internal agents for complex, multi-step tasks. Brin told employees that stronger coding skills are a stepping stone toward AI that can improve itself: a sophisticated coding agent, paired with AI that handles math problems and experiments, could eventually automate much of the work done by AI researchers and engineers. Internally, Google tracks how much its coding tool "Jetski" gets used and ranks teams accordingly, a setup similar to Meta's, which uses token consumption as its metric.

Some teams outside DeepMind also require engineers to attend AI training sessions. According to The Information's sources, Google is leaning more heavily on models trained on its internal code. Google's internal codebase looks very different from the public code typically used to train general-purpose coding agents, so these internally trained models can't be released publicly. They could, however, help Google build better models that eventually ship to users, while also speeding up internal development.

Original news sources

This article was compiled from the original reports below and translated into Burmese. All credit belongs to the original authors and publishers.
