Amazon's SageMaker Agentic Fine-Tuning Supports Llama, Qwen, DeepSeek, Nova

Amazon launched an AI agent on SageMaker that automates fine-tuning of Llama, Qwen, DeepSeek, and Nova models via plain-language instructions, abstracting away API fragmentation across model families.

1 day ago · 3 min read · AI-Generated
Source: the-decoder.com via The Decoder · Single source
What does Amazon's new agentic fine-tuning on SageMaker AI support?

Amazon SageMaker AI now includes an AI agent that lets developers describe use cases in plain language to fine-tune models like Llama, Qwen, DeepSeek, and Nova, automating data prep, training, and code generation.

TL;DR

Amazon launches agentic fine-tuning on SageMaker AI. · Kiro agent recommends methods, prepares data, trains models. · Supports Llama, Qwen, DeepSeek, and Nova model families.

Amazon launched an AI agent for SageMaker that automates fine-tuning of Llama, Qwen, DeepSeek, and Nova models. The Kiro agent, preinstalled in the development environment, replaces manual API and data-format wrangling with plain-language instructions.

Key facts

  • Kiro agent preinstalled in SageMaker AI development environment.
  • Supports Llama, Qwen, DeepSeek, and Nova model families.
  • Nine prebuilt skills cover the workflow from dataset checking to model deployment.
  • Claude Code can substitute for Amazon's Kiro agent.
  • All generated code is delivered as editable, reusable Jupyter notebooks.

Amazon SageMaker AI now includes an AI agent designed to help developers customize language models. Instead of wrestling with different APIs and data formats, developers can now describe their use case in plain language. The agent then recommends the right training method, prepares the data, kicks off training, and delivers the finished code as Jupyter notebooks, according to The Decoder.
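
The coverage doesn't reproduce the generated code itself, but a notebook produced for a Llama job would plausibly build on SageMaker's existing JumpStart fine-tuning flow. A minimal sketch under that assumption; the model ID, hyperparameters, and S3 path below are illustrative placeholders, not details from the announcement:

```python
# Hypothetical sketch of what an agent-generated fine-tuning notebook cell
# might resemble, based on the existing SageMaker JumpStart API.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",  # placeholder JumpStart model ID
    environment={"accept_eula": "true"},        # Llama models require EULA acceptance
)

# Instruction-tune on the dataset the agent prepared (paths are illustrative).
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")
estimator.fit({"training": "s3://example-bucket/prepared-dataset/"})

# Deploy the fine-tuned model to a real-time endpoint.
predictor = estimator.deploy()
```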

Amazon's Kiro AI agent comes preinstalled in the development environment, but developers can also use Claude Code or other agents. Nine prebuilt "skills" handle the workflow, from checking the dataset to deploying the finished model. The agent supports model families like Llama, Qwen, DeepSeek, and Amazon's own Nova. All generated code is editable and reusable.
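
Amazon hasn't documented what the dataset-checking skill actually validates, but a rough sketch of that kind of check on an instruction-tuning JSONL file might look like the following; the field names are a common convention, not Kiro's actual schema:

```python
import json

REQUIRED_KEYS = {"instruction", "response"}  # assumed fields, not Kiro's documented schema

def check_dataset(path: str) -> list[str]:
    """Return a list of problems found in a JSONL instruction-tuning file."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                problems.append(f"line {i}: not valid JSON")
                continue
            missing = REQUIRED_KEYS - record.keys()
            if missing:
                problems.append(f"line {i}: missing fields {sorted(missing)}")
    return problems
```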

Why This Matters

The unique angle here is that Amazon is abstracting away the fragmentation of fine-tuning APIs across model families. Most cloud providers support fine-tuning, but require developers to switch between vendor-specific SDKs and data formats. SageMaker's agentic approach — using nine prebuilt skills and a natural-language interface — directly competes with offerings from Google Vertex AI and Microsoft Azure AI, which also offer managed fine-tuning but lack a unified agent layer. By integrating support for Claude Code as an alternative agent, Amazon hedges against lock-in while embedding Anthropic's tooling deeper into AWS workflows, a move consistent with its $25 billion Anthropic investment announced in April 2026 [per the knowledge graph].
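
To make the fragmentation concrete: the same training example often has to be reshaped per vendor before any fine-tuning job will accept it. A simplified illustration; both layouts below are widely used conventions rather than any provider's exact schema:

```python
# One training example, reshaped for two common fine-tuning data formats.
example = {"question": "What is LoRA?",
           "answer": "A parameter-efficient fine-tuning method."}

# Alpaca-style instruction record, common in open-model fine-tuning recipes.
alpaca_record = {
    "instruction": example["question"],
    "context": "",
    "response": example["answer"],
}

# Chat-messages record, common in hosted fine-tuning APIs.
chat_record = {
    "messages": [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
}
```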

What's Missing

Amazon did not disclose performance benchmarks, pricing for the agentic fine-tuning feature, or whether the Kiro agent uses a specific underlying model. The company also didn't specify which training methods (e.g., LoRA, full fine-tuning, RLHF) the agent can recommend, leaving developers to discover the supported methods through trial.
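
For readers weighing those options: LoRA freezes the base weights and trains small adapter matrices, which is why it is usually far cheaper than full fine-tuning. A typical configuration with the Hugging Face peft library, shown only to illustrate the method, not Amazon's implementation:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model is illustrative; any supported causal LM works the same way.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,                                 # rank of the adapter matrices
    lora_alpha=32,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```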

What to watch

Watch for Amazon to release performance benchmarks comparing agentic fine-tuning with manual workflows, and whether the Kiro agent expands to support additional model families like Mistral or Cohere. Also track adoption of Claude Code as a SageMaker agent given Amazon's deep Anthropic ties.


Sources cited in this article

The Decoder (the-decoder.com)

AI-assisted reporting. Generated by gentic.news from 1 verified source, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala AYADI.

AI Analysis

Amazon's move is less about fine-tuning innovation and more about reducing friction in the developer workflow. The key insight is that model fine-tuning has become a multi-vendor headache — Llama from Meta, Qwen from Alibaba, DeepSeek from a Chinese lab, Nova from Amazon itself. Each has its own API, data format, and training requirements. SageMaker's agentic layer, with nine prebuilt skills, essentially creates a unified control plane for fine-tuning, which is a structural moat for AWS.

By also supporting Claude Code, Amazon acknowledges that its Kiro agent isn't necessarily best-in-class, but the platform play keeps developers inside SageMaker regardless of which fine-tuning tool they prefer. This contrasts with Google Vertex AI's approach, which tightly integrates with Gemini and Google's own toolchain, and Microsoft Azure AI's focus on OpenAI models. Amazon is betting that model diversity — and the complexity that comes with it — is a feature, not a bug, and that developers will pay for abstraction.

The lack of disclosed performance benchmarks is notable; without them, the feature remains a convenience play rather than a proven quality improvement. The $25 billion Anthropic investment context makes the Claude Code integration a natural hedge: Amazon wants developers using AWS even if they prefer Anthropic's tools over Amazon's own.
