New framework simplifies the complex landscape of agentic AI

A new study proposes a clear framework to help developers and enterprises navigate the growing complexity of agentic AI. By dividing AI adaptation into four strategies—A1, A2, T1, and T2—the framework suggests cost-effective paths that preserve modularity and flexible system design.

Key Takeaways:

  • A new framework divides AI adaptation into agent-based (A1/A2) and tool-based (T1/T2) strategies.
  • Agent retraining can be powerful but is often expensive and risks overfitting.
  • Tool-based methods are typically more efficient and preserve a model’s general knowledge.
  • Different tasks and budgets may call for one or more of these strategies.
  • Enterprises can start with simpler, modular approaches before committing to costly model-level training.

A Growing Landscape for AI

Agentic AI, the technology that lets AI systems autonomously plan, search, and make decisions, presents both promise and complexity for developers. As tools proliferate, choosing an approach can overwhelm even the most knowledgeable teams. A new study offers clarity with a framework that organizes how to adapt and integrate these agents.

Agent vs. Tool Adaptation

Researchers make a crucial distinction between adapting the AI “agent” itself (agent adaptation) and optimizing the ecosystem of external tools (tool adaptation).
• Agent adaptation (A1/A2) involves fine-tuning the core large language model or training it via reinforcement learning. While flexible, these methods are expensive and risk making the model too specialized.
• Tool adaptation (T1/T2) focuses instead on components around a frozen model—like retrievers, memory modules, or specialized sub-agents—to handle tasks efficiently without modifying the core agent.

Diving into the Four Strategies

The framework further divides each approach into two sub-strategies:
• A1 (Tool Execution Signaled): The agent is trained on feedback from verified tool interactions, such as code execution or database queries. This real-world signal sharpens the agent's proficiency on verifiable tasks like coding or SQL.
• A2 (Agent Output Signaled): The agent is optimized based on the correctness of its final answer alone. This encourages better planning and orchestration of multiple tools, though it demands extensive data: Search-R1, for example, needed 170,000 training examples.
• T1 (Agent-Agnostic): Tools like dense retrievers are trained broadly, then plugged into a powerful but untouched model. The main agent can leverage these tools without retraining its own parameters.
• T2 (Agent-Supervised): Tools adapt to the agent’s requirements by learning from the agent’s output. This tactic uses much less data. The s3 system, for instance, trained a search module with just 2,400 examples—about 70 times fewer than comparable agent-focused systems.
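The tool-adaptation side of the taxonomy can be made concrete with a small sketch. The code below is illustrative only, not from the study: a frozen agent delegates retrieval to a swappable tool interface, so moving from T1 (an off-the-shelf, agent-agnostic retriever) to T2 (a retriever tuned on the agent's own behavior) changes nothing in the agent itself. All names (`FrozenAgent`, `off_the_shelf_retriever`, and the toy corpus) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# A retriever is just a function from a query to candidate documents.
Retriever = Callable[[str], List[str]]

def off_the_shelf_retriever(query: str) -> List[str]:
    """T1: a generic, agent-agnostic retriever plugged in as-is."""
    corpus = {
        "sql": ["SELECT basics", "JOIN patterns"],
        "medical": ["drug interactions", "dosage guidelines"],
    }
    return [doc for key, docs in corpus.items()
            if key in query.lower() for doc in docs]

@dataclass
class FrozenAgent:
    """The core model is never retrained; it only consumes tool output."""
    retriever: Retriever

    def answer(self, query: str) -> str:
        context = self.retriever(query)
        # A real agent would condition an LLM call on `context`; here we
        # only show that swapping tools requires no change to the agent.
        return f"answer({query!r}) using {len(context)} retrieved docs"

# T1: plug a generic retriever into the frozen agent.
agent = FrozenAgent(retriever=off_the_shelf_retriever)
print(agent.answer("Which SQL join should I use?"))

# T2: swap in a retriever adapted to the agent's needs -- same agent
# object, different tool behind the same interface.
def agent_supervised_retriever(query: str) -> List[str]:
    # Pretend this retriever learned (from agent feedback) to be selective.
    return off_the_shelf_retriever(query)[:1]

agent.retriever = agent_supervised_retriever
print(agent.answer("Which SQL join should I use?"))
```

The design point is the stable interface: because the agent only depends on the `Retriever` signature, upgrading from T1 to T2 is a tool-side change, which is why the study finds tool adaptation so much cheaper in data and compute.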

Costs and Tradeoffs

Deciding between agent and tool adaptation often boils down to budget, scalability, and system goals. Training an entire agent may yield deep integration but can be prohibitively expensive. There is also the risk of overfitting: Search-R1 excelled on its training tasks but reached only 71.8% accuracy on specialized medical questions. In contrast, the s3 system, which paired a frozen agent with a trained tool, scored 76.6% on similar tasks.

Guidance for Enterprise Teams

For most businesses, experts suggest starting with the simplest approach: T1, using standard off-the-shelf retrievers or connectors. If that proves insufficient, move to T2, training specialized sub-agents to handle enterprise data more efficiently. A1 is best for refining granular, verifiable tasks, such as coding in a proprietary environment, while A2 remains a last resort, used only when broad agent-level learning is essential.
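The escalation order described above can be captured as a simple decision function. This is a sketch of the article's guidance, not an algorithm from the study, and the parameter names are ours:

```python
def choose_strategy(generic_tools_suffice: bool,
                    tool_training_suffices: bool,
                    task_is_verifiable: bool) -> str:
    """Encode the suggested escalation order for enterprise teams.

    Checks cheaper, more modular options first and falls back to
    agent-level training only when nothing simpler fits.
    """
    if generic_tools_suffice:
        return "T1"  # off-the-shelf retrievers/connectors, frozen agent
    if tool_training_suffices:
        return "T2"  # train tools/sub-agents on the agent's own output
    if task_is_verifiable:
        return "A1"  # fine-tune on tool-execution feedback (code, SQL)
    return "A2"      # full agent-output training: the last resort

# Example: generic connectors fail, but a trained search module suffices.
print(choose_strategy(generic_tools_suffice=False,
                      tool_training_suffices=True,
                      task_is_verifiable=False))  # prints "T2"
```

The ordering matters: each branch is strictly cheaper and more modular than the ones below it, mirroring the article's advice to exhaust tool-side options before touching the core model.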

Building a Smarter Ecosystem

Increasingly, AI strategy involves orchestrating multiple specialized tools around a stable, high-capacity model. This modular approach minimizes cost while protecting core capabilities. As the study notes, many enterprises will find it more sensible to upgrade sub-agents and tools rather than continually re-train a vast, monolithic AI system. The result is smarter AI, tailored to real-world needs without ballooning budgets or losing general knowledge.
