New framework simplifies the complex landscape of agentic AI

A new study proposes a clear framework to help developers and enterprises navigate the growing complexity of agentic AI. By dividing AI adaptation into four strategies—A1, A2, T1, and T2—the framework suggests cost-effective paths that preserve modularity and flexible system design.

Key Takeaways:

  • A new framework divides AI adaptation into agent-based (A1/A2) and tool-based (T1/T2) strategies.
  • Agent retraining can be powerful but is often expensive and risks overfitting.
  • Tool-based methods are typically more efficient and preserve a model’s general knowledge.
  • Different tasks and budgets may call for one or more of these strategies.
  • Enterprises can start with simpler, modular approaches before committing to costly model-level training.

A Growing Landscape for AI

Agentic AI—the technology that empowers AI systems to autonomously plan, search, and make decisions—presents both promise and complexity for developers. As tools proliferate, choosing which approach to adopt can overwhelm even the most knowledgeable teams. A new study offers clarity with a framework that organizes how these agents can be adapted and integrated.

Agent vs. Tool Adaptation

Researchers make a crucial distinction between adapting the AI “agent” itself (agent adaptation) and optimizing the ecosystem of external tools (tool adaptation).
• Agent adaptation (A1/A2) involves fine-tuning the core large language model or training it via reinforcement learning. While flexible, these methods are expensive and risk making the model too specialized.
• Tool adaptation (T1/T2) focuses instead on the components around a frozen model—like retrievers, memory modules, or specialized sub-agents—to handle tasks efficiently without modifying the core agent (a minimal sketch of the contrast follows this list).
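
To make the contrast concrete, here is a minimal, hypothetical Python sketch: agent adaptation pushes updates into the core model's parameters, while tool adaptation leaves the model frozen and trains only the components around it. The class and function names are illustrative, not from the study.

    # Toy illustration of agent vs. tool adaptation; all names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class ToyAgent:
        """Stand-in for a large language model with trainable parameters."""
        params: dict = field(default_factory=lambda: {"w": 0.0})

        def answer(self, query: str, context: str = "") -> str:
            return f"answer to {query!r} using {context!r}"

    @dataclass
    class ToyRetriever:
        """Stand-in for an external tool (retriever, memory module, sub-agent)."""
        params: dict = field(default_factory=lambda: {"w": 0.0})

        def retrieve(self, query: str) -> str:
            return f"documents about {query!r}"

    def agent_adaptation_step(agent: ToyAgent, feedback: float) -> None:
        # A1/A2: updates (fine-tuning or RL) flow into the core model itself.
        agent.params["w"] += 0.01 * feedback

    def tool_adaptation_step(tool: ToyRetriever, feedback: float) -> None:
        # T1/T2: the core model stays frozen; only the surrounding tool changes.
        tool.params["w"] += 0.01 * feedback

    if __name__ == "__main__":
        frozen_agent, retriever = ToyAgent(), ToyRetriever()
        context = retriever.retrieve("quarterly revenue")
        print(frozen_agent.answer("summarize quarterly revenue", context))
        tool_adaptation_step(retriever, feedback=1.0)
        assert frozen_agent.params["w"] == 0.0  # core model untouched

The point is structural: on the T1/T2 path, improvements never touch the frozen agent's parameters.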

Diving into the Four Strategies

The framework further divides each approach into two sub-strategies:
• A1 (Tool Execution Signaled): The agent is trained on feedback from verifying its tool interactions, such as whether generated code executes or a database query returns the expected result. This real-world signal sharpens the agent on verifiable tasks like coding or SQL.
• A2 (Agent Output Signaled): The agent is optimized based on the correctness of its final answer alone. It encourages better planning and orchestration of multiple tools, though it demands extensive data—Search-R1 needed 170,000 examples.
• T1 (Agent-Agnostic): Tools like dense retrievers are trained broadly, then plugged into a powerful but untouched model. The main agent can leverage these tools without retraining its own parameters.
• T2 (Agent-Supervised): Tools adapt to the agent’s requirements by learning from the agent’s output, a tactic that needs far less data. The s3 system, for instance, trained a search module with just 2,400 examples—about 70 times fewer than comparable agent-focused systems (see the sketch after this list).
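
The agent-supervised (T2) idea can be pictured as a simple training loop in which a reward computed from the frozen agent's final answer drives updates to the search tool. The following is a toy sketch under assumed names and a made-up scoring rule, not the s3 system's actual implementation.

    # Toy sketch of an agent-supervised (T2) loop: the retriever is updated
    # from a reward on the frozen agent's answer. Names and the update rule
    # are illustrative, not taken from the study.
    import random

    def frozen_agent_answer(question: str, docs: list) -> str:
        # Placeholder for a call to a frozen LLM; it is never updated below.
        return docs[0] if docs else "no answer"

    def retrieve(question: str, tool_weight: float, corpus: list) -> list:
        # Toy retriever: a higher weight makes relevant documents more likely.
        scored = sorted(corpus,
                        key=lambda d: random.random() + tool_weight * (question in d))
        return scored[-1:]

    def reward(answer: str, gold: str) -> float:
        # T2 signal: score the agent's output, not the retrieval step itself.
        return 1.0 if gold in answer else 0.0

    def train_tool(dataset: list, corpus: list, steps: int = 200) -> float:
        tool_weight = 0.0
        for _ in range(steps):
            question, gold = random.choice(dataset)
            docs = retrieve(question, tool_weight, corpus)
            r = reward(frozen_agent_answer(question, docs), gold)
            tool_weight += 0.1 * (r - 0.5)  # crude stand-in for a real RL update
        return tool_weight

    if __name__ == "__main__":
        corpus = ["revenue grew 10% last quarter", "the office cafeteria menu"]
        dataset = [("revenue", "10%")]
        print(train_tool(dataset, corpus))

Because the only trainable parameters sit in the tool, the signal from the agent's answers can be exploited with a far smaller dataset than full agent training requires.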

Costs and Tradeoffs

Deciding between agent and tool adaptation often comes down to budget, scalability, and system goals. Training an entire agent may yield deep integration but can be prohibitively expensive. There is also the risk of overfitting: Search-R1 excelled on its training tasks but reached only 71.8% accuracy on specialized medical questions, while the s3 system, which paired a frozen agent with a trained tool, scored 76.6% on similar tasks.

Guidance for Enterprise Teams

For most businesses, experts suggest starting with the simplest approach: T1, using standard off-the-shelf retrievers or connectors. If necessary, move to T2, training specialized sub-agents to handle enterprise data more efficiently. A1 is best for refining granular tasks—like coding in a proprietary environment—while A2 remains a last resort, used only when broad agent-level learning is essential.
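
As a rough illustration of that progression, the hypothetical helper below encodes the suggested order of escalation; the input flags and their wording are assumptions made for the sketch, not criteria defined by the study.

    # Hypothetical decision helper for the T1 -> T2 -> A1 -> A2 progression.
    def choose_strategy(off_the_shelf_tools_suffice: bool,
                        needs_domain_tuned_tools: bool,
                        output_is_verifiable: bool) -> str:
        if off_the_shelf_tools_suffice:
            return "T1: plug standard retrievers/connectors into a frozen model"
        if needs_domain_tuned_tools:
            return "T2: train sub-agents or tools on signals from the frozen agent"
        if output_is_verifiable:
            return "A1: fine-tune the agent on tool-execution feedback (code, SQL)"
        return "A2: agent-level training on final answers (last resort)"

    print(choose_strategy(False, True, False))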

Building a Smarter Ecosystem

Increasingly, AI strategy involves orchestrating multiple specialized tools around a stable, high-capacity model. This modular approach minimizes cost while protecting core capabilities. As the study notes, many enterprises will find it more sensible to upgrade sub-agents and tools rather than continually re-train a vast, monolithic AI system. The result is smarter AI, tailored to real-world needs without ballooning budgets or losing general knowledge.
