
The Rise of AI Agents

  • Writer: Staff Desk
  • 3 days ago
  • 6 min read


Artificial Intelligence continues to evolve rapidly, but 2026 will mark a major shift in how AI systems are built, used, and integrated into real-world workflows.


1. The Shift from Monolithic Models to Compound AI Systems


1.1 What Are Monolithic Models?

A monolithic AI model is a standalone large language model (LLM) trained on a fixed dataset. Its capabilities are constrained by several factors:

  • Knowledge cutoff - The model only knows what existed in its training data.

  • No access to real-time information - It cannot check databases, personal files, or external systems unless specifically connected to them.

  • Limited adaptability - Improving performance requires:

    • Collecting new data

    • Annotating or cleaning it

    • Retraining or fine-tuning the model

    • Allocating significant compute resources


Because of these constraints, monolithic models struggle with tasks requiring:

  • Personal data

  • Up-to-date information

  • External system interaction

  • Real-world operations

This leads to the next major shift.


2. Understanding Compound AI Systems

The transcript emphasizes that the next phase in AI is not a single powerful model, but systems. A compound AI system combines:

  • A core LLM

  • Tools and external programs

  • Data sources and databases

  • Reasoning or planning mechanisms

  • Control logic (i.e., how queries move through the system)


2.1 Why Compound Systems Are More Powerful

Standalone models are limited because they cannot access external sources. By contrast, compound systems allow:

  • Real-time data retrieval

  • Workflow automation

  • Task decomposition

  • Verification and correction

  • Integration with existing business processes

2.2 Example Given: Vacation Days Query

The transcript uses a simple scenario to illustrate limitations of standalone LLMs.

User’s question: “How many vacation days do I have left?”


Monolithic Model Response

A monolithic LLM:

  • Does not know who the user is

  • Cannot access HR databases

  • Cannot retrieve personal policy data

  • Will produce an incorrect or hallucinated answer


Compound System Response

A well-designed compound system:

  1. Receives the user’s question

  2. Uses the LLM to convert the question into a structured search query

  3. Sends that query to the relevant HR database

  4. Retrieves accurate vacation-day information

  5. Feeds the result back into the LLM

  6. The LLM generates a final answer

This illustrates how combining a model with tools and data solves the accuracy and grounding problems that a standalone model cannot.
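The six-step flow above can be sketched in a few lines of Python. The mock HR database and the hard-coded query-structuring function are illustrative assumptions standing in for a real LLM call and a real HR system:

```python
# Minimal sketch of the vacation-days flow. HR_DATABASE and
# llm_to_structured_query are stand-ins, not a real HR API or LLM.

HR_DATABASE = {"alice": {"vacation_days_remaining": 7}}

def llm_to_structured_query(question: str, user_id: str) -> dict:
    # Step 2: in a real system the LLM would emit this structure;
    # here it is hard-coded for the single example question.
    return {"user_id": user_id, "field": "vacation_days_remaining"}

def run_compound_query(question: str, user_id: str) -> str:
    query = llm_to_structured_query(question, user_id)  # step 2
    record = HR_DATABASE[query["user_id"]]              # steps 3-4
    days = record[query["field"]]
    # steps 5-6: the retrieved value is fed back for answer generation
    return f"You have {days} vacation days left."

print(run_compound_query("How many vacation days do I have left?", "alice"))
```

The key point the sketch makes concrete: the model never needs to "know" the answer; it only needs to translate the question into a lookup the system can execute.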


3. Why System Design Matters in Modern AI


Compound AI systems require modularity. Each component handles a specific part of the task. Examples include:


3.1 Modular Components in Compound Systems

  • LLMs (general reasoning, writing, summarization)

  • Tuned or specialized models (translation models, image models)

  • Search engines (retrieve documents, data, real-time info)

  • Database connectors (SQL, APIs, document stores)

  • Output verifiers (check correctness or formatting)

  • Task decomposers (break complex queries into steps)

  • Tools/APIs (calculators, external services)

  • Memory systems (conversation logs and internal reasoning traces)


3.2 Programmatic Control Logic

Control logic defines how a compound system handles an input from start to finish.

  • Determines which components get used

  • Controls when the system searches a database

  • Dictates when tools are called

  • Ensures consistent responses

For narrow tasks, strict control logic ensures accuracy and efficiency.
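Programmatic control logic often amounts to a hand-written router. The keywords and handlers below are assumptions for the sketch, but they show both the strength (predictable routing) and the weakness (no path for anything else) of fixed logic:

```python
# Illustrative fixed control logic: a hand-written keyword router.
# Routes and handlers are assumptions for this sketch.

def handle_vacation(question: str) -> str:
    return "query HR database"

def handle_payroll(question: str) -> str:
    return "query payroll system"

ROUTES = {"vacation": handle_vacation, "payroll": handle_payroll}

def route(question: str) -> str:
    for keyword, handler in ROUTES.items():
        if keyword in question.lower():
            return handler(question)
    # A fixed path has no answer for out-of-scope topics
    return "unsupported question"

print(route("How many vacation days do I have left?"))  # query HR database
print(route("What's the weather tomorrow?"))            # unsupported question
```

Note how the weather question falls straight through to "unsupported question", which is exactly the failure mode the next section describes.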


4. The Limitation of Fixed Control Logic


In traditional compound AI systems, humans define the logic path:

  • “When this type of question arrives, search this database.”

  • “When retrieving information, use Tool A before Tool B.”

  • “Never deviate from the predefined sequence.”


This rigid approach works only when:

  • Tasks are simple

  • Inputs follow predictable patterns

  • The domain is tightly scoped


However, if the user changes topics—for example, suddenly asks about weather instead of vacation policy—the system fails because the logic path is fixed.

This limitation sets the stage for AI agents.


5. Introducing AI Agents

AI agents represent an evolving approach where the LLM is placed in charge of the system logic itself, not just the output text. Rather than following static human-programmed rules, the agent reasons about:

  • What the user wants

  • What steps are required

  • Which tools to call

  • When to revise its plan


5.1 Why Agents Have Become Possible Now

The transcript notes that LLMs have recently achieved:

  • Improved reasoning capabilities

  • Ability to break down complex problems

  • Capability to plan sequentially

  • Better tool-use decisions

  • More reliable iteration and self-correction

These advancements allow LLMs to function as autonomous decision makers within larger systems.


6. The Sliding Scale: From Fast Thinking to Slow Thinking


The transcript introduces a conceptual spectrum:


Fast Thinking (Programmed Behavior)

  • Follow predefined rules

  • No deviation

  • High efficiency

  • Useful for narrow use cases


Slow Thinking (Agentic Behavior)

  • Analyze the problem deeply

  • Create a plan

  • Execute the plan step-by-step

  • Reassess problems

  • Use tools when needed

  • Iterate and adjust

AI agents operate on the “slow thinking” side, enabling complex problem solving.


7. Core Capabilities of AI Agents

The transcript identifies three primary components of AI agents:


7.1 Reasoning

Reasoning allows the agent to:

  • Break down tasks

  • Plan a multi-step workflow

  • Understand dependencies

  • Prioritize steps

  • Decide which tools are required

  • Evaluate mistakes

  • Revise the plan

Reasoning is the foundational capability enabling autonomy.


7.2 Action (Tools)

Agents can call tools. Tools can be:

  • Search engines

  • Database query functions

  • Math calculators

  • External APIs

  • File generators

  • Translators

  • Data-manipulation scripts

  • Even other LLMs

Tools extend the agent beyond its fixed training data.
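A common way to expose tools to an agent is a simple registry of named functions the model can request by name. The tool names and behaviors here are illustrative assumptions, not any particular framework's API:

```python
# Sketch of a tool registry: each tool is a named function the agent
# can invoke. The two tools below are toy stand-ins.

TOOLS = {
    # Restricted eval as a toy calculator (no builtins available)
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}),
    "search": lambda q: f"(top result for '{q}')",
}

def call_tool(name: str, argument: str):
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](argument)

print(call_tool("calculator", "3 * 4"))  # 12
```

In a real agent, the LLM would emit the tool name and argument as structured output, and the surrounding system would dispatch through something like `call_tool`.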


7.3 Memory

Memory in agentic systems includes:

Internal memory

  • Reasoning traces

  • Step-by-step logs

  • Plans

  • Intermediate decisions

User-interaction memory

  • Past conversations

  • Preferences

  • Stored data from earlier queries

Memory creates personalization and continuity.
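The two memory types above can be modeled as a small class with separate stores; the structure is an assumption for illustration:

```python
# Minimal sketch of internal memory (reasoning traces, plans) and
# user-interaction memory (preferences, stored facts).

class AgentMemory:
    def __init__(self):
        self.internal = []   # reasoning traces, step logs, plans
        self.user = {}       # preferences and facts from past conversations

    def log_step(self, step: str) -> None:
        self.internal.append(step)

    def remember(self, key: str, value) -> None:
        self.user[key] = value

memory = AgentMemory()
memory.log_step("planned: look up vacation days")
memory.remember("destination", "Florida")
print(memory.user["destination"])  # Florida
```

Internal memory is typically discarded or summarized after a task, while user-interaction memory persists across sessions to provide the continuity described above.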


8. The ReAct Framework (Reason + Act)

ReAct is one of the most popular agent frameworks. It integrates:

  • Reasoning steps

  • Action/tool use

  • Observation

  • Iteration

  • Final answer generation


8.1 How ReAct Works Step-by-Step

  1. User input arrives

  2. LLM analyzes the query

  3. LLM produces a plan

  4. LLM decides which tools to use

  5. The system executes the tool

  6. LLM observes the tool results

  7. LLM evaluates correctness

  8. LLM revises the plan (if needed)

  9. Repeats until a final answer is formed

This loop continues until the agent reaches a satisfactory result.
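The nine steps above form a loop that can be sketched as follows. The "LLM" here is a scripted stand-in that returns a fixed thought/action sequence; in practice a real model produces each step, and all names are illustrative assumptions:

```python
# Skeletal ReAct loop: reason -> act -> observe -> iterate.

def scripted_llm(history):
    # Stand-in for the model: decide the next step from the history.
    if not any(kind == "observation" for kind, _ in history):
        return ("action", ("calculator", "6 * 7"))  # decide to use a tool
    return ("final", f"The answer is {history[-1][1]}.")

TOOLS = {"calculator": lambda expr: eval(expr, {"__builtins__": {}})}

def react_loop(question: str, max_steps: int = 5) -> str:
    history = [("question", question)]
    for _ in range(max_steps):
        kind, payload = scripted_llm(history)         # reason
        if kind == "final":
            return payload
        tool_name, arg = payload
        observation = TOOLS[tool_name](arg)           # act
        history.append(("observation", observation))  # observe, then iterate
    return "no answer within step budget"

print(react_loop("What is 6 * 7?"))  # The answer is 42.
```

The `max_steps` budget is a practical safeguard: real agent frameworks cap iterations so a confused model cannot loop forever.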


9. Example: Calculating Sunscreen Bottles


Query:

“How many 2-ounce sunscreen bottles should I bring for a Florida trip?”


An agent would need to:

  1. Retrieve vacation day count (from memory or database)

  2. Estimate sun exposure time

    • Pull weather forecast for next month

    • Identify average sunshine hours

  3. Retrieve sunscreen dosage guidelines

    • Query public health websites

  4. Perform math:

    • Calculate total sunscreen needed

    • Convert to number of 2-oz bottles
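The final math step can be made concrete with a worked calculation. Every input number below is an assumption for illustration (the days, sun hours, dosage rate, and bottle size would come from the lookups described above):

```python
# Worked version of the sunscreen math step. All inputs are assumed
# values standing in for the agent's earlier retrievals.
import math

vacation_days = 5        # from memory/HR lookup (assumed)
sun_hours_per_day = 4    # from weather forecast (assumed)
ounces_per_hour = 0.5    # dosage guideline (assumed: ~1 oz every 2 hours)
bottle_size_oz = 2.0

total_oz = vacation_days * sun_hours_per_day * ounces_per_hour  # 10.0 oz
bottles = math.ceil(total_oz / bottle_size_oz)                  # 5 bottles
print(bottles)
```

Rounding up with `math.ceil` matters here: 10.5 oz of need would require 6 bottles, not 5.25.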


This example demonstrates the agent’s ability to:

  • Combine tools

  • Handle multi-step plans

  • Pull from memory

  • Use reasoning

  • Perform calculations

  • Generate a final actionable output


A scripted system cannot handle this level of complexity or flexibility.


10. Why AI Agents Are Critical for the Future


Several major themes emerge:


10.1 Increased Autonomy

Agents reduce human micromanagement by making decisions about:

  • How to solve problems

  • What steps to take

  • What data to retrieve

  • Which tools to use


10.2 Scalability

For systems with:

  • Many tasks

  • Unpredictable requests

  • Broad domains

Manually programming logic for every path becomes unrealistic. Agents scale decision-making automatically.


10.3 Flexibility

Agents adapt to new workflows faster than traditional systems.


10.4 Easier Development

Instead of engineering complex control logic, developers can rely on LLM reasoning.


10.5 Suitable for Wide Problem Spaces

Examples include:

  • Code debugging

  • GitHub issue solving

  • Research assistance

  • Customer support

  • Personal task automation

  • Planning and scheduling

  • Data coordination tasks


11. When Programmatic Systems Still Make Sense


The transcript notes that not all systems need agentic logic. Programmatic approaches are ideal when:

  • Queries are predictable

  • Tasks are narrow

  • Efficiency is critical

  • There is no need for iterative reasoning


Examples:

  • Checking remaining vacation days

  • Reading a specific database field

  • Inventory lookups

  • Simple CRUD operations

  • Repetitive automated workflows

For these, agents may introduce unnecessary overhead.


12. The Future of Agentic Systems

The transcript suggests that:

  • Compound systems are here to stay

  • Agentic capabilities will be layered on top

  • LLM autonomy will increase gradually

  • Human-in-the-loop verification will remain important


Key predictions include:

  • More reliable reasoning

  • More sophisticated planning abilities

  • Better tool orchestration

  • Richer memory integration

  • Wider adoption across industries


13. Summary of Key Concepts

Below is a consolidated view of all core ideas:


Monolithic Models

  • Limited by data

  • Lack adaptability

  • Cannot access external systems


Compound Systems

  • Combine models, tools, databases

  • Use programmatic control logic

  • Provide accuracy and flexibility


Agents

  • LLM directs the logic

  • Performs reasoning

  • Uses tools to act

  • Accesses memory

  • Handles complex tasks


ReAct Framework

  • Think

  • Act via tool

  • Observe

  • Iterate

  • Final answer


When to Use Programmatic

  • Narrow tasks

  • Highly structured inputs

  • Performance-critical workflows


When to Use Agents

  • Complex problem-solving

  • Broad input variety

  • Need for adaptive planning


Conclusion


AI agents represent a fundamental transformation in modern artificial intelligence. Instead of relying solely on a static model that only generates text, agents combine reasoning, tools, memory, and modular components into a fully functional system capable of dynamic problem solving.


