
The Key Components That Create Intelligence and Enable Good Decisions

Think of an AI agent as a person. The base knowledge (the LLM) is their education and raw intelligence. But to be effective in the real world, they need more than raw knowledge: agency, tools, memory, planning, and self-reflection.
Sep 1st 2025
1. Agency and Goal-Oriented Behavior (The "Why")
This is the fundamental shift from reactive to proactive.
  • What it is: Instead of just answering a question, an agent is given a goal or a high-level task (e.g., "Organize my files for tax season," "Research the best new laptops under $1000").
  • How it enables intelligence: The agent must now reason about how to achieve that goal. It breaks the large, ambiguous goal into smaller, manageable sub-tasks. This hierarchical planning is a hallmark of intelligent behavior.
  • Resource Efficiency: By breaking down the task, the agent can focus its computational power (i.e., expensive LLM calls) on the most relevant sub-problems instead of trying to solve everything in one massive, inefficient step.
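Goal decomposition can be sketched in a few lines. This is a minimal example, not a production planner: `call_llm` is a hypothetical stand-in for any LLM API, stubbed here with a fixed response so the example runs offline.

```python
# A minimal sketch of goal decomposition. `call_llm` is a stand-in for a
# real LLM API; it is stubbed so the example runs without a model.
def call_llm(prompt: str) -> str:
    # Stubbed response; a real agent would call a model here.
    return "1. Gather receipts\n2. Sort by category\n3. Total each category"

def decompose(goal: str) -> list[str]:
    """Turn a high-level goal into an ordered list of sub-tasks."""
    plan = call_llm(f"Break this goal into numbered steps: {goal}")
    # Strip the leading "N. " from each line to get plain sub-tasks.
    return [line.split(". ", 1)[1] for line in plan.splitlines()]

subtasks = decompose("Organize my files for tax season")
print(subtasks)  # each sub-task can now be solved (and billed) separately
```

Each sub-task can then be handled with its own, smaller LLM call, which is where the efficiency gain comes from.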
2. Tools and Function Calling (The "How")
A knowledgeable brain is useless without ways to interact with the world.
  • What it is: The agent is equipped with tools (APIs, functions, code execution environments). These can be a calculator, a web search API, a database query function, a file editor, etc.
  • How it enables intelligence: The base model knows about math, but it is unreliable at arithmetic. It knows about current events, but its knowledge is frozen in time. With tools, the agent can decide when to use a calculator for precise math, when to perform a web search for the latest news, or when to query a database for specific facts. This decision-making (which tool to use, and when) is a critical form of intelligence that vastly extends the agent's capabilities beyond its static knowledge.
  • Resource Efficiency: Using a simple, cheap calculator tool is far more efficient, and far more accurate, than trying to make the LLM reason through a complex math problem step-by-step.
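The pattern above, a tool registry plus a dispatcher, can be sketched as follows. The tool names and the `dispatch` helper are illustrative choices, and in a real agent the model itself emits the tool name and argument as structured output; here `web_search` is a stub.

```python
import ast
import operator

def calculator(expression: str) -> str:
    """Safely evaluate basic arithmetic, no LLM guesswork involved."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        return ops[type(node.op)](ev(node.left), ev(node.right))

    return str(ev(ast.parse(expression, mode="eval").body))

def web_search(query: str) -> str:
    return f"(stubbed) top results for: {query}"  # placeholder for a real API

# Tool registry: the agent picks a tool by name instead of reasoning in-model.
TOOLS = {"calculator": calculator, "web_search": web_search}

def dispatch(tool_name: str, argument: str) -> str:
    return TOOLS[tool_name](argument)

print(dispatch("calculator", "1234 * 5678"))  # exact, cheap arithmetic
```

The calculator uses Python's `ast` module rather than `eval` so that only arithmetic expressions are accepted, a common safety choice for code-execution tools.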
3. Memory (The "Experience")
Intelligence requires learning from the past, both short-term and long-term.
  • Short-Term Memory (Context Window): This is the conversation history. The agent remembers what it has already done and said in the current session, allowing it to maintain coherence.
  • Long-Term Memory (Vector Databases/External Storage): This is a game-changer. The agent can save the results of its work, lessons learned, and user preferences to an external database. The next time it works on a similar task, it can recall this information.
  • How it enables intelligence: Memory allows for learning and adaptation. An agent that remembers you prefer concise summaries can provide them in the future. An agent that solved a complex bug can recall the solution when it sees a similar problem. This mimics human learning.
  • Resource Efficiency: Memory prevents wasteful repetition. Instead of re-researching a topic every time, the agent can quickly retrieve its previous findings, saving a huge number of API calls and computation.
4. Planning and Reasoning Loops (The "Thought Process")
This is the "secret sauce" that makes agents seem truly intelligent. The agent doesn't just act; it thinks before it acts.
  • What it is: The agent engages in an internal dialogue or uses structured frameworks to reason. The most famous examples are:
    • Chain of Thought (CoT): Encouraging the LLM to think step-by-step.
    • ReAct (Reason + Act): A loop where the agent Reasons about the current situation, then Acts by using a tool, and then observes the result before repeating.
    • Tree of Thoughts (ToT): The agent explores multiple reasoning paths simultaneously, like branching possibilities, and then commits to the most promising one.
  • How it enables intelligence: This forces the agent to simulate outcomes, correct its own mistakes, and choose optimal paths. It might realize a web search is needed before it can write a summary, or that step 3 must come before step 2. This meta-cognition is a high-level intellectual function.
  • Resource Efficiency: While thinking steps use tokens, they prevent far more wasteful action steps. A little time spent planning can avoid a dead end that would have wasted multiple API calls and computation.
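The ReAct loop is easiest to see as control flow. In this sketch the "model" is a scripted stub (`scripted_model`, a hypothetical helper) so the Reason → Act → Observe cycle is visible; a real agent would call an LLM at each Reason step.

```python
# A minimal ReAct-style loop with a scripted model so it runs offline.
def scripted_model(history: list[str]) -> tuple[str, str]:
    """Return (thought, action). Actions: 'search:<query>' or 'finish:<answer>'."""
    if not any("Observation" in h for h in history):
        return "I need current data first.", "search:latest laptop reviews"
    return "I have enough to answer.", "finish:Model X is the best pick."

def react(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        thought, action = scripted_model(history)   # Reason
        history.append(f"Thought: {thought}")
        if action.startswith("finish:"):
            return action.split(":", 1)[1]
        query = action.split(":", 1)[1]             # Act
        history.append(f"Observation: (stubbed) results for {query}")  # Observe
    return "Gave up after max_steps."

print(react("Research the best new laptops under $1000"))
```

The `max_steps` cap is a standard guard: it bounds the token spend even when the reasoning loop fails to converge.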
5. Self-Reflection and Critique (The "Quality Control")
An intelligent agent can evaluate its own work.
  • What it is: After generating an output or completing a task, the agent is prompted to critique its own work. ("Check your answer for errors." "Is this summary comprehensive and accurate?")
  • How it enables intelligence: This leads to iterative improvement. The agent might spot a logical flaw, realize it missed a key point, or find a calculation error. It can then loop back and correct itself, leading to a higher-quality final output.
  • Resource Efficiency: This is a trade-off but a wise one. Spending a few extra tokens on self-critique can mean the difference between a correct answer and a wildly wrong one, preventing the entire task from being useless and needing to be completely redone.
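The critique-and-revise pass can be sketched as three stubbed model calls. `draft`, `critique`, and `revise` are hypothetical placeholders; in a real agent each would be an LLM call with the quoted self-critique prompts.

```python
# A sketch of a critique-and-revise loop. All three "model calls" are
# stubbed; swap in real LLM calls to draft, critique, and revise.
def draft(task: str) -> str:
    return "Summary: laptops exist."  # deliberately weak first draft

def critique(task: str, answer: str) -> str:
    if len(answer.split()) < 10:
        return "Too short: add a price range and a concrete recommendation."
    return "OK"

def revise(task: str, answer: str, feedback: str) -> str:
    return answer + " Under $1000, model X offers the best value overall."

def answer_with_reflection(task: str, max_rounds: int = 2) -> str:
    result = draft(task)
    for _ in range(max_rounds):
        feedback = critique(task, result)
        if feedback == "OK":
            break
        result = revise(task, result, feedback)  # a few extra tokens, better output
    return result

print(answer_with_reflection("Summarize the best laptops under $1000"))
```

Bounding the number of rounds keeps the trade-off favorable: the critique loop improves quality without being allowed to spend tokens indefinitely.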

Summary: How It All Fits Together for Efficient Intelligence
You don't just feed a model more knowledge and compute. You architect a system where the model acts as a central reasoning engine that orchestrates a set of tools, guided by memory and a planning loop.
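That architecture fits in one skeleton: a reasoning engine at the center, orchestrating tools, consulting memory, and looping until the plan is done. Everything here is a stub (the engine's decision rule, the memory contents, the tool result) chosen only to show the shape of the system.

```python
# The whole architecture in one skeleton. The reasoning engine is stubbed:
# it requests one tool call, then declares the goal done.
def reasoning_engine(goal, memory, observations):
    return ("use_tool", "search") if not observations else ("done", None)

def agent(goal: str) -> list[str]:
    memory = {"preferences": "concise summaries"}   # long-term memory (stubbed)
    observations: list[str] = []                    # short-term working context
    log = [f"goal: {goal}", f"recalled: {memory['preferences']}"]
    while True:                                     # the planning loop
        decision, arg = reasoning_engine(goal, memory, observations)
        if decision == "done":
            log.append("finished")
            return log
        observations.append(f"(stubbed) {arg} result")  # tool orchestration
        log.append(f"tool: {arg}")

print(agent("Research laptops"))
```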