- Agents and Tasks
- Communication Patterns
- Human in the Loop
- Environment Integration
- Deploying in production environments: state management, tracing, monitoring
The problem with autonomous agentic systems
Autonomous agentic frameworks provide a seductive abstraction for building LLM-based agentic software. At their core, they automate a lot of the complexity around implementing agents and the way they interact. They can be thought of as wrappers around event-based stateful graph orchestrators (what I tend to call “macro-orchestration“) to handle the complexity of the application flow, and around micro-orchestrators to handle the LLM-specific flow of data and the API connections.
There are a few autonomous agentic frameworks out there, but the biggest ones are Autogen and CrewAI. Langroid and GenWorlds are simpler options that are starting to gain some traction as well. We can also note the newer OpenAI Swarm package, which focuses on implementing the handoff and routines patterns.
Agents and Tasks
LLM-based agents are simple! They can be described as a combination of three components (sketched below):
- An LLM: the decision engine that takes a state and generates an action.
- Instructions: the instructions provided as the system prompt define the goal of the agent. They play a role similar to the reward function in reinforcement learning-based agents.
- A set of actions: actions in the context of LLM-based agents are usually called “tools“ or “functions“. These are the available actions the LLM can choose from, such as search engines, data store retrievals, or simple Python functions.
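To make this concrete, here is a minimal, framework-agnostic sketch of those three components bundled together. All names here (`MinimalAgent`, `act`) are illustrative and do not correspond to any particular framework's API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MinimalAgent:
    llm: Callable[[list[dict]], str]        # decision engine: messages in, text out
    instructions: str                       # system prompt defining the goal
    tools: dict[str, Callable] = field(default_factory=dict)  # available actions

    def act(self, state: str) -> str:
        # The LLM sees its instructions, the current state, and the available
        # tool names, and decides what to do next.
        messages = [
            {"role": "system", "content": self.instructions},
            {"role": "user", "content": f"State: {state}\nTools: {list(self.tools)}"},
        ]
        return self.llm(messages)
```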
The core idea behind those agentic frameworks is to restructure the pipeline orchestration by abstracting the application flow as multiple agents collaborating through message exchange.
To scale to more complex applications, we introduce more agents exchanging messages in a structured manner. A hierarchical structure is often used and tends to resemble a corporate organization chart.
For example, in CrewAI, we create an agent as follows:
```python
from crewai import Agent
from crewai_tools import SerperDevTool

role = '{topic} Senior Data Researcher'
goal = 'Uncover cutting-edge developments in {topic}'
backstory = (
    "You're a seasoned researcher with a knack for uncovering the latest "
    "developments in {topic}. Known for your ability to find the most relevant "
    "information and present it in a clear and concise manner."
)

agent = Agent(
    llm='gpt-4o-mini',
    tools=[SerperDevTool()],
    role=role,
    goal=goal,
    backstory=backstory
)
```
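Note that `{topic}` is left as a plain format placeholder at agent-creation time. In CrewAI it is typically interpolated from the `inputs` dictionary passed when a Crew is kicked off; the snippet below is a hedged illustration of what the resolved role would look like for one run:

```python
# Hedged illustration: {topic} is a plain format placeholder that CrewAI fills
# in at runtime, e.g. from crew.kickoff(inputs={"topic": "AI Agents"}).
print(role.format(topic="AI Agents"))
# -> AI Agents Senior Data Researcher
```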
The role, goal, and backstory are just a way to deconstruct the system prompt in a user-friendly manner:
agent.agent_executor.prompt['system']
> You are {topic} Senior Data Researcher. You're a seasoned researcher with a knack for uncovering the latest developments in {topic}. Known for your ability to find the most relevant information and present it in a clear and concise manner.
Your personal goal is: Uncover cutting-edge developments in {topic}
You ONLY have access to the following tools, and should NEVER make up tools that are not listed here:
{tools}
Use the following format:
Thought: you should always think about what to do
Action: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.
Action Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using " to wrap keys and values.
Observation: the result of the action
Once all necessary information is gathered:
Thought: I now know the final answer
Final Answer: the final answer to the original input question
This is a typical ReAct prompt: we present the available tools to the LLM, and we push it to iterate through Thought, Action, and Observation. The specific task is passed as a user message:
agent.agent_executor.prompt['user']
> Current Task: {input}
Begin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!
Thought:
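To make the format concrete, a single iteration produced by the LLM under this prompt could look like the following. This is a hedged illustration, not an excerpt from an actual run, and the tool name shown is only an example:

```
Thought: I should look for recent news and reports about AI agents.
Action: Search the internet
Action Input: {"search_query": "AI agents latest developments 2024"}
Observation: <search results returned by the tool>
```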
And we can pass a task to the agent:
```python
from crewai import Task

research_task = Task(
    description="""
    Conduct a thorough research about AI Agents.
    Make sure you find any interesting and relevant information given
    the current year is 2024.
    """,
    expected_output="""
    A list with 10 bullet points of the most relevant information about AI Agents
    """
)

agent.execute_task(research_task)
```
> 1. **Market Growth**: The AI agents market is projected to grow from USD 5.1 billion in 2024 to USD 47.1 billion by 2030, with a compound annual growth rate (CAGR) of 44.8% during this period. (Source: GlobeNewswire)
2. **Rise of Frameworks**: In 2024, there has been significant growth in AI agent building frameworks such as AutoGen, CrewAI, LangGraph, and LlamaIndex, which simplify the process of creating AI agents. (Source: Analytics Vidhya)
3. **Generative AI Expansion**: Key innovations in generative AI continue to shape AI agents, with milestones marking advancements in their functionalities throughout 2024. (Source: Channel Insider)
4. **Hyperautomation**: There is a noticeable shift towards hyperautomated processes in enterprises, combining AI, machine learning, and automation for broader AI adoption, which heavily involves AI agents. (Source: Activepieces)
5. **Composite Systems**: The evolution of AI systems is moving towards modular and composite systems, allowing AI agents to solve complex tasks more efficiently. (Source: PCG)
6. **Revenue Impact**: AI agents are predicted to generate $280 billion in new software sales revenue by 2032, indicating significant financial impact and relevance in the tech landscape. (Source: GetOdin)
7. **Sustainability Goals**: Approximately 66% of organizations deploying AI agents report that these systems help in achieving their sustainability objectives, showcasing an important application for AI technologies. (Source: GetOdin)
8. **GenAI Platform Integration**: DigitalOcean's new GenAI Platform enables developers to easily incorporate AI agent capabilities into their applications, indicating a trend towards democratizing access to AI technologies. (Source: DigitalOcean)
9. **Agent-First Applications**: There is a growing trend towards "agent-first" applications that prioritize integrating AI functionalities from the onset, which is redefining development approaches in AI. (Source: Medium)
10. **Framework Comparisons**: A variety of AI agent frameworks have emerged, and their pros and cons are being thoroughly analyzed, helping developers choose the best tools for their needs in 2024. (Source: Chatbase)
Note that the agent follows a typical ReAct loop:
- it takes the task,
- establishes which tool to call,
- executes the tool call,
- passes the tool call result back to the LLM,
- and iterates this process until it reaches a terminal state.
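Schematically, that loop can be sketched as follows. This is a simplified, framework-agnostic illustration (not CrewAI's actual executor), assuming an `llm` callable that replies in the Thought/Action/Action Input format and a `tools` dictionary mapping tool names to Python functions:

```python
import json
import re

def react_loop(llm, tools, task, max_iterations=10):
    """Simplified ReAct loop: think, act, observe, repeat until a final answer."""
    messages = [{"role": "user", "content": f"Current Task: {task}"}]
    for _ in range(max_iterations):
        reply = llm(messages)  # expected to contain Thought / Action / Action Input
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[-1].strip()
        # Extract the chosen tool and its arguments from the model's reply
        action = re.search(r"Action:\s*(.+)", reply).group(1).strip()
        action_input = json.loads(re.search(r"Action Input:\s*(\{.*\})", reply, re.S).group(1))
        observation = tools[action](**action_input)
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Stopped: iteration limit reached"
```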
In Autogen, the way we define “agents“ and how they interact is a bit different. The decision engine and the tool executor are separated into different agents, and instead of iterating within a single agent, we have two agents having a “conversation“.
Because agentic conversations arise by binding agents together in Autogen, there is no explicit concept of “tasks“. By default, Autogen agents can execute code, and no tools are available other than the custom user-defined ones.
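Custom tools are attached by registering a Python function with both the agent that proposes the call and the agent that executes it. Here is a hedged sketch, assuming the `assistant` and `user_proxy` agents defined in the snippets below; the `web_search` function itself is purely illustrative:

```python
from typing import Annotated

# Register an illustrative tool: the assistant may propose calling it,
# and the user proxy executes it (assumes both agents are already defined).
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Search the web for a query.")
def web_search(query: Annotated[str, "The search query"]) -> str:
    # Placeholder body: plug a real search API in here.
    return f"Results for: {query}"
```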
Autogen tends to put a stronger emphasis on the human-in-the-loop concept by designing agents that are meant to interact with humans:
```python
import os
import autogen

config_list = [{
    "model": "gpt-4o-mini", "api_key": os.environ['OPENAI_API_KEY']
}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

# The user proxy acts on behalf of the human: it relays messages, executes code,
# and asks for human input when the conversation is about to terminate.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    llm_config=llm_config,
    system_message="""
    Reply TERMINATE if the task has been solved at full satisfaction.
    Otherwise, reply CONTINUE, or the reason why the task is not solved yet.
    """
)
```
This agent takes the user's message and sends it to another agent. As the agents exchange messages, the user proxy stops the iterative process either because it hits an upper bound on the number of iterations or because it determines that the user query has been answered.
```python
# The assistant is the decision engine: it plans the work and writes the code
# that the user proxy will execute.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy.initiate_chat(
    assistant,
    message="""
    Conduct a thorough research about AI Agents.
    Make sure you find any interesting and relevant information given
    the current year is 2024.
    """
)
```
First, the user proxy passes the user query to the assistant:
user_proxy (to assistant):
Conduct a thorough research about AI Agents. Make sure you find any interesting and relevant information given the current year is 2024.
Then, the assistant replies with code that should lead to more information:
assistant (to user_proxy):
To conduct thorough research on AI Agents, I will gather information from various sources, including recent articles, papers, and news related to AI Agents in 2024. This will include their applications, advancements, and any notable trends or developments.
I will start by searching for relevant information online.
Let's proceed with the search.
```python
import requests
from bs4 import BeautifulSoup

# Function to search for articles about AI Agents in 2024
def search_ai_agents():
    query = "AI Agents 2024"
    url = f"https://www.google.com/search?q={query}"
    headers = {'User-Agent': 'Mozilla/5.0'}
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')

    # Extracting titles and links of the articles
    articles = []
    for g in soup.find_all('div', class_='BVG0Nb'):
        title = g.find('h3').text
        link = g.find('a')['href']
        articles.append((title, link))

    return articles

# Execute the search
ai_agents_articles = search_ai_agents()
for title, link in ai_agents_articles:
    print(f"Title: {title}\nLink: {link}\n")
```
This code will search for articles about AI Agents in 2024 and print the titles and links of the articles found. I will execute this code to gather the information.
The user proxy executes the code but gets an error, so it reports the error back to the assistant:
>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
Pulling image python:3-slim
user_proxy (to assistant):
exitcode: 1 (execution failed)
Code output:
Traceback (most recent call last):
  File "", line 1, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
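The error comes from where the code runs: as the log shows, the user proxy executes generated code in a bare python:3-slim Docker container, which does not ship with requests. As a hedged aside (not what happens in this run), the execution environment can be controlled through the user proxy's code_execution_config, for example by pointing it at a custom image with the needed dependencies preinstalled or by disabling Docker, continuing the example above:

```python
# Hedged sketch: controlling where the user proxy runs generated code.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    code_execution_config={
        "work_dir": "coding",            # directory where generated scripts are written
        "use_docker": "python:3-slim",   # an image name, True, or False for local execution
    },
)
```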
After a few back-and-forths, the assistant addressed all the errors and provided a script that led to useful information:
assistant (to user_proxy):
It appears that the `chardet` module is also not available in the execution environment. Given the limitations, I will simplify the approach and provide a script that does not rely on external libraries. Instead, I will use a basic method to read the response without attempting to decode it. This will allow us to see the raw HTML content, which may still contain useful information.
Here’s the revised Python code:
```python
# filename: search_ai_agents.py
import http.client
import urllib.parse

# Function to search for articles about AI Agents in 2024
def search_ai_agents():
    query = "AI Agents 2024"
    params = urllib.parse.urlencode({'q': query})
    conn = http.client.HTTPSConnection("www.google.com")
    conn.request("GET", f"/search?{params}")
    response = conn.getresponse()

    # Read the response
    data = response.read()

    # Print the raw HTML content
    print(data)

# Execute the search
search_ai_agents()
```
Please save this code in a file named `search_ai_agents.py` and execute it. This version will print the raw HTML content of the search results. Let me know if you were able to retrieve the information successfully.
Now, the user proxy provides the result of the code execution to the assistant:
>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
user_proxy (to assistant):
exitcode: 0 (execution succeeded) Code output: b'<!doctype html><html lang="en"><head><meta charset="UTF-8"><meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" itemprop="image"><title>AI Agents 2024 - Google Search</title><script nonce="fXg4DPSzxxEMh1zK8OU9WQ">(function(){\ndocument.documentElement.addEventListener("submit",function(b){var a;if(a=b.target){var c=a.getAttribute("data-submitfalse");a=c==="1"||c==="q"&&!a.elements.q.value?!0:!1}else a=!1;a&&(b.preventDefault(),b.stopPropagation())},!0);document.documentElement.addEventListener("click",function(b){var a;a:{for(a=b.target;a&&a!==document.documentElement;a=a.parentElement)if(a.tagName==="A"){a=a.getAttribute("data-nohref")==="1";break a}a=!1}a&&b.preventDefault()},!0);}).call(this);(function(){window.google=window.google||{};var a=window.performance&&window.performance.timing&&"navigationStart"in window.performance.timing,b=google.stvsc&&google.stvsc.ns,c=a?…
Finally, the assistant concludes that there is enough information to answer the user's query and terminates the conversation:
assistant (to user_proxy):
The search for information about AI Agents in 2024 has successfully returned raw HTML content from Google. Here are some key findings based on the search results:
1. **Increased Adoption**: In 2024, AI agents are no longer a niche interest. Companies across various industries are increasingly incorporating AI agents into their workflows, automating mundane tasks, assisting with data analysis, and even writing code.
2. **Ethical Considerations**: There is a growing discussion around the ethics of AI agents, particularly as they become capable of mimicking human personalities and acting on behalf of individuals.
3. **Types of AI Agents**: Various types of AI agents are being explored, including simple reflex agents, model-based reflex agents, goal-based agents, learning agents, and utility-based agents.
4. **Corporate Innovations**: Companies like Workday are announcing new AI agents designed to transform HR and finance processes, indicating a trend towards specialized AI applications in business environments.
5. **Decision-Making Capabilities**: AI agents are evolving to make decisions independently, raising questions about readiness for such technology in practical applications.
6. **Resources and Guides**: There are comprehensive guides available that detail the different types of AI agents, their operations, and the benefits they provide.
For further reading, here are some notable articles from the search results:
- [LangChain State of AI Agents Report](https://www.langchain.com/stateofaiagents)
- [We need to start wrestling with the ethics of AI agents](https://www.technologyreview.com/2024/11/26/1107309/we-need-to-start-wrestling-with-the-ethics-of-ai-agents/)
- [7 Types of AI Agents to Automate Your Workflows in 2024](https://www.digitalocean.com/resources/articles/types-of-ai-agents)
- [Workday Announces New AI Agents to Transform HR and Finance Processes](https://newsroom.workday.com/2024-09-17-Workday-Announces-New-AI-Agents-to-Transform-HR-and-Finance-Processes)
This information provides a comprehensive overview of the current landscape of AI agents in 2024.
TERMINATE
At this point, the system requests user input to validate the answer. If the user validates the answer, the process stops here; if not, the user can provide feedback so the agent system can understand what is missing.
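This behavior is governed by the human_input_mode argument: "TERMINATE" only consults the human when the conversation is about to end (or the auto-reply limit is hit), "ALWAYS" asks after every message, and "NEVER" makes the loop fully autonomous. As a hedged sketch, a fully autonomous variant of the proxy above would look like this:

```python
# Hedged sketch: a fully autonomous user proxy that never asks for human input
# and stops purely on the TERMINATE marker or the auto-reply limit.
auto_proxy = autogen.UserProxyAgent(
    name="auto_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
)
```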
Communication Patterns
Both CrewAI and Autogen provide templated interaction flows between agents, but Autogen tends to have more options. In CrewAI, the collaboration between agents is established by a Crew. For example, let's define two agents:
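As a hedged sketch of the shape this takes (a minimal illustration with made-up agents and tasks, not the example developed in the rest of the post), a Crew wires agents and tasks together and runs them, sequentially by default:

```python
from crewai import Agent, Crew, Process, Task

# Minimal sketch: two illustrative agents collaborating through a Crew.
researcher = Agent(role="Researcher", goal="Find facts about {topic}",
                   backstory="A meticulous researcher.", llm="gpt-4o-mini")
writer = Agent(role="Writer", goal="Summarize findings about {topic}",
               backstory="A concise technical writer.", llm="gpt-4o-mini")

research = Task(description="Research {topic}.", expected_output="Key facts.", agent=researcher)
summary = Task(description="Write a short summary.", expected_output="A paragraph.", agent=writer)

crew = Crew(agents=[researcher, writer], tasks=[research, summary], process=Process.sequential)
result = crew.kickoff(inputs={"topic": "AI Agents"})
```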