Week 7: Ontology-based Agents
Learning Objectives
This week explores how ontologies can enhance AI agents for action planning and multi-agent coordination. You will learn to build agents that reason over structured knowledge.
1. Introduction to Ontology-based Agents
The "GPS Navigation" Analogy
Think of an LLM as a powerful sports car engine. It has incredible speed (raw reasoning capability), but it doesn't inherently know the "rules of the road." If you let it drive anywhere, it might crash (hallucinate) or go off-road.
An Ontology is the GPS Map. It defines:
- Valid Roads: What actions are actually possible?
- Traffic Rules: What are the preconditions? (e.g., "Must be logged in to delete").
- Destinations: Clear goal definitions.
By combining the LLM (Engine) with the Ontology (Map), you build a safe, reliable Autonomous Agent.
Ontology-based agents use formal knowledge representations to guide their reasoning.
Why Ontologies for Agents?
| Benefit | Description |
|---|---|
| Structured Knowledge | Agents access organized domain knowledge |
| Semantic Reasoning | Understand relationships between concepts |
| Action Planning | Map goals to available actions via ontology |
| Interoperability | Multiple agents share common vocabulary |
| Explainability | Reasoning process is traceable |
Agent Architecture
                    ┌─────────────────┐
                    │    Ontology     │
                    │   (Knowledge)   │
                    └────────┬────────┘
                             │
┌──────────┐    ┌────────────┴────────────┐    ┌──────────┐
│   User   │────│       Agent Core        │────│  Tools   │
│  Query   │    │  - Planner              │    │  & APIs  │
└──────────┘    │  - Reasoner             │    └──────────┘
                │  - Executor             │
                └─────────────────────────┘

2. Action Ontologies
Modeling Actions
@prefix onto: <http://example.org/agent-ontology#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
# Action class hierarchy
onto:Action a owl:Class .
onto:DatabaseAction rdfs:subClassOf onto:Action .
onto:APIAction rdfs:subClassOf onto:Action .
onto:FileAction rdfs:subClassOf onto:Action .
# Specific actions
onto:QueryDatabase a owl:Class ;
rdfs:subClassOf onto:DatabaseAction ;
onto:hasInput onto:SQLQuery ;
onto:hasOutput onto:QueryResult ;
onto:requires onto:DatabaseConnection .
onto:SendEmail a owl:Class ;
rdfs:subClassOf onto:APIAction ;
onto:hasInput onto:EmailContent ;
onto:hasOutput onto:DeliveryStatus ;
    onto:requires onto:SMTPCredentials .

Action Properties
# Action properties
onto:hasInput a owl:ObjectProperty ;
rdfs:domain onto:Action ;
rdfs:range onto:DataType .
onto:hasOutput a owl:ObjectProperty ;
rdfs:domain onto:Action ;
rdfs:range onto:DataType .
onto:hasPrecondition a owl:ObjectProperty ;
rdfs:domain onto:Action ;
rdfs:range onto:Condition .
onto:hasPostcondition a owl:ObjectProperty ;
rdfs:domain onto:Action ;
    rdfs:range onto:Condition .

3. Building an Ontology-Driven Agent
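Before wrapping the ontology in an agent, it helps to confirm that the action model from the previous section can be queried programmatically. A minimal rdflib sketch (agent-ontology.ttl is a placeholder file name for the Turtle above):

from rdflib import Graph

# Load the action ontology defined in the previous section (placeholder file name).
g = Graph()
g.parse("agent-ontology.ttl", format="turtle")

# List every action together with its declared inputs and requirements.
results = g.query("""
    PREFIX onto: <http://example.org/agent-ontology#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?action ?input ?requirement WHERE {
        ?action rdfs:subClassOf+ onto:Action .
        OPTIONAL { ?action onto:hasInput ?input . }
        OPTIONAL { ?action onto:requires ?requirement . }
    }
""")
for action, inp, req in results:
    print(action, inp, req)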
Agent with Knowledge Base
from owlready2 import *
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import Tool
class OntologyAgent:
def __init__(self, ontology_path):
# Load ontology
self.onto = get_ontology(ontology_path).load()
self.llm = ChatOpenAI(model="gpt-4")
self.tools = self._create_tools()
def _create_tools(self):
"""Create tools based on ontology actions."""
tools = []
# Get all action classes from ontology
for action_class in self.onto.Action.subclasses():
tool = self._action_to_tool(action_class)
if tool:
tools.append(tool)
return tools
def _action_to_tool(self, action_class):
"""Convert an ontology action to a LangChain tool."""
name = action_class.name
description = self._get_action_description(action_class)
def execute_action(**kwargs):
# Implementation based on action type
return self._execute(action_class, kwargs)
return Tool(
name=name,
description=description,
func=execute_action
)
def _get_action_description(self, action_class):
"""Generate description from ontology annotations."""
inputs = list(action_class.hasInput) if hasattr(action_class, 'hasInput') else []
outputs = list(action_class.hasOutput) if hasattr(action_class, 'hasOutput') else []
desc = f"Action: {action_class.name}\n"
desc += f"Inputs: {[i.name for i in inputs]}\n"
desc += f"Outputs: {[o.name for o in outputs]}"
return desc
def query(self, user_query):
"""Process user query using ontology-guided reasoning."""
# Get relevant concepts from ontology
context = self._get_relevant_context(user_query)
# Plan actions based on ontology
plan = self._create_plan(user_query, context)
# Execute plan
result = self._execute_plan(plan)
        return result
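The imports at the top of the class hint at the remaining wiring: handing the ontology-derived tools to a ReAct-style agent. A minimal sketch of that wiring (the ontology path is a placeholder, and the prompt/agent API may differ across LangChain versions):

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

# Build the ontology-backed agent and expose its tools to a ReAct agent.
onto_agent = OntologyAgent("file://agent-ontology.owl")

prompt = hub.pull("hwchase17/react")          # standard ReAct prompt template
react_agent = create_react_agent(onto_agent.llm, onto_agent.tools, prompt)
executor = AgentExecutor(agent=react_agent, tools=onto_agent.tools, verbose=True)

result = executor.invoke({"input": "Email the latest sales figures to the finance team"})
print(result["output"])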
Ontology-Guided Planning
class OntologyPlanner:
def __init__(self, ontology):
self.onto = ontology
def plan(self, goal, current_state):
"""Create a plan to achieve goal from current state."""
        # Build the plan using backward chaining
        plan = []
        subgoals = [goal]
        while subgoals:
            current_goal = subgoals.pop()
            # Re-derive candidate actions for each subgoal, not only the top-level goal
            for action in self._find_applicable_actions(current_goal):
postconditions = list(action.hasPostcondition)
if self._satisfies(postconditions, current_goal):
plan.append(action)
# Add preconditions as new subgoals
preconditions = list(action.hasPrecondition)
for pre in preconditions:
if not self._is_satisfied(pre, current_state):
subgoals.append(pre)
break
return list(reversed(plan))
def _find_applicable_actions(self, goal):
"""Find actions whose postconditions match the goal."""
applicable = []
for action in self.onto.Action.subclasses():
if hasattr(action, 'hasPostcondition'):
for post in action.hasPostcondition:
if self._matches(post, goal):
applicable.append(action)
        return applicable
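A usage sketch of the planner. The class names below (EmailDelivered, ObtainSMTPCredentials) are hypothetical and assume the ontology links them to SendEmail via hasPostcondition / hasPrecondition:

from owlready2 import get_ontology

onto = get_ontology("file://agent-ontology.owl").load()
planner = OntologyPlanner(onto)

# Ask for a plan that makes the (hypothetical) EmailDelivered condition true.
plan = planner.plan(
    goal=onto.EmailDelivered,
    current_state={"smtp_credentials": None},   # whatever representation _is_satisfied expects
)
print([action.name for action in plan])
# e.g. ['ObtainSMTPCredentials', 'SendEmail'] in execution order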
4. Multi-Agent Systems
Agent Coordination via Shared Ontology
class MultiAgentSystem:
def __init__(self, shared_ontology_path):
self.onto = get_ontology(shared_ontology_path).load()
self.agents = {}
self.message_queue = []
def register_agent(self, agent_id, capabilities):
"""Register an agent with its capabilities."""
self.agents[agent_id] = {
'capabilities': capabilities,
'status': 'idle'
}
def delegate_task(self, task):
"""Delegate task to appropriate agent based on ontology."""
# Find required capability from ontology
required_capability = self._get_required_capability(task)
# Find agent with matching capability
for agent_id, info in self.agents.items():
if required_capability in info['capabilities']:
return self._assign_task(agent_id, task)
return None
def _get_required_capability(self, task):
"""Query ontology for required capability."""
# Use ontology reasoning to determine capability
task_class = self.onto.search_one(label=task.type)
if task_class:
capabilities = task_class.requiresCapability
return capabilities[0] if capabilities else None
        return None
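A brief usage sketch. The capability identifiers and the Task stand-in below are assumptions; in practice, capabilities should be registered with the same identifiers that _get_required_capability returns from the shared ontology:

mas = MultiAgentSystem("file://shared-ontology.owl")
mas.register_agent("db_agent", capabilities=["sql_query"])
mas.register_agent("mail_agent", capabilities=["send_email"])

class Task:
    """Minimal stand-in for whatever task object delegate_task receives."""
    def __init__(self, task_type):
        self.type = task_type

# Delegated to db_agent if the ontology maps this task type to "sql_query".
assignment = mas.delegate_task(Task("DatabaseQueryTask"))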
Agent Communication Protocol
class AgentMessage:
def __init__(self, sender, receiver, performative, content):
self.sender = sender
self.receiver = receiver
self.performative = performative # request, inform, confirm, etc.
self.content = content
class CommunicatingAgent:
def __init__(self, agent_id, ontology):
self.id = agent_id
self.onto = ontology
self.inbox = []
def send_request(self, receiver, action, parameters):
"""Send a request to another agent."""
message = AgentMessage(
sender=self.id,
receiver=receiver,
performative="request",
content={
"action": action,
"parameters": parameters
}
)
return message
def handle_message(self, message):
"""Process incoming message based on ontology."""
if message.performative == "request":
action = self.onto.search_one(iri=message.content["action"])
if self._can_perform(action):
result = self._execute(action, message.content["parameters"])
return self._create_response(message, result)
elif message.performative == "inform":
            self._update_beliefs(message.content)
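A sketch of one request/response round trip between two agents. The action IRI comes from the Section 2 ontology; the _can_perform, _execute, and _create_response helpers are assumed to be implemented:

from owlready2 import get_ontology

onto = get_ontology("file://agent-ontology.owl").load()
coordinator = CommunicatingAgent("coordinator", onto)
db_worker = CommunicatingAgent("db_worker", onto)

request = coordinator.send_request(
    receiver="db_worker",
    action="http://example.org/agent-ontology#QueryDatabase",
    parameters={"sql": "SELECT title FROM movies WHERE year = 2024"},
)
response = db_worker.handle_message(request)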
5. Semantic Tool Selection
Tool Matching via Ontology
class SemanticToolSelector:
def __init__(self, ontology, embedding_model):
self.onto = ontology
self.model = embedding_model
self.tool_embeddings = self._compute_tool_embeddings()
def _compute_tool_embeddings(self):
"""Pre-compute embeddings for all tools in ontology."""
embeddings = {}
for tool_class in self.onto.Tool.subclasses():
description = self._get_tool_description(tool_class)
embedding = self.model.encode(description)
embeddings[tool_class.name] = {
'class': tool_class,
'embedding': embedding
}
return embeddings
def select_tools(self, task_description, top_k=3):
"""Select most relevant tools for a task."""
task_embedding = self.model.encode(task_description)
# Compute similarities
similarities = []
for name, info in self.tool_embeddings.items():
sim = self._cosine_similarity(task_embedding, info['embedding'])
similarities.append((name, info['class'], sim))
# Sort by similarity
similarities.sort(key=lambda x: x[2], reverse=True)
# Return top-k tools
return [
{
'name': name,
'class': cls,
'score': score
}
for name, cls, score in similarities[:top_k]
]
def _get_tool_description(self, tool_class):
"""Generate rich description from ontology."""
desc = tool_class.name
if hasattr(tool_class, 'comment'):
desc += " " + str(tool_class.comment[0])
if hasattr(tool_class, 'hasInput'):
inputs = [i.name for i in tool_class.hasInput]
desc += f" Takes inputs: {', '.join(inputs)}"
        return desc
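The selector assumes an embedding model with an encode() method and a _cosine_similarity helper. A minimal sketch of both using sentence-transformers (the model name is only an example; inside the class the helper would become the _cosine_similarity method):

import numpy as np
from sentence_transformers import SentenceTransformer

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

model = SentenceTransformer("all-MiniLM-L6-v2")

# onto: the agent ontology loaded earlier with owlready2.
selector = SemanticToolSelector(ontology=onto, embedding_model=model)
for tool in selector.select_tools("Send the weekly report by email", top_k=3):
    print(f"{tool['name']}: {tool['score']:.3f}")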
6. Reasoning-Enhanced Agents
Agent with OWL Reasoning
class ReasoningAgent:
def __init__(self, ontology_path):
self.onto = get_ontology(ontology_path).load()
def infer_capabilities(self, entity):
"""Use reasoning to infer entity capabilities."""
# Run reasoner
with self.onto:
sync_reasoner()
# Get inferred types
inferred_types = entity.is_a
# Extract capabilities from types
capabilities = []
for t in inferred_types:
if hasattr(t, 'hasCapability'):
capabilities.extend(t.hasCapability)
return capabilities
def check_preconditions(self, action, state):
"""Use reasoning to check action preconditions."""
with self.onto:
# Create temporary individual representing current state
current = self.onto.State("current_state")
for prop, value in state.items():
setattr(current, prop, value)
# Run reasoner
sync_reasoner()
# Check if preconditions are satisfied
preconditions = list(action.hasPrecondition)
for pre in preconditions:
if not self._check_condition(pre, current):
return False, f"Precondition not met: {pre}"
            return True, "All preconditions satisfied"

Project: Movie Recommendation Knowledge Graph
Progress
| Week | Topic | Project Milestone |
|---|---|---|
| 1 | Ontology Introduction | ✅ Movie domain design completed |
| 2 | RDF & RDFS | ✅ 10 movies converted to RDF |
| 3 | OWL & Reasoning | ✅ Inference rules applied |
| 4 | Knowledge Extraction | ✅ 100 movies auto-collected |
| 5 | Neo4j | ✅ Graph DB constructed |
| 6 | GraphRAG | ✅ Natural language query system completed |
| 7 | Ontology Agents | New movie auto-update agent |
| 8 | Domain Expansion | Medical/Legal/Finance cases |
| 9 | Service Deployment | API + Dashboard |
Week 7 Milestone: Auto-Update Agent for New Movies
Build an agent that automatically collects information about newly released movies and adds them to the knowledge graph.
Agent Architecture:
[Scheduler] Runs daily at 00:00
↓
[Monitor Agent] Detects new movie releases
↓ New movie found
[Extractor Agent] Extracts info from Wikipedia/IMDB
↓
[Validator Agent] Validates against ontology schema
↓ Validation passed
[Updater Agent] Adds to Neo4j
↓
[Notifier Agent] Sends Slack notification

Agent Tools:
- search_new_movies: Search for recently released movies
- extract_movie_info: Extract detailed movie information
- validate_schema: Validate against ontology schema
- add_to_graph: Add nodes/relationships to Neo4j
- send_notification: Send update notifications
Ontology-based Planning:
# Ontology rules: Required relationships when adding movies
REQUIRED_RELATIONS = [
("Movie", "DIRECTED", "Person"), # Director required
("Movie", "HAS_GENRE", "Genre"), # Genre required
]
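One way the Validator Agent could enforce these rules before writing to Neo4j (a hypothetical helper, not the notebook's reference implementation):

def validate_required_relations(movie_record: dict) -> list:
    """Return the required relations missing from an extracted movie record.

    movie_record is assumed to map relation names to lists of target values,
    e.g. {"DIRECTED": ["Christopher Nolan"], "HAS_GENRE": ["Sci-Fi"]}.
    """
    missing = []
    for subject, relation, target in REQUIRED_RELATIONS:
        if not movie_record.get(relation):
            missing.append(f"{subject} -[:{relation}]-> {target}")
    return missing

# A record without a genre is rejected by the Validator Agent:
print(validate_required_relations({"DIRECTED": ["Christopher Nolan"]}))
# -> ['Movie -[:HAS_GENRE]-> Genre']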
In the project notebook, you will build an agent that automatically collects new movies. Specifically, you will implement:
- Multi-agent pipeline with LangGraph (a minimal skeleton is sketched after this list)
- Monitor → Extractor → Validator → Updater chain
- Ontology rule-based action validation (SHACL)
- Slack notifications for updates
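A minimal LangGraph skeleton for the pipeline (node bodies are placeholders, the state fields are assumptions, and the API may differ slightly between langgraph versions):

from typing import List, TypedDict
from langgraph.graph import StateGraph, END

class PipelineState(TypedDict):
    new_movies: List[dict]    # raw hits from the monitor step
    validated: List[dict]     # records that passed ontology validation

# Placeholder nodes; each returns a partial state update.
def monitor(state: PipelineState) -> dict:
    return {"new_movies": []}                      # e.g. call search_new_movies

def extract(state: PipelineState) -> dict:
    return {"new_movies": state["new_movies"]}     # e.g. call extract_movie_info

def validate(state: PipelineState) -> dict:
    # validate_required_relations is the helper sketched earlier.
    ok = [m for m in state["new_movies"] if not validate_required_relations(m)]
    return {"validated": ok}

def update(state: PipelineState) -> dict:
    return {}                                      # e.g. add_to_graph + send_notification

graph = StateGraph(PipelineState)
graph.add_node("monitor", monitor)
graph.add_node("extract", extract)
graph.add_node("validate", validate)
graph.add_node("update", update)
graph.set_entry_point("monitor")
graph.add_edge("monitor", "extract")
graph.add_edge("extract", "validate")
# Only run the updater if validation produced at least one record.
graph.add_conditional_edges("validate", lambda s: "update" if s["validated"] else END)
graph.add_edge("update", END)

app = graph.compile()
app.invoke({"new_movies": [], "validated": []})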
What you'll build by Week 9: An AI agent that answers "Recommend sci-fi movies like Nolan's style" by reasoning over director-genre-rating relationships in the knowledge graph.
Practice Notebook
For deeper exploration of the theory, the practice notebook covers additional topics:
- Agent Tool definition patterns
- Multi-agent collaboration protocols
- SHACL rule writing and validation
- Agent debugging and monitoring
Interview Questions
How do ontologies improve AI agent planning?
Key Benefits:
- Action modeling: Formal preconditions and effects
- Goal decomposition: Ontology hierarchy guides subgoal creation
- Resource reasoning: Understand what's needed for each action
- Constraint checking: Verify plans against domain rules
- Reusability: Same ontology works across different planners
- Explainability: Can trace why specific actions were chosen
Premium Content
Want complete solutions with detailed explanations and production-ready code?
Check out the Ontology & Knowledge Graph Cookbook Premium for:
- Complete notebook solutions with step-by-step explanations
- Real-world case studies and best practices
- Interview preparation materials
- Production deployment guides
Next Steps
In Week 8: Domain Projects, you will apply your knowledge to real-world case studies in medical, legal, and finance domains.