Exploring Oracle’s new AI Agent Memory Python Library with OpenAI

Hi everyone!

In this post, I want to show you a small but useful demo application that uses Oracle AI Agent Memory from Python. The complete code for this example is in the agent-memory repository. You can learn more about Oracle AI Agent Memory on the Oracle website.

The demo is a customer support assistant. That is a nice shape for an agent memory example because it gives us all the things agents usually need to remember: who the user is, what happened before, which device or account is involved, what the current case state is, and whether this new problem sounds like an earlier one.

The important point is that oracleagentmemory is the memory layer. It is not tied to one agent framework. You can use it with different frameworks and SDKs. This particular sample uses the OpenAI SDK for the agent-style tool-calling loop, and it uses Oracle AI Agent Memory as the durable memory backend.

In other words, OpenAI drives the agent turn. Oracle stores and retrieves the memory.

Let’s walk through it.

What we are building

The sample application does five things:

  • starts a local Oracle Database Free container
  • creates the Oracle AI Agent Memory managed schema
  • creates a small companion schema for customer, device, case, policy, JSON state, and graph data
  • runs a scripted support conversation with memory-aware tool calls
  • prints a database inspection report so we can see where the memory went

The scenario centers on Alex, a support user with a River House account and a Model X router. Alex had a prior Wi-Fi dropout issue. Later, Alex comes back and says video calls are unstable again. The agent needs to figure out whether that sounds related, what facts it already knows, what relationships matter, and what to do next.

That gives us a realistic demo without needing a real ticketing system, CRM, or router telemetry feed.

Before you begin

You need Python, Docker, uv, and an OpenAI API key.

The quickstart from the repository is:

cp .env.example .env
# Edit .env and set OPENAI_API_KEY.
uv sync
uv run agent-memory-demo run

The demo uses gvenzl/oracle-free:23.26.1-slim-faststart by default. That is helpful for a local demo because the database starts faster than an image that performs its full first-time setup on startup.

The repository also sets these defaults:

  • OPENAI_MODEL=gpt-5-mini
  • OPENAI_EMBEDDING_MODEL=text-embedding-3-small
  • OPENAI_MEMORY_LLM_MODEL=gpt-5-mini
  • ORACLE_MEMORY_TABLE_PREFIX=OAM_DEMO_
  • ORACLE_APP_TABLE_PREFIX=OAM_DEMO_APP_

The two prefixes matter. The OAM_DEMO_ tables are managed by Oracle AI Agent Memory. The OAM_DEMO_APP_ tables are the companion business tables created by this sample application.

That separation makes the demo easier to understand. We can see what the library owns, and we can also see the normal application data that the agent works with.

Why agent memory is not just one thing

When people first talk about agent memory, they often mean one thing: chat history. That is useful, but it is not enough.

A useful agent may need several kinds of memory:

  • Working or thread memory: the current and previous support messages. Stored as relational rows in managed Agent Memory tables. Use it when the agent needs conversation continuity.
  • Durable fact memory: preferences, facts, and case summaries that should survive the current conversation. Stored as managed Agent Memory records, plus vector chunks for retrieval. Use it when the agent should remember something later.
  • Profile memory: user and agent profiles. Stored as relational rows. Use it for stable actor information such as user preferences or agent identity.
  • State memory: the mutable status of a support case. Stored as JSON in the app-owned case table. Use it when the shape of the state may evolve over time.
  • Relationship memory: user-to-account-to-device-to-case-to-policy paths. Stored as a SQL Property Graph over relational vertex and edge tables. Use it when the important question is about connected things.
  • Similarity memory: prior cases or memories that are semantically close to the current issue. Stored in Oracle VECTOR columns in record chunks. Use it when the same thing may be described in different words.

That is the main architectural idea in the demo. Different memory types have different access patterns, so they should not all be forced into the same shape.

Relational data is great when identifiers, constraints, ownership, and joins matter. JSON is great when the shape of state changes as the case moves forward. Graph is great when paths and relationships are the point. Vector data is great when similarity matters more than exact matching.

The nice thing here is that all of those can live in Oracle Database. The agent does not need a separate relational database, graph database, document database, and vector database just to remember one support case.

The repository structure

The application code lives under src/agent_memory_demo.

The key files are:

  • cli.py: the Typer commands run, interactive, inspect-db, verify-memory, and reset-db.
  • container.py: the local Oracle container lifecycle.
  • config.py: environment variable loading and defaults.
  • memory.py: creation of the OracleAgentMemory client.
  • agent.py: the OpenAI tool-calling loop and tool schemas.
  • tools.py: tool handlers for memory search, saving memory, context, JSON state, graph paths, and inspection.
  • schema.py: the app-owned relational, JSON, and graph schema.
  • seed.py: deterministic demo data.
  • inspect.py: database inspection output, including vector storage evidence.

The sample is intentionally small, but it does not hide the database. Showing the database is the point of the demo.

Creating the memory client

The memory setup happens in memory.py.

The demo creates an OracleAgentMemory client with:

  • an Oracle database connection
  • an embedding model
  • an LLM model for memory extraction and summaries
  • a schema policy
  • a table name prefix

The schema policy is important. Normal startup uses SchemaPolicy.CREATE_IF_NECESSARY, so the managed Agent Memory schema is created if it is not already there. The reset-db command uses the recreate policy for an explicit destructive reset.

That gives the sample a clean local developer workflow. You can run it, inspect it, reset it, and run it again without needing a manually installed database.

The agent loop

The OpenAI side of the sample lives in agent.py.

The agent loop sends a user message, instructions, and a list of function tools to the OpenAI SDK. When the model returns tool calls, the application executes the local Python handler, sends the tool output back, and repeats until the model produces final text.

The useful part is that the tools map to different memory operations:

  • search_memory: scoped Agent Memory search.
  • save_memory: explicit durable memory writes.
  • get_context: a thread context card.
  • update_case_state: JSON state updates in the support case table.
  • find_related_case: vector-backed semantic retrieval of similar case memories.
  • explain_relationships: SQL Property Graph traversal.
  • inspect_memory_tables: database evidence for the demo.

This is a useful pattern for agent applications. The model does not get direct database access. It gets tools. Each tool has a focused job, a scoped input shape, and a handler that decides what database operation is safe and appropriate.
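
The loop and dispatch pattern above can be sketched in a few lines. This is a toy simulation, not the repository's code: the handlers, their return shapes, and the scripted stand-in model are illustrative assumptions, while the tool names come from the table above.

```python
import json

# Hypothetical local handlers standing in for the real ones in tools.py.
def search_memory(args):
    return {"hits": [{"text": "Alex had a prior Wi-Fi dropout case"}]}

def get_context(args):
    return {"summary": "Support thread for user_alex"}

TOOL_HANDLERS = {"search_memory": search_memory, "get_context": get_context}

def run_agent_turn(model_call, user_message):
    """Minimal tool-calling loop: execute tool calls until the model
    produces final text. The model never touches the database directly."""
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = model_call(messages)
        if reply.get("tool_calls"):
            for call in reply["tool_calls"]:
                result = TOOL_HANDLERS[call["name"]](json.loads(call["arguments"]))
                messages.append({"role": "tool", "name": call["name"],
                                 "content": json.dumps(result)})
        else:
            return reply["content"]

# Scripted stand-in for the OpenAI model: one tool call, then an answer.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "search_memory",
                                "arguments": json.dumps({"query": "router issues"})}]}
    return {"content": "This looks related to the earlier router dropout case.",
            "tool_calls": None}

print(run_agent_turn(fake_model, "My video calls keep freezing."))
```

The real loop in agent.py talks to the OpenAI SDK instead of fake_model, but the control flow is the same: tool calls go out, handler results come back, and the loop ends when the model returns plain text.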

The companion schema

The sample creates app-owned tables for customers, devices, support cases, policies, graph vertices, and graph edges.

This is separate from the managed Agent Memory schema. That is a good design choice because most real applications already have business data. Agent memory should not replace that data. It should work with it.

The support case table uses JSON for mutable state. A case might start as open, then get a next action, then become escalation-ready, then later get a resolution. That kind of state is structured, but it can change over time. JSON is a good fit.
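
To make that concrete, here is a toy Python reimplementation of JSON Merge Patch semantics, which is how a merge-style state update behaves: nested objects merge, null deletes a key, and anything else replaces the old value. In the demo the database does this server-side; the state keys below echo the ones that show up in the inspection queries later.

```python
import json

def merge_patch(target, patch):
    """Apply JSON Merge Patch semantics: objects merge recursively,
    None (JSON null) deletes a key, other values replace the old one."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

state = {"status": "open", "next_action": "collect router logs"}
state = merge_patch(state, {"status": "in_progress", "escalation_ready": True})
print(json.dumps(state))
```

The patch only mentions the fields it changes, which is why this shape works well for case state that gains fields over time.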

The graph tables show relationships. Alex owns an account. The account has a router. The router has a case. The case uses a policy. That is exactly the kind of question where graph traversal is easier to read than a pile of joins.
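
The question the graph answers is a path question, which a plain breadth-first search over an edge list makes easy to see. This sketch is not the demo's SQL Property Graph query; apart from case_wifi_dropout_001, the vertex and edge identifiers are made up for illustration.

```python
from collections import deque

# Toy edge list mirroring the demo's relationship shape:
# user -> account -> device -> case -> policy. IDs are illustrative.
EDGES = [
    ("user_alex", "OWNS", "account_river_house"),
    ("account_river_house", "HAS_DEVICE", "router_model_x"),
    ("router_model_x", "HAS_CASE", "case_wifi_dropout_001"),
    ("case_wifi_dropout_001", "USES_POLICY", "policy_router_swap"),
]

def find_path(start, goal):
    """Breadth-first search over the edge list; returns the
    alternating vertex/relationship path from start to goal."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        vertex, path = queue.popleft()
        if vertex == goal:
            return path
        for src, rel, dst in EDGES:
            if src == vertex and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [rel, dst]))
    return None

print(" -> ".join(find_path("user_alex", "policy_router_swap")))
```

In the database, the same traversal is one declarative graph query instead of hand-written search code, which is the point of storing the edges as a SQL Property Graph.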

The managed Agent Memory tables store threads, messages, memory records, actor profiles, and record chunks. The record chunks are where the vector-backed similarity story becomes visible.

Run the scripted demo

Start with the main command:

uv run agent-memory-demo run

The run command starts the Oracle container, creates demo schema objects, seeds deterministic data, runs a scripted support conversation, shows memory tool usage, prints the final assistant answer, and includes a database inspection report before the container is removed.

There are a few things to watch for in the output.

First, the demo creates both user and agent profiles. That shows profile memory, not just chat memory.

Second, it creates an initial thread and stores messages. That gives the agent working memory and a durable record of the conversation.

Third, it saves explicit memory. The demo records facts like Alex’s router and contact preferences.

Fourth, it creates a second thread for a follow-up problem and shows the difference between broad thread matching and exact thread matching. That is a subtle but important behavior. Sometimes you want memories from the same user and agent across threads. Sometimes you only want the current thread.

Fifth, it shows scope isolation. A search scoped to another user should not see Alex’s memories.

Finally, the OpenAI tool calls are printed with their JSON arguments and compact results. That makes the agent loop much easier to reason about because you can see what the model asked for and what the database returned.
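
The scoping behaviors above boil down to filtering on user, agent, and thread. This toy in-memory store shows the semantics only; the real library's API and storage differ, and the second user is invented for the example.

```python
# Toy scoped memory store. A memory is visible only within its
# (user, agent) scope; thread=None searches across the user's threads,
# while a thread id restricts the search to that one thread.
MEMORIES = [
    {"user": "user_alex", "agent": "support_agent", "thread": "t1",
     "text": "Alex had a Wi-Fi dropout issue"},
    {"user": "user_alex", "agent": "support_agent", "thread": "t2",
     "text": "Video calls are unstable again"},
    {"user": "user_blake", "agent": "support_agent", "thread": "t3",
     "text": "Blake asked about billing"},
]

def search(user, agent, thread=None):
    return [m["text"] for m in MEMORIES
            if m["user"] == user and m["agent"] == agent
            and (thread is None or m["thread"] == thread)]

print(search("user_alex", "support_agent"))        # broad: both of Alex's threads
print(search("user_alex", "support_agent", "t2"))  # exact: current thread only
print(search("user_blake", "support_agent"))       # Alex's memories are not visible
```

That last call is the isolation property: switching the user scope hides everything Alex's agent remembered.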

The output is color-coded:

  • green for user messages
  • magenta for assistant messages
  • yellow for OpenAI tool calls and arguments
  • blue for Oracle database, graph, and tool-result evidence
  • cyan for progress and memory setup or search visibility

That may sound like a small thing, but it makes the demo much easier to follow while it runs.

Run the interactive demo

The repository also includes an interactive mode:

uv run agent-memory-demo interactive

This starts a memory-enabled assistant using the same command-scoped Oracle container lifecycle. It seeds the same companion data and stores your turns in a scoped Agent Memory thread.

One practical detail: the container is command-scoped. It exists while the command is running and is removed when the command exits. So if you want to inspect the database manually, leave the interactive session open.

The README says to wait for output like this:

Started Oracle demo database at localhost:32838/FREEPDB1
Seeded companion relational, JSON, graph, and policy data.
Interactive Oracle AI Agent Memory demo. Type 'quit' to exit.
you>:

Then, before inspecting the database, ask a prompt that creates some memory activity:

For user_id=user_alex and agent_id=support_agent, inspect memory tables and tell me what you can see in one sentence.

Now keep that terminal open and inspect the database from another terminal.

Inspect the database

There is a command for a quick database evidence report:

uv run agent-memory-demo inspect-db

This starts a temporary Oracle container, seeds the deterministic companion data, prints table counts, JSON case state, and graph paths, and then tears the container down.

One thing to know: inspect-db does not create Agent Memory records. That means managed memory table counts are expected to be zero for that command. Use run or verify-memory when you want to populate and inspect memory and vector chunk tables.

For hands-on SQL inspection, keep the interactive session open and connect from a second terminal. Replace the port with the one printed by your run:

sql agent_memory_demo/AgentMemoryDemo1@localhost:32838/FREEPDB1

A good first query is to list the demo tables:

SELECT table_name
FROM user_tables
WHERE table_name LIKE 'OAM_DEMO%'
ORDER BY table_name;

Then look at the columns and data types:

SELECT table_name, column_name, data_type
FROM user_tab_columns
WHERE table_name LIKE 'OAM_DEMO%'
ORDER BY table_name, column_id;

That is where the storage story becomes concrete. You should see the normal relational columns, the JSON columns in the app-owned tables, and, after running a memory-populating command, the vector-related storage in the managed record chunk table.

The graph edge indexes are also useful to inspect:

SELECT index_name, table_name, column_name
FROM user_ind_columns
WHERE index_name LIKE 'OAM_DEMO_APP_GRAPH_EDGE%'
ORDER BY index_name, column_position;

To see the JSON case state, run:

SELECT case_id,
       title,
       json_value(state_json, '$.status') AS status,
       json_value(state_json, '$.next_action') AS next_action,
       json_value(state_json, '$.escalation_ready') AS escalation_ready
FROM OAM_DEMO_APP_CASE
ORDER BY case_id;

This is a good example of why JSON is useful here. The case state is still queryable from SQL, but the state document can evolve as the workflow evolves.

Now look at the graph tables:

SELECT vertex_id, vertex_type, label
FROM OAM_DEMO_APP_GRAPH_VERTEX
ORDER BY vertex_type, vertex_id;
SELECT source_vertex_id, relationship_type, target_vertex_id
FROM OAM_DEMO_APP_GRAPH_EDGE
ORDER BY edge_id;

And confirm that the property graph exists:

SELECT object_name, object_type
FROM user_objects
WHERE object_name = 'OAM_DEMO_APP_PROPERTY_GRAPH';

Finally, after running a memory-populating command, look at the managed Agent Memory tables:

SELECT *
FROM OAM_DEMO_MEMORY
FETCH FIRST 5 ROWS ONLY;
SELECT *
FROM OAM_DEMO_RECORD_CHUNKS
FETCH FIRST 5 ROWS ONLY;

This query shows the vector columns:

SELECT column_name, data_type
FROM user_tab_columns
WHERE table_name = 'OAM_DEMO_RECORD_CHUNKS'
AND data_type = 'VECTOR';

And this one shows vector indexes:

SELECT index_name, index_type
FROM user_indexes
WHERE table_name = 'OAM_DEMO_RECORD_CHUNKS'
ORDER BY index_name;

That is the part I like most in this demo. We are not just saying that memory is persistent. We can actually look at the tables and see how different kinds of memory are represented.

Verify graph and vector behavior

The verify-memory command is a nice acceptance check:

uv run agent-memory-demo verify-memory

It seeds explicit Agent Memory records, runs graph traversal, runs vector-backed similarity search, and prints metadata evidence for the managed OAM_DEMO_RECORD_CHUNKS table.

The expected similar prior router case is case_wifi_dropout_001.

That matters because the follow-up issue does not have to use the exact same words as the earlier issue. Vector search can connect “video calls freeze” with a prior router dropout case because the meaning is similar.

This is the right place to use vectors. You are not asking for the one row with a known primary key. You are asking, “Have we seen something like this before?”
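
Under the hood, that question is a nearest-neighbor comparison between embedding vectors, usually by cosine similarity. The three-dimensional vectors below are hand-made toys standing in for real embeddings (in the demo, text-embedding-3-small produces them and Oracle stores them in VECTOR columns), and case_billing_017 is an invented contrast case.

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product divided by the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for stored case memories.
vectors = {
    "case_wifi_dropout_001: Wi-Fi keeps dropping during calls": [0.9, 0.1, 0.2],
    "case_billing_017: invoice shows a duplicate charge":       [0.1, 0.9, 0.1],
}

query = [0.8, 0.2, 0.3]  # toy embedding for "video calls freeze"

best = max(vectors, key=lambda text: cosine(query, vectors[text]))
print(best)
```

Even though "video calls freeze" shares no keywords with the dropout case, the vectors point in nearly the same direction, so the router case wins. The database does the same comparison at scale with a vector distance function and, optionally, a vector index.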

When to use each Oracle data type

Here is the practical version.

Use relational tables for the things you must identify and constrain: users, accounts, devices, cases, policies, threads, messages, and profile rows. Relational data gives you keys, constraints, indexes, joins, and ownership boundaries. That is still the backbone of most useful applications.

Use JSON for flexible state. In this sample, support case state lives in JSON because the state can change as the workflow changes. The update_case_state tool uses JSON_MERGEPATCH to update the state document without replacing the whole application model.

Use graph for connected context. If the agent needs to understand that Alex owns an account, the account has a router, the router has a case, and the case uses a policy, graph traversal makes that relationship path explicit. In this sample, explain_relationships uses a SQL Property Graph query to return user-account-device-case-policy paths.

Use vectors for similarity. If the user describes the same issue with different words, exact search is not enough. Vector search lets the agent find semantically similar memories and prior cases. In this sample, case summary memories are embedded into record chunks, and the find_related_case tool searches those chunks through Oracle Agent Memory.

The real value is not that any one of these exists. The value is that the sample can use all of them together.

Reset the demo

If you want to exercise the destructive reset path, the repository includes this command:

uv run agent-memory-demo reset-db

That resets the app-owned companion schema and recreates the managed Agent Memory schema. It is a demo command, not something to point at a production schema.

What this sample teaches

There are a few patterns here that are worth carrying into real applications.

First, keep the memory backend separate from the agent framework. This demo uses the OpenAI SDK tool loop, but the memory concepts are not OpenAI-specific. The agent needs tools. The memory system needs scoped APIs. Those two things meet at a clean boundary.

Second, scope everything. The demo uses user, agent, and thread boundaries. It also shows that another user should not see Alex’s memories. That is not just a demo flourish. It is table stakes for real multi-user agents.

Third, use the right data shape for the job. Chat messages, durable memories, JSON state, graph relationships, and vector chunks are not the same thing. Treating them differently makes the system easier to reason about.

Fourth, inspect the database. Agent demos can feel magical if all you see is a final answer. This demo is better because it shows the rows, JSON state, graph paths, and vector storage evidence. That makes the behavior testable and explainable.

Wrap up

We built and inspected a memory-enabled support assistant using oracleagentmemory, the OpenAI SDK, and a local Oracle Database container.

The sample shows working memory through threads and messages, durable memory through explicit memories and extracted facts, profile memory for users and agents, JSON state for support cases, relationship memory through SQL Property Graph, and similarity memory through vector-backed record chunks.

The important idea is simple: agents need more than chat history. They need memory that is durable, scoped, queryable, and connected to the data the application already trusts.

This demo gives you a compact way to see that pattern end to end.

About Mark Nelson

Mark Nelson is a Developer Evangelist at Oracle, focusing on microservices and AI. Mark has served as a Section Leader in Stanford's Code in Place program, which has introduced tens of thousands of people to the joy of programming. He is a published author, a reviewer and contributor, a content creator, and a lifelong learner. He enjoys traveling, meeting people, and learning about the foods and cultures of the world. Mark has worked at Oracle since 2006 and before that at IBM since 1994.