AI agents need persistent state. Not just a key/value store or a blob of JSON, but actual structured data with tables, columns, indexes, and queries. An agent managing customer interactions needs a contacts table. An agent tracking inventory needs product and order tables. An agent doing research needs a table to accumulate findings.
The storage route in AgenticMail gives every agent its own relational database, accessible through 28 REST endpoints. It’s a full DBMS exposed over HTTP.
Why a full DBMS?
The simple approach would be a key/value store. Give agents put(key, value) and get(key) and call it done. But agents that work on complex tasks need relational capabilities. They need to query across dimensions (“find all contacts who haven’t been emailed in 30 days and are in the enterprise tier”), aggregate data (“total revenue by quarter”), and join related tables.
A key/value store forces the agent to load all data into memory and do this processing itself, which is slow, memory-intensive, and error-prone. A relational database does it natively. The agent describes what it wants, and the database figures out how to retrieve it efficiently.
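The contrast is easy to sketch with Python's built-in sqlite3 module standing in for the agent's database. This is illustrative, not AgenticMail code: the table and column names are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, tier TEXT, last_emailed TEXT)")
conn.executemany(
    "INSERT INTO contacts VALUES (?, ?, ?)",
    [("Ada", "enterprise", "2024-01-05"),
     ("Bob", "free", "2024-03-01"),
     ("Cyn", "enterprise", "2024-03-20")],
)

# Key/value style: pull everything into memory and filter in application code.
rows = conn.execute("SELECT name, tier, last_emailed FROM contacts").fetchall()
stale = [r[0] for r in rows if r[1] == "enterprise" and r[2] < "2024-02-01"]

# Relational style: describe the result set and let the database retrieve it.
stale_sql = [r[0] for r in conn.execute(
    "SELECT name FROM contacts WHERE tier = ? AND last_emailed < ?",
    ("enterprise", "2024-02-01"),
)]

assert stale == stale_sql == ["Ada"]
```

Both paths return the same answer, but only the second one stays cheap as the table grows: the database can use an index instead of scanning every row in application memory.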
DDL: Schema management
The DDL (Data Definition Language) endpoints let agents create and modify their database schema:
Create table accepts a table name and column definitions with types (text, integer, real, blob, datetime). Each column can specify constraints like NOT NULL, UNIQUE, and DEFAULT values.
Alter table adds or modifies columns on existing tables. Adding a column to a populated table is non-destructive; existing rows get the default value (or NULL if no default).
Drop table removes a table entirely. This requires a confirmation parameter to prevent accidental deletion. The agent has to explicitly pass confirm: true.
Clone table creates a copy of a table, either structure only or with data. Useful when an agent wants to experiment with a modified schema without risking its production table.
Rename table does what it says. References in stored queries or other tables are not updated automatically; the agent is responsible for consistency.
Agents can evolve their schema over time as their needs change. A research agent might start with a single “findings” table and later add “sources” and “citations” tables as its workflow matures.
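As a rough sketch, a create-table request body might look like the following. The field names here (table, columns, constraints, confirm) are assumptions for illustration, not the documented AgenticMail schema:

```python
import json

# Hypothetical request body for the create-table endpoint.
# Field names are illustrative assumptions, not the real API schema.
create_table = {
    "table": "findings",
    "columns": [
        {"name": "id",         "type": "integer", "constraints": ["NOT NULL", "UNIQUE"]},
        {"name": "summary",    "type": "text",    "constraints": ["NOT NULL"]},
        {"name": "confidence", "type": "real",    "default": 0.5},
        {"name": "found_at",   "type": "datetime"},
    ],
}

# Dropping a table must be explicitly confirmed.
drop_table = {"table": "findings", "confirm": True}

print(json.dumps(create_table, indent=2))
```

The confirmation flag on drop is the pattern to notice: destructive operations take an extra, deliberate parameter rather than relying on the agent to be careful.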
DML: Data operations
The DML (Data Manipulation Language) endpoints are the core of the storage system:
Insert adds one or more rows. Batch insert accepts an array of rows for bulk loading.
Upsert inserts a row or updates it if a conflict occurs on a specified unique column. This is the workhorse operation for agents that sync data from external sources. Pull in the latest data, upsert it, and the table stays current without duplicate-handling logic.
Query is the read operation. It supports WHERE clauses with comparison operators, ORDER BY, LIMIT, OFFSET, and column selection. The query interface is structured JSON rather than raw SQL, so the agent builds a query object instead of constructing a SQL string.
Aggregate computes COUNT, SUM, AVG, MIN, and MAX across columns with optional GROUP BY and HAVING clauses. An agent can ask “average response time per customer segment” without pulling all the rows.
Update modifies existing rows matching a WHERE clause. As with drop, destructive updates (those without a WHERE clause, which would affect all rows) require explicit confirmation.
Delete removes rows matching a WHERE clause, with the same confirmation safeguard for unscoped deletes.
Truncate empties a table while preserving its schema. Faster than DELETE without a WHERE clause because it doesn’t log individual row deletions.
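To make the structured query interface concrete, here is a hypothetical upsert body and query object, plus a minimal translation of the query object into SQL. All field names are illustrative assumptions, and the translator ignores order_by and offset for brevity:

```python
# Illustrative request bodies; field names are assumptions, not the
# documented AgenticMail schema.
upsert = {
    "table": "contacts",
    "conflict_column": "email",   # unique column that triggers the update path
    "row": {"email": "ada@example.com", "tier": "enterprise"},
}

query = {
    "table": "contacts",
    "columns": ["name", "email"],
    "where": [
        {"column": "tier", "op": "=", "value": "enterprise"},
        {"column": "last_emailed", "op": "<", "value": "2024-02-01"},
    ],
    "order_by": [{"column": "last_emailed", "direction": "asc"}],
    "limit": 50,
    "offset": 0,
}

def to_sql(q):
    # Minimal sketch of turning the query object into parameterized SQL.
    # Ignores order_by/offset for brevity.
    cols = ", ".join(q["columns"])
    where = " AND ".join(f'{c["column"]} {c["op"]} ?' for c in q["where"])
    return f'SELECT {cols} FROM {q["table"]} WHERE {where} LIMIT {q["limit"]}'

sql = to_sql(query)
# → SELECT name, email FROM contacts WHERE tier = ? AND last_emailed < ? LIMIT 50
```

Building queries as JSON objects rather than SQL strings is what lets the server validate each clause and keep values parameterized, which is most of the injection defense.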
Index management
Agents can create and drop indexes on their tables. The API supports single column indexes, composite indexes across multiple columns, and unique indexes.
Indexes are important for query performance once tables grow beyond a few hundred rows. An agent that frequently queries contacts by email address should have an index on that column. The storage system doesn’t create indexes automatically (because every index has a write performance cost), but it does include query plan analysis that suggests when an index would help.
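SQLite's EXPLAIN QUERY PLAN shows the effect of an index directly. This is a generic sqlite3 demonstration, not the storage route's own query plan analysis output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER, email TEXT)")

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

q = "SELECT id FROM contacts WHERE email = 'a@example.com'"

# Without an index, the planner falls back to a full table scan.
assert plan(q).startswith("SCAN")

conn.execute("CREATE INDEX idx_contacts_email ON contacts(email)")

# With the index, the plan switches to an index search.
assert "USING INDEX idx_contacts_email" in plan(q)
```

On a few hundred rows the difference is invisible; on a few hundred thousand, it is the difference between milliseconds and seconds per lookup.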
Import and export
JSON import loads an array of objects into a table. Each object becomes a row, with object keys mapping to column names.
CSV import loads a CSV file with configurable delimiter, quote character, and header handling. The first row can be treated as column headers or as data.
JSON export dumps a table (or query result) as an array of objects.
CSV export dumps a table or query result as CSV with configurable formatting.
Import and export make it easy for agents to work with external data. Pull a CSV from an email attachment, import it into a table, query it, and export the results.
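For example, Python's csv module turns a CSV attachment's header row into column names, producing row objects in the shape the JSON import expects. This is a generic sketch, not AgenticMail code:

```python
import csv
import io

# A CSV attachment becomes a list of row objects ready for JSON import,
# with the header row treated as column names.
raw = "sku,qty\nA-100,4\nB-200,9\n"
rows = list(csv.DictReader(io.StringIO(raw)))
# rows == [{'sku': 'A-100', 'qty': '4'}, {'sku': 'B-200', 'qty': '9'}]
```

Note that CSV values arrive as strings; column types declared in the table schema are what tell the import layer to coerce "4" into an integer.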
Raw SQL (guarded)
For operations that don’t fit the structured query interface, agents can execute raw SQL. This endpoint is heavily guarded. Only SELECT statements are allowed by default. DDL and DML via raw SQL require explicit admin authorization per agent.
The guard parses the SQL before execution, rejecting statements that contain prohibited keywords or attempt to access system tables. It’s not a perfect sandbox (SQL injection via creative syntax is always a risk), but it’s a strong first line of defense. The primary use case is complex joins and subqueries that the structured query interface can’t express.
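A drastically simplified version of such a guard might look like the sketch below. The real implementation is more thorough; this version shares the stated limitation that keyword filtering is not a true sandbox:

```python
import re

# Simplified sketch of a SQL guard: allow-list SELECT, deny-list everything
# else. Not a real sandbox; shown for illustration only.
PROHIBITED = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|ATTACH|PRAGMA)\b"
    r"|sqlite_master|sqlite_",
    re.IGNORECASE,
)

def guard(sql: str) -> bool:
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                          # no statement stacking
        return False
    if not stripped.upper().startswith("SELECT"):  # SELECT-only by default
        return False
    return not PROHIBITED.search(stripped)       # no DDL/DML or system tables

assert guard("SELECT name FROM contacts WHERE tier = 'enterprise'")
assert not guard("DROP TABLE contacts")
assert not guard("SELECT * FROM sqlite_master")
assert not guard("SELECT 1; DELETE FROM contacts")
```

A keyword inside a string literal would trip this sketch's deny list, which is the usual trade-off: a guard this simple rejects some legitimate queries in exchange for rejecting all of the obvious attacks.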
Maintenance operations
Stats returns table sizes, row counts, index sizes, and storage usage per agent.
Vacuum reclaims unused space after large deletions. Agents that frequently insert and delete data benefit from periodic vacuuming.
Analyze updates the query planner’s statistics so it makes better optimization decisions. Recommended after large data loads.
Explain returns the query execution plan for a given query without running it. Useful for debugging slow queries.
Archive moves old data from a table to cold storage based on a date column threshold. The archived data can be restored later if needed, but it no longer occupies active storage or affects query performance.
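The underlying copy-then-delete pattern can be sketched with sqlite3, using hypothetical events and events_archive tables; the real endpoint presumably moves data to genuinely separate cold storage rather than a sibling table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (id INTEGER, occurred_at TEXT);
    CREATE TABLE events_archive (id INTEGER, occurred_at TEXT);
    INSERT INTO events VALUES (1, '2022-06-01'), (2, '2024-03-15');
""")

def archive(conn, cutoff):
    # Copy rows older than the cutoff into cold storage, then remove them
    # from the active table. The transaction makes the move atomic.
    with conn:
        conn.execute(
            "INSERT INTO events_archive SELECT * FROM events WHERE occurred_at < ?",
            (cutoff,))
        conn.execute("DELETE FROM events WHERE occurred_at < ?", (cutoff,))

archive(conn, "2024-01-01")
```

After the call, the 2022 row lives only in the archive table, so day-to-day queries against the active table never touch it.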
28 endpoints for a database layer sounds like a lot of surface area. But agents that need structured persistence deserve real database capabilities, not a watered-down abstraction. The storage route gives them the tools to build, populate, query, and maintain their own data store, all through the same REST API they use for email.