How OpenClaw memory actually works: MEMORY.md, daily notes, search, and dreaming
Most people talk about OpenClaw memory like it is a hidden magic layer. It is not. OpenClaw remembers things by writing plain Markdown files to disk, then retrieving the useful parts when the agent needs them. Once you get that, the rest of the system gets much less mysterious.
My opinion up front: most confusion around OpenClaw memory happens because people mix up four distinct things. MEMORY.md is durable memory. Daily files are short-term running notes. memory_search and memory_get are retrieval tools. Compaction is conversation compression. If you blur those together, the system feels random. If you separate them, it starts to feel sensible. Memory makes the agent more useful. It also makes sloppy setups more consequential.
Quick Note: If you are still shaping the personality layer, read my companion piece on OpenClaw SOUL.md examples and personality templates. And if you are connecting real accounts or work tools, also read my OpenClaw security checklist.
Short answer: does OpenClaw really remember things?
Yes, but only if the information makes it to disk. The official docs are very clear here: OpenClaw remembers things by writing plain Markdown files inside the agent workspace. There is no magical hidden state you can rely on forever. If the agent never saved the thing, or saved it badly, you should assume it is gone from long-term memory. That one idea explains a lot of the weirdness people blame on the model.
What sticks
MEMORY.md holds durable facts, preferences, and decisions that should survive beyond today.
What rolls
memory/YYYY-MM-DD.md is where recent context and observations accumulate day by day.
What retrieves
memory_search finds relevant notes and memory_get reads specific files or ranges.
What is not memory
Compaction keeps a conversation moving when context grows too large. It is not the same thing as durable recall.
The three files that matter most
OpenClaw’s memory overview names three memory-related files: MEMORY.md, daily notes in memory/YYYY-MM-DD.md, and the optional DREAMS.md. They live in the agent workspace, which defaults to ~/.openclaw/workspace. If you are new to this stack, that file-first design is the whole point. Your memory is inspectable. You can open it, edit it, prune it, and understand what the agent is leaning on.
| File | What it is for | What I would put there | What I would not put there |
|---|---|---|---|
| MEMORY.md | Long-term memory. Durable facts, preferences, and decisions. | Stable preferences, recurring tools, long-lived constraints, naming conventions. | Today’s clutter, temporary experiments, noisy session debris. |
| memory/YYYY-MM-DD.md | Daily running notes and observations. | Fresh discoveries, short-term facts, loose context, things worth revisiting tomorrow. | Permanent truths that should survive weeks of drift. |
| DREAMS.md | Human-readable output from the optional dreaming system. | Reviewable summaries of what the system noticed during consolidation. | Your main source of truth for durable memory. Promotions still land in MEMORY.md. |

A real MEMORY.md example you can copy
This is the sample I keep coming back to. Notice how thin it is. Durable memory should read like a terse dossier, not a journal. Every line earns its keep.
```markdown
# ~/.openclaw/workspace/MEMORY.md

## Preferences
- Ahmad prefers terse, no-fluff responses. Skip closing summaries.
- Default to Markdown output. Code fences for anything runnable.
- Timezone is Europe/London. Dates in ISO format.

## Tools
- Primary editor: VS Code. Shell: zsh on macOS.
- WordPress REST API is the source of truth for chatgptguide.ai posts.
- Embeddings provider: Ollama (nomic-embed-text) for privacy.

## Constraints
- Never run destructive git commands without confirmation.
- Do not publish drafts automatically. Leave status as "draft".

## Decisions
- 2026-03-22: Moved blog memory backend from QMD to builtin SQLite.
- 2026-04-02: Standardised on SOUL.md for voice, MEMORY.md for facts.
```
And here is what a daily note looks like on a normal working day. Daily notes are allowed to be messier. They are the scratchpad, not the dossier.
```markdown
# ~/.openclaw/workspace/memory/2026-04-07.md

## Context
- Reviewing OpenClaw memory post draft (id 8554) before publishing.
- User asked for copy-pasteable examples and a troubleshooting block.

## Observations
- memory_search recalls "SOUL.md" reliably once embeddings are on.
- BM25-only retrieval missed the phrase "hybrid search" twice this morning.

## Follow-ups
- Promote "review MEMORY.md monthly" rule into MEMORY.md if it sticks.
- Check whether dreaming Deep phase wrote anything last night.
```
What memory_search and memory_get actually do
This is the part that makes the file system useful instead of annoying. OpenClaw gives the agent two memory tools by default through the active memory plugin, usually memory-core. memory_search finds relevant notes, even when your search terms do not exactly match the original wording. memory_get reads a specific memory file or line range. In practice, that means one tool is for recall and the other is for inspection.
The memory search docs add the important detail: retrieval can run as hybrid search, combining vector search for meaning and BM25 keyword search for exact strings like IDs, error messages, and config keys. That is why OpenClaw memory feels much better once embeddings are configured. Semantic matches handle fuzzy recall. Keyword matches save you when the exact string matters.
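To make "hybrid" concrete, here is a toy Python sketch of how a vector similarity and a BM25 keyword score can be blended into one ranking score. This is an illustration of the idea, not OpenClaw's actual code; the alpha weighting and the normalisation against the best keyword score are assumptions for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(vector_sim, bm25_score, max_bm25, alpha=0.5):
    """Blend a semantic similarity and a BM25 keyword score.

    BM25 is normalised against the best keyword score in the result
    set so both signals live on a comparable 0-1 scale; alpha decides
    how much the semantic side counts.
    """
    keyword = bm25_score / max_bm25 if max_bm25 else 0.0
    return alpha * vector_sim + (1 - alpha) * keyword
```

The practical upshot: a note can rank well because its meaning matches the query, because it contains the exact string, or both, which is why exact IDs and error messages stay findable even with embeddings on.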

If you only configure one thing, configure the provider cleanly. OpenClaw supports OpenAI, Gemini, Voyage, Mistral, Bedrock, Ollama, and a local option for embeddings, and the docs also expose optional quality helpers like MMR for diversity and temporal decay for recency weighting. That is a fancy way of saying: make the search find the right note, not five versions of the same note from last month.
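For intuition, here is a small Python sketch of both helpers. Everything here is invented for illustration (the half-life, the lambda weight, the data shapes); it only shows the mechanics: temporal decay down-weights old notes, and MMR penalises results that are too similar to ones already picked.

```python
import math

def temporal_weight(score, age_days, half_life_days=30.0):
    """Temporal decay: exponentially down-weight older notes."""
    return score * math.exp(-math.log(2) * age_days / half_life_days)

def mmr(query_sim, pairwise_sim, k=3, lam=0.7):
    """Maximal Marginal Relevance: pick results that are relevant to
    the query but not near-duplicates of each other.

    query_sim: {doc_id: similarity to the query}
    pairwise_sim: {(doc_a, doc_b): similarity between two docs}
    """
    selected = []
    candidates = set(query_sim)
    while candidates and len(selected) < k:
        def mmr_score(d):
            redundancy = max(
                (pairwise_sim.get((d, s), pairwise_sim.get((s, d), 0.0))
                 for s in selected),
                default=0.0,
            )
            return lam * query_sim[d] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With two near-identical high-scoring notes, plain ranking returns both; MMR returns one of them plus the next genuinely different result, which is exactly the "not five versions of the same note" behavior.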
Compaction is not memory, and this is where people get lost
Compaction and memory touch each other, but they are not the same system. Compaction exists because every model has a context window. When a conversation gets too large, OpenClaw summarizes older turns so the session can continue. The full history still stays on disk, but what the model sees on the next turn gets compressed. That is different from durable memory files like MEMORY.md.
The bridge between the two is memory flush. Before compaction summarizes the conversation, OpenClaw runs a silent turn that reminds the agent to save important context to memory files. In plain English: compaction is trying not to forget the good stuff before it tidies the room. That does not guarantee perfect recall, but it is why compaction is adjacent to memory rather than identical to it.
| | Memory | Compaction |
|---|---|---|
| Job | Keep durable or retrievable knowledge available later. | Compress older conversation so the session can keep going. |
| Main place it writes | MEMORY.md, daily files, and optionally DREAMS.md. | The session transcript summary. |
| Why you feel it | The agent recalls something useful later. | The chat survives a long session without blowing the context window. |
| Common mistake | Expecting perfect recall without good notes or a search provider. | Assuming a compacted conversation automatically became long-term memory. |
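The flush-then-compact ordering can be sketched in a few lines of Python. Everything here is hypothetical (function names, the number of turns kept verbatim); it only shows the control flow: check the context budget, give the agent a chance to write memory files, then compress the older turns.

```python
def maybe_compact(turns, token_count, limit, flush, summarize):
    """Sketch of the flush-then-compact flow described above.

    flush: a silent turn reminding the agent to persist important
           context to memory files on disk before anything is lost.
    summarize: compresses the older turns into one summary entry.
    """
    if token_count <= limit:
        return turns  # plenty of room; nothing to do
    flush(turns)                     # save the good stuff first
    keep = turns[-4:]                # recent turns stay verbatim
    summary = summarize(turns[:-4])  # older turns become one summary
    return [summary] + keep
```

The key point the sketch encodes: the flush runs before summarisation, so durable facts get a chance to reach MEMORY.md, but nothing forces the agent to save everything, which is why compaction is adjacent to memory rather than identical to it.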

What dreaming actually is in plain English
Dreaming is OpenClaw’s optional background memory consolidation system in memory-core. It is disabled by default, and that is probably wise. The job of dreaming is not to make the agent feel poetic. Its job is to look at short-term signals, score what keeps recurring, and promote only qualified items into long-term memory. The docs describe it as explainable and reviewable, which is exactly the right design choice. If a system is going to decide what becomes durable memory, you want receipts.
Light
Stages recent short-term material, dedupes it, and records reinforcement signals. It never writes to MEMORY.md.
REM
Reflects on themes and recurring ideas. Useful for patterns, but still not the phase that writes durable memory.
Deep
Ranks candidates with thresholds and writes successful promotions to MEMORY.md. This is the phase that matters most.
The dream cycle defaults to a nightly cron schedule of 0 3 * * *, but dreaming stays off until you enable it.
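A toy sketch of the promotion idea behind the Deep phase: count how often a short-term observation recurs across daily notes, and only promote the ones that clear a threshold. The threshold value and the crude lowercase normalisation are invented for illustration; OpenClaw's actual scoring is richer than this.

```python
from collections import Counter

def promote_candidates(daily_observations, threshold=3):
    """Keep only observations that recur often enough to be worth
    promoting into durable memory. One-off noise never qualifies."""
    counts = Counter(obs.strip().lower() for obs in daily_observations)
    return [obs for obs, n in counts.items() if n >= threshold]
```

This is the "explainable and reviewable" property in miniature: the promotion decision reduces to a count you can inspect, not a vibe.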
My practical take: do not enable dreaming just because the name sounds clever. Enable it when you are using OpenClaw long enough that short-term observations keep surfacing across days and you want the system to promote only the recurring signal. If you are still learning the basics of MEMORY.md and daily notes, dreaming is probably not your first bottleneck.
Which memory backend should most people use?
For most people, the built-in engine is the right default. The docs say it stores the index in a per-agent SQLite database, needs no extra dependencies, and supports keyword search, vector search, and hybrid search. It also watches memory files for changes and can rebuild the index when the provider or chunking settings change. That is a sane default, which is not something I say lightly about AI tooling.
The same docs point to QMD if you need reranking, query expansion, or indexing beyond your default workspace memory files, and to Honcho if you want more AI-native cross-session memory behavior. That does not mean you should reach for those on day one. It means the built-in engine is the baseline, and the others are for when your use case grows teeth.
The minimum setup I would actually recommend
If you want OpenClaw memory to feel useful without becoming a side project, I would keep the setup boring. Start with the built-in backend. Pick a real embedding provider. Leave hybrid search on. Enable MMR and temporal decay if your history starts getting noisy. Only then worry about dreaming or alternate backends. The goal is not maximum cleverness. The goal is getting the right note back at the right time.
File: ~/.openclaw/config.json
```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "query": {
          "hybrid": {
            "mmr": { "enabled": true },
            "temporalDecay": { "enabled": true }
          }
        }
      }
    }
  }
}
```
Then, if you have a real reason to consolidate recurring signal in the background, enable dreaming explicitly instead of assuming it is already doing something in the shadows.
File: ~/.openclaw/config.json
```json
{
  "plugins": {
    "entries": {
      "memory-core": {
        "config": {
          "dreaming": {
            "enabled": true
          }
        }
      }
    }
  }
}
```
How to verify memory is actually working
“It is on” and “it is working” are different claims. Here is the sanity check I run whenever I change providers or move a workspace. It takes about two minutes and saves hours of blaming the model for things the config is quietly breaking.
- Plant a canary. In a live session, tell the agent: “Please save to MEMORY.md that my canary phrase is ‘blue octopus 42’.” Open ~/.openclaw/workspace/MEMORY.md in a text editor and confirm the line actually landed on disk. If it did not, memory writes are broken, not memory reads.
- Start a fresh session. Close the agent, reopen it, and ask “What is my canary phrase?” without any hints. A working setup recalls it. A broken one shrugs.
- Force a semantic match. Ask a rephrased version: “What weird code word did we agree on?” If keyword search alone answers this, great. If only hybrid search answers it, your embeddings provider is doing its job.
- Inspect the index. Check that the SQLite file exists at ~/.openclaw/memory/<agentId>.sqlite and has a non-trivial size. An empty or missing file means the built-in engine never indexed anything.
- Tail the logs. Run the agent with debug logging on and watch for memory_search and memory_get tool calls during the canary question. If they never fire, the model is not reaching for memory at all, which is usually a plugin or tool-exposure problem, not a search problem.
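The on-disk checks in that list are easy to script. A small Python helper, assuming the default paths mentioned above; the function names and the canary default are mine, not part of OpenClaw.

```python
from pathlib import Path

def check_memory_write(workspace, canary="blue octopus 42"):
    """Did the canary phrase actually reach MEMORY.md on disk?"""
    memory_file = Path(workspace) / "MEMORY.md"
    return memory_file.exists() and canary in memory_file.read_text()

def check_index(index_dir, agent_id):
    """Does the per-agent SQLite index exist with a non-trivial size?"""
    db = Path(index_dir) / f"{agent_id}.sqlite"
    return db.exists() and db.stat().st_size > 0
```

If check_memory_write fails, stop there: the write path is broken and no amount of retrieval tuning will help.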
Troubleshooting: when memory feels broken
| Symptom | Likely cause | What to check first |
|---|---|---|
| memory_search returns nothing | Either the index is empty or the tool is not exposed to the agent. | Confirm the SQLite file at ~/.openclaw/memory/<agentId>.sqlite has size > 0, and that memory-core is listed in your active plugins. |
| Embeddings provider errors on startup | API key missing, wrong model name, or the provider block is nested wrong. | Swap to the local or Ollama provider to isolate the issue. If that works, the original provider config is the problem, not memory. |
| MEMORY.md is bloated and noisy | Daily-note material has been promoted into durable memory by accident. | Open the file and cut anything that would not still be true next month. Move anything time-bound into a dated daily note instead. |
| Five near-identical results for every query | MMR is off, or the same fact has been written multiple times over days. | Enable mmr in memorySearch.query.hybrid, and deduplicate MEMORY.md by hand while you are in there. |
| Old, wrong facts keep resurfacing | Stale entries were never pruned, and temporal decay is off. | Turn on temporalDecay, then delete or correct the offending lines directly in MEMORY.md. |
| Agent remembers things in one session but forgets across restarts | Memory flush is not firing before compaction, or nothing is being written to disk. | Run the canary test above. If the canary never reaches MEMORY.md, the write path is broken, not the recall path. |
Pruning and memory hygiene
My rule of thumb: review MEMORY.md once a month, and delete anything you would not bother writing down today. If a line only made sense in the context of a specific project that shipped two sprints ago, it is noise now. Durable memory is a garden, not a landfill. Daily notes are allowed to sprawl because they age out on their own. MEMORY.md is not.
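Part of that monthly pass can be automated. Here is a hypothetical helper that flags dated lines, like the Decisions entries in the MEMORY.md example earlier, once they pass an age cutoff. It deliberately deletes nothing; it only surfaces candidates for a human to review.

```python
import re
from datetime import date, timedelta

def stale_lines(memory_text, max_age_days=180, today=None):
    """Flag dated bullet lines (e.g. '- 2026-03-22: ...') older than
    the cutoff so they can be reviewed during a hygiene pass."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    flagged = []
    for line in memory_text.splitlines():
        m = re.match(r"\s*-\s*(\d{4})-(\d{2})-(\d{2}):", line)
        if m and date(*map(int, m.groups())) < cutoff:
            flagged.append(line.strip())
    return flagged
```

Keeping the human in the loop matters here: a dated decision can still be true long after the date, so the script proposes and you dispose.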
Open MEMORY.md, scan top to bottom, and ask “is this still true, still useful, and still the most compact way to say it?” for each line. If any answer is no, edit or delete. Commit the change (see the git tip below) so you can always roll back.
Back up your memory with git (seriously)
Because OpenClaw memory is plain Markdown on disk, you can put the whole workspace under version control in about ten seconds. I run git init inside ~/.openclaw/workspace and commit after every hygiene pass. That gives me a history of what the agent “knew” on any given day, a safety net for accidental deletions, and an audit trail if the agent ever writes something surprising to MEMORY.md. If you would rather not use git, dragging the workspace folder into a cloud-synced directory works too — just keep in mind that the SQLite index can thrash a sync client, so exclude ~/.openclaw/memory from sync and let it rebuild locally.
Common mistakes that make OpenClaw memory feel worse
Stuffing permanent rules into the wrong file
If it is a stable preference or durable decision, it belongs in MEMORY.md. If it is today’s running context, it belongs in the daily file. If it is personality, it belongs in SOUL.md, not memory.
Expecting compaction to become long-term recall automatically
Compaction keeps the session alive. Memory keeps important facts available later. They touch, but they are not interchangeable.
Running memory search without a clean provider setup
Without embeddings, you are leaning much harder on keyword search alone. That can work, but it is a worse experience for fuzzy recall.
Turning on dreaming before you understand the baseline
Dreaming is interesting, but it is not the first lever to pull. Get the file structure and retrieval working first.
Field notes: what I think most people still get wrong
Most bad OpenClaw memory setups are not failing because the agent is dumb. They are failing because the memory substrate is noisy. People want the system to remember everything, but useful memory is selective by design. Durable memory should be compact and high signal. Daily notes can be messier. Retrieval should be tuned. And dreaming should stay optional until the basics are stable. If you let all of those layers collapse into one blob, the agent starts to feel forgetful and weirdly overconfident at the same time.
FAQs
Is MEMORY.md the same as AGENTS.md or SOUL.md?
No. MEMORY.md is for durable facts, preferences, and decisions. SOUL.md is the identity and voice layer. AGENTS.md is the operating instruction layer.
Does OpenClaw memory work without an API key?
Yes, but you may be limited to keyword-only behavior unless you configure a local or supported embedding provider. The docs list local, Ollama, and other provider options.
Where does the built-in memory index live?
The built-in engine stores the index in a per-agent SQLite database at ~/.openclaw/memory/<agentId>.sqlite. That is one reason it is a solid default for most users.
Should I enable dreaming immediately?
Probably not. Get the basics working first: good notes, a real provider, and clean retrieval. Dreaming helps once recurring signal is the problem.
Can I edit MEMORY.md by hand?
Yes, and it is actively encouraged. That is the whole point of storing memory as plain Markdown on disk. Open it in any editor, prune noise, fix stale facts, reorganise sections. The built-in engine watches memory files for changes and will rebuild its index automatically.
Does OpenClaw memory sync across machines?
Not by itself. Memory lives in ~/.openclaw/workspace on whatever machine the agent is running on. If you want it to follow you between laptops, put the workspace under git, or sync just the workspace folder (not the SQLite index in ~/.openclaw/memory) through Dropbox, iCloud, or similar.
How big can MEMORY.md get before it hurts?
There is no hard limit, but the practical ceiling is “how much of it the model is willing to keep in context alongside everything else.” In my experience, anything past a few thousand words starts to dilute recall and crowd out the rest of the session. Treat that as your prune signal. If MEMORY.md is getting long, it almost always means daily-note material snuck in where it did not belong.
If you want to make OpenClaw feel more useful after memory, the next layer to fix is usually either personality or security. That means SOUL.md on one side and blast-radius control on the other. Useful agents are not just smart. They are shaped well and contained well. If you are still early in the stack decision itself, my OpenClaw vs n8n comparison walks through when a file-first agent is actually the right call, and the Hermes Agent Setup Guide is the closest thing I have to a from-scratch install walkthrough for the same audience.

