| name | smart-memory |
| description | Persistent cognitive memory system. Long-term memory management backed by local vector storage. Use when the user says "save memory", "recall", or "long-term memory". |
| version | 1.0.0 |
| triggers | ["智能记忆","smart memory"] |
Smart Memory v2 is a persistent cognitive memory runtime, not a legacy vector-memory CLI.
Core runtime:
- `smart-memory/index.js`
- `server.py` (FastAPI)
- `cognitive_memory_system.py` (memory types: episodic, semantic, belief, goal)
- HTTP API (`/health`, `/memories`, `/memory/{id}`, `/insights/pending`)

Use the native OpenClaw skill package:
- `skills/smart-memory-v25/index.js`
- `skills/smart-memory-v25/openclaw-hooks.js`
- `skills/smart-memory-v25/SKILL.md`

Primary exports:
- `createSmartMemorySkill(options)`
- `createOpenClawHooks({ skill, agentIdentity, summarizeWithLLM })`

`memory_search`
- Parameters:
  - `query` (string, required)
  - `type` (`all|semantic|episodic|belief|goal`, default `all`)
  - `limit` (number, default 5)
  - `min_relevance` (number, default 0.6)
- Checks `/health` first, then retrieves via `/retrieve` and returns formatted memory results.

`memory_commit`
- Parameters:
  - `content` (string, required)
  - `type` (`semantic|episodic|belief|goal`, required)
  - `importance` (1-10, default 5)
  - `tags` (string array, optional)
- Checks `/health` first.
- Suited to durable cognitive content (e.g. `working_question` entries, decision heuristics).
- If the server is unreachable, the commit is queued in `memory_retry_queue.json` and the tool returns: "Memory commit failed - server unreachable. Queued for retry."

`memory_insights`
- Parameters:
  - `limit` (number, default 10)
- Checks `/health` first, calls `/insights/pending`, and returns a formatted insight list.

All tools verify server availability first (`GET /health`).

The v2.5 skill supports episodic session arc capture:
Flow:
1. At session end, the conversation is summarized with the prompt: "Summarize this session arc: What was the goal? What approaches were tried? What decisions were made? What remains open?"
2. The summary is committed via `memory_commit` as:
   - `type: "episodic"`
   - `tags: ["session_arc", "YYYY-MM-DD"]`

Use `inject_active_context` (or `createOpenClawHooks().beforeModelResponse`) before response generation.
This adds the standardized block:
```
[ACTIVE CONTEXT]
Status: {status}
Active Projects: {active_projects}
Working Questions: {working_questions}
Top of Mind: {top_of_mind}
Pending Insights:
- {insight_1}
- {insight_2}
[/ACTIVE CONTEXT]
```
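As a minimal sketch, the block above could be rendered from a plain state object like this. The `renderActiveContext` helper and its field names are illustrative only; they are not exports of `smart-memory-v25`:

```javascript
// Illustrative helper: renders the [ACTIVE CONTEXT] block from a plain
// state object. Field names mirror the template above; the function
// itself is a sketch, not part of the actual skill package.
function renderActiveContext(state) {
  const lines = [
    "[ACTIVE CONTEXT]",
    `Status: ${state.status}`,
    `Active Projects: ${state.active_projects.join(", ")}`,
    `Working Questions: ${state.working_questions.join(", ")}`,
    `Top of Mind: ${state.top_of_mind}`,
  ];
  if (state.pending_insights.length > 0) {
    lines.push("Pending Insights:");
    for (const insight of state.pending_insights) {
      lines.push(`- ${insight}`);
    }
  }
  lines.push("[/ACTIVE CONTEXT]");
  return lines.join("\n");
}
```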
Add this guidance line to your agent base prompt:
If pending insights appear in your context that relate to the current conversation, surface them naturally to the user. Do not force it - but if there is a genuine connection, seamlessly bring it up.
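The `memory_commit` fallback (queueing to `memory_retry_queue.json` when the server is unreachable) could look roughly like the sketch below. `commitOrQueue` and its injected `checkHealth`/`sendCommit` callbacks are hypothetical names for illustration, not the skill's actual internals:

```javascript
// Hypothetical sketch of the memory_commit fallback path: if the
// /health probe fails, the payload is appended to a local retry queue
// instead of being sent. checkHealth/sendCommit are injected so the
// logic stays testable; none of these names are real skill internals.
async function commitOrQueue(payload, { checkHealth, sendCommit, retryQueue }) {
  const healthy = await checkHealth().catch(() => false);
  if (!healthy) {
    retryQueue.push(payload); // persisted as memory_retry_queue.json in the real skill
    return "Memory commit failed - server unreachable. Queued for retry.";
  }
  return sendCommit(payload);
}
```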
```js
const {
  createSmartMemorySkill,
  createOpenClawHooks,
} = require("./skills/smart-memory-v25");

const memory = createSmartMemorySkill({
  baseUrl: "http://127.0.0.1:8000",
  summarizeSessionArc: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

const hooks = createOpenClawHooks({
  skill: memory.skill,
  agentIdentity: "OpenClaw Agent",
  summarizeWithLLM: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

// Register memory.tools as callable tools:
// - memory_search
// - memory_commit
// - memory_insights
// and call hooks.beforeModelResponse / hooks.onTurn / hooks.onSessionEnd at lifecycle points.
```
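The session arc capture described earlier can be sketched as a payload builder for `memory_commit`. The `buildSessionArcCommit` name and the importance default are assumptions for illustration; only the `type` and tag shape come from the flow above:

```javascript
// Illustrative sketch: turn a session-arc summary into a memory_commit
// payload with the tags described above. The helper name and the
// importance default are assumptions, not the skill's actual internals.
function buildSessionArcCommit(summary, date = new Date()) {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return {
    content: summary,
    type: "episodic",
    importance: 5,
    tags: ["session_arc", day],
  };
}
```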
Skill lifecycle methods:
- `start()` / `init()`
- `ingestMessage(interaction)`
- `retrieveContext({ user_message, conversation_history })`
- `getPromptContext(promptComposerRequest)`
- `runBackground(scheduled)`
- `stop()`

Server endpoints:
- `GET /health`
- `POST /ingest`
- `POST /retrieve`
- `POST /compose`
- `POST /run_background`
- `GET /memories`
- `GET /memory/{memory_id}`
- `GET /insights/pending`

For Docker, WSL, and laptops without NVIDIA GPUs, use CPU-only PyTorch.
```bash
# from repository root
cd smart-memory

# Create Python venv
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate

# Install CPU-only PyTorch FIRST
pip install torch --index-url https://download.pytorch.org/whl/cpu

# Then install remaining dependencies
pip install -r requirements-cognitive.txt

# Finally, install Node dependencies
npm install
```
The Python setup is wired into the Node install (`npm install` → `postinstall.js`), so CPU wheels are always used.

Legacy vector-memory CLI artifacts (`smart_memory.js`, `vector_memory_local.js`, `focus_agent.js`) are removed in v2.
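A hypothetical `postinstall.js` step could build its pip invocation like this. Only the `--index-url` value comes from the install steps above; the helper name and structure are assumptions:

```javascript
// Hypothetical sketch of the pip command a postinstall step could run
// to force CPU-only PyTorch wheels. Only the --index-url value is taken
// from the install steps above; the helper itself is illustrative.
function buildCpuTorchInstallArgs() {
  return [
    "install",
    "torch",
    "--index-url",
    "https://download.pytorch.org/whl/cpu",
  ];
}
// e.g. spawnSync("pip", buildCpuTorchInstallArgs(), { stdio: "inherit" });
```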