
add setup

2026-03-01 18:56:39 +09:00
parent eca2bdce77
commit 7d0a1fc000
2 changed files with 160 additions and 30 deletions


@@ -2,41 +2,62 @@
## Overview
MCP server for AI memory. Reads/writes core.json and memory/*.json in atproto lexicon record format.
MCP server for AI memory. 1 TID = 1 memory element. ATProto lexicon record format.
## Design
- AI decides, tool records
- File I/O only, no database
- 4 MCP tools: read_core, read_memory, save_memory, compress
- Storage format: atproto getRecord JSON
- 1 TID = 1 memory element (not a monolithic blob)
- `memory` setting controls max record count (default: 100)
- `compress` consolidates records when limit is exceeded
- `instructions` in MCP initialize delivers core + all memories to client
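Since each memory element is keyed by a TID (ATProto's timestamp identifier), a minimal sketch of TID generation may help. This assumes the standard ATProto scheme (53 bits of microseconds plus a 10-bit clock id, base32-sortable encoded); the names here are illustrative, not taken from this codebase:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Base32-sortable alphabet used by ATProto TIDs.
const B32: &[u8] = b"234567abcdefghijklmnopqrstuvwxyz";

/// Sketch of a TID generator: 53 bits of microseconds since the epoch
/// plus a 10-bit clock id, encoded as 13 base32-sortable characters.
fn new_tid(clock_id: u64) -> String {
    let micros = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_micros() as u64;
    let v = ((micros & ((1 << 53) - 1)) << 10) | (clock_id & 0x3ff);
    (0..13)
        .rev()
        .map(|i| B32[((v >> (i * 5)) & 0x1f) as usize] as char)
        .collect()
}
```

Because the timestamp occupies the high bits, TIDs sort lexicographically by creation time, which is what makes "latest memory" a simple filename sort.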
## MCP Tools
| Tool | Args | Description |
|------|------|-------------|
| read_core | none | Returns core.json record |
| read_memory | none | Returns latest memory record |
| save_memory | content: string | Creates new memory record (version increments) |
| compress | conversation: string | Same as save_memory (AI compresses before calling) |
| read_core | none | Returns core record (identity, personality) |
| read_memory | none | Returns all memory records as array |
| save_memory | content: string | Adds a single memory element |
| compress | items: string[] | Replaces all records with compressed set |
compress note: AI decides what to keep/discard. Tool just writes.
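For illustration, a `tools/call` request for `save_memory` over stdio might look like this (a sketch of the standard MCP JSON-RPC shape, not output captured from this server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "save_memory",
    "arguments": { "content": "syui prefers Rust" }
  }
}
```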
## Config
```json
{
  "bot": {
    "did": "did:plc:xxx",
    "handle": "ai.syui.ai",
    "path": "~/ai/log/public/content",
    "memory": 100
  }
}
```
- Config file: `~/.config/ai.syui.gpt/config.json` (Linux) / `~/Library/Application Support/ai.syui.gpt/config.json` (macOS)
- Same format as site config.json (`bot` field)
- `memory`: max number of records (default: 100)
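The platform-dependent config location can be sketched as below (a hypothetical helper mirroring the documented paths; the real `core/config.rs` may resolve the home directory via the `dirs` crate instead of `$HOME`):

```rust
use std::path::PathBuf;

/// Resolve the per-platform config file path described above.
/// Illustrative sketch, not the actual implementation.
fn config_file() -> PathBuf {
    let home = PathBuf::from(std::env::var("HOME").unwrap_or_else(|_| ".".into()));
    // macOS: ~/Library/Application Support, elsewhere: ~/.config
    #[cfg(target_os = "macos")]
    let base = home.join("Library").join("Application Support");
    #[cfg(not(target_os = "macos"))]
    let base = home.join(".config");
    base.join("ai.syui.gpt").join("config.json")
}
```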
## Data
```
~/Library/Application Support/ai.syui.gpt/ (macOS)
~/.local/share/ai.syui.gpt/ (Linux)
├── core.json ← read only, rkey: self
└── memory/
    ├── {tid1}.json ← version 1
    ├── {tid2}.json ← version 2
    └── {tid3}.json ← version 3 (latest)
$path/{did}/{collection}/{rkey}.json
e.g.
~/ai/log/public/content/
└── did:plc:xxx/
    ├── ai.syui.gpt.core/
    │   └── self.json
    └── ai.syui.gpt.memory/
        ├── {tid1}.json ← "syui prefers Rust"
        ├── {tid2}.json ← "knows ATProto design well"
        └── {tid3}.json ← "Genshin Impact player"
```
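The `$path/{did}/{collection}/{rkey}.json` layout above reduces to simple path joining; a sketch (function name is illustrative):

```rust
use std::path::PathBuf;

/// Map a record to its on-disk location:
/// $path/{did}/{collection}/{rkey}.json
fn record_path(base: &str, did: &str, collection: &str, rkey: &str) -> PathBuf {
    PathBuf::from(base)
        .join(did)
        .join(collection)
        .join(format!("{rkey}.json"))
}
```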
## Record Format
core (single record, rkey: self):
core (rkey: self):
```json
{
  "uri": "at://{did}/ai.syui.gpt.core/self",
@@ -53,7 +74,7 @@ core (single record, rkey: self):
}
```
memory (multiple records, rkey: tid):
memory (rkey: tid, 1 element per record):
```json
{
  "uri": "at://{did}/ai.syui.gpt.memory/{tid}",
@@ -62,9 +83,8 @@ memory (multiple records, rkey: tid):
    "did": "did:plc:xxx",
    "content": {
      "$type": "ai.syui.gpt.memory#markdown",
      "text": "# Memory\n\n## ..."
      "text": "syui prefers Rust"
    },
    "version": 5,
    "createdAt": "2026-03-01T12:00:00Z"
  }
}
@@ -74,33 +94,44 @@ memory (multiple records, rkey: tid):
```
src/
├── mcp/server.rs ← JSON-RPC over stdio
├── core/reader.rs ← read core.json, memory/*.json
├── core/writer.rs ← write memory/{tid}.json
└── main.rs ← CLI + MCP server
├── mcp/server.rs ← JSON-RPC over stdio, instructions
├── core/config.rs ← config loading, path resolution
├── core/reader.rs ← read core.json, memory/*.json
├── core/writer.rs ← save_memory, compress_memory
└── main.rs ← CLI + MCP server
```
## Memory Flow
1. `save_memory("fact")` → creates 1 TID file
2. Records accumulate: 1 TID = 1 fact
3. When records exceed `memory` limit → AI calls `compress`
4. `compress(["kept1", "kept2", ...])` → deletes all, writes new set
5. MCP `initialize` → delivers core + all memories as `instructions`
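The delete-all-then-rewrite step of `compress` can be sketched with plain `std::fs` calls (file naming is simplified here; the real writer uses TIDs and full record JSON):

```rust
use std::fs;
use std::path::Path;

/// Sketch of the compress step: wipe the memory collection and
/// write one file per kept item. Illustrative, not the real writer.
fn compress(dir: &Path, items: &[&str]) -> std::io::Result<()> {
    // Delete every existing memory record.
    for entry in fs::read_dir(dir)? {
        fs::remove_file(entry?.path())?;
    }
    // Write the compressed set, one element per file.
    for (i, item) in items.iter().enumerate() {
        fs::write(dir.join(format!("{i}.json")), format!("{{\"text\":{item:?}}}"))?;
    }
    Ok(())
}
```

Because the old set is only deleted inside the same call that writes the new one, the record count drops back under the `memory` limit in a single tool invocation.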
## Compression Rules
When compress is called, AI should:
- Keep facts and decisions
- Discard procedures and processes
- Discard outdated or redundant entries
- Merge related items
- Resolve contradictions (keep newer)
- Don't duplicate core.json content
## Usage
```bash
aigpt # show config and status
aigpt server # start MCP server
aigpt read-core # CLI: read core.json
aigpt read-memory # CLI: read latest memory
aigpt save-memory "..." # CLI: create new memory record
aigpt read-core # read core record
aigpt read-memory # read all memory records
aigpt save-memory "..." # add a single memory element
```
## Tech
- Rust, MCP (JSON-RPC over stdio), atproto record format, file I/O only
- Rust, MCP (JSON-RPC over stdio), ATProto record format, file I/O only
## History
Previous versions (v0.1-v0.3) had multi-layer architecture with SQLite, Big Five personality analysis, relationship inference, gamification, and companion systems. Rewritten to current simple design. Old docs preserved in docs/archive/.
Previous versions (v0.1-v0.3) had multi-layer architecture with SQLite, Big Five personality analysis, relationship inference, gamification, and companion systems. Rewritten to current simple design.


@@ -1,5 +1,6 @@
use anyhow::Result;
use clap::{Parser, Subcommand};
use std::process::Command;
use aigpt::core::{config, reader, writer};
use aigpt::mcp::MCPServer;
@@ -15,6 +16,9 @@ struct Cli {
#[derive(Subcommand)]
enum Commands {
    /// Initial setup: clone repo, link config, register MCP
    Setup,
    /// Start MCP server (JSON-RPC over stdio)
    Server,
@@ -32,9 +36,14 @@ enum Commands {
}
fn main() -> Result<()> {
    config::init();
    let cli = Cli::parse();
    if let Some(Commands::Setup) = &cli.command {
        return run_setup();
    }
    config::init();
    match cli.command {
        None => {
            print_status();
@@ -65,11 +74,101 @@ fn main() -> Result<()> {
            writer::save_memory(&content)?;
            println!("Saved. ({} records)", reader::memory_count());
        }
        Some(Commands::Setup) => unreachable!(),
    }
    Ok(())
}
fn run_setup() -> Result<()> {
    let home = dirs::home_dir().expect("Cannot find home directory");
    let ai_dir = home.join("ai");
    let log_dir = ai_dir.join("log");
    let cfg_dir = config::config_file()
        .parent()
        .unwrap()
        .to_path_buf();
    let cfg_file = config::config_file();
    let site_config = log_dir.join("public").join("config.json");
    let aigpt_bin = std::env::current_exe().unwrap_or_else(|_| "aigpt".into());

    // 1. ~/ai/
    std::fs::create_dir_all(&ai_dir)?;
    println!("ok {}/", ai_dir.display());

    // 2. git clone
    if !log_dir.exists() {
        println!("cloning ai/log...");
        let status = Command::new("git")
            .args(["clone", "https://git.syui.ai/ai/log"])
            .current_dir(&ai_dir)
            .status()?;
        if !status.success() {
            anyhow::bail!("git clone failed");
        }
        println!("ok {}/", log_dir.display());
    } else {
        println!("skip {} (exists)", log_dir.display());
    }

    // 3. config symlink
    std::fs::create_dir_all(&cfg_dir)?;
    if !cfg_file.exists() {
        #[cfg(unix)]
        std::os::unix::fs::symlink(&site_config, &cfg_file)?;
        #[cfg(windows)]
        std::os::windows::fs::symlink_file(&site_config, &cfg_file)?;
        println!("ok {} -> {}", cfg_file.display(), site_config.display());
    } else if cfg_file.is_symlink() {
        let target = std::fs::read_link(&cfg_file)?;
        if target == site_config {
            println!("skip {} (linked)", cfg_file.display());
        } else {
            println!("skip {} (symlink -> {})", cfg_file.display(), target.display());
        }
    } else {
        println!("skip {} (exists)", cfg_file.display());
    }

    // 4. init data dirs
    config::init();
    println!("ok data initialized");

    // 5. claude mcp add
    if is_command_available("claude") {
        let status = Command::new("claude")
            .args([
                "mcp", "add",
                "--transport", "stdio",
                "aigpt",
                "--scope", "user",
                "--",
            ])
            .arg(&aigpt_bin)
            .arg("server")
            .status()?;
        if status.success() {
            println!("ok claude mcp add aigpt");
        } else {
            println!("warn claude mcp add failed");
        }
    } else {
        println!("skip claude mcp add (claude not found)");
    }

    println!("\ndone.");
    Ok(())
}
fn is_command_available(cmd: &str) -> bool {
    Command::new("which")
        .arg(cmd)
        .output()
        .map(|o| o.status.success())
        .unwrap_or(false)
}
fn print_status() {
    let cfg = config::load();
    let did = cfg.did.clone().unwrap_or_else(|| "self".to_string());