Add Layer 4 caching to reduce AI load
Implemented 5-minute short-term caching for relationship inference:

**store.rs**:
- Added relationship_cache SQLite table
- save_relationship_cache(), get_cached_relationship()
- save_all_relationships_cache(), get_cached_all_relationships()
- clear_relationship_cache() (called on memory create/update/delete)
- Cache duration: 5 minutes (configurable constant)

**relationship.rs**:
- Modified infer_all_relationships() to use cache
- Added get_relationship() function with caching support
- Cache hit: return immediately
- Cache miss: compute, save to cache, return

**base.rs**:
- Updated tool_get_relationship() to use cached version
- Reduced load from O(n) scan to O(1) cache lookup

**Benefits**:
- Reduces AI load when frequently querying relationships
- Automatic cache invalidation on data changes
- Scales better with growing memory count
- No user-facing changes

**Documentation**:
- Updated ARCHITECTURE.md with caching strategy details

This addresses scalability concerns for Layer 4 as memory data grows.
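A minimal sketch of the hit/miss flow the message describes for get_relationship(): entries older than the 5-minute window count as misses, and any memory change clears everything. This uses an in-memory HashMap purely for illustration; the actual commit persists entries in the relationship_cache SQLite table, and the Relationship struct, its fields, and the infer closure here are hypothetical stand-ins.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// 5-minute cache window, matching the configurable constant in the commit.
const CACHE_TTL: Duration = Duration::from_secs(5 * 60);

// Hypothetical stand-in for an inferred Layer 4 relationship.
#[derive(Clone, Debug)]
struct Relationship {
    entity_id: String,
    score: f64,
}

struct RelationshipCache {
    entries: HashMap<String, (Instant, Relationship)>,
}

impl RelationshipCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Cache hit: return immediately. Cache miss (or stale entry): recompute,
    /// store with a fresh timestamp, then return.
    fn get_relationship<F>(&mut self, entity_id: &str, infer: F) -> Relationship
    where
        F: FnOnce() -> Relationship, // the O(n) Layer 1-3.5 inference
    {
        if let Some((stored_at, cached)) = self.entries.get(entity_id) {
            if stored_at.elapsed() < CACHE_TTL {
                return cached.clone(); // hit: O(1), no inference
            }
        }
        let fresh = infer(); // miss: fall back to full computation
        self.entries
            .insert(entity_id.to_string(), (Instant::now(), fresh.clone()));
        fresh
    }

    /// Called on memory create/update/delete so stale inferences never outlive a data change.
    fn invalidate(&mut self) {
        self.entries.clear();
    }
}

fn main() {
    let mut cache = RelationshipCache::new();
    // First call is a miss and runs the (placeholder) inference closure.
    let first = cache.get_relationship("alice", || Relationship {
        entity_id: "alice".into(),
        score: 0.8,
    });
    // Second call within 5 minutes is a hit: the closure is not invoked.
    let second = cache.get_relationship("alice", || Relationship {
        entity_id: "alice".into(),
        score: 0.0, // never used; the cached value is returned instead
    });
    println!("{first:?} / {second:?}");
}
```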
@@ -362,10 +362,16 @@ if user.extraversion < 0.5 {
 
 ### Design Philosophy
 
-**Inference only, no storage**:
+**Inference-based + short-term caching**:
 - Computed from Layer 1-3.5 every time
-- No cache (simplicity first)
-- Caching can be added later
+- 5-minute short-term cache to reduce load
+- Cache invalidated when memory is updated
+
+**Caching strategy**:
+- Stored in a SQLite table (`relationship_cache`)
+- Individual entity: `get_relationship(entity_id)`
+- Full list: `list_relationships()`
+- Cleared automatically on memory create/update/delete
 
 **Independence**:
 - Depends on Layer 1-3.5
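As a rough illustration of how the caching-strategy bullets above could map onto the store-level helpers named in the commit message, here is a sketch assuming rusqlite and a simple (entity_id, payload, cached_at) schema. Only the table name, the helper names, and the 5-minute window come from the commit itself; the column layout, the serialized payload, and the exact signatures are assumptions, and the all-relationships variants would follow the same pattern under a fixed key.

```rust
use rusqlite::{params, Connection, OptionalExtension, Result};

/// 5-minute cache window (the "configurable constant" from the commit message).
const CACHE_TTL_SECS: i64 = 5 * 60;

fn init_cache_table(conn: &Connection) -> Result<()> {
    conn.execute(
        "CREATE TABLE IF NOT EXISTS relationship_cache (
            entity_id TEXT PRIMARY KEY,  -- entity the inference was run for
            payload   TEXT NOT NULL,     -- serialized inference result (assumed JSON)
            cached_at INTEGER NOT NULL   -- unix timestamp in seconds
        )",
        [],
    )?;
    Ok(())
}

fn save_relationship_cache(conn: &Connection, entity_id: &str, payload: &str) -> Result<()> {
    conn.execute(
        "INSERT OR REPLACE INTO relationship_cache (entity_id, payload, cached_at)
         VALUES (?1, ?2, strftime('%s', 'now'))",
        params![entity_id, payload],
    )?;
    Ok(())
}

/// Returns Some(payload) only for entries younger than the TTL; stale rows read as misses.
fn get_cached_relationship(conn: &Connection, entity_id: &str) -> Result<Option<String>> {
    conn.query_row(
        "SELECT payload FROM relationship_cache
         WHERE entity_id = ?1 AND strftime('%s', 'now') - cached_at < ?2",
        params![entity_id, CACHE_TTL_SECS],
        |row| row.get(0),
    )
    .optional()
}

/// Called from memory create/update/delete so no stale inference outlives a data change.
fn clear_relationship_cache(conn: &Connection) -> Result<()> {
    conn.execute("DELETE FROM relationship_cache", [])?;
    Ok(())
}
```

Keeping the TTL comparison inside the SQL query lets a hit check stay a single statement, and the blanket DELETE on any memory create/update/delete trades some cache efficiency for straightforward correctness.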