Add Layer 4 caching to reduce AI load

Implemented 5-minute short-term caching for relationship inference:

**store.rs**:
- Added relationship_cache SQLite table
- save_relationship_cache(), get_cached_relationship()
- save_all_relationships_cache(), get_cached_all_relationships()
- clear_relationship_cache() - called on memory create/update/delete
- Cache duration: 5 minutes (configurable constant)
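The store-level behavior above can be sketched with an in-memory map standing in for the actual `relationship_cache` SQLite table; the type and constant names here (`RelationshipCache`, `CACHE_TTL`) are illustrative, not the real store.rs API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Configurable cache duration, mirroring the 5-minute constant in store.rs.
const CACHE_TTL: Duration = Duration::from_secs(5 * 60);

struct RelationshipCache {
    // key -> (stored_at, payload)
    entries: HashMap<String, (Instant, String)>,
}

impl RelationshipCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Mirrors save_relationship_cache(): store a payload with a timestamp.
    fn save(&mut self, key: &str, payload: String) {
        self.entries.insert(key.to_string(), (Instant::now(), payload));
    }

    // Mirrors get_cached_relationship(): return the payload only while it is
    // younger than CACHE_TTL; an expired entry behaves like a miss.
    fn get(&self, key: &str) -> Option<&str> {
        self.entries.get(key).and_then(|(stored_at, payload)| {
            if stored_at.elapsed() < CACHE_TTL {
                Some(payload.as_str())
            } else {
                None
            }
        })
    }

    // Mirrors clear_relationship_cache(): drop everything on data changes.
    fn clear(&mut self) {
        self.entries.clear();
    }
}

fn main() {
    let mut cache = RelationshipCache::new();
    cache.save("a->b", "related".to_string());
    assert_eq!(cache.get("a->b"), Some("related"));
    cache.clear();
    assert_eq!(cache.get("a->b"), None);
}
```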

**relationship.rs**:
- Modified infer_all_relationships() to use cache
- Added get_relationship() function with caching support
- Cache hit: return immediately
- Cache miss: compute, save to cache, return
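The hit/miss flow in `get_relationship()` can be sketched as follows; the plain `HashMap` cache and the `infer_relationship` stand-in are assumptions for illustration, not the real store.rs or inference API:

```rust
use std::collections::HashMap;

// Cache hit: return immediately. Cache miss: compute, save to cache, return.
fn get_relationship(
    cache: &mut HashMap<(String, String), String>,
    a: &str,
    b: &str,
) -> String {
    let key = (a.to_string(), b.to_string());
    if let Some(hit) = cache.get(&key) {
        return hit.clone(); // hit: skip inference entirely
    }
    let inferred = infer_relationship(a, b); // miss: run the expensive step
    cache.insert(key, inferred.clone());     // save for subsequent calls
    inferred
}

// Stand-in for the expensive AI inference call.
fn infer_relationship(a: &str, b: &str) -> String {
    format!("{} relates to {}", a, b)
}

fn main() {
    let mut cache = HashMap::new();
    let first = get_relationship(&mut cache, "alice", "bob");  // miss: computes
    let second = get_relationship(&mut cache, "alice", "bob"); // hit: cached
    assert_eq!(first, second);
}
```

Only the first call per key pays the inference cost; every later call within the cache window is a map lookup.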

**base.rs**:
- Updated tool_get_relationship() to use cached version
- Reduced per-query cost from an O(n) scan to an O(1) cache lookup

**Benefits**:
- Reduces AI inference load when relationships are queried frequently
- Automatic cache invalidation on data changes
- Scales better with growing memory count
- No user-facing changes
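The invalidation rule ("clear on memory create/update/delete") can be sketched like this; the `MemoryStore` shape here is a hypothetical stand-in for the real struct:

```rust
use std::collections::HashMap;

// Every mutating operation on the store clears the relationship cache, so a
// stale inference is never served after the underlying data changes.
struct MemoryStore {
    memories: Vec<String>,
    relationship_cache: HashMap<String, String>,
}

impl MemoryStore {
    fn new() -> Self {
        Self { memories: Vec::new(), relationship_cache: HashMap::new() }
    }

    // Mirrors memory creation; update and delete would call the same hook.
    fn create_memory(&mut self, text: &str) {
        self.memories.push(text.to_string());
        self.clear_relationship_cache(); // automatic invalidation
    }

    fn clear_relationship_cache(&mut self) {
        self.relationship_cache.clear();
    }
}

fn main() {
    let mut store = MemoryStore::new();
    store.relationship_cache.insert("a->b".into(), "related".into());
    store.create_memory("new fact");
    assert!(store.relationship_cache.is_empty()); // stale entry was dropped
}
```

The tradeoff of whole-cache clearing is simplicity over precision: a single memory change discards all cached inferences, which is acceptable while recomputation is cheap relative to correctness risk.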

**Documentation**:
- Updated ARCHITECTURE.md with caching strategy details

This addresses scalability concerns for Layer 4 as memory data grows.
Author: Claude
Date: 2025-11-06 09:33:42 +00:00
Parent: 8037477104
Commit: 2579312029
5 changed files with 203 additions and 39 deletions


```diff
@@ -9,5 +9,5 @@ pub use analysis::UserAnalysis;
 pub use error::{MemoryError, Result};
 pub use memory::Memory;
 pub use profile::{UserProfile, TraitScore};
-pub use relationship::{RelationshipInference, infer_all_relationships};
+pub use relationship::{RelationshipInference, infer_all_relationships, get_relationship};
 pub use store::MemoryStore;
```