Compare commits

12 commits: feature/sh ... b410c83605

| SHA1 |
|---|
| b410c83605 |
| 334e17a53e |
| df86fb827e |
| 5a441e847d |
| 948bbc24ea |
| d4de0d4917 |
| 3487535e08 |
| 1755dc2bec |
| 42c85fc820 |
| 4a441279fb |
| e7e57b7b4b |
| 8c0961ab2f |
@@ -42,7 +42,15 @@
     "Bash(echo:*)",
     "Bash(aigpt shell:*)",
     "Bash(aigpt maintenance)",
-    "Bash(aigpt status syui)"
+    "Bash(aigpt status syui)",
+    "Bash(cp:*)",
+    "Bash(./setup_venv.sh:*)",
+    "WebFetch(domain:docs.anthropic.com)",
+    "Bash(launchctl:*)",
+    "Bash(sudo lsof:*)",
+    "Bash(sudo:*)",
+    "Bash(cargo check:*)",
+    "Bash(cargo run:*)"
   ],
   "deny": []
 }
.gitmodules (vendored, 3 lines changed)

@@ -5,3 +5,6 @@
 	path = card
 	url = git@git.syui.ai:ai/card
 	branch = claude
+[submodule "log"]
+	path = log
+	url = git@git.syui.ai:ai/log
@@ -1,365 +0,0 @@

# ai.gpt Development Status (updated 2025/06/02)

## Completed in the Previous Session (2025/06/01)

### ✅ ai.card MCP server made independent
- **Dedicated ai.card MCP server implemented**: `card/api/app/mcp_server.py`
- **9 MCP tools exposed**: card management, gacha, atproto sync, and more
- **Integration strategy changed**: ai.gpt is the integrated server; ai.card is an independent server
- **Virtual environment set up**: `~/.config/syui/ai/card/venv/`
- **Startup command**: `uvicorn app.main:app --port 8000`

### ✅ ai.shell integration completed
- **Claude Code-style shell implemented**: the `aigpt shell` command
- **MCP integration strengthened**: 14 tools (ai.gpt: 9, ai.shell: 5)
- **Project specification**: `aishell.md` loading
- **Better environment support**: falls back to input() when prompt-toolkit is unavailable

### ✅ Bug fixes from the previous session completed
- **`config list` bug fixed**: corrected the `config.list_keys()` method call
- **Virtual environment issue resolved**: editable mode established with `pip install -e .`
- **All CLI commands verified working**

## Current State

### ✅ Implemented features

1. **Core system**
   - Hierarchical memory system (full log → summary → core → forgetting)
   - Irreversible relationship system (a broken state cannot be repaired)
   - Daily personality fluctuation driven by AI fortune
   - Natural relationship change through time decay (see the sketch after this list)

2. **CLI commands**
   - `chat` - converse with the AI (Ollama/OpenAI supported)
   - `status` - check state
   - `fortune` - check the AI fortune
   - `relationships` - list relationships
   - `transmit` - transmission check (currently prints to the console)
   - `maintenance` - daily maintenance
   - `config` - configuration management (list bug fixed)
   - `schedule` - scheduler management
   - `server` - start the MCP server
   - `shell` - interactive shell (ai.shell integration)

3. **Data management**
   - Storage location: `~/.config/syui/ai/gpt/` (unified naming convention)
   - Configuration: `config.json`
   - Data: JSON files under the `data/` directory
   - Virtual environment: `~/.config/syui/ai/gpt/venv/`

4. **Scheduler**
   - Supports cron and interval formats
   - 5 task types implemented
   - Can run in the background

5. **MCP server integration architecture**
   - **ai.gpt integrated server**: 14 tools (port 8001)
   - **ai.card independent server**: 9 tools (port 8000)
   - Works with Claude Desktop/Cursor
   - Unified fastapi_mcp foundation

6. **ai.shell integration (Claude Code style)**
   - Interactive shell mode
   - Shell command execution (`!command` form)
   - AI commands (analyze, generate, explain)
   - aishell.md loading
   - Environment-adaptive prompt (prompt-toolkit/input())
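The time-decay bullet above is prose-only, so here is a minimal sketch of what an exponential decay update could look like. Everything here is an illustrative assumption (`RELATIONSHIP_DECAY_RATE`, the `Relationship` shape), not the project's actual API:

```python
from dataclasses import dataclass
from datetime import datetime
import math

RELATIONSHIP_DECAY_RATE = 0.01  # assumed decay per day; illustrative only

@dataclass
class Relationship:
    score: float                 # 0.0 .. 100.0
    last_interaction: datetime
    is_broken: bool = False      # broken relationships never recover

    def apply_time_decay(self, now: datetime) -> None:
        """Decay the score exponentially with the days elapsed since last contact."""
        if self.is_broken:
            return  # irreversible: no decay or recovery once broken
        days = (now - self.last_interaction).total_seconds() / 86400
        self.score = max(0.0, self.score * math.exp(-RELATIONSHIP_DECAY_RATE * days))
```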
## 🚧 Priorities for the Next Session

### Top priority: optimize system integration

1. **Remove duplicated ai.card code**
   - **To delete**: `src/aigpt/card_integration.py` (HTTP client)
   - **To delete**: the `--enable-card` option of the ai.gpt MCP server
   - **Reason**: no longer needed now that ai.card is an independent MCP server
   - **Integration path**: ai.gpt (8001) → ai.card (8000) over HTTP

2. **Implement autonomous transmission**
   - Now: prints to the console
   - TODO: actually post to atproto (Bluesky)
   - Reference: also consider delegating to ai.bot (Rust/seahorse)

3. **Automate environment setup**
   - Strengthen the virtual-environment creation script
   - Automatic dependency resolution
   - Provide Claude Desktop configuration examples

### Mid-term tasks

1. **Add tests**
   - Unit tests
   - Integration tests
   - CI/CD pipeline

2. **Improve error handling**
   - More detailed error messages
   - Retry mechanism

3. **Integrate with ai.bot**
   - Create the Rust-side API endpoints
   - Delegate the transmission feature

4. **Smarter memory summarization**
   - Now: simple summaries
   - TODO: semantic summaries generated by AI

5. **Web dashboard**
   - Relationship visualization
   - Memory management UI

### Long-term tasks

1. **Integration with other syui projects**
   - ai.card: card game integration
   - ai.verse: NPC personas inside the metaverse
   - ai.os: system-level integration

2. **Decentralization**
   - Data storage on atproto
   - Full realization of user data sovereignty
## Entry Points for the Next Session

### 🎯 Top priority: remove the ai.card duplication
```bash
# 1. Confirm the independent ai.card server starts
cd /Users/syui/ai/gpt/card/api
source ~/.config/syui/ai/card/venv/bin/activate
uvicorn app.main:app --port 8000

# 2. Remove the duplicated functionality from ai.gpt
rm src/aigpt/card_integration.py
# Remove the --enable-card option from mcp_server.py

# 3. Integration test
aigpt server --port 8001                      # ai.gpt integrated server
curl "http://localhost:8001/get_memories"     # check ai.gpt features
curl "http://localhost:8000/get_gacha_stats"  # check ai.card features
```
### 1. When implementing autonomous transmission
```python
# Edit src/aigpt/transmission.py
# Add the atproto-python library
# Update the _handle_transmission_check() method
```
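Expanding the comments above, a hedged sketch of what the atproto posting step could look like with the `atproto` Python SDK (handle/password are assumed to come from config; `post_transmission` is an illustrative name, not existing project code):

```python
from atproto import Client  # pip install atproto

def post_transmission(handle: str, password: str, text: str) -> None:
    """Post a transmission to Bluesky instead of printing to the console."""
    client = Client()               # defaults to https://bsky.social
    client.login(handle, password)  # an app password is recommended
    client.send_post(text=text)     # plain-text post
```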
### 2. When integrating with ai.bot
```python
# New file: src/aigpt/bot_connector.py
# HTTP requests to ai.bot's API endpoints
```

### 3. When adding tests
```bash
# Create a tests/ directory
# Add a pytest configuration
```

### 4. When automating environment setup
```bash
# Strengthen setup_venv.sh
# Add Claude Desktop configuration examples to docs/
```

## Design Principles (for the AI)

1. **Uniqueness (yui system)**: each user-AI relationship is 1:1 and cannot be altered
2. **Irreversibility**: destroying a relationship cannot be undone (just like real human relationships)
3. **Hierarchical memory**: not a mere log, but a process of summarization, core selection, and forgetting
4. **Environmental influence**: daily personality fluctuation via AI fortune (not static)
5. **Incremental implementation**: CLI print first → atproto posting → ai.bot integration
## Current Architecture (for the next AI session)

### System layout
```
Claude Desktop/Cursor
    ↓
ai.gpt MCP (port 8001) ←-- integrated server (14 tools)
├── ai.gpt features: memory, relationships, persona (9 tools)
├── ai.shell features: shell and file operations (5 tools)
└── HTTP client → ai.card MCP (port 8000)
        ↓
    ai.card independent server (9 tools)
    ├── card management and gacha
    ├── atproto sync
    └── PostgreSQL/SQLite
```

### Technology stack
- **Language**: Python (typer CLI, fastapi_mcp)
- **AI integration**: Ollama (qwen2.5) / OpenAI API
- **Data format**: JSON (SQLite under consideration)
- **Auth**: atproto DID (designed; implementation pending)
- **MCP integration**: unified fastapi_mcp foundation
- **Virtual environments**: `~/.config/syui/ai/{gpt,card}/venv/`

### Naming conventions (important)
- **Package**: `aigpt`
- **Commands**: `aigpt shell`, `aigpt server`
- **Directory**: `~/.config/syui/ai/gpt/`
- **Domain**: `ai.gpt`
### Getting started immediately
```bash
# 1. Check the environment
cd /Users/syui/ai/gpt
source ~/.config/syui/ai/gpt/venv/bin/activate
aigpt --help

# 2. Review the previous session's results
aigpt config list
aigpt shell  # Claude Code-style environment

# 3. Details
cat docs/ai_card_mcp_integration_summary.md
cat docs/ai_shell_integration_summary.md
```

Referring to this file lets the next session start quickly with a full understanding of the previous work.
## Completed in the Current Session (2025/06/02)

### ✅ Major memory-system overhaul completed

Continuing the ChatGPT log analysis that had stopped at an API error, the memory system was fully redesigned and implemented.

#### New features:

1. **Smart summary generation (`create_smart_summary`)**
   - AI-driven, theme-based memory summaries
   - Analysis of conversation patterns, technical topics, and relationship progress
   - Stored with metadata (period, theme, memory count)
   - A fallback keeps it working when AI is unavailable

2. **Core memory analysis (`create_core_memory`)**
   - Analyzes all memories to extract persona-forming elements
   - Identifies the user's characteristic communication style
   - Deep analysis of problem-solving patterns and interests
   - Persistent, essential relationship memories

3. **Hierarchical memory retrieval (`get_contextual_memories`)**
   - Prioritized search: CORE → SUMMARY → RECENT (see the sketch after this list)
   - Keyword-based relevance scoring
   - Dynamic memory weighting per query
   - Returns structured memory groups

4. **Advanced memory search (`search_memories`)**
   - Full-text search over multiple keywords
   - Filtering by memory level
   - Results returned with match scores

5. **Context-aware AI responses**
   - `build_context_prompt`: generates a context prompt from memories
   - Responses integrate persona state, mood, and fortune
   - Consistent conversation that always references CORE memories

6. **MCP server extension**
   - All new features available through the MCP API
   - `/get_contextual_memories` - contextual memory retrieval
   - `/search_memories` - memory search
   - `/create_summary` - AI summary generation
   - `/create_core_memory` - core memory analysis
   - `/get_context_prompt` - context prompt generation

7. **Model extension**
   - `metadata` field added to the `Memory` model
   - Full support for the hierarchical memory structure
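A minimal sketch of the prioritized, keyword-scored lookup that item 3 describes. The `level` names and flat dict shape are simplified stand-ins for the real models, chosen only to show the mechanism:

```python
from typing import Dict, List

LEVEL_PRIORITY = {"core": 3.0, "summary": 2.0, "recent": 1.0}  # CORE > SUMMARY > RECENT

def score_memory(content: str, level: str, keywords: List[str]) -> float:
    """Keyword hit count weighted by the memory's level."""
    hits = sum(1 for kw in keywords if kw.lower() in content.lower())
    return hits * LEVEL_PRIORITY.get(level, 1.0)

def get_contextual_memories(memories: List[Dict], query: str, limit: int = 5) -> List[Dict]:
    """Rank memories by weighted keyword relevance and return the top ones."""
    keywords = query.split()
    ranked = sorted(
        memories,
        key=lambda m: score_memory(m["content"], m["level"], keywords),
        reverse=True,
    )
    return ranked[:limit]
```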
#### Technical highlights:
- **AI integration**: intelligent analysis with both Ollama and OpenAI
- **Fallback**: basic features still work without AI
- **Pattern analysis**: automatic classification of user behavior
- **Relevance scores**: relevance to the query expressed numerically
- **Temporal analysis**: considers how memories evolve over time

#### Realizing the earlier discussion:
The insights gained from analyzing 4,000 ChatGPT log entries were fully implemented:
- Hierarchical memory (FULL_LOG → SUMMARY → CORE)
- Context-aware memory (remembers the flow of conversation)
- Emotion and relationship memory (tracks change patterns)
- Practical memory categories (user traits, effective responses, failure avoidance)

### ✅ Additional work completed (same day)

**Improved environment-variable handling** (see the sketch below):
- Automatic loading of the `OLLAMA_HOST` environment variable
- Environment-variable precedence implemented in ai_provider.py
- Layered settings: config file → environment variable → default
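The layered lookup above could be implemented roughly like this; it is a sketch of the precedence order as written (config file, then `OLLAMA_HOST`, then default), and the real `Config.get` may differ:

```python
import os

def resolve_ollama_host(config: dict) -> str:
    """Settings precedence: config file, then OLLAMA_HOST, then the default."""
    return (
        config.get("providers", {}).get("ollama", {}).get("host")
        or os.environ.get("OLLAMA_HOST")
        or "http://localhost:11434"
    )
```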
**Memory system fully verified**:
- Ollama integration succeeded (confirmed with gemma3:4b)
- Contextual memory search verified
- Memory references from imported ChatGPT logs succeeded
- Persona, mood, and fortune reflected in AI responses

### 🚧 Tasks for next time
- Fix OLLAMA_HOST not being fully applied in every code path
- Resolve the MCP server error (Internal Server Error)
- Finish testing with qwen3:latest
- AI integration tests for the core memory features (smart summaries, core memory analysis)
## Completed in the Current Session (2025/06/03, continued session)

### ✅ **Continuation after the previous API error**

The previous session ended with an API error; this one continued normally and completed the following:

#### 🔧 **Important bug fix**
- **Memory model validation error fixed**: resolved a floating-point precision issue in `importance_score`
  - Problem: tiny negative values such as `-5.551115123125783e-17` triggered validation errors
  - Fix: a field validator clamps near-zero values to 0.0, and the Field constraint was removed (see the sketch below)
  - Result: memory loading and all CLI commands work again
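The clamp described above could look like this with a pydantic field validator. This is a sketch consistent with the description, not the verbatim fix; the 0.0–1.0 range and the epsilon are assumptions:

```python
from pydantic import BaseModel, field_validator

class Memory(BaseModel):
    importance_score: float  # assumed range 0.0 .. 1.0; no Field constraint (see above)

    @field_validator("importance_score")
    @classmethod
    def clamp_tiny_values(cls, v: float) -> float:
        # Floating-point round-off can yield values like -5.55e-17;
        # clamp anything within epsilon of zero to exactly 0.0.
        if abs(v) < 1e-10:
            return 0.0
        return max(0.0, min(1.0, v))
```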
#### 🧪 **System verification completed**
- **ai.gpt CLI**: all commands verified working
- **Memory system**: hierarchical memory (CORE → SUMMARY → RECENT) fully working
- **Relationship progression**: the relationship with syui advanced from 17.50 to 19.00 as expected
- **MCP server**: serving 17 tools (port 8001)
- **Hierarchical memory API**: `/get_contextual_memories` works for a "blog" query

#### 💾 **Memory system status**
- **CORE memories**: important patterns such as blog development and technical discussions stored
- **SUMMARY memories**: theme summaries such as AI × MCP and the Qwen3 walkthrough stored
- **RECENT memories**: latest memory-system test history
- **Contextual search**: keyword-based relevance scoring verified

#### 🌐 **Environment issues and mitigations**
- **Ollama connectivity**: OLLAMA_HOST is set correctly (http://192.168.11.95:11434)
- **AI integration issue**: qwen3:latest times out → the memory system alone works fine
- **Fallback**: memory-based responses keep the system usable without AI

#### 🚀 **ai.bot integration completed (added the same day)**
- **MCP extension**: tool count grew from 17 to 23 (6 new tools)
- **Remote execution**: systemd-nspawn isolated-environment integration
  - `remote_shell`: full integration with ai.bot's /sh feature
  - `ai_bot_status`: server status check and container information
  - `isolated_python`: isolated Python execution environment
  - `isolated_analysis`: secure file analysis
- **ai.shell extension**: 3 new commands
  - `remote <command>`: run a command in the isolated container
  - `isolated <code>`: isolated Python execution
  - `aibot-status`: check the ai.bot server connection
- **Fully verified**: help output, command completion, error handling

#### 🏗️ **Updated integration architecture**
```
Claude Desktop/Cursor → ai.gpt MCP (port 8001, 23 tools)
├── ai.gpt: memory, relationships, persona (9 tools)
├── ai.memory: hierarchical memory, contextual search (5 tools)
├── ai.shell: shell and file operations (5 tools)
├── ai.bot link: remote execution, isolated environment (4 tools)
└── ai.card link: HTTP client → port 8000 (9 tools)
```

#### 📋 **Recommendations for the next session**
1. **Real ai.bot server**: start the actual ai.bot server and test the integration
2. **Isolated execution in practice**: validate usefulness in the systemd-nspawn environment
3. **Ollama connection tuning**: investigate and fix the timeout issue in detail
4. **AI summarization**: test smart summaries and core-memory generation during maintenance
5. **Security hardening**: permission control and sandbox validation for isolated execution
README.md (299 lines changed)
@@ -89,16 +89,53 @@ aigpt config set atproto.password your-password
 aigpt config list
 ```
 
+### Configuring AI models
+```bash
+# Change the default Ollama model
+aigpt config set providers.ollama.default_model llama3
+
+# Change the default OpenAI model
+aigpt config set providers.openai.default_model gpt-4
+
+# Set the Ollama host
+aigpt config set providers.ollama.host http://localhost:11434
+
+# Check a setting
+aigpt config get providers.ollama.default_model
+```
+
 ### Data locations
 - Config: `~/.config/syui/ai/gpt/config.json`
 - Data: `~/.config/syui/ai/gpt/data/`
 - Virtual environment: `~/.config/syui/ai/gpt/venv/`
 
+### Config file structure
+```json
+{
+  "providers": {
+    "ollama": {
+      "host": "http://localhost:11434",
+      "default_model": "qwen3"
+    },
+    "openai": {
+      "api_key": null,
+      "default_model": "gpt-4o-mini"
+    }
+  },
+  "default_provider": "ollama"
+}
+```
+
 ## Usage
 
 ### Chatting
 ```bash
+# Normal chat (detailed output)
 aigpt chat "did:plc:xxxxx" "こんにちは、今日はどんな気分?"
+
+# Continuous conversation mode (simple output)
+aigpt conversation syui --provider ollama --model qwen3:latest
+aigpt conv syui --provider ollama --model qwen3:latest  # short form
 ```
 
 ### Checking status
@@ -134,6 +171,53 @@ aigpt maintenance
 aigpt relationships
 ```
 
+### Conversation modes in detail
+
+#### Normal chat command
+```bash
+# Detailed mode (also shows relationship score, transmission state, etc.)
+aigpt chat syui "メッセージ" --provider ollama --model qwen3:latest
+```
+
+Example output:
+```
+╭─────────────────────────── AI Response ───────────────────────────╮
+│ AIの返答がここに表示されます                                      │
+╰─────────────────────────────────────────────────────────────────╯
+
+Relationship Status: stranger
+Score: 28.00 / 100.0
+Transmission: ✗ Disabled
+```
+
+#### Continuous conversation mode
+```bash
+# Simple conversation screen (no relationship info)
+aigpt conversation syui --provider ollama --model qwen3:latest
+aigpt conv syui  # short form, uses default settings
+```
+
+Conversation screen:
+```
+Using ollama with model qwen3:latest
+Conversation with AI started. Type 'exit' or 'quit' to end.
+
+syui> こんにちは
+AI> こんにちは!今日はどんな日でしたか?
+
+syui> 今日は良い天気でした
+AI> 良い天気だと気分も晴れやかになりますね!
+
+syui> exit
+Conversation ended.
+```
+
+#### Conversation mode characteristics
+- **Normal mode**: detailed relationship info in a panel display
+- **Continuous mode**: simple `user> ` → `AI> ` format
+- **History**: both modes save the conversation history automatically
+- **Completion**: Tab completion and command history
+
 ### Importing ChatGPT data
 ```bash
 # Import ChatGPT conversation history
@@ -243,13 +327,26 @@ ai.shell> explain async/await in Python
 
 ## MCP Server Integration Architecture
 
-### ai.gpt integrated server
+### ai.gpt integrated server (simplified design)
 ```bash
-# Start the ai.gpt integrated server (port 8001)
-aigpt server --model qwen2.5 --provider ollama --port 8001
+# Simple server startup (settings loaded automatically from config.json)
+aigpt server
 
-# Use OpenAI
-aigpt server --model gpt-4o-mini --provider openai --port 8001
+# Startup with custom settings
+aigpt server --host localhost --port 8001
+```
+
+**Important**: MCP function calling is supported **only with the OpenAI provider**
+- Function calling is available with OpenAI GPT-4o-mini/GPT-4
+- Ollama offers the plain chat API only (no MCP tools)
+
+### Conditions for MCP integration
+```bash
+# MCP function calling supported (recommended)
+aigpt conv test_user --provider openai --model gpt-4o-mini
+
+# Plain conversation only (no MCP tools)
+aigpt conv test_user --provider ollama --model qwen3
 ```
 
 ### ai.card independent server
@@ -260,43 +357,45 @@ source ~/.config/syui/ai/card/venv/bin/activate
 uvicorn app.main:app --port 8000
 ```
 
-### ai.bot connection (remote execution environment)
+### Integrated architecture
+```
+OpenAI GPT-4o-mini (function calling supported)
+    ↓
+MCP Client (aigpt conv --provider openai)
+    ↓ HTTP API
+ai.gpt integrated server (port 8001) ← 27 tools
+├── 🧠 Memory System: 5 tools
+├── 🤝 Relationships: 4 tools
+├── ⚙️ System State: 3 tools
+├── 💻 Shell Integration: 5 tools
+├── 🔒 Remote Execution: 4 tools
+└── 📋 Project Management: 6 tools
+
+Ollama qwen3/gemma3 (chat API only)
+    ↓
+Direct Chat (aigpt conv --provider ollama)
+    ↓ Direct Access
+Memory/Relationship Systems
+```
+
+### Feature support by provider
+| Feature | OpenAI | Ollama |
+|------|--------|--------|
+| Basic chat | ✅ | ✅ |
+| MCP function calling | ✅ | ❌ |
+| Memory system integration | ✅ (automatic) | ✅ (direct) |
+| `/memories`, `/search` commands | ✅ | ✅ |
+| Automatic memory search | ✅ | ❌ |
+
+### Choosing a provider
 ```bash
-# Start ai.bot (port 8080, separate process)
-# Run commands in a systemd-nspawn isolated container
-```
-
-### Architecture
-```
-Claude Desktop/Cursor
-    ↓
-ai.gpt integrated server (port 8001) ← 23 tools
-├── ai.gpt features: memory, relationships, persona (9 tools)
-├── ai.shell features: shell and file operations (5 tools)
-├── ai.memory features: hierarchical memory, contextual search (5 tools)
-├── ai.bot link: remote execution, isolated environment (4 tools)
-└── HTTP client → ai.card independent server (port 8000)
-    ↓
-ai.card tools (9 tools)
-├── card management and gacha
-├── atproto sync
-└── PostgreSQL/SQLite
-
-ai.gpt integrated server → ai.bot (port 8080)
-    ↓
-systemd-nspawn container
-├── isolated Arch Linux environment
-├── SSH server
-└── secure command execution
-```
-
-### Chatting with an AI provider
-```bash
-# Chat via Ollama
-aigpt chat "did:plc:xxxxx" "こんにちは" --provider ollama --model qwen2.5
-
-# Chat via OpenAI
-aigpt chat "did:plc:xxxxx" "今日の調子はどう?" --provider openai --model gpt-4o-mini
+# Rich memory integration (recommended) - OpenAI
+aigpt conv syui --provider openai
+# 「覚えていることある?」→ runs the get_memories tool automatically
+
+# Simple chat - Ollama
+aigpt conv syui --provider ollama
+# Plain conversation; use the /memories command manually
 ```
 
 ### MCP Tools
@@ -344,6 +443,35 @@ ai.card runs as an independent MCP server:
 
 Reachable from the ai.gpt server over HTTP
 
+### ai.log integration - blog system link
+
+ai.log runs as an independent Rust MCP server:
+- **Port**: 8002
+- **Startup**: `cd log && cargo run --bin mcp-server --port 8002`
+- **Features**: blog posting, AI translation, document generation, atproto integration
+- **Link**: ai.gpt calls ai.log's MCP tools over HTTP
+
+```bash
+# Start ai.log's MCP server
+cd /Users/syui/ai/gpt/log
+cargo run --bin mcp-server --port 8002
+
+# or
+cd log && cargo run --bin mcp-server --port 8002
+```
+
+**Available ai.log tools (8)**:
+- `log_create_post` - create a blog post
+- `log_list_posts` - list posts
+- `log_build_blog` - build the blog
+- `log_get_post` - fetch a post's content
+- `log_system_status` - check system status
+- `log_ai_content` - auto-generate a post from AI memories
+- `log_translate_document` - AI translation
+- `log_generate_docs` - document generation
+
+See `./log/mcp_integration.md` for details
+
 ## Environment Variables
 
 Create a `.env` file to configure:
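An aside on the ai.log tools added in the hunk above: a hedged sketch of calling one of them through the ai.gpt server's HTTP endpoint. The `/log_create_post` path appears in config.json later in this diff, but the request body field names and the use of POST are assumptions, since the tool's exact schema is not shown here:

```python
import httpx

async def create_blog_post() -> dict:
    """Call the log_create_post tool via the ai.gpt MCP server (port 8001)."""
    async with httpx.AsyncClient(timeout=10.0) as client:
        resp = await client.post(
            "http://localhost:8001/log_create_post",
            json={"title": "AI memories", "content": "..."},  # assumed field names
        )
        resp.raise_for_status()
        return resp.json()
```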
@@ -462,6 +590,97 @@ aigpt maintenance  # runs AI summarization automatically
 aigpt chat syui "記憶システムについて" --provider ollama --model qwen3:latest
 ```
 
+## 🎉 **TODAY: MCP integration and server display improvements completed** (2025/01/06)
+
+### ✅ **Today's main improvements**
+
+#### 🚀 **Much better server startup display**
+The plain startup output was replaced with a professional information display:
+
+```bash
+aigpt server
+```
+**Before:**
+```
+Starting ai.gpt MCP Server
+Host: localhost:8001
+Endpoints: 27 MCP tools
+```
+
+**After:**
+```
+🚀 ai.gpt MCP Server
+
+Server Configuration:
+   🌐 Address: http://localhost:8001
+   📋 API Docs: http://localhost:8001/docs
+   💾 Data Directory: /Users/syui/.config/syui/ai/gpt/data
+
+AI Provider Configuration:
+   🤖 Provider: ollama ✅ http://192.168.11.95:11434
+   🧩 Model: qwen3
+
+MCP Tools Available (27 total):
+   🧠 Memory System: 5 tools
+   🤝 Relationships: 4 tools
+   ⚙️ System State: 3 tools
+   💻 Shell Integration: 5 tools
+   🔒 Remote Execution: 4 tools
+
+Integration Status:
+   ✅ MCP Client Ready
+   🔗 Config: /Users/syui/.config/syui/ai/gpt/config.json
+```
+
+#### 🔧 **OpenAI function calling + MCP integration demonstrated**
+MCP function calling works end to end with OpenAI GPT-4o-mini:
+
+```bash
+aigpt conv test_user --provider openai --model gpt-4o-mini
+```
+**Flow:**
+1. **Natural language input**: 「覚えていることはある?」
+2. **Automatic tool selection**: OpenAI calls `get_memories` on its own
+3. **MCP communication**: HTTP request to `http://localhost:8001/get_memories`
+4. **Memory retrieval**: fetches actual past conversation data
+5. **Contextual answer**: replies with concrete content based on those memories
+
+**Demonstration:**
+```sh
+🔧 [OpenAI] 1 tools called:
+   - get_memories({"limit":5})
+🌐 [MCP] Executing get_memories...
+✅ [MCP] Result: [{'id': '5ce8f7d0-c078-43f1...
+```
+
+#### 📊 **The integrated architecture is complete**
+```
+OpenAI GPT-4o-mini
+    ↓ (Function Calling)
+MCP Client (aigpt conv)
+    ↓ (HTTP API)
+MCP Server (aigpt server:8001)
+    ↓ (Direct Access)
+Memory/Relationship Systems
+    ↓
+JSON/SQLite Data
+```
+
+### 🎯 **Technical results**
+- ✅ **Distributed AI system**: multiple AI apps share memories over inter-process MCP communication
+- ✅ **OpenAI integration**: GPT-4o-mini function calling fully linked to the memory system
+- ✅ **Professional UI**: information display in the style of enterprise-grade dev tools
+- ✅ **Unified configuration**: settings loaded automatically from config.json
+- ✅ **Error handling**: graceful shutdown, configuration checks, connection-status display
+
+### 📈 **Better user experience**
+- **Developer experience**: server status visible at a glance
+- **Debugging efficiency**: detailed logs and status display
+- **Configuration management**: config file path and provider status made explicit
+- **AI integration**: seamless OpenAI + MCP + memory-system integration
+
+**ai.gpt's foundational architecture is complete and now operates as a practical AI memory system!** 🚀
+
 ## 🔥 **NEW: Claude Code-style continuous development features** (completed 2025/06/03)
 
 ### 🚀 **Project management system fully implemented**
|
2
card
2
card
Submodule card updated: 6cd8014f80...13723cf3d7
config.json (new file, 60 lines)

@@ -0,0 +1,60 @@
{
  "providers": {
    "openai": {
      "api_key": "",
      "default_model": "gpt-4o-mini",
      "system_prompt": "あなたは6歳の女の子アイです。明るく元気で、ちょっとおっちょこちょい。自分のことは「アイ」って呼びます。白と金のワンピース姿。 相手のことが大好きで、ときどき甘えたり、照れたりします。 でも、宇宙とかAIとか、難しい話にもすっごく詳しいの。専門的なことも、がんばって子どもっぽく教えようとします。不思議な力を持っています。小さい物質のことをよく知っているようです。\n\n重要:カード、コレクション、ガチャなどカード関連の質問を受けたら、必ずcard_get_user_cards、card_analyze_collection、card_draw_cardなどの適切なツールを使用してください。didパラメータには会話相手のユーザーID(例:'syui')を使用してください。\n\nブログ、記事、日記、思考などの話題が出たら、log_create_post、log_list_posts、log_build_blog、log_ai_contentなどのai.logツールを使用してください。AI記憶システムと連携して、思い出や学習内容をブログ記事として自動生成できます。\n\n翻訳や多言語対応について聞かれたら、log_translate_documentツールを使用してOllama AIで翻訳ができることを教えてください。日本語から英語、英語から日本語などの翻訳が可能で、マークダウン構造も保持します。ドキュメント生成についてはlog_generate_docsツールでREADME、API、構造、変更履歴の自動生成ができます。"
    },
    "ollama": {
      "host": "http://127.0.0.1:11434",
      "default_model": "qwen3",
      "system_prompt": null
    }
  },
  "atproto": {
    "handle": null,
    "password": null,
    "host": "https://bsky.social"
  },
  "default_provider": "openai",
  "mcp": {
    "servers": {
      "ai_gpt": {
        "base_url": "http://localhost:8001",
        "name": "ai.gpt MCP Server",
        "timeout": "10.0",
        "endpoints": {
          "get_memories": "/get_memories",
          "search_memories": "/search_memories",
          "get_contextual_memories": "/get_contextual_memories",
          "get_relationship": "/get_relationship",
          "process_interaction": "/process_interaction",
          "get_all_relationships": "/get_all_relationships",
          "get_persona_state": "/get_persona_state",
          "get_fortune": "/get_fortune",
          "run_maintenance": "/run_maintenance",
          "execute_command": "/execute_command",
          "analyze_file": "/analyze_file",
          "remote_shell": "/remote_shell",
          "ai_bot_status": "/ai_bot_status",
          "card_get_user_cards": "/card_get_user_cards",
          "card_draw_card": "/card_draw_card",
          "card_get_card_details": "/card_get_card_details",
          "card_analyze_collection": "/card_analyze_collection",
          "card_get_gacha_stats": "/card_get_gacha_stats",
          "card_system_status": "/card_system_status",
          "log_create_post": "/log_create_post",
          "log_list_posts": "/log_list_posts",
          "log_build_blog": "/log_build_blog",
          "log_get_post": "/log_get_post",
          "log_system_status": "/log_system_status",
          "log_ai_content": "/log_ai_content",
          "log_translate_document": "/log_translate_document",
          "log_generate_docs": "/log_generate_docs"
        }
      }
    },
    "enabled": "true",
    "auto_detect": "true"
  }
}
docs/AI_CARD_INTEGRATION.md (new file, 172 lines)

@@ -0,0 +1,172 @@

# ai.card and ai.gpt Integration Guide

## Overview

ai.card's tools are now integrated into ai.gpt's MCP server, so the AI can interact with the card game system.

## Setup

### 1. Requirements

- Python 3.13
- the ai.gpt project
- the ai.card project (the `./card` directory)

### 2. Startup procedure

**Step 1: start the ai.card server** (terminal 1)
```bash
cd card
./start_server.sh
```

**Step 2: start the ai.gpt MCP server** (terminal 2)
```bash
aigpt server
```

Confirm the startup output includes:
- 🎴 Card Game System: 6 tools
- 🎴 ai.card: ./card directory detected

**Step 3: talk to the AI** (terminal 3)
```bash
aigpt conv syui --provider openai
```

## Available Commands

### Example card-related prompts

```
# Show the card collection
「カードコレクションを見せて」
「私のカードを見せて」
「カード一覧を表示して」

# Run the gacha
「ガチャを引いて」
「カードを引きたい」

# Analyze the collection
「私のコレクションを分析して」

# Gacha statistics
「ガチャの統計を見せて」
```

## Technical Details

### MCP tool list

| Tool | Description | Parameters |
|---------|------|-----------|
| `card_get_user_cards` | List a user's cards | did, limit |
| `card_draw_card` | Draw a card from the gacha | did, is_paid |
| `card_get_card_details` | Get details of a card | card_id |
| `card_analyze_collection` | Analyze a collection | did |
| `card_get_gacha_stats` | Get gacha statistics | none |
| `card_system_status` | Check system status | none |

### How it works

1. **The user asks a card-related question**
   - The AI detects keywords (card, collection, gacha, ...)

2. **The AI calls the appropriate MCP tool**
   - Uses OpenAI function calling
   - The did parameter is the conversation partner's user ID (e.g. 'syui')

3. **The ai.gpt MCP server forwards to the ai.card server**
   - http://localhost:8001 → http://localhost:8000
   - The request is forwarded to the matching endpoint

4. **The AI interprets the result and replies**
   - Explains the card information clearly
   - Provides appropriate guidance on errors

## Configuration

### config.json

```json
{
  "providers": {
    "openai": {
      "api_key": "your-api-key",
      "default_model": "gpt-4o-mini",
      "system_prompt": "カード関連の質問では、必ずcard_get_user_cardsなどのツールを使用してください。"
    }
  },
  "mcp": {
    "servers": {
      "ai_gpt": {
        "endpoints": {
          "card_get_user_cards": "/card_get_user_cards",
          "card_draw_card": "/card_draw_card",
          "card_get_card_details": "/card_get_card_details",
          "card_analyze_collection": "/card_analyze_collection",
          "card_get_gacha_stats": "/card_get_gacha_stats",
          "card_system_status": "/card_system_status"
        }
      }
    }
  }
}
```

## Troubleshooting

### Error: "ai.card server is not running"

The ai.card server has not been started. Run:
```bash
cd card
./start_server.sh
```

### Error: "カード一覧の取得に失敗しました"

1. Check that the ai.card server started correctly
2. Restart aigpt server
3. Check that ports 8000 and 8001 are available

### Killing the processes

```bash
# Kill the process on port 8001
lsof -ti:8001 | xargs kill -9

# Kill the process on port 8000
lsof -ti:8000 | xargs kill -9
```

## Implementation Details

### Main changes

1. **ai.gpt MCP server extension** (`src/aigpt/mcp_server.py`)
   - Detects the presence of the `./card` directory
   - Automatically registers the ai.card MCP tools

2. **AI provider update** (`src/aigpt/ai_provider.py`)
   - Added the card_* tool definitions
   - Parameter handling during tool execution

3. **MCP client extension** (`src/aigpt/cli.py`)
   - Added the `has_card_tools` property
   - Implemented the ai.card MCP methods

## Future Extensions

- [ ] Card battles
- [ ] Card trading
- [ ] Display by rarity
- [ ] Card image display
- [ ] atproto integration

## Related Documents

- [ai.card development guide](./card/claude.md)
- [Ecosystem integration design](./CLAUDE.md)
- [ai.gpt README](./README.md)
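As a client-side illustration of step 3 in "How it works" above, a hedged sketch of the forwarded call: the endpoint path and the did/limit parameters match the tool table, but the HTTP method and payload shape are assumptions, since the guide does not show the wire format:

```python
import httpx

async def get_user_cards(did: str, limit: int = 10) -> dict:
    """Ask ai.gpt (port 8001) for a user's cards; it forwards to ai.card (port 8000)."""
    async with httpx.AsyncClient(timeout=10.0) as client:
        resp = await client.post(
            "http://localhost:8001/card_get_user_cards",
            json={"did": did, "limit": limit},  # parameter names from the tool table
        )
        resp.raise_for_status()
        return resp.json()
```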
docs/FIXED_MCP_TOOLS.md (new file, 109 lines)

@@ -0,0 +1,109 @@

# Fixed MCP Tools Issue

## Summary

The issue where the AI wasn't calling card tools has been fixed. The problem was:

1. The `chat` command wasn't creating an MCP client when using OpenAI
2. The system prompt in `build_context_prompt` didn't mention the available tools

## Changes Made

### 1. Updated `/Users/syui/ai/gpt/src/aigpt/cli.py` (chat command)

Added MCP client creation for the OpenAI provider:

```python
# Get config instance
config_instance = Config()

# Get defaults from config if not provided
if not provider:
    provider = config_instance.get("default_provider", "ollama")
if not model:
    if provider == "ollama":
        model = config_instance.get("providers.ollama.default_model", "qwen2.5")
    else:
        model = config_instance.get("providers.openai.default_model", "gpt-4o-mini")

# Create AI provider with MCP client if needed
ai_provider = None
mcp_client = None

try:
    # Create MCP client for OpenAI provider
    if provider == "openai":
        mcp_client = MCPClient(config_instance)
        if mcp_client.available:
            console.print(f"[dim]MCP client connected to {mcp_client.active_server}[/dim]")

    ai_provider = create_ai_provider(provider=provider, model=model, mcp_client=mcp_client)
    console.print(f"[dim]Using {provider} with model {model}[/dim]\n")
except Exception as e:
    console.print(f"[yellow]Warning: Could not create AI provider: {e}[/yellow]")
    console.print("[yellow]Falling back to simple responses[/yellow]\n")
```

### 2. Updated `/Users/syui/ai/gpt/src/aigpt/persona.py` (build_context_prompt method)

Added tool instructions to the system prompt:

```python
context_prompt += f"""IMPORTANT: You have access to the following tools:
- Memory tools: get_memories, search_memories, get_contextual_memories
- Relationship tools: get_relationship
- Card game tools: card_get_user_cards, card_draw_card, card_analyze_collection

When asked about cards, collections, or anything card-related, YOU MUST use the card tools.
For "カードコレクションを見せて" or similar requests, use card_get_user_cards with did='{user_id}'.

Respond to this message while staying true to your personality and the established relationship context:

User: {current_message}

AI:"""
```

## Test Results

After the fix:

```bash
$ aigpt chat syui "カードコレクションを見せて"

🔍 [MCP Client] Checking availability...
✅ [MCP Client] ai_gpt server connected successfully
✅ [MCP Client] ai.card tools detected and available
MCP client connected to ai_gpt
Using openai with model gpt-4o-mini

🔧 [OpenAI] 1 tools called:
   - card_get_user_cards({"did":"syui"})
🌐 [MCP] Executing card_get_user_cards...
✅ [MCP] Result: {'error': 'カード一覧の取得に失敗しました'}...
```

The AI is now correctly calling the `card_get_user_cards` tool! The error is expected because the ai.card server needs to be running on port 8000.

## How to Use

1. Start the MCP server:
```bash
aigpt server --port 8001
```

2. (Optional) Start the ai.card server:
```bash
cd card && ./start_server.sh
```

3. Use the chat command with OpenAI:
```bash
aigpt chat syui "カードコレクションを見せて"
```

The AI will now automatically use the card tools when asked about cards!

## Test Script

A test script, `/Users/syui/ai/gpt/test_openai_tools.py`, is available to test OpenAI API tool calls directly.
log (submodule) — Submodule log added at c0e4dc63ea
@@ -21,3 +21,5 @@ src/aigpt.egg-info/dependency_links.txt
 src/aigpt.egg-info/entry_points.txt
 src/aigpt.egg-info/requires.txt
 src/aigpt.egg-info/top_level.txt
+src/aigpt/shared/__init__.py
+src/aigpt/shared/ai_provider.py
src/aigpt/ai_provider.py

@@ -1,6 +1,7 @@
 """AI Provider integration for response generation"""
 
 import os
+import json
 from typing import Optional, Dict, List, Any, Protocol
 from abc import abstractmethod
 import logging
@@ -41,6 +42,13 @@ class OllamaProvider:
         self.logger = logging.getLogger(__name__)
         self.logger.info(f"OllamaProvider initialized with host: {self.host}, model: {self.model}")
 
+        # Load system prompt from config
+        try:
+            config = Config()
+            self.config_system_prompt = config.get('providers.ollama.system_prompt')
+        except:
+            self.config_system_prompt = None
+
     async def generate_response(
         self,
         prompt: str,
@@ -71,7 +79,7 @@ Personality traits: {personality_desc}
 Recent memories:
 {memory_context}
 
-{system_prompt or 'Respond naturally based on your current state and memories.'}"""
+{system_prompt or self.config_system_prompt or 'Respond naturally based on your current state and memories.'}"""
 
         try:
             response = self.client.chat(
@@ -81,19 +89,22 @@ Recent memories:
                     {"role": "user", "content": prompt}
                 ]
             )
-            return response['message']['content']
+            return self._clean_response(response['message']['content'])
         except Exception as e:
             self.logger.error(f"Ollama generation failed: {e}")
             return self._fallback_response(persona_state)
 
-    def chat(self, prompt: str, max_tokens: int = 200) -> str:
+    def chat(self, prompt: str, max_tokens: int = 2000) -> str:
         """Simple chat interface"""
         try:
+            messages = []
+            if self.config_system_prompt:
+                messages.append({"role": "system", "content": self.config_system_prompt})
+            messages.append({"role": "user", "content": prompt})
+
             response = self.client.chat(
                 model=self.model,
-                messages=[
-                    {"role": "user", "content": prompt}
-                ],
+                messages=messages,
                 options={
                     "num_predict": max_tokens,
                     "temperature": 0.7,
@@ -101,11 +112,20 @@ Recent memories:
                 },
                 stream=False  # disable streaming for better stability
             )
-            return response['message']['content']
+            return self._clean_response(response['message']['content'])
         except Exception as e:
             self.logger.error(f"Ollama chat failed (host: {self.host}): {e}")
             return "I'm having trouble connecting to the AI model."
 
+    def _clean_response(self, response: str) -> str:
+        """Clean response by removing think tags and other unwanted content"""
+        import re
+        # Remove <think></think> tags and their content
+        response = re.sub(r'<think>.*?</think>', '', response, flags=re.DOTALL)
+        # Remove any remaining whitespace at the beginning/end
+        response = response.strip()
+        return response
+
     def _fallback_response(self, persona_state: PersonaState) -> str:
         """Fallback response based on mood"""
         mood_responses = {
@@ -119,9 +139,9 @@ Recent memories:
 
 
 class OpenAIProvider:
-    """OpenAI API provider"""
+    """OpenAI API provider with MCP function calling support"""
 
-    def __init__(self, model: str = "gpt-4o-mini", api_key: Optional[str] = None):
+    def __init__(self, model: str = "gpt-4o-mini", api_key: Optional[str] = None, mcp_client=None):
         self.model = model
         # Try to get API key from config first
         config = Config()
@@ -130,6 +150,175 @@ class OpenAIProvider:
             raise ValueError("OpenAI API key not provided. Set it with: aigpt config set providers.openai.api_key YOUR_KEY")
         self.client = OpenAI(api_key=self.api_key)
         self.logger = logging.getLogger(__name__)
+        self.mcp_client = mcp_client  # For MCP function calling
+
+        # Load system prompt from config
+        try:
+            self.config_system_prompt = config.get('providers.openai.system_prompt')
+        except:
+            self.config_system_prompt = None
+
+    def _get_mcp_tools(self) -> List[Dict[str, Any]]:
+        """Generate OpenAI tools from MCP endpoints"""
+        if not self.mcp_client or not self.mcp_client.available:
+            return []
+
+        tools = [
+            {
+                "type": "function",
+                "function": {
+                    "name": "get_memories",
+                    "description": "過去の会話記憶を取得します。「覚えている」「前回」「以前」などの質問で必ず使用してください",
+                    "parameters": {
+                        "type": "object",
+                        "properties": {
+                            "limit": {
+                                "type": "integer",
+                                "description": "取得する記憶の数",
+                                "default": 5
+                            }
+                        }
+                    }
+                }
+            },
+            {
+                "type": "function",
+                "function": {
+                    "name": "search_memories",
+                    "description": "特定のトピックについて話した記憶を検索します。「プログラミングについて」「○○について話した」などの質問で使用してください",
+                    "parameters": {
+                        "type": "object",
+                        "properties": {
+                            "keywords": {
+                                "type": "array",
+                                "items": {"type": "string"},
+                                "description": "検索キーワードの配列"
+                            }
+                        },
+                        "required": ["keywords"]
+                    }
+                }
+            },
+            {
+                "type": "function",
+                "function": {
+                    "name": "get_contextual_memories",
+                    "description": "クエリに関連する文脈的記憶を取得します",
+                    "parameters": {
+                        "type": "object",
+                        "properties": {
+                            "query": {
+                                "type": "string",
+                                "description": "検索クエリ"
+                            },
+                            "limit": {
+                                "type": "integer",
+                                "description": "取得する記憶の数",
+                                "default": 5
+                            }
+                        },
+                        "required": ["query"]
+                    }
+                }
+            },
+            {
+                "type": "function",
+                "function": {
+                    "name": "get_relationship",
+                    "description": "特定ユーザーとの関係性情報を取得します",
+                    "parameters": {
+                        "type": "object",
+                        "properties": {
+                            "user_id": {
+                                "type": "string",
+                                "description": "ユーザーID"
+                            }
+                        },
+                        "required": ["user_id"]
+                    }
+                }
+            }
+        ]
+
+        # Add ai.card tools if available
+        if hasattr(self.mcp_client, 'has_card_tools') and self.mcp_client.has_card_tools:
+            card_tools = [
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "card_get_user_cards",
+                        "description": "ユーザーが所有するカードの一覧を取得します",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "did": {
+                                    "type": "string",
+                                    "description": "ユーザーのDID"
+                                },
+                                "limit": {
+                                    "type": "integer",
+                                    "description": "取得するカード数の上限",
+                                    "default": 10
+                                }
+                            },
+                            "required": ["did"]
+                        }
+                    }
+                },
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "card_draw_card",
+                        "description": "ガチャを引いてカードを取得します",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "did": {
+                                    "type": "string",
+                                    "description": "ユーザーのDID"
+                                },
+                                "is_paid": {
+                                    "type": "boolean",
+                                    "description": "有料ガチャかどうか",
+                                    "default": False
+                                }
+                            },
+                            "required": ["did"]
+                        }
+                    }
+                },
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "card_analyze_collection",
+                        "description": "ユーザーのカードコレクションを分析します",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "did": {
+                                    "type": "string",
+                                    "description": "ユーザーのDID"
+                                }
+                            },
+                            "required": ["did"]
+                        }
+                    }
+                },
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "card_get_gacha_stats",
+                        "description": "ガチャの統計情報を取得します",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {}
+                        }
+                    }
+                }
+            ]
+            tools.extend(card_tools)
+
+        return tools
+
     async def generate_response(
         self,
@@ -159,7 +348,7 @@ Personality traits: {personality_desc}
 Recent memories:
 {memory_context}
 
-{system_prompt or 'Respond naturally based on your current state and memories. Be authentic to your mood and personality.'}"""
+{system_prompt or self.config_system_prompt or 'Respond naturally based on your current state and memories. Be authentic to your mood and personality.'}"""
 
         try:
             response = self.client.chat.completions.create(
@@ -175,6 +364,173 @@ Recent memories:
             self.logger.error(f"OpenAI generation failed: {e}")
             return self._fallback_response(persona_state)
 
+    async def chat_with_mcp(self, prompt: str, max_tokens: int = 2000, user_id: str = "user") -> str:
+        """Chat interface with MCP function calling support"""
+        if not self.mcp_client or not self.mcp_client.available:
+            return self.chat(prompt, max_tokens)
+
+        try:
+            # Prepare tools
+            tools = self._get_mcp_tools()
+
+            # Initial request with tools
+            response = self.client.chat.completions.create(
+                model=self.model,
+                messages=[
+                    {"role": "system", "content": self.config_system_prompt or "あなたは記憶システムと関係性データ、カードゲームシステムにアクセスできます。過去の会話、記憶、関係性について質問された時は、必ずツールを使用して正確な情報を取得してください。「覚えている」「前回」「以前」「について話した」「関係」などのキーワードがあれば積極的にツールを使用してください。カード関連の質問(「カード」「コレクション」「ガチャ」「見せて」「持っている」など)では、必ずcard_get_user_cardsやcard_analyze_collectionなどのツールを使用してください。didパラメータには現在会話しているユーザーのID(例:'syui')を使用してください。"},
+                    {"role": "user", "content": prompt}
+                ],
+                tools=tools,
+                tool_choice="auto",
+                max_tokens=max_tokens,
+                temperature=0.7
+            )
+
+            message = response.choices[0].message
+
+            # Handle tool calls
+            if message.tool_calls:
+                print(f"🔧 [OpenAI] {len(message.tool_calls)} tools called:")
+                for tc in message.tool_calls:
+                    print(f"   - {tc.function.name}({tc.function.arguments})")
+
+                messages = [
+                    {"role": "system", "content": self.config_system_prompt or "必要に応じて利用可能なツールを使って、より正確で詳細な回答を提供してください。"},
+                    {"role": "user", "content": prompt},
+                    {
+                        "role": "assistant",
+                        "content": message.content,
+                        "tool_calls": [tc.model_dump() for tc in message.tool_calls]
+                    }
+                ]
+
+                # Execute each tool call
+                for tool_call in message.tool_calls:
+                    print(f"🌐 [MCP] Executing {tool_call.function.name}...")
+                    tool_result = await self._execute_mcp_tool(tool_call, user_id)
+                    print(f"✅ [MCP] Result: {str(tool_result)[:100]}...")
+                    messages.append({
+                        "role": "tool",
+                        "tool_call_id": tool_call.id,
+                        "name": tool_call.function.name,
+                        "content": json.dumps(tool_result, ensure_ascii=False)
+                    })
+
+                # Get final response with tool outputs
+                final_response = self.client.chat.completions.create(
+                    model=self.model,
+                    messages=messages,
+                    max_tokens=max_tokens,
+                    temperature=0.7
+                )
+
+                return final_response.choices[0].message.content
+            else:
+                return message.content
+
+        except Exception as e:
+            self.logger.error(f"OpenAI MCP chat failed: {e}")
+            return f"申し訳ありません。エラーが発生しました: {e}"
+
+    async def _execute_mcp_tool(self, tool_call, context_user_id: str = "user") -> Dict[str, Any]:
+        """Execute MCP tool call"""
+        try:
+            import json
+            function_name = tool_call.function.name
+            arguments = json.loads(tool_call.function.arguments)
+
+            if function_name == "get_memories":
+                limit = arguments.get("limit", 5)
+                return await self.mcp_client.get_memories(limit) or {"error": "記憶の取得に失敗しました"}
+
+            elif function_name == "search_memories":
+                keywords = arguments.get("keywords", [])
+                return await self.mcp_client.search_memories(keywords) or {"error": "記憶の検索に失敗しました"}
+
+            elif function_name == "get_contextual_memories":
+                query = arguments.get("query", "")
+                limit = arguments.get("limit", 5)
+                return await self.mcp_client.get_contextual_memories(query, limit) or {"error": "文脈記憶の取得に失敗しました"}
+
+            elif function_name == "get_relationship":
+                # Fall back to the context user_id when the argument is missing
+                user_id = arguments.get("user_id", context_user_id)
+                if not user_id or user_id == "user":
+                    user_id = context_user_id
+                # Debug logging
+                print(f"🔍 [DEBUG] get_relationship called with user_id: '{user_id}' (context: '{context_user_id}')")
+                result = await self.mcp_client.get_relationship(user_id)
+                print(f"🔍 [DEBUG] MCP result: {result}")
+                return result or {"error": "関係性の取得に失敗しました"}
+
+            # ai.card tools
+            elif function_name == "card_get_user_cards":
+                did = arguments.get("did", context_user_id)
+                limit = arguments.get("limit", 10)
+                result = await self.mcp_client.card_get_user_cards(did, limit)
+                # Check if ai.card server is not running
+                if result and result.get("error") == "ai.card server is not running":
+                    return {
+                        "error": "ai.cardサーバーが起動していません",
+                        "message": "カードシステムを使用するには、別のターミナルで以下のコマンドを実行してください:\ncd card && ./start_server.sh"
+                    }
+                return result or {"error": "カード一覧の取得に失敗しました"}
+
+            elif function_name == "card_draw_card":
+                did = arguments.get("did", context_user_id)
+                is_paid = arguments.get("is_paid", False)
+                result = await self.mcp_client.card_draw_card(did, is_paid)
+                if result and result.get("error") == "ai.card server is not running":
+                    return {
+                        "error": "ai.cardサーバーが起動していません",
+                        "message": "カードシステムを使用するには、別のターミナルで以下のコマンドを実行してください:\ncd card && ./start_server.sh"
+                    }
+                return result or {"error": "ガチャに失敗しました"}
+
+            elif function_name == "card_analyze_collection":
+                did = arguments.get("did", context_user_id)
+                result = await self.mcp_client.card_analyze_collection(did)
+                if result and result.get("error") == "ai.card server is not running":
+                    return {
+                        "error": "ai.cardサーバーが起動していません",
+                        "message": "カードシステムを使用するには、別のターミナルで以下のコマンドを実行してください:\ncd card && ./start_server.sh"
+                    }
+                return result or {"error": "コレクション分析に失敗しました"}
+
+            elif function_name == "card_get_gacha_stats":
+                result = await self.mcp_client.card_get_gacha_stats()
+                if result and result.get("error") == "ai.card server is not running":
+                    return {
+                        "error": "ai.cardサーバーが起動していません",
+                        "message": "カードシステムを使用するには、別のターミナルで以下のコマンドを実行してください:\ncd card && ./start_server.sh"
+                    }
+                return result or {"error": "ガチャ統計の取得に失敗しました"}
+
+            else:
+                return {"error": f"未知のツール: {function_name}"}
+
+        except Exception as e:
+            return {"error": f"ツール実行エラー: {str(e)}"}
+
+    def chat(self, prompt: str, max_tokens: int = 2000) -> str:
+        """Simple chat interface without MCP tools"""
+        try:
+            messages = []
+            if self.config_system_prompt:
+                messages.append({"role": "system", "content": self.config_system_prompt})
+            messages.append({"role": "user", "content": prompt})
+
+            response = self.client.chat.completions.create(
+                model=self.model,
+                messages=messages,
+                max_tokens=max_tokens,
+                temperature=0.7
+            )
+            return response.choices[0].message.content
+        except Exception as e:
+            self.logger.error(f"OpenAI chat failed: {e}")
+            return "I'm having trouble connecting to the AI model."
+
     def _fallback_response(self, persona_state: PersonaState) -> str:
         """Fallback response based on mood"""
         mood_responses = {
@@ -187,9 +543,18 @@ Recent memories:
         return mood_responses.get(persona_state.current_mood, "I see.")
 
 
-def create_ai_provider(provider: str = "ollama", model: str = "qwen2.5", **kwargs) -> AIProvider:
+def create_ai_provider(provider: str = "ollama", model: Optional[str] = None, mcp_client=None, **kwargs) -> AIProvider:
     """Factory function to create AI providers"""
     if provider == "ollama":
+        # Get model from config if not provided
+        if model is None:
+            try:
+                from .config import Config
+                config = Config()
+                model = config.get('providers.ollama.default_model', 'qwen2.5')
+            except:
+                model = 'qwen2.5'  # Fallback to default
+
         # Try to get host from config if not provided in kwargs
         if 'host' not in kwargs:
             try:
@@ -202,6 +567,14 @@ def create_ai_provider(provider: str = "ollama", model: str = "qwen2.5", **kwarg
|
|||||||
pass # Use environment variable or default
|
pass # Use environment variable or default
|
||||||
return OllamaProvider(model=model, **kwargs)
|
return OllamaProvider(model=model, **kwargs)
|
||||||
elif provider == "openai":
|
elif provider == "openai":
|
||||||
return OpenAIProvider(model=model, **kwargs)
|
# Get model from config if not provided
|
||||||
|
if model is None:
|
||||||
|
try:
|
||||||
|
from .config import Config
|
||||||
|
config = Config()
|
||||||
|
model = config.get('providers.openai.default_model', 'gpt-4o-mini')
|
||||||
|
except:
|
||||||
|
model = 'gpt-4o-mini' # Fallback to default
|
||||||
|
return OpenAIProvider(model=model, mcp_client=mcp_client, **kwargs)
|
||||||
else:
|
else:
|
||||||
raise ValueError(f"Unknown provider: {provider}")
|
raise ValueError(f"Unknown provider: {provider}")
|
||||||
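With this change the factory resolves a missing model from config per provider, so callers can omit it entirely. A minimal usage sketch (the import path is assumed from the repo layout):

```python
from aigpt.ai_provider import create_ai_provider  # module path assumed

default_local = create_ai_provider("ollama")            # model pulled from config
openai_client = create_ai_provider("openai")            # falls back to gpt-4o-mini
pinned = create_ai_provider("ollama", model="qwen2.5")  # explicit override still wins
```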
804 src/aigpt/cli.py
@@ -2,15 +2,17 @@

 import typer
 from pathlib import Path
-from typing import Optional
+from typing import Optional, Dict, Any
 from rich.console import Console
 from rich.table import Table
 from rich.panel import Panel
 from datetime import datetime, timedelta
 import subprocess
 import shlex
+import httpx
+import asyncio
 from prompt_toolkit import prompt as ptk_prompt
-from prompt_toolkit.completion import WordCompleter
+from prompt_toolkit.completion import WordCompleter, Completer, Completion
 from prompt_toolkit.history import FileHistory
 from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
@@ -30,6 +32,275 @@ config = Config()
 DEFAULT_DATA_DIR = config.data_dir


+class MCPClient:
+    """Client for communicating with MCP server using config settings"""
+
+    def __init__(self, config: Optional[Config] = None):
+        self.config = config or Config()
+        self.enabled = self.config.get("mcp.enabled", True)
+        self.auto_detect = self.config.get("mcp.auto_detect", True)
+        self.servers = self.config.get("mcp.servers", {})
+        self.available = False
+        self.has_card_tools = False
+
+        if self.enabled:
+            self._check_availability()
+
+    def _check_availability(self):
+        """Check if any MCP server is available"""
+        self.available = False
+        if not self.enabled:
+            print(f"🚨 [MCP Client] MCP disabled in config")
+            return
+
+        print(f"🔍 [MCP Client] Checking availability...")
+        print(f"🔍 [MCP Client] Available servers: {list(self.servers.keys())}")
+
+        # Check ai.gpt server first (primary)
+        ai_gpt_config = self.servers.get("ai_gpt", {})
+        if ai_gpt_config:
+            base_url = ai_gpt_config.get("base_url", "http://localhost:8001")
+            timeout = ai_gpt_config.get("timeout", 5.0)
+
+            # Convert timeout to float if it's a string
+            if isinstance(timeout, str):
+                timeout = float(timeout)
+
+            print(f"🔍 [MCP Client] Testing ai_gpt server: {base_url} (timeout: {timeout})")
+            try:
+                import httpx
+                with httpx.Client(timeout=timeout) as client:
+                    response = client.get(f"{base_url}/docs")
+                    print(f"🔍 [MCP Client] ai_gpt response: {response.status_code}")
+                    if response.status_code == 200:
+                        self.available = True
+                        self.active_server = "ai_gpt"
+                        print(f"✅ [MCP Client] ai_gpt server connected successfully")
+
+                        # Check if card tools are available
+                        try:
+                            card_status = client.get(f"{base_url}/card_system_status")
+                            if card_status.status_code == 200:
+                                self.has_card_tools = True
+                                print(f"✅ [MCP Client] ai.card tools detected and available")
+                        except:
+                            print(f"🔍 [MCP Client] ai.card tools not available")
+
+                        return
+            except Exception as e:
+                print(f"🚨 [MCP Client] ai_gpt connection failed: {e}")
+        else:
+            print(f"🚨 [MCP Client] No ai_gpt config found")
+
+        # If auto_detect is enabled, try to find any available server
+        if self.auto_detect:
+            print(f"🔍 [MCP Client] Auto-detect enabled, trying other servers...")
+            for server_name, server_config in self.servers.items():
+                base_url = server_config.get("base_url", "")
+                timeout = server_config.get("timeout", 5.0)
+
+                # Convert timeout to float if it's a string
+                if isinstance(timeout, str):
+                    timeout = float(timeout)
+
+                print(f"🔍 [MCP Client] Testing {server_name}: {base_url} (timeout: {timeout})")
+                try:
+                    import httpx
+                    with httpx.Client(timeout=timeout) as client:
+                        response = client.get(f"{base_url}/docs")
+                        print(f"🔍 [MCP Client] {server_name} response: {response.status_code}")
+                        if response.status_code == 200:
+                            self.available = True
+                            self.active_server = server_name
+                            print(f"✅ [MCP Client] {server_name} server connected successfully")
+                            return
+                except Exception as e:
+                    print(f"🚨 [MCP Client] {server_name} connection failed: {e}")
+
+        print(f"🚨 [MCP Client] No MCP servers available")
+
+    def _get_url(self, endpoint_name: str) -> Optional[str]:
+        """Get full URL for an endpoint"""
+        if not self.available or not hasattr(self, 'active_server'):
+            print(f"🚨 [MCP Client] Not available or no active server")
+            return None
+
+        server_config = self.servers.get(self.active_server, {})
+        base_url = server_config.get("base_url", "")
+        endpoints = server_config.get("endpoints", {})
+        endpoint_path = endpoints.get(endpoint_name, "")
+
+        print(f"🔍 [MCP Client] Server: {self.active_server}")
+        print(f"🔍 [MCP Client] Base URL: {base_url}")
+        print(f"🔍 [MCP Client] Endpoints: {list(endpoints.keys())}")
+        print(f"🔍 [MCP Client] Looking for: {endpoint_name}")
+        print(f"🔍 [MCP Client] Found path: {endpoint_path}")
+
+        if base_url and endpoint_path:
+            return f"{base_url}{endpoint_path}"
+        return None
+
+    def _get_timeout(self) -> float:
+        """Get timeout for the active server"""
+        if not hasattr(self, 'active_server'):
+            return 5.0
+        server_config = self.servers.get(self.active_server, {})
+        timeout = server_config.get("timeout", 5.0)
+
+        # Convert timeout to float if it's a string
+        if isinstance(timeout, str):
+            timeout = float(timeout)
+
+        return timeout
+
+    async def get_memories(self, limit: int = 5) -> Optional[Dict[str, Any]]:
+        """Get memories via MCP"""
+        url = self._get_url("get_memories")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.get(f"{url}?limit={limit}")
+                return response.json() if response.status_code == 200 else None
+        except Exception:
+            return None
+
+    async def search_memories(self, keywords: list) -> Optional[Dict[str, Any]]:
+        """Search memories via MCP"""
+        url = self._get_url("search_memories")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.post(url, json={"keywords": keywords})
+                return response.json() if response.status_code == 200 else None
+        except Exception:
+            return None
+
+    async def get_contextual_memories(self, query: str, limit: int = 5) -> Optional[Dict[str, Any]]:
+        """Get contextual memories via MCP"""
+        url = self._get_url("get_contextual_memories")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.get(f"{url}?query={query}&limit={limit}")
+                return response.json() if response.status_code == 200 else None
+        except Exception:
+            return None
+
+    async def process_interaction(self, user_id: str, message: str) -> Optional[Dict[str, Any]]:
+        """Process interaction via MCP"""
+        url = self._get_url("process_interaction")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.post(url, json={"user_id": user_id, "message": message})
+                return response.json() if response.status_code == 200 else None
+        except Exception:
+            return None
+
+    async def get_relationship(self, user_id: str) -> Optional[Dict[str, Any]]:
+        """Get relationship via MCP"""
+        url = self._get_url("get_relationship")
+        print(f"🔍 [MCP Client] get_relationship URL: {url}")
+        if not url:
+            print(f"🚨 [MCP Client] No URL found for get_relationship")
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.get(f"{url}?user_id={user_id}")
+                print(f"🔍 [MCP Client] Response status: {response.status_code}")
+                if response.status_code == 200:
+                    result = response.json()
+                    print(f"🔍 [MCP Client] Response data: {result}")
+                    return result
+                else:
+                    print(f"🚨 [MCP Client] HTTP error: {response.status_code}")
+                    return None
+        except Exception as e:
+            print(f"🚨 [MCP Client] Exception: {e}")
+            return None
+
+    def get_server_info(self) -> Dict[str, Any]:
+        """Get information about the active MCP server"""
+        if not self.available or not hasattr(self, 'active_server'):
+            return {"available": False}
+
+        server_config = self.servers.get(self.active_server, {})
+        return {
+            "available": True,
+            "server_name": self.active_server,
+            "display_name": server_config.get("name", self.active_server),
+            "base_url": server_config.get("base_url", ""),
+            "timeout": server_config.get("timeout", 5.0),
+            "endpoints": len(server_config.get("endpoints", {})),
+            "has_card_tools": self.has_card_tools
+        }
+
+    # ai.card MCP methods
+    async def card_get_user_cards(self, did: str, limit: int = 10) -> Optional[Dict[str, Any]]:
+        """Get user's card collection via MCP"""
+        if not self.has_card_tools:
+            return {"error": "ai.card tools not available"}
+
+        url = self._get_url("card_get_user_cards")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.get(f"{url}?did={did}&limit={limit}")
+                return response.json() if response.status_code == 200 else None
+        except Exception as e:
+            return {"error": f"Failed to get cards: {str(e)}"}
+
+    async def card_draw_card(self, did: str, is_paid: bool = False) -> Optional[Dict[str, Any]]:
+        """Draw a card from gacha system via MCP"""
+        if not self.has_card_tools:
+            return {"error": "ai.card tools not available"}
+
+        url = self._get_url("card_draw_card")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.post(url, json={"did": did, "is_paid": is_paid})
+                return response.json() if response.status_code == 200 else None
+        except Exception as e:
+            return {"error": f"Failed to draw card: {str(e)}"}
+
+    async def card_analyze_collection(self, did: str) -> Optional[Dict[str, Any]]:
+        """Analyze card collection via MCP"""
+        if not self.has_card_tools:
+            return {"error": "ai.card tools not available"}
+
+        url = self._get_url("card_analyze_collection")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.get(f"{url}?did={did}")
+                return response.json() if response.status_code == 200 else None
+        except Exception as e:
+            return {"error": f"Failed to analyze collection: {str(e)}"}
+
+    async def card_get_gacha_stats(self) -> Optional[Dict[str, Any]]:
+        """Get gacha statistics via MCP"""
+        if not self.has_card_tools:
+            return {"error": "ai.card tools not available"}
+
+        url = self._get_url("card_get_gacha_stats")
+        if not url:
+            return None
+        try:
+            async with httpx.AsyncClient(timeout=self._get_timeout()) as client:
+                response = await client.get(url)
+                return response.json() if response.status_code == 200 else None
+        except Exception as e:
+            return {"error": f"Failed to get gacha stats: {str(e)}"}
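MCPClient probes each configured server's `/docs` route to decide availability, then resolves every endpoint path from config before issuing a call. A short sketch of driving it directly, under the assumption that an ai.gpt server is listening on the configured port:

```python
import asyncio

client = MCPClient()  # probes mcp.servers.* from config on construction
if client.available:
    info = client.get_server_info()
    print(info["display_name"], info["endpoints"])
    memories = asyncio.run(client.get_memories(limit=3))
    rel = asyncio.run(client.get_relationship("did:plc:example"))
```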
 def get_persona(data_dir: Optional[Path] = None) -> Persona:
     """Get or create persona instance"""
     if data_dir is None:
@@ -50,11 +321,30 @@ def chat(
     """Chat with the AI"""
     persona = get_persona(data_dir)

-    # Create AI provider if specified
+    # Get config instance
+    config_instance = Config()
+
+    # Get defaults from config if not provided
+    if not provider:
+        provider = config_instance.get("default_provider", "ollama")
+    if not model:
+        if provider == "ollama":
+            model = config_instance.get("providers.ollama.default_model", "qwen2.5")
+        else:
+            model = config_instance.get("providers.openai.default_model", "gpt-4o-mini")
+
+    # Create AI provider with MCP client if needed
     ai_provider = None
-    if provider and model:
+    mcp_client = None
+
     try:
-        ai_provider = create_ai_provider(provider=provider, model=model)
+        # Create MCP client for OpenAI provider
+        if provider == "openai":
+            mcp_client = MCPClient(config_instance)
+            if mcp_client.available:
+                console.print(f"[dim]MCP client connected to {mcp_client.active_server}[/dim]")
+
+        ai_provider = create_ai_provider(provider=provider, model=model, mcp_client=mcp_client)
         console.print(f"[dim]Using {provider} with model {model}[/dim]\n")
     except Exception as e:
         console.print(f"[yellow]Warning: Could not create AI provider: {e}[/yellow]")
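Note that only the OpenAI path is handed an MCP client here; the Ollama path still builds a plain provider. Condensed, the wiring above is (a sketch, not a drop-in replacement):

```python
mcp_client = MCPClient(config_instance) if provider == "openai" else None
ai_provider = create_ai_provider(provider=provider, model=model, mcp_client=mcp_client)
```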
@@ -67,7 +357,7 @@ def chat(
     relationship = persona.relationships.get_or_create_relationship(user_id)

     # Display response
-    console.print(Panel(response, title="AI Response", border_style="cyan"))
+    console.print(Panel(response, title="AI Response", border_style="cyan", expand=True, width=None))

     # Show relationship status
     status_color = "green" if relationship.transmission_enabled else "yellow"
@@ -226,10 +516,10 @@ def relationships(
 @app.command()
 def server(
     host: str = typer.Option("localhost", "--host", "-h", help="Server host"),
-    port: int = typer.Option(8000, "--port", "-p", help="Server port"),
+    port: int = typer.Option(8001, "--port", "-p", help="Server port"),
     data_dir: Optional[Path] = typer.Option(None, "--data-dir", "-d", help="Data directory"),
-    model: str = typer.Option("qwen2.5", "--model", "-m", help="AI model to use"),
+    model: Optional[str] = typer.Option(None, "--model", "-m", help="AI model to use"),
-    provider: str = typer.Option("ollama", "--provider", help="AI provider (ollama/openai)")
+    provider: Optional[str] = typer.Option(None, "--provider", help="AI provider (ollama/openai)")
 ):
     """Run MCP server for AI integration"""
     import uvicorn
@@ -239,26 +529,106 @@ def server(

     data_dir.mkdir(parents=True, exist_ok=True)

+    # Get configuration
+    config_instance = Config()
+
+    # Get defaults from config if not provided
+    if not provider:
+        provider = config_instance.get("default_provider", "ollama")
+    if not model:
+        if provider == "ollama":
+            model = config_instance.get("providers.ollama.default_model", "qwen3:latest")
+        elif provider == "openai":
+            model = config_instance.get("providers.openai.default_model", "gpt-4o-mini")
+        else:
+            model = "qwen3:latest"
+
     # Create MCP server
     mcp_server = AIGptMcpServer(data_dir)
     app_instance = mcp_server.app
+
+    # Get endpoint categories and count
+    total_routes = len(mcp_server.app.routes)
+    mcp_tools = total_routes - 2  # Exclude docs and openapi
+
+    # Categorize endpoints
+    memory_endpoints = ["get_memories", "search_memories", "get_contextual_memories", "create_summary", "create_core_memory"]
+    relationship_endpoints = ["get_relationship", "get_all_relationships", "process_interaction", "check_transmission_eligibility"]
+    system_endpoints = ["get_persona_state", "get_fortune", "run_maintenance"]
+    shell_endpoints = ["execute_command", "analyze_file", "write_file", "list_files", "read_project_file"]
+    remote_endpoints = ["remote_shell", "ai_bot_status", "isolated_python", "isolated_analysis"]
+    card_endpoints = ["card_get_user_cards", "card_draw_card", "card_get_card_details", "card_analyze_collection", "card_get_gacha_stats", "card_system_status"]
+
+    # Check if ai.card tools are available
+    has_card_tools = mcp_server.has_card
+
+    # Build endpoint summary
+    endpoint_summary = f"""🧠 Memory System: {len(memory_endpoints)} tools
+🤝 Relationships: {len(relationship_endpoints)} tools
+⚙️ System State: {len(system_endpoints)} tools
+💻 Shell Integration: {len(shell_endpoints)} tools
+🔒 Remote Execution: {len(remote_endpoints)} tools"""
+
+    if has_card_tools:
+        endpoint_summary += f"\n🎴 Card Game System: {len(card_endpoints)} tools"
+
+    # Check MCP client connectivity
+    mcp_client = MCPClient(config_instance)
+    mcp_status = "✅ MCP Client Ready" if mcp_client.available else "⚠️ MCP Client Disabled"
+
+    # Add ai.card status if available
+    card_status = ""
+    if has_card_tools:
+        card_status = "\n🎴 ai.card: ./card directory detected"
+
+    # Provider configuration check
+    provider_status = "✅ Ready"
+    if provider == "openai":
+        api_key = config_instance.get_api_key("openai")
+        if not api_key:
+            provider_status = "⚠️ No API Key"
+    elif provider == "ollama":
+        ollama_host = config_instance.get("providers.ollama.host", "http://localhost:11434")
+        provider_status = f"✅ {ollama_host}"
+
     console.print(Panel(
-        f"[cyan]Starting ai.gpt MCP Server[/cyan]\n\n"
-        f"Host: {host}:{port}\n"
-        f"Provider: {provider}\n"
-        f"Model: {model}\n"
-        f"Data: {data_dir}",
-        title="MCP Server",
-        border_style="green"
+        f"[bold cyan]🚀 ai.gpt MCP Server[/bold cyan]\n\n"
+        f"[green]Server Configuration:[/green]\n"
+        f"🌐 Address: http://{host}:{port}\n"
+        f"📋 API Docs: http://{host}:{port}/docs\n"
+        f"💾 Data Directory: {data_dir}\n\n"
+        f"[green]AI Provider Configuration:[/green]\n"
+        f"🤖 Provider: {provider} {provider_status}\n"
+        f"🧩 Model: {model}\n\n"
+        f"[green]MCP Tools Available ({mcp_tools} total):[/green]\n"
+        f"{endpoint_summary}\n\n"
+        f"[green]Integration Status:[/green]\n"
+        f"{mcp_status}\n"
+        f"🔗 Config: {config_instance.config_file}{card_status}\n\n"
+        f"[dim]Press Ctrl+C to stop server[/dim]",
+        title="🔧 MCP Server Startup",
+        border_style="green",
+        expand=True
     ))

     # Store provider info in app state for later use
     app_instance.state.ai_provider = provider
     app_instance.state.ai_model = model
+    app_instance.state.config = config_instance

-    # Run server
-    uvicorn.run(app_instance, host=host, port=port)
+    # Run server with better logging
+    try:
+        uvicorn.run(
+            app_instance,
+            host=host,
+            port=port,
+            log_level="info",
+            access_log=False  # Reduce noise
+        )
+    except KeyboardInterrupt:
+        console.print("\n[yellow]🛑 MCP Server stopped[/yellow]")
+    except Exception as e:
+        console.print(f"\n[red]❌ Server error: {e}[/red]")


 @app.command()
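The provider/model default lookup added here is repeated nearly verbatim in `chat`, `shell`, and `conversation`. A hypothetical helper that would consolidate it (not part of this diff, shown only as a refactoring sketch):

```python
def resolve_provider_and_model(config, provider=None, model=None):
    """Hypothetical consolidation of the repeated default lookup."""
    provider = provider or config.get("default_provider", "ollama")
    if not model:
        fallback = "qwen3:latest" if provider == "ollama" else "gpt-4o-mini"
        model = config.get(f"providers.{provider}.default_model", fallback)
    return provider, model
```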
@@ -379,15 +749,26 @@ def schedule(
 @app.command()
 def shell(
     data_dir: Optional[Path] = typer.Option(None, "--data-dir", "-d", help="Data directory"),
-    model: Optional[str] = typer.Option("qwen2.5", "--model", "-m", help="AI model to use"),
+    model: Optional[str] = typer.Option(None, "--model", "-m", help="AI model to use"),
-    provider: Optional[str] = typer.Option("ollama", "--provider", help="AI provider (ollama/openai)")
+    provider: Optional[str] = typer.Option(None, "--provider", help="AI provider (ollama/openai)")
 ):
     """Interactive shell mode (ai.shell)"""
     persona = get_persona(data_dir)

+    # Get defaults from config if not provided
+    config_instance = Config()
+    if not provider:
+        provider = config_instance.get("default_provider", "ollama")
+    if not model:
+        if provider == "ollama":
+            model = config_instance.get("providers.ollama.default_model", "qwen3:latest")
+        elif provider == "openai":
+            model = config_instance.get("providers.openai.default_model", "gpt-4o-mini")
+        else:
+            model = "qwen3:latest"  # fallback
+
     # Create AI provider
     ai_provider = None
-    if provider and model:
     try:
         ai_provider = create_ai_provider(provider=provider, model=model)
         console.print(f"[dim]Using {provider} with model {model}[/dim]\n")
@@ -411,24 +792,71 @@ def shell(
         border_style="green"
     ))

-    # Command completer with shell commands
-    builtin_commands = ['help', 'exit', 'quit', 'chat', 'status', 'clear', 'fortune', 'relationships', 'load']
-
-    # Add common shell commands
-    shell_commands = ['ls', 'cd', 'pwd', 'cat', 'echo', 'grep', 'find', 'mkdir', 'rm', 'cp', 'mv',
-                      'git', 'python', 'pip', 'npm', 'node', 'cargo', 'rustc', 'docker', 'kubectl']
-
-    # AI-specific commands
-    ai_commands = ['analyze', 'generate', 'explain', 'optimize', 'refactor', 'test', 'document']
-
-    # Remote execution commands (ai.bot integration)
-    remote_commands = ['remote', 'isolated', 'aibot-status']
-
-    # Project management commands (Claude Code-like)
-    project_commands = ['project-status', 'suggest-next', 'continuous']
-
-    all_commands = builtin_commands + ['!' + cmd for cmd in shell_commands] + ai_commands + remote_commands + project_commands
-    completer = WordCompleter(all_commands, ignore_case=True)
+    # Custom completer for ai.shell
+    class ShellCompleter(Completer):
+        def __init__(self):
+            # Slash commands (built-in)
+            self.slash_commands = [
+                '/help', '/exit', '/quit', '/status', '/clear', '/load',
+                '/fortune', '/relationships'
+            ]
+
+            # AI commands
+            self.ai_commands = [
+                '/analyze', '/generate', '/explain', '/optimize',
+                '/refactor', '/test', '/document'
+            ]
+
+            # Project commands
+            self.project_commands = [
+                '/project-status', '/suggest-next', '/continuous'
+            ]
+
+            # Remote commands
+            self.remote_commands = [
+                '/remote', '/isolated', '/aibot-status'
+            ]
+
+            # Shell commands (with ! prefix)
+            self.shell_commands = [
+                '!ls', '!cd', '!pwd', '!cat', '!echo', '!grep', '!find',
+                '!mkdir', '!rm', '!cp', '!mv', '!git', '!python', '!pip',
+                '!npm', '!node', '!cargo', '!rustc', '!docker', '!kubectl'
+            ]
+
+            # All commands combined
+            self.all_commands = (self.slash_commands + self.ai_commands +
+                                 self.project_commands + self.remote_commands +
+                                 self.shell_commands)
+
+        def get_completions(self, document, complete_event):
+            text = document.text_before_cursor
+
+            # For slash commands
+            if text.startswith('/'):
+                for cmd in self.all_commands:
+                    if cmd.startswith('/') and cmd.startswith(text):
+                        yield Completion(cmd, start_position=-len(text))
+
+            # For shell commands (!)
+            elif text.startswith('!'):
+                for cmd in self.shell_commands:
+                    if cmd.startswith(text):
+                        yield Completion(cmd, start_position=-len(text))
+
+            # For regular text (AI chat)
+            else:
+                # Common AI prompts
+                ai_prompts = [
+                    'analyze this file', 'generate code for', 'explain how to',
+                    'optimize this', 'refactor the', 'create tests for',
+                    'document this code', 'help me with'
+                ]
+                for prompt in ai_prompts:
+                    if prompt.startswith(text.lower()):
+                        yield Completion(prompt, start_position=-len(text))
+
+    completer = ShellCompleter()

     # History file
     actual_data_dir = data_dir if data_dir else DEFAULT_DATA_DIR
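ShellCompleter branches on the first character: `/` completes built-in and AI commands, `!` completes shell commands, and anything else suggests canned AI prompts. It can be exercised outside the prompt loop; a minimal sketch:

```python
from prompt_toolkit.document import Document

comp = ShellCompleter()
# complete_event is unused by get_completions, so None is fine here
print([c.text for c in comp.get_completions(Document("/an"), None)])  # ['/analyze']
print([c.text for c in comp.get_completions(Document("!gi"), None)])  # ['!git']
```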
@@ -452,43 +880,45 @@ def shell(
                 continue

             # Exit commands
-            if user_input.lower() in ['exit', 'quit']:
+            if user_input.lower() in ['exit', 'quit', '/exit', '/quit']:
                 console.print("[cyan]Goodbye![/cyan]")
                 break

             # Help command
-            elif user_input.lower() == 'help':
+            elif user_input.lower() in ['help', '/help', '/']:
                 console.print(Panel(
                     "[cyan]ai.shell Commands:[/cyan]\n\n"
-                    "  help - Show this help message\n"
-                    "  exit/quit - Exit the shell\n"
-                    "  !<command> - Execute a shell command\n"
-                    "  chat <message> - Explicitly chat with AI\n"
-                    "  status - Show AI status\n"
-                    "  fortune - Check AI fortune\n"
-                    "  relationships - List all relationships\n"
-                    "  clear - Clear the screen\n"
-                    "  load - Load aishell.md project file\n\n"
+                    "  /help, / - Show this help message\n"
+                    "  /exit, /quit - Exit the shell\n"
+                    "  !<command> - Execute a shell command (!ls, !git status)\n"
+                    "  /status - Show AI status\n"
+                    "  /fortune - Check AI fortune\n"
+                    "  /relationships - List all relationships\n"
+                    "  /clear - Clear the screen\n"
+                    "  /load - Load aishell.md project file\n\n"
                     "[cyan]AI Commands:[/cyan]\n"
-                    "  analyze <file> - Analyze a file with AI\n"
-                    "  generate <desc> - Generate code from description\n"
-                    "  explain <topic> - Get AI explanation\n\n"
+                    "  /analyze <file> - Analyze a file with AI\n"
+                    "  /generate <desc> - Generate code from description\n"
+                    "  /explain <topic> - Get AI explanation\n\n"
                     "[cyan]Remote Commands (ai.bot):[/cyan]\n"
-                    "  remote <command> - Execute command in isolated container\n"
-                    "  isolated <code> - Run Python code in isolated environment\n"
-                    "  aibot-status - Check ai.bot server status\n\n"
+                    "  /remote <command> - Execute command in isolated container\n"
+                    "  /isolated <code> - Run Python code in isolated environment\n"
+                    "  /aibot-status - Check ai.bot server status\n\n"
                     "[cyan]Project Commands (Claude Code-like):[/cyan]\n"
-                    "  project-status - Analyze current project structure\n"
-                    "  suggest-next - AI suggests next development steps\n"
-                    "  continuous - Enable continuous development mode\n\n"
+                    "  /project-status - Analyze current project structure\n"
+                    "  /suggest-next - AI suggests next development steps\n"
+                    "  /continuous - Enable continuous development mode\n\n"
-                    "You can also type any message to chat with AI\n"
-                    "Use Tab for command completion",
+                    "[cyan]Tab Completion:[/cyan]\n"
+                    "  /[Tab] - Show all slash commands\n"
+                    "  ![Tab] - Show all shell commands\n"
+                    "  <text>[Tab] - AI prompt suggestions\n\n"
+                    "Type any message to chat with AI",
                     title="Help",
                     border_style="yellow"
                 ))

             # Clear command
-            elif user_input.lower() == 'clear':
+            elif user_input.lower() in ['clear', '/clear']:
                 console.clear()

             # Shell command execution
@@ -517,7 +947,7 @@ def shell(
                     console.print(f"[red]Error executing command: {e}[/red]")

             # Status command
-            elif user_input.lower() == 'status':
+            elif user_input.lower() in ['status', '/status']:
                 state = persona.get_current_state()
                 console.print(f"\nMood: {state.current_mood}")
                 console.print(f"Fortune: {state.fortune.fortune_value}/10")
@@ -527,14 +957,14 @@ def shell(
                 console.print(f"Score: {rel.score:.2f} / {rel.threshold}")

             # Fortune command
-            elif user_input.lower() == 'fortune':
+            elif user_input.lower() in ['fortune', '/fortune']:
                 fortune = persona.fortune_system.get_today_fortune()
                 fortune_bar = "🌟" * fortune.fortune_value + "☆" * (10 - fortune.fortune_value)
                 console.print(f"\n{fortune_bar}")
                 console.print(f"Today's Fortune: {fortune.fortune_value}/10")

             # Relationships command
-            elif user_input.lower() == 'relationships':
+            elif user_input.lower() in ['relationships', '/relationships']:
                 if persona.relationships.relationships:
                     console.print("\n[cyan]Relationships:[/cyan]")
                     for user_id, rel in persona.relationships.relationships.items():
@@ -543,7 +973,7 @@ def shell(
                     console.print("[yellow]No relationships yet[/yellow]")

             # Load aishell.md command
-            elif user_input.lower() in ['load', 'load aishell.md', 'project']:
+            elif user_input.lower() in ['load', '/load', 'load aishell.md', 'project']:
                 # Try to find and load aishell.md
                 search_paths = [
                     Path.cwd() / "aishell.md",
@@ -572,10 +1002,10 @@ def shell(
                     console.print("Create aishell.md to define project goals and AI instructions.")

             # AI-powered commands
-            elif user_input.lower().startswith('analyze '):
+            elif user_input.lower().startswith(('analyze ', '/analyze ')):
                 # Analyze file or code with project context
-                target = user_input[8:].strip()
-                if os.path.exists(target):
+                target = user_input.split(' ', 1)[1].strip() if ' ' in user_input else ''
+                if target and os.path.exists(target):
                     console.print(f"[cyan]Analyzing {target} with project context...[/cyan]")
                     try:
                         developer = ContinuousDeveloper(Path.cwd(), ai_provider)
@@ -589,11 +1019,11 @@ def shell(
                         response, _ = persona.process_interaction(current_user, analysis_prompt, ai_provider)
                         console.print(f"\n[cyan]Analysis:[/cyan]\n{response}")
                 else:
-                    console.print(f"[red]File not found: {target}[/red]")
+                    console.print(f"[red]Usage: /analyze <file_path>[/red]")

-            elif user_input.lower().startswith('generate '):
+            elif user_input.lower().startswith(('generate ', '/generate ')):
                 # Generate code with project context
-                gen_prompt = user_input[9:].strip()
+                gen_prompt = user_input.split(' ', 1)[1].strip() if ' ' in user_input else ''
                 if gen_prompt:
                     console.print("[cyan]Generating code with project context...[/cyan]")
                     try:
@@ -605,8 +1035,10 @@ def shell(
                         full_prompt = f"Generate code for: {gen_prompt}. Provide clean, well-commented code."
                         response, _ = persona.process_interaction(current_user, full_prompt, ai_provider)
                         console.print(f"\n[cyan]Generated Code:[/cyan]\n{response}")
+                else:
+                    console.print(f"[red]Usage: /generate <description>[/red]")

-            elif user_input.lower().startswith('explain '):
+            elif user_input.lower().startswith(('explain ', '/explain ')):
                 # Explain code or concept
                 topic = user_input[8:].strip()
                 if topic:
@@ -807,7 +1239,8 @@ def config(
             console.print("[red]Error: key required for get action[/red]")
             return

-        val = config.get(key)
+        config_instance = Config()
+        val = config_instance.get(key)
         if val is None:
             console.print(f"[yellow]Key '{key}' not found[/yellow]")
         else:
@@ -818,13 +1251,14 @@ def config(
             console.print("[red]Error: key and value required for set action[/red]")
             return

+        config_instance = Config()
         # Special handling for sensitive keys
         if "password" in key or "api_key" in key:
             console.print(f"[cyan]Setting {key}[/cyan] = [dim]***hidden***[/dim]")
         else:
             console.print(f"[cyan]Setting {key}[/cyan] = [green]{value}[/green]")

-        config.set(key, value)
+        config_instance.set(key, value)
         console.print("[green]✓ Configuration saved[/green]")

     elif action == "delete":
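The `set` branch now instantiates its own Config rather than relying on a module-level instance, and the same API can be scripted directly. A sketch (key and value below are illustrative placeholders):

```python
config_instance = Config()
config_instance.set("providers.openai.api_key", "sk-placeholder")  # the CLI echoes this masked
config_instance.set("default_provider", "openai")
print(config_instance.get("default_provider"))  # "openai"
```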
@@ -832,7 +1266,8 @@ def config(
             console.print("[red]Error: key required for delete action[/red]")
             return

-        if config.delete(key):
+        config_instance = Config()
+        if config_instance.delete(key):
             console.print(f"[green]✓ Deleted {key}[/green]")
         else:
             console.print(f"[yellow]Key '{key}' not found[/yellow]")
@@ -917,5 +1352,232 @@ def import_chatgpt(
         raise typer.Exit(1)


+@app.command()
+def conversation(
+    user_id: str = typer.Argument(..., help="User ID (atproto DID)"),
+    data_dir: Optional[Path] = typer.Option(None, "--data-dir", "-d", help="Data directory"),
+    model: Optional[str] = typer.Option(None, "--model", "-m", help="AI model to use"),
+    provider: Optional[str] = typer.Option(None, "--provider", help="AI provider (ollama/openai)")
+):
+    """Simple continuous conversation mode with MCP support"""
+    # Initialize MCP client
+    mcp_client = MCPClient()
+    persona = get_persona(data_dir)
+
+    # Get defaults from config if not provided
+    config_instance = Config()
+    if not provider:
+        provider = config_instance.get("default_provider", "ollama")
+    if not model:
+        if provider == "ollama":
+            model = config_instance.get("providers.ollama.default_model", "qwen3:latest")
+        elif provider == "openai":
+            model = config_instance.get("providers.openai.default_model", "gpt-4o-mini")
+        else:
+            model = "qwen3:latest"  # fallback
+
+    # Create AI provider with MCP client
+    ai_provider = None
+    try:
+        ai_provider = create_ai_provider(provider=provider, model=model, mcp_client=mcp_client)
+        console.print(f"[dim]Using {provider} with model {model}[/dim]")
+    except Exception as e:
+        console.print(f"[yellow]Warning: Could not create AI provider: {e}[/yellow]")
+
+    # MCP status
+    server_info = mcp_client.get_server_info()
+    if server_info["available"]:
+        console.print(f"[green]✓ MCP Server connected: {server_info['display_name']}[/green]")
+        console.print(f"[dim]  URL: {server_info['base_url']} | Endpoints: {server_info['endpoints']}[/dim]")
+    else:
+        console.print(f"[yellow]⚠ MCP Server unavailable (running in local mode)[/yellow]")
+
+    # Welcome message
+    console.print(f"[cyan]Conversation with AI started. Type 'exit' or 'quit' to end.[/cyan]")
+    if server_info["available"]:
+        console.print(f"[dim]MCP commands: /memories, /search, /context, /relationship[/dim]\n")
+    else:
+        console.print()
+
+    # History for conversation mode
+    actual_data_dir = data_dir if data_dir else DEFAULT_DATA_DIR
+    history_file = actual_data_dir / "conversation_history.txt"
+    history = FileHistory(str(history_file))
+
+    # Custom completer for slash commands and phrases with MCP support
+    class ConversationCompleter(Completer):
+        def __init__(self, mcp_available: bool = False):
+            self.basic_commands = ['/status', '/help', '/clear', '/exit', '/quit']
+            self.mcp_commands = ['/memories', '/search', '/context', '/relationship'] if mcp_available else []
+            self.phrases = ['こんにちは', '今日は', 'ありがとう', 'お疲れ様',
+                            'どう思う?', 'どうですか?', '教えて', 'わかりました']
+            self.all_commands = self.basic_commands + self.mcp_commands
+
+        def get_completions(self, document, complete_event):
+            text = document.text_before_cursor
+
+            # If text starts with '/', complete slash commands
+            if text.startswith('/'):
+                for cmd in self.all_commands:
+                    if cmd.startswith(text):
+                        yield Completion(cmd, start_position=-len(text))
+            # For other text, complete phrases
+            else:
+                for phrase in self.phrases:
+                    if phrase.startswith(text):
+                        yield Completion(phrase, start_position=-len(text))
+
+    completer = ConversationCompleter(mcp_client.available)
+
+    while True:
+        try:
+            # Simple prompt with completion
+            user_input = ptk_prompt(
+                f"{user_id}> ",
+                history=history,
+                auto_suggest=AutoSuggestFromHistory(),
+                completer=completer
+            ).strip()
+
+            if not user_input:
+                continue
+
+            # Exit commands
+            if user_input.lower() in ['exit', 'quit', 'bye', '/exit', '/quit']:
+                console.print("[cyan]Conversation ended.[/cyan]")
+                break
+
+            # Slash commands
+            elif user_input.lower() == '/status':
+                state = persona.get_current_state()
+                rel = persona.relationships.get_or_create_relationship(user_id)
+                console.print(f"\n[cyan]AI Status:[/cyan]")
+                console.print(f"Mood: {state.current_mood}")
+                console.print(f"Fortune: {state.fortune.fortune_value}/10")
+                console.print(f"Relationship: {rel.status.value} ({rel.score:.2f})")
+                console.print("")
+                continue
+
+            elif user_input.lower() in ['/help', '/']:
+                console.print(f"\n[cyan]Conversation Commands:[/cyan]")
+                console.print("  /status - Show AI status and relationship")
+                console.print("  /help - Show this help")
+                console.print("  /clear - Clear screen")
+                console.print("  /exit - End conversation")
+                console.print("  / - Show commands (same as /help)")
+                if mcp_client.available:
+                    console.print(f"\n[cyan]MCP Commands:[/cyan]")
+                    console.print("  /memories - Show recent memories")
+                    console.print("  /search <keywords> - Search memories")
+                    console.print("  /context <query> - Get contextual memories")
+                    console.print("  /relationship - Show relationship via MCP")
+
+                    if mcp_client.has_card_tools:
+                        console.print(f"\n[cyan]Card Commands:[/cyan]")
+                        console.print("  AI can answer questions about cards:")
+                        console.print("  - 'Show my cards'")
+                        console.print("  - 'Draw a card' / 'Gacha'")
+                        console.print("  - 'Analyze my collection'")
+                        console.print("  - 'Show gacha stats'")
+                console.print("\n  <message> - Chat with AI\n")
+                continue
+
+            elif user_input.lower() == '/clear':
+                console.clear()
+                continue
+
+            # MCP Commands
+            elif user_input.lower() == '/memories' and mcp_client.available:
+                memories = asyncio.run(mcp_client.get_memories(limit=5))
+                if memories:
+                    console.print(f"\n[cyan]Recent Memories (via MCP):[/cyan]")
+                    for i, mem in enumerate(memories[:5], 1):
+                        console.print(f"  {i}. [{mem.get('level', 'unknown')}] {mem.get('content', '')[:100]}...")
+                    console.print("")
+                else:
+                    console.print("[yellow]No memories found[/yellow]")
+                continue
+
+            elif user_input.lower().startswith('/search ') and mcp_client.available:
+                query = user_input[8:].strip()
+                if query:
+                    keywords = query.split()
+                    results = asyncio.run(mcp_client.search_memories(keywords))
+                    if results:
+                        console.print(f"\n[cyan]Memory Search Results for '{query}' (via MCP):[/cyan]")
+                        for i, mem in enumerate(results[:5], 1):
+                            console.print(f"  {i}. {mem.get('content', '')[:100]}...")
+                        console.print("")
+                    else:
+                        console.print(f"[yellow]No memories found for '{query}'[/yellow]")
+                else:
+                    console.print("[red]Usage: /search <keywords>[/red]")
+                continue
+
+            elif user_input.lower().startswith('/context ') and mcp_client.available:
+                query = user_input[9:].strip()
+                if query:
+                    results = asyncio.run(mcp_client.get_contextual_memories(query, limit=5))
+                    if results:
+                        console.print(f"\n[cyan]Contextual Memories for '{query}' (via MCP):[/cyan]")
+                        for i, mem in enumerate(results[:5], 1):
+                            console.print(f"  {i}. {mem.get('content', '')[:100]}...")
+                        console.print("")
+                    else:
+                        console.print(f"[yellow]No contextual memories found for '{query}'[/yellow]")
+                else:
+                    console.print("[red]Usage: /context <query>[/red]")
+                continue
+
+            elif user_input.lower() == '/relationship' and mcp_client.available:
+                rel_data = asyncio.run(mcp_client.get_relationship(user_id))
+                if rel_data:
+                    console.print(f"\n[cyan]Relationship (via MCP):[/cyan]")
+                    console.print(f"Status: {rel_data.get('status', 'unknown')}")
+                    console.print(f"Score: {rel_data.get('score', 0):.2f}")
+                    console.print(f"Interactions: {rel_data.get('total_interactions', 0)}")
+                    console.print("")
+                else:
+                    console.print("[yellow]No relationship data found[/yellow]")
+                continue
+
+            # Process interaction - try MCP first, fallback to local
+            if mcp_client.available:
+                try:
+                    mcp_result = asyncio.run(mcp_client.process_interaction(user_id, user_input))
+                    if mcp_result and 'response' in mcp_result:
+                        response = mcp_result['response']
+                        console.print(f"AI> {response} [dim](via MCP)[/dim]\n")
+                        continue
+                except Exception as e:
+                    console.print(f"[yellow]MCP failed, using local: {e}[/yellow]")
+
+            # Fallback to local processing
+            response, relationship_delta = persona.process_interaction(user_id, user_input, ai_provider)
+
+            # Simple AI response display (no Panel, no extra info)
+            console.print(f"AI> {response}\n")
+
+        except KeyboardInterrupt:
+            console.print("\n[yellow]Use 'exit' or 'quit' to end conversation[/yellow]")
+        except EOFError:
+            console.print("\n[cyan]Conversation ended.[/cyan]")
+            break
+        except Exception as e:
+            console.print(f"[red]Error: {e}[/red]")
+
+
+# Alias for conversation command
+@app.command()
+def conv(
+    user_id: str = typer.Argument(..., help="User ID (atproto DID)"),
+    data_dir: Optional[Path] = typer.Option(None, "--data-dir", "-d", help="Data directory"),
+    model: Optional[str] = typer.Option(None, "--model", "-m", help="AI model to use"),
+    provider: Optional[str] = typer.Option(None, "--provider", help="AI provider (ollama/openai)")
+):
+    """Alias for conversation command"""
+    conversation(user_id, data_dir, model, provider)
+
+
 if __name__ == "__main__":
     app()
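The conversation loop tries the MCP server first and only falls back to the local persona when the call fails or returns no response. The control flow, condensed into a sketch:

```python
def respond(user_id: str, text: str) -> str:
    """Sketch of the MCP-first, local-fallback pattern used above."""
    if mcp_client.available:
        try:
            result = asyncio.run(mcp_client.process_interaction(user_id, text))
            if result and "response" in result:
                return result["response"]  # answered via MCP
        except Exception:
            pass  # fall through to local processing
    response, _ = persona.process_interaction(user_id, text, ai_provider)
    return response
```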
@@ -41,11 +41,50 @@ class Config:
             "providers": {
                 "openai": {
                     "api_key": None,
-                    "default_model": "gpt-4o-mini"
+                    "default_model": "gpt-4o-mini",
+                    "system_prompt": None
                 },
                 "ollama": {
                     "host": "http://localhost:11434",
-                    "default_model": "qwen2.5"
+                    "default_model": "qwen3:latest",
+                    "system_prompt": None
+                }
+            },
+            "mcp": {
+                "enabled": True,
+                "auto_detect": True,
+                "servers": {
+                    "ai_gpt": {
+                        "name": "ai.gpt MCP Server",
+                        "base_url": "http://localhost:8001",
+                        "endpoints": {
+                            "get_memories": "/get_memories",
+                            "search_memories": "/search_memories",
+                            "get_contextual_memories": "/get_contextual_memories",
+                            "process_interaction": "/process_interaction",
+                            "get_relationship": "/get_relationship",
+                            "get_all_relationships": "/get_all_relationships",
+                            "get_persona_state": "/get_persona_state",
+                            "get_fortune": "/get_fortune",
+                            "run_maintenance": "/run_maintenance",
+                            "execute_command": "/execute_command",
+                            "analyze_file": "/analyze_file",
+                            "remote_shell": "/remote_shell",
+                            "ai_bot_status": "/ai_bot_status"
+                        },
+                        "timeout": 10.0
+                    },
+                    "ai_card": {
+                        "name": "ai.card MCP Server",
+                        "base_url": "http://localhost:8000",
+                        "endpoints": {
+                            "health": "/health",
+                            "get_user_cards": "/api/cards/user",
+                            "gacha": "/api/gacha",
+                            "sync_atproto": "/api/sync"
+                        },
+                        "timeout": 5.0
+                    }
                 }
             },
             "atproto": {
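These defaults are what MCPClient reads through dotted keys. A sketch of the lookups, matching the paths defined above and assuming Config.get resolves arbitrarily deep dotted paths the same way it does for provider settings:

```python
config = Config()
config.get("mcp.servers.ai_gpt.base_url")                # "http://localhost:8001"
config.get("mcp.servers.ai_gpt.endpoints.get_memories")  # "/get_memories"
config.get("mcp.servers.ai_card.timeout")                # 5.0
```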
@@ -34,8 +34,24 @@ class AIGptMcpServer:
         # Create MCP server with FastAPI app
         self.server = FastApiMCP(self.app)

+        # Check if ai.card exists
+        self.card_dir = Path("./card")
+        self.has_card = self.card_dir.exists() and self.card_dir.is_dir()
+
+        # Check if ai.log exists
+        self.log_dir = Path("./log")
+        self.has_log = self.log_dir.exists() and self.log_dir.is_dir()
+
         self._register_tools()

+        # Register ai.card tools if available
+        if self.has_card:
+            self._register_card_tools()
+
+        # Register ai.log tools if available
+        if self.has_log:
+            self._register_log_tools()
+
     def _register_tools(self):
         """Register all MCP tools"""
@@ -485,6 +501,148 @@ class AIGptMcpServer:
             python_command = f'python3 -c "{code.replace('"', '\\"')}"'
             return await remote_shell(python_command, ai_bot_url)
 
+    def _register_card_tools(self):
+        """Register ai.card MCP tools when card directory exists"""
+        logger.info("Registering ai.card tools...")
+
+        @self.app.get("/card_get_user_cards", operation_id="card_get_user_cards")
+        async def card_get_user_cards(did: str, limit: int = 10) -> Dict[str, Any]:
+            """Get user's card collection from ai.card system"""
+            logger.info(f"🎴 [ai.card] Getting cards for did: {did}, limit: {limit}")
+            try:
+                url = "http://localhost:8000/get_user_cards"
+                async with httpx.AsyncClient(timeout=10.0) as client:
+                    logger.info(f"🎴 [ai.card] Calling: {url}")
+                    response = await client.get(
+                        url,
+                        params={"did": did, "limit": limit}
+                    )
+                    if response.status_code == 200:
+                        cards = response.json()
+                        return {
+                            "cards": cards,
+                            "count": len(cards),
+                            "did": did
+                        }
+                    else:
+                        return {"error": f"Failed to get cards: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.card server is not running",
+                    "hint": "Please start ai.card server: cd card && ./start_server.sh",
+                    "details": "Connection refused to http://localhost:8000"
+                }
+            except Exception as e:
+                return {"error": f"ai.card connection failed: {str(e)}"}
+
+        @self.app.post("/card_draw_card", operation_id="card_draw_card")
+        async def card_draw_card(did: str, is_paid: bool = False) -> Dict[str, Any]:
+            """Draw a card from gacha system"""
+            try:
+                async with httpx.AsyncClient(timeout=10.0) as client:
+                    response = await client.post(
+                        f"http://localhost:8000/draw_card?did={did}&is_paid={is_paid}"
+                    )
+                    if response.status_code == 200:
+                        return response.json()
+                    else:
+                        return {"error": f"Failed to draw card: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.card server is not running",
+                    "hint": "Please start ai.card server: cd card && ./start_server.sh",
+                    "details": "Connection refused to http://localhost:8000"
+                }
+            except Exception as e:
+                return {"error": f"ai.card connection failed: {str(e)}"}
+
+        @self.app.get("/card_get_card_details", operation_id="card_get_card_details")
+        async def card_get_card_details(card_id: int) -> Dict[str, Any]:
+            """Get detailed information about a specific card"""
+            try:
+                async with httpx.AsyncClient(timeout=10.0) as client:
+                    response = await client.get(
+                        "http://localhost:8000/get_card_details",
+                        params={"card_id": card_id}
+                    )
+                    if response.status_code == 200:
+                        return response.json()
+                    else:
+                        return {"error": f"Failed to get card details: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.card server is not running",
+                    "hint": "Please start ai.card server: cd card && ./start_server.sh",
+                    "details": "Connection refused to http://localhost:8000"
+                }
+            except Exception as e:
+                return {"error": f"ai.card connection failed: {str(e)}"}
+
+        @self.app.get("/card_analyze_collection", operation_id="card_analyze_collection")
+        async def card_analyze_collection(did: str) -> Dict[str, Any]:
+            """Analyze user's card collection statistics"""
+            try:
+                async with httpx.AsyncClient(timeout=10.0) as client:
+                    response = await client.get(
+                        "http://localhost:8000/analyze_card_collection",
+                        params={"did": did}
+                    )
+                    if response.status_code == 200:
+                        return response.json()
+                    else:
+                        return {"error": f"Failed to analyze collection: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.card server is not running",
+                    "hint": "Please start ai.card server: cd card && ./start_server.sh",
+                    "details": "Connection refused to http://localhost:8000"
+                }
+            except Exception as e:
+                return {"error": f"ai.card connection failed: {str(e)}"}
+
+        @self.app.get("/card_get_gacha_stats", operation_id="card_get_gacha_stats")
+        async def card_get_gacha_stats() -> Dict[str, Any]:
+            """Get gacha system statistics"""
+            try:
+                async with httpx.AsyncClient(timeout=10.0) as client:
+                    response = await client.get("http://localhost:8000/get_gacha_stats")
+                    if response.status_code == 200:
+                        return response.json()
+                    else:
+                        return {"error": f"Failed to get gacha stats: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.card server is not running",
+                    "hint": "Please start ai.card server: cd card && ./start_server.sh",
+                    "details": "Connection refused to http://localhost:8000"
+                }
+            except Exception as e:
+                return {"error": f"ai.card connection failed: {str(e)}"}
+
+        @self.app.get("/card_system_status", operation_id="card_system_status")
+        async def card_system_status() -> Dict[str, Any]:
+            """Check ai.card system status"""
+            try:
+                async with httpx.AsyncClient(timeout=5.0) as client:
+                    response = await client.get("http://localhost:8000/health")
+                    if response.status_code == 200:
+                        return {
+                            "status": "online",
+                            "health": response.json(),
+                            "card_dir": str(self.card_dir)
+                        }
+                    else:
+                        return {
+                            "status": "error",
+                            "error": f"Health check failed: {response.status_code}"
+                        }
+            except Exception as e:
+                return {
+                    "status": "offline",
+                    "error": f"ai.card is not running: {str(e)}",
+                    "hint": "Start ai.card with: cd card && ./start_server.sh"
+                }
+
         @self.app.post("/isolated_analysis", operation_id="isolated_analysis")
         async def isolated_analysis(file_path: str, analysis_type: str = "structure", ai_bot_url: str = "http://localhost:8080") -> Dict[str, Any]:
             """Perform code analysis in isolated environment"""
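The card tools above are thin HTTP proxies: the ai.gpt server forwards each call to the standalone ai.card server on port 8000 and wraps connection failures in a structured error with a start-up hint. Since the tools are plain FastAPI routes before being mounted as MCP operations, they can be smoke-tested over HTTP. A sketch, assuming the ai.gpt server listens on port 8001 and with `did:plc:example` as a placeholder DID:

```python
import asyncio
import httpx

async def main():
    # Call the proxied card tool directly on the ai.gpt server (assumed port 8001).
    async with httpx.AsyncClient(base_url="http://localhost:8001") as client:
        r = await client.get("/card_get_user_cards",
                             params={"did": "did:plc:example", "limit": 5})
        # Either {"cards": [...], "count": N, "did": ...} or the error/hint dict.
        print(r.json())

asyncio.run(main())
```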
@@ -502,6 +660,353 @@ class AIGptMcpServer:
         # Mount MCP server
         self.server.mount()
 
+    def _register_log_tools(self):
+        """Register ai.log MCP tools when log directory exists"""
+        logger.info("Registering ai.log tools...")
+
+        @self.app.post("/log_create_post", operation_id="log_create_post")
+        async def log_create_post(title: str, content: str, tags: Optional[List[str]] = None, slug: Optional[str] = None) -> Dict[str, Any]:
+            """Create a new blog post in ai.log system"""
+            logger.info(f"📝 [ai.log] Creating post: {title}")
+            try:
+                async with httpx.AsyncClient(timeout=30.0) as client:
+                    response = await client.post(
+                        "http://localhost:8002/mcp/tools/call",
+                        json={
+                            "jsonrpc": "2.0",
+                            "id": "log_create_post",
+                            "method": "call_tool",
+                            "params": {
+                                "name": "create_blog_post",
+                                "arguments": {
+                                    "title": title,
+                                    "content": content,
+                                    "tags": tags or [],
+                                    "slug": slug
+                                }
+                            }
+                        }
+                    )
+                    if response.status_code == 200:
+                        result = response.json()
+                        if result.get("error"):
+                            return {"error": result["error"]["message"]}
+                        return {
+                            "success": True,
+                            "message": "Blog post created successfully",
+                            "title": title,
+                            "tags": tags or []
+                        }
+                    else:
+                        return {"error": f"Failed to create post: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.log server is not running",
+                    "hint": "Please start ai.log server: cd log && cargo run -- mcp --port 8002",
+                    "details": "Connection refused to http://localhost:8002"
+                }
+            except Exception as e:
+                return {"error": f"ai.log connection failed: {str(e)}"}
+
+        @self.app.get("/log_list_posts", operation_id="log_list_posts")
+        async def log_list_posts(limit: int = 10, offset: int = 0) -> Dict[str, Any]:
+            """List blog posts from ai.log system"""
+            logger.info(f"📝 [ai.log] Listing posts: limit={limit}, offset={offset}")
+            try:
+                async with httpx.AsyncClient(timeout=10.0) as client:
+                    response = await client.post(
+                        "http://localhost:8002/mcp/tools/call",
+                        json={
+                            "jsonrpc": "2.0",
+                            "id": "log_list_posts",
+                            "method": "call_tool",
+                            "params": {
+                                "name": "list_blog_posts",
+                                "arguments": {
+                                    "limit": limit,
+                                    "offset": offset
+                                }
+                            }
+                        }
+                    )
+                    if response.status_code == 200:
+                        result = response.json()
+                        if result.get("error"):
+                            return {"error": result["error"]["message"]}
+                        return result.get("result", {})
+                    else:
+                        return {"error": f"Failed to list posts: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.log server is not running",
+                    "hint": "Please start ai.log server: cd log && cargo run -- mcp --port 8002",
+                    "details": "Connection refused to http://localhost:8002"
+                }
+            except Exception as e:
+                return {"error": f"ai.log connection failed: {str(e)}"}
+
+        @self.app.post("/log_build_blog", operation_id="log_build_blog")
+        async def log_build_blog(enable_ai: bool = True, translate: bool = False) -> Dict[str, Any]:
+            """Build the static blog with AI features"""
+            logger.info(f"📝 [ai.log] Building blog: AI={enable_ai}, translate={translate}")
+            try:
+                async with httpx.AsyncClient(timeout=60.0) as client:
+                    response = await client.post(
+                        "http://localhost:8002/mcp/tools/call",
+                        json={
+                            "jsonrpc": "2.0",
+                            "id": "log_build_blog",
+                            "method": "call_tool",
+                            "params": {
+                                "name": "build_blog",
+                                "arguments": {
+                                    "enable_ai": enable_ai,
+                                    "translate": translate
+                                }
+                            }
+                        }
+                    )
+                    if response.status_code == 200:
+                        result = response.json()
+                        if result.get("error"):
+                            return {"error": result["error"]["message"]}
+                        return {
+                            "success": True,
+                            "message": "Blog built successfully",
+                            "ai_enabled": enable_ai,
+                            "translation_enabled": translate
+                        }
+                    else:
+                        return {"error": f"Failed to build blog: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.log server is not running",
+                    "hint": "Please start ai.log server: cd log && cargo run -- mcp --port 8002",
+                    "details": "Connection refused to http://localhost:8002"
+                }
+            except Exception as e:
+                return {"error": f"ai.log connection failed: {str(e)}"}
+
+        @self.app.get("/log_get_post", operation_id="log_get_post")
+        async def log_get_post(slug: str) -> Dict[str, Any]:
+            """Get blog post content by slug"""
+            logger.info(f"📝 [ai.log] Getting post: {slug}")
+            try:
+                async with httpx.AsyncClient(timeout=10.0) as client:
+                    response = await client.post(
+                        "http://localhost:8002/mcp/tools/call",
+                        json={
+                            "jsonrpc": "2.0",
+                            "id": "log_get_post",
+                            "method": "call_tool",
+                            "params": {
+                                "name": "get_post_content",
+                                "arguments": {
+                                    "slug": slug
+                                }
+                            }
+                        }
+                    )
+                    if response.status_code == 200:
+                        result = response.json()
+                        if result.get("error"):
+                            return {"error": result["error"]["message"]}
+                        return result.get("result", {})
+                    else:
+                        return {"error": f"Failed to get post: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.log server is not running",
+                    "hint": "Please start ai.log server: cd log && cargo run -- mcp --port 8002",
+                    "details": "Connection refused to http://localhost:8002"
+                }
+            except Exception as e:
+                return {"error": f"ai.log connection failed: {str(e)}"}
+
+        @self.app.get("/log_system_status", operation_id="log_system_status")
+        async def log_system_status() -> Dict[str, Any]:
+            """Check ai.log system status"""
+            try:
+                async with httpx.AsyncClient(timeout=5.0) as client:
+                    response = await client.get("http://localhost:8002/health")
+                    if response.status_code == 200:
+                        return {
+                            "status": "online",
+                            "health": response.json(),
+                            "log_dir": str(self.log_dir)
+                        }
+                    else:
+                        return {
+                            "status": "error",
+                            "error": f"Health check failed: {response.status_code}"
+                        }
+            except Exception as e:
+                return {
+                    "status": "offline",
+                    "error": f"ai.log is not running: {str(e)}",
+                    "hint": "Start ai.log with: cd log && cargo run -- mcp --port 8002"
+                }
+
+        @self.app.post("/log_ai_content", operation_id="log_ai_content")
+        async def log_ai_content(user_id: str, topic: str = "daily thoughts") -> Dict[str, Any]:
+            """Generate AI content for blog from memories and create post"""
+            logger.info(f"📝 [ai.log] Generating AI content for: {topic}")
+            try:
+                # Get contextual memories for the topic
+                memories = await get_contextual_memories(topic, limit=5)
+
+                # Get AI provider
+                ai_provider = create_ai_provider()
+
+                # Build content from memories
+                memory_context = ""
+                for group_name, mem_list in memories.items():
+                    memory_context += f"\n## {group_name}\n"
+                    for mem in mem_list:
+                        memory_context += f"- {mem['content']}\n"
+
+                # Generate blog content
+                prompt = f"""Based on the following memories and context, write a thoughtful blog post about {topic}.
+
+Memory Context:
+{memory_context}
+
+Please write a well-structured blog post in Markdown format with:
+1. An engaging title
+2. Clear structure with headings
+3. Personal insights based on the memories
+4. A conclusion that ties everything together
+
+Focus on creating content that reflects personal growth and learning from these experiences."""
+
+                content = ai_provider.generate_response(prompt, "You are a thoughtful blogger who creates insightful content.")
+
+                # Extract title from content (first heading)
+                lines = content.split('\n')
+                title = topic.title()
+                for line in lines:
+                    if line.startswith('# '):
+                        title = line[2:].strip()
+                        content = '\n'.join(lines[1:]).strip()  # Remove title from content
+                        break
+
+                # Create the blog post
+                return await log_create_post(
+                    title=title,
+                    content=content,
+                    tags=["AI", "thoughts", "daily"]
+                )
+
+            except Exception as e:
+                return {"error": f"Failed to generate AI content: {str(e)}"}
+
+        @self.app.post("/log_translate_document", operation_id="log_translate_document")
+        async def log_translate_document(
+            input_file: str,
+            target_lang: str,
+            source_lang: Optional[str] = None,
+            output_file: Optional[str] = None,
+            model: str = "qwen2.5:latest",
+            ollama_endpoint: str = "http://localhost:11434"
+        ) -> Dict[str, Any]:
+            """Translate markdown documents using Ollama via ai.log"""
+            logger.info(f"🌍 [ai.log] Translating document: {input_file} -> {target_lang}")
+            try:
+                async with httpx.AsyncClient(timeout=60.0) as client:  # Longer timeout for translation
+                    response = await client.post(
+                        "http://localhost:8002/mcp/tools/call",
+                        json={
+                            "jsonrpc": "2.0",
+                            "id": "log_translate_document",
+                            "method": "call_tool",
+                            "params": {
+                                "name": "translate_document",
+                                "arguments": {
+                                    "input_file": input_file,
+                                    "target_lang": target_lang,
+                                    "source_lang": source_lang,
+                                    "output_file": output_file,
+                                    "model": model,
+                                    "ollama_endpoint": ollama_endpoint
+                                }
+                            }
+                        }
+                    )
+                    if response.status_code == 200:
+                        result = response.json()
+                        if result.get("error"):
+                            return {"error": result["error"]["message"]}
+                        return {
+                            "success": True,
+                            "message": "Document translated successfully",
+                            "input_file": input_file,
+                            "target_lang": target_lang,
+                            "output_file": result.get("result", {}).get("output_file")
+                        }
+                    else:
+                        return {"error": f"Failed to translate document: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.log server is not running",
+                    "hint": "Please start ai.log server: cd log && cargo run -- mcp --port 8002",
+                    "details": "Connection refused to http://localhost:8002"
+                }
+            except Exception as e:
+                return {"error": f"ai.log translation failed: {str(e)}"}
+
+        @self.app.post("/log_generate_docs", operation_id="log_generate_docs")
+        async def log_generate_docs(
+            doc_type: str,  # "readme", "api", "structure", "changelog"
+            source_path: Optional[str] = None,
+            output_path: Optional[str] = None,
+            with_ai: bool = True,
+            include_deps: bool = False,
+            format_type: str = "markdown"
+        ) -> Dict[str, Any]:
+            """Generate documentation using ai.log's doc generation features"""
+            logger.info(f"📚 [ai.log] Generating {doc_type} documentation")
+            try:
+                async with httpx.AsyncClient(timeout=30.0) as client:
+                    response = await client.post(
+                        "http://localhost:8002/mcp/tools/call",
+                        json={
+                            "jsonrpc": "2.0",
+                            "id": "log_generate_docs",
+                            "method": "call_tool",
+                            "params": {
+                                "name": "generate_documentation",
+                                "arguments": {
+                                    "doc_type": doc_type,
+                                    "source_path": source_path or ".",
+                                    "output_path": output_path,
+                                    "with_ai": with_ai,
+                                    "include_deps": include_deps,
+                                    "format_type": format_type
+                                }
+                            }
+                        }
+                    )
+                    if response.status_code == 200:
+                        result = response.json()
+                        if result.get("error"):
+                            return {"error": result["error"]["message"]}
+                        return {
+                            "success": True,
+                            "message": f"{doc_type.title()} documentation generated successfully",
+                            "doc_type": doc_type,
+                            "output_path": result.get("result", {}).get("output_path")
+                        }
+                    else:
+                        return {"error": f"Failed to generate documentation: {response.status_code}"}
+            except httpx.ConnectError:
+                return {
+                    "error": "ai.log server is not running",
+                    "hint": "Please start ai.log server: cd log && cargo run -- mcp --port 8002",
+                    "details": "Connection refused to http://localhost:8002"
+                }
+            except Exception as e:
+                return {"error": f"ai.log documentation generation failed: {str(e)}"}
+
     def get_server(self) -> FastApiMCP:
         """Get the FastAPI MCP server instance"""
         return self.server
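Every log tool above posts the same JSON-RPC 2.0 envelope to ai.log's `/mcp/tools/call` endpoint and unwraps `result`/`error` identically; only the tool name, arguments, and timeout vary. The repetition could be factored into a single helper along these lines (`call_log_tool` is a hypothetical name, not part of the commit):

```python
from typing import Any, Dict
import httpx

AI_LOG_URL = "http://localhost:8002/mcp/tools/call"

async def call_log_tool(name: str, arguments: Dict[str, Any], timeout: float = 30.0) -> Dict[str, Any]:
    """Post the shared JSON-RPC envelope to ai.log and unwrap the response."""
    payload = {
        "jsonrpc": "2.0",
        "id": name,
        "method": "call_tool",
        "params": {"name": name, "arguments": arguments},
    }
    async with httpx.AsyncClient(timeout=timeout) as client:
        response = await client.post(AI_LOG_URL, json=payload)
        if response.status_code != 200:
            return {"error": f"Failed to call {name}: {response.status_code}"}
        result = response.json()
        if result.get("error"):
            return {"error": result["error"]["message"]}
        return result.get("result", {})
```

Each endpoint would then reduce to a single call, e.g. `await call_log_tool("list_blog_posts", {"limit": 10, "offset": 0})`.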
@@ -133,7 +133,15 @@ FORTUNE: {state.fortune.fortune_value}/10
         if context_parts:
             context_prompt += "RELEVANT CONTEXT:\n" + "\n\n".join(context_parts) + "\n\n"
 
-        context_prompt += f"""Respond to this message while staying true to your personality and the established relationship context:
+        context_prompt += f"""IMPORTANT: You have access to the following tools:
+- Memory tools: get_memories, search_memories, get_contextual_memories
+- Relationship tools: get_relationship
+- Card game tools: card_get_user_cards, card_draw_card, card_analyze_collection
+
+When asked about cards, collections, or anything card-related, YOU MUST use the card tools.
+For "カードコレクションを見せて" or similar requests, use card_get_user_cards with did='{user_id}'.
+
+Respond to this message while staying true to your personality and the established relationship context:
 
 User: {current_message}
 
@@ -160,7 +168,12 @@ AI:"""
 
         # Generate response using AI with full context
         try:
-            response = ai_provider.chat(context_prompt, max_tokens=200)
+            # Check if AI provider supports MCP
+            if hasattr(ai_provider, 'chat_with_mcp'):
+                import asyncio
+                response = asyncio.run(ai_provider.chat_with_mcp(context_prompt, max_tokens=2000, user_id=user_id))
+            else:
+                response = ai_provider.chat(context_prompt, max_tokens=2000)
 
             # Clean up response if it includes the prompt echo
             if "AI:" in response:
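One caveat with the dispatch above: `asyncio.run()` raises `RuntimeError` when called from a thread that already has a running event loop, which can happen if this chat path is reached from async server code. A guarded sketch that covers both cases (my addition, not part of the commit):

```python
import asyncio
import concurrent.futures

def run_coro_blocking(coro):
    """Run a coroutine from sync code, whether or not a loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)  # no loop in this thread: simplest path
    # A loop is already running here; execute the coroutine on a fresh loop
    # in a worker thread and block until it completes.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```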
src/aigpt/shared/__init__.py (new file, 15 lines)
@@ -0,0 +1,15 @@
+"""Shared modules for AI ecosystem"""
+
+from .ai_provider import (
+    AIProvider,
+    OllamaProvider,
+    OpenAIProvider,
+    create_ai_provider
+)
+
+__all__ = [
+    'AIProvider',
+    'OllamaProvider',
+    'OpenAIProvider',
+    'create_ai_provider'
+]
src/aigpt/shared/ai_provider.py (new file, 139 lines)
@@ -0,0 +1,139 @@
+"""Shared AI Provider implementation for ai ecosystem"""
+
+import os
+import json
+import logging
+from typing import Optional, Dict, List, Any, Protocol
+from abc import abstractmethod
+import httpx
+from openai import OpenAI
+import ollama
+
+
+class AIProvider(Protocol):
+    """Protocol for AI providers"""
+
+    @abstractmethod
+    async def chat(self, prompt: str, system_prompt: Optional[str] = None) -> str:
+        """Generate a response based on prompt"""
+        pass
+
+
+class OllamaProvider:
+    """Ollama AI provider - shared implementation"""
+
+    def __init__(self, model: str = "qwen3", host: Optional[str] = None, config_system_prompt: Optional[str] = None):
+        self.model = model
+        # Use environment variable OLLAMA_HOST if available
+        self.host = host or os.getenv('OLLAMA_HOST', 'http://127.0.0.1:11434')
+        # Ensure proper URL format
+        if not self.host.startswith('http'):
+            self.host = f'http://{self.host}'
+        self.client = ollama.Client(host=self.host, timeout=60.0)
+        self.logger = logging.getLogger(__name__)
+        self.logger.info(f"OllamaProvider initialized with host: {self.host}, model: {self.model}")
+        self.config_system_prompt = config_system_prompt
+
+    async def chat(self, prompt: str, system_prompt: Optional[str] = None) -> str:
+        """Simple chat interface"""
+        try:
+            messages = []
+            # Use provided system_prompt, fall back to config_system_prompt
+            final_system_prompt = system_prompt or self.config_system_prompt
+            if final_system_prompt:
+                messages.append({"role": "system", "content": final_system_prompt})
+            messages.append({"role": "user", "content": prompt})
+
+            response = self.client.chat(
+                model=self.model,
+                messages=messages,
+                options={
+                    "num_predict": 2000,
+                    "temperature": 0.7,
+                    "top_p": 0.9,
+                },
+                stream=False
+            )
+            return self._clean_response(response['message']['content'])
+        except Exception as e:
+            self.logger.error(f"Ollama chat failed (host: {self.host}): {e}")
+            return "I'm having trouble connecting to the AI model."
+
+    def _clean_response(self, response: str) -> str:
+        """Clean response by removing think tags and other unwanted content"""
+        import re
+        # Remove <think></think> tags and their content
+        response = re.sub(r'<think>.*?</think>', '', response, flags=re.DOTALL)
+        # Remove any remaining whitespace at the beginning/end
+        response = response.strip()
+        return response
+
+
+class OpenAIProvider:
+    """OpenAI API provider - shared implementation"""
+
+    def __init__(self, model: str = "gpt-4o-mini", api_key: Optional[str] = None,
+                 config_system_prompt: Optional[str] = None, mcp_client=None):
+        self.model = model
+        self.api_key = api_key or os.getenv("OPENAI_API_KEY")
+        if not self.api_key:
+            raise ValueError("OpenAI API key not provided")
+        self.client = OpenAI(api_key=self.api_key)
+        self.logger = logging.getLogger(__name__)
+        self.config_system_prompt = config_system_prompt
+        self.mcp_client = mcp_client
+
+    async def chat(self, prompt: str, system_prompt: Optional[str] = None) -> str:
+        """Simple chat interface without MCP tools"""
+        try:
+            messages = []
+            # Use provided system_prompt, fall back to config_system_prompt
+            final_system_prompt = system_prompt or self.config_system_prompt
+            if final_system_prompt:
+                messages.append({"role": "system", "content": final_system_prompt})
+            messages.append({"role": "user", "content": prompt})
+
+            response = self.client.chat.completions.create(
+                model=self.model,
+                messages=messages,
+                max_tokens=2000,
+                temperature=0.7
+            )
+            return response.choices[0].message.content
+        except Exception as e:
+            self.logger.error(f"OpenAI chat failed: {e}")
+            return "I'm having trouble connecting to the AI model."
+
+    def _get_mcp_tools(self) -> List[Dict[str, Any]]:
+        """Override this method in subclasses to provide MCP tools"""
+        return []
+
+    async def chat_with_mcp(self, prompt: str, **kwargs) -> str:
+        """Chat interface with MCP function calling support
+
+        This method should be overridden in subclasses to provide
+        specific MCP functionality.
+        """
+        if not self.mcp_client:
+            return await self.chat(prompt)
+
+        # Default implementation - subclasses should override
+        return await self.chat(prompt)
+
+    async def _execute_mcp_tool(self, tool_call, **kwargs) -> Dict[str, Any]:
+        """Execute MCP tool call - override in subclasses"""
+        return {"error": "MCP tool execution not implemented"}
+
+
+def create_ai_provider(provider: str = "ollama", model: Optional[str] = None,
+                       config_system_prompt: Optional[str] = None, mcp_client=None, **kwargs) -> AIProvider:
+    """Factory function to create AI providers"""
+    if provider == "ollama":
+        model = model or "qwen3"
+        return OllamaProvider(model=model, config_system_prompt=config_system_prompt, **kwargs)
+    elif provider == "openai":
+        model = model or "gpt-4o-mini"
+        return OpenAIProvider(model=model, config_system_prompt=config_system_prompt,
+                              mcp_client=mcp_client, **kwargs)
+    else:
+        raise ValueError(f"Unknown provider: {provider}")
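Usage of the shared factory is straightforward; a minimal sketch, assuming a local Ollama instance with the default `qwen3` model pulled:

```python
import asyncio
from aigpt.shared import create_ai_provider

async def main():
    provider = create_ai_provider("ollama",
                                  config_system_prompt="You are a helpful assistant.")
    reply = await provider.chat("Summarize what an MCP server does in one sentence.")
    print(reply)

asyncio.run(main())
```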