update gpt
Cargo.toml
		| @@ -32,6 +32,12 @@ axum = "0.7" | ||||
| tower = "0.5" | ||||
| tower-http = { version = "0.5", features = ["cors", "fs"] } | ||||
| hyper = { version = "1.0", features = ["full"] } | ||||
| # Documentation generation dependencies | ||||
| syn = { version = "2.0", features = ["full", "parsing", "visit"] } | ||||
| quote = "1.0" | ||||
| ignore = "0.4" | ||||
| git2 = "0.18" | ||||
| regex = "1.0" | ||||
|  | ||||
| [dev-dependencies] | ||||
| tempfile = "3.14" | ||||
README.md (418 lines changed)
							| @@ -4,15 +4,40 @@ A Rust-based static blog generator with AI integration capabilities. | ||||
|  | ||||
| ## Overview | ||||
|  | ||||
| ai.log is part of the ai ecosystem - a static site generator that creates blogs with built-in AI features for content enhancement and atproto integration. | ||||
| ai.log is part of the ai ecosystem - a static site generator that creates blogs with built-in AI features for content enhancement and atproto integration. The system follows the yui system principles with a dual-layer MCP architecture. | ||||
|  | ||||
| ## Architecture | ||||
|  | ||||
| ### Dual MCP Integration | ||||
|  | ||||
| **ai.log MCP Server (API Layer)** | ||||
| - **Role**: Independent blog API | ||||
| - **Port**: 8002 | ||||
| - **Location**: `./src/mcp/` | ||||
| - **Function**: Core blog generation and management | ||||
|  | ||||
| **ai.gpt Integration (Server Layer)** | ||||
| - **Role**: AI integration gateway | ||||
| - **Port**: 8001 (within ai.gpt) | ||||
| - **Location**: `../src/aigpt/mcp_server.py` | ||||
| - **Function**: AI memory system + HTTP proxy to ai.log | ||||
|  | ||||
| ### Data Flow | ||||
| ``` | ||||
| Claude Code → ai.gpt (Server/AI) → ai.log (API/Blog) → Static Site | ||||
|               ↑                      ↑ | ||||
|               Memory System          File Operations | ||||
|               Relationship AI        Markdown Processing | ||||
|               Context Analysis       Template Rendering | ||||
| ``` | ||||
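|  | ||||
| As a sketch of the API layer (an assumed shape for illustration; the real implementation lives in `./src/mcp/`), the ai.log side is essentially an HTTP service on port 8002: | ||||
|  | ||||
| ```rust | ||||
| // Hypothetical outline of the ai.log API layer using the axum dependency. | ||||
| use axum::{routing::post, Json, Router}; | ||||
| use serde::{Deserialize, Serialize}; | ||||
|  | ||||
| #[derive(Deserialize)] | ||||
| struct CreatePost { | ||||
|     title: String, | ||||
|     content: String, | ||||
| } | ||||
|  | ||||
| #[derive(Serialize)] | ||||
| struct Created { | ||||
|     slug: String, | ||||
| } | ||||
|  | ||||
| async fn create_blog_post(Json(req): Json<CreatePost>) -> Json<Created> { | ||||
|     // Here the real server would write `req.content` as markdown with | ||||
|     // frontmatter into the content directory. | ||||
|     let _ = req.content; | ||||
|     Json(Created { slug: req.title.to_lowercase().replace(' ', "-") }) | ||||
| } | ||||
|  | ||||
| #[tokio::main] | ||||
| async fn main() { | ||||
|     let app = Router::new().route("/create_blog_post", post(create_blog_post)); | ||||
|     let listener = tokio::net::TcpListener::bind("0.0.0.0:8002").await.unwrap(); | ||||
|     axum::serve(listener, app).await.unwrap(); | ||||
| } | ||||
| ``` | ||||
|  | ||||
| ai.gpt (port 8001) then proxies tool calls such as `log_create_post` to these routes, adding memory context and error handling. | ||||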
|  | ||||
| ## Features | ||||
|  | ||||
| - Static blog generation (inspired by Zola) | ||||
| - AI-powered article editing and enhancement | ||||
| - Automatic translation (ja → en) | ||||
| - AI comment system integrated with atproto | ||||
| - OAuth authentication via atproto accounts | ||||
| - **Static Blog Generation**: Inspired by Zola, built with Rust | ||||
| - **AI-Powered Content**: Memory-driven article generation via ai.gpt | ||||
| - **🌍 Ollama Translation**: Multi-language markdown translation with structure preservation | ||||
| - **atproto Integration**: OAuth authentication and comment system (planned) | ||||
| - **MCP Integration**: Seamless Claude Code workflow | ||||
|  | ||||
| ## Installation | ||||
|  | ||||
| @@ -22,6 +47,8 @@ cargo install ailog | ||||
|  | ||||
| ## Usage | ||||
|  | ||||
| ### Standalone Mode | ||||
|  | ||||
| ```bash | ||||
| # Initialize a new blog | ||||
| ailog init myblog | ||||
| @@ -35,53 +62,380 @@ ailog build | ||||
| # Serve locally | ||||
| ailog serve | ||||
|  | ||||
| # Start MCP server | ||||
| ailog mcp --port 8002 | ||||
|  | ||||
| # Generate documentation | ||||
| ailog doc readme --with-ai | ||||
| ailog doc api --output ./docs | ||||
| ailog doc structure --include-deps | ||||
|  | ||||
| # Translate documents (requires Ollama) | ||||
| ailog doc translate --input README.md --target-lang en | ||||
| ailog doc translate --input docs/api.md --target-lang ja --model qwen2.5:latest | ||||
|  | ||||
| # Clean build files | ||||
| ailog clean | ||||
| ``` | ||||
|  | ||||
| ### AI Ecosystem Integration | ||||
|  | ||||
| When integrated with ai.gpt, use natural language: | ||||
| - "ブログ記事を書いて" → Triggers `log_ai_content` | ||||
| - "記事一覧を見せて" → Triggers `log_list_posts` | ||||
| - "ブログをビルドして" → Triggers `log_build_blog` | ||||
|  | ||||
| ### Documentation & Translation | ||||
|  | ||||
| Generate comprehensive documentation and translate content: | ||||
| - "READMEを生成して" → Triggers `log_generate_docs` | ||||
| - "APIドキュメントを作成して" → Generates API documentation | ||||
| - "プロジェクト構造を解析して" → Creates structure documentation | ||||
| - "このファイルを英語に翻訳して" → Triggers `log_translate_document` | ||||
| - "マークダウンを日本語に変換して" → Uses Ollama for translation | ||||
|  | ||||
| ## MCP Tools | ||||
|  | ||||
| ### ai.log Server (Port 8002) | ||||
| - `create_blog_post` - Create new blog post | ||||
| - `list_blog_posts` - List existing posts | ||||
| - `build_blog` - Build static site | ||||
| - `get_post_content` - Get post by slug | ||||
| - `translate_document` ⭐ - Ollama-powered markdown translation | ||||
| - `generate_documentation` ⭐ - Code analysis and documentation generation | ||||
|  | ||||
| ### ai.gpt Integration (Port 8001) | ||||
| - `log_create_post` - Proxy to ai.log + error handling | ||||
| - `log_list_posts` - Proxy to ai.log + formatting | ||||
| - `log_build_blog` - Proxy to ai.log + AI features | ||||
| - `log_get_post` - Proxy to ai.log + context | ||||
| - `log_system_status` - Health check for ai.log | ||||
| - `log_ai_content` ⭐ - AI memory → blog content generation | ||||
| - `log_translate_document` 🌍 - Document translation via Ollama | ||||
| - `log_generate_docs` 📚 - Documentation generation | ||||
|  | ||||
| ### Documentation Generation Tools | ||||
| - `doc readme` - Generate README.md from project analysis | ||||
| - `doc api` - Generate API documentation | ||||
| - `doc structure` - Analyze and document project structure | ||||
| - `doc changelog` - Generate changelog from git history | ||||
| - `doc translate` 🌍 - Multi-language document translation | ||||
|  | ||||
| ### Translation Features | ||||
| - **Language Support**: English, Japanese, Chinese, Korean, Spanish | ||||
| - **Markdown Preservation**: Code blocks, links, images, tables maintained | ||||
| - **Auto-Detection**: Automatically detects Japanese content | ||||
| - **Ollama Integration**: Uses local AI models for privacy and cost-efficiency | ||||
| - **Smart Processing**: Section-by-section translation with structure awareness | ||||
|  | ||||
| ## Configuration | ||||
|  | ||||
| Configuration files are stored in `~/.config/syui/ai/log/` | ||||
| ### ai.log Configuration | ||||
| - Location: `~/.config/syui/ai/log/` | ||||
| - Format: TOML configuration | ||||
|  | ||||
| ## AI Integration (Planned) | ||||
| ### ai.gpt Integration | ||||
| - Configuration: `../config.json` | ||||
| - Auto-detection: ai.log tools enabled when `./log/` directory exists | ||||
| - System prompt: Automatically triggers blog tools for related queries | ||||
|  | ||||
| - Automatic content suggestions and corrections | ||||
| - Multi-language support with AI translation | ||||
| - AI-generated comments linked to atproto accounts | ||||
| ## AI Integration Features | ||||
|  | ||||
| ### Memory-Driven Content Generation | ||||
| - **Source**: ai.gpt memory system | ||||
| - **Process**: Contextual memories → AI analysis → Blog content | ||||
| - **Output**: Structured markdown with personal insights | ||||
|  | ||||
| ### Automatic Workflows | ||||
| - Daily blog posts from accumulated memories | ||||
| - Content enhancement and suggestions | ||||
| - Related article recommendations | ||||
| - Multi-language content generation | ||||
|  | ||||
| ## atproto Integration (Planned) | ||||
|  | ||||
| Implements OAuth 2.0 for user authentication: | ||||
| - Users can comment using their atproto accounts | ||||
| - Comments are stored in atproto collections | ||||
| - Full data sovereignty for users | ||||
| ### OAuth 2.0 Authentication | ||||
| - Client metadata: `public/client-metadata.json` | ||||
| - Comment system integration | ||||
| - Data sovereignty: Users own their comments | ||||
| - Collection storage in atproto | ||||
|  | ||||
| ### Comment System | ||||
| - atproto account login | ||||
| - Distributed comment storage | ||||
| - Real-time comment synchronization | ||||
|  | ||||
| ## Build & Deploy | ||||
|  | ||||
| Designed for GitHub Actions and Cloudflare Pages deployment. Pushing to the main branch triggers an automatic build and deploy. | ||||
| ### GitHub Actions | ||||
| ```yaml | ||||
| # .github/workflows/gh-pages.yml | ||||
| - name: Build ai.log | ||||
|   run: | | ||||
|     cd log | ||||
|     cargo build --release | ||||
|     ./target/release/ailog build | ||||
| ``` | ||||
|  | ||||
| ### Cloudflare Pages | ||||
| - Static output: `./public/` | ||||
| - Automatic deployment on main branch push | ||||
| - AI content generation during build process | ||||
|  | ||||
| ## Development Status | ||||
|  | ||||
| Currently implemented: | ||||
| - ✅ Project structure and Cargo.toml setup | ||||
| - ✅ Basic command-line interface (init, new, build, serve, clean) | ||||
| - ✅ Configuration system with TOML support | ||||
| - ✅ Markdown parsing with frontmatter support | ||||
| - ✅ Template system with Handlebars | ||||
| - ✅ Static site generation with posts and pages | ||||
| - ✅ Development server with hot reload | ||||
| - ✅ AI integration foundation (GPT client, translator, comment system) | ||||
| - ✅ atproto client with OAuth support | ||||
| - ✅ MCP server integration for AI tools | ||||
| - ✅ Test blog with sample content and styling | ||||
| ### ✅ Completed Features | ||||
| - Project structure and Cargo.toml setup | ||||
| - CLI interface (init, new, build, serve, clean, mcp, doc) | ||||
| - Configuration system with TOML support | ||||
| - Markdown parsing with frontmatter support | ||||
| - Template system with Handlebars | ||||
| - Static site generation with posts and pages | ||||
| - Development server with hot reload | ||||
| - **MCP server integration (both layers)** | ||||
| - **ai.gpt integration with 6 tools** | ||||
| - **AI memory system connection** | ||||
| - **📚 Documentation generation from code** | ||||
| - **🔍 Rust project analysis and API extraction** | ||||
| - **📝 README, API docs, and structure analysis** | ||||
| - **🌍 Ollama-powered translation system** | ||||
| - **🚀 Complete MCP integration with ai.gpt** | ||||
| - **📄 Markdown-aware translation preserving structure** | ||||
| - Test blog with sample content and styling | ||||
|  | ||||
| Planned features: | ||||
| - AI-powered content enhancement and suggestions | ||||
| - Automatic translation (ja → en) pipeline | ||||
| - atproto comment system with OAuth authentication | ||||
| ### 🚧 In Progress | ||||
| - AI-powered content enhancement pipeline | ||||
| - atproto OAuth implementation | ||||
|  | ||||
| ### 📋 Planned Features | ||||
| - Advanced template customization | ||||
| - Plugin system for extensibility | ||||
| - Real-time comment system | ||||
| - Multi-blog management | ||||
| - VTuber integration (ai.verse connection) | ||||
|  | ||||
| ## Integration with ai Ecosystem | ||||
|  | ||||
| ### System Dependencies | ||||
| - **ai.gpt**: Memory system, relationship tracking, AI provider | ||||
| - **ai.card**: Future cross-system content sharing | ||||
| - **ai.bot**: atproto posting and mention handling | ||||
| - **ai.verse**: 3D world blog representation (future) | ||||
|  | ||||
| ### yui System Compliance | ||||
| - **Uniqueness**: Each blog post tied to individual identity | ||||
| - **Reality Reflection**: Personal memories → digital content | ||||
| - **Irreversibility**: Published content maintains historical integrity | ||||
|  | ||||
| ## Getting Started | ||||
|  | ||||
| ### 1. Standalone Usage | ||||
| ```bash | ||||
| git clone [repository] | ||||
| cd log | ||||
| cargo run -- init my-blog | ||||
| cargo run -- new "First Post" | ||||
| cargo run -- build | ||||
| cargo run -- serve | ||||
| ``` | ||||
|  | ||||
| ### 2. AI Ecosystem Integration | ||||
| ```bash | ||||
| # Start ai.log MCP server | ||||
| cargo run -- mcp --port 8002 | ||||
|  | ||||
| # In another terminal, start ai.gpt | ||||
| cd ../ | ||||
| # ai.gpt startup commands | ||||
|  | ||||
| # Use Claude Code with natural language blog commands | ||||
| ``` | ||||
|  | ||||
| ## Documentation Generation Features | ||||
|  | ||||
| ### 📚 Automatic README Generation | ||||
| ```bash | ||||
| # Generate README from project analysis | ||||
| ailog doc readme --source ./src --with-ai | ||||
|  | ||||
| # Output: Enhanced README.md with: | ||||
| # - Project overview and metrics | ||||
| # - Dependency analysis | ||||
| # - Module structure | ||||
| # - AI-generated insights | ||||
| ``` | ||||
|  | ||||
| ### 📖 API Documentation | ||||
| ```bash | ||||
| # Generate comprehensive API docs | ||||
| ailog doc api --source ./src --format markdown --output ./docs | ||||
|  | ||||
| # Creates: | ||||
| # - docs/api.md (main API overview) | ||||
| # - docs/module_name.md (per-module documentation) | ||||
| # - Function signatures and documentation | ||||
| # - Struct/enum definitions | ||||
| ``` | ||||
|  | ||||
| ### 🏗️ Project Structure Analysis | ||||
| ```bash | ||||
| # Analyze and document project structure | ||||
| ailog doc structure --source . --include-deps | ||||
|  | ||||
| # Generates: | ||||
| # - Directory tree visualization | ||||
| # - File distribution by language | ||||
| # - Dependency graph analysis | ||||
| # - Code metrics and statistics | ||||
| ``` | ||||
|  | ||||
| ### 📝 Git Changelog Generation | ||||
| ```bash | ||||
| # Generate changelog from git history | ||||
| ailog doc changelog --from v1.0.0 --explain-changes | ||||
|  | ||||
| # Creates: | ||||
| # - Structured changelog | ||||
| # - Commit categorization | ||||
| # - AI-enhanced change explanations | ||||
| ``` | ||||
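|  | ||||
| Internally this follows roughly the pattern below. It is a minimal sketch using the `git2` dependency; the categorization rules and the `--from`/`--explain-changes` handling are assumptions, not the verbatim implementation: | ||||
|  | ||||
| ```rust | ||||
| use git2::Repository; | ||||
|  | ||||
| /// Sketch: walk the commit history and bucket commit summaries | ||||
| /// by conventional-commit prefix. | ||||
| fn simple_changelog(repo_path: &str) -> Result<Vec<String>, git2::Error> { | ||||
|     let repo = Repository::open(repo_path)?; | ||||
|     let mut walk = repo.revwalk()?; | ||||
|     walk.push_head()?; | ||||
|  | ||||
|     let mut entries = Vec::new(); | ||||
|     for oid in walk { | ||||
|         let commit = repo.find_commit(oid?)?; | ||||
|         let summary = commit.summary().unwrap_or("").to_string(); | ||||
|         let category = if summary.starts_with("feat") { | ||||
|             "Added" | ||||
|         } else if summary.starts_with("fix") { | ||||
|             "Fixed" | ||||
|         } else { | ||||
|             "Changed" | ||||
|         }; | ||||
|         entries.push(format!("- **{}**: {}", category, summary)); | ||||
|     } | ||||
|     Ok(entries) | ||||
| } | ||||
| ``` | ||||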
|  | ||||
| ### 🤖 AI-Enhanced Documentation | ||||
| When `--with-ai` is enabled: | ||||
| - **Content Enhancement**: AI improves readability and adds insights | ||||
| - **Context Awareness**: Leverages ai.gpt memory system | ||||
| - **Smart Categorization**: Automatic organization of content | ||||
| - **Technical Writing**: Professional documentation style | ||||
|  | ||||
| ## 🌍 Translation System | ||||
|  | ||||
| ### Ollama-Powered Translation | ||||
|  | ||||
| ai.log includes a comprehensive translation system powered by Ollama AI models: | ||||
|  | ||||
| ```bash | ||||
| # Basic translation | ||||
| ailog doc translate --input README.md --target-lang en | ||||
|  | ||||
| # Advanced translation with custom settings | ||||
| ailog doc translate \ | ||||
|   --input docs/technical-guide.ja.md \ | ||||
|   --target-lang en \ | ||||
|   --source-lang ja \ | ||||
|   --output docs/technical-guide.en.md \ | ||||
|   --model qwen2.5:latest \ | ||||
|   --ollama-endpoint http://localhost:11434 | ||||
| ``` | ||||
|  | ||||
| ### Translation Features | ||||
|  | ||||
| #### 📄 Markdown-Aware Processing | ||||
| - **Code Block Preservation**: All code snippets remain untranslated | ||||
| - **Link Maintenance**: URLs and link structures preserved | ||||
| - **Image Handling**: Alt text can be translated while preserving image paths | ||||
| - **Table Translation**: Table content translated while maintaining structure | ||||
| - **Header Preservation**: Markdown headers translated with level maintenance | ||||
|  | ||||
| #### 🎯 Smart Language Detection | ||||
| - **Auto-Detection**: Automatically detects Japanese content using Unicode ranges | ||||
| - **Manual Override**: Specify source language for precise control | ||||
| - **Mixed Content**: Handles documents with multiple languages | ||||
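|  | ||||
| As an illustration, the auto-detection step can be as simple as counting characters in the Japanese Unicode blocks. The 10% threshold below is an assumption for the sketch, not the tuned production value: | ||||
|  | ||||
| ```rust | ||||
| /// Heuristic sketch: treat text as Japanese when a meaningful share of | ||||
| /// its characters fall in the hiragana, katakana, or CJK ranges. | ||||
| fn looks_japanese(text: &str) -> bool { | ||||
|     let total = text.chars().count().max(1); | ||||
|     let japanese = text | ||||
|         .chars() | ||||
|         .filter(|c| { | ||||
|             matches!(*c as u32, | ||||
|                 0x3040..=0x309F      // hiragana | ||||
|                 | 0x30A0..=0x30FF    // katakana | ||||
|                 | 0x4E00..=0x9FFF)   // CJK unified ideographs | ||||
|         }) | ||||
|         .count(); | ||||
|     japanese * 10 > total // more than 10% Japanese characters | ||||
| } | ||||
| ``` | ||||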
|  | ||||
| #### 🔧 Flexible Configuration | ||||
| - **Model Selection**: Choose from available Ollama models | ||||
| - **Custom Endpoints**: Use different Ollama instances | ||||
| - **Output Control**: Auto-generate or specify output paths | ||||
| - **Batch Processing**: Process multiple files efficiently | ||||
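|  | ||||
| A single request against a configurable endpoint looks roughly like the sketch below. The `/api/generate` route and JSON shape follow Ollama's public API; the prompt wording and the use of a `reqwest` blocking client are assumptions for illustration: | ||||
|  | ||||
| ```rust | ||||
| use serde_json::json; | ||||
|  | ||||
| /// Sketch: translate one chunk of text via a local Ollama instance. | ||||
| fn translate_chunk(endpoint: &str, model: &str, text: &str) -> anyhow::Result<String> { | ||||
|     let body = json!({ | ||||
|         "model": model, | ||||
|         "prompt": format!( | ||||
|             "Translate the following markdown into English, \ | ||||
|              preserving all markdown syntax:\n\n{text}" | ||||
|         ), | ||||
|         "stream": false, | ||||
|     }); | ||||
|     let resp: serde_json::Value = reqwest::blocking::Client::new() | ||||
|         .post(format!("{endpoint}/api/generate")) | ||||
|         .json(&body) | ||||
|         .send()? | ||||
|         .json()?; | ||||
|     Ok(resp["response"].as_str().unwrap_or_default().to_string()) | ||||
| } | ||||
| ``` | ||||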
|  | ||||
| ### Supported Languages | ||||
|  | ||||
| | Language | Code | Direction | Model Optimized | | ||||
| |----------|------|-----------|-----------------| | ||||
| | English  | `en` | ↔️        | ✅ qwen2.5      | | ||||
| | Japanese | `ja` | ↔️        | ✅ qwen2.5      | | ||||
| | Chinese  | `zh` | ↔️        | ✅ qwen2.5      | | ||||
| | Korean   | `ko` | ↔️        | ⚠️ Basic       | | ||||
| | Spanish  | `es` | ↔️        | ⚠️ Basic       | | ||||
|  | ||||
| ### Translation Workflow | ||||
|  | ||||
| 1. **Parse Document**: Analyze markdown structure and identify sections | ||||
| 2. **Preserve Code**: Isolate code blocks and technical content | ||||
| 3. **Translate Content**: Process text sections with Ollama AI | ||||
| 4. **Reconstruct**: Rebuild document maintaining original formatting | ||||
| 5. **Validate**: Ensure structural integrity and completeness | ||||
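|  | ||||
| Steps 1 and 2 amount to segmenting the source so fenced code is never sent to the model. A minimal sketch (it ignores nested and tilde fences; the real pipeline is more thorough): | ||||
|  | ||||
| ```rust | ||||
| enum Segment { | ||||
|     Text(String), // translated | ||||
|     Code(String), // passed through untouched, fences included | ||||
| } | ||||
|  | ||||
| fn split_markdown(src: &str) -> Vec<Segment> { | ||||
|     let (mut segments, mut buf, mut in_code) = (Vec::new(), String::new(), false); | ||||
|     for line in src.lines() { | ||||
|         if line.trim_start().starts_with("```") { | ||||
|             if in_code { | ||||
|                 // closing fence: keep it with the code segment | ||||
|                 buf.push_str(line); | ||||
|                 buf.push('\n'); | ||||
|                 segments.push(Segment::Code(std::mem::take(&mut buf))); | ||||
|             } else { | ||||
|                 // opening fence: flush pending prose first | ||||
|                 if !buf.is_empty() { | ||||
|                     segments.push(Segment::Text(std::mem::take(&mut buf))); | ||||
|                 } | ||||
|                 buf.push_str(line); | ||||
|                 buf.push('\n'); | ||||
|             } | ||||
|             in_code = !in_code; | ||||
|         } else { | ||||
|             buf.push_str(line); | ||||
|             buf.push('\n'); | ||||
|         } | ||||
|     } | ||||
|     if !buf.is_empty() { | ||||
|         segments.push(if in_code { Segment::Code(buf) } else { Segment::Text(buf) }); | ||||
|     } | ||||
|     segments | ||||
| } | ||||
| ``` | ||||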
|  | ||||
| ### Integration with ai.gpt | ||||
|  | ||||
| ```python | ||||
| # Via ai.gpt MCP tools | ||||
| await log_translate_document( | ||||
|     input_file="README.ja.md", | ||||
|     target_lang="en", | ||||
|     model="qwen2.5:latest" | ||||
| ) | ||||
| ``` | ||||
|  | ||||
| ### Requirements | ||||
|  | ||||
| - **Ollama**: Install and run Ollama locally | ||||
| - **Models**: Download supported models (qwen2.5:latest recommended) | ||||
| - **Memory**: Sufficient RAM for model inference | ||||
| - **Network**: For initial model download only | ||||
|  | ||||
| ## Configuration Examples | ||||
|  | ||||
| ### Basic Blog Config | ||||
| ```toml | ||||
| [blog] | ||||
| title = "My AI Blog" | ||||
| description = "Personal thoughts and AI insights" | ||||
| base_url = "https://myblog.example.com" | ||||
|  | ||||
| [ai] | ||||
| provider = "openai" | ||||
| model = "gpt-4" | ||||
| translation = true | ||||
| ``` | ||||
|  | ||||
| ### Advanced Integration | ||||
| ```json | ||||
| // ../config.json (ai.gpt) | ||||
| { | ||||
|   "mcp": { | ||||
|     "servers": { | ||||
|       "ai_gpt": { | ||||
|         "endpoints": { | ||||
|           "log_ai_content": "/log_ai_content", | ||||
|           "log_create_post": "/log_create_post" | ||||
|         } | ||||
|       } | ||||
|     } | ||||
|   } | ||||
| } | ||||
| ``` | ||||
|  | ||||
| ## Troubleshooting | ||||
|  | ||||
| ### MCP Connection Issues | ||||
| - Ensure ai.log server is running: `cargo run -- mcp --port 8002` | ||||
| - Check ai.gpt config includes log endpoints | ||||
| - Verify `./log/` directory exists relative to ai.gpt | ||||
|  | ||||
| ### Build Failures | ||||
| - Check Rust version: `rustc --version` | ||||
| - Update dependencies: `cargo update` | ||||
| - Clear cache: `cargo clean` | ||||
|  | ||||
| ### AI Integration Problems | ||||
| - Verify ai.gpt memory system is initialized | ||||
| - Check AI provider configuration | ||||
| - Ensure sufficient context in memory system | ||||
|  | ||||
| ## License | ||||
|  | ||||
| © syui | ||||
|  | ||||
| --- | ||||
|  | ||||
| **Part of the ai ecosystem**: ai.gpt, ai.card, ai.log, ai.bot, ai.verse, ai.shell | ||||
|   | ||||
| @@ -14,24 +14,35 @@ MCP server setup guide for integrating ai.log with ai.gpt | ||||
|  | ||||
| ## Configuration in ai.gpt | ||||
|  | ||||
| Add the following to the ai.gpt configuration file `~/.config/syui/ai/gpt/config.json`: | ||||
| The ai.log tools are already integrated into ai.gpt's MCP server. `config.json` contains the following settings: | ||||
|  | ||||
| ```json | ||||
| { | ||||
|   "mcp": { | ||||
|     "enabled": true, | ||||
|     "enabled": "true", | ||||
|     "auto_detect": "true", | ||||
|     "servers": { | ||||
|       "ai_gpt": {"base_url": "http://localhost:8001"}, | ||||
|       "ai_card": {"base_url": "http://localhost:8000"}, | ||||
|       "ai_log": {"base_url": "http://localhost:8002"} | ||||
|       "ai_gpt": { | ||||
|         "base_url": "http://localhost:8001", | ||||
|         "endpoints": { | ||||
|           "log_create_post": "/log_create_post", | ||||
|           "log_list_posts": "/log_list_posts", | ||||
|           "log_build_blog": "/log_build_blog", | ||||
|           "log_get_post": "/log_get_post", | ||||
|           "log_system_status": "/log_system_status", | ||||
|           "log_ai_content": "/log_ai_content" | ||||
|         } | ||||
|       } | ||||
|     } | ||||
|   } | ||||
| } | ||||
| ``` | ||||
|  | ||||
| ## Available MCP Tools | ||||
| **Important**: To use the ai.log tools, the ai.log directory must exist at `./log/` and the ai.log MCP server must be running on port 8002. | ||||
|  | ||||
| ### 1. create_blog_post | ||||
| ## Available MCP Tools (ai.gpt integrated) | ||||
|  | ||||
| ### 1. log_create_post | ||||
| Creates a new blog post. | ||||
|  | ||||
| **Parameters**: | ||||
| @@ -42,34 +53,45 @@ Add the following to the ai.gpt configuration file `~/.config/syui/ai/gpt/config.json`: | ||||
|  | ||||
| **Usage example**: | ||||
| ```python | ||||
| # Example call from ai.gpt | ||||
| result = await mcp_client.call_tool("create_blog_post", { | ||||
|     "title": "AI統合の新しい可能性", | ||||
|     "content": "# 概要\n\nai.gptとai.logの連携により...", | ||||
|     "tags": ["AI", "技術", "ブログ"] | ||||
| }) | ||||
| # Called automatically from Claude Code / ai.gpt | ||||
| # Auto-triggered by utterances such as "ブログ記事を書いて" ("write a blog post") | ||||
| ``` | ||||
|  | ||||
| ### 2. list_blog_posts | ||||
| ### 2. log_list_posts | ||||
| Retrieves the list of existing blog posts. | ||||
|  | ||||
| **Parameters**: | ||||
| - `limit` (optional): maximum number of posts to return (default: 10) | ||||
| - `offset` (optional): number of posts to skip (default: 0) | ||||
|  | ||||
| ### 3. build_blog | ||||
| ### 3. log_build_blog | ||||
| Builds the blog and generates the static files. | ||||
|  | ||||
| **Parameters**: | ||||
| - `enable_ai` (optional): enable AI features | ||||
| - `translate` (optional): enable automatic translation | ||||
| - `enable_ai` (optional): enable AI features (default: true) | ||||
| - `translate` (optional): enable automatic translation (default: false) | ||||
|  | ||||
| ### 4. get_post_content | ||||
| ### 4. log_get_post | ||||
| Retrieves the content of the post with the specified slug. | ||||
|  | ||||
| **Parameters**: | ||||
| - `slug` (required): the post's slug | ||||
|  | ||||
| ### 5. log_system_status | ||||
| Checks the status of the ai.log system. | ||||
|  | ||||
| ### 6. log_ai_content ⭐ NEW | ||||
| Generates and publishes a blog post automatically, working with the AI memory system. | ||||
|  | ||||
| **Parameters**: | ||||
| - `user_id` (required): user ID | ||||
| - `topic` (optional): topic of the post (default: "daily thoughts") | ||||
|  | ||||
| **Behavior**: | ||||
| - Retrieves related memories from the ai.gpt memory system | ||||
| - Converts those memories into a blog post with AI | ||||
| - Publishes the result to ai.log automatically | ||||
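|  | ||||
| For reference, the endpoint can also be invoked directly over HTTP. A minimal sketch (the route comes from the config above; the `reqwest` client and the sample values are assumptions): | ||||
|  | ||||
| ```rust | ||||
| use serde_json::json; | ||||
|  | ||||
| fn main() -> anyhow::Result<()> { | ||||
|     // Hypothetical direct call; in normal use this is triggered from Claude Code. | ||||
|     let resp: serde_json::Value = reqwest::blocking::Client::new() | ||||
|         .post("http://localhost:8001/log_ai_content") | ||||
|         .json(&json!({ "user_id": "syui", "topic": "daily thoughts" })) | ||||
|         .send()? | ||||
|         .json()?; | ||||
|     println!("{resp}"); | ||||
|     Ok(()) | ||||
| } | ||||
| ``` | ||||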
|  | ||||
| ## Integration Patterns from ai.gpt | ||||
|  | ||||
| ### Automatic Post Publishing | ||||
|   | ||||
src/analyzer/mod.rs (313 lines, new file)
							| @@ -0,0 +1,313 @@ | ||||
| pub mod rust_analyzer; | ||||
|  | ||||
| use anyhow::Result; | ||||
| use serde::{Deserialize, Serialize}; | ||||
| use std::collections::HashMap; | ||||
| use std::path::{Path, PathBuf}; | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct ProjectInfo { | ||||
|     pub name: String, | ||||
|     pub description: Option<String>, | ||||
|     pub version: String, | ||||
|     pub authors: Vec<String>, | ||||
|     pub license: Option<String>, | ||||
|     pub dependencies: HashMap<String, String>, | ||||
|     pub modules: Vec<ModuleInfo>, | ||||
|     pub structure: ProjectStructure, | ||||
|     pub metrics: ProjectMetrics, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct ModuleInfo { | ||||
|     pub name: String, | ||||
|     pub path: PathBuf, | ||||
|     pub functions: Vec<FunctionInfo>, | ||||
|     pub structs: Vec<StructInfo>, | ||||
|     pub enums: Vec<EnumInfo>, | ||||
|     pub traits: Vec<TraitInfo>, | ||||
|     pub docs: Option<String>, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct FunctionInfo { | ||||
|     pub name: String, | ||||
|     pub visibility: String, | ||||
|     pub is_async: bool, | ||||
|     pub parameters: Vec<Parameter>, | ||||
|     pub return_type: Option<String>, | ||||
|     pub docs: Option<String>, | ||||
|     pub line_number: usize, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct Parameter { | ||||
|     pub name: String, | ||||
|     pub param_type: String, | ||||
|     pub is_mutable: bool, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct StructInfo { | ||||
|     pub name: String, | ||||
|     pub visibility: String, | ||||
|     pub fields: Vec<FieldInfo>, | ||||
|     pub docs: Option<String>, | ||||
|     pub line_number: usize, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct FieldInfo { | ||||
|     pub name: String, | ||||
|     pub field_type: String, | ||||
|     pub visibility: String, | ||||
|     pub docs: Option<String>, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct EnumInfo { | ||||
|     pub name: String, | ||||
|     pub visibility: String, | ||||
|     pub variants: Vec<VariantInfo>, | ||||
|     pub docs: Option<String>, | ||||
|     pub line_number: usize, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct VariantInfo { | ||||
|     pub name: String, | ||||
|     pub fields: Vec<FieldInfo>, | ||||
|     pub docs: Option<String>, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct TraitInfo { | ||||
|     pub name: String, | ||||
|     pub visibility: String, | ||||
|     pub methods: Vec<FunctionInfo>, | ||||
|     pub docs: Option<String>, | ||||
|     pub line_number: usize, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct ProjectStructure { | ||||
|     pub directories: Vec<DirectoryInfo>, | ||||
|     pub files: Vec<FileInfo>, | ||||
|     pub dependency_graph: HashMap<String, Vec<String>>, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct DirectoryInfo { | ||||
|     pub name: String, | ||||
|     pub path: PathBuf, | ||||
|     pub file_count: usize, | ||||
|     pub subdirectories: Vec<String>, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct FileInfo { | ||||
|     pub name: String, | ||||
|     pub path: PathBuf, | ||||
|     pub language: String, | ||||
|     pub lines_of_code: usize, | ||||
|     pub is_test: bool, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct ProjectMetrics { | ||||
|     pub total_lines: usize, | ||||
|     pub total_files: usize, | ||||
|     pub test_files: usize, | ||||
|     pub dependency_count: usize, | ||||
|     pub complexity_score: f32, | ||||
|     pub test_coverage: Option<f32>, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct ApiInfo { | ||||
|     pub modules: Vec<ModuleInfo>, | ||||
|     pub public_functions: Vec<FunctionInfo>, | ||||
|     pub public_structs: Vec<StructInfo>, | ||||
|     pub public_enums: Vec<EnumInfo>, | ||||
|     pub public_traits: Vec<TraitInfo>, | ||||
| } | ||||
|  | ||||
| pub struct CodeAnalyzer { | ||||
|     rust_analyzer: rust_analyzer::RustAnalyzer, | ||||
| } | ||||
|  | ||||
| impl CodeAnalyzer { | ||||
|     pub fn new() -> Self { | ||||
|         Self { | ||||
|             rust_analyzer: rust_analyzer::RustAnalyzer::new(), | ||||
|         } | ||||
|     } | ||||
|  | ||||
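|     /// Analyze the project rooted at `path`. Only Cargo (Rust) projects | ||||
|     /// are currently supported; other roots return an error. | ||||
|     /// | ||||
|     /// Hypothetical usage: | ||||
|     /// ```no_run | ||||
|     /// # use std::path::Path; | ||||
|     /// # use ailog::analyzer::CodeAnalyzer; | ||||
|     /// let info = CodeAnalyzer::new().analyze_project(Path::new(".")).unwrap(); | ||||
|     /// println!("{} v{} ({} modules)", info.name, info.version, info.modules.len()); | ||||
|     /// ``` | ||||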
|     pub fn analyze_project(&self, path: &Path) -> Result<ProjectInfo> { | ||||
|         println!("  🔍 Analyzing project at: {}", path.display()); | ||||
|          | ||||
|         // Check if this is a Rust project | ||||
|         let cargo_toml = path.join("Cargo.toml"); | ||||
|         if cargo_toml.exists() { | ||||
|             return self.rust_analyzer.analyze_project(path); | ||||
|         } | ||||
|          | ||||
|         // For now, only support Rust projects | ||||
|         anyhow::bail!("Only Rust projects are currently supported"); | ||||
|     } | ||||
|  | ||||
|     pub fn analyze_api(&self, path: &Path) -> Result<ApiInfo> { | ||||
|         println!("  📚 Analyzing API at: {}", path.display()); | ||||
|          | ||||
|         let project_info = self.analyze_project(path.parent().unwrap_or(path))?; | ||||
|          | ||||
|         // Extract only public items | ||||
|         let mut public_functions = Vec::new(); | ||||
|         let mut public_structs = Vec::new(); | ||||
|         let mut public_enums = Vec::new(); | ||||
|         let mut public_traits = Vec::new(); | ||||
|          | ||||
|         for module in &project_info.modules { | ||||
|             for func in &module.functions { | ||||
|                 if func.visibility == "pub" { | ||||
|                     public_functions.push(func.clone()); | ||||
|                 } | ||||
|             } | ||||
|             for struct_info in &module.structs { | ||||
|                 if struct_info.visibility == "pub" { | ||||
|                     public_structs.push(struct_info.clone()); | ||||
|                 } | ||||
|             } | ||||
|             for enum_info in &module.enums { | ||||
|                 if enum_info.visibility == "pub" { | ||||
|                     public_enums.push(enum_info.clone()); | ||||
|                 } | ||||
|             } | ||||
|             for trait_info in &module.traits { | ||||
|                 if trait_info.visibility == "pub" { | ||||
|                     public_traits.push(trait_info.clone()); | ||||
|                 } | ||||
|             } | ||||
|         } | ||||
|          | ||||
|         Ok(ApiInfo { | ||||
|             modules: project_info.modules, | ||||
|             public_functions, | ||||
|             public_structs, | ||||
|             public_enums, | ||||
|             public_traits, | ||||
|         }) | ||||
|     } | ||||
|  | ||||
|     pub fn analyze_structure(&self, path: &Path, include_deps: bool) -> Result<ProjectStructure> { | ||||
|         println!("  🏗️  Analyzing structure at: {}", path.display()); | ||||
|          | ||||
|         let mut directories = Vec::new(); | ||||
|         let mut files = Vec::new(); | ||||
|         let mut dependency_graph = HashMap::new(); | ||||
|          | ||||
|         self.walk_directory(path, &mut directories, &mut files)?; | ||||
|          | ||||
|         if include_deps { | ||||
|             dependency_graph = self.analyze_dependencies(path)?; | ||||
|         } | ||||
|          | ||||
|         Ok(ProjectStructure { | ||||
|             directories, | ||||
|             files, | ||||
|             dependency_graph, | ||||
|         }) | ||||
|     } | ||||
|  | ||||
|     fn walk_directory( | ||||
|         &self, | ||||
|         path: &Path, | ||||
|         directories: &mut Vec<DirectoryInfo>, | ||||
|         files: &mut Vec<FileInfo>, | ||||
|     ) -> Result<()> { | ||||
|         use walkdir::WalkDir; | ||||
|          | ||||
|         let walker = WalkDir::new(path) | ||||
|             .into_iter() | ||||
|             .filter_entry(|e| { | ||||
|                 let name = e.file_name().to_string_lossy(); | ||||
|                 // Skip hidden files and common build/cache directories | ||||
|                 !name.starts_with('.')  | ||||
|                     && name != "target"  | ||||
|                     && name != "node_modules" | ||||
|                     && name != "dist" | ||||
|             }); | ||||
|          | ||||
|         for entry in walker { | ||||
|             let entry = entry?; | ||||
|             // Compute the path relative to the walk root before shadowing `path` | ||||
|             let relative_path = entry.path().strip_prefix(path).unwrap_or(entry.path()).to_path_buf(); | ||||
|             let path = entry.path(); | ||||
|              | ||||
|             if entry.file_type().is_dir() { | ||||
|                 let file_count = std::fs::read_dir(path)? | ||||
|                     .filter_map(|e| e.ok()) | ||||
|                     .filter(|e| e.file_type().map(|ft| ft.is_file()).unwrap_or(false)) | ||||
|                     .count(); | ||||
|                  | ||||
|                 let subdirectories = std::fs::read_dir(path)? | ||||
|                     .filter_map(|e| e.ok()) | ||||
|                     .filter(|e| e.file_type().map(|ft| ft.is_dir()).unwrap_or(false)) | ||||
|                     .map(|e| e.file_name().to_string_lossy().to_string()) | ||||
|                     .collect(); | ||||
|                  | ||||
|                 directories.push(DirectoryInfo { | ||||
|                     name: path.file_name().unwrap().to_string_lossy().to_string(), | ||||
|                     path: relative_path.to_path_buf(), | ||||
|                     file_count, | ||||
|                     subdirectories, | ||||
|                 }); | ||||
|             } else if entry.file_type().is_file() { | ||||
|                 let language = self.detect_language(path); | ||||
|                 let lines_of_code = self.count_lines(path)?; | ||||
|                 let is_test = self.is_test_file(path); | ||||
|                  | ||||
|                 files.push(FileInfo { | ||||
|                     name: path.file_name().unwrap().to_string_lossy().to_string(), | ||||
|                     path: relative_path.to_path_buf(), | ||||
|                     language, | ||||
|                     lines_of_code, | ||||
|                     is_test, | ||||
|                 }); | ||||
|             } | ||||
|         } | ||||
|          | ||||
|         Ok(()) | ||||
|     } | ||||
|  | ||||
|     fn detect_language(&self, path: &Path) -> String { | ||||
|         match path.extension().and_then(|s| s.to_str()) { | ||||
|             Some("rs") => "rust".to_string(), | ||||
|             Some("py") => "python".to_string(), | ||||
|             Some("js") => "javascript".to_string(), | ||||
|             Some("ts") => "typescript".to_string(), | ||||
|             Some("md") => "markdown".to_string(), | ||||
|             Some("toml") => "toml".to_string(), | ||||
|             Some("json") => "json".to_string(), | ||||
|             Some("yaml") | Some("yml") => "yaml".to_string(), | ||||
|             _ => "unknown".to_string(), | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     fn count_lines(&self, path: &Path) -> Result<usize> { | ||||
|         let content = std::fs::read_to_string(path)?; | ||||
|         Ok(content.lines().count()) | ||||
|     } | ||||
|  | ||||
|     fn is_test_file(&self, path: &Path) -> bool { | ||||
|         let filename = path.file_name().unwrap().to_string_lossy(); | ||||
|         filename.contains("test")  | ||||
|             || filename.starts_with("test_") | ||||
|             || path.to_string_lossy().contains("/tests/") | ||||
|     } | ||||
|  | ||||
|     fn analyze_dependencies(&self, _path: &Path) -> Result<HashMap<String, Vec<String>>> { | ||||
|         // For now, just return empty dependencies | ||||
|         // TODO: Implement actual dependency analysis | ||||
|         Ok(HashMap::new()) | ||||
|     } | ||||
| } | ||||
src/analyzer/rust_analyzer.rs (512 lines, new file)
							| @@ -0,0 +1,512 @@ | ||||
| use anyhow::Result; | ||||
| use std::collections::HashMap; | ||||
| use std::path::Path; | ||||
| use syn::{visit::Visit, ItemEnum, ItemFn, ItemStruct, ItemTrait, Visibility}; | ||||
|  | ||||
| use super::*; | ||||
|  | ||||
| pub struct RustAnalyzer; | ||||
|  | ||||
| impl RustAnalyzer { | ||||
|     pub fn new() -> Self { | ||||
|         Self | ||||
|     } | ||||
|  | ||||
|     pub fn analyze_project(&self, path: &Path) -> Result<ProjectInfo> { | ||||
|         // Parse Cargo.toml | ||||
|         let cargo_toml_path = path.join("Cargo.toml"); | ||||
|         let cargo_content = std::fs::read_to_string(&cargo_toml_path)?; | ||||
|         let cargo_toml: toml::Value = toml::from_str(&cargo_content)?; | ||||
|          | ||||
|         let package = cargo_toml | ||||
|             .get("package") | ||||
|             .ok_or_else(|| anyhow::anyhow!("Cargo.toml has no [package] section"))?; | ||||
|         let name = package.get("name").unwrap().as_str().unwrap().to_string(); | ||||
|         let description = package.get("description").map(|v| v.as_str().unwrap().to_string()); | ||||
|         let version = package.get("version").unwrap().as_str().unwrap().to_string(); | ||||
|         let authors = package | ||||
|             .get("authors") | ||||
|             .map(|v| { | ||||
|                 v.as_array() | ||||
|                     .unwrap() | ||||
|                     .iter() | ||||
|                     .map(|a| a.as_str().unwrap().to_string()) | ||||
|                     .collect() | ||||
|             }) | ||||
|             .unwrap_or_default(); | ||||
|         let license = package.get("license").map(|v| v.as_str().unwrap().to_string()); | ||||
|  | ||||
|         // Parse dependencies | ||||
|         let dependencies = self.parse_dependencies(&cargo_toml)?; | ||||
|  | ||||
|         // Analyze source code | ||||
|         let src_path = path.join("src"); | ||||
|         let modules = self.analyze_modules(&src_path)?; | ||||
|  | ||||
|         // Calculate metrics | ||||
|         let metrics = self.calculate_metrics(&modules, &dependencies); | ||||
|  | ||||
|         // Analyze structure | ||||
|         let structure = self.analyze_project_structure(path)?; | ||||
|  | ||||
|         Ok(ProjectInfo { | ||||
|             name, | ||||
|             description, | ||||
|             version, | ||||
|             authors, | ||||
|             license, | ||||
|             dependencies, | ||||
|             modules, | ||||
|             structure, | ||||
|             metrics, | ||||
|         }) | ||||
|     } | ||||
|  | ||||
|     fn parse_dependencies(&self, cargo_toml: &toml::Value) -> Result<HashMap<String, String>> { | ||||
|         let mut dependencies = HashMap::new(); | ||||
|  | ||||
|         if let Some(deps) = cargo_toml.get("dependencies") { | ||||
|             if let Some(deps_table) = deps.as_table() { | ||||
|                 for (name, value) in deps_table { | ||||
|                     let version = match value { | ||||
|                         toml::Value::String(v) => v.clone(), | ||||
|                         toml::Value::Table(t) => { | ||||
|                             t.get("version") | ||||
|                                 .and_then(|v| v.as_str()) | ||||
|                                 .unwrap_or("*") | ||||
|                                 .to_string() | ||||
|                         } | ||||
|                         _ => "*".to_string(), | ||||
|                     }; | ||||
|                     dependencies.insert(name.clone(), version); | ||||
|                 } | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         Ok(dependencies) | ||||
|     } | ||||
|  | ||||
|     fn analyze_modules(&self, src_path: &Path) -> Result<Vec<ModuleInfo>> { | ||||
|         let mut modules = Vec::new(); | ||||
|  | ||||
|         if !src_path.exists() { | ||||
|             return Ok(modules); | ||||
|         } | ||||
|  | ||||
|         // Walk through all .rs files | ||||
|         for entry in walkdir::WalkDir::new(src_path) { | ||||
|             let entry = entry?; | ||||
|             if entry.file_type().is_file() { | ||||
|                 if let Some(extension) = entry.path().extension() { | ||||
|                     if extension == "rs" { | ||||
|                         if let Ok(module) = self.analyze_rust_file(entry.path()) { | ||||
|                             modules.push(module); | ||||
|                         } | ||||
|                     } | ||||
|                 } | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         Ok(modules) | ||||
|     } | ||||
|  | ||||
|     fn analyze_rust_file(&self, file_path: &Path) -> Result<ModuleInfo> { | ||||
|         let content = std::fs::read_to_string(file_path)?; | ||||
|         let syntax_tree = syn::parse_file(&content)?; | ||||
|  | ||||
|         let mut visitor = RustVisitor::new(); | ||||
|         visitor.visit_file(&syntax_tree); | ||||
|  | ||||
|         let module_name = file_path | ||||
|             .file_stem() | ||||
|             .unwrap() | ||||
|             .to_string_lossy() | ||||
|             .to_string(); | ||||
|  | ||||
|         // Extract module-level documentation | ||||
|         let docs = self.extract_module_docs(&content); | ||||
|  | ||||
|         Ok(ModuleInfo { | ||||
|             name: module_name, | ||||
|             path: file_path.to_path_buf(), | ||||
|             functions: visitor.functions, | ||||
|             structs: visitor.structs, | ||||
|             enums: visitor.enums, | ||||
|             traits: visitor.traits, | ||||
|             docs, | ||||
|         }) | ||||
|     } | ||||
|  | ||||
|     fn extract_module_docs(&self, content: &str) -> Option<String> { | ||||
|         let lines: Vec<&str> = content.lines().collect(); | ||||
|         let mut doc_lines = Vec::new(); | ||||
|         let mut in_module_doc = false; | ||||
|  | ||||
|         for line in lines { | ||||
|             let trimmed = line.trim(); | ||||
|             if trimmed.starts_with("//!") { | ||||
|                 in_module_doc = true; | ||||
|                 doc_lines.push(trimmed.trim_start_matches("//!").trim()); | ||||
|             } else if trimmed.starts_with("/*!") { | ||||
|                 in_module_doc = true; | ||||
|                 let content = trimmed.trim_start_matches("/*!").trim_end_matches("*/").trim(); | ||||
|                 doc_lines.push(content); | ||||
|             } else if in_module_doc && !trimmed.is_empty() && !trimmed.starts_with("//") { | ||||
|                 break; | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         if doc_lines.is_empty() { | ||||
|             None | ||||
|         } else { | ||||
|             Some(doc_lines.join("\n")) | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     fn calculate_metrics(&self, modules: &[ModuleInfo], dependencies: &HashMap<String, String>) -> ProjectMetrics { | ||||
|         let total_lines = modules.iter().map(|m| { | ||||
|             std::fs::read_to_string(&m.path) | ||||
|                 .map(|content| content.lines().count()) | ||||
|                 .unwrap_or(0) | ||||
|         }).sum(); | ||||
|  | ||||
|         let total_files = modules.len(); | ||||
|         let test_files = modules.iter().filter(|m| { | ||||
|             m.name.contains("test") || m.path.to_string_lossy().contains("/tests/") | ||||
|         }).count(); | ||||
|  | ||||
|         let dependency_count = dependencies.len(); | ||||
|  | ||||
|         // Simple complexity calculation based on number of functions and structs | ||||
|         let complexity_score = modules.iter().map(|m| { | ||||
|             (m.functions.len() + m.structs.len() + m.enums.len() + m.traits.len()) as f32 | ||||
|         }).sum::<f32>() / modules.len().max(1) as f32; | ||||
|  | ||||
|         ProjectMetrics { | ||||
|             total_lines, | ||||
|             total_files, | ||||
|             test_files, | ||||
|             dependency_count, | ||||
|             complexity_score, | ||||
|             test_coverage: None, // TODO: Implement test coverage calculation | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     fn analyze_project_structure(&self, path: &Path) -> Result<ProjectStructure> { | ||||
|         let mut directories = Vec::new(); | ||||
|         let mut files = Vec::new(); | ||||
|  | ||||
|         self.walk_directory(path, &mut directories, &mut files)?; | ||||
|  | ||||
|         Ok(ProjectStructure { | ||||
|             directories, | ||||
|             files, | ||||
|             dependency_graph: HashMap::new(), // TODO: Implement dependency graph | ||||
|         }) | ||||
|     } | ||||
|  | ||||
|     fn walk_directory( | ||||
|         &self, | ||||
|         path: &Path, | ||||
|         directories: &mut Vec<DirectoryInfo>, | ||||
|         files: &mut Vec<FileInfo>, | ||||
|     ) -> Result<()> { | ||||
|         for entry in walkdir::WalkDir::new(path).max_depth(3) { | ||||
|             let entry = entry?; | ||||
|             let relative_path = entry.path().strip_prefix(path)?; | ||||
|  | ||||
|             if entry.file_type().is_dir() && relative_path != Path::new("") { | ||||
|                 let file_count = std::fs::read_dir(entry.path())? | ||||
|                     .filter_map(|e| e.ok()) | ||||
|                     .filter(|e| e.file_type().map(|ft| ft.is_file()).unwrap_or(false)) | ||||
|                     .count(); | ||||
|  | ||||
|                 let subdirectories = std::fs::read_dir(entry.path())? | ||||
|                     .filter_map(|e| e.ok()) | ||||
|                     .filter(|e| e.file_type().map(|ft| ft.is_dir()).unwrap_or(false)) | ||||
|                     .map(|e| e.file_name().to_string_lossy().to_string()) | ||||
|                     .collect(); | ||||
|  | ||||
|                 directories.push(DirectoryInfo { | ||||
|                     name: entry.path().file_name().unwrap().to_string_lossy().to_string(), | ||||
|                     path: relative_path.to_path_buf(), | ||||
|                     file_count, | ||||
|                     subdirectories, | ||||
|                 }); | ||||
|             } else if entry.file_type().is_file() { | ||||
|                 let language = match entry.path().extension().and_then(|s| s.to_str()) { | ||||
|                     Some("rs") => "rust".to_string(), | ||||
|                     Some("toml") => "toml".to_string(), | ||||
|                     Some("md") => "markdown".to_string(), | ||||
|                     _ => "unknown".to_string(), | ||||
|                 }; | ||||
|  | ||||
|                 let lines_of_code = std::fs::read_to_string(entry.path()) | ||||
|                     .map(|content| content.lines().count()) | ||||
|                     .unwrap_or(0); | ||||
|  | ||||
|                 let is_test = entry.path().to_string_lossy().contains("test"); | ||||
|  | ||||
|                 files.push(FileInfo { | ||||
|                     name: entry.path().file_name().unwrap().to_string_lossy().to_string(), | ||||
|                     path: relative_path.to_path_buf(), | ||||
|                     language, | ||||
|                     lines_of_code, | ||||
|                     is_test, | ||||
|                 }); | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         Ok(()) | ||||
|     } | ||||
| } | ||||
|  | ||||
| struct RustVisitor { | ||||
|     functions: Vec<FunctionInfo>, | ||||
|     structs: Vec<StructInfo>, | ||||
|     enums: Vec<EnumInfo>, | ||||
|     traits: Vec<TraitInfo>, | ||||
|     current_line: usize, // NOTE: never advanced, so reported line numbers default to 1 | ||||
| } | ||||
|  | ||||
| impl RustVisitor { | ||||
|     fn new() -> Self { | ||||
|         Self { | ||||
|             functions: Vec::new(), | ||||
|             structs: Vec::new(), | ||||
|             enums: Vec::new(), | ||||
|             traits: Vec::new(), | ||||
|             current_line: 1, | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     fn visibility_to_string(&self, vis: &Visibility) -> String { | ||||
|         match vis { | ||||
|             Visibility::Public(_) => "pub".to_string(), | ||||
|             Visibility::Restricted(_) => "pub(restricted)".to_string(), | ||||
|             Visibility::Inherited => "private".to_string(), | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     fn extract_docs(&self, attrs: &[syn::Attribute]) -> Option<String> { | ||||
|         let mut docs = Vec::new(); | ||||
|         for attr in attrs { | ||||
|             if attr.path().is_ident("doc") { | ||||
|                 if let syn::Meta::NameValue(meta) = &attr.meta { | ||||
|                     if let syn::Expr::Lit(expr_lit) = &meta.value { | ||||
|                         if let syn::Lit::Str(lit_str) = &expr_lit.lit { | ||||
|                             docs.push(lit_str.value()); | ||||
|                         } | ||||
|                     } | ||||
|                 } | ||||
|             } | ||||
|         } | ||||
|         if docs.is_empty() { | ||||
|             None | ||||
|         } else { | ||||
|             Some(docs.join("\n")) | ||||
|         } | ||||
|     } | ||||
| } | ||||
|  | ||||
| impl<'ast> Visit<'ast> for RustVisitor { | ||||
|     fn visit_item_fn(&mut self, node: &'ast ItemFn) { | ||||
|         let name = node.sig.ident.to_string(); | ||||
|         let visibility = self.visibility_to_string(&node.vis); | ||||
|         let is_async = node.sig.asyncness.is_some(); | ||||
|          | ||||
|         let parameters = node.sig.inputs.iter().map(|input| { | ||||
|             match input { | ||||
|                 syn::FnArg::Receiver(_) => Parameter { | ||||
|                     name: "self".to_string(), | ||||
|                     param_type: "Self".to_string(), | ||||
|                     is_mutable: false, | ||||
|                 }, | ||||
|                 syn::FnArg::Typed(typed) => { | ||||
|                     let name = match &*typed.pat { | ||||
|                         syn::Pat::Ident(ident) => ident.ident.to_string(), | ||||
|                         _ => "unknown".to_string(), | ||||
|                     }; | ||||
|                     // Bind the type first: `#typed.ty` in quote! would interpolate | ||||
|                     // the whole pattern and then append a literal `.ty` | ||||
|                     let ty = &typed.ty; | ||||
|                     Parameter { | ||||
|                         name, | ||||
|                         param_type: quote::quote!(#ty).to_string(), | ||||
|                         is_mutable: false, // TODO: Detect mutability | ||||
|                     } | ||||
|                 } | ||||
|             } | ||||
|         }).collect(); | ||||
|  | ||||
|         let return_type = match &node.sig.output { | ||||
|             syn::ReturnType::Default => None, | ||||
|             syn::ReturnType::Type(_, ty) => Some(quote::quote!(#ty).to_string()), | ||||
|         }; | ||||
|  | ||||
|         let docs = self.extract_docs(&node.attrs); | ||||
|  | ||||
|         self.functions.push(FunctionInfo { | ||||
|             name, | ||||
|             visibility, | ||||
|             is_async, | ||||
|             parameters, | ||||
|             return_type, | ||||
|             docs, | ||||
|             line_number: self.current_line, | ||||
|         }); | ||||
|  | ||||
|         syn::visit::visit_item_fn(self, node); | ||||
|     } | ||||
|  | ||||
|     fn visit_item_struct(&mut self, node: &'ast ItemStruct) { | ||||
|         let name = node.ident.to_string(); | ||||
|         let visibility = self.visibility_to_string(&node.vis); | ||||
|         let docs = self.extract_docs(&node.attrs); | ||||
|  | ||||
|         let fields = match &node.fields { | ||||
|             syn::Fields::Named(fields) => { | ||||
|                 fields.named.iter().map(|field| { | ||||
|                     let ty = &field.ty; // bind so quote! emits only the type tokens | ||||
|                     FieldInfo { | ||||
|                         name: field.ident.as_ref().unwrap().to_string(), | ||||
|                         field_type: quote::quote!(#ty).to_string(), | ||||
|                         visibility: self.visibility_to_string(&field.vis), | ||||
|                         docs: self.extract_docs(&field.attrs), | ||||
|                     } | ||||
|                 }).collect() | ||||
|             } | ||||
|             syn::Fields::Unnamed(fields) => { | ||||
|                 fields.unnamed.iter().enumerate().map(|(i, field)| { | ||||
|                     let ty = &field.ty; // bind so quote! emits only the type tokens | ||||
|                     FieldInfo { | ||||
|                         name: format!("field_{}", i), | ||||
|                         field_type: quote::quote!(#ty).to_string(), | ||||
|                         visibility: self.visibility_to_string(&field.vis), | ||||
|                         docs: self.extract_docs(&field.attrs), | ||||
|                     } | ||||
|                 }).collect() | ||||
|             } | ||||
|             syn::Fields::Unit => Vec::new(), | ||||
|         }; | ||||
|  | ||||
|         self.structs.push(StructInfo { | ||||
|             name, | ||||
|             visibility, | ||||
|             fields, | ||||
|             docs, | ||||
|             line_number: self.current_line, | ||||
|         }); | ||||
|  | ||||
|         syn::visit::visit_item_struct(self, node); | ||||
|     } | ||||
|  | ||||
|     fn visit_item_enum(&mut self, node: &'ast ItemEnum) { | ||||
|         let name = node.ident.to_string(); | ||||
|         let visibility = self.visibility_to_string(&node.vis); | ||||
|         let docs = self.extract_docs(&node.attrs); | ||||
|  | ||||
|         let variants = node.variants.iter().map(|variant| { | ||||
|             let variant_name = variant.ident.to_string(); | ||||
|             let variant_docs = self.extract_docs(&variant.attrs); | ||||
|  | ||||
|             let fields = match &variant.fields { | ||||
|                 syn::Fields::Named(fields) => { | ||||
|                     fields.named.iter().map(|field| { | ||||
|                         let ty = &field.ty; // bind so quote! emits only the type tokens | ||||
|                         FieldInfo { | ||||
|                             name: field.ident.as_ref().unwrap().to_string(), | ||||
|                             field_type: quote::quote!(#ty).to_string(), | ||||
|                             visibility: self.visibility_to_string(&field.vis), | ||||
|                             docs: self.extract_docs(&field.attrs), | ||||
|                         } | ||||
|                     }).collect() | ||||
|                 } | ||||
|                 syn::Fields::Unnamed(fields) => { | ||||
|                     fields.unnamed.iter().enumerate().map(|(i, field)| { | ||||
|                         let ty = &field.ty; // bind so quote! emits only the type tokens | ||||
|                         FieldInfo { | ||||
|                             name: format!("field_{}", i), | ||||
|                             field_type: quote::quote!(#ty).to_string(), | ||||
|                             visibility: self.visibility_to_string(&field.vis), | ||||
|                             docs: self.extract_docs(&field.attrs), | ||||
|                         } | ||||
|                     }).collect() | ||||
|                 } | ||||
|                 syn::Fields::Unit => Vec::new(), | ||||
|             }; | ||||
|  | ||||
|             VariantInfo { | ||||
|                 name: variant_name, | ||||
|                 fields, | ||||
|                 docs: variant_docs, | ||||
|             } | ||||
|         }).collect(); | ||||
|  | ||||
|         self.enums.push(EnumInfo { | ||||
|             name, | ||||
|             visibility, | ||||
|             variants, | ||||
|             docs, | ||||
|             line_number: self.current_line, | ||||
|         }); | ||||
|  | ||||
|         syn::visit::visit_item_enum(self, node); | ||||
|     } | ||||
|  | ||||
|     fn visit_item_trait(&mut self, node: &'ast ItemTrait) { | ||||
|         let name = node.ident.to_string(); | ||||
|         let visibility = self.visibility_to_string(&node.vis); | ||||
|         let docs = self.extract_docs(&node.attrs); | ||||
|  | ||||
|         let methods = node.items.iter().filter_map(|item| { | ||||
|             match item { | ||||
|                 syn::TraitItem::Fn(method) => { | ||||
|                     let method_name = method.sig.ident.to_string(); | ||||
|                     let method_visibility = "pub".to_string(); // Trait methods are inherently public | ||||
|                     let is_async = method.sig.asyncness.is_some(); | ||||
|                      | ||||
|                     let parameters = method.sig.inputs.iter().map(|input| { | ||||
|                         match input { | ||||
|                             syn::FnArg::Receiver(_) => Parameter { | ||||
|                                 name: "self".to_string(), | ||||
|                                 param_type: "Self".to_string(), | ||||
|                                 is_mutable: false, | ||||
|                             }, | ||||
|                             syn::FnArg::Typed(typed) => { | ||||
|                                 let name = match &*typed.pat { | ||||
|                                     syn::Pat::Ident(ident) => ident.ident.to_string(), | ||||
|                                     _ => "unknown".to_string(), | ||||
|                                 }; | ||||
|                                 let ty = &typed.ty; // bind so quote! emits only the type tokens | ||||
|                                 Parameter { | ||||
|                                     name, | ||||
|                                     param_type: quote::quote!(#ty).to_string(), | ||||
|                                     is_mutable: false, | ||||
|                                 } | ||||
|                             } | ||||
|                         } | ||||
|                     }).collect(); | ||||
|  | ||||
|                     let return_type = match &method.sig.output { | ||||
|                         syn::ReturnType::Default => None, | ||||
|                         syn::ReturnType::Type(_, ty) => Some(quote::quote!(#ty).to_string()), | ||||
|                     }; | ||||
|  | ||||
|                     let method_docs = self.extract_docs(&method.attrs); | ||||
|  | ||||
|                     Some(FunctionInfo { | ||||
|                         name: method_name, | ||||
|                         visibility: method_visibility, | ||||
|                         is_async, | ||||
|                         parameters, | ||||
|                         return_type, | ||||
|                         docs: method_docs, | ||||
|                         line_number: self.current_line, | ||||
|                     }) | ||||
|                 } | ||||
|                 _ => None, | ||||
|             } | ||||
|         }).collect(); | ||||
|  | ||||
|         self.traits.push(TraitInfo { | ||||
|             name, | ||||
|             visibility, | ||||
|             methods, | ||||
|             docs, | ||||
|             line_number: self.current_line, | ||||
|         }); | ||||
|  | ||||
|         syn::visit::visit_item_trait(self, node); | ||||
|     } | ||||
| } | ||||
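For context, a minimal sketch of how this visitor could be driven over a source file. The visitor's type name is not shown in this diff, so `RustCodeVisitor` below is an assumed stand-in (as is its `Default` impl); `syn::parse_file` and `Visit::visit_file` are the actual entry points.

```rust
use syn::visit::Visit;

// Sketch: parse a source string and run the visitor over it.
// `RustCodeVisitor` is a placeholder name for the analyzer's visitor type.
fn collect_items(source: &str) -> syn::Result<()> {
    let file: syn::File = syn::parse_file(source)?;
    let mut visitor = RustCodeVisitor::default();
    visitor.visit_file(&file);
    // visitor.enums and visitor.traits now hold the collected EnumInfo/TraitInfo
    Ok(())
}
```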
							
								
								
									
src/commands/doc.rs (new file, 287 lines)
							| @@ -0,0 +1,287 @@ | ||||
| use anyhow::Result; | ||||
| use clap::{Subcommand, Parser}; | ||||
| use std::path::PathBuf; | ||||
| use crate::analyzer::CodeAnalyzer; | ||||
| use crate::doc_generator::DocGenerator; | ||||
| use crate::translator::{TranslationConfig, Translator}; | ||||
| use crate::translator::ollama_translator::OllamaTranslator; | ||||
|  | ||||
| #[derive(Parser)] | ||||
| #[command(about = "Generate documentation from code")] | ||||
| pub struct DocCommand { | ||||
|     #[command(subcommand)] | ||||
|     pub action: DocAction, | ||||
| } | ||||
|  | ||||
| #[derive(Subcommand)] | ||||
| pub enum DocAction { | ||||
|     /// Generate README.md from project analysis | ||||
|     Readme { | ||||
|         /// Source directory to analyze | ||||
|         #[arg(long, default_value = ".")] | ||||
|         source: PathBuf, | ||||
|         /// Output file path | ||||
|         #[arg(long, default_value = "README.md")] | ||||
|         output: PathBuf, | ||||
|         /// Include AI-generated insights | ||||
|         #[arg(long)] | ||||
|         with_ai: bool, | ||||
|     }, | ||||
|     /// Generate API documentation | ||||
|     Api { | ||||
|         /// Source directory to analyze | ||||
|         #[arg(long, default_value = "./src")] | ||||
|         source: PathBuf, | ||||
|         /// Output directory | ||||
|         #[arg(long, default_value = "./docs")] | ||||
|         output: PathBuf, | ||||
|         /// Output format (markdown, html, json) | ||||
|         #[arg(long, default_value = "markdown")] | ||||
|         format: String, | ||||
|     }, | ||||
|     /// Analyze and document project structure | ||||
|     Structure { | ||||
|         /// Source directory to analyze | ||||
|         #[arg(long, default_value = ".")] | ||||
|         source: PathBuf, | ||||
|         /// Output file path | ||||
|         #[arg(long, default_value = "docs/structure.md")] | ||||
|         output: PathBuf, | ||||
|         /// Include dependency graph | ||||
|         #[arg(long)] | ||||
|         include_deps: bool, | ||||
|     }, | ||||
|     /// Generate changelog from git commits | ||||
|     Changelog { | ||||
|         /// Start from this commit/tag | ||||
|         #[arg(long)] | ||||
|         from: Option<String>, | ||||
|         /// End at this commit/tag | ||||
|         #[arg(long)] | ||||
|         to: Option<String>, | ||||
|         /// Output file path | ||||
|         #[arg(long, default_value = "CHANGELOG.md")] | ||||
|         output: PathBuf, | ||||
|         /// Include AI explanations for changes | ||||
|         #[arg(long)] | ||||
|         explain_changes: bool, | ||||
|     }, | ||||
|     /// Translate documentation using Ollama | ||||
|     Translate { | ||||
|         /// Input file path | ||||
|         #[arg(long)] | ||||
|         input: PathBuf, | ||||
|         /// Target language (en, ja, zh, ko, es) | ||||
|         #[arg(long)] | ||||
|         target_lang: String, | ||||
|         /// Source language (auto-detect if not specified) | ||||
|         #[arg(long)] | ||||
|         source_lang: Option<String>, | ||||
|         /// Output file path (auto-generated if not specified) | ||||
|         #[arg(long)] | ||||
|         output: Option<PathBuf>, | ||||
|         /// Ollama model to use | ||||
|         #[arg(long, default_value = "qwen2.5:latest")] | ||||
|         model: String, | ||||
|         /// Ollama endpoint | ||||
|         #[arg(long, default_value = "http://localhost:11434")] | ||||
|         ollama_endpoint: String, | ||||
|     }, | ||||
| } | ||||
|  | ||||
| impl DocCommand { | ||||
|     pub async fn execute(self, base_path: PathBuf) -> Result<()> { | ||||
|         match self.action { | ||||
|             DocAction::Readme { ref source, ref output, with_ai } => { | ||||
|                 self.generate_readme(base_path, source.clone(), output.clone(), with_ai).await | ||||
|             } | ||||
|             DocAction::Api { ref source, ref output, ref format } => { | ||||
|                 self.generate_api_docs(base_path, source.clone(), output.clone(), format.clone()).await | ||||
|             } | ||||
|             DocAction::Structure { ref source, ref output, include_deps } => { | ||||
|                 self.analyze_structure(base_path, source.clone(), output.clone(), include_deps).await | ||||
|             } | ||||
|             DocAction::Changelog { ref from, ref to, ref output, explain_changes } => { | ||||
|                 self.generate_changelog(base_path, from.clone(), to.clone(), output.clone(), explain_changes).await | ||||
|             } | ||||
|             DocAction::Translate { ref input, ref target_lang, ref source_lang, ref output, ref model, ref ollama_endpoint } => { | ||||
|                 self.translate_document(input.clone(), target_lang.clone(), source_lang.clone(), output.clone(), model.clone(), ollama_endpoint.clone()).await | ||||
|             } | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     async fn generate_readme( | ||||
|         &self, | ||||
|         base_path: PathBuf, | ||||
|         source: PathBuf, | ||||
|         output: PathBuf, | ||||
|         with_ai: bool, | ||||
|     ) -> Result<()> { | ||||
|         println!("🔍 Analyzing project for README generation..."); | ||||
|          | ||||
|         let analyzer = CodeAnalyzer::new(); | ||||
|         let generator = DocGenerator::new(base_path.clone(), with_ai); | ||||
|          | ||||
|         let project_info = analyzer.analyze_project(&source)?; | ||||
|         let readme_content = generator.generate_readme(&project_info).await?; | ||||
|          | ||||
|         std::fs::write(&output, readme_content)?; | ||||
|          | ||||
|         println!("✅ README generated: {}", output.display()); | ||||
|         Ok(()) | ||||
|     } | ||||
|  | ||||
|     async fn generate_api_docs( | ||||
|         &self, | ||||
|         base_path: PathBuf, | ||||
|         source: PathBuf, | ||||
|         output: PathBuf, | ||||
|         format: String, | ||||
|     ) -> Result<()> { | ||||
|         println!("📚 Generating API documentation..."); | ||||
|          | ||||
|         let analyzer = CodeAnalyzer::new(); | ||||
|         let generator = DocGenerator::new(base_path.clone(), true); | ||||
|          | ||||
|         let api_info = analyzer.analyze_api(&source)?; | ||||
|          | ||||
|         match format.as_str() { | ||||
|             "markdown" => { | ||||
|                 let docs = generator.generate_api_markdown(&api_info).await?; | ||||
|                 std::fs::create_dir_all(&output)?; | ||||
|                  | ||||
|                 for (filename, content) in docs { | ||||
|                     let file_path = output.join(filename); | ||||
|                     std::fs::write(&file_path, content)?; | ||||
|                     println!("  📄 Generated: {}", file_path.display()); | ||||
|                 } | ||||
|             } | ||||
|             "html" => { | ||||
|                 println!("HTML format not yet implemented"); | ||||
|             } | ||||
|             "json" => { | ||||
|                 let json_content = serde_json::to_string_pretty(&api_info)?; | ||||
|                 let file_path = output.join("api.json"); | ||||
|                 std::fs::create_dir_all(&output)?; | ||||
|                 std::fs::write(&file_path, json_content)?; | ||||
|                 println!("  📄 Generated: {}", file_path.display()); | ||||
|             } | ||||
|             _ => { | ||||
|                 anyhow::bail!("Unsupported format: {}", format); | ||||
|             } | ||||
|         } | ||||
|          | ||||
|         println!("✅ API documentation generated in: {}", output.display()); | ||||
|         Ok(()) | ||||
|     } | ||||
|  | ||||
|     async fn analyze_structure( | ||||
|         &self, | ||||
|         base_path: PathBuf, | ||||
|         source: PathBuf, | ||||
|         output: PathBuf, | ||||
|         include_deps: bool, | ||||
|     ) -> Result<()> { | ||||
|         println!("🏗️  Analyzing project structure..."); | ||||
|          | ||||
|         let analyzer = CodeAnalyzer::new(); | ||||
|         let generator = DocGenerator::new(base_path.clone(), false); | ||||
|          | ||||
|         let structure = analyzer.analyze_structure(&source, include_deps)?; | ||||
|         let structure_doc = generator.generate_structure_doc(&structure).await?; | ||||
|          | ||||
|         // Ensure output directory exists | ||||
|         if let Some(parent) = output.parent() { | ||||
|             std::fs::create_dir_all(parent)?; | ||||
|         } | ||||
|          | ||||
|         std::fs::write(&output, structure_doc)?; | ||||
|          | ||||
|         println!("✅ Structure documentation generated: {}", output.display()); | ||||
|         Ok(()) | ||||
|     } | ||||
|  | ||||
|     async fn generate_changelog( | ||||
|         &self, | ||||
|         base_path: PathBuf, | ||||
|         from: Option<String>, | ||||
|         to: Option<String>, | ||||
|         output: PathBuf, | ||||
|         explain_changes: bool, | ||||
|     ) -> Result<()> { | ||||
|         println!("📝 Generating changelog from git history..."); | ||||
|          | ||||
|         let generator = DocGenerator::new(base_path.clone(), explain_changes); | ||||
|         let changelog = generator.generate_changelog(from, to).await?; | ||||
|          | ||||
|         std::fs::write(&output, changelog)?; | ||||
|          | ||||
|         println!("✅ Changelog generated: {}", output.display()); | ||||
|         Ok(()) | ||||
|     } | ||||
|  | ||||
|     async fn translate_document( | ||||
|         &self, | ||||
|         input: PathBuf, | ||||
|         target_lang: String, | ||||
|         source_lang: Option<String>, | ||||
|         output: Option<PathBuf>, | ||||
|         model: String, | ||||
|         ollama_endpoint: String, | ||||
|     ) -> Result<()> { | ||||
|         println!("🌍 Translating document with Ollama..."); | ||||
|          | ||||
|         // Read input file | ||||
|         let content = std::fs::read_to_string(&input)?; | ||||
|         println!("📖 Read {} characters from {}", content.len(), input.display()); | ||||
|          | ||||
|         // Setup translation config | ||||
|         let config = TranslationConfig { | ||||
|             source_lang: source_lang.unwrap_or_else(|| { | ||||
|                 // Simple language detection based on content | ||||
|                 if content.chars().any(|c| { | ||||
|                     (c >= '\u{3040}' && c <= '\u{309F}') || // Hiragana | ||||
|                     (c >= '\u{30A0}' && c <= '\u{30FF}') || // Katakana | ||||
|                     (c >= '\u{4E00}' && c <= '\u{9FAF}')    // CJK Unified Ideographs | ||||
|                 }) { | ||||
|                     "ja".to_string() | ||||
|                 } else { | ||||
|                     "en".to_string() | ||||
|                 } | ||||
|             }), | ||||
|             target_lang, | ||||
|             ollama_endpoint, | ||||
|             model, | ||||
|             preserve_code: true, | ||||
|             preserve_links: true, | ||||
|         }; | ||||
|          | ||||
|         println!("🔧 Translation config: {} → {}", config.source_lang, config.target_lang); | ||||
|         println!("🤖 Using model: {} at {}", config.model, config.ollama_endpoint); | ||||
|          | ||||
|         // Create translator | ||||
|         let translator = OllamaTranslator::new(); | ||||
|          | ||||
|         // Perform translation | ||||
|         let translated = translator.translate_markdown(&content, &config).await?; | ||||
|          | ||||
|         // Determine output path | ||||
|         let output_path = match output { | ||||
|             Some(path) => path, | ||||
|             None => { | ||||
|                 let input_stem = input.file_stem().unwrap().to_string_lossy(); | ||||
|                 let input_ext = input.extension().unwrap_or_default().to_string_lossy(); | ||||
|                 let output_name = format!("{}.{}.{}", input_stem, config.target_lang, input_ext); | ||||
|                 input.parent().unwrap_or_else(|| std::path::Path::new(".")).join(output_name) | ||||
|             } | ||||
|         }; | ||||
|          | ||||
|         // Write translated content | ||||
|         std::fs::write(&output_path, translated)?; | ||||
|          | ||||
|         println!("✅ Translation completed: {}", output_path.display()); | ||||
|         println!("📝 Language: {} → {}", config.source_lang, config.target_lang); | ||||
|          | ||||
|         Ok(()) | ||||
|     } | ||||
| } | ||||
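One usage note on `translate_document`: when `--output` is omitted, the target language code is spliced in before the extension, next to the input file. A hedged sketch of that derivation as a standalone helper (the function name is illustrative, not part of the codebase):

```rust
use std::path::{Path, PathBuf};

// Illustrative helper mirroring the default-output logic above:
// "docs/guide.md" translated to "en" becomes "docs/guide.en.md".
fn default_output_path(input: &Path, target_lang: &str) -> PathBuf {
    let stem = input.file_stem().unwrap_or_default().to_string_lossy();
    let ext = input.extension().unwrap_or_default().to_string_lossy();
    input
        .parent()
        .unwrap_or_else(|| Path::new("."))
        .join(format!("{}.{}.{}", stem, target_lang, ext))
}
```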
| @@ -2,4 +2,5 @@ pub mod init; | ||||
| pub mod build; | ||||
| pub mod new; | ||||
| pub mod serve; | ||||
| pub mod clean; | ||||
| pub mod clean; | ||||
| pub mod doc; | ||||
							
								
								
									
src/doc_generator.rs (new file, 235 lines)
							| @@ -0,0 +1,235 @@ | ||||
| use anyhow::Result; | ||||
| use std::path::PathBuf; | ||||
| use crate::analyzer::{ProjectInfo, ApiInfo, ProjectStructure}; | ||||
| use crate::ai::gpt_client::GptClient; | ||||
|  | ||||
| pub struct DocGenerator { | ||||
|     base_path: PathBuf, | ||||
|     ai_enabled: bool, | ||||
|     templates: DocTemplates, | ||||
| } | ||||
|  | ||||
| pub struct DocTemplates { | ||||
|     readme_template: String, | ||||
|     api_template: String, | ||||
|     structure_template: String, | ||||
|     changelog_template: String, | ||||
| } | ||||
|  | ||||
| impl DocGenerator { | ||||
|     pub fn new(base_path: PathBuf, ai_enabled: bool) -> Self { | ||||
|         let templates = DocTemplates::default(); | ||||
|         Self { | ||||
|             base_path, | ||||
|             ai_enabled, | ||||
|             templates, | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     pub async fn generate_readme(&self, project_info: &ProjectInfo) -> Result<String> { | ||||
|         let mut content = self.templates.readme_template.clone(); | ||||
|          | ||||
|         // Simple template substitution | ||||
|         content = content.replace("{{name}}", &project_info.name); | ||||
|         content = content.replace("{{description}}", | ||||
|             project_info.description.as_deref().unwrap_or("A Rust project")); | ||||
|         content = content.replace("{{module_count}}", &project_info.modules.len().to_string()); | ||||
|         content = content.replace("{{total_lines}}", &project_info.metrics.total_lines.to_string()); | ||||
|          | ||||
|         let deps = project_info.dependencies.iter() | ||||
|             .map(|(name, version)| format!("- {}: {}", name, version)) | ||||
|             .collect::<Vec<_>>() | ||||
|             .join("\n"); | ||||
|         content = content.replace("{{dependencies}}", &deps); | ||||
|         content = content.replace("{{license}}", | ||||
|             project_info.license.as_deref().unwrap_or("MIT")); | ||||
|          | ||||
|         if self.ai_enabled { | ||||
|             content = self.enhance_with_ai(&content, "readme").await?; | ||||
|         } | ||||
|          | ||||
|         Ok(content) | ||||
|     } | ||||
|  | ||||
|     pub async fn generate_api_markdown(&self, api_info: &ApiInfo) -> Result<Vec<(String, String)>> { | ||||
|         let mut files = Vec::new(); | ||||
|          | ||||
|         // Generate main API documentation | ||||
|         let main_content = self.templates.api_template.replace("{{content}}", "Generated API Documentation"); | ||||
|         files.push(("api.md".to_string(), main_content)); | ||||
|          | ||||
|         // Generate individual module docs | ||||
|         for module in &api_info.modules { | ||||
|             if !module.functions.is_empty() || !module.structs.is_empty() { | ||||
|                 let module_content = self.generate_module_doc(module).await?; | ||||
|                 files.push((format!("{}.md", module.name), module_content)); | ||||
|             } | ||||
|         } | ||||
|          | ||||
|         Ok(files) | ||||
|     } | ||||
|  | ||||
|     pub async fn generate_structure_doc(&self, structure: &ProjectStructure) -> Result<String> { | ||||
|         let content = self.templates.structure_template.replace("{{content}}",  | ||||
|             &format!("Found {} directories and {} files",  | ||||
|                 structure.directories.len(),  | ||||
|                 structure.files.len())); | ||||
|         Ok(content) | ||||
|     } | ||||
|  | ||||
|     pub async fn generate_changelog(&self, from: Option<String>, to: Option<String>) -> Result<String> { | ||||
|         let commits = self.get_git_commits(from, to)?; | ||||
|          | ||||
|         let mut content = self.templates.changelog_template.replace("{{content}}",  | ||||
|             &format!("Found {} commits", commits.len())); | ||||
|          | ||||
|         if self.ai_enabled { | ||||
|             content = self.enhance_changelog_with_ai(&content, &commits).await?; | ||||
|         } | ||||
|          | ||||
|         Ok(content) | ||||
|     } | ||||
|  | ||||
|  | ||||
|     async fn enhance_with_ai(&self, content: &str, doc_type: &str) -> Result<String> { | ||||
|         if !self.ai_enabled { | ||||
|             return Ok(content.to_string()); | ||||
|         } | ||||
|  | ||||
|         let gpt_client = GptClient::new( | ||||
|             std::env::var("OPENAI_API_KEY").unwrap_or_default(), | ||||
|             None, | ||||
|         ); | ||||
|  | ||||
|         let prompt = format!( | ||||
|             "Enhance this {} documentation with additional insights and improve readability:\n\n{}", | ||||
|             doc_type, content | ||||
|         ); | ||||
|  | ||||
|         match gpt_client.chat("You are a technical writer helping to improve documentation.", &prompt).await { | ||||
|             Ok(enhanced) => Ok(enhanced), | ||||
|             Err(_) => Ok(content.to_string()), // Fallback to original content | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     async fn generate_module_doc(&self, module: &crate::analyzer::ModuleInfo) -> Result<String> { | ||||
|         let mut content = format!("# Module: {}\n\n", module.name); | ||||
|          | ||||
|         if let Some(docs) = &module.docs { | ||||
|             content.push_str(&format!("{}\n\n", docs)); | ||||
|         } | ||||
|  | ||||
|         // Add functions | ||||
|         if !module.functions.is_empty() { | ||||
|             content.push_str("## Functions\n\n"); | ||||
|             for func in &module.functions { | ||||
|                 content.push_str(&self.format_function_doc(func)); | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         // Add structs | ||||
|         if !module.structs.is_empty() { | ||||
|             content.push_str("## Structs\n\n"); | ||||
|             for struct_info in &module.structs { | ||||
|                 content.push_str(&self.format_struct_doc(struct_info)); | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         Ok(content) | ||||
|     } | ||||
|  | ||||
|     fn format_function_doc(&self, func: &crate::analyzer::FunctionInfo) -> String { | ||||
|         let mut doc = format!("### `{}`\n\n", func.name); | ||||
|          | ||||
|         if let Some(docs) = &func.docs { | ||||
|             doc.push_str(&format!("{}\n\n", docs)); | ||||
|         } | ||||
|  | ||||
|         doc.push_str(&format!("**Visibility:** `{}`\n", func.visibility)); | ||||
|          | ||||
|         if func.is_async { | ||||
|             doc.push_str("**Async:** Yes\n"); | ||||
|         } | ||||
|  | ||||
|         if !func.parameters.is_empty() { | ||||
|             doc.push_str("\n**Parameters:**\n"); | ||||
|             for param in &func.parameters { | ||||
|                 doc.push_str(&format!("- `{}`: `{}`\n", param.name, param.param_type)); | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         if let Some(return_type) = &func.return_type { | ||||
|             doc.push_str(&format!("\n**Returns:** `{}`\n", return_type)); | ||||
|         } | ||||
|  | ||||
|         doc.push_str("\n---\n\n"); | ||||
|         doc | ||||
|     } | ||||
|  | ||||
|     fn format_struct_doc(&self, struct_info: &crate::analyzer::StructInfo) -> String { | ||||
|         let mut doc = format!("### `{}`\n\n", struct_info.name); | ||||
|          | ||||
|         if let Some(docs) = &struct_info.docs { | ||||
|             doc.push_str(&format!("{}\n\n", docs)); | ||||
|         } | ||||
|  | ||||
|         doc.push_str(&format!("**Visibility:** `{}`\n\n", struct_info.visibility)); | ||||
|  | ||||
|         if !struct_info.fields.is_empty() { | ||||
|             doc.push_str("**Fields:**\n"); | ||||
|             for field in &struct_info.fields { | ||||
|                 doc.push_str(&format!("- `{}`: `{}` ({})\n", field.name, field.field_type, field.visibility)); | ||||
|                 if let Some(field_docs) = &field.docs { | ||||
|                     doc.push_str(&format!("  - {}\n", field_docs)); | ||||
|                 } | ||||
|             } | ||||
|         } | ||||
|  | ||||
|         doc.push_str("\n---\n\n"); | ||||
|         doc | ||||
|     } | ||||
|  | ||||
|     async fn enhance_changelog_with_ai(&self, content: &str, _commits: &[GitCommit]) -> Result<String> { | ||||
|         // TODO: Implement AI-enhanced changelog generation | ||||
|         Ok(content.to_string()) | ||||
|     } | ||||
|  | ||||
|     fn get_git_commits(&self, _from: Option<String>, _to: Option<String>) -> Result<Vec<GitCommit>> { | ||||
|         // TODO: Implement git history parsing | ||||
|         Ok(vec![]) | ||||
|     } | ||||
| } | ||||
|  | ||||
| #[derive(Debug)] | ||||
| pub struct GitCommit { | ||||
|     pub hash: String, | ||||
|     pub message: String, | ||||
|     pub author: String, | ||||
|     pub date: String, | ||||
| } | ||||
|  | ||||
| impl DocTemplates { | ||||
|     fn default() -> Self { | ||||
|         Self { | ||||
|             readme_template: r#"# {{name}} | ||||
|  | ||||
| {{description}} | ||||
|  | ||||
| ## Overview | ||||
|  | ||||
| This project contains {{module_count}} modules with {{total_lines}} lines of code. | ||||
|  | ||||
| ## Dependencies | ||||
|  | ||||
| {{dependencies}} | ||||
|  | ||||
| ## License | ||||
|  | ||||
| {{license}} | ||||
| "#.to_string(), | ||||
|             api_template: "# API Documentation\n\n{{content}}".to_string(), | ||||
|             structure_template: "# Project Structure\n\n{{content}}".to_string(), | ||||
|             changelog_template: "# Changelog\n\n{{content}}".to_string(), | ||||
|         } | ||||
|     } | ||||
| } | ||||
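To make the rendering concrete, a sketch of exercising `format_function_doc` from a unit test inside the crate. Field values are invented for illustration; the `FunctionInfo` and `Parameter` shapes match the analyzer structs built above.

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use crate::analyzer::{FunctionInfo, Parameter};

    #[test]
    fn renders_function_doc() {
        let generator = DocGenerator::new(PathBuf::from("."), false);
        let func = FunctionInfo {
            name: "build".to_string(),
            visibility: "pub".to_string(),
            is_async: true,
            parameters: vec![Parameter {
                name: "path".to_string(),
                param_type: "PathBuf".to_string(),
                is_mutable: false,
            }],
            return_type: Some("Result<()>".to_string()),
            docs: Some("Builds the site.".to_string()),
            line_number: 1,
        };
        let doc = generator.format_function_doc(&func);
        assert!(doc.contains("### `build`"));
        assert!(doc.contains("**Async:** Yes"));
    }
}
```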
| @@ -2,10 +2,13 @@ use anyhow::Result; | ||||
| use clap::{Parser, Subcommand}; | ||||
| use std::path::PathBuf; | ||||
|  | ||||
| mod analyzer; | ||||
| mod commands; | ||||
| mod doc_generator; | ||||
| mod generator; | ||||
| mod markdown; | ||||
| mod template; | ||||
| mod translator; | ||||
| mod config; | ||||
| mod ai; | ||||
| mod atproto; | ||||
| @@ -59,6 +62,8 @@ enum Commands { | ||||
|         #[arg(default_value = ".")] | ||||
|         path: PathBuf, | ||||
|     }, | ||||
|     /// Generate documentation from code | ||||
|     Doc(commands::doc::DocCommand), | ||||
| } | ||||
|  | ||||
| #[tokio::main] | ||||
| @@ -86,6 +91,9 @@ async fn main() -> Result<()> { | ||||
|             let server = McpServer::new(path); | ||||
|             server.serve(port).await?; | ||||
|         } | ||||
|         Commands::Doc(doc_cmd) => { | ||||
|             doc_cmd.execute(std::env::current_dir()?).await?; | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     Ok(()) | ||||
|   | ||||
| @@ -113,6 +113,12 @@ async fn call_tool( | ||||
|                 .ok_or(StatusCode::BAD_REQUEST)?; | ||||
|             state.blog_tools.get_post_content(slug).await | ||||
|         } | ||||
|         "translate_document" => { | ||||
|             state.blog_tools.translate_document(arguments).await | ||||
|         } | ||||
|         "generate_documentation" => { | ||||
|             state.blog_tools.generate_documentation(arguments).await | ||||
|         } | ||||
|         _ => { | ||||
|             return Ok(Json(McpResponse { | ||||
|                 jsonrpc: "2.0".to_string(), | ||||
|   | ||||
							
								
								
									
src/mcp/tools.rs (205 lines)
							| @@ -217,6 +217,141 @@ impl BlogTools { | ||||
|         }) | ||||
|     } | ||||
|  | ||||
|     pub async fn translate_document(&self, args: Value) -> Result<ToolResult> { | ||||
|         use crate::commands::doc::DocCommand; | ||||
|         use crate::commands::doc::DocAction; | ||||
|  | ||||
|         let input_file = args.get("input_file") | ||||
|             .and_then(|v| v.as_str()) | ||||
|             .ok_or_else(|| anyhow::anyhow!("input_file is required"))?; | ||||
|          | ||||
|         let target_lang = args.get("target_lang") | ||||
|             .and_then(|v| v.as_str()) | ||||
|             .ok_or_else(|| anyhow::anyhow!("target_lang is required"))?; | ||||
|          | ||||
|         let source_lang = args.get("source_lang").and_then(|v| v.as_str()).map(|s| s.to_string()); | ||||
|         let output_file = args.get("output_file").and_then(|v| v.as_str()).map(|s| PathBuf::from(s)); | ||||
|         let model = args.get("model").and_then(|v| v.as_str()).unwrap_or("qwen2.5:latest"); | ||||
|         let ollama_endpoint = args.get("ollama_endpoint").and_then(|v| v.as_str()).unwrap_or("http://localhost:11434"); | ||||
|  | ||||
|         let doc_cmd = DocCommand { | ||||
|             action: DocAction::Translate { | ||||
|                 input: PathBuf::from(input_file), | ||||
|                 target_lang: target_lang.to_string(), | ||||
|                 source_lang: source_lang.clone(), | ||||
|                 output: output_file, | ||||
|                 model: model.to_string(), | ||||
|                 ollama_endpoint: ollama_endpoint.to_string(), | ||||
|             } | ||||
|         }; | ||||
|  | ||||
|         match doc_cmd.execute(self.base_path.clone()).await { | ||||
|             Ok(_) => { | ||||
|                 let output_path = if let Some(output) = args.get("output_file").and_then(|v| v.as_str()) { | ||||
|                     output.to_string() | ||||
|                 } else { | ||||
|                     let input_path = PathBuf::from(input_file); | ||||
|                     let stem = input_path.file_stem().unwrap().to_string_lossy(); | ||||
|                     let ext = input_path.extension().unwrap_or_default().to_string_lossy(); | ||||
|                     format!("{}.{}.{}", stem, target_lang, ext) | ||||
|                 }; | ||||
|  | ||||
|                 Ok(ToolResult { | ||||
|                     content: vec![Content { | ||||
|                         content_type: "text".to_string(), | ||||
|                         text: format!("Document translated successfully from {} to {}. Output: {}",  | ||||
|                                     source_lang.unwrap_or_else(|| "auto-detected".to_string()),  | ||||
|                                     target_lang, output_path), | ||||
|                     }], | ||||
|                     is_error: None, | ||||
|                 }) | ||||
|             } | ||||
|             Err(e) => Ok(ToolResult { | ||||
|                 content: vec![Content { | ||||
|                     content_type: "text".to_string(), | ||||
|                     text: format!("Translation failed: {}", e), | ||||
|                 }], | ||||
|                 is_error: Some(true), | ||||
|             }) | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     pub async fn generate_documentation(&self, args: Value) -> Result<ToolResult> { | ||||
|         use crate::commands::doc::DocCommand; | ||||
|         use crate::commands::doc::DocAction; | ||||
|  | ||||
|         let doc_type = args.get("doc_type") | ||||
|             .and_then(|v| v.as_str()) | ||||
|             .ok_or_else(|| anyhow::anyhow!("doc_type is required"))?; | ||||
|  | ||||
|         let source_path = args.get("source_path").and_then(|v| v.as_str()).unwrap_or("."); | ||||
|         let output_path = args.get("output_path").and_then(|v| v.as_str()); | ||||
|         let with_ai = args.get("with_ai").and_then(|v| v.as_bool()).unwrap_or(true); | ||||
|         let include_deps = args.get("include_deps").and_then(|v| v.as_bool()).unwrap_or(false); | ||||
|         let format_type = args.get("format_type").and_then(|v| v.as_str()).unwrap_or("markdown"); | ||||
|  | ||||
|         let action = match doc_type { | ||||
|             "readme" => DocAction::Readme { | ||||
|                 source: PathBuf::from(source_path), | ||||
|                 output: PathBuf::from(output_path.unwrap_or("README.md")), | ||||
|                 with_ai, | ||||
|             }, | ||||
|             "api" => DocAction::Api { | ||||
|                 source: PathBuf::from(source_path), | ||||
|                 output: PathBuf::from(output_path.unwrap_or("./docs")), | ||||
|                 format: format_type.to_string(), | ||||
|             }, | ||||
|             "structure" => DocAction::Structure { | ||||
|                 source: PathBuf::from(source_path), | ||||
|                 output: PathBuf::from(output_path.unwrap_or("docs/structure.md")), | ||||
|                 include_deps, | ||||
|             }, | ||||
|             "changelog" => DocAction::Changelog { | ||||
|                 from: None, | ||||
|                 to: None, | ||||
|                 output: PathBuf::from(output_path.unwrap_or("CHANGELOG.md")), | ||||
|                 explain_changes: with_ai, | ||||
|             }, | ||||
|             _ => return Ok(ToolResult { | ||||
|                 content: vec![Content { | ||||
|                     content_type: "text".to_string(), | ||||
|                     text: format!("Unsupported doc_type: {}. Supported types: readme, api, structure, changelog", doc_type), | ||||
|                 }], | ||||
|                 is_error: Some(true), | ||||
|             }) | ||||
|         }; | ||||
|  | ||||
|         let doc_cmd = DocCommand { action }; | ||||
|  | ||||
|         match doc_cmd.execute(self.base_path.clone()).await { | ||||
|             Ok(_) => { | ||||
|                 let output_path = match doc_type { | ||||
|                     "readme" => output_path.unwrap_or("README.md"), | ||||
|                     "api" => output_path.unwrap_or("./docs"), | ||||
|                     "structure" => output_path.unwrap_or("docs/structure.md"), | ||||
|                     "changelog" => output_path.unwrap_or("CHANGELOG.md"), | ||||
|                     _ => "unknown" | ||||
|                 }; | ||||
|  | ||||
|                 Ok(ToolResult { | ||||
|                     content: vec![Content { | ||||
|                         content_type: "text".to_string(), | ||||
|                         text: format!("{} documentation generated successfully. Output: {}",  | ||||
|                                     doc_type.to_uppercase(), output_path), | ||||
|                     }], | ||||
|                     is_error: None, | ||||
|                 }) | ||||
|             } | ||||
|             Err(e) => Ok(ToolResult { | ||||
|                 content: vec![Content { | ||||
|                     content_type: "text".to_string(), | ||||
|                     text: format!("Documentation generation failed: {}", e), | ||||
|                 }], | ||||
|                 is_error: Some(true), | ||||
|             }) | ||||
|         } | ||||
|     } | ||||
|  | ||||
|     pub fn get_tools() -> Vec<Tool> { | ||||
|         vec![ | ||||
|             Tool { | ||||
| @@ -294,6 +429,76 @@ impl BlogTools { | ||||
|                     "required": ["slug"] | ||||
|                 }), | ||||
|             }, | ||||
|             Tool { | ||||
|                 name: "translate_document".to_string(), | ||||
|                 description: "Translate markdown documents using Ollama AI while preserving structure".to_string(), | ||||
|                 input_schema: json!({ | ||||
|                     "type": "object", | ||||
|                     "properties": { | ||||
|                         "input_file": { | ||||
|                             "type": "string", | ||||
|                             "description": "Path to the input markdown file" | ||||
|                         }, | ||||
|                         "target_lang": { | ||||
|                             "type": "string", | ||||
|                             "description": "Target language code (en, ja, zh, ko, es)" | ||||
|                         }, | ||||
|                         "source_lang": { | ||||
|                             "type": "string", | ||||
|                             "description": "Source language code (auto-detect if not specified)" | ||||
|                         }, | ||||
|                         "output_file": { | ||||
|                             "type": "string", | ||||
|                             "description": "Output file path (auto-generated if not specified)" | ||||
|                         }, | ||||
|                         "model": { | ||||
|                             "type": "string", | ||||
|                             "description": "Ollama model to use (default: qwen2.5:latest)" | ||||
|                         }, | ||||
|                         "ollama_endpoint": { | ||||
|                             "type": "string", | ||||
|                             "description": "Ollama API endpoint (default: http://localhost:11434)" | ||||
|                         } | ||||
|                     }, | ||||
|                     "required": ["input_file", "target_lang"] | ||||
|                 }), | ||||
|             }, | ||||
|             Tool { | ||||
|                 name: "generate_documentation".to_string(), | ||||
|                 description: "Generate various types of documentation from code analysis".to_string(), | ||||
|                 input_schema: json!({ | ||||
|                     "type": "object", | ||||
|                     "properties": { | ||||
|                         "doc_type": { | ||||
|                             "type": "string", | ||||
|                             "enum": ["readme", "api", "structure", "changelog"], | ||||
|                             "description": "Type of documentation to generate" | ||||
|                         }, | ||||
|                         "source_path": { | ||||
|                             "type": "string", | ||||
|                             "description": "Source directory to analyze (default: current directory)" | ||||
|                         }, | ||||
|                         "output_path": { | ||||
|                             "type": "string", | ||||
|                             "description": "Output file or directory path" | ||||
|                         }, | ||||
|                         "with_ai": { | ||||
|                             "type": "boolean", | ||||
|                             "description": "Include AI-generated insights (default: true)" | ||||
|                         }, | ||||
|                         "include_deps": { | ||||
|                             "type": "boolean", | ||||
|                             "description": "Include dependency analysis (default: false)" | ||||
|                         }, | ||||
|                         "format_type": { | ||||
|                             "type": "string", | ||||
|                             "enum": ["markdown", "html", "json"], | ||||
|                             "description": "Output format (default: markdown)" | ||||
|                         } | ||||
|                     }, | ||||
|                     "required": ["doc_type"] | ||||
|                 }), | ||||
|             }, | ||||
|         ] | ||||
|     } | ||||
| } | ||||
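For reference, the arguments object an MCP client would send for the new `translate_document` tool, matching the schema above. Only `input_file` and `target_lang` are required; the file path and model here are illustrative.

```rust
use serde_json::{json, Value};

// Illustrative arguments for a translate_document tool call.
fn example_translate_args() -> Value {
    json!({
        "input_file": "content/posts/hello.md",
        "target_lang": "en",
        "model": "qwen2.5:latest"
    })
}
```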
							
								
								
									
src/translator/markdown_parser.rs (new file, 253 lines)
							| @@ -0,0 +1,253 @@ | ||||
| use anyhow::Result; | ||||
| use regex::Regex; | ||||
| use super::MarkdownSection; | ||||
|  | ||||
| pub struct MarkdownParser { | ||||
|     code_block_regex: Regex, | ||||
|     header_regex: Regex, | ||||
|     link_regex: Regex, | ||||
|     image_regex: Regex, | ||||
|     table_regex: Regex, | ||||
|     list_regex: Regex, | ||||
|     quote_regex: Regex, | ||||
| } | ||||
|  | ||||
| impl MarkdownParser { | ||||
|     pub fn new() -> Self { | ||||
|         Self { | ||||
|             code_block_regex: Regex::new(r"```([a-zA-Z0-9]*)\n([\s\S]*?)\n```").unwrap(), | ||||
|             header_regex: Regex::new(r"^(#{1,6})\s+(.+)$").unwrap(), | ||||
|             link_regex: Regex::new(r"\[([^\]]+)\]\(([^)]+)\)").unwrap(), | ||||
|             image_regex: Regex::new(r"!\[([^\]]*)\]\(([^)]+)\)").unwrap(), | ||||
|             table_regex: Regex::new(r"^\|.*\|$").unwrap(), | ||||
|             list_regex: Regex::new(r"^[\s]*[-*+]\s+(.+)$").unwrap(), | ||||
|             quote_regex: Regex::new(r"^>\s+(.+)$").unwrap(), | ||||
|         } | ||||
|     } | ||||
|      | ||||
|     pub fn parse_markdown(&self, content: &str) -> Result<Vec<MarkdownSection>> { | ||||
|         let mut sections = Vec::new(); | ||||
|         let mut current_text = String::new(); | ||||
|         let lines: Vec<&str> = content.lines().collect(); | ||||
|         let mut i = 0; | ||||
|          | ||||
|         while i < lines.len() { | ||||
|             let line = lines[i]; | ||||
|              | ||||
|             // Check for code blocks | ||||
|             if line.starts_with("```") { | ||||
|                 // Save accumulated text | ||||
|                 if !current_text.trim().is_empty() { | ||||
|                     sections.extend(self.parse_text_sections(&current_text)?); | ||||
|                     current_text.clear(); | ||||
|                 } | ||||
|                  | ||||
|                 // Parse code block | ||||
|                 let (code_section, lines_consumed) = self.parse_code_block(&lines[i..])?; | ||||
|                 sections.push(code_section); | ||||
|                 i += lines_consumed; | ||||
|                 continue; | ||||
|             } | ||||
|              | ||||
|             // Check for headers | ||||
|             if let Some(caps) = self.header_regex.captures(line) { | ||||
|                 // Save accumulated text | ||||
|                 if !current_text.trim().is_empty() { | ||||
|                     sections.extend(self.parse_text_sections(&current_text)?); | ||||
|                     current_text.clear(); | ||||
|                 } | ||||
|                  | ||||
|                 let level = caps.get(1).unwrap().as_str().len() as u8; | ||||
|                 let header_text = caps.get(2).unwrap().as_str().to_string(); | ||||
|                 sections.push(MarkdownSection::Header(header_text, level)); | ||||
|                 i += 1; | ||||
|                 continue; | ||||
|             } | ||||
|              | ||||
|             // Check for tables | ||||
|             if self.table_regex.is_match(line) { | ||||
|                 // Save accumulated text | ||||
|                 if !current_text.trim().is_empty() { | ||||
|                     sections.extend(self.parse_text_sections(&current_text)?); | ||||
|                     current_text.clear(); | ||||
|                 } | ||||
|                  | ||||
|                 let (table_section, lines_consumed) = self.parse_table(&lines[i..])?; | ||||
|                 sections.push(table_section); | ||||
|                 i += lines_consumed; | ||||
|                 continue; | ||||
|             } | ||||
|              | ||||
|             // Check for quotes | ||||
|             if let Some(caps) = self.quote_regex.captures(line) { | ||||
|                 // Save accumulated text | ||||
|                 if !current_text.trim().is_empty() { | ||||
|                     sections.extend(self.parse_text_sections(&current_text)?); | ||||
|                     current_text.clear(); | ||||
|                 } | ||||
|                  | ||||
|                 let quote_text = caps.get(1).unwrap().as_str().to_string(); | ||||
|                 sections.push(MarkdownSection::Quote(quote_text)); | ||||
|                 i += 1; | ||||
|                 continue; | ||||
|             } | ||||
|              | ||||
|             // Check for lists | ||||
|             if let Some(caps) = self.list_regex.captures(line) { | ||||
|                 // Save accumulated text | ||||
|                 if !current_text.trim().is_empty() { | ||||
|                     sections.extend(self.parse_text_sections(&current_text)?); | ||||
|                     current_text.clear(); | ||||
|                 } | ||||
|                  | ||||
|                 let list_text = caps.get(1).unwrap().as_str().to_string(); | ||||
|                 sections.push(MarkdownSection::List(list_text)); | ||||
|                 i += 1; | ||||
|                 continue; | ||||
|             } | ||||
|              | ||||
|             // Accumulate regular text | ||||
|             current_text.push_str(line); | ||||
|             current_text.push('\n'); | ||||
|             i += 1; | ||||
|         } | ||||
|          | ||||
|         // Process remaining text | ||||
|         if !current_text.trim().is_empty() { | ||||
|             sections.extend(self.parse_text_sections(&current_text)?); | ||||
|         } | ||||
|          | ||||
|         Ok(sections) | ||||
|     } | ||||
|      | ||||
|     fn parse_code_block(&self, lines: &[&str]) -> Result<(MarkdownSection, usize)> { | ||||
|         if lines.is_empty() || !lines[0].starts_with("```") { | ||||
|             anyhow::bail!("Not a code block"); | ||||
|         } | ||||
|          | ||||
|         let first_line = lines[0]; | ||||
|         let lang = first_line[3..].trim(); | ||||
|         let language = if lang.is_empty() { | ||||
|             None | ||||
|         } else { | ||||
|             Some(lang.to_string()) | ||||
|         }; | ||||
|          | ||||
|         let mut content = String::new(); | ||||
|         let mut end_index = lines.len(); // consume the rest if the block is unterminated | ||||
|          | ||||
|         for (i, &line) in lines[1..].iter().enumerate() { | ||||
|             if line.starts_with("```") { | ||||
|                 end_index = i + 2; // +1 for slice offset, +1 for closing line | ||||
|                 break; | ||||
|             } | ||||
|             if i > 0 { | ||||
|                 content.push('\n'); | ||||
|             } | ||||
|             content.push_str(line); | ||||
|         } | ||||
|          | ||||
|         Ok((MarkdownSection::Code(content, language), end_index)) | ||||
|     } | ||||
|      | ||||
|     fn parse_table(&self, lines: &[&str]) -> Result<(MarkdownSection, usize)> { | ||||
|         let mut table_content = String::new(); | ||||
|         let mut line_count = 0; | ||||
|          | ||||
|         for &line in lines { | ||||
|             if self.table_regex.is_match(line) { | ||||
|                 if line_count > 0 { | ||||
|                     table_content.push('\n'); | ||||
|                 } | ||||
|                 table_content.push_str(line); | ||||
|                 line_count += 1; | ||||
|             } else { | ||||
|                 break; | ||||
|             } | ||||
|         } | ||||
|          | ||||
|         Ok((MarkdownSection::Table(table_content), line_count)) | ||||
|     } | ||||
|      | ||||
|     fn parse_text_sections(&self, text: &str) -> Result<Vec<MarkdownSection>> { | ||||
|         let mut sections = Vec::new(); | ||||
|         let mut remaining = text; | ||||
|          | ||||
|         // Look for images first (they should be preserved) | ||||
|         while let Some(caps) = self.image_regex.captures(remaining) { | ||||
|             let full_match = caps.get(0).unwrap(); | ||||
|             let before = &remaining[..full_match.start()]; | ||||
|             let alt = caps.get(1).unwrap().as_str().to_string(); | ||||
|             let url = caps.get(2).unwrap().as_str().to_string(); | ||||
|              | ||||
|             if !before.trim().is_empty() { | ||||
|                 sections.push(MarkdownSection::Text(before.to_string())); | ||||
|             } | ||||
|              | ||||
|             sections.push(MarkdownSection::Image(alt, url)); | ||||
|             remaining = &remaining[full_match.end()..]; | ||||
|         } | ||||
|          | ||||
|         // Look for links | ||||
|         let mut current_text = remaining.to_string(); | ||||
|         while let Some(caps) = self.link_regex.captures(&current_text) { | ||||
|             let full_match = caps.get(0).unwrap(); | ||||
|             let before = &current_text[..full_match.start()]; | ||||
|             let link_text = caps.get(1).unwrap().as_str().to_string(); | ||||
|             let url = caps.get(2).unwrap().as_str().to_string(); | ||||
|              | ||||
|             if !before.trim().is_empty() { | ||||
|                 sections.push(MarkdownSection::Text(before.to_string())); | ||||
|             } | ||||
|              | ||||
|             sections.push(MarkdownSection::Link(link_text, url)); | ||||
|             current_text = current_text[full_match.end()..].to_string(); | ||||
|         } | ||||
|          | ||||
|         // Add remaining text | ||||
|         if !current_text.trim().is_empty() { | ||||
|             sections.push(MarkdownSection::Text(current_text)); | ||||
|         } | ||||
|          | ||||
|         Ok(sections) | ||||
|     } | ||||
|      | ||||
|     pub fn rebuild_markdown(&self, sections: Vec<MarkdownSection>) -> String { | ||||
|         let mut result = String::new(); | ||||
|          | ||||
|         for section in sections { | ||||
|             match section { | ||||
|                 MarkdownSection::Text(text) => { | ||||
|                     result.push_str(&text); | ||||
|                 } | ||||
|                 MarkdownSection::Code(content, Some(lang)) => { | ||||
|                     result.push_str(&format!("```{}\n{}\n```\n", lang, content)); | ||||
|                 } | ||||
|                 MarkdownSection::Code(content, None) => { | ||||
|                     result.push_str(&format!("```\n{}\n```\n", content)); | ||||
|                 } | ||||
|                 MarkdownSection::Header(text, level) => { | ||||
|                     let hashes = "#".repeat(level as usize); | ||||
|                     result.push_str(&format!("{} {}\n", hashes, text)); | ||||
|                 } | ||||
|                 MarkdownSection::Link(text, url) => { | ||||
|                     result.push_str(&format!("[{}]({})", text, url)); | ||||
|                 } | ||||
|                 MarkdownSection::Image(alt, url) => { | ||||
|                     result.push_str(&format!("![{}]({})", alt, url)); | ||||
|                 } | ||||
|                 MarkdownSection::Table(content) => { | ||||
|                     result.push_str(&content); | ||||
|                     result.push('\n'); | ||||
|                 } | ||||
|                 MarkdownSection::List(text) => { | ||||
|                     result.push_str(&format!("- {}\n", text)); | ||||
|                 } | ||||
|                 MarkdownSection::Quote(text) => { | ||||
|                     result.push_str(&format!("> {}\n", text)); | ||||
|                 } | ||||
|             } | ||||
|         } | ||||
|          | ||||
|         result | ||||
|     } | ||||
| } | ||||
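A short sketch of the intended parse/rebuild roundtrip; both signatures are as defined above. Note the rebuild is normalizing rather than byte-exact: list items are re-emitted with `-` markers and inline links are concatenated without their surrounding whitespace.

```rust
use anyhow::Result;

// Sketch: split a document into sections, then reassemble it.
// A translator would rewrite the Text/Header/List/Quote sections in between.
fn roundtrip(content: &str) -> Result<String> {
    let parser = MarkdownParser::new();
    let sections = parser.parse_markdown(content)?;
    Ok(parser.rebuild_markdown(sections))
}
```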
							
								
								
									
src/translator/mod.rs (new file, 123 lines)
							| @@ -0,0 +1,123 @@ | ||||
| pub mod ollama_translator; | ||||
| pub mod markdown_parser; | ||||
|  | ||||
| use anyhow::Result; | ||||
| use serde::{Deserialize, Serialize}; | ||||
| use std::collections::HashMap; | ||||
|  | ||||
| #[derive(Debug, Clone, Serialize, Deserialize)] | ||||
| pub struct TranslationConfig { | ||||
|     pub source_lang: String, | ||||
|     pub target_lang: String, | ||||
|     pub ollama_endpoint: String, | ||||
|     pub model: String, | ||||
|     pub preserve_code: bool, | ||||
|     pub preserve_links: bool, | ||||
| } | ||||
|  | ||||
| impl Default for TranslationConfig { | ||||
|     fn default() -> Self { | ||||
|         Self { | ||||
|             source_lang: "ja".to_string(), | ||||
|             target_lang: "en".to_string(), | ||||
|             ollama_endpoint: "http://localhost:11434".to_string(), | ||||
|             model: "qwen2.5:latest".to_string(), | ||||
|             preserve_code: true, | ||||
|             preserve_links: true, | ||||
|         } | ||||
|     } | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone)] | ||||
| pub enum MarkdownSection { | ||||
|     Text(String), | ||||
|     Code(String, Option<String>), // content, language | ||||
|     Header(String, u8), // content, level (1-6) | ||||
|     Link(String, String), // text, url | ||||
|     Image(String, String), // alt, url | ||||
|     Table(String), | ||||
|     List(String), | ||||
|     Quote(String), | ||||
| } | ||||
|  | ||||
| pub trait Translator { | ||||
|     async fn translate(&self, content: &str, config: &TranslationConfig) -> Result<String>; | ||||
|     async fn translate_markdown(&self, content: &str, config: &TranslationConfig) -> Result<String>; | ||||
|     async fn translate_sections(&self, sections: Vec<MarkdownSection>, config: &TranslationConfig) -> Result<Vec<MarkdownSection>>; | ||||
| } | ||||
|  | ||||
| pub struct TranslationResult { | ||||
|     pub original: String, | ||||
|     pub translated: String, | ||||
|     pub source_lang: String, | ||||
|     pub target_lang: String, | ||||
|     pub model: String, | ||||
|     pub metrics: TranslationMetrics, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone, Default)] | ||||
| pub struct TranslationMetrics { | ||||
|     pub character_count: usize, | ||||
|     pub word_count: usize, | ||||
|     pub translation_time_ms: u64, | ||||
|     pub sections_translated: usize, | ||||
|     pub sections_preserved: usize, | ||||
| } | ||||
|  | ||||
| pub struct LanguageMapping { | ||||
|     pub mappings: HashMap<String, LanguageInfo>, | ||||
| } | ||||
|  | ||||
| #[derive(Debug, Clone)] | ||||
| pub struct LanguageInfo { | ||||
|     pub name: String, | ||||
|     pub code: String, | ||||
|     pub ollama_prompt: String, | ||||
| } | ||||
|  | ||||
| impl LanguageMapping { | ||||
|     pub fn new() -> Self { | ||||
|         let mut mappings = HashMap::new(); | ||||
|          | ||||
|         // Mappings for the primary supported languages | ||||
|         mappings.insert("ja".to_string(), LanguageInfo { | ||||
|             name: "Japanese".to_string(), | ||||
|             code: "ja".to_string(), | ||||
|             ollama_prompt: "You are a professional Japanese translator specializing in technical documentation.".to_string(), | ||||
|         }); | ||||
|          | ||||
|         mappings.insert("en".to_string(), LanguageInfo { | ||||
|             name: "English".to_string(), | ||||
|             code: "en".to_string(), | ||||
|             ollama_prompt: "You are a professional English translator specializing in technical documentation.".to_string(), | ||||
|         }); | ||||
|          | ||||
|         mappings.insert("zh".to_string(), LanguageInfo { | ||||
|             name: "Chinese".to_string(), | ||||
|             code: "zh".to_string(), | ||||
|             ollama_prompt: "You are a professional Chinese translator specializing in technical documentation.".to_string(), | ||||
|         }); | ||||
|          | ||||
|         mappings.insert("ko".to_string(), LanguageInfo { | ||||
|             name: "Korean".to_string(), | ||||
|             code: "ko".to_string(), | ||||
|             ollama_prompt: "You are a professional Korean translator specializing in technical documentation.".to_string(), | ||||
|         }); | ||||
|          | ||||
|         mappings.insert("es".to_string(), LanguageInfo { | ||||
|             name: "Spanish".to_string(), | ||||
|             code: "es".to_string(), | ||||
|             ollama_prompt: "You are a professional Spanish translator specializing in technical documentation.".to_string(), | ||||
|         }); | ||||
|          | ||||
|         Self { mappings } | ||||
|     } | ||||
|      | ||||
|     pub fn get_language_info(&self, code: &str) -> Option<&LanguageInfo> { | ||||
|         self.mappings.get(code) | ||||
|     } | ||||
|      | ||||
|     pub fn get_supported_languages(&self) -> Vec<String> { | ||||
|         self.mappings.keys().cloned().collect() | ||||
|     } | ||||
| } | ||||
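Putting the pieces together, a hedged sketch of a one-off translation using the defaults above and the `OllamaTranslator` from the next file. It assumes a local Ollama instance is running and that the named model has been pulled; the imports match those used in `src/commands/doc.rs`.

```rust
use anyhow::Result;
use crate::translator::{TranslationConfig, Translator};
use crate::translator::ollama_translator::OllamaTranslator;

// Sketch: translate Japanese markdown to Spanish with a non-default model.
async fn translate_to_spanish(content: &str) -> Result<String> {
    let config = TranslationConfig {
        target_lang: "es".to_string(),
        model: "llama3.1:latest".to_string(), // assumed to be pulled locally
        ..TranslationConfig::default()
    };
    let translator = OllamaTranslator::new();
    translator.translate_markdown(content, &config).await
}
```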
							
								
								
									
src/translator/ollama_translator.rs (new file, 214 lines)
							| @@ -0,0 +1,214 @@ | ||||
| use anyhow::Result; | ||||
| use reqwest::Client; | ||||
| use serde_json::json; | ||||
| use std::time::Instant; | ||||
| use super::*; | ||||
| use crate::translator::markdown_parser::MarkdownParser; | ||||
|  | ||||
| pub struct OllamaTranslator { | ||||
|     client: Client, | ||||
|     language_mapping: LanguageMapping, | ||||
|     parser: MarkdownParser, | ||||
| } | ||||
|  | ||||
| impl OllamaTranslator { | ||||
|     pub fn new() -> Self { | ||||
|         Self { | ||||
|             client: Client::new(), | ||||
|             language_mapping: LanguageMapping::new(), | ||||
|             parser: MarkdownParser::new(), | ||||
|         } | ||||
|     } | ||||
|      | ||||
|     async fn call_ollama(&self, prompt: &str, config: &TranslationConfig) -> Result<String> { | ||||
|         let request_body = json!({ | ||||
|             "model": config.model, | ||||
|             "prompt": prompt, | ||||
|             "stream": false, | ||||
|             "options": { | ||||
|                 "temperature": 0.3, | ||||
|                 "top_p": 0.9, | ||||
|                 "top_k": 40 | ||||
|             } | ||||
|         }); | ||||
|          | ||||
|         let url = format!("{}/api/generate", config.ollama_endpoint); | ||||
|          | ||||
|         let response = self.client | ||||
|             .post(&url) | ||||
|             .json(&request_body) | ||||
|             .send() | ||||
|             .await?; | ||||
|          | ||||
|         if !response.status().is_success() { | ||||
|             anyhow::bail!("Ollama API request failed: {}", response.status()); | ||||
|         } | ||||
|          | ||||
|         let response_text = response.text().await?; | ||||
|         let response_json: serde_json::Value = serde_json::from_str(&response_text)?; | ||||
|          | ||||
|         let translated = response_json | ||||
|             .get("response") | ||||
|             .and_then(|v| v.as_str()) | ||||
|             .ok_or_else(|| anyhow::anyhow!("Invalid response from Ollama"))?; | ||||
|          | ||||
|         Ok(translated.to_string()) | ||||
|     } | ||||
|      | ||||
|     fn build_translation_prompt(&self, text: &str, config: &TranslationConfig) -> Result<String> { | ||||
|         let source_info = self.language_mapping.get_language_info(&config.source_lang) | ||||
|             .ok_or_else(|| anyhow::anyhow!("Unsupported source language: {}", config.source_lang))?; | ||||
|          | ||||
|         let target_info = self.language_mapping.get_language_info(&config.target_lang) | ||||
|             .ok_or_else(|| anyhow::anyhow!("Unsupported target language: {}", config.target_lang))?; | ||||
|          | ||||
|         let prompt = format!( | ||||
|             r#"{system_prompt} | ||||
|  | ||||
| Translate the following text from {source_lang} to {target_lang}. | ||||
|  | ||||
| IMPORTANT RULES: | ||||
| 1. Preserve all Markdown formatting (headers, links, code blocks, etc.) | ||||
| 2. Do NOT translate content within code blocks (```) | ||||
| 3. Do NOT translate URLs or file paths | ||||
| 4. Preserve technical terms when appropriate | ||||
| 5. Maintain the original structure and formatting | ||||
| 6. Only output the translated text, no explanations | ||||
|  | ||||
| Original text ({source_code}): | ||||
| {text} | ||||
|  | ||||
| Translated text ({target_code}):"#, | ||||
|             system_prompt = target_info.ollama_prompt, | ||||
|             source_lang = source_info.name, | ||||
|             target_lang = target_info.name, | ||||
|             source_code = source_info.code, | ||||
|             target_code = target_info.code, | ||||
|             text = text | ||||
|         ); | ||||
|          | ||||
|         Ok(prompt) | ||||
|     } | ||||
|      | ||||
|     fn build_section_translation_prompt(&self, section: &MarkdownSection, config: &TranslationConfig) -> Result<String> { | ||||
|         let target_info = self.language_mapping.get_language_info(&config.target_lang) | ||||
|             .ok_or_else(|| anyhow::anyhow!("Unsupported target language: {}", config.target_lang))?; | ||||
|          | ||||
|         let (content, section_type) = match section { | ||||
|             MarkdownSection::Text(text) => (text.clone(), "text"), | ||||
|             MarkdownSection::Header(text, _) => (text.clone(), "header"), | ||||
|             MarkdownSection::Quote(text) => (text.clone(), "quote"), | ||||
|             MarkdownSection::List(text) => (text.clone(), "list"), | ||||
|             _ => return Ok(String::new()), // Skip translation for code, links, etc. | ||||
|         }; | ||||
|          | ||||
|         let prompt = format!( | ||||
|             r#"{system_prompt} | ||||
|  | ||||
| Translate this {section_type} from {source_lang} to {target_lang}. | ||||
|  | ||||
| RULES: | ||||
| - Only translate the text content | ||||
| - Preserve formatting symbols (*, #, >, etc.) | ||||
| - Keep technical terms when appropriate | ||||
| - Output only the translated text | ||||
|  | ||||
| Text to translate: | ||||
| {content} | ||||
|  | ||||
| Translation:"#, | ||||
|             system_prompt = target_info.ollama_prompt, | ||||
|             section_type = section_type, | ||||
|             source_lang = config.source_lang, | ||||
|             target_lang = config.target_lang, | ||||
|             content = content | ||||
|         ); | ||||
|          | ||||
|         Ok(prompt) | ||||
|     } | ||||
| } | ||||
|  | ||||
| impl Translator for OllamaTranslator { | ||||
|     async fn translate(&self, content: &str, config: &TranslationConfig) -> Result<String> { | ||||
|         let prompt = self.build_translation_prompt(content, config)?; | ||||
|         self.call_ollama(&prompt, config).await | ||||
|     } | ||||
|      | ||||
|     async fn translate_markdown(&self, content: &str, config: &TranslationConfig) -> Result<String> { | ||||
|         println!("🔄 Parsing markdown content..."); | ||||
|         let sections = self.parser.parse_markdown(content)?; | ||||
|          | ||||
|         println!("📝 Found {} sections to process", sections.len()); | ||||
|         let translated_sections = self.translate_sections(sections, config).await?; | ||||
|          | ||||
|         println!("✅ Rebuilding markdown from translated sections..."); | ||||
|         let result = self.parser.rebuild_markdown(translated_sections); | ||||
|          | ||||
|         Ok(result) | ||||
|     } | ||||
|      | ||||
|     async fn translate_sections(&self, sections: Vec<MarkdownSection>, config: &TranslationConfig) -> Result<Vec<MarkdownSection>> { | ||||
|         let mut translated_sections = Vec::new(); | ||||
|         let start_time = Instant::now(); | ||||
|          | ||||
|         for (index, section) in sections.into_iter().enumerate() { | ||||
|             println!("  🔤 Processing section {}", index + 1); | ||||
|              | ||||
|             let translated_section = match section { | ||||
|                 MarkdownSection::Code(content, lang) => { | ||||
|                     // Code blocks are always preserved for now; | ||||
|                     // config.preserve_code is reserved for future use | ||||
|                     if config.preserve_code { | ||||
|                         println!("    ⏭️  Preserving code block"); | ||||
|                     } | ||||
|                     MarkdownSection::Code(content, lang) | ||||
|                 } | ||||
|                 MarkdownSection::Link(text, url) => { | ||||
|                     if config.preserve_links { | ||||
|                         println!("    ⏭️  Preserving link"); | ||||
|                         MarkdownSection::Link(text, url) | ||||
|                     } else { | ||||
|                         // Translate the link text only; keep the URL as-is | ||||
|                         let prompt = self.build_section_translation_prompt(&MarkdownSection::Text(text.clone()), config)?; | ||||
|                         let translated_text = self.call_ollama(&prompt, config).await?; | ||||
|                         MarkdownSection::Link(translated_text.trim().to_string(), url) | ||||
|                     } | ||||
|                 } | ||||
|                 MarkdownSection::Image(alt, url) => { | ||||
|                     println!("    🖼️  Preserving image"); | ||||
|                     MarkdownSection::Image(alt, url) | ||||
|                 } | ||||
|                 MarkdownSection::Table(content) => { | ||||
|                     println!("    📊 Translating table content"); | ||||
|                     let prompt = self.build_section_translation_prompt(&MarkdownSection::Text(content.clone()), config)?; | ||||
|                     let translated_content = self.call_ollama(&prompt, config).await?; | ||||
|                     MarkdownSection::Table(translated_content.trim().to_string()) | ||||
|                 } | ||||
|                 section => { | ||||
|                     // Translate text-like sections (text, headers, quotes, lists) | ||||
|                     println!("    🔤 Translating text"); | ||||
|                     let prompt = self.build_section_translation_prompt(&section, config)?; | ||||
|                     let translated_text = self.call_ollama(&prompt, config).await?; | ||||
|                     match section { | ||||
|                         MarkdownSection::Text(_) => MarkdownSection::Text(translated_text.trim().to_string()), | ||||
|                         MarkdownSection::Header(_, level) => MarkdownSection::Header(translated_text.trim().to_string(), level), | ||||
|                         MarkdownSection::Quote(_) => MarkdownSection::Quote(translated_text.trim().to_string()), | ||||
|                         MarkdownSection::List(_) => MarkdownSection::List(translated_text.trim().to_string()), | ||||
|                         other => other, | ||||
|                     } | ||||
|                 } | ||||
|             }; | ||||
|              | ||||
|             translated_sections.push(translated_section); | ||||
|              | ||||
|             // Add small delay to avoid overwhelming Ollama | ||||
|             tokio::time::sleep(tokio::time::Duration::from_millis(100)).await; | ||||
|         } | ||||
|          | ||||
|         let elapsed = start_time.elapsed(); | ||||
|         println!("⏱️  Translation completed in {:.2}s", elapsed.as_secs_f64()); | ||||
|          | ||||
|         Ok(translated_sections) | ||||
|     } | ||||
| } | ||||
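A minimal driver for the new translator might look like the sketch below. It assumes a tokio runtime and infers the `TranslationConfig` fields (`model`, `ollama_endpoint`, `source_lang`, `target_lang`, `preserve_code`, `preserve_links`) from the calls above; the struct itself is defined elsewhere in the translator module, and the model name and file paths here are hypothetical.

```rust
use ailog::translator::{OllamaTranslator, TranslationConfig, Translator};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Hypothetical config; field set inferred from ollama_translator.rs.
    let config = TranslationConfig {
        model: "qwen2.5:7b".to_string(),                       // assumed model name
        ollama_endpoint: "http://localhost:11434".to_string(), // Ollama default port
        source_lang: "ja".to_string(),
        target_lang: "en".to_string(),
        preserve_code: true,
        preserve_links: true,
    };

    let translator = OllamaTranslator::new();
    let markdown = std::fs::read_to_string("content/posts/hello.md")?;
    let translated = translator.translate_markdown(&markdown, &config).await?;
    std::fs::write("content/posts/hello-en.md", translated)?;
    Ok(())
}
```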
templates/api.md (new file, 103 lines)
							| @@ -0,0 +1,103 @@ | ||||
| # API Documentation | ||||
|  | ||||
| ## Public Functions | ||||
|  | ||||
| {{#each api.public_functions}} | ||||
| ### `{{this.name}}` | ||||
|  | ||||
| {{#if this.docs}} | ||||
| {{this.docs}} | ||||
| {{/if}} | ||||
|  | ||||
| **Visibility:** `{{this.visibility}}` | ||||
| {{#if this.is_async}}**Async:** Yes{{/if}} | ||||
|  | ||||
| {{#if this.parameters}} | ||||
| **Parameters:** | ||||
| {{#each this.parameters}} | ||||
| - `{{this.name}}`: `{{this.param_type}}`{{#if this.is_mutable}} (mutable){{/if}} | ||||
| {{/each}} | ||||
| {{/if}} | ||||
|  | ||||
| {{#if this.return_type}} | ||||
| **Returns:** `{{this.return_type}}` | ||||
| {{/if}} | ||||
|  | ||||
| --- | ||||
|  | ||||
| {{/each}} | ||||
|  | ||||
| ## Public Structs | ||||
|  | ||||
| {{#each api.public_structs}} | ||||
| ### `{{this.name}}` | ||||
|  | ||||
| {{#if this.docs}} | ||||
| {{this.docs}} | ||||
| {{/if}} | ||||
|  | ||||
| **Visibility:** `{{this.visibility}}` | ||||
|  | ||||
| {{#if this.fields}} | ||||
| **Fields:** | ||||
| {{#each this.fields}} | ||||
| - `{{this.name}}`: `{{this.field_type}}` ({{this.visibility}}) | ||||
| {{#if this.docs}}  - {{this.docs}}{{/if}} | ||||
| {{/each}} | ||||
| {{/if}} | ||||
|  | ||||
| --- | ||||
|  | ||||
| {{/each}} | ||||
|  | ||||
| ## Public Enums | ||||
|  | ||||
| {{#each api.public_enums}} | ||||
| ### `{{this.name}}` | ||||
|  | ||||
| {{#if this.docs}} | ||||
| {{this.docs}} | ||||
| {{/if}} | ||||
|  | ||||
| **Visibility:** `{{this.visibility}}` | ||||
|  | ||||
| {{#if this.variants}} | ||||
| **Variants:** | ||||
| {{#each this.variants}} | ||||
| - `{{this.name}}` | ||||
| {{#if this.docs}}  - {{this.docs}}{{/if}} | ||||
| {{#if this.fields}} | ||||
|   **Fields:** | ||||
| {{#each this.fields}} | ||||
|   - `{{this.name}}`: `{{this.field_type}}` | ||||
| {{/each}} | ||||
| {{/if}} | ||||
| {{/each}} | ||||
| {{/if}} | ||||
|  | ||||
| --- | ||||
|  | ||||
| {{/each}} | ||||
|  | ||||
| ## Public Traits | ||||
|  | ||||
| {{#each api.public_traits}} | ||||
| ### `{{this.name}}` | ||||
|  | ||||
| {{#if this.docs}} | ||||
| {{this.docs}} | ||||
| {{/if}} | ||||
|  | ||||
| **Visibility:** `{{this.visibility}}` | ||||
|  | ||||
| {{#if this.methods}} | ||||
| **Methods:** | ||||
| {{#each this.methods}} | ||||
| - `{{this.name}}({{#each this.parameters}}{{this.name}}: {{this.param_type}}{{#unless @last}}, {{/unless}}{{/each}}){{#if this.return_type}} -> {{this.return_type}}{{/if}}` | ||||
| {{#if this.docs}}  - {{this.docs}}{{/if}} | ||||
| {{/each}} | ||||
| {{/if}} | ||||
|  | ||||
| --- | ||||
|  | ||||
| {{/each}} | ||||
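For reference, the context this template renders against would be shaped roughly as follows; the field names are inferred from the placeholders above, and the values are invented for illustration (`serde_json` assumed).

```rust
// Hypothetical context for templates/api.md.
fn main() {
    let context = serde_json::json!({
        "api": {
            "public_functions": [{
                "name": "translate_markdown",
                "docs": "Translate a markdown document via Ollama.",
                "visibility": "pub",
                "is_async": true,
                "parameters": [
                    { "name": "content", "param_type": "&str", "is_mutable": false }
                ],
                "return_type": "Result<String>"
            }],
            "public_structs": [],
            "public_enums": [],
            "public_traits": []
        }
    });
    println!("{}", context);
}
```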
templates/changelog.md (new file, 19 lines)
							| @@ -0,0 +1,19 @@ | ||||
| # Changelog | ||||
|  | ||||
| ## Recent Changes | ||||
|  | ||||
| {{#each commits}} | ||||
| ### {{this.date}} | ||||
|  | ||||
| **{{this.hash}}** by {{this.author}} | ||||
|  | ||||
| {{this.message}} | ||||
|  | ||||
| --- | ||||
|  | ||||
| {{/each}} | ||||
|  | ||||
| ## Summary | ||||
|  | ||||
| - **Total Commits:** {{commits.length}} | ||||
| - **Contributors:** {{#unique commits "author"}}{{this.author}}{{#unless @last}}, {{/unless}}{{/unique}} | ||||
templates/readme.md (new file, 76 lines)
							| @@ -0,0 +1,76 @@ | ||||
| # {{project.name}} | ||||
|  | ||||
| {{#if project.description}} | ||||
| {{project.description}} | ||||
| {{/if}} | ||||
|  | ||||
| ## Overview | ||||
|  | ||||
| This project contains {{project.modules.length}} modules with a total of {{project.metrics.total_lines}} lines of code. | ||||
|  | ||||
| ## Installation | ||||
|  | ||||
| ```bash | ||||
| cargo install {{project.name}} | ||||
| ``` | ||||
|  | ||||
| ## Usage | ||||
|  | ||||
| ```bash | ||||
| {{project.name}} --help | ||||
| ``` | ||||
|  | ||||
| ## Dependencies | ||||
|  | ||||
| {{#each project.dependencies}} | ||||
| - `{{@key}}`: {{this}} | ||||
| {{/each}} | ||||
|  | ||||
| ## Project Structure | ||||
|  | ||||
| ``` | ||||
| {{#each project.structure.directories}} | ||||
| {{this.name}}/ | ||||
| {{/each}} | ||||
| ``` | ||||
|  | ||||
| ## API Documentation | ||||
|  | ||||
| {{#each project.modules}} | ||||
| ### {{this.name}} | ||||
|  | ||||
| {{#if this.docs}} | ||||
| {{this.docs}} | ||||
| {{/if}} | ||||
|  | ||||
| {{#if this.functions}} | ||||
| **Functions:** {{this.functions.length}} | ||||
| {{/if}} | ||||
|  | ||||
| {{#if this.structs}} | ||||
| **Structs:** {{this.structs.length}} | ||||
| {{/if}} | ||||
|  | ||||
| {{/each}} | ||||
|  | ||||
| ## Metrics | ||||
|  | ||||
| - **Lines of Code:** {{project.metrics.total_lines}} | ||||
| - **Total Files:** {{project.metrics.total_files}} | ||||
| - **Test Files:** {{project.metrics.test_files}} | ||||
| - **Dependencies:** {{project.metrics.dependency_count}} | ||||
| - **Complexity Score:** {{project.metrics.complexity_score}} | ||||
|  | ||||
| ## License | ||||
|  | ||||
| {{#if project.license}} | ||||
| {{project.license}} | ||||
| {{else}} | ||||
| MIT | ||||
| {{/if}} | ||||
|  | ||||
| ## Authors | ||||
|  | ||||
| {{#each project.authors}} | ||||
| - {{this}} | ||||
| {{/each}} | ||||
templates/structure.md (new file, 39 lines)
							| @@ -0,0 +1,39 @@ | ||||
| # Project Structure | ||||
|  | ||||
| ## Directory Overview | ||||
|  | ||||
| ``` | ||||
| {{#each structure.directories}} | ||||
| {{this.name}}/ | ||||
| {{#each this.subdirectories}} | ||||
| ├── {{this}}/ | ||||
| {{/each}} | ||||
| {{#if this.file_count}} | ||||
| └── ({{this.file_count}} files) | ||||
| {{/if}} | ||||
|  | ||||
| {{/each}} | ||||
| ``` | ||||
|  | ||||
| ## File Distribution | ||||
|  | ||||
| {{#each structure.files}} | ||||
| - **{{this.name}}** ({{this.language}}) - {{this.lines_of_code}} lines{{#if this.is_test}} [TEST]{{/if}} | ||||
| {{/each}} | ||||
|  | ||||
| ## Statistics | ||||
|  | ||||
| - **Total Directories:** {{structure.directories.length}} | ||||
| - **Total Files:** {{structure.files.length}} | ||||
| - **Languages Used:** | ||||
| {{#group structure.files by="language"}} | ||||
|   - {{@key}}: {{this.length}} files | ||||
| {{/group}} | ||||
|  | ||||
| {{#if structure.dependency_graph}} | ||||
| ## Dependencies | ||||
|  | ||||
| {{#each structure.dependency_graph}} | ||||
| - **{{@key}}** depends on: {{#each this}}{{this}}{{#unless @last}}, {{/unless}}{{/each}} | ||||
| {{/each}} | ||||
| {{/if}} | ||||
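These templates use Handlebars-style syntax, but `{{#unique}}` (in templates/changelog.md) and `{{#group}}` (above) are not standard Handlebars built-ins, and the JavaScript-style `.length` lookups may also need engine support. If rendering goes through the `handlebars` crate, such helpers must be registered by hand. The following is a hypothetical sketch of a `unique` helper that emits a comma-separated list of distinct values for a key; as a simplification it ignores the block body instead of rendering it.

```rust
use handlebars::{Context, Handlebars, Helper, HelperResult, Output, RenderContext};
use serde_json::Value;

// Hypothetical #unique helper backing {{#unique commits "author"}}...{{/unique}}.
// Simplification: writes "a, b, c" directly rather than rendering the inner template.
fn unique_helper(
    h: &Helper,
    _: &Handlebars,
    _: &Context,
    _: &mut RenderContext,
    out: &mut dyn Output,
) -> HelperResult {
    let items = h.param(0).and_then(|p| p.value().as_array()).cloned().unwrap_or_default();
    let key = h.param(1).and_then(|p| p.value().as_str()).unwrap_or("");
    let mut seen = std::collections::BTreeSet::new();
    for item in &items {
        if let Some(v) = item.get(key).and_then(Value::as_str) {
            seen.insert(v.to_string());
        }
    }
    out.write(&seen.into_iter().collect::<Vec<_>>().join(", "))?;
    Ok(())
}

// Registration, e.g. while setting up the doc generator:
// let mut hb = Handlebars::new();
// hb.register_helper("unique", Box::new(unique_helper));
```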