# <img src="./icon/ai.png" width="30"> ai `ai`

AI model `ai`

The aim is to incorporate it into `aibot` and `aios`.

|name|full|code|repo|
|---|---|---|---|
|ai|ai ai|aiai|https://git.syui.ai/ai/ai|
|os|ai os|aios|https://git.syui.ai/ai/os|
|bot|ai bot|aibot|https://git.syui.ai/ai/bot|
|at|ai|ai.syu.is|https://git.syui.ai/ai/at|

## model

1. gemma3:1b
2. deepseek-r1:12b

```json
{
  "model": [ "gemma3", "deepseek-r1" ],
  "tag": [ "ollama", "LoRA", "unsloth", "open-webui", "n8n" ]
}
```

```sh
$ brew install ollama
$ brew services restart ollama
$ ollama pull gemma3:1b
$ ollama run gemma3:1b "hello"
```

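Since the aim is to drive this model from `aibot`, it helps that ollama also exposes an HTTP API on port 11434; a minimal sketch using its standard `/api/generate` endpoint:

```sh
$ curl http://localhost:11434/api/generate -d '{ "model": "gemma3:1b", "prompt": "hello", "stream": false }'
```
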
## n8n

```sh
# https://github.com/n8n-io/n8n/
$ docker volume create n8n_data
$ docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n
```

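The editor UI then comes up at http://localhost:5678, the port published by `-p 5678:5678` above.
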
## webui

```sh
$ winget install ollama.ollama
$ ollama serve
$ ollama run gemma3:1b

$ winget install --id Python.Python.3.11 -e
$ python --version
$ python -m venv webui
$ cd webui
$ .\Scripts\activate
$ pip install open-webui
$ open-webui serve

# http://localhost:8080
```

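If Open WebUI does not detect ollama automatically, its documented `OLLAMA_BASE_URL` environment variable points it at the server; a sketch assuming the default ollama port, in PowerShell syntax:

```sh
$env:OLLAMA_BASE_URL = "http://localhost:11434"
open-webui serve
```
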
# ai

The AI model `ai` is meant to be incorporated into `aibot` and `aios`.

|name|full|code|repo|
|---|---|---|---|
|ai|ai ai|aiai|https://git.syui.ai/ai/ai|
|os|ai os|aios|https://git.syui.ai/ai/os|
|bot|ai bot|aibot|https://git.syui.ai/ai/bot|
|at|ai|ai.syu.is|https://git.syui.ai/ai/at|

## training

We train the model on a story, using specific vocabulary; for example, it refers to itself as アイ (Ai).

> アイね、回答するの ("I'm Ai, and I'll do the answering")

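A minimal sketch of one training sample, assuming the `{ "text": ... }` JSONL format that `mlx_lm.lora` accepts (see the `mlx-examples/lora/data` layout used below); the dialogue itself is illustrative:

```json
{ "text": "ユーザー: あなたは誰?\nアイ: アイね、回答するの" }
```
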
## what it can do

Basically, in response to requests to `aibot`, it generates images and videos with `comfyui` and answers with an LLM.

From the web, it is invoked through atproto.

```sh
[web]aiat --> [server]aios --> [at]aibot --> [ai]aiai
```

## what it uses

- https://github.com/ollama/ollama
- https://github.com/n8n-io/n8n
- https://github.com/comfyanonymous/comfyui
- https://github.com/NVIDIA/cosmos
- https://github.com/stability-ai/stablediffusion
- https://github.com/unslothai/unsloth
- https://github.com/ml-explore/mlx-examples
- https://github.com/ggml-org/llama.cpp

## LoRA

To run LoRA finetuning on Apple silicon, use `mlx_lm`.

```sh
$ brew install --cask anaconda
$ brew info anaconda
$ cd /opt/homebrew/Caskroom/anaconda/*
$ ./Anaconda3*.sh
```

Accept the terms for `google/gemma-3-1b-it` ahead of time.

- https://huggingface.co/google/gemma-3-1b-it

```sh
$ pip install -U "huggingface_hub[cli]"
# https://huggingface.co/settings/tokens
# Repositories permissions: Read access to contents of selected repos

$ huggingface-cli login
```

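Once the token is set, the gated model can also be fetched explicitly with the hub CLI's standard `download` command:

```sh
$ huggingface-cli download google/gemma-3-1b-it
```
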
```sh
$ conda create -n finetuning python=3.12
$ conda activate finetuning
$ pip install mlx-lm
$ echo "{ \"model\": \"https://huggingface.co/google/gemma-3-1b-it\", \"data\": \"https://github.com/ml-explore/mlx-examples/tree/main/lora/data\" }" | jq .
$ git clone https://github.com/ml-explore/mlx-examples
$ model=google/gemma-3-1b-it
$ data=mlx-examples/lora/data
$ mlx_lm.lora --train --model $model --data $data --batch-size 3

$ ls adapters
adapter_config.json
adapters.safetensors
```

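To sanity-check the result, `mlx_lm.generate` can load the adapter on top of the base model via `--adapter-path` (a minimal sketch reusing `$model` from above):

```sh
$ mlx_lm.generate --model $model --adapter-path adapters --prompt "hello"
```
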
## unsloth

To run LoRA finetuning on Windows, use `unsloth`.

```sh
$ nvidia-smi
$ nvcc --version

# https://github.com/unslothai/notebooks/blob/main/unsloth_windows.ps1
# cuda: 12.4
# python: 3.11
```

```sh
$ winget install --scope machine nvidia.cuda --version 12.4.1
$ winget install curl.curl
```

```sh
# https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation
$ curl -sLO https://raw.githubusercontent.com/unslothai/notebooks/refs/heads/main/unsloth_windows.ps1
$ powershell.exe -ExecutionPolicy Bypass -File .\unsloth_windows.ps1
$ vim custom.py
```

The above runs unsloth from pwsh, but using WSL is the better option.

```py
# https://docs.unsloth.ai/get-started/fine-tuning-guide
from unsloth import FastModel
import torch

fourbit_models = [
    # 4bit dynamic quants for superior accuracy and low memory use
    # https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-gemma-3
    # https://huggingface.co/unsloth/gemma-3-4b-it
    "unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
    "unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
    "unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
    "unsloth/gemma-3-27b-it-unsloth-bnb-4bit",

    # Other popular models!
    "unsloth/Llama-3.1-8B",
    "unsloth/Llama-3.2-3B",
    "unsloth/Llama-3.3-70B",
    "unsloth/mistral-7b-instruct-v0.3",
    "unsloth/Phi-4",
] # More models at https://huggingface.co/unsloth

model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/gemma-3-4b-it",
    max_seq_length = 2048, # Choose any for long context!
    load_in_4bit = True, # 4 bit quantization to reduce memory
    load_in_8bit = False, # [NEW!] A bit more accurate, uses 2x memory
    full_finetuning = False, # [NEW!] We have full finetuning now!
    # token = "hf_...", # use one if using gated models
)
model = FastModel.get_peft_model(
    model,
    finetune_vision_layers = False, # Turn off for just text!
    finetune_language_layers = True, # Should leave on!
    finetune_attention_modules = True, # Attention good for GRPO
    finetune_mlp_modules = True, # Should leave on always!

    r = 8, # Larger = higher accuracy, but might overfit
    lora_alpha = 8, # Recommended alpha == r at least
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
)
```

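The fine-tuning guide linked above continues from this setup with a training step. A minimal sketch, assuming `trl`'s `SFTTrainer` and a JSONL text dataset; the file name and hyperparameters are illustrative, not from the original:

```py
# Training-step sketch following the unsloth fine-tuning guide;
# "train.jsonl" and the SFTConfig values below are illustrative assumptions.
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

dataset = load_dataset("json", data_files = "train.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```
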
## comfyui

https://github.com/comfyanonymous/comfyui

- https://github.com/ltdrdata/ComfyUI-Manager
- https://github.com/ltdrdata/ComfyUI-Impact-Pack

### comfyui + ollama

- https://github.com/stavsap/comfyui-ollama
- https://github.com/pythongosssss/ComfyUI-Custom-Scripts

The `show text` custom node requires `ComfyUI-Custom-Scripts`.

### comfyui + torch + cuda:12.8

Use `python:3.12`.

```sh
$ cd ComfyUI/.venv/Scripts/
$ ./python.exe -V
```

The nightly builds of `torch` support `cuda:12.8`.

https://pytorch.org/get-started/locally/

```sh
$ pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
```

However, this can conflict with `torchaudio` and other packages. If conflicts occur, pin the versions explicitly; even so, this does not fully resolve the compatibility issues.

```sh
$ ./python.exe -m pip uninstall torch torchvision torchaudio -y
$ ./python.exe -m pip install --pre torch==2.7.0.dev20250306+cu128 torchvision==0.22.0.dev20250306+cu128 torchaudio==2.6.0.dev20250306+cu128 --index-url https://download.pytorch.org/whl/nightly/cu128
```

Using wheel files seems to be more stable.

```sh
# https://huggingface.co/w-e-w/torch-2.6.0-cu128.nv
$ ./python.exe -m pip install torch-2.x.x+cu128-cp312-cp312-win_amd64.whl
$ ./python.exe -m pip install torchvision-x.x.x+cu128-cp312-cp312-win_amd64.whl
$ ./python.exe -m pip install torchaudio-x.x.x+cu128-cp312-cp312-win_amd64.whl

$ ./python.exe -c "import torch; print(torch.cuda.is_available()); print(torch.__version__); print(torch.cuda.get_arch_list())"
```

### comfyui + cosmos

Video generation using NVIDIA's Cosmos.

https://comfyanonymous.github.io/ComfyUI_examples/cosmos/