ai ai

an AI model named ai. The aim is to incorporate it into aibot and aios.
| name | full | code | repo |
|---|---|---|---|
| ai | ai ai | aiai | https://git.syui.ai/ai/ai |
| os | ai os | aios | https://git.syui.ai/ai/os |
| bot | ai bot | aibot | https://git.syui.ai/ai/bot |
| at | ai at | ai.syu.is | https://git.syui.ai/ai/at |
model
- gemma3:12b
- deepseek-r1:12b
{
"model": [ "gemma3", "deepseek-r1" ],
"tag": [ "ollama", "LoRA", "unsloth", "open-webui", "n8n" ]
}
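The config above can be consumed directly. A minimal sketch (the `:12b` tag and the pull-command shape are assumptions carried over from the model list above) that turns the `model` array into `ollama pull` commands:

```python
import json

# the JSON config from above, embedded inline for the sketch
config = json.loads("""
{
  "model": [ "gemma3", "deepseek-r1" ],
  "tag": [ "ollama", "LoRA", "unsloth", "open-webui", "n8n" ]
}
""")

# one `ollama pull` command per model; the :12b tag matches the list above
commands = [f"ollama pull {m}:12b" for m in config["model"]]
print(commands)
```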
$ brew install ollama
$ brew services restart ollama
$ ollama pull gemma3:12b
$ ollama run gemma3:12b "hello"
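Once `ollama` is running it also serves an HTTP API on `localhost:11434`; `/api/generate` is its one-shot completion endpoint. A stdlib-only sketch of calling it (the model name mirrors the pull above; this assumes a local server is up):

```python
import json
import urllib.request

def build_payload(prompt, model="gemma3:12b"):
    # stream=False makes Ollama return one JSON object instead of a chunk stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="gemma3:12b", host="http://localhost:11434"):
    # POST the payload to Ollama's /api/generate endpoint
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("hello"))
```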
n8n
# https://github.com/n8n-io/n8n/
$ docker volume create n8n_data
$ docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n
webui
$ winget install ollama.ollama
$ ollama serve
$ ollama run llama3.2:1b
$ winget install --id Python.Python.3.11 -e
$ python --version
$ python -m venv webui
$ cd webui
$ .\Scripts\activate
$ pip install open-webui
$ open-webui serve
http://localhost:8080
LoRA
finetuning
# https://ai.google.dev/gemma/docs/core/lora_tuning
$ conda create -n finetuning python=3.11
$ conda activate finetuning
$ pip install mlx-lm #apple silicon
$ ollama run llama3.2:1b
$ echo "{ \"model\": \"https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct\", \"data\": \"https://github.com/ml-explore/mlx-examples/tree/main/lora/data\" }"|jq .
$ git clone https://github.com/ml-explore/mlx-examples
$ model=meta-llama/Llama-3.2-1B-Instruct
$ data=mlx-examples/lora/data
$ mlx_lm.lora --train --model $model --data $data --batch-size 3
$ ls adapters
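`mlx_lm.lora` expects `--data` to point at a directory containing `train.jsonl` and `valid.jsonl`, one JSON object per line with a `"text"` field (the format used by `mlx-examples/lora/data`). A small sketch that writes such a directory, with placeholder sentences:

```python
import json
from pathlib import Path

def write_dataset(root, train_texts, valid_texts):
    # one {"text": ...} object per line, as in mlx-examples/lora/data
    root = Path(root)
    root.mkdir(parents=True, exist_ok=True)
    for name, texts in [("train.jsonl", train_texts), ("valid.jsonl", valid_texts)]:
        with open(root / name, "w") as f:
            for t in texts:
                f.write(json.dumps({"text": t}) + "\n")

# placeholder data; replace with your own corpus
write_dataset("data", ["hello from ai"], ["hello again"])
```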
$ vim Modelfile
FROM llama3.2:1b
ADAPTER ./adapters
$ ollama create ai -f ./Modelfile
unsloth
$ pip install unsloth
from unsloth import FastLanguageModel

# grpo is not a from_pretrained argument; load the model with the standard
# parameters instead (the unsloth/ org hosts a mirror of Qwen2.5-1.5B,
# and load_in_4bit reduces VRAM use)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-1.5B",
    max_seq_length=2048,
    load_in_4bit=True,
)