From 8422e4e6a1a832ed49c3376cc6495eacd14e0d07 Mon Sep 17 00:00:00 2001
From: Benson Wong
Date: Mon, 26 May 2025 15:46:08 -0700
Subject: [PATCH] move some docs to the wiki [no-ci]

---
 README.md | 26 --------------------------
 1 file changed, 26 deletions(-)

diff --git a/README.md b/README.md
index f6b1f18..0711301 100644
--- a/README.md
+++ b/README.md
@@ -299,32 +299,6 @@ Any OpenAI compatible server would work. llama-swap was originally designed for
 
 For Python based inference servers like vllm or tabbyAPI it is recommended to run them via podman or docker. This provides clean environment isolation as well as responding correctly to `SIGTERM` signals to shutdown.
 
-## Systemd Unit Files
-
-Use this unit file to start llama-swap on boot. This is only tested on Ubuntu.
-
-`/etc/systemd/system/llama-swap.service`
-
-```
-[Unit]
-Description=llama-swap
-After=network.target
-
-[Service]
-User=nobody
-
-# set this to match your environment
-ExecStart=/path/to/llama-swap --config /path/to/llama-swap.config.yml
-
-Restart=on-failure
-RestartSec=3
-StartLimitBurst=3
-StartLimitInterval=30
-
-[Install]
-WantedBy=multi-user.target
-```
-
 ## Star History
 
 [![Star History Chart](https://api.star-history.com/svg?repos=mostlygeek/llama-swap&type=Date)](https://www.star-history.com/#mostlygeek/llama-swap&Date)
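
For reference, the unit file removed above is activated with standard systemd commands. A minimal sketch, assuming the unit was saved as `/etc/systemd/system/llama-swap.service` per the removed text (binary and config paths remain placeholders, as in the original):

```
# pick up the new/changed unit file
sudo systemctl daemon-reload

# start llama-swap now and enable it on every boot
sudo systemctl enable --now llama-swap.service

# verify it is running, and follow its logs
systemctl status llama-swap.service
journalctl -u llama-swap.service -f
```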