From c97b80bdfe0045a528a0e44be8c070f60bfc395a Mon Sep 17 00:00:00 2001
From: Benson Wong
Date: Wed, 1 Jan 2025 12:25:45 -0800
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1d5ac80..349df14 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 ![llama-swap header image](header.jpeg)
 
 # Introduction
 
-llama-swap is an OpenAI API compatible server that gives you complete control over how you use your hardware. It automatically swaps to the configuration of your choice for serving a model. Since [llama.cpp's server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server) can't swap models, let's swap the server instead!
+llama-swap is a transparent proxy server that automatically loads the right llama.cpp server configuration to serve the request.
 
 Features: