diff --git a/README.md b/README.md
index f29cc4d..89f841a 100644
--- a/README.md
+++ b/README.md
@@ -189,11 +189,6 @@ groups:
 - [Speculative Decoding](examples/speculative-decoding/README.md) - using a small draft model can increase inference speeds by 20% to 40%. This example includes configurations for Qwen2.5-Coder-32B (2.5x increase) and Llama-3.1-70B (1.4x increase) in the best cases.
 - [Optimizing Code Generation](examples/benchmark-snakegame/README.md) - find the optimal settings for your machine. This example demonstrates defining multiple configurations and testing which one is fastest.
 - [Restart on Config Change](examples/restart-on-config-change/README.md) - automatically restart llama-swap when trying out different configurations.
-
-## Configuration
-
-llama-s
-
 ## Docker Install ([download images](https://github.com/mostlygeek/llama-swap/pkgs/container/llama-swap))