support v1/rerank endpoint
@@ -92,7 +92,11 @@ profiles:
- "llama"
```
More [examples](examples/README.md) are available for different use cases.
**Guides and examples**
- [config.example.yaml](config.example.yaml) includes examples for supporting the `v1/embeddings` and `v1/rerank` endpoints (see the sketch after this list)
- [Speculative Decoding](examples/speculative-decoding/README.md) - using a small draft model can increase inference speeds by 20% to 40%. This example includes configurations for Qwen2.5-Coder-32B (2.5x increase) and Llama-3.1-70B (1.4x increase) in the best cases (a draft-model sketch also follows this list).
- [Optimizing Code Generation](examples/benchmark-snakegame/README.md) - find the optimal settings for your machine. This example demonstrates defining multiple configurations and testing which one is fastest.
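
For reference, a minimal sketch of what a rerank-capable entry could look like, assuming the `models:`/`profiles:` layout used in config.example.yaml and a llama-server backend; the model paths, ports, and the `--reranking`/`--embedding` flags are assumptions to verify against your llama-server build:

```yaml
models:
  # Hypothetical reranker entry; adjust the path, port, and flags for your setup.
  "reranker":
    cmd: >
      llama-server --port 9010
      -m /path/to/reranker-model.gguf
      --reranking
    proxy: "http://127.0.0.1:9010"

  # Hypothetical embeddings entry served the same way.
  "embedder":
    cmd: >
      llama-server --port 9011
      -m /path/to/embedding-model.gguf
      --embedding
    proxy: "http://127.0.0.1:9011"

profiles:
  retrieval:
    - "embedder"
    - "reranker"
```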
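
Similarly, a sketch of a speculative-decoding entry under the same assumptions; the `--model-draft` and `--draft-max` flags and the target/draft model pairing are assumptions to check against the linked example and your llama-server version:

```yaml
models:
  # Hypothetical pairing of a large target model with a small draft model.
  "qwen-coder-draft":
    cmd: >
      llama-server --port 9020
      -m /path/to/Qwen2.5-Coder-32B.gguf
      --model-draft /path/to/small-draft-model.gguf
      --draft-max 16
    proxy: "http://127.0.0.1:9020"
```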
## Installation