LiteLLM (YC W23) reposted this

Ishaan Jaffer

Building LiteLLM (YC W23) | 12.5K+ GitHub stars | Call 100+ LLMs using the OpenAI format.

Excited to launch OAuth 2.0 on LiteLLM (YC W23) Gateway

🔐 Use OAuth 2.0 to authenticate /chat/completions, /completions, and /embeddings requests (h/t Nivid Dholakia) -> Start here: https://lnkd.in/dcnjqt2k (sketch below)
🪨 Support for returning the trace for Bedrock Guardrails requests (h/t Ivane Kelaptrishvili)
📖 Docs - added an example on setting up LiteLLM Gateway with Sagemaker
🛠️ Support for setting temperature=0 for Sagemaker requests on LiteLLM Gateway
🔐 PR - set tpm/rpm limits per model for a given API key: https://lnkd.in/d-t_4pR9 (second sketch below)
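For anyone who wants to try the OAuth 2.0 flow, here is a minimal sketch of what a request could look like. The identity-provider URL, client credentials, gateway address (http://localhost:4000), and model name are placeholder assumptions, not values from the post:

```python
# Minimal sketch: call the LiteLLM Gateway's /chat/completions endpoint
# using an OAuth 2.0 access token. The IdP URL, client credentials,
# gateway address, and model name are all placeholders.
import requests

# Step 1 (hypothetical IdP): exchange client credentials for an access token.
token_resp = requests.post(
    "https://your-idp.example.com/oauth/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: call the gateway like any OpenAI-format endpoint, passing the
# OAuth access token as the bearer credential instead of a static API key.
resp = requests.post(
    "http://localhost:4000/chat/completions",  # assumed local gateway
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```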

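And a hedged sketch of the per-model tpm/rpm limits from the linked PR, set when generating a key via the gateway's /key/generate endpoint. The field names (model_tpm_limit / model_rpm_limit) follow the PR's intent and may differ from the merged API; the master key, gateway address, and model names are placeholders:

```python
# Hedged sketch: create an API key whose per-model token-per-minute and
# request-per-minute limits are enforced by the gateway. Field names may
# differ from the merged PR; all values below are placeholders.
import requests

resp = requests.post(
    "http://localhost:4000/key/generate",  # assumed local gateway
    headers={"Authorization": "Bearer sk-your-master-key"},  # proxy admin key
    json={
        "models": ["gpt-4", "gpt-3.5-turbo"],
        # Per-model limits: tokens per minute and requests per minute.
        "model_tpm_limit": {"gpt-4": 100000},
        "model_rpm_limit": {"gpt-4": 100},
    },
)
resp.raise_for_status()
print(resp.json()["key"])  # new API key constrained by the limits above
```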
Athos G

Principal Software Consultant

2mo

So, someone wanting to make a backend fetch to an OpenAI endpoint through your proxy will essentially have to make 2 API requests and authenticate via OAuth 2 (or 3?) times. If they use Azure, +1 request and auth. Unless I'm missing something?


