$ litellm --model huggingface/bigcode/starcoder
#INFO: Proxy running on http://0.0.0.0:8000
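Because the proxy speaks an OpenAI-compatible HTTP API, any HTTP client can exercise it. A minimal smoke test with `requests`, assuming the default `/chat/completions` route is enabled:

import requests

# Send an OpenAI-shaped request to the local proxy started above.
# The proxy routes it to the model given via `litellm --model`.
resp = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "say hi"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])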
Server: the computer on which OpenGL commands are executed. This might differ from the computer from which the commands are issued.
import openai  # openai v1.0.0+

# Point the client at the LiteLLM proxy; the api_key can be any string
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# The request is sent to the model set on the litellm proxy (`litellm --model`)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)
print(response)
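Since the client above is the stock openai v1 SDK, streaming works the usual way; a short sketch, assuming the proxy forwards streamed chunks:

# Stream tokens as they arrive instead of waiting for the full response
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries a delta with the next piece of generated text
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)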
<!-- Web.Config Configuration File -->
<configuration>
  <system.web>
    <!-- Valid values are "On", "Off", and "RemoteOnly" (the value is case-sensitive) -->
    <customErrors mode="On"/>
  </system.web>
</configuration>
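customErrors can also route users to friendly error pages; a minimal sketch, where GenericError.htm and NotFound.htm are hypothetical pages in the site root:

<configuration>
  <system.web>
    <!-- Redirect unhandled errors to a custom page ("GenericError.htm" is a placeholder) -->
    <customErrors mode="On" defaultRedirect="GenericError.htm">
      <!-- Map specific HTTP status codes to their own pages -->
      <error statusCode="404" redirect="NotFound.htm"/>
    </customErrors>
  </system.web>
</configuration>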