
For some reason, whenever I do:

response = openai.ChatCompletion.create(
    engine="chatgpt",
    model="gpt-35-turbo",
    messages=[
        {"role": "system", "content": "some text here"},
        {"role": "user", "content": "some more text here"}
    ],
    stream=True
)

for chunk in response:
    print(chunk)

The create function will always return:

Invalid response object from API: '{ "statusCode": 500, "message": "Internal server error", "activityId": "some_id_here" }' (HTTP response code was 500)

Setting stream=False works fine. Is this a problem on my side, or some sort of restriction OpenAI has put on my account? I can't find anything in the API documentation that explains this behaviour.

1 Answer


"Engines" as deprecated

https://help.openai.com/en/articles/6283125-what-happened-to-engines

completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "Hello!"}
  ],
  stream=True
)
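With stream=True, create returns an iterator of chunk objects rather than a single response; the incremental text arrives in each chunk's choices[0]["delta"]. A minimal sketch of assembling the streamed text, using hypothetical sample chunks shaped like the streaming payload (collect_stream is an illustrative helper, not part of the SDK):

```python
# Each streamed chunk carries an incremental "delta"; the text lives in
# choices[0]["delta"]["content"] (absent on the role-only first chunk
# and the final finish_reason chunk).
def collect_stream(chunks):
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# Hypothetical sample chunks mimicking the streaming payload shape:
sample = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": "!"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(collect_stream(sample))  # -> Hello!
```

In real use you would pass the completion returned by openai.ChatCompletion.create(..., stream=True) to the same loop instead of the sample list.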

And you can use these models with this endpoint:

/v1/chat/completions    gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301

Model names are case-sensitive, so you have to use them exactly as written.

https://platform.openai.com/docs/models/model-endpoint-compatibility
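Since a wrong casing fails at request time, it can help to validate the model name up front. A small sketch, using the model list from the table above (check_model is a hypothetical helper, not an SDK function):

```python
# Models accepted by /v1/chat/completions, per the compatibility table above.
VALID_MODELS = {
    "gpt-4", "gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314",
    "gpt-3.5-turbo", "gpt-3.5-turbo-0301",
}

def check_model(name):
    """Return the name if valid; raise with a hint on casing mistakes."""
    if name in VALID_MODELS:
        return name
    lowered = name.lower()
    if lowered in VALID_MODELS:
        raise ValueError(f"Model names are case-sensitive; did you mean {lowered!r}?")
    raise ValueError(f"Unknown model: {name!r}")

print(check_model("gpt-3.5-turbo"))  # -> gpt-3.5-turbo
```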
