feat: upstream spec sync + all chat completions apis (#469)
* updates to vector store api and types
* types::vectorstores
* examples updates for types::vectorstores
* types::videos
* examples for types::videos
* types::containers
* examples for types::containers
* updates to chat completions types
* updates
* fix example
* fix examples compilation
* types::chat
* fix examples for types::chat
* remaining chat completion apis
* update chat-store example to run all the new apis
* video example
Doc-comment changes on the chat completions `create` method (excerpt of the diff):

```diff
-/// and [audio](https://platform.openai.com/docs/guides/audio) guides.
+/// Creates a model response for the given chat conversation.
 ///
+/// Returns a [chat completion](https://platform.openai.com/docs/api-reference/chat/object) object, or a streamed sequence of [chat completion chunk](https://platform.openai.com/docs/api-reference/chat/streaming) objects if the request is streamed.
 ///
-/// Parameter support can differ depending on the model used to generate the
-/// response, particularly for newer reasoning models. Parameters that are
-/// only supported for reasoning models are noted below. For the current state
-/// of unsupported parameters in reasoning models,
+/// Learn more in the [text generation](https://platform.openai.com/docs/guides/text-generation), [vision](https://platform.openai.com/docs/guides/vision), and [audio](https://platform.openai.com/docs/guides/audio) guides.
 ///
-/// [refer to the reasoning guide](https://platform.openai.com/docs/guides/reasoning).
+/// Parameter support can differ depending on the model used to generate the response, particularly for newer reasoning models. Parameters that are only supported for reasoning models are noted below. For the current state of unsupported parameters in reasoning models, [refer to the reasoning guide](https://platform.openai.com/docs/guides/reasoning).
 ///
 /// byot: You must ensure "stream: false" in serialized `request`
```

And on the streaming variant (excerpt):

```diff
-/// partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
+/// If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).
+///
+/// See the [Streaming section](https://platform.openai.com/docs/api-reference/chat/streaming) for more information, along with the [streaming responses](https://platform.openai.com/docs/guides/streaming-responses) guide for more information on how to handle the streaming events.
 ///
 /// [ChatCompletionResponseStream] is a parsed SSE stream until a \[DONE\] is received from server.
```
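The updated doc comments describe the streaming contract: tokens arrive as data-only server-sent events, and the stream ends with a `data: [DONE]` sentinel, which `ChatCompletionResponseStream` consumes on the caller's behalf. A minimal, std-only sketch of that termination logic (the helper name is hypothetical and this is not the crate's actual implementation, which deserializes each payload into a chat completion chunk and yields it from an async stream):

```rust
// Sketch: collect the payloads of data-only server-sent events,
// stopping at the `data: [DONE]` sentinel that terminates a
// streamed chat completion.
fn collect_sse_data(raw: &str) -> Vec<String> {
    let mut payloads = Vec::new();
    for line in raw.lines() {
        if let Some(data) = line.strip_prefix("data: ") {
            if data == "[DONE]" {
                break; // stream terminator, not a JSON chunk
            }
            payloads.push(data.to_string());
        }
        // Blank lines separate events and are skipped.
    }
    payloads
}

fn main() {
    // Each non-sentinel payload would normally be deserialized into a
    // chat completion chunk object.
    let raw = "data: {\"id\":\"chunk-1\"}\n\ndata: {\"id\":\"chunk-2\"}\n\ndata: [DONE]\n\n";
    for payload in collect_sse_data(raw) {
        println!("{payload}");
    }
}
```

This is also why the byot note insists on `"stream": false` for the non-streaming `create` call: with `"stream": true` the server replies with this SSE framing rather than a single JSON chat completion object.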