
merge main branch #1


Open

wants to merge 226 commits into base: master

Conversation

Leizhenpeng (Member)

No description provided.

GargantuaX and others added 30 commits May 11, 2023 01:30
* change azure engine config to azure modelMapper config (usage sketched below, after this commit group)

* Update go.mod

* Revert "Update go.mod"

This reverts commit 78d14c5.

* lint fix

* add test

* lint fix

* lint fix

* lint fix

* opt

* opt

* opt

* opt
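
The modelMapper change at the top of this group replaces the per-engine Azure configuration with a model-to-deployment mapping function. A minimal sketch of the resulting usage, assuming the DefaultAzureConfig/AzureModelMapperFunc API; the key, endpoint, and deployment names are placeholders:

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Placeholder key and endpoint for an Azure OpenAI resource.
	cfg := openai.DefaultAzureConfig("your-azure-api-key", "https://your-resource.openai.azure.com/")
	// Map OpenAI model names to Azure deployment names instead of configuring engines.
	cfg.AzureModelMapperFunc = func(model string) string {
		switch model {
		case openai.GPT3Dot5Turbo:
			return "my-gpt35-deployment"
		default:
			return model
		}
	}

	client := openai.NewClientWithConfig(cfg)
	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: openai.GPT3Dot5Turbo,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Hello"},
		},
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```

Later sketches in this log reuse this client and a ctx for brevity.
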
* Move form_builder into internal pkg.

* Fix import of audio.go

* Reorganize.

* Fix import.

* Fix

---------

Co-authored-by: JoyShi <[email protected]>
* chore(config.go): update Azure API version to 2023-05-15 to use the latest version available

* chore(api_internal_test.go): update Azure API version to 2023-05-15 to match the latest version
Added in `unofficial` to the README to make it clear it's not official.
* move request_builder into internal pkg (#304)

* add some test for internal.RequestBuilder

* add a test for openai.GetEngine
* Implement optional io.Reader in AudioRequest (#303) (#265)

* Fix err shadowing

* Add test to cover AudioRequest io.Reader usage

* Add additional test cases to cover AudioRequest io.Reader usage

* Add test to cover opening the file specified in an AudioRequest
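
The optional io.Reader lets a transcription request stream audio from memory instead of requiring a file on disk. A rough sketch, assuming the AudioRequest.Reader field introduced here and reusing the client/ctx from the first sketch:

```go
// Transcribe audio supplied via an io.Reader rather than only a file path.
f, err := os.Open("speech.mp3")
if err != nil {
	log.Fatal(err)
}
defer f.Close()

resp, err := client.CreateTranscription(ctx, openai.AudioRequest{
	Model:    openai.Whisper1,
	Reader:   f,            // any io.Reader works; a file is used here only for brevity
	FilePath: "speech.mp3", // kept as the filename hint for the multipart upload
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp.Text)
```
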
* move error_accumulator into internal pkg (#304)

* move error_accumulator into internal pkg (#304)

* add a test for ErrTooManyEmptyStreamMessages in stream_reader (#304)
* Support Retrieve model API (#340)

* Test for GetModel error cases. (#340)

* Reduce the cognitive complexity of TestClientReturnsRequestBuilderErrors (#340)
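
A short sketch of the new retrieve-model call, reusing the client and ctx from the first sketch:

```go
// Retrieve metadata for a single model by ID.
model, err := client.GetModel(ctx, "gpt-3.5-turbo")
if err != nil {
	log.Fatal(err)
}
fmt.Println(model.ID, model.OwnedBy)
```
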
#339)

* test: Add tests for improved coverage before refactoring

This commit adds tests to improve coverage before refactoring
to ensure that the changes do not break anything.

* refactor: replace goto statement with loop

This commit introduces a refactor to improve the clarity of the control flow within the method.
The goto statement can sometimes make the code hard to understand and maintain, hence this refactor aims to resolve that.

* refactor: extract for-loop from Recv to another method

This commit improves code readability and maintainability
by making the Recv method simpler.
* fix json marshaling error response of azure openai (#343)

* add a test case for handleErrorResp func (#343)
* Support Retrieve file content API (#347)

* add timeout test for GetFileContent (#347)
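
The retrieve-file-content API returns the raw bytes of an uploaded file. A sketch assuming GetFileContent returns an io.ReadCloser; the file ID is a placeholder and client/ctx are reused from above:

```go
content, err := client.GetFileContent(ctx, "file-abc123") // placeholder file ID
if err != nil {
	log.Fatal(err)
}
defer content.Close()

data, err := io.ReadAll(content)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("downloaded %d bytes\n", len(data))
```
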
Added GPT3Dot5Turbo0613, GPT3Dot5Turbo16K, GPT40613, and GPT432K0613
models from June update
(https://openai.com/blog/function-calling-and-other-api-updates)

Issue #360
* Improve (#356) to support registration of wildcard URLs

* Add TestAzureChatCompletions & TestAzureChatCompletionsWithCustomDeploymentName

* Remove TestAzureChatCompletionsWithCustomDeploymentName

---------

Co-authored-by: coggsflod <[email protected]>
* add 16k_0613 model

* add 16k_0613 model

* add model:
* feat(chat): support function call api

* rename struct & add const ChatMessageRoleFunction
* add items, which is required for array type

* use JSONSchemaDefine directly
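
These two groups add the chat function-call API and the Items field that array-typed parameters require. A sketch of a function definition using both, assuming the current FunctionDefinition/jsonschema.Definition names (earlier revisions used FunctionDefine/JSONSchemaDefine); jsonschema is github.com/sashabaranov/go-openai/jsonschema and get_weather is a hypothetical function:

```go
fn := openai.FunctionDefinition{
	Name:        "get_weather",
	Description: "Get the current weather for a list of cities",
	Parameters: jsonschema.Definition{
		Type: jsonschema.Object,
		Properties: map[string]jsonschema.Definition{
			"cities": {
				Type:        jsonschema.Array,
				Description: "City names",
				// Items is required for array-typed properties.
				Items: &jsonschema.Definition{Type: jsonschema.String},
			},
		},
		Required: []string{"cities"},
	},
}

resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
	Model: openai.GPT3Dot5Turbo0613,
	Messages: []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "Weather in Paris and Oslo?"},
	},
	Functions: []openai.FunctionDefinition{fn},
})
```
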
* audio: add items to AudioResponseFormat enum

* audio: expand AudioResponse struct to accommodate verbose json response

---------

Co-authored-by: Roman Zubov <[email protected]>
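
The expanded AudioResponse carries the verbose_json fields (language, duration, segments). A sketch assuming the AudioResponseFormatVerboseJSON constant and the segment field names shown below:

```go
resp, err := client.CreateTranscription(ctx, openai.AudioRequest{
	Model:    openai.Whisper1,
	FilePath: "speech.mp3",
	Format:   openai.AudioResponseFormatVerboseJSON,
})
if err != nil {
	log.Fatal(err)
}
fmt.Println("language:", resp.Language, "duration:", resp.Duration)
for _, seg := range resp.Segments {
	fmt.Printf("[%6.2f - %6.2f] %s\n", seg.Start, seg.End, seg.Text)
}
```
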
* fix: chat stream resp has 'data: ' prefix

* fix: lint error

* fix: lint error

* fix: lint error
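
The stream fixes above concern how the SSE "data: " prefix is parsed before Recv returns a chunk. A consumer-side sketch, reusing client/ctx:

```go
stream, err := client.CreateChatCompletionStream(ctx, openai.ChatCompletionRequest{
	Model:  openai.GPT3Dot5Turbo,
	Stream: true,
	Messages: []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "Tell me a joke"},
	},
})
if err != nil {
	log.Fatal(err)
}
defer stream.Close()

for {
	chunk, err := stream.Recv()
	if errors.Is(err, io.EOF) {
		break // server finished the stream
	}
	if err != nil {
		log.Fatal(err) // includes ErrTooManyEmptyStreamMessages from the stream reader
	}
	fmt.Print(chunk.Choices[0].Delta.Content)
}
```
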
* feat: use json.rawMessage, test functions

* chore: lint

* fix: tests

the ChatCompletion mock server doesn't actually run otherwise. N=0 is the default in the request, but the server will treat it as n=1

* fix: tests should default to n=1 completions

* chore: add back removed interfaces, custom marshal

* chore: lint

* chore: lint

* chore: add some tests

* chore: appease lint

* clean up JSON schema + tests

* chore: lint

* feat: remove backwards compatible functions

for illustrative purposes

* fix: revert params change

* chore: use interface{}

* chore: add test

* chore: add back FunctionDefine

* chore: /s/interface{}/any

* chore: add back jsonschemadefinition

* chore: testcov

* chore: lint

* chore: remove pointers

* chore: update comment

* chore: address CR

added test for compatibility as well

---------

Co-authored-by: James <[email protected]>
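
On the response side, the model answers a function-call request with a name plus JSON-encoded arguments. A sketch of decoding them, assuming Arguments is a JSON string and continuing the hypothetical get_weather sketch from earlier (resp is that chat completion response):

```go
msg := resp.Choices[0].Message
if fc := msg.FunctionCall; fc != nil {
	var args struct {
		Cities []string `json:"cities"`
	}
	if err := json.Unmarshal([]byte(fc.Arguments), &args); err != nil {
		log.Fatal(err)
	}
	fmt.Println("call:", fc.Name, "cities:", args.Cities)
}
```
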
Yu0u and others added 30 commits October 29, 2024 07:22
* add chatcompletion stream refusal and logprobs

* fix slice to struct

* add  integration test

* fix lint

* fix lint

* fix: the object should be a pointer

---------

Co-authored-by: genglixia <[email protected]>
* add attachments in MessageRequest

* Move tools const to message

* remove const, just use assistanttool const
* updated client_test to solve lint error

* modified golangci yml to solve linter issues

* minor change
* make user optional in embedding request

* fix unit test
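
With User now optional, an embedding request only needs a model and input. A minimal sketch reusing client/ctx:

```go
resp, err := client.CreateEmbeddings(ctx, openai.EmbeddingRequest{
	Model: openai.AdaEmbeddingV2,
	Input: []string{"hello world"},
	// User is omitted; it is no longer required by the request type.
})
if err != nil {
	log.Fatal(err)
}
fmt.Println("vector length:", len(resp.Data[0].Embedding))
```
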
The ID field is not always present for streaming responses. Without omitempty, the entire ToolCall struct will be missing.
* add reasoning_effort param

* add o1 model

* fix lint
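
The reasoning_effort parameter maps to a plain string field on the chat request. A sketch assuming the ReasoningEffort field and an o1 model constant added by these commits:

```go
resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
	Model:           openai.O1, // assumed constant for the o1 model
	ReasoningEffort: "low",     // "low", "medium", or "high"
	Messages: []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "Prove that sqrt(2) is irrational."},
	},
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp.Choices[0].Message.Content)
```
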
* Add support for O3-mini

- Add support for the o3 mini set of models, including tests that match the constraints in OpenAI's API docs (https://platform.openai.com/docs/models#o3-mini).

* Deprecate and refactor

- Deprecate `ErrO1BetaLimitationsLogprobs` and `ErrO1BetaLimitationsOther`

* Implement `validationRequestForReasoningModels`, which works on both o1 & o3 and applies per-model-type restrictions on functionality (e.g., the o3 class is allowed function calls and system messages, while o1 isn't)

* Move reasoning validation to `reasoning_validator.go`

- Add a `NewReasoningValidator` which exposes a `Validate()` method for a given request

- Also adds a test for chat streams

* Final nits
* ref: add image url support to messages

* fix linter error

* fix linter error
* fix: remove validateO1Specific

* update golangci-lint-action version

* fix actions

* fix actions

* fix actions

* fix actions

* remove some o1 test
* feat: add Anthropic API support with custom version header

* refactor: use switch statement for API type header handling

* refactor: add OpenAI & AzureAD types to be exhaustive

* Update client.go

need explicit fallthrough in empty case statements

* constant for APIVersion; addtl tests
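
A sketch of pointing the client at Anthropic, assuming the APITypeAnthropic constant and the APIVersion handling added here (forwarded as the anthropic-version header); the endpoint and key are placeholders:

```go
cfg := openai.DefaultConfig("your-anthropic-api-key")
cfg.BaseURL = "https://api.anthropic.com/v1"
cfg.APIType = openai.APITypeAnthropic // assumed constant from this change
cfg.APIVersion = "2023-06-01"         // sent as the anthropic-version header

client := openai.NewClientWithConfig(cfg)
```
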
* fix lint

* remove linters
* fix jsonschema tests

* ensure all run during PR Github Action

* add test for struct to schema

* add support for enum tag

* support nullable tag
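
The jsonschema additions derive a Definition from a Go struct and honor enum and nullable struct tags. A sketch assuming GenerateSchemaForType and the tag names used below:

```go
type WeatherArgs struct {
	City string `json:"city"`
	Unit string `json:"unit" enum:"celsius,fahrenheit"` // enum tag (assumed name)
	Note string `json:"note,omitempty" nullable:"true"` // nullable tag (assumed name)
}

schema, err := jsonschema.GenerateSchemaForType(WeatherArgs{})
if err != nil {
	log.Fatal(err)
}
fmt.Printf("%+v\n", schema.Properties["unit"].Enum)
```
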
* feat: add new GPT-4.1 model variants to completion.go

* feat: add tests for unsupported models in completion endpoint

* fix: add missing periods to test function comments in completion_test.go
* feat: Add missing TTS models and voices

* feat: Add new instruction field to create speech request

- From docs: Control the voice of your generated audio with additional instructions. Does not work with tts-1 or tts-1-hd.

* fix: add canary-tts back to SpeechModel
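
The speech request gains an Instructions field for steering delivery; per the docs note above it has no effect on tts-1 or tts-1-hd. A sketch reusing client/ctx; the gpt-4o-mini-tts model constant and the field name are assumptions:

```go
audio, err := client.CreateSpeech(ctx, openai.CreateSpeechRequest{
	Model:        openai.TTSModelGPT4oMini, // assumed constant for "gpt-4o-mini-tts"
	Input:        "Welcome back! Your build is green.",
	Voice:        openai.VoiceAlloy,
	Instructions: "Speak in a calm, upbeat tone.", // new field; ignored by tts-1 / tts-1-hd
})
if err != nil {
	log.Fatal(err)
}
defer audio.Close()

out, err := os.Create("welcome.mp3")
if err != nil {
	log.Fatal(err)
}
defer out.Close()
io.Copy(out, audio) // the response body is the raw audio stream
```
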
…or DeepSeek R1 (#925)

* support deepseek field "reasoning_content"

* support deepseek field "reasoning_content"

* Comment ends in a period (godot)

* add comment on field reasoning_content

* fix go lint error

* chore: trigger CI

* make field "content" in MarshalJSON function omitempty

* remove reasoning_content in TestO1ModelChatCompletions func

* feat: Add test and handler for deepseek-reasoner chat model completions, including support for reasoning content in responses.

* feat: Add test and handler for deepseek-reasoner chat model completions, including support for reasoning content in responses.

* feat: Add test and handler for deepseek-reasoner chat model completions, including support for reasoning content in responses.
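
The reasoning_content field surfaces DeepSeek R1's reasoning separately from the final answer. A sketch assuming a ReasoningContent field on ChatCompletionMessage and a client configured against a DeepSeek-compatible endpoint:

```go
resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
	Model: "deepseek-reasoner",
	Messages: []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "Which is larger, 9.11 or 9.8?"},
	},
})
if err != nil {
	log.Fatal(err)
}
msg := resp.Choices[0].Message
fmt.Println("reasoning:", msg.ReasoningContent) // empty for models without reasoning output
fmt.Println("answer:", msg.Content)
```
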
- This adds support, and tests, for the o3 and o4-mini class of models
The legacy completion API supports a `stream_options` object when
`stream` is set to true [0]. This adds a StreamOptions property to the
CompletionRequest struct to support this setting.

[0] https://platform.openai.com/docs/api-reference/completions/create#completions-create-stream_options

Signed-off-by: Sean McGinnis <[email protected]>
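
A sketch of the new StreamOptions field on the legacy completion request, assuming the existing StreamOptions type with IncludeUsage and reusing client/ctx:

```go
stream, err := client.CreateCompletionStream(ctx, openai.CompletionRequest{
	Model:         openai.GPT3Dot5TurboInstruct,
	Prompt:        "Say hello",
	Stream:        true,
	StreamOptions: &openai.StreamOptions{IncludeUsage: true}, // final chunk will carry usage
})
if err != nil {
	log.Fatal(err)
}
defer stream.Close()
```
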
* Add Prediction field to ChatCompletionRequest

* Include prediction tokens in response
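
Predicted outputs let the API reuse tokens from text expected to reappear in the response (for example the unchanged parts of a file being edited). A sketch reusing client/ctx; the Prediction type, its fields, and the usage-detail field names are assumptions:

```go
existing := "func FetchUser(id int) (*User, error) { /* ... */ }" // text expected to reappear

resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
	Model: openai.GPT4o,
	Messages: []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "Rename FetchUser to GetUser:\n" + existing},
	},
	Prediction: &openai.Prediction{Type: "content", Content: existing}, // assumed shape
})
if err != nil {
	log.Fatal(err)
}
// Usage details report how many predicted tokens were accepted or rejected (assumed field names).
fmt.Println(resp.Usage.CompletionTokensDetails.AcceptedPredictionTokens)
```
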