adaptive_sdk.resources
Resource to interact with AB Tests.
Creates a new A/B test in the client's use case.
Arguments:
- ab_test_key: A unique key to identify the AB test.
- feedback_key: The feedback key against which the AB test will run.
- models: The models to include in the AB test; they must be attached to the use case.
- traffic_split: Fraction (0-1) of production traffic to route to the AB test. `traffic_split * 100`% of inference requests for the use case will be sent randomly to one of the models included in the AB test.
- feedback_type: What type of feedback to run the AB test on, metric or preference.
- auto_deploy: If set to `True`, when the AB test is completed, the winning model is automatically promoted to the use case's default model.
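The `traffic_split` semantics above can be sanity-checked with a minimal sketch. This is plain Python, not the SDK: a request enters the AB test when a uniform draw falls below the split fraction, so roughly `traffic_split * 100`% of requests are routed in.

```python
import random

def routes_to_ab_test(traffic_split: float, rng: random.Random) -> bool:
    """Return True if a single inference request should be routed to the AB test."""
    return rng.random() < traffic_split

# With traffic_split=0.2, roughly 20% of requests enter the AB test.
rng = random.Random(0)
hits = sum(routes_to_ab_test(0.2, rng) for _ in range(10_000))
share = hits / 10_000  # close to 0.2
```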
List the use case AB tests.
Arguments:
- active: Filter on active or inactive AB tests.
- status: Filter on one of the possible AB test statuses.
Create a chat completion.
Arguments:
- messages: Input messages, each dict with keys `role` and `content`.
- stream: If `True`, partial message deltas will be returned. When the stream is over, `chunk.choices` will be `None`.
- model: Target model key for inference. If `None`, requests will be routed to the use case's default model.
- stop: Sequences where the API will stop generating further tokens.
- max_tokens: Maximum number of tokens to generate.
- temperature: Sampling temperature.
- top_p: Threshold for top-p sampling.
- stream_include_usage: If set, an additional chunk will be streamed with the token usage statistics for the entire request.
- user: ID of the user making the request. If not `None`, will be logged as metadata for the request.
- ab_campaign: AB test key. If set, the request is guaranteed to count towards AB test results, regardless of the configured `traffic_split`.
- n: Number of chat completions to generate for each input message.
- labels: Key-value pairs of interaction labels.
Examples:
```python
# streaming chat request
stream_response = client.chat.create(
    model="model_key", messages=[{"role": "user", "content": "Hello from SDK"}], stream=True
)
print("Streaming response: ", end="", flush=True)
for chunk in stream_response:
    if chunk.choices:
        content = chunk.choices[0].delta.content
        print(content, end="", flush=True)
```
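When consuming a stream it is often handy to accumulate the deltas into the full reply. The helper below is an illustrative sketch written against plain objects, not the SDK; only the `chunk.choices[0].delta.content` shape is taken from the example above, and the final `choices=None` chunk marks the end of the stream.

```python
from types import SimpleNamespace

def collect_stream(chunks) -> str:
    """Concatenate the delta contents of streamed chat chunks into one string."""
    parts = []
    for chunk in chunks:
        if chunk.choices:  # the final chunk has choices=None and is skipped
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)

# mock chunks shaped like the streaming response above
def _chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

full = collect_stream([_chunk("Hello"), _chunk(" world"), SimpleNamespace(choices=None)])
```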
Resource to interact with compute pools.
Resource to interact with file datasets.
Upload a dataset from a file. The file must be JSONL; each line must match the structure in the example below.
Arguments:
- file_path: Path to jsonl file.
- dataset_key: New dataset key.
- name: Optional name to render in the UI; if `None`, defaults to the same as `dataset_key`.
Example:
{"messages": [{"role": "system", "content": "<optional system prompt>"}, {"role": "user", "content": "<user content>"}, {"role": "assistant", "content": "<assistant answer>"}], "completion": "hey"}
List previously uploaded datasets.
Resource to interact with embeddings.
Creates embeddings inference request.
Arguments:
- input: Input text to embed.
- model: Target model key for inference. If `None`, requests will be routed to the use case's default model. The request will error if the default model is not an embedding model.
- encoding_format: Encoding format of the response.
- user: ID of the user making the request. If not `None`, will be logged as metadata for the request.
Resource to interact with evaluation jobs.
Create a new evaluation job.
Arguments:
- data_config: Input data configuration.
- models: Models to evaluate.
- judge_model: Model key of judge.
- method: Eval method (built-in method, or custom eval).
- custom_eval_config: Configuration for custom eval. Only required if `method == "custom"`.
- name: Optional name for evaluation job.
Resource to interact with and log feedback.
Register a new feedback key. Feedback can be logged against this key once it is created.
Arguments:
- key: Feedback key.
- kind: Feedback kind. If `"bool"`, you can log values `0`, `1`, `True` or `False` only. If `"scalar"`, you can log any integer or float value.
- scoring_type: Indication of what "good" means for this feedback key: a higher numeric value (or `True`), or a lower numeric value (or `False`).
- name: Human-readable feedback name that will render in the UI. If `None`, will be the same as `key`.
- description: Description of the intended purpose or nuances of the feedback. Will render in the UI.
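The `kind` rules above can be expressed as a small client-side check. This is a hypothetical helper for illustration, not part of the SDK:

```python
def validate_feedback_value(kind: str, value) -> bool:
    """Check a feedback value against the rules for its feedback kind."""
    if kind == "bool":
        # True == 1 and False == 0 in Python, so this accepts 0, 1, True, False
        return value in (0, 1, True, False)
    if kind == "scalar":
        return isinstance(value, (int, float))
    raise ValueError(f"unknown feedback kind: {kind}")
```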
Get the details of a feedback key.
Arguments:
- feedback_key: The feedback key.
Link a feedback key to the client's use case. Once a feedback key is linked to a use case, its statistics and associations with interactions will render in the UI.
Arguments:
- feedback_key: The feedback key to be linked.
Unlink a feedback key from the client's use case.
Arguments:
- feedback_key: The feedback key to be unlinked.
Log metric feedback for a single completion. The value can be a float, int or bool, depending on the kind of the `feedback_key` it is logged against.
Arguments:
- value: The feedback value.
- completion_id: The completion_id to attach the feedback to.
- feedback_key: The feedback key to log against.
- user: ID of the user submitting feedback. If not `None`, will be logged as metadata for the request.
- details: Textual details for the feedback. Can be used to provide further context on the feedback `value`.
Log preference feedback between 2 completions.
Arguments:
- feedback_key: The feedback key to log against.
- preferred_completion: Can be a completion_id or a dict with keys `model` and `text`, corresponding to a valid model key and its attributed completion.
- other_completion: Can be a completion_id or a dict with keys `model` and `text`, corresponding to a valid model key and its attributed completion.
- user: ID of the user submitting feedback.
- prompt: Input text prompt. Ignored if `preferred_completion` and `other_completion` are completion_ids.
- messages: Input chat messages, each dict with keys `role` and `content`. Ignored if `preferred_completion` and `other_completion` are completion_ids.
- tied: Indicator that both completions tied as equally bad or equally good.
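Since `preferred_completion` and `other_completion` each accept two shapes, a completion_id string or a `{model, text}` dict, a hypothetical normalizer (not part of the SDK) makes the accepted inputs concrete:

```python
def normalize_completion(arg):
    """Classify a completion argument as a completion_id or a {model, text} pair."""
    if isinstance(arg, str):
        return {"completion_id": arg}
    if isinstance(arg, dict) and {"model", "text"} <= arg.keys():
        return {"model": arg["model"], "text": arg["text"]}
    raise TypeError("expected a completion_id or a dict with keys 'model' and 'text'")
```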
Resource to interact with interactions.
Create/log an interaction.
Arguments:
- model: Model key.
- completion: Model completion.
- prompt: Input text prompt.
- messages: Input chat messages, each dict with keys `role` and `content`.
- feedbacks: List of feedbacks, each dict with keys `feedback_key`, `value` and, optionally, `details`.
- user: ID of the user making the request. If not `None`, will be logged as metadata for the interaction.
- ab_campaign: AB test key. If set, the provided `feedbacks` will count towards AB test results.
- labels: Key-value pairs of interaction labels.
- created_at: Timestamp of interaction creation or ingestion.
List interactions in client's use case.
Arguments:
- order: Ordering of results.
- filters: List filters.
- page: Paging config.
- group_by: Retrieve interactions grouped by selected dimension.
Get the details for one specific interaction.
Arguments:
- completion_id: The ID of the completion.
Resource to interact with models.
Add a model from the HuggingFace Model Hub to the Adaptive model registry. It will take several minutes for the model to be downloaded and converted to Adaptive format.
Arguments:
- hf_model_id: The ID of the selected model repo on HuggingFace Model Hub.
- output_model_key: The key that will identify the new model in Adaptive.
- hf_token: Your HuggingFace token, needed to validate access to gated/restricted models.
Add proprietary external model to Adaptive model registry.
Arguments:
- name: Adaptive name for the new model.
- external_model_id: Should match the model id publicly shared by the model provider.
- api_key: API Key for authentication against external model provider.
- provider: External proprietary model provider.
List all models in Adaptive model registry.
Attach a model to the client's use case.
Arguments:
- model: Model key.
- wait: If the model is not already deployed, attaching it to the use case will automatically deploy it. If `True`, this call blocks until the model is `Online`.
- make_default: Make the model the use case's default on attachment.
Detach model from client's use case.
Arguments:
- model: Model key.
Update compute config of model.
Update config of model attached to client's use case.
Arguments:
- model: Model key.
- is_default: Change the selection of the model as default for the use case. `True` to promote to default, `False` to demote from default. If `None`, no changes are applied.
- attached: Whether the model should be attached to or detached from the use case. If `None`, no changes are applied.
- desired_online: Turn model inference on or off for the client's use case. This does not influence the global status of the model; it is bounded to the use case. If `None`, no changes are applied.
Resource to list permissions.
Resource to interact with external reward servers.
Resource to manage roles.
Creates new role.
Arguments:
- key: Role key.
- permissions: List of permission identifiers such as `use_case:read`. You can list all possible permissions with `client.permissions.list()`.
- name: Role name; if not provided, defaults to `key`.
Resource to manage teams.
Resource to interact with training jobs.
Create a new training job.
Arguments:
- model: Model to train.
- config: Training config.
- name: Name for training job.
Resource to interact with use cases.
Create new use case.
Arguments:
- key: Use case key.
- name: Human-readable use case name which will be rendered in the UI. If not set, will be the same as `key`.
- description: Description of the use case, which will be rendered in the UI.
Resource to manage users and permissions.
Update team and role for user.
Arguments:
- email: User email.
- team: Key of the team to which the user will be added.
- role: Assigned role.
Resource to interact with AB Tests.
Creates a new A/B test in the client's use case.
Arguments:
- ab_test_key: A unique key to identify the AB test.
- feedback_key: The feedback key against which the AB test will run.
- models: The models to include in the AB test; they must be attached to the use case.
- traffic_split: Fraction (0-1) of production traffic to route to the AB test. `traffic_split * 100`% of inference requests for the use case will be sent randomly to one of the models included in the AB test.
- feedback_type: What type of feedback to run the AB test on, metric (direct) or preference (comparison).
- auto_deploy: If set to `True`, when the AB test is completed, the winning model is automatically promoted to the use case's default model.
Cancel an ongoing AB test.
Arguments:
- key: The AB test key.
List the use case AB tests.
Arguments:
- active: Filter on active or inactive AB tests.
- status: Filter on one of the possible AB test statuses.
Create a chat completion.
Arguments:
- messages: Input messages, each dict with keys `role` and `content`.
- stream: If `True`, partial message deltas will be returned.
- model: Target model key for inference. If `None`, requests will be routed to the use case's default model.
- stop: Sequences where the API will stop generating further tokens.
- max_tokens: Maximum number of tokens to generate.
- temperature: Sampling temperature.
- top_p: Threshold for top-p sampling.
- stream_include_usage: If set, an additional chunk will be streamed with the token usage statistics for the entire request.
- user: ID of the user making the request. If not `None`, will be logged as metadata for the request.
- ab_campaign: AB test key. If set, the request is guaranteed to count towards AB test results, regardless of the configured `traffic_split`.
- n: Number of chat completions to generate for each input message.
- labels: Key-value pairs of interaction labels.
Examples:
```python
# async streaming chat request
async def async_chat_stream():
    stream_response = aclient.chat.create(
        model="model_key", messages=[{"role": "user", "content": "Hello from SDK"}], stream=True
    )
    print("Async chat streaming response: ", end="", flush=True)
    async for chunk in await stream_response:
        if chunk.choices:
            content = chunk.choices[0].delta.content
            print(content, end="", flush=True)
```
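The `async for` pattern above can be exercised without the SDK by wrapping mock chunks in an async generator. Only the chunk shape is taken from the example; everything else here is a stand-in:

```python
import asyncio
from types import SimpleNamespace

async def fake_stream(texts):
    """Yield mock chat chunks, ending with a final chunk whose choices is None."""
    for text in texts:
        yield SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])
    yield SimpleNamespace(choices=None)

async def collect(stream) -> str:
    """Accumulate streamed delta contents into the full reply."""
    parts = []
    async for chunk in stream:
        if chunk.choices:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)

result = asyncio.run(collect(fake_stream(["Hello", " async"])))
```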
Resource to interact with compute pools.
Upload a dataset from a file. The file must be JSONL; each line must match the structure in the example below.
Arguments:
- file_path: Path to jsonl file.
- dataset_key: New dataset key.
- name: Optional name to render in the UI; if `None`, defaults to the same as `dataset_key`.
Example:
{"messages": [{"role": "system", "content": "<optional system prompt>"}, {"role": "user", "content": "<user content>"}, {"role": "assistant", "content": "<assistant answer>"}], "completion": "hey"}
List previously uploaded datasets.
Resource to interact with embeddings.
Creates embeddings inference request.
Arguments:
- input: Input text to embed.
- model: Target model key for inference. If `None`, requests will be routed to the use case's default model. The request will error if the default model is not an embedding model.
- encoding_format: Encoding format of the response.
- user: ID of the user making the request. If not `None`, will be logged as metadata for the request.
Create a new evaluation job.
Arguments:
- data_config: Input data configuration.
- models: Models to evaluate.
- judge_model: Model key of judge.
- method: Eval method (built-in method, or custom eval).
- custom_eval_config: Configuration for custom eval. Only required if `method == "custom"`.
- name: Optional name for evaluation job.
Resource to interact with and log feedback.
Register a new feedback key. Feedback can be logged against this key once it is created.
Arguments:
- key: Feedback key.
- kind: Feedback kind. If `"bool"`, you can log values `0`, `1`, `True` or `False` only. If `"scalar"`, you can log any integer or float value.
- scoring_type: Indication of what "good" means for this feedback key: a higher numeric value (or `True`), or a lower numeric value (or `False`).
- name: Human-readable feedback name that will render in the UI. If `None`, will be the same as `key`.
- description: Description of the intended purpose or nuances of the feedback. Will render in the UI.
Link a feedback key to the client's use case. Once a feedback key is linked to a use case, its statistics and associations with interactions will render in the UI.
Arguments:
- feedback_key: The feedback key to be linked.
Unlink a feedback key from the client's use case.
Arguments:
- feedback_key: The feedback key to be unlinked.
Log metric feedback for a single completion. The value can be a float, int or bool, depending on the kind of the `feedback_key` it is logged against.
Arguments:
- value: The feedback value.
- completion_id: The completion_id to attach the feedback to.
- feedback_key: The feedback key to log against.
- user: ID of the user submitting feedback. If not `None`, will be logged as metadata for the request.
- details: Textual details for the feedback. Can be used to provide further context on the feedback `value`.
Log preference feedback between 2 completions.
Arguments:
- feedback_key: The feedback key to log against.
- preferred_completion: Can be a completion_id or a dict with keys `model` and `text`, corresponding to a valid model key and its attributed completion.
- other_completion: Can be a completion_id or a dict with keys `model` and `text`, corresponding to a valid model key and its attributed completion.
- user: ID of the user submitting feedback.
- prompt: Input text prompt. Ignored if `preferred_completion` and `other_completion` are completion_ids.
- messages: Input chat messages, each dict with keys `role` and `content`. Ignored if `preferred_completion` and `other_completion` are completion_ids.
- tied: Indicator that both completions tied as equally bad or equally good.
Resource to interact with interactions.
Create/log an interaction.
Arguments:
- model: Model key.
- completion: Model completion.
- prompt: Input text prompt.
- messages: Input chat messages, each dict with keys `role` and `content`.
- feedbacks: List of feedbacks, each dict with keys `feedback_key`, `value` and, optionally, `details`.
- user: ID of the user making the request. If not `None`, will be logged as metadata for the interaction.
- ab_campaign: AB test key. If set, the provided `feedbacks` will count towards AB test results.
- labels: Key-value pairs of interaction labels.
- created_at: Timestamp of interaction creation or ingestion.
List interactions in client's use case.
Arguments:
- order: Ordering of results.
- filters: List filters.
- page: Paging config.
- group_by: Retrieve interactions grouped by selected dimension.
Get the details for one specific interaction.
Arguments:
- completion_id: The ID of the completion.
Resource to interact with models.
Add a model from the HuggingFace Model Hub to the Adaptive model registry. It will take several minutes for the model to be downloaded and converted to Adaptive format.
Arguments:
- hf_model_id: The ID of the selected model repo on HuggingFace Model Hub.
- output_model_key: The key that will identify the new model in Adaptive.
- hf_token: Your HuggingFace token, needed to validate access to gated/restricted models.
Add proprietary external model to Adaptive model registry.
Arguments:
- name: Adaptive name for the new model.
- external_model_id: Should match the model id publicly shared by the model provider.
- api_key: API Key for authentication against external model provider.
- provider: External proprietary model provider.
List all models in Adaptive model registry.
Attach a model to the client's use case.
Arguments:
- model: Model key.
- wait: If the model is not already deployed, attaching it to the use case will automatically deploy it. If `True`, this call blocks until the model is `Online`.
- make_default: Make the model the use case's default on attachment.
Detach model from client's use case.
Arguments:
- model: Model key.
Update compute config of model.
Update config of model attached to client's use case.
Arguments:
- model: Model key.
- is_default: Change the selection of the model as default for the use case. `True` to promote to default, `False` to demote from default. If `None`, no changes are applied.
- attached: Whether the model should be attached to or detached from the use case. If `None`, no changes are applied.
- desired_online: Turn model inference on or off for the client's use case. This does not influence the global status of the model; it is bounded to the use case. If `None`, no changes are applied.
Resource to list permissions.
Async resource to interact with external reward servers.
Resource to manage roles.
Creates new role.
Arguments:
- key: Role key.
- permissions: List of permission identifiers such as `use_case:read`. You can list all possible permissions with `client.permissions.list()`.
- name: Role name; if not provided, defaults to `key`.
Resource to manage teams.
Resource to interact with training jobs.
Create a new training job.
Arguments:
- model: Model to train.
- config: Training config.
- name: Name for training job.
Get details for training job.
Resource to interact with use cases.
Create new use case.
Arguments:
- key: Use case key.
- name: Human-readable use case name which will be rendered in the UI. If not set, will be the same as `key`.
- description: Description of the use case, which will be rendered in the UI.
Get details for the client's use case.
Resource to manage users and permissions.
Update team and role for user.
Arguments:
- email: User email.
- team: Key of the team to which the user will be added.
- role: Assigned role.