AIClient
This component is currently in the incubation phase. Although it is available for use, it is under active development and may be subject to changes. We welcome feedback and encourage users to explore its capabilities.
Short description
The AIClient component lets you compose and send queries to various online language models and process the responses.
You can define control logic which either refines your query based on the assistant’s response or moves on to the next record. This is useful when you are not happy with the assistant’s response and want to keep “chatting” about the same input record until you are satisfied with the result.
Warning: Make sure the queries you generate and send to the assistant contain only data you are willing to share with the provider’s service, and that you conform to your data security, privacy, and governance standards.
| Same input metadata | Sorted inputs | Inputs | Outputs | Each to all outputs | Java | CTL | Auto-propagated metadata |
|---|---|---|---|---|---|---|---|
| - | ⨯ | 1 | 1 | ⨯ | ⨯ | ✓ | ✓ |
Ports
| Port type | Number | Required | Description | Metadata |
|---|---|---|---|---|
| Input | 1 | ✓ | Input prompts for the assistant | Any |
| Output | 1 | ✓ | Generated responses | At least one |
Metadata
AIClient propagates input metadata to output.
AIClient attributes
| Attribute | Req | Description | Possible values |
|---|---|---|---|
| Basic | | | |
| Connection | yes | Configuration of the AI provider, usually set via API key, model name, and temperature. See Connection. | |
| System message URL | no | A URL or relative path to a .txt file containing a system message. Used to load longer system prompts from external files. | |
| System message | no | An instruction message that is sent to the model as context. Can help shape the tone or style of the response. | |
| Query and response processor | yes | CTL transformation script that fully controls the interaction workflow (input → query → response → output). See CTL interface. | |
| Output field | yes | The name of the field in the output record where the assistant’s response will be stored. | |
| Error handling | | | |
| Request timeout | | How long the component waits for a response. If no response is received within the specified limit, the execution of the component fails. The default request timeout is one minute. The value is in milliseconds; different time units can be used. See Time intervals. | 1 minute (default) |
| Retry count | | The number of times the component attempts to re-send a failed request. A failure occurs when the component encounters an error while processing the request or response. If Query and response processor specifies the sendRequestOnError() function, it is called only after all retries have failed. | 0 (default) |
| Retry delay | | The time intervals between retry attempts, given as a comma-separated list of integers, where each integer represents the delay in seconds before the corresponding retry. For example, a delay of 5,10,20 waits 5 seconds before the first retry, 10 seconds before the second, and 20 seconds before the third. | 0 (default) |
| Rate limit: time interval | | The time period (seconds, minutes, hours, days) over which the number of requests defined by the Rate limit: max requests property is limited. When a limit is set and reached, the component waits until the defined time interval passes before sending further requests. | 1s (default), 1m, 1h, 1d |
| Rate limit: max requests | | Limits the total number of requests per time period defined by the Rate limit: time interval property. By default, the number of requests is unlimited. When a limit is set and reached, the component waits until the defined time interval passes before sending further requests. | |
Connection
Connection represents the provider-specific configuration of the connection to the assistant:

- Anthropic (https://www.anthropic.com/) offers Claude models, focusing on AI safety and alignment. It is configured via API key, model name, and temperature.
- Azure OpenAI (https://azure.microsoft.com/en-us/products/ai-foundry/models/openai) provides OpenAI models via Microsoft Azure AI Foundry with enterprise security and compliance features. It is configured via API key, endpoint, deployment name, and temperature.
- Google Gemini (https://gemini.google.com/) supplies text generation and reasoning models integrated with Google Cloud services. It is configured via API key, model name, and temperature.
- OpenAI (https://openai.com/api/) delivers GPT models for text generation, coding, and reasoning. It is configured via API key, model name, and temperature.
The model name is represented by a combo box listing the models available when CloverDX was released; however, you can enter a different model manually.
The temperature controls the creativity or randomness of the model’s output. Lower values make the output more deterministic; higher values make it more random. The lowest possible value is 0.0; the upper bound is model-dependent, usually 1.0 or 2.0. Some models might not support temperature at all.
The baseUrl is used only for OpenAI and defines the base URL for API requests. The default value is https://api.openai.com/v1.
CTL interface
AIClient requires a CTL transformation (named Query and response processor).
Its newChat() function is called once for each input record. Subsequently, the functions prepareQuery() and processResponse() are called repeatedly for that record until the assistant’s response is either accepted or skipped, or the processing is stopped.
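The following skeleton illustrates how the three required functions cooperate. It is a minimal sketch only: the input field name (prompt), the choice to reset the chat history by returning false from newChat(), and the integer returned from processResponse() (used here as an assumed “accept” code) are illustrative assumptions rather than part of the documented contract; see the CTL template below for the exact signatures.

```
//#CTL2

// Minimal sketch of a Query and response processor.
// Assumptions (not part of the documented contract):
//   - the input metadata contains a string field named "prompt"
//   - returning false from newChat() resets the chat history
//   - returning 0 from processResponse() accepts the response

function boolean newChat() {
	// Start every input record with a fresh chat history
	// (the component-level system message is kept).
	return false;
}

function string prepareQuery(list[ChatMessage] chatContext, integer iterationIndex) {
	// Send the value of the (assumed) "prompt" field as the user message.
	return $in.0.prompt;
}

function integer processResponse(list[ChatMessage] chatContext, integer iterationIndex, string assistantResponse) {
	// Accept the first response; the component writes it to the field
	// configured in the Output field attribute.
	return 0; // assumed "accept" code
}
```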
CTL template
| CTL Template Functions | |
|---|---|
| boolean newChat() | |
| Required | Yes |
| Description | Prepares the component for a new input record. The return value signifies whether the chat history (messages for previous input records) shall be preserved or reset; a reset clears the history but keeps the component-level system message. |
| Invocation | Called once for each input record. |
| Returns | |
| boolean newChatOnError(string errorMessage, string stackTrace) | |
| Required | No |
| Description | An optional fallback function to handle possible errors in newChat(). |
| Invocation | Called if newChat() throws an exception. |
| Input Parameters | errorMessage: the error message; stackTrace: the stack trace of the error |
| Returns | |
| Example | |
| string prepareQuery(list[ChatMessage] chatContext, integer iterationIndex) | |
| Required | Yes |
| Description | Prepares a new query. The returned string is added to the chat context as a user message. |
| Invocation | Called repeatedly for each input record, after newChat(). |
| Input Parameters | chatContext: the messages exchanged so far in the current chat; iterationIndex: the index of the current query/response iteration for the input record |
| Returns | |
| Example | |
| boolean prepareQueryOnError(string errorMessage, string stackTrace, list[ChatMessage] chatContext, integer iterationIndex) | |
| Required | No |
| Description | An optional fallback function to handle possible errors in prepareQuery(). |
| Invocation | Called if prepareQuery() throws an exception. |
| Input Parameters | errorMessage: the error message; stackTrace: the stack trace of the error; chatContext: the messages exchanged so far in the current chat; iterationIndex: the index of the current query/response iteration for the input record |
| Returns | |
| Example | |
| boolean sendRequestOnError(string errorMessage, string stackTrace, list[ChatMessage] chatContext, integer iterationIndex) | |
| Required | No |
| Description | An optional function to handle possible errors in communication with the assistant. |
| Invocation | Called if an exception is thrown during communication with the assistant. If the Retry count attribute is specified, this function is called only after all retries have failed, using the last exception. |
| Input Parameters | errorMessage: the error message; stackTrace: the stack trace of the error; chatContext: the messages exchanged so far in the current chat; iterationIndex: the index of the current query/response iteration for the input record |
| Returns | |
| Example | |
| integer processResponse(list[ChatMessage] chatContext, integer iterationIndex, string assistantResponse) | |
| Required | Yes |
| Description | Processes the assistant’s response: writes it to the output, repeats the query with additional instructions, or ignores it. |
| Invocation | Called repeatedly for each input record when an assistant response is received, as long as its return value requests another query. |
| Input Parameters | chatContext: the messages exchanged so far in the current chat; iterationIndex: the index of the current query/response iteration for the input record; assistantResponse: the text of the assistant’s response |
| Returns | |
| Example | |
| boolean processResponseOnError(string errorMessage, string stackTrace, list[ChatMessage] chatContext, integer iterationIndex) | |
| Required | No |
| Description | An optional fallback function to handle possible errors in processResponse(). |
| Invocation | Called if processResponse() throws an exception. |
| Input Parameters | errorMessage: the error message; stackTrace: the stack trace of the error; chatContext: the messages exchanged so far in the current chat; iterationIndex: the index of the current query/response iteration for the input record |
| Returns | |
| Example | |
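The control logic described in the Short description, where the query is refined until you are satisfied with the response, could look like the following sketch. It extends the minimal skeleton above; the field name question, the length threshold, and the integer codes returned from processResponse() (0 assumed to accept the response, 1 assumed to repeat the query) are illustrative assumptions, not documented constants.

```
//#CTL2

// Sketch of a refinement loop: if the response is too long, ask the
// assistant to shorten it; otherwise accept it.
// Assumptions (not part of the documented contract):
//   - the input metadata contains a string field named "question"
//   - returning false from newChat() resets the chat history
//   - iterationIndex is zero-based
//   - 0 = accept the response, 1 = repeat the query (placeholder codes)

function boolean newChat() {
	return false; // assumed: start each record with a clean history
}

function string prepareQuery(list[ChatMessage] chatContext, integer iterationIndex) {
	if (iterationIndex == 0) {
		// First attempt: send the original prompt.
		return $in.0.question;
	}
	// Later attempts: refine the previous answer.
	return "Please shorten the previous answer to at most three sentences.";
}

function integer processResponse(list[ChatMessage] chatContext, integer iterationIndex, string assistantResponse) {
	if (length(assistantResponse) > 1000 && iterationIndex < 3) {
		return 1; // assumed "repeat" code: prepareQuery() is called again
	}
	return 0; // assumed "accept" code: the response is written to the Output field
}
```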
Compatibility
| Version | Compatibility notice |
|---|---|
| 7.1.0 | AIClient was introduced in CloverDX version 7.1 as OpenAIClient; it only supported OpenAI. |
| 7.3.0 | The component was renamed to AIClient and gained support for additional providers: Anthropic (Claude models), Azure OpenAI, and Google Gemini. |