OpenAIClient
| This component is currently in the incubation phase. Although it is available for use, it is under active development and may be subject to changes. We welcome feedback and encourage users to explore its capabilities. |
Short description
The OpenAIClient component lets you compose and send queries to OpenAI’s GPT language models and process the responses.
You can define control logic that either refines your query based on GPT’s response or moves on to the next record. This is useful when you are not happy with GPT’s response and want to keep "chatting about the same input record" until you are satisfied with the result.
Warning Make sure the queries you generate and send to OpenAI contain only data you are willing to share with their service, and that you conform to your data security, privacy, and governance standards.
| Same input metadata | Sorted inputs | Inputs | Outputs | Each to all outputs | Java | CTL | Auto-propagated metadata |
|---|---|---|---|---|---|---|---|
| - | ⨯ | 1 | 1 | ⨯ | ⨯ | ✓ | ✓ |
Ports
| Port type | Number | Required | Description | Metadata |
|---|---|---|---|---|
| Input | 1 | ✓ | Input prompts to a GPT model | Any |
| Output | 1 | ✓ | Generated responses | At least one |
Metadata
OpenAIClient propagates input metadata to output.
OpenAIClient attributes
| Attribute | Req | Description | Possible values |
|---|---|---|---|
| Basic ||||
| API key | yes | The key used to authenticate requests to OpenAI’s API. You can get it from your OpenAI dashboard at https://platform.openai.com. | |
| Model | no | The OpenAI model ID to use. Supported models are listed at https://platform.openai.com/docs/models. | gpt-4o-mini (default) |
| System message URL | no | A URL or relative path to a .txt file containing a system message. Use it to load longer system prompts from external files. | |
| System message | no | An instruction message sent to the GPT model as context. It can help shape the tone or style of the response. | |
| Query and response processor | yes | A CTL transformation script that fully controls the GPT interaction workflow (input → query → response → output). See CTL interface. | |
| Output field | yes | The name of the field in the output record where the GPT response is stored. | |
| Temperature | no | Controls the creativity or randomness of the model’s output. Lower values make the output more deterministic, higher values more random. | 0.0–2.0 |
| Error handling ||||
| Request timeout | | How long the component waits for a response. If no response is received within the specified limit, the execution of the component fails. The value is in milliseconds; other time units can be used. See Time intervals. | 1 minute (default) |
| Retry count | | The number of times the component attempts to re-send a failed request. A failure occurs when the component encounters an error while processing the request or response. | 0 (default) |
| Retry delay | | The time intervals between retry attempts, given as a comma-separated list of integers, where each integer is the delay in seconds for the corresponding retry. | 0 (default) |
| Rate limit: time interval | | The time period (seconds, minutes, hours, days) to which the Rate limit: max requests property applies. When a limit is set and reached, the component waits until the defined time interval passes before sending further requests. | 1s (default), 1m, 1h, 1d |
| Rate limit: max requests | | Limits the total number of requests per time period defined by the Rate limit: time interval property. By default, the number of requests is unlimited. | |
CTL interface
OpenAIClient requires a CTL transformation (named Query and response processor).
Its function newChat() is called once for each input record. Then the functions prepareQuery() and processResponse() are called repeatedly for that record until the assistant response is either accepted or skipped, or the processing is stopped.
CTL template
| CTL Template Functions | |
|---|---|
| boolean newChat() | |
| Required | Yes |
| Description | Prepares the component for a new input record. The return value signifies whether the chat history (messages from previous input records) shall be preserved or reset; a reset clears the history but keeps the component-level system message. |
| Invocation | Called once for each input record. |
| Returns | |
| string prepareQuery(list[list[string]] chatContext, integer iterationIndex) | |
| Required | Yes |
| Description | Prepares a new query. The returned string is added to the chat context as a user message. |
| Invocation | Called repeatedly for each input record, after newChat(). |
| Input Parameters | |
| Returns | |
| Example | |
| integer processResponse(list[list[string]] chatContext, integer iterationIndex, string assistantResponse) | |
| Required | Yes |
| Description | Processes the assistant response: writes it to the output, repeats the query with additional instructions, or ignores it. |
| Invocation | Called repeatedly for each input record when an assistant response is received. |
| Input Parameters | |
| Returns | |
| Example | |
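The workflow above can be sketched as a minimal Query and response processor. This is an illustrative sketch, not a canonical implementation: the input field name (`text`), the prompts, and the integer values returned by `processResponse()` are hypothetical assumptions; consult the Returns section of each function for the actual constants that accept, retry, or skip a response.

```
//#CTL2

// Called once per input record. Returning false resets the chat
// history for every record (the component-level system message is kept).
function boolean newChat() {
    return false;
}

// Builds the user message for each iteration. The input field
// "text" is a hypothetical example.
function string prepareQuery(list[list[string]] chatContext, integer iterationIndex) {
    if (iterationIndex == 0) {
        return "Summarize the following text in one sentence: " + $in.0.text;
    }
    // Follow-up query refining the previous answer.
    return "Shorten the summary to at most 15 words.";
}

// Decides what to do with the assistant response. The return values
// used here (0 = accept, 1 = continue chatting) are placeholders;
// use the constants documented for this function.
function integer processResponse(list[list[string]] chatContext, integer iterationIndex, string assistantResponse) {
    if (length(assistantResponse) <= 120 || iterationIndex >= 2) {
        return 0; // accept the response and write it to the output field
    }
    return 1; // ask a refined follow-up question about the same record
}
```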
Compatibility
| Version | Compatibility notice |
|---|---|
| 7.1.0 | OpenAIClient is available since CloverDX version 7.1. |