
##### Adding Tools from MCP Servers

Besides microflow tools, tools exposed by MCP servers are also supported. To add MCP tools to an agent version, select an MCP server configuration from the [MCP client module](/appstore/modules/genai/mcp-modules/mcp-client/). You can then choose one of two ways to add MCP tools:

* Use all available tools: imports the entire server, including all tools it provides. This gives you less control over individual tools, and any tools added to the server in the future are automatically included when the agent is executed.
* Select Tools: lets you import specific tools from the server and change specific fields for individual tools.

#### Adding Knowledge Bases


To allow an agent to perform semantic searches, add the knowledge base to the agent definition and configure the retrieval parameters, such as the number of chunks to retrieve and the similarity threshold. Multiple knowledge bases can be added to the agent to pick from. Give each knowledge base a name and a description (in natural language) so that the model can decide which retrievals are necessary based on the input it gets.

Note that [user access approval](#enum-useraccessapproval) is always set to `VisibleForUser` for knowledge base retrievals.
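
Mendix configures this retrieval visually, but the effect of the two parameters can be sketched in plain Python (the names `max_chunks` and `min_similarity` are illustrative, not the module's actual attribute names):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    similarity: float  # similarity score to the query, 0.0..1.0

def retrieve(chunks, max_chunks, min_similarity):
    """Keep only chunks above the similarity threshold,
    then return the top-scoring ones up to max_chunks."""
    eligible = [c for c in chunks if c.similarity >= min_similarity]
    eligible.sort(key=lambda c: c.similarity, reverse=True)
    return eligible[:max_chunks]

# Example: the threshold filters out the weak match, max_chunks caps the rest.
chunks = [Chunk("pricing FAQ", 0.91), Chunk("release notes", 0.42),
          Chunk("refund policy", 0.78), Chunk("onboarding guide", 0.73)]
top = retrieve(chunks, max_chunks=2, min_similarity=0.5)
print([c.text for c in top])  # → ['pricing FAQ', 'refund policy']
```

A higher threshold trades recall for precision; a lower `max_chunks` keeps the prompt (and token cost) small.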

#### Testing and Refining the Agent

While writing the system prompt (for both conversational and single-call types) or the user prompt (only for the single-call type), the prompt engineer can include variables by enclosing them in double braces, for example, `{{variable}}`. The actual values of these placeholders are typically known at runtime based on the user's page context.
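
The `{{variable}}` placeholders behave like simple template substitution. A minimal sketch in Python, assuming regex-based replacement (the module's actual implementation is internal to the platform, and the variable names are made up for illustration):

```python
import re

def fill_prompt(template: str, context: dict) -> str:
    """Replace {{variable}} placeholders with values from the context object.
    Unknown placeholders are left untouched."""
    def lookup(match):
        name = match.group(1)
        return str(context.get(name, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", lookup, template)

prompt = "Summarize ticket {{TicketId}} for customer {{CustomerName}}."
print(fill_prompt(prompt, {"TicketId": 4711, "CustomerName": "Acme"}))
# → Summarize ticket 4711 for customer Acme.
```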
For most use cases, a `Call Agent` microflow activity can be used.

##### Call Agent with History {#call-agent-with-history}

This action uses all defined settings, including the selected model, system prompt, tools, knowledge bases, and model parameters, to call the agent with the specified `Request` and execute a `Chat Completions` operation. If the passed `Request` object already contains a system prompt or a value for the temperature, top P, or max tokens parameters, those values have priority and are not overwritten by the agent configuration. If a context entity is configured, the corresponding context object must be passed so that variables in the system prompt can be replaced. The operation returns a `Response` object containing the assistant's final message, consistent with the chat completions operations from GenAI Commons, unless tool calls are requested by the model.
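
The precedence rule, where values already present on the `Request` win over the agent configuration, can be sketched as follows (Python, with simplified stand-ins for the actual Mendix entities and attributes):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    system_prompt: Optional[str] = None
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    max_tokens: Optional[int] = None

def apply_agent_defaults(request: Request, agent: Request) -> Request:
    """Fill only the fields the caller left empty; values already set
    on the Request have priority over the agent configuration."""
    for field in ("system_prompt", "temperature", "top_p", "max_tokens"):
        if getattr(request, field) is None:
            setattr(request, field, getattr(agent, field))
    return request

agent_config = Request(system_prompt="You are a support agent.", temperature=0.2)
req = Request(temperature=0.9)  # caller-supplied temperature wins
merged = apply_agent_defaults(req, agent_config)
print(merged.temperature, merged.system_prompt)
# → 0.9 You are a support agent.
```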

To use it:

The following operations are used in a (custom) action microflow:
* `Get Current User Prompt` gets the current user prompt. It can be used in the [action microflow](#action-microflow) because the `CurrentUserPrompt` from the chat context is no longer available.
* `Update Assistant Response` processes the response of the model and adds the new message and any sources to the UI. This is typically one of the last steps of the logic in an [action microflow](#action-microflow). It only needs to be included at the end of the happy flow of an action microflow. Make sure to pass the response object.

##### Using Tool or Knowledge Base Calling {#action-microflow-tool-calling}

Since version 6.0.0, the module stores tool-calling messages persistently in the database and sends them along with subsequent chat messages. This makes the model aware of previously called tools (and their results). Additionally, if a tool is visible to the user or needs user confirmation before execution, the `ToolMessage` entity is used to display it. Note that this may increase token consumption, as all information sent to an LLM usually counts as input tokens.

This changes how action microflows are used, because they are called each time a tool is invoked and the UI changes for the user, for example to display a tool call or to wait for a user decision on whether a tool may be executed. Logic that only needs to happen right after the user sends their message (preprocessing) or after the final assistant message is returned (postprocessing) should therefore be executed only in those cases.
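
This guard logic can be sketched like so (Python pseudocode; in a real app this lives in microflow logic, and the stubbed model plus the `round` counter are illustrative assumptions, not part of the module):

```python
log = []

def preprocess(state):
    log.append("pre")   # e.g. validate or enrich the user's message

def postprocess(state):
    log.append("post")  # e.g. store usage metrics for the final answer

def call_model(state):
    # Stub: the first round returns a tool call, the second the final answer.
    return {"final": state["round"] >= 1}

def action_step(state):
    """Runs once per model round-trip, like the action microflow."""
    if state["round"] == 0:
        preprocess(state)        # only right after the user's message
    response = call_model(state)
    if response["final"]:
        postprocess(state)       # only after the final assistant message
    state["round"] += 1
    return response

state = {"round": 0}
while not action_step(state)["final"]:
    pass
print(log)  # → ['pre', 'post']
```

Without the guards, `preprocess` and `postprocess` would run on every tool round-trip instead of once per user turn.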

If no [user visibility](#enum-useraccessapproval) is configured for tools and you do not want to store tool messages (and therefore want to retain the behavior from versions before 6.0.0), set the boolean `SaveToolCallHistory` to false on the [Request](/appstore/modules/genai/genai-for-mx/commons/#request). Note that [knowledge base retrievals](/appstore/modules/genai/genai-for-mx/commons/#add-knowledge-base-to-request) are `VisibleForUser` by default.

### Human in the Loop {#human-in-the-loop}

Human in the loop describes a pattern where the AI can perform powerful tasks but still requires humans to make certain decisions and oversee the agent's behavior. When using the [Function Calling](/appstore/modules/genai/function-calling/) pattern by adding tools to the request, you can control when those tools get executed, and whether they are visible to the user, by setting the [user access approval](#enum-useraccessapproval) per tool. When you use the ConversationalUI module with its basic action microflow pattern to execute requests with history and its UI snippets to display the chat, human in the loop works out of the box. Note that action microflows are called until there is a final assistant response, as described [above](#action-microflow-tool-calling), even if all tools are executed without user interaction.

If you are not using the ConversationalUI module for [chat with history executions](/appstore/modules/genai/genai-for-mx/commons/#chat-completions-with-history), or your use case does not contain a chat history but is [task-focused (without history)](/appstore/modules/genai/genai-for-mx/commons/#chat-completions-without-history), you need to implement the following actions:

1. Store the tool calls from the returned [Response](/appstore/modules/genai/genai-for-mx/commons/#response) in your database. You can either use your own entities or reuse `ToolMessage` from ConversationalUI. The microflow `Response_CreateOrUpdateMessage` updates or creates a `Message` object with its corresponding tool messages, based on the response from the LLM.
2. If `UserConfirmationRequired` was enabled for a tool in the [user access approval](/appstore/modules/genai/genai-for-mx/commons/#enum-useraccessapproval) setting, you can use the tool messages to display the information and wait for the user to decide. The `pending` status of a tool message indicates that a user needs to take action. The `ToolMessage_UserConfirmation_Example` page shows an example as a popup; you can duplicate the page and modify it to your needs. The buttons for confirmation or rejection should call the whole action again.
3. The content of the tool messages needs to be added to the request. [Add a message](/appstore/modules/genai/genai-for-mx/commons/#chat-add-message-to-request) with role `assistant` that contains the tool call information, and messages with role `tool` for the tool results. You can use the `Request_AddMessage_ToolMessages` microflow, which takes care of this; pass it the same message from step 1.
4. Call the chat completions action again. Be aware that the response might contain new tool calls instead of the final message, so you may need to follow the steps above again. A recursive loop might be helpful.
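
The steps above form a loop that repeats until the model returns a final message. A minimal sketch, assuming a generic chat-completions interface with a stubbed model and stubbed tool execution (the message shapes and the `needs_confirmation` flag are illustrative assumptions, not the module's API):

```python
def call_agent(messages, call_model, execute_tool, confirm):
    """Repeat the chat completion until no tool calls remain (step 4)."""
    while True:
        response = call_model(messages)
        tool_calls = response.get("tool_calls", [])
        if not tool_calls:
            return response["content"]  # final assistant message
        # Step 1: store the assistant's tool calls with the conversation.
        messages.append({"role": "assistant", "tool_calls": tool_calls})
        for call in tool_calls:
            # Step 2: wait for the user's decision if confirmation is required.
            if call.get("needs_confirmation") and not confirm(call):
                result = "Tool call rejected by the user."
            else:
                result = execute_tool(call)
            # Step 3: add the tool result with role `tool` to the request.
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})

# Stubbed model: requests one tool call, then answers once a result exists.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "get_weather",
                                "needs_confirmation": True}]}
    return {"content": "It is sunny.", "tool_calls": []}

answer = call_agent([{"role": "user", "content": "Weather?"}],
                    fake_model, lambda call: "sunny", lambda call: True)
print(answer)  # → It is sunny.
```

In a real app, the wait at step 2 would suspend until the user clicks confirm or reject, rather than calling a synchronous `confirm` function.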

For an example of a task-based (without history) use case, review the function calling example in the [GenAI Showcase App](https://marketplace.mendix.com/link/component/220475), especially the microflows `Task_ProcessWithFunctionCalling` and `Task_CallWithoutHistory`.


### Customizing Styling {#customize-styling}

The ConversationalUI module comes with stylesheets that are intended to work on top of Atlas Core. You can use variables and custom classes to modify the default rendering, such as colors, sizes, and positions. To learn more about customizing styling in a Mendix app in general and targeting elements using SCSS selectors, refer to the [how-to](/howto/front-end/customize-styling-new/#add-custom-styling) page.