
DominoIQ & LotusScript

by | 28 Aug 25 | Applications, Domino, Domino and Notes development

Now let’s see how LotusScript interacts with DominoIQ and therefore with an LLM model.

Click here for the Italian version.

In a previous article I described how I set up a test environment for DominoIQ. Speed is not exactly its strong point, but it is functional, and I was able to run some tests using the two new LotusScript classes that come with Domino version 14.5:

  • NotesLLMRequest
  • NotesLLMResponse
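In a nutshell, a request object is created from the current session, and its Completion call hands back a response object. The short preview below uses placeholder variables (server, command, text); the real calls are shown later in the post:

Dim llmreq As NotesLLMRequest
Dim llmres As NotesLLMResponse

Set llmreq = session.CreateLLMRequest()                ' session is a NotesSession
Set llmres = llmreq.Completion(server, command, text)  ' placeholders, see the real call below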

However, before describing these two classes, it is important to understand the mechanism that links the available commands and prompts in the DominoIQ configuration database (dominoiq.nsf).
In this database, a command is essentially a keyword (which is then used from LotusScript, as we will see) that carries the parameters used to talk to the AI engine.
The most important of these parameters is the prompt to be used, which must already exist. You can also specify which LLM model to use (if you have installed more than one), the maximum number of tokens, and the temperature.
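To make the idea concrete, here is a purely hypothetical sketch that reads such a command document with standard LotusScript; the view name and the field names (Prompt, Model, MaxTokens, Temperature) are assumptions for illustration only, not the actual design of dominoiq.nsf:

' Illustration only: view and field names are assumptions, not the real dominoiq.nsf design
Dim session As New NotesSession
Dim iqDb As NotesDatabase
Dim cmdView As NotesView
Dim cmdDoc As NotesDocument

Set iqDb = session.GetDatabase("server/eldeng/it", "dominoiq.nsf", False)
Set cmdView = iqDb.GetView("Commands")                          ' hypothetical view name
Set cmdDoc = cmdView.GetDocumentByKey("RispondiItaliano", True) ' hypothetical command keyword
If Not (cmdDoc Is Nothing) Then
    Print "Prompt:      " & cmdDoc.GetItemValue("Prompt")(0)
    Print "Model:       " & cmdDoc.GetItemValue("Model")(0)
    Print "Max tokens:  " & cmdDoc.GetItemValue("MaxTokens")(0)
    Print "Temperature: " & cmdDoc.GetItemValue("Temperature")(0)
End If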

[Image: the command configuration in dominoiq.nsf]

As I mentioned, the prompt to be sent to the AI engine must already be present, and this is probably the most important part: we all know that the accuracy of a prompt has a major effect on the final result. As you can see from the image above, the command requires the use of the prompt “PromptRispondiItaliano,” which is defined as follows:

[Image: the "PromptRispondiItaliano" prompt definition]

At this point it is clear that we can create as many Command/Prompt pairs as we need.
What is the advantage of this kind of configuration? Starting from the same prompt sent to the AI, I can define multiple commands that, for example, use different LLMs depending on my needs or preferences. Or, keeping the same model, I can use different temperatures, which then influence the result.
So I can have these two commands:

[Image: two commands pointing to the same prompt]

They point to the same prompt but use different models.

Let’s now see how to use them with LotusScript.

In this example, I have created an IQ button within the Memo form in my mail.
This button performs two actions:

  • it queries the DominoIQ configuration database (dominoiq.nsf) to obtain the list of available commands and offers them to the user for selection
  • it sends the request to the AI engine using the selected command and the text of the email.

To query the configuration database, I use this sequence:

Set llmreq = session.CreateLLMRequest()
availCmds = llmreq.GetAvailableCommands("server/eldeng/it")
response = workspace.Prompt(PROMPT_OKCANCELLIST, "Seleziona un comando", "Selezione", availCmds(0), availCmds)

That is, I instantiate an LLM request and then call the .GetAvailableCommands method with my server name as the argument. The list of commands is stored in the availCmds array and is then presented to the user with workspace.Prompt.

[Image: the command selection dialog]
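One practical note: if the user cancels the dialog, response will not contain a usable command name, so it is worth bailing out before calling the AI engine. The exact return value on Cancel depends on the prompt style, so the sketch below is deliberately defensive:

' Defensive guard (sketch): continue only if Prompt returned a non-empty command name
If TypeName(response) <> "STRING" Then Exit Sub
If response = "" Then Exit Sub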
At this point, I need to retrieve the text of the email with

Set mailDoc = workspace.CurrentDocument
Set body = mailDoc.Document.GetFirstItem("Body")
sMailThread = body.Text   ' plain text of the memo body, passed to the Completion call below

And then pass everything to the AI engine and display the result in a message box with

Set llmres = llmreq.Completion("server/eldeng/it", response, sMailThread)
If (llmres.FinishReason = LLM_FINISH_REASON_STOP) Then
  Messagebox llmres.Content
End If

which displays this response:

[Image: the AI engine's reply]
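Putting the pieces together, the whole IQ button could look roughly like the sketch below. The declarations, the cancel guard, and the sMailThread assignment are my additions around the snippets shown above, so treat it as a starting point rather than finished code. Note that the PROMPT_OKCANCELLIST constant requires %Include "lsconst.lss" in the (Declarations) section.

Sub Click(Source As Button)
    Dim session As New NotesSession
    Dim workspace As New NotesUIWorkspace
    Dim llmreq As NotesLLMRequest
    Dim llmres As NotesLLMResponse
    Dim mailDoc As NotesUIDocument
    Dim body As NotesItem
    Dim availCmds As Variant
    Dim response As Variant
    Dim sMailThread As String

    ' 1. Ask DominoIQ which commands are configured on the server
    Set llmreq = session.CreateLLMRequest()
    availCmds = llmreq.GetAvailableCommands("server/eldeng/it")
    response = workspace.Prompt(PROMPT_OKCANCELLIST, "Seleziona un comando", "Selezione", availCmds(0), availCmds)
    If TypeName(response) <> "STRING" Then Exit Sub   ' user cancelled the picker

    ' 2. Get the body text of the memo that is open in the UI
    Set mailDoc = workspace.CurrentDocument
    Set body = mailDoc.Document.GetFirstItem("Body")
    sMailThread = body.Text

    ' 3. Send the selected command and the mail text to the AI engine and show the answer
    Set llmres = llmreq.Completion("server/eldeng/it", response, sMailThread)
    If llmres.FinishReason = LLM_FINISH_REASON_STOP Then
        Messagebox llmres.Content
    End If
End Sub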

At this point, we have seen how to interact from Notes to Domino and then to an LLM model. What is the limitation?

So far, we can pass various kinds of questions to the AI engine, but all the answers will depend on the LLM model we are using and, above all, on how and when it was trained. Since we are using an engine and model external to Domino, there is no way to get answers based on the data in the various Domino databases on the server: the LLM knows nothing about our data, and its answers cannot leverage it.

The solution? Its name is RAG (Retrieval-Augmented Generation), but for the moment it is an activity external to Domino, since it is tied to the LLM model we use.
