One of the most important and interesting innovations in Domino 14.5 is undoubtedly its opening towards the world of AI with Domino IQ.
But let’s be clear right away: Domino isn’t becoming an AI engine. Rather, it acts as a sort of proxy between its applications and an AI engine such as ChatGPT or similar.
As reported in the documentation: “Domino 14.5 adds support for running an AI inference engine in the Domino backend. The Domino server, when configured for Domino IQ, starts an inference engine from the Dbserver process. The AI engine runs locally, listening on the configured port alongside core Domino server processes – and handles AI queries locally within the Domino server.”
The first important thing to point out is that LotusScript provides two new classes, with related methods, to interface with AI engines: NotesLLMRequest and NotesLLMResponse.
A second aspect is the possibility of configuring predefined prompts via a dedicated configuration database.
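Just to give an idea of the shape of these new classes, here is a minimal sketch of a raw prompt call. Be aware that the member names used below (CreateLLMRequest, Prompt, Send, Response) are my own assumptions for illustration, not the documented API: check the HCL Domino 14.5 documentation for the real signatures.

    ' Minimal sketch of a Domino IQ call from LotusScript.
    ' NOTE: the member names below are assumptions for illustration;
    ' verify them against the HCL Domino 14.5 documentation.
    Sub AskDominoIQ(question As String)
        Dim session As New NotesSession
        Dim req As NotesLLMRequest
        Dim resp As NotesLLMResponse

        Set req = session.CreateLLMRequest()   ' hypothetical factory method

        ' Free-form prompt, passed as-is to the configured AI engine
        req.Prompt = question

        ' Hypothetical synchronous call: Domino forwards the request to the
        ' engine configured in the Domino IQ Configuration database
        Set resp = req.Send()

        Print resp.Response                    ' hypothetical response text
    End Sub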
Obviously, when talking about AI, there’s a first important aspect to consider: the hardware on which everything will run, which must be powerful enough to handle the AI engine’s processing and have an Nvidia GPU installed. This can be a first hurdle to overcome, as it’s very rare for the hardware running a Domino server to be equipped with an Nvidia card.
There are other options, however: although by default the AI engine is installed on the Domino server (see the documentation here), it is actually possible to use an engine running elsewhere, as described in this useful article by Serdar Basegmez.
Since I have a PC with an Nvidia card (not top of the range, but sufficient for some tests), I decided to follow this path. Unlike the article, however, I installed the Windows version of Ollama. Here are some points to keep in mind: installing Ollama itself is quite easy and configuring it isn’t a problem either, but when I tried to use it I noticed that it wasn’t using the Nvidia GPU at all, only the CPU. The mystery was quickly solved: my graphics card drivers were somewhat outdated, and downloading and installing the new ones solved the problem and gave me the correct settings, especially regarding the CUDA components. (Note: you don’t need to install the whole CUDA toolkit, but if you want more information about CUDA, read this page.)
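By the way, a quick way to check whether Ollama is really using the GPU: run ollama ps while a model is loaded and look at the PROCESSOR column, which should report GPU rather than CPU; nvidia-smi, launched while a prompt is being processed, should also show the card actually working.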
At this point, I was able to work on the Domino server settings for Domino IQ, as follows:
In the Directory Profile of the names.nsf file, I declared the name of the Domino server that manages IQ.

I created the Domino IQ Configuration database (the template is installed with version 14.5) and inside it I set the connection with the PC on which Ollama runs:

Obviously, where I declare the model I want to use (in my case, deepseek), I have to enter the name of a model I’ve already downloaded from Ollama. This would open a long discussion about which model is best to use, both in terms of response quality and performance, but I don’t think there’s a clear answer.
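As a reminder, the models Ollama can serve are the ones already pulled locally, for example with ollama pull deepseek-r1 (the exact tag depends on the variant you choose), and ollama list shows what is installed. Also note that Ollama listens on port 11434 by default, so that is the port to point the Domino IQ connection at, unless you changed it.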
The final step was to check the available commands and their prompts, again in the Domino IQ Configuration database.
These are the commands:

and these are the related prompts:

Let’s say there should be a 1:1 correspondence between command and prompt. The concept is that a command is like a shortcut called from LotusScript, and Domino then takes care of transforming it into the associated prompt and passing it to the AI engine.
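To make the idea concrete, here is a hedged sketch of how such a shortcut might be invoked, with the same caveat as the sketch above: the member names (Command in particular) are illustrative assumptions, and the real API is in the HCL Domino 14.5 documentation.

    ' Sketch: invoking a predefined Domino IQ command instead of a raw prompt.
    ' NOTE: member names are assumptions for illustration only.
    Sub RunIQCommand(docText As String)
        Dim session As New NotesSession
        Dim req As NotesLLMRequest
        Dim resp As NotesLLMResponse

        Set req = session.CreateLLMRequest()   ' hypothetical factory method

        ' The command name must match one defined in the Domino IQ
        ' Configuration database; Domino expands it into the associated
        ' prompt before passing everything to the AI engine.
        req.Command = "summarize"              ' hypothetical property
        req.Prompt = docText                   ' the text the command works on

        Set resp = req.Send()                  ' hypothetical synchronous call
        Print resp.Response
    End Sub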
But here we’re getting into another topic: how to use LotusScript to make calls to AI and get responses from it, and I’ll write another article on that soon…

