Writing Domino Server Addins With GraalVM Native Image

Sep 27, 2020, 3:35 PM

Tags: graalvm domino

I was thinking the other day about the task of writing a Domino server addin, the kind that you run by typing load foo on the server console. The way this is generally done is via C or the like: you write a program using your dusty old copy of the C API Toolkit and have an AddinMain function as the entrypoint. That's fine enough if you want to write in C, but, even beyond the language, it carries the tremendous overhead of a fiddly compilation chain that differs per platform.

I got to thinking, then, about GraalVM, and specifically its Native Image capability. Before I get into what I did, I figure this warrants some background.

What is GraalVM?

GraalVM is a project from Oracle that is, roughly, an alternative core Java Virtual Machine. It's designed to serve a number of goals, but the main way I've seen it used is to improve the speed and efficiency of Java-based programs. It also has some neat-looking capabilities for running multiple languages in one app space, but I have yet to look into that.

The Native Image capability is a way to compile Java applications to native executables for a given platform. So, instead of having a JAR file that you then run with an installed JVM, you'd have an executable that you run directly, and which effectively acts as its own "VM". This means you end up with just "some executable" on your system, and the lack of bootstrapping needed to run it opens up some possibilities.

Domino Server Addins

Though Domino server addins have their own set of functions within the Notes C API, they're really just an executable that Domino launches as a sub-process. If you have a basic executable named foo in your Domino program directory, you can type load foo and it'll run it, whether or not the executable does anything with the Notes API at all. It won't necessarily be useful if it doesn't use the Notes API, but it'll run.

It's this "just an executable" bit, though, that was a contributing factor to making Java not a practical language for this. That's also where RunJava fit in: the runjava executable just initializes a JVM and loads the named class, which is afterward responsible for everything - but that's nonetheless obligatory work to get a Java app loaded this way.

The Combination

Once I realized these things, it wasn't a far reach to try implementing an addin this way. One of my initial concerns was the way addins use AddinMain as a C-type entrypoint - my knowledge of how that sort of thing works is limited enough that I wasn't sure if GraalVM's annotations would suffice. However, the C API documentation relieved my worry: using that function name is just a convenience that handles some of the bootstrapping for you. If you just use a normal main(...) entrypoint, the only difference is that you're on the hook for managing your status line more (the thing that shows up when you do show tasks).

Fortunately, the addin-related methods in the lotus.notes.addins.JavaServerAddin class in Notes.jar are extremely thin wrappers around native calls and aren't actually specific to RunJava in any way. You can subclass it and use it in essentially the same way as in a RunJava addin:

package frostillicus.graalvm;

import lotus.domino.NotesException;
import lotus.notes.addins.JavaServerAddin;

public class Main extends JavaServerAddin {
	static {
		System.setProperty("java.library.path", "/opt/hcl/domino/notes/11000100/linux"); //$NON-NLS-1$ //$NON-NLS-2$
		System.loadLibrary("notes"); //$NON-NLS-1$
		System.loadLibrary("lsxbe"); //$NON-NLS-1$
	}

	public static void main(String[] args) {
		new Main().start();
	}

	public Main() {
		setName("GraalVM Test");
	}

	@Override
	public void runNotes() throws NotesException {
		AddInLogMessageText("GraalVM Test initialized");
		int taskId = AddInCreateStatusLine(getName());
		try {

			// Do your work here

		} catch(Throwable t) {
			AddInLogMessageText("GraalVM Test encountered an exception: " + t);
		} finally {
			// Clean up the status line created above
			AddInDeleteStatusLine(taskId);
		}
	}
}


GraalVM-specific configuration

The GraalVM project provides a Maven plugin to do native compilation for you, and I make use of that in the project's pom.xml:

		<!-- snip <buildArgs> -->

Including that in your project will produce a native executable for your current platform in the target folder, alongside the normal JAR file.
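For reference, a sketch of roughly what that plugin block can look like - the coordinates and version here are illustrative for the GraalVM 20.x era, not copied verbatim from the project:

```xml
<plugin>
	<groupId>org.graalvm.nativeimage</groupId>
	<artifactId>native-image-maven-plugin</artifactId>
	<version>20.2.0</version>
	<executions>
		<execution>
			<goals>
				<goal>native-image</goal>
			</goals>
			<phase>package</phase>
		</execution>
	</executions>
	<configuration>
		<mainClass>frostillicus.graalvm.Main</mainClass>
		<!-- buildArgs (reflection config, JNI config, etc.) go here -->
	</configuration>
</plugin>
```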

The bit I snipped out, though, ends up being important. In a similar way to what happens during Android "Java" compilation, the GraalVM native compiler builds a map of all of the code used in your project to create its native representation. Additionally, it doesn't support reflection as casually as a normal JVM does, and doing a compilation like this shows just how common reflection is in Java.

Reflection and JNI Configuration

What reflection (and JNI) in Java generally needs is a mapping table of class/method/field names to their class representations, and GraalVM doesn't build this for everything by default. Instead, it does its best guess based on your actual code, but then it's up to you to explicitly specify the parts you'll be accessing dynamically.
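As an illustrative example (not from the project), even a bland reflective call like this is invisible to the native compiler's static analysis, so it needs a matching entry in the reflection configuration:

```java
import java.util.List;

public class ReflectionExample {
	// The class is resolved by name at runtime, so native-image can't
	// statically see that ArrayList is used - it needs a config entry
	public static List<?> loadListReflectively() throws Exception {
		Class<?> clazz = Class.forName("java.util.ArrayList");
		return (List<?>) clazz.getDeclaredConstructor().newInstance();
	}

	public static void main(String[] args) throws Exception {
		System.out.println(loadListReflectively().isEmpty()); // prints "true"
	}
}
```

On a normal JVM this just works; in a native image, the Class.forName call throws unless the class was registered at build time.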

For the normal case, Oracle wrote a tool that will monitor an actively-running Java app for such calls. You build your app, run it non-natively with this agent attached, and it will spit out a configuration file based on the reflective methods that were actually called.

However, as with everything else to do with Domino, this isn't the normal case: since what I'm running only reasonably exists when launched explicitly from a server, I had to do it the "hard" way. Fortunately, it's actually mostly just tedious: build the app, launch the Domino Docker container, watch for a NoClassDefFoundError or related problem, add the offending class to the config file, and repeat until it stops yelling. Some cases are a little fiddlier, like how JNA's native component misrepresents the class name it was trying to find, but overall it's just time-consuming.
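The entries that accumulate in reflect-config.json follow a simple pattern - this one is hypothetical, not taken from the project's actual config:

```json
[
	{
		"name": "lotus.domino.local.Session",
		"allDeclaredConstructors": true,
		"allDeclaredMethods": true,
		"allDeclaredFields": true
	}
]
```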


So, this is possible, but is it worth doing? Depending on what you want to do, maybe. It's mildly less unsupported than RunJava, and it has the huge advantage of not polluting the server's classpath with all of your application code. It should also be pretty zippy, as GraalVM boasts some impressive performance numbers. And, at least for Java developers, it's much, much easier to use the native-image-maven-plugin than it is to set up cmake or manual makefiles for a C/etc. project.

However, it can also be a real PITA to get working, especially for a reflection-heavy project. Additionally, though you're technically using Addin* functions with a native executable, it's not like HCL would take your call if you run into trouble with a monstrosity like this (I assume). Most importantly, it's restricted to the sort of thing that would make sense as a server addin to begin with - for example, this wouldn't help with building web apps unless you were planning to use it to (again, just as an example) run a web server that's written in Java.

Future Tinkering

I think that this warrants some more investigation. I'd be curious if this process would work for writing other native components, such as DSAPI filters and ExtMgr addins. In those cases, it absolutely would be important to have the right entrypoints, so it wouldn't be quite so easy. Still, it'd be neat if that worked.

And GraalVM and the Native Image component are definitely worth some time even aside from anything Domino-related. I'm curious about what you can do with the "polyglot" features, for example.

Example Project

I've put an example project up on GitHub, which is a basic example that just accepts strings via tell graalvm-test foo and echoes them back. It also includes a Dockerfile for running via HCL's official Domino 11.0.1 image. I haven't actually tested it any other way, so that's the best way to give it a shot.

Getting to Appreciate the Idioms of Docker

Sep 14, 2020, 9:28 AM

Tags: docker
  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

Now that I've been working with Docker more, I'm starting to get used to its way of doing things. As with any complicated tool - especially one as fond of making up its own syntax as Docker is - there's both the process of learning how to do things as well as learning why they're done that way. Since I'm on this journey myself, I figure it could be useful to share what I've learned so far.

What Is Docker?

To start with, it's useful to understand what Docker is both conceptually and technically, since a lot of discussion about it is buried under terms like "cloud native" that obscure the actual topic. That's even before you get to the giant pile of names like "Kubernetes" and "Rancher" that build on top of the core.

Before I get to the technical bits, the overall idea is that Docker is a way to run programs isolated from each other and in a consistent way across deployments. In a Domino context, it's kind of like how an NSF is still its own mostly-consistent app regardless of what OS Domino is on or what version it is - the NSF is its own little world on Domino-the-host. Technically, it diverges wildly from that, but it can be a loose point of reference.

Now, for the nuts and bolts.

Docker (the tool, not the company or service) is a Linux-born toolset for OS-level virtualization. It uses the term "containers", but other systems over time have used terms like "partitions" and "jails" to mean the same thing. In essence, what OS-level virtualization means is that a program or set of programs is put into a box that looks like the whole OS, but is really just a subset view provided by a host OS. This is distinct from virtualization in the sense of VMWare or Parallels in that the app still uses the code of the host OS, rather than loading up a whole additional OS.

Things admittedly get a little muddled on non-Linux systems. Other than Microsoft's peculiar variant of Docker that runs Windows-based apps, "a Docker container" generally means "a Linux container". To accomplish this, and to avoid having a massively-fragmented array of images (more on those in a bit), Docker Desktop on macOS and (usually) Windows uses hardware virtualization to launch a Linux system. In those cases, Docker is using both hardware virtualization and in-OS container virtualization, but the former is just a technical implementation detail. On a Linux host, though, no such second tier is needed.

Beyond making use of this OS service, Docker consists of a suite of tools for building and managing these images and containers, and then other tools (like Kubernetes) operate at a level above that. But all the stuff you deal with in Docker - Dockerfiles, Compose, all that - comes down to creating and managing these walled-off apps.

Docker Images

Docker images are the part that actually contains the programs and data to run and use, which are then loaded up into a container.

A Docker image is conceptually like a disk image used by a virtualization app or macOS - it's a bunch of files ready to be used in a filesystem. You can make your own or - very commonly - pull them from a centralized library like the main Docker Hub. These images are generally components of a larger system, but are sometimes full-on tools to run yourself. For example, the PostgreSQL image is ready to run in your Docker environment and can be used as essentially a quick-start way to set up a Postgres server.
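For example, spinning up that Postgres server is a one-liner - the container name, password, and tag here are just placeholders:

```shell
docker run -d --name my-postgres \
	-e POSTGRES_PASSWORD=changeme \
	-p 5432:5432 \
	postgres:12
```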

The particular neat trick that Docker images pull is that they're layered. If you look at a Dockerfile (the script used to build these images), you can see that they tend to start with a FROM line, indicating the base image that they stack on top of. This can go many layers deep - for example, the Maven image builds on top of the OpenJDK image, which is based on the Alpine Linux image.

You can think of this as a usually-simple dependency line in something like Maven. Rather than including all of the third-party code needed, a Maven module will just reference dependencies, which are then brought in and woven together as needed in the final app. This is both useful for creating your images and is also an important efficiency gain down the line.


The main way to create a Docker image is to use a Dockerfile, which is a text file with a syntax that appears to have come from another dimension. Still, once you're used to the general form of one, they make sense. If you look at one of the example files, you can see that it's a sequential series of commands describing the steps to create the final image.

When writing these, you more-or-less can conceptualize them like a shell script, where you're copying around files, setting environment properties, and executing commands. Once the whole thing is run, you end up with an image either in your local registry or as a standalone file. That final image is what is loaded and used as the operating environment of the container.

The neat trick that Dockerfiles pull, though, is that commands that modify the image actually create a new layer each, rather than changing the contents of a single image. For example, take these few lines from a Dockerfile I use for building a Domino-based project:

COPY docker/settings.xml /root/.m2/
RUN mkdir -p /root
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux

Each of these lines creates a new layer. The first two are tiny: one just contains the settings.xml file from my project and then the second just contains an empty /root directory. The third is more complicated, pulling in the whole Domino runtime from the official 11.0.1 image, but it's the same idea.

Each of these layers is given a SHA-256 hash identifier that uniquely identifies it as the result of an operation on a previous base image state. This lets Docker cache these results and not have to perform the same operation each time. If it knows that, by the time it gets to the third line above, the starting image and the Domino image are both in the same state as they were the last time it ran, it doesn't actually need to copy the bits around: it can just reuse the same unchanged cached layer.

This is the reason why Maven-based Dockerfiles often include a mvn dependency:go-offline step: because the project's dependencies rarely change, you can create a reusable layer from the populated Maven dependency repository and not have to re-resolve them every build.
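A sketch of that caching pattern (the image tag is illustrative):

```dockerfile
FROM maven:3.6-jdk-11

WORKDIR /build

# Copy only the POM first: as long as it is unchanged, the layer
# produced by the dependency download below stays cached
COPY pom.xml .
RUN mvn dependency:go-offline

# Source changes invalidate only the layers from here on down
COPY src ./src
RUN mvn package
```

Ordering the Dockerfile from least- to most-frequently-changing inputs is what makes the layer cache pay off.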


So that's the core of it: managing images and walled-off mini OS environments. Things get even more complicated in there even before you get to other tooling, but I've found it useful to keep my perspective grounded in those basics while I learn about the other aspects.

In the future, I think I'll talk about how and why Docker has been particularly useful for me when it comes to building and running Domino-based apps, in particular helping somewhat to alleviate several of the long-standing impediments to working with Domino.