OpenNTF Fork of p2-maven-plugin

Nov 14, 2020, 6:56 PM

Tags: maven tycho
  1. Converting Tycho Projects to maven-bundle-plugin, Initial Phase
  2. Winter Project #2: Maven P2 Repository Resolver
  3. OpenNTF Fork of p2-maven-plugin
  4. The Intricate Work of OSGi Dependencies on Domino

It's been one of my long-running goals to reduce my use of Tycho for my work. While Tycho does what it says on the tin, the way PDE works in Eclipse means it's an ongoing nightmare to deal with when I want to do simple things like add a new dependency. This isn't really Tycho's fault as such, and the project itself is making major steps to alleviate some issues, but it's the nature of the surrounding tooling. Even beyond that, the shaky support in IntelliJ and total lack of support in Visual Studio Code and similar editors makes it a real thorn in my side.

Still, though, it brings a lot to the table, particularly when dealing with Domino-targeted projects. Because Domino's OSGi layout is... fiddly, it's often safest to use the "Manifest-first" approach for dependencies, and it's definitely important to still be able to do feature projects and p2 repositories for importing into Designer and Domino.

But I've still been trying to whittle away at the constraints over time, and I got fed up enough yesterday to make some major strides.

The Original Project

One of the major tools in my toolbelt for years has been the p2-maven-plugin, which does a lot of heavy lifting when it comes to taking non-Tycho or non-OSGi-focused projects and making them palatable for an OSGi environment. Even when I don't use it as the backbone of a project, I tend to use it to gather third-party dependencies and process them to make them Domino-friendly.
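
For reference, the typical usage is a plugin entry in a pom.xml listing the Maven artifacts to wrap as OSGi bundles - a minimal sketch in the upstream project's documented style, with a placeholder artifact and the version omitted:

<plugin>
    <groupId>org.reficio</groupId>
    <artifactId>p2-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>default-cli</id>
            <configuration>
                <artifacts>
                    <!-- Maven artifacts to fetch and repackage as OSGi bundles -->
                    <artifact><id>commons-io:commons-io:2.6</id></artifact>
                </artifacts>
            </configuration>
        </execution>
    </executions>
</plugin>

Running mvn p2:site then generates a p2 update site containing the wrapped bundles.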

The Fork

It has its limitations, though, that have kept me from using it to replace the final steps of a Tycho build, and those are the ones that I set out to improve. Yesterday, I forked the project and got to work. Most of my work centered around letting it pull more information out of existing p2 repositories. While it already has some knowledge of such repos, it was still geared heavily towards only using them to pick up a bundle here or there. The big annoyance for me there was that I wanted to bring in entire existing p2-housed features into the final update site.

For example, one of my big projects consumes and redistributes a bunch of upstream projects, such as ODA and the XPages Jakarta EE support. While the p2-maven-plugin made it possible to reference those projects as Maven artifacts or individual bundles, I couldn't do what I wanted and just say "bring X and Y features in, including all their bundles".

I also went in and added a few other niceties needed for Domino: generation of the antiquated "site.xml" file for the NSF Update Site, archiving of the final site for distribution, and so forth.

The Implications

With my changes, I was able to delete all of the feature projects in the tree, which lowers the mental complexity a bit. That also means that the only parts "controlled" by Tycho now are the actual bundle projects, and those have a clear path to de-Tycho-ization. Though doing that will make it a little more difficult to know when dependencies are Domino-suitable ahead of time, the conversion should save a ton of hassle overall.

So now, I have a toolchain that should be able to work together to replace Tycho while still working with the Equinox-heavy target:

  • maven-bundle-plugin to generate the OSGi metadata in META-INF/MANIFEST.MF. I could also use bnd-maven-plugin directly for this and bndtools in Eclipse, but I'm not sure that it'd gain me much in practice
  • generate-domino-update-site to create p2 repositories from post-9.0.1 Domino releases' XPages framework, which remains damnably non-Mavenized
  • p2-layout-provider to resolve p2-housed artifacts like those from above and OpenNTF projects and make them available as normal-enough Maven dependencies on the fly
  • The forked p2-maven-plugin to generate features and update sites, as well as to repackage existing bundles to be more Domino-friendly

What's missing now is an ability to run compile-time test suites in a true Equinox environment. I'm hemming and hawing on how important that really is, though. The tests I write only rarely expect the presence of OSGi - the main way it comes into play is for extensions, which are papered over by IBM Commons anyway. I've had a delightful time lately running tests of JAX-RS resources with Liberty's dev mode, and I'm pretty sure I saw some examples somewhere of building up and tearing down a scaffolding to run them during compilation, so maybe I'll switch to that anyway.

In any event, just having a tool to do this stuff is a huge weight off my back, and now the goal of a fully-normal-enough Maven project tree is tantalizingly in sight.

Upcoming Event: Java With Domino Roundtable

Nov 12, 2020, 8:31 PM

Tags: java

The other day, I floated the idea of running an unstructured roundtable discussion of working with Java either on or accessing Domino, and I think it'll be worth giving a shot.

Since Java with Domino is in a weird place, the goal would be to discuss the various ways that people are or want to use it. So that can include XPages, OSGi, REST services generally, Jakarta EE, Spring, Vert.x, and so forth. I'd also like it to be open generally. I imagine I'll have some preliminary remarks, but otherwise the goal is to be less like a webinar and more like a free-flowing discussion, in the vein of the "happy hour" and "coffee break" rooms from CollabSphere and Digital Week.

My current plan is to run it on short notice, next week:

Tuesday, November 17th
2:00 PM US Eastern (19:00 UTC)
https://zoom.us/j/99514285138
Password: Computers!

I'll also share the password on Twitter on the day of the event, so look for it there.

CollabSphere 2020 Slides and Video

Oct 29, 2020, 8:09 PM

One of the nice bonuses of an all-online conference is that session recording comes built-in, so I was able to snag that and put it up on YouTube for posterity.

Additionally, I uploaded my slides to SlideShare, though that loses out on the extremely-fancy 5-second videos I used.

CollabSphere 2020: DEV101 - Add Continuous Delivery to Domino with the NSF ODP Tooling

Oct 26, 2020, 1:54 PM

CollabSphere 2020 is starting tomorrow, this year naturally taking the form of an online conference, which has the nice benefit of meaning that you can still sign up if you haven't done so, and you're only restricted by your time zone offset for attending.

For my part, I'll be giving a presentation on the NSF ODP Tooling, currently slated for tomorrow:

DEV101 - Add Continuous Delivery to Domino with the NSF ODP Tooling

Domino applications, stored in NSFs, have historically been difficult to add to Continuous Integration tools like Jenkins and to bring into Continuous Delivery workflows. This session will discuss the NSF ODP Tooling project on OpenNTF, which allows you to take Domino-based projects - whether targeting the Notes client or web, XPages or not - and integrate them with modern tooling and flows. It will demonstrate use with projects ranging from a single NSF to a suite of a dozen OSGi plugins and two dozen NSFs, showing how they can be built and packaged automatically and consistently.

I hope you'll be able to attend - there are definitely some very-interesting topics lined up.

A Notes-Client-Friendly Way To Access JWT-Protected Resources

Oct 2, 2020, 7:32 PM

Tags: lotusscript

I recently had call to access the Zoom REST API in a Notes client app that will be maintained by other Notes programmers, so I figured it'd be as good an opportunity as any to use the HTTP and JSON classes added in V10 and 11.

The basics there are fine enough - though those classes aren't featureful, they can get the job done. However, the Zoom API needs specialized authentication, beyond the username/password type that you can kind of work your way to in LotusScript alone. Since my needs will be administrative as opposed to multiple users acting as themselves, I decided to go the JWT route instead of OAuth.

JWT

JWT stands for "JSON Web Token", and it's one of the now-common ways to do secure authorization without passing passwords around. It's simple at its core - just some JSON objects to indicate the type of token and the payload of app-specific claims you're going to make, then a cryptographic signature.

It's that last part that moves it out of the realm of LotusScript (barring some way to wrangle the SEC* functions in the C API to do it), so I went to Java and LS2J to bridge the gap.

The Java Side

I lucked out in that the Zoom API uses a pretty simple path for generating the signature - my previous experience with JWT involved public/private key pairs, which is still doable but is more annoying. Additionally, the payload is pretty simple, just asserting that you're logging in, with nothing like the specialized user ID lookups I had to do with SharePoint. This meant I could get away with writing out the token "manually" rather than going through the onerous process of creating script libraries out of one of the available libraries and its dependency tree.

One gotcha is that the JDK doesn't actually ship with JSON support. Fortunately, in this case, the only values going in were JSON-friendly and didn't need escaping, but I'd suggest using even a basic library like the agent-friendly JSON-java for normal uses.
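
For illustration, building the payload with JSON-java would look something like this - a hypothetical sketch, not what I actually shipped:

// import org.json.JSONObject;
// JSONObject handles escaping of keys and values for you
String payloadJson = new JSONObject()
        .put("iss", apiKey)
        .put("exp", exp)
        .toString();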

I ended up making a static method in a single-class Java script library:

package us.iksg;

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.concurrent.TimeUnit;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JWTGenerator {
    // JWT "exp" values are NumericDate: seconds (not milliseconds) since the Unix epoch
    public static final long TIMEOUT = TimeUnit.HOURS.toSeconds(1);
    
    public static String generateJWT(String apiKey, String apiSecret) {
        try {
            long now = TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
            long exp = now + TIMEOUT;
            
            // Note to the future: I apologize for writing JSON via string concatenation, but it
            //   _should_ be safe here.
            
            // JWT segments use base64url encoding without padding (RFC 7515)
            Base64.Encoder encoder = Base64.getUrlEncoder().withoutPadding();
            
            // Header: alg: HS256, typ: JWT
            String headerJson = "{\"alg\": \"HS256\", \"typ\": \"JWT\"}";
            String headerB64 = encoder.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8));
            
            // Payload: iss: API_KEY, exp: exp (as a number, not a string)
            String payloadJson = "{" +
                    "\"iss\": \"" + apiKey + "\"," +
                    "\"exp\": " + exp +
                "}";
            String payloadB64 = encoder.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
            
            // Signature: HMAC SHA-256 (HS256) over "header.payload"
            Mac mac = Mac.getInstance("HmacSHA256");
            SecretKeySpec spec = new SecretKeySpec(apiSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256");
            mac.init(spec);
            byte[] signature = mac.doFinal((headerB64 + "." + payloadB64).getBytes(StandardCharsets.UTF_8));
            String signatureB64 = encoder.encodeToString(signature);
            
            return headerB64 + '.' + payloadB64 + '.' + signatureB64;
        } catch(Throwable t) {
            throw new RuntimeException(t);
        }
    }
}

All of those classes come with the JDK, so it's nice and self-contained.

The LotusScript Side

Back on the LotusScript side, I brought out my trusty old friend LS2J:

Uselsx "*javacon"
Use "JWT Generator"

Sub Click(Source As Button)
    On Error Goto errorHandler
    
    Dim session As New NotesSession, ws As New NotesUIWorkspace, doc As NotesDocument
    Set doc = ws.CurrentDocument.Document
    
    Dim jsession As New JAVASESSION, jwtGenerator As JavaClass
    Set jwtGenerator = jsession.GetClass("us.iksg.JWTGenerator")
    
    Dim apiKey As String, apiSecret As String
    apiKey = doc.ZoomAPIKey(0)
    apiSecret = doc.ZoomAPISecret(0)
    
    Dim generate As JavaMethod
    Set generate = jwtGenerator.GetMethod("generateJWT", "(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;")
    
    Dim token As String
    token = generate.Invoke(Empty, apiKey, apiSecret)
    ' In this case, it's in a "developer playground" form I made for testing.
    ' Do not store JWT tokens long-term - they should be generated for each script.
    doc.ZoomJWTToken = token
    
    Exit Sub
errorHandler:
    Msgbox Erl & ": " & Error
    End
End Sub

The only unusual bit here is that, since I used a static method, I pass Empty as the first parameter to Invoke. I tend to use the reflection-based approach like this out of habit after consistently running into trouble with LS2J's mapping of methods to their Java counterparts, but it'd probably be a little cleaner if I made it an instance method and just called it directly.
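
For comparison, the instance-based route would look something like this - a hypothetical sketch, assuming generateJWT were made an instance method:

Dim jwtObject As JavaObject
' CreateObject calls the class's no-argument constructor
Set jwtObject = jwtGenerator.CreateObject()
' LS2J allows calling Java methods directly on the JavaObject
token = jwtObject.generateJWT(apiKey, apiSecret)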

Once I had the generated token, I was able to include it in my HTTP requests:

Dim req As NotesHTTPRequest
Set req = session.CreateHTTPRequest()
Call req.SetHeaderField("Authorization", "Bearer " & token)
' Since we just want to plunk this into the field, request a string back
req.PreferStrings = True

Dim result As String
result = req.Get("https://api.zoom.us/v2/users")
doc.Users = result

Not too shabby overall, for the Notes client. I may end up putting all these calls into run-on-server agents regardless, just to avoid trouble should the client end up having their users use the WebAssembly-based or mobile Notes clients, but even then this still ends up very Notes-client-developer-friendly.

Writing Domino Server Addins With GraalVM Native Image

Sep 27, 2020, 7:35 PM

Tags: graalvm domino

I was thinking the other day about the task of writing a Domino server addin, the kind that you run by typing load foo on the server console. The way this is generally done is via C or the like: you write a program using your dusty old copy of the C API Toolkit and have an AddinMain function as the entrypoint. That's fine enough if you want to write in C, but, even beyond the language, it carries the tremendous overhead of a fiddly compilation chain that differs per-platform.

I got to thinking, then, about GraalVM, and specifically its Native Image capability. Before I get into what I did, I figure this warrants some background.

What is GraalVM?

GraalVM is a project from Oracle that is, roughly, an alternative core Java Virtual Machine. It's designed to serve a number of goals, but the main way that I've seen it used is to improve the speed and efficiency of Java-based programs. It also has some neat-looking capabilities for running multiple languages in one app space, but I have yet to look into that.

The Native Image capability is a way to compile Java applications to native executables for a given platform. So, instead of having a JAR file that you then run with an installed JVM, you'd have an executable that you run directly, and which effectively acts as its own "VM". This means you end up with just "some executable" on your system, and the lack of bootstrapping needed to run it opens up some possibilities.

Domino Server Addins

Though Domino server addins have their own set of functions within the Notes C API, they're really just an executable that Domino launches as a sub-process. If you have a basic executable named foo in your Domino program directory, you can type load foo and it'll run it, whether or not the executable does anything with the Notes API at all. It won't necessarily be useful if it doesn't use the Notes API, but it'll run.

It's this "just an executable" bit, though, that was a contributing factor to making Java not a practical language for this. That's also where RunJava fit in: the runjava executable just initialized a JVM and loads the named class, which is afterward responsible for everything, but that was nonetheless obligatory work to get a Java app loaded this way.

The Combination

Once I realized these things, it wasn't a far reach to try implementing an addin this way. One of my initial concerns was the way addins use AddinMain as a C-type entrypoint - my knowledge of how that sort of thing works is limited enough that I wasn't sure if GraalVM's annotations would suffice. However, the C API documentation relieved my worry: using that function name is just a convenience that handles some of the bootstrapping for you. If you just use a normal main(...) entrypoint, the only difference is that you're on the hook for managing your status line more (the thing that shows up when you do show tasks).

Fortunately, the addin-related methods in the lotus.notes.addins.JavaServerAddin class in Notes.jar are extremely-thin wrappers around native calls and aren't actually specific to RunJava in any way. You can subclass it and use it in essentially the same way as in a RunJava addin:

package frostillicus.graalvm;

import lotus.domino.NotesException;
import lotus.notes.addins.JavaServerAddin;

public class Main extends JavaServerAddin {
	static {
		System.setProperty("java.library.path", "/opt/hcl/domino/notes/11000100/linux"); //$NON-NLS-1$ //$NON-NLS-2$
		System.loadLibrary("notes"); //$NON-NLS-1$
		System.loadLibrary("lsxbe"); //$NON-NLS-1$
	}
	
	public static void main(String[] args) {
		new Main().start();
	}
	
	public Main() {
		setName("GraalVM Test");
	}
	
	@Override
	public void runNotes() throws NotesException {
		AddInLogMessageText("GraalVM Test initialized");
		int taskId = AddInCreateStatusLine(getName());
		try {

			// Do your work here

		} catch(Throwable t) {
			t.printStackTrace();
		} finally {
			AddInDeleteStatusLine(taskId);
		}
	}

}

GraalVM-specific configuration

The GraalVM project provides a Maven plugin to do native compilation for you, and I make use of that in the project's pom.xml:

<plugin>
	<groupId>org.graalvm.nativeimage</groupId>
	<artifactId>native-image-maven-plugin</artifactId>
	<version>20.2.0</version>
	<configuration>
		<imageName>${project.name}</imageName>
		<mainClass>frostillicus.graalvm.Main</mainClass>
		<!-- snip <buildArgs> -->
	</configuration>
	<executions>
		<execution>
			<goals>
				<goal>native-image</goal>
			</goals>
			<phase>package</phase>
		</execution>
	</executions>
</plugin>

Including that in your project will produce a native executable for your current platform in the target folder, alongside the normal JAR file.

The bit I snipped out, though, ends up being important. In a similar way to what happens during Android "Java" compilation, the GraalVM native compiler builds a map of all of the code used in your project to create its native representation. Additionally, it doesn't support reflection as casually as a normal JVM does, and doing a compilation like this shows just how common reflection is in Java.

Reflection and JNI Configuration

What reflection (and JNI) in Java generally needs is a mapping table of class/method/field names to their class representations, and GraalVM doesn't build this for everything by default. Instead, it does its best guess based on your actual code, but then it's up to you to explicitly specify the parts you'll be accessing dynamically.

For the normal case, Oracle wrote a tool that will monitor an actively-running app in Java for such calls. You build your app and run it non-native with this agent, and then it will spit out a configuration file based on the actually-called reflective methods.
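
For reference, invoking that agent looks something like this (the JAR name and output path here are hypothetical):

java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image -jar your-app.jar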

However, as with everything else to do with Domino, it's not the normal case: since what I'm running only reasonably exists when launched explicitly from a server, I had to do it the "hard" way. Fortunately, it's actually just mostly tedious: build the app, launch the Domino Docker container, watch for a NoClassDefFoundError or related problem, add the offending class to the config file, and repeat until it stops yelling. Some cases are a little fiddlier, like how JNA's native component misrepresents the class name it was trying to find, but overall it's just time-consuming.
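
The end result is a set of JSON files - reflect-config.json, jni-config.json, and so on - full of entries along these lines (the class name and flags here are just illustrative):

[
  {
    "name": "lotus.domino.local.Session",
    "allDeclaredConstructors": true,
    "allPublicMethods": true
  }
]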

Practicality

So, this is possible, but is it worth doing? Depending on what you want to do, maybe. It's mildly less unsupported than RunJava, and has the huge advantage of not polluting the server's classpath with all of your application code. It should also be pretty zippy, as GraalVM boasts some impressive performance numbers. And, at least for Java developers, it's much, much easier to use the native-image-maven-plugin than it is to set up cmake or manual makefiles for a C/etc. project.

However, it can also be a real PITA to get working, especially for a reflection-heavy project. Additionally, though you're technically using Addin* functions with a native executable, it's not like HCL would take your call if you run into trouble with a monstrosity like this (I assume). Most importantly, it's restricted to the sort of thing that would make sense as a server addin to begin with - for example, this wouldn't help with building web apps unless you were planning to use it to (again, just as an example) run a web server that's written in Java.

Future Tinkering

I think that this warrants some more investigation. I'd be curious if this process would work for writing other native components, such as DSAPI filters and ExtMgr addins. In those cases, it absolutely would be important to have the right entrypoints, so it wouldn't be quite so easy. Still, it'd be neat if that worked.

And GraalVM and the Native Image component are definitely worth some time even aside from anything Domino-related. I'm curious about what you can do with the "polyglot" features, for example.

Example Project

I've put an example project up on GitHub, which is a basic example that just accepts strings via tell graalvm-test foo and echoes them back. It also includes a Dockerfile for running via HCL's official Domino 11.0.1 image. I haven't actually tested it any other way, so that's the best way to give it a shot.

Getting to Appreciate the Idioms of Docker

Sep 14, 2020, 1:28 PM

Tags: docker
  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

Now that I've been working with Docker more, I'm starting to get used to its way of doing things. As with any complicated tool - especially one as fond of making up its own syntax as Docker is - there's both the process of learning how to do things as well as learning why they're done that way. Since I'm on this journey myself, I figure it could be useful to share what I've learned so far.

What Is Docker?

To start with, it's useful to understand what Docker is both conceptually and technically, since a lot of discussion about it is buried under terms like "cloud native" that obscure the actual topic. That's even before you get to the giant pile of names like "Kubernetes" and "Rancher" that build on top of the core.

Before I get to the technical bits, the overall idea is that Docker is a way to run programs isolated from each other and in a consistent way across deployments. In a Domino context, it's kind of like how an NSF is still its own mostly-consistent app regardless of what OS Domino is on or what version it is - the NSF is its own little world on Domino-the-host. Technically, it diverges wildly from that, but it can be a loose point of reference.

Now, for the nuts and bolts.

Docker (the tool, not the company or service) is a Linux-born toolset for OS-level virtualization. It uses the term "containers", but other systems over time have used terms like "partitions" and "jails" to mean the same thing. In essence, what OS-level virtualization means is that a program or set of programs is put into a box that looks like the whole OS, but is really just a subset view provided by a host OS. This is distinct from virtualization in the sense of VMWare or Parallels in that the app still uses the code of the host OS, rather than loading up a whole additional OS.

Things admittedly get a little muddled on non-Linux systems. Other than Microsoft's peculiar variant of Docker that runs Windows-based apps, "a Docker container" generally means "a Linux container". To accomplish this, and to avoid having a massively-fragmented array of images (more on those in a bit), Docker Desktop on macOS and (usually) Windows uses hardware virtualization to launch a Linux system. In those cases, Docker is using both hardware virtualization and in-OS container virtualization, but the former is just a technical implementation detail. On a Linux host, though, no such second tier is needed.

Beyond making use of this OS service, Docker consists of a suite of tools for building and managing these images and containers, and then other tools (like Kubernetes) operate at a level above that. But all the stuff you deal with in Docker - Dockerfiles, Compose, all that - comes down to creating and managing these walled-off apps.

Docker Images

Docker images are the part that actually contains the programs and data to run and use, which are then loaded up into a container.

A Docker image is conceptually like a disk image used by a virtualization app or macOS - it's a bunch of files ready to be used in a filesystem. You can make your own or - very commonly - pull them from a centralized library like the main Docker Hub. These images are generally components of a larger system, but are sometimes full-on tools to run yourself. For example, the PostgreSQL image is ready to run in your Docker environment and can be used as essentially a quick-start way to set up a Postgres server.
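
For example, a one-liner like this (container name and password being whatever you'd like) is enough to get a disposable Postgres server listening on your machine:

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d postgres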

The particular neat trick that Docker images pull is that they're layered. If you look at a Dockerfile (the script used to build these images), you can see that they tend to start with a FROM line, indicating the base image that they stack on top of. This can go many layers deep - for example, the Maven image builds on top of the OpenJDK image, which is based on the Alpine Linux image.

You can think of this as being like a (usually simple) dependency chain in something like Maven. Rather than including all of the third-party code needed, a Maven module will just reference dependencies, which are then brought in and woven together as needed in the final app. This is useful both for creating your images and as an important efficiency gain down the line.
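
To make that concrete (more on the Dockerfile syntax in the next section), a minimal sketch of an image layered on top of the Maven one would be:

FROM maven:3.6.3-adoptopenjdk-8-openj9
COPY . /build
WORKDIR /build
RUN mvn package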

Dockerfiles

The main way to create a Docker image is to use a Dockerfile, which is a text file with a syntax that appears to have come from another dimension. Still, once you're used to the general form of one, they make sense. If you look at one of the example files, you can see that it's a sequential series of commands describing the steps to create the final image.

When writing these, you more-or-less can conceptualize them like a shell script, where you're copying around files, setting environment properties, and executing commands. Once the whole thing is run, you end up with an image either in your local registry or as a standalone file. That final image is what is loaded and used as the operating environment of the container.

The neat trick that Dockerfiles pull, though, is that commands that modify the image actually create a new layer each, rather than changing the contents of a single image. For example, take these few lines from a Dockerfile I use for building a Domino-based project:

COPY docker/settings.xml /root/.m2/
RUN mkdir -p /root
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux

Each of these lines creates a new layer. The first two are tiny: one just contains the settings.xml file from my project and then the second just contains an empty /root directory. The third is more complicated, pulling in the whole Domino runtime from the official 11.0.1 image, but it's the same idea.

Each of these images is given a SHA-256 hash identifier that will uniquely identify it as a result of an operation on a previous base image state. This lets Docker cache these results and not have to perform the same operation each time. If it knows that, by the time it gets to the third line above, the starting image and the Domino image are both in the same state as they were the last time it ran, it doesn't actually need to copy the bits around: it can just reuse the same unchanged cached layer.

This is the reason why Maven-based Dockerfiles often include a dependency:go-offline line: because the project's dependencies rarely change, you can create a reusable layer containing the Maven dependency repository and not have to re-resolve them every build.
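
The pattern, roughly (with a hypothetical project layout), is to copy in just the pom.xml and resolve dependencies before bringing in the more-frequently-changing source:

COPY pom.xml /build/pom.xml
WORKDIR /build
# This layer is cached until pom.xml itself changes
RUN mvn dependency:go-offline
COPY src /build/src
RUN mvn package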

Wrap-Up

So that's the core of it: managing images and walled-off mini OS environments. Things get even more complicated in there even before you get to other tooling, but I've found it useful to keep my perspective grounded in those basics while I learn about the other aspects.

In the future, I think I'll talk about how and why Docker has been particularly useful for me when it comes to building and running Domino-based apps, in particular helping somewhat to alleviate several of the long-standing impediments to working with Domino.

NSF ODP Tooling: Setting Up Jenkins Builds

Aug 27, 2020, 2:50 PM

Tags: nsfodp
  1. Getting Started with the NSF ODP Tooling
  2. NSF ODP Tooling: Setting Up Jenkins Builds

In my last post, I talked about the process of setting up a basic NSF ODP project from an NSF without worrying about OSGi plugins or other complicated aspects.

In this post, I'll go over one of the main reasons why you might want to do this: automated builds via Jenkins or other CI server. This process assumes that you're keeping your project in source control of some sort, most likely a Git repository.

Jenkins Setup

The specifics for installing Jenkins are a bit outside the bailiwick of my blog, but they have some good instructions on their site. Those instructions currently start out heavily with Docker, which would work well, but I've found it pretty easy to set up with a Linux VM. That usually involves adding the Jenkins package source and letting the package manager do its thing. You should also install git while you're here.

Once it's configured, the Maven configuration is the same as in the previous post: find the home directory for the user running Jenkins (generally jenkins with those Linux installs or your current user in a simpler local setup) and configure the .m2/settings.xml file the same way.

Beyond the normal Jenkins setup with your default user, there are a few things to configure.

To start out with, we'll add support for Maven projects. Jenkins is trending towards doing everything via "Pipeline" projects, which is a fine idea, but the older Maven support will suit our needs better for now. Go to "Manage Jenkins" and then "Manage Plugins". On the "Available" tab, search for "maven". You should find the "Maven integration plugin" - in my case, it's under "Installed" since I already have it:

Maven Jenkins plugin

Then, make your way back to "Manage Jenkins" and to "Global Tool Configuration". In there, add a JDK if one doesn't already exist. You can either point to an existing Java installation or install one automatically:

JDK Setup

Do similarly for Git. If you installed it in Linux or are running on macOS, you can just write "git" in for the executable path. On Windows, you should install it first.

Git Setup

Finally, do the same for Maven. Like Java, this is one that you can configure automatically. 3.6.3 is a good choice:

Maven Setup

Project Setup

Now that that's all set up, go back to the main Jenkins page and click on "New Item". Here, you should be able to select "Maven project". In general, I like to give my Jenkins projects names without too many special characters, in particular without spaces - there's always the chance that an odd tool here or there will cause trouble with complicated path names.

Maven item

When you create the item, you'll be presented with an intimidating tower of options, but fortunately only a few are important at the moment.

Our first stop is the "Source Code Management" section, where you should configure the location of your source repository. In my case here, I'm building one of the examples in the public NSF ODP Tooling repository, but you may have to add credentials if you're using a private repository.

Source Code Management

The next important step is the "Build" section. In here, pick your Maven version if you have multiple ones, fill in the path to your root POM file (most likely "pom.xml" if your project is in the root of the repo, but it's within a subdirectory here), and set the goals to be "clean install":

Build config

Finally, go to "Post-build Actions" and add an "Archive the artifacts" action. Set the "Files to archive" to "**/target/*.nsf":

Post-build Actions

Then, hit "Save".

Back on the project page, click "Build Now" on the left:

Build Now

If all goes well, you should see the build churn for a bit below the actions and eventually go blue. Unfortunately, there's also plenty of room here for things to go awry. If they do, your best bet is to hover over the build, click the disclosure triangle next to the timestamp, and click "Console Output". That should hopefully illuminate the trouble.

Console Output

Assuming it went well, though, you should be able to refresh the page and see your NSF in the "Last Successful Artifacts" section.

Last Successful Artifacts

And that's one of the key benefits to the CI/CD process: you can have the server run a repeatable build on command, on a schedule, or on triggers (like when you push a change) and have the result ready for you when it's done.

More In Practice

Once you have these basics working, you can get more complicated from there. The most common next step will be to set up either push notifications from your repository host (if your Jenkins server is visible to your repo) or scheduled polling for changes. That way, this will start to happen automatically without the need to manually trigger it.
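
For example, polling is just a cron-style schedule in the job's "Poll SCM" option - something like this checks for new commits roughly every fifteen minutes:

H/15 * * * *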

You can also set up email notifications on failure, which is handy even when you're the only developer - that can help remove some "works on my machine" trouble.

There are a few more things that I think will be worth covering. In particular, I'll want to demonstrate a multi-NSF build that creates a deployment ZIP - something that's present in the complicated OSGi example, but which can be done just as well in a less-complex project.

Getting Started with the NSF ODP Tooling

Aug 26, 2020, 2:57 PM

Tags: maven nsfodp
  1. Getting Started with the NSF ODP Tooling
  2. NSF ODP Tooling: Setting Up Jenkins Builds

I've mentioned the NSF ODP Tooling project quite a bit here, and a lot of that is just a reflection of how much use I've gotten out of it and how much time it's been saving me in my regular work.

Part of it is also, though, that I think that it should see wider use. I realized that the project can seem off-putting, or reserved only for the lost-in-the-weeds sort of work I do. Generally, when I mention it, it's in the context of a massive project with a bunch of OSGi plugins, or describing the intricate work that went into implementing it.

So I figured this was as good a time as any to describe the simplest-case scenario to get use out of the project: wrapping a normal ODP, without plugins, and then building it into an NSF outside of Designer.

Environment Setup

Domino Installation

To get started, you'll first need either a local Notes/Domino installation or a remote Domino server. Since it involves slightly-less local configuration, we'll go with the remote Domino path for now. Download the latest distribution ZIP from the project on OpenNTF (https://openntf.org/main.nsf/project.xsp?r=project/NSF ODP Tooling/releases), install the update site from the "Domino" directory on your server in the same way you would the OpenNTF Domino API or other XPages library, and restart HTTP.

Maven and Java

The second thing you'll need is a Maven installation locally. If you're running on macOS or Linux, the easiest way to install this is with a package manager, such as Homebrew or apt. On any platform, you can also follow the download and installation instructions from the official Maven site. You'll also need Java installed - nowadays, I use AdoptOpenJDK.

You'll also need a Maven "settings.xml" file to point to your server. If you don't have such a file already, create an ".m2" directory (with the leading dot) in your home directory. This is the same process as in my original Maven setup guide, but with different contents. Configure the contents to look like this:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>nsfodp</id>
            <properties>
                <!-- the server name can be anything as long as it matches below -->
                <nsfodp.compiler.server>some-server-name</nsfodp.compiler.server>
                <!-- specify the HTTP/HTTPS URL for your Domino server -->
                <nsfodp.compiler.serverUrl>https://some.server/</nsfodp.compiler.serverUrl>
                
                <!-- set to true if you use a self-signed SSL certificate -->
                <nsfodp.compiler.serverTrustSelfSignedSsl>true</nsfodp.compiler.serverTrustSelfSignedSsl>
            </properties>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>nsfodp</activeProfile>
    </activeProfiles>
    
    <servers>
        <server>
            <id>some-server-name</id>
            <!-- Use a Domino HTTP username and password -->
            <username>builduser</username>
            <password>buildpassword</password>
        </server>
    </servers>
</settings>

NSF Project Setup

The core On-Disk Project you create for your NSF is done using the normal Designer source-control. This process hasn't changed over the years; if you're unfamiliar with creating ODPs and working with source control, resources like the NotesIn9 episode remain very useful (though using Mercurial is an odd choice nowadays).

For this example, I just created a new NSF, but you can start with any simple-to-moderate NSF. For now, avoid anything that uses external XPages libraries or platform-specific things like ODBC in LotusScript. Right-click the NSF and go to "Team Development" → "Set Up Source Control for this Application":

Set up source control in Designer

In the following wizard, give it a name (your choice) and uncheck "Use default location". Pick a destination for your created project, but make sure to put it within an "odp" subfolder of your main project folder - that'll be important later.

Source control wizard

I also uncheck "Go to Navigator view after project is created" because I use Package Explorer for this. It wouldn't hurt to use the Navigator view, though - it's basically the same idea.

At this point, you can close out of Designer if you want - it won't be needed for the rest of this.

Maven Project Setup

Create a new text file called "pom.xml" and put it in the project folder, next to the "odp" directory.

pom.xml placement

Set its contents to this:

<?xml version="1.0"?>
<project
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
    xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>nsfodp-example</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>domino-nsf</packaging>

    <pluginRepositories>
        <pluginRepository>
            <id>artifactory.openntf.org</id>
            <name>artifactory.openntf.org</name>
            <url>https://artifactory.openntf.org/openntf</url>
        </pluginRepository>
    </pluginRepositories>

    <build>
        <plugins>
            <plugin>
                <groupId>org.openntf.maven</groupId>
                <artifactId>nsfodp-maven-plugin</artifactId>
                <version>3.1.0</version>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>
</project>

In a terminal window, go to the project directory (the one containing this "pom.xml") and run mvn install. After a bit of churning, you should see some output ending like this:

[INFO] --- nsfodp-maven-plugin:3.1.0:compile (default-compile) @ nsfodp-example ---
[INFO] Compiling ODP
[INFO] Installing bundles
[INFO] - Installed no bundles
[INFO] Creating destination NSF
[INFO] Importing DB properties
[INFO] Importing basic design elements
[INFO] Importing file resources
[INFO] Importing LotusScript libraries
[INFO] Uninstalling bundles
[INFO] org.openntf.nsfodp.compiler.equinox.CompilerApplication#end
[INFO] Generated NSF: /Users/jesse/Projects/nsfodp-example/target/nsfodp-example-1.0.0-SNAPSHOT.nsf
[INFO]
[INFO] --- maven-install-plugin:3.0.0-M1:install (default-install) @ nsfodp-example ---
[INFO] Installing /Users/jesse/Projects/nsfodp-example/target/nsfodp-example-1.0.0-SNAPSHOT.nsf to /Users/jesse/.m2/repository/com/example/nsfodp-example/1.0.0-SNAPSHOT/nsfodp-example-1.0.0-SNAPSHOT.nsf
[INFO] Installing /Users/jesse/Projects/nsfodp-example/pom.xml to /Users/jesse/.m2/repository/com/example/nsfodp-example/1.0.0-SNAPSHOT/nsfodp-example-1.0.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  9.346 s
[INFO] Finished at: 2020-08-26T10:29:10-04:00
[INFO] ------------------------------------------------------------------------

The specifics will change a bit based on your system, but the main things are to see those "Compiling" and "Importing" lines followed by the "BUILD SUCCESS" banner at the end. If you look in your project directory, you'll see some generated support files and, within the "target" directory, the built NSF:

Build results

Conclusion

And that's it! Probably, at least. You can use this with most classic Notes apps and with XPages apps that just use the built-in components and JARs inside the NSF. Things can get more complex from there, and the repository contains an example of an XPages application that uses an OSGi-based library.

I plan to go into some of those details in future posts. In addition, I will demonstrate how to do this compilation in Jenkins, which allows you to have the NSF built automatically whenever you or someone else on your team commits a change to source control.

Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker

Aug 13, 2020, 6:42 PM

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

The other month, I got my feet wet with Docker after only conceptually following it for a long time. With that, I focused on getting a basic Jakarta EE app up and running with an active Notes runtime by way of the official Domino-on-Docker image provided by HCL.

Since that time, I'd been mulling over another use for it: having it handle the build process of my client's sprawling app. This started to become a more-pressing desire thanks to a few factors:

  1. Though I have the build working pretty well on Jenkins, it periodically blocks indefinitely when it tries to launch the NSF ODP Compiler, presumably due to some sort of contention. I can go in and kill the build, but that's only when I notice it.
  2. The project is focusing more on an Angular-based UI, with a distinct set of programmers working on it, and the process of keeping a consistent Domino-side development environment up and running for them is a real hassle.
  3. Setting up a new environment with a Notes runtime is a hassle even for in-the-weeds developers like me.

The Goal

So I set out to use Docker to solve this problem. My idea was to write a script that would compose a Docker image containing all the necessary base tools - Java, Maven, Make for some reason, and so forth - bring in the Domino runtime from HCL's image, and add in a standard Notes ID file, names.nsf, and notes.ini that would be safe to keep in the private repo. Then, I'd execute a script within that environment that would run the Maven build inside the container using my current project tree.

The Dockerfile

Since I'm still not fully adept at Docker, it's been a rocky process, but I've managed to concoct something that works. I have a Dockerfile that looks like this (kindly ignore all cargo-culting for now):

FROM maven:3.6.3-adoptopenjdk-8-openj9
USER root

# Install toolchain files for the NPM native components
RUN apt update
RUN apt install -y python make gcc g++ openssh-client git

# Configure the Maven environment and permissive root home directory
COPY settings.xml /root/.m2/
COPY build-app.sh /
RUN mkdir -p /root/.m2/repository
RUN chmod -R 777 /root

# Bring in the Domino runtime
COPY --from=domino-docker:V1101_03212020prod /opt/hcl/domino/notes/11000100/linux /opt/hcl/domino/notes/latest/linux
COPY --from=domino-docker:V1101_03212020prod /local/notesdata /local/notesdata

# Some LotusScript libraries use an all-caps name for lsconst.lss
RUN ln -s lsconst.lss /opt/hcl/domino/notes/latest/linux/LSCONST.LSS

# Copy in our stock Notes ID and configuration files
COPY notesdata/* /local/notesdata/

# Prepare a permissive data environment
RUN chmod -R 777 /local/notesdata

The gist here is similar to my previous example, where it starts from the baseline Maven package. One notable difference is that I switched away from the -alpine variant I had inherited from my original Codewind example: I found that I would encounter npm: not found during the frontend build process, and discovered that this had to do with the starting Linux distribution.

The rest of it brings in the core Domino runtime and data directory from the official image, plus my pre-prepared Maven configuration. It also does the fun job of symlinking "lsconst.lss" to "LSCONST.LSS" to account for the fact that some of the LotusScript in the NSFs was written to assume Windows and refers to the include file by that name, which doesn't fly on a case-sensitive filesystem. That was a fun one to track down.

The build-app.sh script is just a shell script that runs several Maven commands specific to this project.
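
The real script is specific to the client project, but its shape is roughly this (the goals here are hypothetical):

#!/bin/sh
set -e
cd /build
# Build the full tree: OSGi plugins, NSF ODP compilation, and tests
mvn clean install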

The Executor Script

The other main component is a Bash script, ./build.sh:

#!/usr/bin/env bash

set -e

mkdir -p ~/.m2/repository
mkdir -p ~/.ssh

# Clean any existing NPM builds
rm -rf ../app-ui/*/node_modules
rm -rf ../app-ui/*/dist

# Set up the Docker workspace
rm -rf scratch
mkdir -p scratch/builder
cp maven/* scratch/builder/
cp -r notesdata-server scratch/builder/notesdata

# Build the image and execute a Maven install
docker build scratch/builder -f build.Dockerfile -t app-build
docker run \
    --mount type=bind,source="$(pwd)/..",target=/build \
    --mount type=bind,source="$HOME/.m2/repository",target=/root/.m2/repository \
    --mount type=bind,source="$HOME/.ssh",target=/root/.ssh \
    --rm \
    --user $(id -u):$(id -g) \
    app-build \
    sh /build-app.sh

This script ensures that some common directories exist for the user, clears out any built Node results (useful for a local dev environment), copies configuration files into an image-building directory, and builds the image using the aforementioned Dockerfile. Then, it executes a command to spawn a temporary container using that image, run the build, and delete the container when done. Some of the operative bits and notes are:

  • I'm using --mount here, as opposed to --volume, mostly because I don't know that much about Docker - though maybe it's the right one for my needs anyway? It works, even if performance on macOS is godawful currently
  • I bring in the current user's Maven repository so that it doesn't have to regenerate the entire world on each build. I'm going to investigate a way to pre-package the dependencies in a cacheable Maven RUN command as my previous example did, but the sheer size of the project and OSGi dependencies tree makes that prohibitive at the moment
  • I bring in the current user's ~/.ssh directory because one of the NPM dependencies references its dependency via a GitHub SSH URL, which is insane and bad but I have to account for it. Looking at it now, I should really mark that one read-only (see the note after this list)
  • The --rm is the part that discards the container after completing, which is convenient
  • I use --user to specify a non-root user ID to run the build, since otherwise Docker on Linux ends up making the target results root-owned and un-deletable by Jenkins. This is also the cause of all those chmod -R 777 ... calls in the Dockerfile. There are gotchas to keep in mind when doing this
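
On that read-only note from above, the fix is just an extra flag on the --mount option:

--mount type=bind,source="$HOME/.ssh",target=/root/.ssh,readonly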

Miscellaneous Other Configuration

To get ODP → NSF compilation working, I had to make sure that Maven knew about the Domino runtime. Fortunately, since it'll now be consistent, I'm able to make a stock settings.xml file and copy that in:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
	<profiles>
		<profile>
			<id>notes-program</id>
			<properties>
				<notes-program>/opt/hcl/domino/notes/latest/linux</notes-program>
				<notes-data>/local/notesdata</notes-data>
				<notes-ini>/local/notesdata/notes.ini</notes-ini>
			</properties>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>notes-program</activeProfile>
	</activeProfiles>
</settings>

Those three are the by-convention properties I use in the NSF ODP Tooling and my Tycho-run test suites to pass information along to initialize the Notes process.

Future Improvements

The main thing I want to improve in the future is getting the dependencies loaded into the image ahead of time. Currently, in addition to sharing the local Maven repository, the command brings in not only the full project structure but also the app-dependencies submodule we use to store giant blobs of p2 sites needed by the build. The "Docker way" would be to compose these in as layers of the image, so that I could skip the --mount bit for them but have Docker's cache avoid the need to regenerate a large dependencies image each time.

I'd also like to pair this with app-runner Dockerfiles to launch the webapp variants of the XPages and JAX-RS projects in Liberty-based containers. Once I get that clean enough, I'll be able to hand that off to the frontend developers so that they can build the full app and have a local development environment with the latest changes from the repo, and no longer have to wonder whether one of the server-side developers has updated the Domino server with some change. Especially when that server-side developer is me, and it's Friday afternoon, and I just want to go play Baba Is You in peace.

In the meantime, though, it works, and works in a repeatable way. Once I figure out how to get Jenkins to read the test results of a freestyle project after the build, I hope to replace the Jenkins build process with this script, which should both make the process more reliable and allow me to run multiple simultaneous builds per node without worrying about deadlocking contention.