Kicking the Tires on Domino 14 and Java 17

Jun 2, 2023, 1:50 PM

Tags: domino java

As promised, HCL launched the beta program for Domino 14 the other day. There's some neat stuff in there, but I still mostly care about the JVM update.

I quickly got to downloading the container image for this to see about making sure my projects work on it, particularly the XPages JEE Support project. As expected, I had a few hoops to jump through. I did some groundwork for this a while back, but there was some more to do. Before I get to some specifics, I'll mention that I put up a beta release of 2.13.0 that gets almost everything working, minus JSP.

Now, on to the notes and tips I've found so far.

AccessController and java.policy

One of the changes in Java 17 specifically is that our old friend AccessController and the whole SecurityManager framework are marked as deprecated for removal. Not a minute too soon, in my opinion - that framework was an old relic of the applet days, was never useful for app servers, and has always been a thorn in the side of XPages developers.

However, though it's deprecated, it's still present and active in Java 17, so we still have to deal with it. One important thing to note is that java.policy moved from the "jvm/lib/security" directory to "jvm/conf/security". A while back, I switched to putting .java.policy in the Domino user's home directory, and this remains unchanged; I suggest going this route even with older versions of Domino, but editing java.policy in its new home will still work.
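
As a reminder of the format, that home-directory file is a standard Java policy file. A blanket grant like the following is blunt, but it's the pragmatic choice most XPages developers end up making:

grant {
	permission java.security.AllPermission;
};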

Extra JARs and jvm/lib/ext

Another change is that "jvm/lib/ext" is gone, as part of a general desire on Java's part to discourage system-wide libraries.

However, since so much Java stuff on Domino is older than dirt, having system-wide custom libraries is still necessary, so the JVM is configured to bring in everything from the "ndext" directory in the Domino program directory in the same way it used to use "jvm/lib/ext". This directory has actually always been treated this way, and so I had already started using it for third-party libraries on older versions; it'll work the same in 14. That said, you should ideally avoid leaning on this when at all possible.

The Missing Java Compiler

I mentioned above that everything except JSP works on Domino 14, and the reason for this is that V14 as it stands doesn't ship with a Java compiler. (JSP, for historical reasons, works much like XPages in that pages are translated to Java and compiled, but this happens on the server at runtime.)

I originally offhandedly referred to this as Domino no longer shipping with a JDK, but I think the real situation was that Domino always used a JRE, but then had tools.jar added in to provide the compiler, so it was something in between. That's why you don't see javac in the jvm/bin directory even on older releases. However, tools.jar there provided a compiler programmatically via javax.tools.ToolProvider.getSystemJavaCompiler() - on a normal JRE, this API call returns null, while on a JDK (or with tools.jar present) it'd return a usable compiler. tools.jar as such doesn't exist anymore, but the same functionality is bundled into the newer-era weird runtime archives for efficiency's sake.
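
As a quick demonstration of the distinction, this little check uses the same call that JSP and the Bazaar rely on, and it's an easy way to see what kind of runtime you're on:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
	public static void main(String[] args) {
		// Returns a usable compiler on a JDK (or with tools.jar present),
		// but null on a compiler-less runtime like the current V14 beta
		JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
		System.out.println(compiler == null ? "No system Java compiler" : "Found compiler: " + compiler);
	}
}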

So this is something of a showstopper for me at the moment. JSP uses this system compiler, and so does the XPages Bazaar. The NSF ODP Tooling project uses the Bazaar to do its XSP -> Java -> bytecode compilation of XPages, and so the missing compiler will break server-based compilation (currently often the most reliable compilation route). And NSF ODP Tooling is actually doubly broken, since importing non-raw Java agents via DXL is also currently broken on Domino 14 for the same reason.

I'm pondering my options here. Ideally, Domino will regain its compiler - in theory, HCL could switch to a JDK and call it good. I think this is the best route both because it's the most convenient for me but also because it's fairly expected that app servers run on a JDK anyway, hence JSP relying on it in the first place.

If Domino doesn't regain a compiler, I'll have to look into something like including ECJ, Eclipse's redistributable Java compiler. I'd actually looked into using that for NSF ODP Tooling on macOS, since Mac Notes lost its Java compiler a long time ago. The trouble is that its mechanics aren't quite the same as the standard compiler's, and it makes some more assumptions about working with the local filesystem. Still, it'd probably be possible to work around that... it might require a lot more extracting of files to temporary directories, which is always fiddly, but it should be doable.
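
Since ECJ implements the same javax.tools interfaces, the fallback could look something like this sketch - assuming the ecj JAR is available on the classpath, and glossing over the filesystem quirks mentioned above:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

import org.eclipse.jdt.internal.compiler.tool.EclipseCompiler;

public class CompilerLocator {
	public static JavaCompiler getCompiler() {
		JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
		if(compiler == null) {
			// ECJ's javax.tools-compatible entry point
			compiler = new EclipseCompiler();
		}
		return compiler;
	}
}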

Still, returning the built-in one would be better, since I'm sure I'm not the only one with code assuming that that exists.

Overall

Despite the compiler bit, I've found things to hold together really well so far. Admittedly, I've so far only been using the container for the integration tests in the XPages JEE project, so I haven't installed Designer or run larger apps on it - it's possible I'll hit limitations and gotchas with other libraries as I go.

For now, though, I'm mostly pleased. I'm quite excited about the possibility of making more-dramatic updates to basically all of my projects. I have a branch of the XPages JEE project where I've bumped almost everything up to their Jakarta EE 10 versions, and there are some big improvements there. Then, of course, I'm interested in all the actual language updates. It's been a very long time since Java 8, and so there's a lot to work with: better switch expressions, text blocks, some pattern matching, records, the much-better HTTP client, and so forth. I'll have a lot of code improvement to do, and I'm looking forward to it.

Weekend Tinkering With Traefik

May 29, 2023, 11:57 AM

Tags: docker

For my D&D group, we've been using the venerable Roll20 for a good long time. It's served us okay, but it's barely improved for our needs over the years and our eyes have been wandering. Specifically, our eyes wandered over to Foundry VTT. Foundry has a lot going for it: it's sharp-looking, it has tons of mods, and you can host it yourself.

So, a bit ago, I set up just such an instance, making a Docker container out of it on one of my Linode servers and configuring my nginx reverse proxy on another Linode to point to it. There was a little fiddling to be done to my usual setup to make sure it passes along the WebSocket stuff, but it worked.

However, when we put it to the test, the DM side seemed slow, in a way that could readily be attributed to the fact that there's an extra network hop between the reverse proxy and the WebSocket destination. To rule that out as a possibility, I decided I should point the DNS directly at the host running it, eliminating the hop.

My first plan was to do the same thing I had with the larger setup, but just locally: spin up nginx and pair it with certbot on a cron job to handle the HTTPS certificates. However, it's been a long time since I developed my current standard setup, and I figured there's probably a nicer way to do it, since this is a very normal case.

Traefik

And so my eyes turned to Traefik, a purpose-built tool for this sort of thing. It has a lot of nice fiddly options, but one of its cleanest uses is to deploy it as a Docker container and have it use the Docker socket for picking up configuration to route to other containers.

I ended up with a Compose configuration that's more-or-less right out of any tutorial you'd find for this:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.10"
    privileged: true
    userns_mode: host
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.leresolver.acme.email=<my email>"
      - "--certificatesresolvers.leresolver.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.leresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "letsencrypt:/letsencrypt"
    networks:
      - traefiknet
networks:
  traefiknet:
    name: traefiknet
    external: true
volumes:
  letsencrypt: {}

You can configure Traefik with configuration files as well, but the route I'm taking is to pass the config I need in the command parameters, so the entire thing is specified in the Compose file. I have it configured here to use Docker for its configuration discovery, to listen on ports 80 and 443, and to enable a Let's Encrypt resolver. On that last point, it really handles basically everything automatically: if you have an app that declares itself as "app.foo.com" on the HTTPS endpoint, Traefik will pick up on that and automatically do the dance with Let's Encrypt to present the certificate.

I created a Docker network named "traefiknet" for this and all participating apps to sit in. You can also do this by using host networking, but I kind of like this way.

Foundry

With that set up, my next step was to configure Foundry to participate in this. I tweaked the Foundry Compose config to remove the published port, join the common network, and to include Traefik information in its labels:

version: "3.8"

services:
  foundry:
    image: felddy/foundryvtt:release
    init: true
    restart: always
    volumes:
      - foundry_data:/data
    networks:
      - traefiknet
    environment:
      - "FOUNDRY_USERNAME=<my username>"
      - "FOUNDRY_PASSWORD=<my password>"
      - "FOUNDRY_ADMIN_KEY=secret-admin-key"
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefiknet"
      
      - "traefik.http.routers.vtt-example.rule=Host(`my.vtt.host`)"
      - "traefik.http.routers.vtt-example.entrypoints=websecure"
      - "traefik.http.services.vtt-example.loadbalancer.server.port=30000"
      - "traefik.http.routers.vtt-example.tls=true"
      - "traefik.http.routers.vtt-example.tls.certresolver=leresolver"
      - "traefik.http.routers.vtt-example.tls.domains[0].main=my.vtt.host"
volumes:
  foundry_data: {}
networks:
  traefiknet:
    name: traefiknet
    external: true

The labels are the meat of it here. I declare that the container participates in the Traefik configuration and will be accessible via the "traefiknet" network I created. Then, I have bits to describe the specific routing. Here, "vtt-example" is an arbitrary name that I picked for this routing config - mostly, it's important that it's distinct from other routing configurations, but otherwise you can pick whatever.

The .rule=Host(`my.vtt.host`) bit is enough to map all requests beneath that host name to this container. There are other ways to do this - by path, by headers, and other things, or a combination thereof - but this suffices for my needs. This handles the normal sensible defaults for such a thing, including passing WebSockets through nicely. With .entrypoints=websecure, I have it opt in to the HTTPS port (left out of this is that I have another container that configures blanket HTTP -> HTTPS redirection for all hosts; see the sketch below). With .loadbalancer.server.port (under "services" instead of "routers"), I can declare that the Foundry app is listening on port 30000 within the container.
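
For reference, the redirection container I allude to is a standard Traefik catch-all setup - mine differs in the details, but it's along these lines:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`)"
  - "traefik.http.routers.http-catchall.entrypoints=web"
  - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
  - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"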

The .tls bits declare that this should get a TLS certificate, that it should use the Let's Encrypt resolver (by the name I chose, "leresolver"), and that it should use the domain I specified for it. In theory, I think it should pick up on that domain from the Host rule, but in my setup that didn't work for me - it's possible that that was just due to teething problems in my config, though.

Conclusion

I haven't yet had the opportunity to see if this fixed the sluggishness problem, but I'm glad it gave me the impetus to tinker with this. While I'll probably keep using nginx for most of my configuration (some of my configs are a lot more fiddly than this), I really like this as a default for on-host routing. Combine that with my general stance that all server software should be deployed in a container unless you have a good reason to do otherwise, and this slots in very nicely. I really like how the configuration is distributed away from the reverse proxy and to the apps that are actually being proxied to. With that, you can see everything you need in one place: you know the proxy is out there somewhere, and now the app's Compose file has everything important right in it. So, if you have a need, I'd say give it a look - it's quite neat.

XPages JEE 2.12.0: JNoSQL Views and PrimeFaces Support

May 25, 2023, 3:08 PM

Tags: jakartaee

Last week, I put up version 2.12.0 of the XPages JEE Support project. Beyond the usual fit-and-finish bits here and there, there are two main improvements in this release.

Jakarta NoSQL Views

A while back, I caved to the necessity of explicit view use in Domino by adding the @ViewEntries and @ViewDocuments annotations that you can use in DominoRepository instances to point to a view to read. In the normal case, this works well: you generally know what the view you want to read from is, and these are made for that purpose.

However, you don't always know the view or folder you want to read from. The classic case here is a mail file: a user can make a bunch of custom views and folders, and so, if you were to make a web UI for this, you'll need some way to read these arbitrarily. So, to account for that, I added two new methods available on all DominoRepository instances:

Stream<T> readViewEntries(
	String viewName,
	int maxLevel,
	boolean documentsOnly,
	ViewQuery viewQuery,
	Sorts sorts,
	Pagination pagination
);

Stream<T> readViewDocuments(
	String viewName,
	int maxLevel,
	boolean distinct,
	ViewQuery viewQuery,
	Sorts sorts,
	Pagination pagination
);

These work similarly to using the annotations: the first three parameters in each correspond to the properties you can set on the annotations, while the last three are the implicitly-supported optional parameters on such a method. The results of calling them are the same as if you had called an annotated method - it's just that the calling code is a bit more detailed.
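
For example, given a DominoRepository<Employee, String> instance, reading an arbitrary user view might look like the below. The view name here is made up, and I'm assuming that -1 for the max level and null for pagination mean "no limit":

Stream<Employee> entries = repo.readViewEntries(
	"Custom View",                   // hypothetical user-created view
	-1,                              // max level
	false,                           // documentsOnly
	ViewQuery.query(),
	Sorts.sorts().asc("lastName"),
	null                             // no pagination
);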

The other piece of this puzzle is that you'll need to know what views are available, say for a sidebar. To account for that, I added this method:

Stream<ViewInfo> getViewInfo();

This will return information about all the views and folders in the database referenced by the DominoRepository instance. It doesn't try to be too smart: it will query all views and folders, without trying to parse out selection formulas for references to the repository's form, since that would be error prone in the normal case and outright wrong in edge cases (like if you have "synthetic" entity types that don't reference a real form at all). The information you get here is what you'd likely expect: view name, whether it's a view or folder, selection formula, column info, and so forth.
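
So the sidebar case could look something like this - though note that the ViewInfo accessor names here are my guesses at the shape of the API, not verbatim:

List<String> folderNames = repo.getViewInfo()
	.filter(info -> info.isFolder())  // hypothetical accessor
	.map(info -> info.getName())      // hypothetical accessor
	.collect(Collectors.toList());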

Jakarta Faces and PrimeFaces

I'm calling this one "PrimeFaces" since that's the immediate goal of these changes, but it's really about allowing for third-party Faces (JSF) extensions and themes without having to jump through too many hoops.

The challenge with PrimeFaces and things like it is that, while the Java packages for JSF no longer conflict with XPages (javax.faces and jakarta.faces are clearly related, but Java considers them entirely distinct), not all of the implementation bits changed. The big one here is WEB-INF/faces-config.xml: that file goes by the same name for XPages and JSF, but any Faces lifecycle participants declared in there (ViewHandlers, PhaseListeners, etc.) are not at all compatible.

To account for this, I've carved out a subdirectory, WEB-INF/jakarta. Within that, you can put JARs in WEB-INF/jakarta/lib and make a file named WEB-INF/jakarta/faces-config.xml. When present, the new-JSF runtime will pick up on these libraries, while XPages won't, and the runtime will also redirect calls to WEB-INF/faces-config.xml from JSF to WEB-INF/jakarta/faces-config.xml. In this way, you're able to have advanced extensions for both frameworks in the same NSF.

This isn't without its necessary workarounds, though. The big one comes in if you want to reference classes from these JSF-specific libraries in Java design elements. Since Designer's classpath won't know about them, your safest bet is to access them reflectively. For example, I ported a JSF example app from rieckpil.de to an NSF. In this, almost all of the code is identical - other than removing some EJB bits (which is not part of the XPages JEE project), the majority of the code was unchanged. However, one of the classes, IndexBean, directly referenced PrimeFaces model classes in order to build the bar chart. Think of that as similar to when you use com.ibm.xsp.model.DataObject in XPages code: it's a UI-specific class that can help bridge the difference between your stuff and the UI. However, since Designer doesn't know about those classes at build time, I had to change the calls to stuff like barChartModelClass.getMethod("setSeriesColors", String.class).invoke(model, "007ad9");. Not unworkable, but definitely ungainly. In a cruel twist of fate, this is exactly the sort of time when a JVM scripting language like SSJS shines. Alas.
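
To make the pattern concrete, the reflective version amounts to something like this (the chart-model class name is from the older PrimeFaces chart API and is my assumption here):

// Build the chart model without compile-time access to the PrimeFaces classes
Class<?> barChartModelClass = Class.forName("org.primefaces.model.chart.BarChartModel");
Object model = barChartModelClass.getConstructor().newInstance();
barChartModelClass.getMethod("setSeriesColors", String.class).invoke(model, "007ad9");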

As a final note, I waffled a bit (and am still waffling) on whether it'd be worth wrapping libraries like PrimeFaces in OSGi bundles, potentially as an optional add-on project. The way it's done here - including the JARs in your "webapp" - is more or less the standard way to do it, but real current projects would use a dependency mechanism like Maven instead of manually adding the JAR. On the other hand, there's a distinct benefit this way in that you can pick your version without having to do anything server-wide, and the use of a side directory means you don't suffer from Designer's poor performance when using JARs on a non-local server. Still, I may at least add an extension point for JSF classpath extensions at some point, since it could be useful.

Next Versions

As I mentioned earlier this month, this project is in some ways waiting for the Domino 14 beta cycle to properly begin, which will allow me to make some significant long-desired changes.

Still, there'll probably be at least another release before 3.x, which is currently named 2.13.0. Beyond ideally having no-app-changes support for Java 17, I've been doing some tinkering with JavaSapi, with the idea of being able to have your app code participate in filtering and authenticating requests. As with anything related to JavaSapi, it's sort of inherently-treacherous territory, considering it's not an official feature of Domino, but I've had some promising (if crash-prone) success so far. I'll probably also want to consolidate some of my handling of individual components and how they're configured in the NSF. There'll be a bigger push for that in 3.x, but for now there's still definitely room for me to go back and clean up some of the ways I've gone about things. The specs I added early (CDI, JAX-RS, etc.) are a bit more ad-hoc than some of the newer ones, with the newer ones coalescing more around the ComponentModule part (Domino's Java conception of a running app, NSF or otherwise) and less around the XPages ApplicationEx part. There's an inherent amount of necessary grime with this stack, but I have some ideas for at least some cleaning.

Otherwise, I'm mostly champing at the bit to do my big revamps in 3.x: lowering the count of individual XPages Libraries that separate the features, bumping specs and implementations to their next major versions, improving the code with Java 9 through 17 enhancements, and so forth. That should be fun.

The Loose Roadmap for XPages Jakarta EE Support

May 4, 2023, 10:29 AM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. Adding Concurrency to the XPages Jakarta EE Support Project
  11. Adding Transactions to the XPages Jakarta EE Support Project
  12. XPages Jakarta EE 2.9.0 and Next Steps
  13. The Loose Roadmap for XPages Jakarta EE Support

At Engage, HCL officially announced Java 17 in Domino 14 (I'm sure they announced other things too, but I have my priorities). This will allow me to do a lot in pretty much all of my projects, but it's particularly pertinent to XPages JEE.

Currently, the project generally targets Jakarta EE 9, which came out in late 2020 and was "just" a switch from javax.* to jakarta.*, with no official new features. However, Jakarta EE 10 came out a year ago - in addition to bringing a raft of new features, it also bumped the minimum Java version to Java 11, pushing it outside of Domino's realm. Accordingly, I've had to hold off on a lot of major- and minor-version bumps in the XPages JEE project as new releases started being compiled for Java 11. Once V14 is out, though, I'll be able to move to the current JEE platform... at least until JEE 11 comes out next year and requires Java 21, anyway.

So I've been working on how I'm going to approach this, and what I'm thinking is that I'll do it in two phases: first, a final 2.x release that provides Java 17/Domino 14 compatibility for existing components, and then a new 3.x breaking-changes release to bring in Jakarta EE 10 components.

The Final 2.x Release

I currently have this penciled in for the next release, 2.12.0, but that may change if I decide I want to get a real 2.12.0 release out before Domino 14 is at least in stable beta form. Let's call it "2.99.0" for now.

The idea here will be that I'll want to make sure all existing code in NSFs continues to work unchanged: upgrade your server to V14, install 2.99.0, and your apps keep working. In theory, this shouldn't be too complex. There's some shimming needed for Weld (the CDI implementation) to account for changes from Project Jigsaw in Java 9 and later, and there might be some stuff around AccessController, but in general I expect it'll just be some tweaks here and there. Time will tell, of course.

Once that's out, I plan to not look back (unless there's demand, I suppose). The switch to Java 17 is a huge deal, and I don't think it'll be worth spending much more time with Java 8 once it's no longer required. The 2.x branch is already, I feel, in a pretty good place, so I'll feel comfortable having a stable final version.

The Breaking 3.0 Release

Then, the plan will be to start down the path of 3.x with breaking changes - not everything, but some. For one, JEE 10 has a handful of backwards-incompatible changes. Those are mostly for legacy true-JEE code, though, and the main ones that XPages JEE code will likely want to be aware of will be the switch of XML namespaces to shorter representations. That will affect JSP and JSF code, but the old URIs (the jcp.org ones) will continue to work, at least for a while.

Most of the breaking changes will probably happen internally. I've talked for a long while now about my desire to do some reorganization of the project. The big one is wrangling the proliferation of Eclipse Features and XPages Libraries. Anyone who has installed the project in Designer is well aware of just how many times you have to click "yes, I want to install the thing I'm installing", and that alone is enough to warrant a reorganization. Beyond that, though, I've had to take care to try to make it so that the individual components don't depend on each other unnecessarily. There's a certain amount of good discipline that provides, but it eventually wears a bit thin.

I'm not quite sure what form the consolidation will take, but it'll probably be something like three features: "core", "extended", and "MicroProfile". "Core" would probably roughly map to the actual Jakarta Core Profile, plus things that I find essentially obligatory like Bean Validation. "Extended" would be all the things like JSP and JSF, the "leaves" on the dependency tree: they depend on core features, but nothing depends on them. Then "MicroProfile" would be, well, MicroProfile features. The only thing still giving me pause is that there's not too much case for not installing all of these all the time anyway - if you don't want to use, say, JSF, you don't have to; additionally, it's not like Domino is a svelte cloud-native mini server meant to be deployed a thousand times in a cluster, so having the extra bundles sitting there isn't really onerous. We'll see. I hem and haw a lot on this, but eventually I'll have to make a decision.

Regardless of what form that takes, I expect that the changes to in-NSF code will be either minimal or none - for users of the project, it'll mostly be a matter of making sure to fully uninstall the old plugins before an upgrade and then tweaking Xsp Properties to select whatever the new form of the XPages Libraries ends up being.

Side Note: Jakarta NoSQL and Data

One interesting aspect of this move will be the path Jakarta NoSQL has been on. Though I've included it in the XPages JEE project for a little while now (and continue to heavily expand on it), it's always been technically a beta release. It's clearly proven itself stable even in its beta form, but it's going through a shift in the run-up to Jakarta EE 11. Specifically, the higher abstraction levels - the Repository interface and friends - are moving to a new project, Jakarta Data. The idea of that project will be that it will be able to sit on top of Jakarta NoSQL and other storage types, namely JPA.

It's going to be very neat, but it's created a bit of a pickle for me. Since it's targeting Jakarta EE 11, that means the releases of it and NoSQL are going to require at least Java 21, and there's no word on when Domino will support that.

One option would be to stick with what I have now for the foreseeable future: a mildly-forked version of Jakarta NoSQL 1.0.0-b4. It's a workhorse and has been doing a good job, and it'd mean that app code wouldn't have to change. I'm not crazy about this for obvious reasons: I don't want to have one component stuck way behind while all the other parts get a nice jump forward, even if it works.

The other main option I'm considering is sliding forward to another beta release and landing there until Java 21 support shows up. The current development versions of the Data spec and JNoSQL with its implementation target Java 17, so I'll probably go with whatever the last beta is before the official switch to 21. Though it's tough to predict the future, that will probably end up being API-wise similar enough to the release forms of them that future jumps won't be difficult. We shall see, anyway.

Timeline

Anyway, the timeline for this is a little vague, and will mostly depend on when the Domino 14 betas come out and whether they contain anything show-stopping. My hope is to be able to have something that passes all the test cases ASAP with betas and then to have it continue to be stable through to the actual release.

I'm looking forward to leaving Java 8 behind for good, though, that much is certain.

Integrating External Java Apps With Keep And Keycloak

May 3, 2023, 9:43 AM

Last year, I wrote a post describing some early work on a Jakarta NoSQL driver for the Domino REST API (hereafter referred to as "Keep" to avoid ambiguity with the various other Domino REST APIs).

I've since picked back up on the project and similar aspects, and I figured it'd be useful to return to provide some more details.

OpenAPI

For starters, I mentioned in passing my configuration of the delightful openapi-generator tool, but didn't actually detail my configuration. It's changed a little since my first work, since I found where you can specify using the jakarta.* namespace.

I use a config.yaml file like:

additionalProperties:
  library: microprofile
  dateLibrary: java8
  apiPackage: org.openntf.xsp.nosql.communication.driver.keep.client.api
  invokerPackage: org.openntf.xsp.nosql.communication.driver.keep.client
  modelPackage: org.openntf.xsp.nosql.communication.driver.keep.client.model
  useBeanValidation: true
  useRuntimeException: true
  openApiNullable: false
  microprofileRestClientVersion: "3.0"
  useJakartaEe: true

That will generate client interfaces that will mostly compile in a plain Jakarta EE project. The files have some references to an implementation-specific MIME class to work around JAX-RS's historical lack of one, but those imports can be safely deleted.

Keycloak/OIDC in Keep

I also mentioned only in passing that you could configure Keep to trust the Keycloak server's public keys with a link to the documentation. Things on the Keep side have expanded since then, and you can now configure Keep to reference Keycloak using Vert.x's internal OIDC support, and also skip the step of creating special fields in your person docs to house the Notes-format DN. For example, in a Keep JSON config file:

{
	"oidc": {
		"my-keycloak": {
			"active": true,
			"providerUrl": "https://my.keycloak.server/auth/realms/myrealm",
			"clientId": "keep-app",
			"clientSecret": "<my secret>",
			"userIdentifierInLdapFormat": true
		}
	}
}

That will cause Keep to fetch much of the configuration information from the well-known endpoint Keycloak exposes, and also to map names from Keycloak from the LDAP-style format of "cn=Foo Fooson,o=SomeOrg" to Domino-style "CN=Foo Fooson/O=SomeOrg". This is useful even when using Domino as the Keycloak LDAP backend, since Domino does the translation in the other direction first.

Keycloak/OIDC in Jakarta EE

In the original post in the series, talking about configuring app authentication for the AppDev Pack, I talked about Open Liberty's openidConnectClient feature, which lets you configure OIDC at the server level. That's neat, and I remain partial to putting authentication at the server level when it makes sense, but it's no longer the only game in town. The version of Jakarta Security that comes with Jakarta EE 10 supports OIDC inside the app in a neat way, and so I've switched to using that.

To do that, you make a CDI bean that defines your OIDC configuration - this can actually be on a class that does other things as well, but I like putting it in its own place:

package config;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.security.enterprise.authentication.mechanism.http.OpenIdAuthenticationMechanismDefinition;
import jakarta.security.enterprise.authentication.mechanism.http.openid.ClaimsDefinition;

@ApplicationScoped
@OpenIdAuthenticationMechanismDefinition(
	clientId="${oidc.clientId}",
	clientSecret="${oidc.clientSecret}",
	redirectURI="${baseURL}/app/",
	providerURI="${oidc.domain}",
	claimsDefinition = @ClaimsDefinition(
		callerGroupsClaim = "groups"
	)
)
public class AppSecurity {
}

There are a couple EL references here. baseURL is provided for "free" by the framework, allowing you to say "wherever the app is hosted" without having to hard-code it. oidc here refers to a bean I made that's annotated with @Named("oidc") and has getters like getClientId() and so forth. You can make a class like that to pull in your OIDC config and secrets from outside, such as a resource file, environment variables, or so forth. providerURI should be the same base URL as Keep uses above.
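
For illustration, that bean can be as simple as the following - reading from environment variables is just my choice here, and the variable names are placeholders:

package config;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("oidc")
public class OidcConfig {
	public String getClientId() {
		return System.getenv("OIDC_CLIENT_ID"); // hypothetical variable names
	}

	public String getClientSecret() {
		return System.getenv("OIDC_CLIENT_SECRET");
	}

	public String getDomain() {
		return System.getenv("OIDC_DOMAIN");
	}
}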

Once you do that, you can start putting @RolesAllowed annotations on resources you want protected. So far, I've been using @RolesAllowed("users"), since my Keycloak puts all authenticated users in that group, but you could mix it up with "admin" or other meaningful roles per endpoint. For example, inside a JAX-RS class:

@Path("superSecure")
@GET
@Produces(MediaType.TEXT_PLAIN)
@RolesAllowed("users")
public String getSuperSecure() {
	return "You're allowed in!";
}

When accessing that endpoint, the app will redirect the user to Keycloak (or your OIDC provider) automatically if they're not already logged in.

Accessing the Token

In my previous posts, I mentioned that I was able to access the OIDC token that the server used by setting accessTokenInLtpaCookie in the Liberty config, and then getting oidc_access_token from the Servlet request object's attributes, and that that only showed up on requests after the first.

The good news is that, with the latest Jakarta Security, there's a standardized way to do this. In a CDI bean, you can inject an OpenIdContext object to get the current user's token:

package bean;

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.security.enterprise.identitystore.openid.OpenIdContext;

@RequestScoped
public class OidcContextBean {
  
	@Inject
	private OpenIdContext context;
  
	public String getToken() {
		// Note: if you don't restrict everything in your app, do a null check here
		return context.getAccessToken().getToken();
	}
}

There are other methods on that OpenIdContext object, providing access to specific claims and information from the token, which would be useful in other situations. Here, I only really care about the token as a string, since that's what I'll send to Keep.

With that token in hand, you can build a MicroProfile Rest Client using the generated API interfaces. For example:

public class SomeClass {
	/* snip */
	@Inject
	private OidcContextBean oidcContext;

	/* snip */

	private DataApi getDataApi() {
		return RestClientBuilder.newBuilder()
			.baseUri(URI.create("http://your.keep.server:8880/api/v1/")) // baseUri takes a java.net.URI
			.register((ClientRequestFilter) (ctx) -> {
				ctx.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + oidcContext.getToken()); //$NON-NLS-1$
			})
			.build(DataApi.class);
	}
}

That will cascade the OIDC token used for your app login over to Keep, allowing your app to access data on behalf of the logged-in user smoothly.

I've been kicking the tires on some example apps and fleshing out the Jakarta NoSQL driver using this, and it's been going really smoothly so far. Eventually, my goal will be to make it so that you can take code using the JNoSQL driver for Domino inside an NSF using the XPages JEE project and move it with minimal changes over to a "normal" JEE app using Keep for access. There'll be a bit of rockiness in that the upstream JNoSQL API is changing a bit to adapt to Jakarta Data and will do so in time for JEE to require Java 21, but at least it won't be too painful a transition.

In Development: Containerized Builds in NSF ODP

Apr 30, 2023, 11:46 AM

Most of my active development happens macOS-side - I'll periodically use Designer in Windows when necessary, but otherwise I'll jump through a tremendous number of hoops to keep things in the Mac realm. The biggest example of this is the NSF ODP Tooling, born from my annoyance with syncing ODPs in Designer and expanded to add some pleasantries for working with ODPs directly in normal Eclipse.

Over the last few years, though, the process of compiling NSFs on macOS has gotten kind of... melty. Apple's progressive locking-down of traditional native loading mechanisms and the general weirdness of the Notes package and its embedded non-JDK JVM have made things get a little weird. I always end up with a configuration that can work, but it's rough going for sure.

Switching to Remote

The switch to ARM for my workspace and the lack of an ARM-native macOS Notes client threw another wrench into the works, and I decided it'd be simpler to switch to remote compilation. Remote operations were actually the first mechanism I added in, since it was a lot easier to have a pre-made Domino+OSGi environment than spinning one up locally, and I've kept things up since.

My first pass at this was to install the NSF ODP bundles on my main dev server whenever I needed them. This worked, but it was annoying: I'd frequently need to uninstall whatever other bundles I was using for normal work, install NSF ODP, do my compilation/export, and then swap back. Not the best.

Basic Container

Since I had already gotten in the habit of using a remote x64 Docker host, I decided it'd make sense to make a container specifically to handle NSF ODP operations. Since I would just be feeding it ODPs and NSFs, it could be almost entirely faceless, listening only via HTTP and using an auto-generated server ID.

The tack I took for this was to piggyback on the work I had already done to make an IT-suite container for the XPages JEE project. I start with the baseline Domino container from the community script, feed it some basic auto-configure params to relax the HTTP upload-size limits, and add a current build of the NSF ODP OSGi plugins to the Domino server via the filesystem. Leaving out the specifics of the auto-config script, the Dockerfile looks like:

FROM hclcom/domino:12.0.2

ENV LANG="en_US.UTF-8"
ENV SetupAutoConfigure="1"
ENV SetupAutoConfigureParams="/local/runner/domino-config.json"
ENV DOMINO_DOCKER_STDOUT="yes"

RUN mkdir -p /local/runner && mkdir -p /local/eclipse/eclipse/plugins

COPY --chown=notes:notes domino-config.json /local/runner/
COPY --chown=notes:notes container.link /opt/hcl/domino/notes/latest/linux/osgi/rcp/eclipse/links/container.link
COPY --chown=notes:notes staging/plugins/* /local/eclipse/eclipse/plugins/

The runner script copies the current NSF ODP build to "staging/plugins" and it all holds together nicely. Technically, I could skip the container.link bit - that's mostly an affectation because I prefer to take as light a touch as possible when modifying the Domino program directory in a container image.
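
(For the curious, a .link file like that is just a one-line pointer to the secondary Eclipse directory:)

path=/local/eclipse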

Automating The Process

While this server has worked splendidly for me, it got me thinking about an idea I've been kicking around for a little while. Since the needs of NSF ODP are very predictable, there's no reason I couldn't automate the whole process in a Maven build, adding a third option beyond local and remote operations where the plugin spins up a temporary container to do the work. That would dramatically lower the requirements on the local environment, making it so that you just need a Docker-compatible environment with a Domino image.

And, as above, my experience writing integration tests with Testcontainers paid off. In fact, it paid off directly: though Testcontainers is clearly meant for testing, the work it does is exactly what I need, so I'm re-using it here. It has exactly the sort of API I want for this: I can specify that I want a container from a Dockerfile, I can add in resources from the current project and generate them on the fly, and the library's scaffolding will ensure that the container is removed when the process is complete.

The path I've taken so far is to start up a true Domino server and communicate with it via HTTP, piggybacking on the existing weird little line-delimited-JSON format I made. This is working really well, and I have it successfully building my most-complex NSFs nicely. I'm not fully comfortable with the HTTP approach, though, since it requires that you can contact the Docker host on an arbitrary port. That's fine for a local Docker runtime or, in my case, a VM on the same local network, where you don't have to worry about firewalls blocking off the random port it opens. I think I could do this by executing CLI commands in the container and copying a file out, which would happen all via the Docker socket, but that'll take some work to make sure I can reliably monitor the status. I have some ideas for that, but I may just ship it using HTTP for the first version so I can have a solid baseline.

Overall, I'm pleased with the experience, and continue to be very happy with Testcontainers even when I'm using it outside its remit. My plan for the short term is to clean the experience up a bit and ship it as part of 3.11.0.

XPages JEE 2.11.0 and the Javadoc Provider

Apr 20, 2023, 9:47 AM

Yesterday, I put two releases up on OpenNTF, and I figure it'd be worth mentioning them here.

XPages Jakarta EE Support

The first is a new version of the XPages Jakarta EE Support project. As with the last few, this one is mostly iterative, focusing on consolidation and bug fixes, but it added a couple neat features.

The largest of those is the JPA support I blogged about the other week, where you can build on the JDBC support in XPages to add JPA entities. This is probably a limited-need thing, but it'd be pretty cool if put into practice. This will also pay off all the more down the line if I'm able to add in Jakarta Data support in future versions, which expands the Repository idiom currently in the NoSQL build I use to cover both NoSQL and RDBMS databases.

I also added the ability to specify a custom JsonbConfig object via CDI to customize the output of JSON in REST services. That is, if you have a service like this:

@GET
@Produces(MediaType.APPLICATION_JSON)
public SomeCustomObject get() {
	return findSomeObject();
}

In this case, the REST framework uses JSON-B to turn SomeCustomObject into JSON. The defaults are usually fine, but sometimes (whether for personal preference or for migration needs) you'll want to customize the output - in particular, to change the behavior from deriving properties from bean getters to reading object fields directly, as Gson does.
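
As a sketch of what that field-based customization looks like, a CDI producer along these lines would provide the custom JsonbConfig:

package config;

import java.lang.reflect.Field;
import java.lang.reflect.Method;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.json.bind.JsonbConfig;
import jakarta.json.bind.config.PropertyVisibilityStrategy;

@ApplicationScoped
public class JsonConfigProvider {
	@Produces
	public JsonbConfig produceConfig() {
		return new JsonbConfig()
			.withPropertyVisibilityStrategy(new PropertyVisibilityStrategy() {
				@Override
				public boolean isVisible(Field field) {
					return true; // serialize object fields directly, Gson-style
				}

				@Override
				public boolean isVisible(Method method) {
					return false; // ignore bean getters
				}
			});
	}
}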

I also expanded view support in NoSQL by adding a mechanism for querying views with full-text searches. This is done via the ViewQuery object that you can pass to a repository method. For example, you could have a repository like this:

public interface EmployeeRepository extends DominoRepository<Employee, String> {
	@ViewEntries("SomeView")
	Stream<Employee> listFromSomeView(Sorts sorts, ViewQuery query);
}

Then, you could perform a full-text query and retrieve only the matching entries:

Stream<Employee> result = repo.listFromSomeView(
	Sorts.sorts().asc("lastName"),
	ViewQuery.query()
		.ftSearch("Department = 'HR'", Collections.singleton(FTSearchOption.EXACT))
);

Down the line, I plan to add this capability for whole-DB queries, but (kind of counter-intuitively) that would get a bit fiddlier than doing it for views.

XPages Javadoc Provider

The second one is a new project, the XPages Javadoc Provider. This is a teeny-tiny project, though, not even containing any Java code. This is a plugin for either Designer or normal Eclipse and it provides Javadoc for some standard XPages classes - specifically, those covered in the official Javadoc for Designer and the XPages Extensibility APIs. This covers things like com.ibm.commons and the core stuff from com.ibm.xsp, but doesn't cover things like javax.faces.* or lotus.domino.

The way this works is that it uses Eclipse's Javadoc extension point to tell Designer/Eclipse that it can find Javadoc for a couple bundles via the hosted version, really just linking the IDE to the public HTML. I went this route (as opposed to embedding the Javadoc in the plugin) because the docs don't explicitly say they're redistributable, so I have to treat them as not. Interestingly, the docs are actually still hosted at public.dhe.ibm.com. If HCL publishes them on their site or makes them officially redistributable, I'll be able to update the project, but for now it's relying on nobody at IBM remembering that they're up there.

In any event, it's not a huge deal, but it's actually kind of nice. Being able to have Javadoc for things like XspLibrary removes a bit of the guesswork in using the API and makes the experience feel just a bit better.

Dipping My Feet Into DKIM and DMARC

Apr 10, 2023, 10:56 AM

Tags: admin

For a very long time now, I've had my mail set up in a grandfathered-in free Google Whatever-It's-Called-Now account, which, despite its creepiness, serves me well. It's readily supported by everything and it takes almost all of the mail-hosting hassle out of my hands.

Not all of the hassle, though, and over the past couple weeks I decided that I should look into configuring DKIM and DMARC, first for my personal mail and (if it doesn't blow up) for my company mail. I had set up SPF a couple years back, and I figured it was high time to finish the rest.

As with any admin-related post, keep in mind that I'm just tinkering with this stuff. I Am Not A Lawyer, and so forth.

The Standards

DKIM is a neat little standard. It's sort of like S/MIME's mail-signing capabilities, except less hierarchical and more commonly enforced on the server than on the client. That "sort of" does some heavy lifting, but it should suit to think of it like that. What you do is have your server generate a keypair (Google has a system for this), take the public key from that, and stick it in your DNS configuration. The sending server will then add a header to outgoing messages with a signature and a lookup key - in turn, the receiving server can choose to look up the key in the claimed DNS to verify it. If the key exists in DNS and the signature is valid, then the receiver can at least be confident that the sender is who they say they are (in the sense of having control of a sending server and DNS, anyway). Since this signing is server-based, it requires a lot less setup than S/MIME or GPG for mail users, though it also doesn't confer all the benefits. Neat, though.
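
The DNS side of that is just a TXT record at a "selector" subdomain containing the public key - something like this, with a placeholder domain and a truncated key:

default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFA..."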

DMARC is an interesting thing. It kind of sits on top of SPF and DKIM and allows an admin to define some requested handling of mail for their domain. You can explicitly state that you expect your SPF and DKIM records to be enforced and provide some guidance for recipient servers to do so. For example, you might own "foo.com" and go whole-hog: declare that your definitions are complete and that remote servers should outright reject 100% of email claiming to be from "foo.com" that either doesn't come from a server enumerated in your SPF record or lacks a valid DKIM signature. Most likely, at least when rolling it out, you'll start softer, maybe saying to not reject anything, or to quarantine some percentage of failing messages. It's a whole process, but it's good that gradual adoption is built in.

Interestingly, DMARC also lets you request that servers that received mail from "you" email you summaries from time to time. These generally (always?) take the form of a ZIP attachment containing an XML file. In there, you'll get a list of servers that contacted them claiming to be you and a summary of the pass/fail state of SPF and DKIM for them. This has been useful - I found that I had to do a little tweaking to SPF for known-good servers. This is vital for a slow roll-out, since it's very difficult to be completely sure you got everything when you first start setting this stuff up, and you don't want to too-eagerly poison your outgoing mail.
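
Putting that together, a mid-rollout DMARC record might look like the below (placeholder domain again): quarantine a quarter of failing mail, and send aggregate reports my way:

_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"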

Configuring

Really, configuring this stuff wasn't bad. I mostly followed Google's guides for DKIM and DMARC, which are pretty clear and give you a good plan for a slow rollout.

Though Google is my main sender, I still have some older agents that might send out mail for my old ID from time to time from Domino, so I wanted to make sure that was covered too. Fortunately, Domino supports DKIM as well, and setting it up wasn't too bad. Admittedly, the process is a little more "raw" than with Google's admin site, but it's not like I'm uncomfortable with a CLI-based approach, and it's in line with other recent-era security additions using the keymgmt tool, like shared DAOS encryption.

It just came down to following the instructions in HCL's docs, and it worked swimmingly. If you have a document in your cred store that matches an INI-configured "domain to ID" value for outgoing mail, Domino will use it. Like how DMARC has a slow-rollout system built in, Domino lets you choose between signing mail just when available or being harsher about it, refusing to send out any mail it doesn't know how to sign. I'll probably switch to the second option eventually, since it sounds like a good way to ensure that your server is being a good citizen across the board.

Conclusion

In any event, this is all pretty neat. It's outside my bailiwick, but it's good to know about it, and it also helps reinforce a pub-key mental model similar to things like OIDC. It also, as always, just feels good to check a couple more boxes for being a good modern server.

Quick Tip: Stashing Log Files From Domino Testcontainers

Mar 28, 2023, 11:36 AM

Tags: docker

I've been doing a little future-proofing in the XPages Jakarta EE project lately and bumped against a common pitfall in my test setup: since I create a fresh Domino Testcontainer with each run, diagnostic information like the XPages log files are destroyed at the end of each test-suite execution.

Historically, I've combated this manually: if I make sure to not close the container and kill the Ryuk watcher container the framework spawns before testing is over, then the Domino container will linger around. That's fine and all, but it's obviously a bit crude. Plus, other than when I want to make subsequent HTTP calls against it, I generally want the same stuff: IBM_TECHNICAL_SUPPORT and the Equinox logs dir.

Building on a hint from a GitHub issue reply, I modified my test container to add a hook to its close event to copy the log files into the IT module's target directory.

In my DominoContainer class, which builds up the container from my settings, I added an implementation of containerIsStopping:

@SuppressWarnings("nls")
@Override
protected void containerIsStopping(InspectContainerResponse containerInfo) {
	super.containerIsStopping(containerInfo);
		
	try {
		// If we can see the target dir, copy log files
		Path target = Paths.get(".").resolve("target"); //$NON-NLS-1$ //$NON-NLS-2$
		if(Files.isDirectory(target)) {
			this.execInContainer("tar", "-czvf", "/tmp/IBM_TECHNICAL_SUPPORT.tar.gz", "/local/notesdata/IBM_TECHNICAL_SUPPORT");
			this.copyFileFromContainer("/tmp/IBM_TECHNICAL_SUPPORT.tar.gz", target.resolve("IBM_TECHNICAL_SUPPORT.tar.gz").toString());
				
			this.execInContainer("tar", "-czvf", "/tmp/workspace-logs.tar.gz", "/local/notesdata/domino/workspace/logs");
			this.copyFileFromContainer("/tmp/workspace-logs.tar.gz", target.resolve("workspace-logs.tar.gz").toString());
		}
	} catch(IOException | UnsupportedOperationException | InterruptedException e) {
		e.printStackTrace();
	}
}

This will tar/gzip up the logs en masse and drop them in my project's output:

Screenshot of the target directory with logs copied

Having this happen automatically should save me a ton of hassle in the cases where I need this, and I figured it was worth sharing in case it's useful to others.

JPA in the XPages Jakarta EE Project

Mar 18, 2023, 11:55 AM

For a little while now, I'd had an issue open to implement Jakarta Persistence (JPA) in the project.

JPA is the long-standing API for working with relational-database data in JEE and is one of the bedrocks of the platform, used by presumably most normal apps. That said, it's been a pretty low priority here, since the desire to write applications based on a SQL database but running on Domino could be charitably described as "specialized". Still, the spec has been staring me in the face, maybe it'd be useful, and I could pull a neat trick with it.

The Neat Trick

When possible, I like to make the XPages JEE project act as a friendly participant in the underlying stack, building on good use of the ComponentModule system, the existing app lifecycle, and so forth. This is another one of those areas: XPages (re-)gained support for relational data over a decade ago and I could use this.

Tucked away in the slide deck that ships with the old ExtLib is this tidbit:

Screenshot of a slide, highlighting 'Available using JNDI'

JNDI is a common, albeit creaky, mechanism used by app servers to provide resources to apps running on them. If you've done LDAP from Java, you've probably run into it via InitialContext and whatnot, but it's used for all sorts of things, DB connections included. What this meant is that I could piggyback on the existing mechanism, including its connection pooling. Given its age and lack of attention, I imagine that it's not necessarily the absolute best option, but it has the advantage of being built in to the platform, limiting the work I'd need to do and the scope of bugs I'd be responsible for.
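
In practice, that means a connection is one JNDI lookup away - roughly like this, where the "jdbc/somedb" name is a placeholder corresponding to an XPages JDBC connection configuration:

import java.sql.Connection;

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ConnectionUtil {
	public static Connection getConnection() throws Exception {
		// The JNDI name matches the XPages-side JDBC connection configuration
		DataSource ds = (DataSource) new InitialContext().lookup("jdbc/somedb");
		return ds.getConnection(); // pooled by the underlying XPages machinery
	}
}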

Implementation

With one piece of the puzzle taken care for me, my next step was to actually get a JPA implementation working. The big, go-to name in this area is Hibernate (which, incidentally, I remember Toby Samples getting running in XPages long ago). However, it looks like Hibernate kind of skipped over the Jakarta EE 9 target with its official releases: the 5.x series uses the javax.persistence namespace, while the 6.x series uses jakarta.persistence but requires Java 11, matching Jakarta EE 10. Until Domino updates its creaky JVM, I can't use that.

Fortunately, while I might be able to transform it, Hibernate isn't the only game in town. There's also EclipseLink, another well-established implementation that has the benefits of having an official release series targeting JEE 9 and also using a preferable license.

And actually, there's not much more to add on that front. Other than writing a library to provide it to the NSF and a resolver to account for OSGi's separation, I didn't have to write a lot of code.

Most of what I did write was the necessary code and configuration for normal JPA use. There's a persistence.xml file in the normal format (referencing the source made by the XPages JDBC config file), a model class, and then access using the normal API.
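
As a sketch of that last part, a REST resource in the vein of my test case is below - the persistence-unit name and Company entity are placeholders, and a real version would want to cache the factory rather than rebuild it per request:

package rest;

import java.util.List;

import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.Persistence;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("companies")
public class CompaniesResource {
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public List<Company> get() {
		// "JakartaNSF" is a placeholder unit name; Company is the JPA model class mentioned above
		EntityManagerFactory emf = Persistence.createEntityManagerFactory("JakartaNSF");
		try {
			EntityManager em = emf.createEntityManager();
			try {
				return em.createQuery("SELECT c FROM Company c", Company.class).getResultList();
			} finally {
				em.close();
			}
		} finally {
			emf.close();
		}
	}
}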

In a normal full app server, the container would take care of some of the dirty work done by the REST resource there, and that's something I'm considering for the future, but this will do for now.

Writing Tests

One of the neat side effects is that, when I went to write the test case for this, I got to make better use of Testcontainers. I'm a huge fan of Testcontainers and I've used it for a good while for my IT suites, but I've always lost a bit by not getting to use the scaffolding it provides for common open-source projects. Now, though, I could add a PostgreSQL container alongside the Domino one:

postgres = new PostgreSQLContainer<>("postgres:15.2")
	.withUsername("postgres")
	.withPassword("postgres")
	.withDatabaseName("jakarta")
	.withNetwork(network)
	.withNetworkAliases("postgresql");

Here, I configure a basic Postgres container, and the wrapper class provides methods to specify the extremely-secure username and password to use, as well as the default database name. Here, I pass it a network object that lets it share the same container network space as the Domino server, which will then be able to refer to it via TCP/IP as the bare name "postgresql".

The remaining task was to write a method in the test suite to make sure the table exists. You can do this in other ways - Testcontainers lets you run init scripts via URL, for example - but for one table this suits me well. In the test class where I want to access the REST service I wrote, I made a @BeforeAll method to create the table:

@BeforeAll
public static void createTable() throws SQLException {
	PostgreSQLContainer<?> container = JakartaTestContainers.instance.postgres;
		
	try(Connection conn = container.createConnection(""); Statement stmt = conn.createStatement()) {
		stmt.executeUpdate("CREATE TABLE IF NOT EXISTS public.companies (\n"
				+ "	id BIGSERIAL PRIMARY KEY,\n"
				+ "	name character varying(255) NOT NULL\n"
				+ ");");
	}
}

Testcontainers takes care of some of the dirty work of figuring out and initializing the JDBC connection for me. That's not particularly-onerous work, but it's one of the small benefits you get when you're doing the same sort of thing other users of the tool are doing.

With that, everything went swimmingly. Domino saw the Postgres container (thanks to copying the JDBC driver to the classpath) and the JPA access worked just the same as it does in my real environment.

Like with the implementation, there's not much there beyond "yep, do the things the docs say and it works". Though there were the usual hurdles that I've gotten used to with adding things like this to Domino, this all went pleasantly smoothly. I may build on this in the future - such as the aforementioned server-managed JPA bits - but that will depend on whether I or others have need. Regardless, I'm glad it's in there.