Next Steps With the Open Liberty Runtime

Mar 12, 2021, 11:37 AM

Tags: liberty
  1. Options for the Future of the Domino Open Liberty Runtime
  2. Next Steps With the Open Liberty Runtime
  3. Rapid Progress in Open-Liberty-Runtime Land

About a year and a half ago, I wrote a post musing about my options with the future of the Domino Open Liberty Runtime project. It's been serving me well - I still use it here and with a client CI setup - but it hasn't quite hit its full potential yet.

Its short-term goal was easy enough to accomplish: I wanted a good way to run modern Servlet apps using an active Domino runtime, and that works great. Its long-term goal takes more work, though: becoming the clear best way to do "web stuff" on Domino. There are a lot of definitions for what that might be, and that "on Domino" bit may not even be the way one would want to go about doing it. Still, I think there's potential there.

So, this week, I decided to go back in and see if I could spruce it up a bit.

The Core of Domino Java HTTP

This started with me musing a bit on Twitter about what the true lowest-level entrypoint in Domino's HTTP stack is, and where the border between native and Java lies. After overthinking it a bit, I found that the answer was obvious from any stack trace: XspCmdManager.

From what I gather, the native HTTP task (which is much more opaque than the Java part) loads up its JVM, uses the code in xsp.http.bootstrap.jar to initialize the OSGi environment, asks that environment for the com.ibm.domino.xsp.bridge.http bundle, and uses XspCmdManager in there to handle the layering.

That class has a couple of public methods, but two are of immediate interest: isXspUrl and service. isXspUrl is called for each incoming HTTP request. If that returns false, then nHTTP goes about its normal business like it always did; if it returns true, then nHTTP calls service with a bunch of handle parameters and lets the Java runtime take it from there.

That got me to tinkering. Since that class is in an OSGi bundle, you can readily "outrank" it by having another bundle with the same name and a higher version available. Then, since the class and its methods are just called by strings (more or less), you can have other classes with the same names and APIs in place to do whatever you want. And, such as it is, that works well: you can pretty readily inject whatever code you want into the isXspUrl and service methods and have it take over.

However, that doesn't actually buy you much. What I'd really want to do would be to improve on the actual HTTP server - HTTP/2 support, web sockets, all that - and the Java layer only comes into play after nHTTP has received and started interpreting the connection as an HTTP request. You're not given the raw incoming stream. Additionally, there's not actually any real need to override this low level: the HttpService classes you can register via the com.ibm.xsp.adapter.serviceFactory extension point can choose to handle any incoming URL directly at essentially the same low level as XspCmdManager.

So, while that was fun to poke around with, I don't think there's anything really to be gained there.

Reverse Proxy Improvements

So I went back to an older idea I had kicking around for spawning an all-encompassing reverse proxy. The project has had a lesser version of this for a good while, originally as a WAR file you could add to a Liberty server and then later as a lower-level Liberty feature. However, the way that worked was limited: it would allow you to proxy non-matched requests to a Liberty server to Domino, but didn't do anything to coordinate multiple servers beyond that. Additionally, being a Liberty feature, it limited my future options, such as genericizing the project to work with other app servers.

For my next swing at the problem, I went with Undertow, which is an embeddable Java web server in many ways similar to Jetty, and which is (I gather) the core HTTP part of Wildfly. What made Undertow appealing to me was its modern standards compliance, its relatively-low dependency footprint, and its built-in reverse proxy handler. Additionally, since it's Java, that meant I could embed it in the running JVM without spawning yet another process, hopefully making things all the more reliable.

To go with this, the config DB sprouted some more configuration options:

(Screenshot: Reverse proxy config)

Along with configuration you explicitly set there and in the individual Liberty server configurations, I have the proxy pick up Domino connection information from names.nsf, allowing it to avoid inconvenient extra environment variables or flags.

And, so far, this has been working splendidly. Undertow's configuration is pretty straightforward, and it wasn't too bad to configure it with prefix matching for the context roots of opted-in apps.
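To give a sense of the shape of it, here's a minimal sketch of that kind of Undertow setup. The ports, host names, and context root here are hypothetical stand-ins - in the real project, those values come from the config DB and names.nsf - but the handler wiring is the interesting part: a PathHandler does the prefix matching for opted-in apps, and anything unmatched falls through to a proxy for Domino's own HTTP:

import java.net.URI;

import io.undertow.Handlers;
import io.undertow.Undertow;
import io.undertow.server.HttpHandler;
import io.undertow.server.handlers.PathHandler;
import io.undertow.server.handlers.proxy.LoadBalancingProxyClient;
import io.undertow.server.handlers.proxy.ProxyHandler;

public class ProxySketch {
    public static void main(String[] args) throws Exception {
        // Anything not otherwise matched goes to the local Domino HTTP task
        HttpHandler dominoProxy = ProxyHandler.builder()
            .setProxyClient(new LoadBalancingProxyClient().addHost(new URI("http://localhost:80")))
            .build();
        // An opted-in Liberty app, matched below by its context root
        HttpHandler appProxy = ProxyHandler.builder()
            .setProxyClient(new LoadBalancingProxyClient().addHost(new URI("http://localhost:9080")))
            .build();

        // Prefix matching: "/exampleapp" goes to the app, everything else to Domino
        PathHandler paths = Handlers.path(dominoProxy)
            .addPrefixPath("/exampleapp", appProxy);

        Undertow server = Undertow.builder()
            .addHttpListener(8080, "0.0.0.0")
            .setHandler(paths)
            .build();
        server.start();
    }
}

Since Undertow runs in-process, that whole arrangement lives inside the same JVM as the rest of the runtime, which is exactly the appeal.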

The Next Overall Goal

There's more work to do, beyond just finishing the basic implementation here. I'd really like to get it to a point where you can use this to deploy (at least) WAR-based apps "to Domino" without having to think too much about it, like how you don't have to think about deploying an NSF-based app. It should be thoroughly doable to have the reverse proxy pick up its certificate chain from Domino if desired (especially with the revamped capabilities coming in V12), and some recent changes I made make app deployment noticeably smoother than previously.

Certainly, this sort of project has some inherent limitations compared to nHTTP, but this feels like it's getting a lot closer to a direct upgrade and less like a janky proof-of-concept.

Carving Out A Workspace On Apple Silicon

Feb 17, 2021, 11:24 AM

Last month, I mentioned my particular computer trouble, in that my trusty iMac Pro has been afflicted by an ever-worsening fan noise problem. I'd just been toughing it out, since there's never a good time to lose your main machine for a week or two, and my traveler MacBook Escape wasn't up to the task of being a full replacement.

After about a month's delay, my fresh new M1 MacBook Air arrived a few weeks ago and I've been putting it through its paces.

The Basics

As pretty much anyone who has one of these computers has said, the performance is outstanding. Even with emulation, most of the tasks I do during the day feel the same as they did on my wildly-more-expensive iMac Pro. On top of that, the fact that this thing doesn't even have a fan is both a technical marvel and a godsend as far as ambient room noise is concerned.

For continuity's sake, I used Migration Assistant to bring over my iMac's environment, and everything there went swimmingly. The good-citizen apps I use like MarsEdit and Tower were already ported to ARM, while the laggards (unsurprisingly, the ones made by larger companies with more resources) remain Intel-only but run just fine in emulation.

Hardware

For a good while now, I've had the iMac screen flanked by a pair of similarly-sized but far-inferior Asus screens. With the iMac's lovely screen out of the setup for now, I've switched to using those two Asus screens as my primary ones, with the pretty-but-tiny laptop screen sitting beneath them. It works well enough, though I do miss the retina resolution and general brightness of the iMac.

The second external screen itself was a bit of an issue. These M1 Macs, either for good reason or to mark them as low end, support only two screens total, the laptop screen included. So I ended up ordering one of the StarTech DisplayLink adapters. I expected it to be a crappy experience overall, with noticeable lag, but it actually works much more smoothly than I'd have expected. Other than the fact that it doesn't support Night Shift and some wake-from-sleep slowness that I attribute to it, it actually feels just like a normally-attached monitor.

In order to regain my precious Ethernet connection and (sort of) clean up the dongle situation, I also got one of these Anker USB-C docks. I've only had it for a day, but it seems to be working like you'd want so far. So that's nice.

Eclipse and Java

Here's where I've hit my first bout of jankiness, though it's not too surprising. In general, Eclipse and Java work just fine through emulation, and I can even keep running tests and web servers using the libnotes.dylib from the Notes client as I want.

I've found times where tests lag or fail now when they didn't before, though, and that's a little ominous. Compiling locally with NSF ODP, which spawns a sub-process that loads the Notes libraries, usually works, though now I've set up another Domino server on my network to handle that reliably.

I've also noticed some trouble in one of my Eclipse workspaces where it periodically spends a long time (10+ minutes) "Building" without explaining what exactly it's doing, and this is new behavior since the switch. I can't say what the core trouble is there. It's my largest active workspace, so it could be that file polling or other system-call-intensive work is just slower, or it could be an artifact of moving it from machine to machine. I'll probably scrap it and make a new workspace with the same projects to see if it alleviates it.

This all should improve in time, though, when Eclipse, AdoptOpenJDK, and HCL all release macOS ARM ports. IntelliJ has an experimental ARM port out, and I'm curious how that does its thing. I'll probably spend some time kicking the tires on that, though I still find Eclipse's UI much more conducive to the "lots of semi-related projects" working style I have. Visual Studio Code is in a similar boat, so that'll be good for the JavaScript development I do (under protest).

In the mean time, I've done some tinkering with how I could get a fully-native Eclipse environment running and showing up on my Mac, including firing up the venerable XQuartz to run Eclipse as an X client from a Linux VM in the basement. While that technically works, the experience is... well, I'll charitably call it "not Mac-like". Still, it's kind of neat and would in theory push aside any number of concerns.

Docker

Here's the real trouble I'm butting my head against. I've taken to using Docker more and more for various reasons: running app servers with a Domino runtime, running Domino outright, and (where my trouble is now) performing cross-compilation and other native-specific compilation tasks. For example, for one of my clients, I have a script that mounts the project directory to a Docker container to perform a full Maven build with NSF compilation and compile-time tests, without having to worry about the user's particular Notes or Domino installation.

However, while Docker is doing Herculean work to smooth the process, most of the work I do ends up hitting one of the crashing snags in poor qemu, which crop up particularly with Java compilation tasks. Since compiling Java is basically all I do all day, that leaves me hoping either for improvements in future versions or a Linux/aarch64 port of Domino (or at least libnotes.so).

In the mean time, I'm making use of Docker's network transparency to run Docker on an x64 VM and set DOCKER_HOST locally to point to it. For about half of what I need, this works great: I can run Domino servers and Notes-enabled webapps this way, and I just change which address I'm pointing to to interact with them. However, it naturally removes the possibility of connecting with the local filesystem, at least without pairing it with some file-share jankiness, so it's not a replacement all around. It also topples quickly into the bizarre inner Docker world: for example, I wanted to set up Codewind to work remotely, but the instructions I found for getting started with your own server were not helpful.

Future Use

Still, despite the warts, I'd say this laptop is performing admirably, and better than one would normally expect. Plus, it's a useful exercise in finding more ways to make my workflow less machine-specific. Though I still bristle at the thought of going full Eclipse Che and working out of a web browser, at least moving some more aspects of my workspace to float above the rough seas is just good practice.

I'll probably go back to using the iMac Pro as my main machine once I get it fixed, even if only for the display, but this humble, low-end M1 has planted its flag more firmly than a MacBook Air normally has any right to.

Java Travelogue: The Care and Feeding of Locales

Feb 14, 2021, 1:37 PM

Tags: java
  1. Java Hiccups
  2. Bitwise Operators
  3. Java Grab Bag 2
  4. Java Travelogue: The Care and Feeding of Locales
  5. More Notes on Filesystem and Charset Portability

Over time, people using the NSF ODP Tooling project have periodically hit troubles with files using non-ASCII filenames, as well as some related encoding issues.

Now, I know what you're thinking: why don't people hitting this trouble just be Americans and not use languages with accents? And yes, obviously, that's the optimal solution. However, given that, apparently, most people on the planet are not American, it's for the best to not write software that completely falls apart when encountering an umlaut.

When working to fix this, I found some areas where the fix was pretty obvious, and others where the trouble was a bit more insidious. I figure it'll be potentially useful to write these down, either for others running into similar trouble or my own future self next time I write overly-American code.

Early Encounters: ZIP Files

The earliest place people encountered trouble was with the handling of ZIP files when transferring packages around. When compiling remotely, the local Maven plugin ZIPs up the ODP and related support files (OSGi sites, etc.) for transfer to the server, which then unzips them. This ran into the problem that the handling of file names in ZIP files is wildly inconsistent across platforms and locales.

Fortunately, this one has a clean fix: when using ZipOutputStream and ZipInputStream (which were my preferred mechanisms), you can specify your encoding:

try(OutputStream fos = Files.newOutputStream(packageZip, StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
    try(ZipOutputStream zos = new ZipOutputStream(fos, StandardCharsets.UTF_8)) {
        // Add entries to the ZIP here
    }
}

// And to read:
try(InputStream is = Files.newInputStream(zipFilePath)) {
    try(ZipInputStream zis = new ZipInputStream(is, StandardCharsets.UTF_8)) {
        // Iterate over entries here
    }
}

Since I control both sides of the operation in this case, I can then be confident that it will use UTF-8 across the board.

Next Problem: Filesystem Restrictions

The next problem I ran into actually happened when I was setting up a compiler server in a Docker container. One of the design elements in the example projects is an agent containing umlauts, based on a reported problem. When I tried compiling this project in a Docker-housed Domino server, I ran into this trouble:

java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters: Code/Agents/Example Agent with ref?r?ns.fa
    at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
    at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
    at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
    at sun.nio.fs.AbstractPath.resolve(AbstractPath.java:53)
    at org.openntf.nsfodp.compiler.servlet.ODPCompilerServlet.expandZip(ODPCompilerServlet.java:241)

Basically, it was trying to write out what it considered an illegal filename and choked on it.

I first spent some time double-checking my ZIP handling, since I was assuming that the trouble was that the name it got out of the ZIP file was corrupted, hence the "?" instead of "ë". This search brought me to this Stack Overflow question, which is asking about the same exception and which talks about the locale of the underlying system. The gist of it is that Java uses a semi-standard property (sun.jnu.encoding) to interpret a lot of things, filename mapping included, and it derives this from the system locale.

I hopped into the Domino container to see what locale it uses (by way of echo $LANG) and saw that it's "C.utf8". I like the sound of that "utf8" part, but the "C" part is different from the comfy "en_US" that I'm used to, and likely causes Java to be more restrictive. Uncharacteristically, the typical "en_US" setup actually avoids this trouble, causing Java NIO to allow all sorts of characters in filenames.
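As a quick diagnostic, a snippet like this shows what a given JVM has derived from its environment. It's just a hypothetical sanity check - and sun.jnu.encoding is an internal, non-standard property - so it's for inspection rather than something to code against:

import java.nio.charset.Charset;

public class LocaleCheck {
    public static void main(String[] args) {
        // These values are derived from the OS locale at JVM startup
        System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
        System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
        System.out.println("default charset  = " + Charset.defaultCharset());
    }
}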

So I started seeing what I could do by way of setting ENV variables as part of the Dockerfile, but then realized that it'd be better to fix this in a way that doesn't depend on external configuration like that.

Java NIO

Here I realized that I didn't actually need to write these files out to the filesystem at all. Over a year ago, I wrote part 1 of an unfinished series talking about the Java NIO filesystem API from Java 7. That API exists for a number of reasons, and the best way to dive into it is to replace your uses of java.io.File, java.io.FileInputStream, etc. with it, which I did in the NSF ODP Tooling a while ago.

What struck me, then, was that this earlier work also separated out the specifics of filesystem access. And, critically, Java ships with a ZIP file system provider that lets you point at a ZIP or JAR file and treat it like any old filesystem. The on-disk project representation I wrote for the compiler uses this NIO API as its entrypoint. By skipping the step of extracting the ODP from the ZIP to the filesystem, I could remove that entire problem from my view.
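As a rough sketch of the approach (the file names here are hypothetical, and the real on-disk-project code is more involved), opening the uploaded ZIP as a filesystem looks something like this:

import java.net.URI;
import java.nio.file.DirectoryStream;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Collections;
import java.util.Map;

public class ZipFileSystemExample {
    public static void main(String[] args) throws Exception {
        Path packageZip = Paths.get("odp-package.zip"); // hypothetical uploaded package

        // "jar:" plus the file URI selects the ZIP filesystem provider;
        // "create"="true" is needed for the provider to open (or create) the archive
        URI zipUri = URI.create("jar:" + packageZip.toUri());
        Map<String, String> env = Collections.singletonMap("create", "true");

        try (FileSystem zipFs = FileSystems.newFileSystem(zipUri, env)) {
            // Paths inside the archive must come from the ZIP filesystem itself,
            // not from Paths.get(...), which uses the default filesystem
            Path odpRoot = zipFs.getPath("/");
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(odpRoot)) {
                for (Path entry : entries) {
                    System.out.println("Found entry: " + entry);
                }
            }

            // Files.copy works across providers, so pulling an entry out to the real
            // filesystem (e.g. a nested JAR for the classpath) is a one-liner - note
            // the toString() to avoid mixing Path objects from different providers
            Path plugin = zipFs.getPath("/odp-support/some-plugin.jar"); // hypothetical entry
            if (Files.exists(plugin)) {
                Path tempDir = Files.createTempDirectory("odp");
                Files.copy(plugin, tempDir.resolve(plugin.getFileName().toString()),
                    StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}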

The Fiddly Parts

This process was mostly smooth, but there are a few fiddly parts that I had to account for:

  1. You have to use newFileSystem when you crack open a ZIP this way, rather than trying to open it by "jar:file" URL directly. Additionally, you have to pass a Map of options including "create":"true" to make it work.
  2. Paths.get, which is a common mechanism for creating either a full or relative path, is a bit insidious. Since those paths are created using the default system filesystem, you can't just pass them to methods like resolve for paths created from another filesystem type. Accordingly, I replaced uses of that with methods based on a context filesystem.
  3. Nested ZIPs aren't supported. That is, they exist like other files in there, but you can't reach further inside of them with a "jar:jar:file" URL. So, when building the classpath for compilation, I have to extract them. I suppose this part is technically a bug if those files have non-ASCII names, but that's rare enough to hopefully not be an issue.

Once I dealt with those, though, things went surprisingly smoothly. I even refactored earlier code to use this, replacing more-complicated streaming logic with conceptually-simpler file-copying logic. My guess is that this new route is slower, but the difference is negligible for my needs, so I'll take the higher abstraction here.

Stream Locales

Unfortunately, while that helped a bit and is definitely conceptually neat, it didn't solve all my trouble. If I recall correctly, at this point, I was able to get the file imported, but the agent name itself was mangled in Notes, something that didn't happen when I compiled it locally.

This brought me to looking into locales used when reading and writing XML from the ZIP or filesystem. Hypothetically, I had done this cleanly. My file-reading utility methods were very similar, just opening up an InputStream (which is too low-level to care about encoding) and passing it along to IBM Commons utilities to interpret it:

public static String readFile(Path path) {
    try(InputStream is = Files.newInputStream(path)) {
        return StreamUtil.readString(is);
    } catch(IOException e) {
        throw new RuntimeException(e);
    }
}

public static Document readXml(Path file) {
    try(InputStream is = Files.newInputStream(file)) {
        return DOMUtil.createDocument(is);
    } catch(IOException | XMLException e) {
        throw new RuntimeException(e);
    }
}

However, I realized that these were insidious traps, too. By not handling encoding on my side, I was leaving it up to the internals to pick a default encoding, which isn't guaranteed to be UTF-8 (even though it really should be for XML). StreamUtil.readString there has a variant that takes an encoding as the second argument, but I decided to instead handle this one step earlier. Rather than using InputStream, which deals with bytes directly, I decided to switch to Readers, which are more specialized for dealing with character sequences. The Files class provides methods to do this:

public static String readFile(Path path) {
    try(Reader r = Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
        return StreamUtil.readString(r);
    } catch(IOException e) {
        throw new RuntimeException(e);
    }
}

public static Document readXml(Path file) {
    try(Reader r = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
        return DOMUtil.createDocument(r);
    } catch(IOException | XMLException e) {
        throw new RuntimeException(e);
    }
}

This way, it's explicit what I'm doing, and it allows for extra optimization at the NIO level if possible.

Writing Back Out

These rules also applied to writing back out. For the most part, Files.newBufferedWriter(..., StandardCharsets.UTF_8) was the way to go, though I did find one extra insidious bit:

try(PrintWriter writer = new PrintWriter(os)) {
    // ...
}

Here, PrintWriter doesn't have a character-set argument at all, and so one could be forgiven (hopefully) for just kind of assuming it'll use Unicode. However, delving into the implementation, it uses OutputStreamWriter's no-charset constructor, which in turn calls Charset.defaultCharset(), and there's your potential bug. Since I didn't actually need PrintWriter as such, I replaced this with a charset-specific call and all was well:

try(Writer writer = new OutputStreamWriter(os, StandardCharsets.UTF_8)) {
    // ...
}

Overall

I felt that this was a pretty good exercise to perform, not just because it'll be immediately useful for NSF ODP, but also because it's a good reminder to be more diligent about character encoding. And it's also just a good lesson for two critical parts of programming: take the higher abstraction when you can and be as explicit as possible in your intent.

By switching to using the ZIP filesystem implementation, I was able to remove an entire step and problem domain from my plate. Now, the code that reads and writes filenames server-side should be able to run on basically any locale setting, without concern for the restrictions of the filesystem (within reason). The code is simpler, the operations are the same whether it's working with the filesystem directly or not, and reading the ZIP'ed ODP should actually be slightly more efficient.

And for the rest, explicitly picking your character set is just good practice. Even in a case where the documentation says that it will default to UTF-8, I think it's better to do it this way, so anyone reading your code can see what you're doing without resting on implied behavior. Certainly, you can be too explicit in places where relying on natural behavior makes sense, but this highlighted that character sets aren't one of those cases.

A Simpler Load-Balancing Setup With HAProxy

Feb 5, 2021, 3:32 PM

...where by "simpler" I mean relative to the setup I detailed six years ago.

For a good long time now, I've had a reverse-proxy + load balancer setup that uses nginx for the main front end and HAProxy as an intermediary to do the actual load balancing. The reason I set it up this way was that I was constrained by two limitations:

  • nginx's built-in load balancing didn't do sticky sessions like I needed, which would break server-side-state frameworks like XPages
  • HAProxy didn't do HTTPS

In the intervening half-decade, things have improved. I haven't checked on nginx's load balancing, but HAProxy sprouted splendid HTTPS capabilities. So, for the new servers I've been setting up, I decided to take a swing at it with HAProxy alone.

Disclaimer

Before I go any further, I should point out that this is only a viable solution because I would otherwise use nginx only for being the HTTPS frontend. In other cases, I've used it to host files directly, run CGI scripts, etc., and it'd be best to keep it around if you want to do similar things.

Basic SSL Config

The "global" section of haproxy.cfg contains settings for your TLS ciphers and related parameters, and Mozilla's config generator is your friend here. Today, I ended up with this (slightly tweaked to generate dhparams locally):

global
	#
	# SNIP: a bunch of default stuff
	#
	crt-base /etc/ssl/private

	# See: https://ssl-config.mozilla.org/#server=haproxy&version=2.0.13&config=intermediate&openssl=1.1.1d&guideline=5.6
	ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
	ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
	ssl-default-bind-options prefer-client-ciphers no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

	ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
	ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
	ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
	
	# sudo openssl dhparam -out /etc/haproxy/dhparams.pem 2048
	ssl-dh-param-file /etc/haproxy/dhparams.pem

Though arcane, that's fairly standard stuff for TLS configuration.

Frontend Config

Years ago, my original config put everything in a listen block, but it's properly split up into frontend and backend now. The frontend block is pretty simple:

frontend frontend1
	bind *:80
	bind *:443 ssl crt star.clientdomain1.com.pem crt star.clientdomain2.com.pem alpn h2,http/1.1
	http-request redirect scheme https unless { ssl_fc }
	default_backend domino

HAProxy's configuration file is almost painfully terse, but at least this part ends up readable enough. I bind to ports 80 and 443 on all IP addresses, and then provide multiple certificate files to be picked based on SNI. Conveniently, HAProxy does a nice job of just picking the right one, and you don't have to explicitly match them up with incoming host names.

One oddity here is the particular format for those ".pem" files. HAProxy expects the actual certificate, its chain, and the private key to all be concatenated together. This is as opposed to nginx, where the chain and private key are two files, or Apache's split into cert+chain+key files. It's also very explicitly not a PKCS file, which is the more-common way to package a key in with the certs: there's no encryption and no password assigned for this.

Additionally, I just put the base names for the files there because they're in /etc/ssl/private, as configured in global.

Back to the rest of the configuration: the http-request line does the work of auto-redirecting from HTTP to HTTPS. Again, very terse, and it's using the ssl_fc configuration token to check if the incoming connection is SSL.

Finally, default_backend domino ties in to the next section.

Backend Config

The backend configuration is the meat of it:

backend domino
	balance roundrobin
	cookie backend insert httponly secure
	option httpchk HEAD /names.nsf?login HTTP/1.0
	http-request add-header $WSRA %[src]
	http-request add-header $WSRH %[src]
	http-request add-header X-ConnectorHeaders-Secret 12345
	
	# "cookie d*" = set and use a cookie to tie to the backend
	# "check" = I don't know, but I assume it checks something
	# "ssl" = Connect to the backend with SSL
	# "verify none" = Don't bother with SSL verification checks
	# "sni ssl_fc_sni" = Use the incoming SNI hint when connecting to the backend
	server domino-1 domino-1.client.com:443 cookie d1 check ssl verify none sni ssl_fc_sni
	server domino-2 domino-2.client.com:443 cookie d2 check ssl verify none sni ssl_fc_sni

The balance roundrobin and cookie ... lines tell HAProxy to cycle through the backends for incoming connections, but to stick the client with a specific backend server based on the value of the backend cookie, if present, and then to set it in the response. That covers our sticky sessions.

The next line, option httpchk HEAD /names.nsf?login HTTP/1.0, tells HAProxy how to check the health of the servers. This should be something very inexpensive that's also a reliable way to tell if the server is working. I went with asking for headers for the default login page - something all Domino servers (with session auth) will have and which doesn't risk running application code like / might.

The next three lines are my beloved Domino connector headers, plus the shared secret from my locking-down DSAPI filter (I mean, it's not the actual shared secret, but that's where it goes). Note that I don't need to include $WSSN to denote the requested Host value, since HAProxy passes that along by default.

Finally, there are the actual backend configuration lines. Because the load balancer is communicating with Domino via SSL, I tell it to do so and to not bother validating the certificates. Additionally, I tell it to pass along the incoming SNI hint to Domino, which, since Domino finally supports SNI, routes the request to the correct web site on the Domino side.

If you were to connect to the Domino servers via HTTP, you could snip off a bit from those lines and add http-request add-header $WSIS True above.

Conclusion

I haven't actually put this into production yet, so the details may change, but I'm thoroughly pleased that I can simplify the configuration a good deal. I've found learning about how to configure HAProxy a little less pleasant than learning about nginx, but part of that is just learning some of the terminology and how to navigate the documentation - it's all there; it's just a little arcane.

XPages: Dealing With "Cookie name X is a reserved token"

Feb 3, 2021, 10:49 AM

Tags: xpages

The other day, John Dalsgaard asked a question in the XPages Slack Community to do with an exception that a client was seeing when going to any XPage:

java.lang.IllegalArgumentException: Cookie name ""categories":"[\"performance\",\"unclassified\",\"targeting\",\"functionality\"]"" is a reserved token
	at javax.servlet.http.Cookie.<init>(Cookie.java:144)
	at com.ibm.domino.xsp.bridge.http.servlet.XspCmdHttpServletRequest.parseCookieString(XspCmdHttpServletRequest.java:349)
	at com.ibm.domino.xsp.bridge.http.servlet.XspCmdHttpServletRequest.getCookies(XspCmdHttpServletRequest.java:283)
	at com.ibm.domino.xsp.bridge.http.servlet.XspCmdHttpServletRequest.readSessionId(XspCmdHttpServletRequest.java:185)
	at com.ibm.domino.xsp.bridge.http.servlet.XspCmdHttpServletRequest.<init>(XspCmdHttpServletRequest.java:156)
	at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:256)

As the uncharacteristically-short stack trace implies, this happens long before any actual XPage code in an NSF. What's going on here is that something - possibly a too-clever-for-its-own good script - set a cookie using a JSON value so that it can store structured data. However, this is kind of an illegal thing to do: by the spec, commas are reserved in the Set-Cookie header and, by virtue of the shared cookie-octet part of the spec, are also illegal in the client-sent Cookie header.

Who Is Wrong Here?

And actually, as I type, I'm starting to blame XPages less for this: commas in HTTP headers indicate multiple wholly-distinct values. For example, take an Accept header, like:

text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

The commas there indicate distinct values according to the HTTP spec itself, while the semicolons are just an idiom used by the Accept header.

The Cookie header doesn't make use of this meaning of the comma, instead relying entirely on semicolons for some reason. Still, HTTP-wise, it seems that a server should treat this:

Cookie: foo=[bar,baz];othercookie=hi

...as equivalent to this conceptual version:

Cookie: foo=[bar
Cookie: baz];othercookie=hi

... which then should break, as "baz];othercookie" is wildly illegal in the rules for tokens because "]" and ";" show up in the separators list.

Long story short, unencoded JSON is extremely likely to run afoul of all sorts of rules here, and ideally the browser wouldn't send a header like that in the first place.
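As an aside, if you happen to control the code that sets such a cookie, the cleaner fix is to encode the value so reserved characters never appear in the header at all. This is just an illustrative sketch on the Java side (the names are hypothetical, and the same idea applies to client-side script via encodeURIComponent):

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CategoriesCookieUtil {
    // Write the JSON as a URL-encoded cookie value, so commas, quotes, and
    // brackets never appear literally in the Set-Cookie header
    public static void writeCategories(HttpServletResponse resp, String json) throws UnsupportedEncodingException {
        String encoded = URLEncoder.encode(json, StandardCharsets.UTF_8.name());
        resp.addCookie(new Cookie("categories", encoded));
    }

    // Decode it on the way back in
    public static String readCategories(Cookie cookie) throws UnsupportedEncodingException {
        return URLDecoder.decode(cookie.getValue(), StandardCharsets.UTF_8.name());
    }
}

When the cookie comes from third-party code you don't control, though, you're left with the server-side switch below.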

The Workaround

The XPages developers were aware of this, but made the fix an opt-in thing at the server filesystem level. To avoid this specific trouble, go to the "xsp" directory in your Domino program directory (not the data directory), create a file named "bootstrap.properties", and set its contents to:

xsp.commas.not.delimiters.in.cookie=true

To my knowledge, the only "documentation" that exists for this is an incidental mention in the XPages Portable Command Guide, where the property being false by default shows up in the sample output from running tell http xsp show settings on the console. Fortunately, once you know that it exists, the name is pretty self-documenting, and it does just what it says on the tin.

As with other server configuration options, I think this should be configurable at the NSF level, and should at the very least be something configurable in the data directory. Doing anything in the program directory only gives me the willies. The stack should also give a better error earlier, rather than relying on the Servlet Cookie class to balk at the malformed name.

In any event, if you have a case where you're using a library or same-domain-server app that sets a header like this, this property should help.

A Partially-Successful Venture Into Improving Reverse Proxies With Domino

Jan 30, 2021, 4:00 PM

I've long been an advocate for Domino's HTTPEnableConnectorHeaders notes.ini setting. That's the one that lets you pass some WebSphere-derived headers like $WSRA and (particularly-dangerously) $WSRU to Domino and have them fully override the normal values for the incoming host, user, and more.

I'm still a big fan of it, but it always comes with the irritating absolute requirement that Domino not be publicly-accessible, lest any schmoe come along and pretend to be any user on your server. That's fine and all, but sometimes it's useful to have Domino be accessible without the proxy, such as for troubleshooting. What I really want is selective enabling of this feature based on source IP. So I set out this weekend to try to implement this.

The Core Idea

The mechanism for doing the lowest-level manipulation you can with HTTP on Domino is to use DSAPI, the specialized API for the web server task specifically. Unfortunately, working with DSAPI means writing a native library with C ABI entrypoints for a handful of known functions. I don't enjoy writing C (though I respect it thoroughly), but I've done it before, so I set out making a project plan.

My goal was to support the X-Forwarded-* headers that are common among reverse proxies, allowing administrators to use this common idiom instead of the $WS* headers, and also having the side effect of removing the "act as this user" part from the equation. I figured I'd make up a notes.ini property that took a comma-separated set of IP patterns, look for requests coming in from those hosts, and elevate the X-Forwarded-* headers in them up to their equivalent real properties.

Initial Side Trip Into Rust

Since C in the hands of a dilettante such as myself has a tendency to result in crashed servers and memory bugs, I figured this might be a good opportunity to learn Rust, the programming language that sprang out of Mozilla-the-organization, focuses heavily on safety, and has some mechanisms for low-level C interoperability.

I figured this would let me learn the language, have a safer program, and avoid having to bring in third-party C libraries for e.g. regex matching.

I made some progress on this, getting to the point where I was able to actually get the filter compiled and loaded properly. However, I ended up throwing up my hands and shelving it for now when I got lost in the weeds of how to call function pointers contained in pointers to C structs. I'm sure it's possible, but I figured I shouldn't keep spending all my time working out fiddly details before I even knew if what I wanted to do was possible.

Back to C

So, hat in hand, I returned to C. Fortunately, I realized that fnmatch(3) would suit my pattern-matching needs just fine, so I didn't actually need to worry about regexes after all.

Jaw set and brimming with confidence, I set out writing the filter. I cast my mind back to my old AP Computer Science days to remember how to do a linked list to allow for an arbitrary number of patterns, got my filter registered, and then set about intercepting requests.

However... it looks like I can't actually do what I want. The core trouble is this: while I can intercept the request fairly early and add or override headers, it seems that the server/CGI variables - like REMOTE_HOST - are read-only. So, while I could certainly identify the remote host as a legal proxy server and then read the X-Forwarded-For header, I can't do anything with that information. Curses.

My next thought was that I could stick with the $WS* headers on the proxy side, but then use a DSAPI filter to remove them when they're being sent from an unauthorized host. Unfortunately, I was stymied there too: the acceptance of those headers happens before any of the DSAPI events fire. So, by the time I can see the headers, I don't have any record of the true originating host - only the one sent in via the $WSRA header.

The Next-Best Route

Since I can't undo the $WS* processing, I'd have to find a way to identify the remote proxy other than its address. I decided that, while not ideal, I could at least do this with a shared secret. So that's what I've settled on for now: you can specify a value in the HTTPConnectorHeadersSecret notes.ini property and then the filter will verify that $WS*-containing requests also include the header X-ConnectorHeaders-Secret with that value. When it finds a request with a connector header but without a matching secret, it unceremoniously drops the request into processing oblivion, resulting in a 404.

I'm not as happy with this as I would have been with my original plan, but I figure it should work. Or, at least, I figure it will enough that it's worth mulling over for a while to decide if I want to deploy it anywhere. In the mean time, it's up on GitHub for anyone curious.

Additionally, kindly go vote for the X-Forwarded-For ideas on the Domino Ideas portal, since this should really be built in.

fontconfig, Java, and Domino 11

Jan 29, 2021, 12:09 PM

Tags: java

In my last post, I quickly mentioned some trouble I had run into with fontconfig and Poi, in the context of configuring a Docker-based Domino server. However, I think it deserves its own post, so I have something to point to if others run into the same trouble down the line.

The Upshot

The upshot of the issue is that, if you're going to use Poi or other graphics-adjacent Java libraries in Domino 11 on Linux, you'll need fontconfig and potentially some other support files installed on your system. If you have any GUI stuff installed, they'll probably be there, but it's common for them to be missing on headless servers.

For the official Domino Docker image, which uses Red Hat's package system, I wrote this Dockerfile for my derivative version:

FROM domino-docker:V1101FP2_10202020prod
USER root
RUN yum install --assumeyes fontconfig urw-fonts
USER notes

On Debian-based systems, I believe you just need apt install fontconfig.

The Details

AdoptOpenJDK builds of Java apparently don't include the same font-related support files that the Oracle ones did, and that results in calls made to the AWT layer throwing NullPointerExceptions at various times related to getting font information. This has shown up in a couple of issues over in the AdoptOpenJDK projects on GitHub, with two representative ones being:

https://github.com/AdoptOpenJDK/openjdk-support/issues/80

java.lang.NullPointerException
  at sun.awt.FcFontManager.getDefaultPlatformFont(FcFontManager.java:76)
  at sun.font.SunFontManager$2.run(SunFontManager.java:433)
  ...

https://github.com/AdoptOpenJDK/openjdk-build/issues/693

java.lang.NullPointerException
  at sun.awt.FontConfiguration.getVersion(FontConfiguration.java:1264)
  at sun.awt.FontConfiguration.readFontConfigFile(FontConfiguration.java:219)
  at sun.awt.FontConfiguration.init(FontConfiguration.java:107)
  ...

Domino 11 switched from IBM's proprietary variant of J9 to OpenJ9, and this is another one of the little fiddly details that isn't quite the same between the two.

Most commonly, I've found this crop up when using Poi, specifically calling autoSizeColumn when generating a spreadsheet, but in theory any number of things like that will trip across this. Unfortunately, the internal JVM classes aren't terribly helpful in their error reporting, since they get several method calls in just assuming that all is well with the world before bailing with NPEs like the ones above.
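For illustration, this is the sort of Poi usage that trips it - a minimal, hypothetical sketch rather than code from any particular project. The auto-sizing call is what drags the AWT font machinery into play:

import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExportExample {
    public static void main(String[] args) throws Exception {
        try (Workbook workbook = new XSSFWorkbook()) {
            Sheet sheet = workbook.createSheet("Export");
            Row header = sheet.createRow(0);
            header.createCell(0).setCellValue("Name");
            header.createCell(1).setCellValue("Amount");

            // autoSizeColumn measures rendered text widths via AWT, which is where
            // the missing-fontconfig NullPointerException surfaces on headless servers
            sheet.autoSizeColumn(0);
            sheet.autoSizeColumn(1);

            try (OutputStream os = Files.newOutputStream(Paths.get("export.xlsx"))) {
                workbook.write(os);
            }
        }
    }
}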

It's a mild annoyance to deal with, but fortunately one with a straightforward fix, at least once you know what the trouble is.

Notes From A Week in Administration-Land

Jan 26, 2021, 8:06 PM

Tags: admin domino

This past week, I've been delving back into the world of Domino administration for a client and taking the opportunity to brush up on the niceties we've gotten in the past few releases. A few things struck me as I went along, so I'll list them here in no particular order.

Containers are the Way to Go

This isn't too much of a surprise, I suppose, but I imagine the only reason I'll set up a server the "installer" way again is for dev servers on Windows. Using Docker just skips over that installation phase completely and makes things so much quicker and more consistent.

It also essentially forces you to make an install-support script in the form of a Dockerfile. I started out using the default one from FlexNet, but then had a need to install fontconfig to avoid this delightful little gotcha that crops up with Poi. Since the program container is intended to be ephemeral, this meant that I had to make a Dockerfile to make a proper image for it, and now there's inherently an artifact for future admins to use.

Cluster Symmetry is a Delight

Years ago, I wrote a "Generic Replicator" agent that I would configure per-server to tell it to do the work of mirroring all NSFs. It's done yeoman's work since then, but I'm all the happier to use a built-in capability. So, tip-of-the-hat to the team that added that one in.

It'd be nice if it didn't also require notes.ini settings, but I suppose that's the way of things.

DBMT is Still Great

I know it's years and years old at this point, but I can never help but gush over DBMT. It's great and should be promoted to an on-by-default setting instead of being something you have to know to configure via a Program document.

It Still Sucks to Configure Every Server Doc

Every time I make a new server document, there's this pile of obligatory "fix the defaults" work: filling in all the stuff on the security tab, enabling web site documents, changing all the fiddly Ports tab options (including having to enable enforcing access settings (?!)), and so forth. That's on top of the giant tower of notes.ini settings in the Configuration document, but at least those can be applied to a server group and are less tedious once you know them.

I put an idea in for that last year and it sounds like it's in the works, so... that'll be nice.

The Server Doc Could Use Lots More Settings

I took the opportunity of re-laying-out servers to move as much as I can out of the data directory - namely, DAOS, transaction logs, FT indexes, and view indexes. The first two of these are configurable in the server doc, which is nice, but the latter two require specification via notes.ini properties. Since they're server-specific, it feels like a leaky abstraction to put them in a Configuration document - while it would work, and I could remove them from the doc once applied, that's just gross.

It would also be good to have a way to properly share filesystem-bound files and have them auto-deployed. For example, I have a notes.ini property in the Configuration doc for JavaUserOptionsFile=jvm.properties. The property is set automatically, but I have to create the file manually per-server. I could certainly write an agent to do that, and it'd work, but it's server configuration and belongs in the Directory.

Ideally, I'd like to be able to obliterate the container and data image, recreate them with the ID and location info, and have the server reconstitute itself after that entirely from NSF-based configuration.

HTTP is Better Than It Used to Be, But Still Needs Work

I'd love to replace my use of the WebSphere connector headers with X-Forwarded-For, but it doesn't work like that, and I'm not about to write a DSAPI filter to do it. Ideally, that'd be supported and promoted to the server config.

Same goes for Java-related settings that you just have to kind of magically know, like HTTPJVMMaxHeapSize+HTTPJVMMaxHeapSizeSet and ENABLE_SNI (I don't know why you wouldn't want SNI enabled by default).

The SSL cert manager in V12 can't come soon enough.

HTTP's better off than it was for a while, and it's nice that the TLS stack isn't dangerous now, but knowing the right way to configure it is still essentially playground lore.

Domino Configuration Tuner Deserves a New Life

I remember discovering DCT back at my old company in the 7.x days, but it unfortunately looks like it hasn't been updated since not long after that, and now doesn't even parse the current Domino version correctly. If it was brought up to date and produced reliable suggestions, it'd be huge.

As it is, my server configuration docs have all sorts of notes.ini properties like NLCACHE_SIZE=67108864 and UPDATE_NOTE_MINIMUM=40 that I saw recommended somewhere once years ago, but I have no idea whether they're still good or appropriately-sized. I want the computer to tell me that (and, in a lot of cases, just do the right thing without configuration).

Conclusion

Anyway, those are the things that came to me as I was working on this. The last few major releases have had some huge server-side improvements, and I like that the pace is continuing. Good work, server core team.

State of my Workspace 2021

Jan 15, 2021, 10:59 AM

Tags: sotu
  1. State of my Workspace 2017
  2. State of my Workspace 2021

A few years back, I wrote a post about the State of my Workspace, and I figure now is as good a time as any to write an updated version.

That said, looking through my old post, it's a little surprising how much remains the same, considering how many of the categories were allegedly in transition at the time.

Eclipse is Eternal, Apparently

Over the years, I've spent varying amounts of time in IntelliJ, and it's remarkably good in a lot of ways. I love how well-integrated its features are, and the revamped Endpoints pane in the latest releases is outstanding. It's speedy, consistently-updated, and well-considered. It particularly shines when working with a single app (either an individual thing or a small multi-module project), such as working on this blog.

That said, I still use Eclipse daily and always return to it when I try to switch away. Eclipse itself has been improving steadily too. Recent quarterly releases have focused a lot on performance, and it's paid off - the switch to non-blocking completion proposals in particular has brought some of the editing snappiness that other editors revel in. In addition, the Wild Web Developer project, though still jankier than, say, VS Code, does a good-enough job bringing fully-modern JS editing to Eclipse by way of using the same underlying LSP as other editors (the fact that it's also the backbone of NSF ODP's Eclipse support helps too). It also has some of the inline Java hints like IntelliJ does now, too, dubbed "Code Minings".

There are a few things that keep me coming back to Eclipse other than just "it's speedy like others now", too. Most of my work involves working with multiple sprawling Maven trees from distinct repositories. While IntelliJ can do that by way of importing modules, Eclipse's UI just makes it easier to manage. There's also the eternal back-and-forth about Build Automatically, and I come down thoroughly on Eclipse's side there. While IntelliJ has some options to do something like that, it's not as consistent as Eclipse. With Eclipse, I can be confident that the Problems pane will always show everything applicable. In general, I just feel more in-control of a large set of projects when working in Eclipse, and that goes a long way.

Issue Trackers Are Even Worse Off Now

Keeping track of open issues for my various tasks - both open-source projects and client work - remains a real thorn in my side, and the situation has degraded further over the years. I quite liked using the app Ship years ago, but then the company behind it shut down. The Mylyn Bitbucket connector, too, kept breaking and I gave up on it entirely a while ago. This has left me back in the situation of manually checking various browser tabs to follow everything, and that stinks.

Every once in a while, I've tinkered with writing my own little coordinated issue-tracker app to bridge all the differences, but it never really gets off the ground. It's just one of those things where it never really makes sense to sink a bunch of non-billable time into slightly improving the way I manage actually-billable time. Maybe one day it'll tip over the mental threshold and I'll actually solve the problem. We'll see.

My Computer Is Due For A Change

I ended up buying the iMac Pro I mentioned in my last post, and it's been pretty decent. However, I've hit the same trouble that Marco Arment of ATP ran into, which is that the fans on this thing go constantly. It crept up over time, with them just spinning up when I would e.g. compile a bunch of stuff at once, but now they're going basically any time I use it. It's still under AppleCare, but, between the pandemic and the fact that there's no time when it's convenient for me to be without my development environment for a week, I haven't taken it in. It's just led to me getting more and more annoyed over time.

Fortunately, the M1 Macs should be an opportunity to cut the Gordian knot: my old MacBook was due for a replacement, so I'm swapping that out for an Air. In theory, that Air will be equivalent or better than the iMac Pro for the type of work I do anyway, and I'd long ago moved my Windows VM to a PC in the basement. I plan to give it a shot as my main desktop once it arrives, and that'll also give me some room to take this thing in to be fixed. I'm looking forward to that, since it'll be nice to not glare at my computer in annoyance all day.

Oh, and, since I mentioned Storage Spaces last time: since then, I took an old tower Mac Pro and installed FreeNAS in it, and it's a delight. FreeNAS is cool and I legitimately missed working with FreeBSD. So now I have that thing handling mass storage, while my gaming PC runs Windows Server and hosts my various development VMs using Hyper-V.

Other Misc. Tools

In my last post, I mentioned Franz as a coordinated tool for the ridiculous number of chat apps I had. I ended up settling on it and, other than some consistent problems with notifications and audio, it does a good-enough job. Certainly, it's a much-better experience than running all of those stupid apps individually, that's for sure.

I also did indeed make the switch from SourceTree to Tower. While I appreciated the $0 price of SourceTree, its general crashiness and tendency to hang on complex operations got to me and I've been using Tower ever since. It does exactly what it's supposed to and does it well. Nice job, Tower.

Browser-wise, I used Firefox Developer Edition for a while, but switched back to Safari when Firefox developed a tendency to somehow hard-crash my computer. I don't know if it's a computer-specific thing (likely related to whatever the fan trouble is, I'd guess) or trouble with Firefox, but it's not exactly the sort of thing I want to have to diagnose. Safari is speedy and I'm getting used to the developer tools, so it's a fine replacement.

For mail, I (and my company) have been using Spark for a while, and it's great. The ability to share emails and have inline threads among people in your organization is wonderful.

For calendars, I use Fantastical. I miss the days when I didn't have to really care about calendars at all, but, given that I regularly have meetings, this does a splendid job dealing with them. I quite appreciate the recent additions of weather reports and the automatic detection of meeting URLs, too.

For blogging, I use MarsEdit, both because it's good on its own and because writing an API for my blog meant I didn't need to bother making a good editing UI on my own. I think it's also just conceptually good to use tools that work with open protocols like that.

Getting Started with Hotwire in a Java Webapp

Jan 12, 2021, 5:19 PM

Whenever I have a great deal of discretion over how a web app is made these days, I like to push to see how simple I can make the front end portion. I spend some of my client time writing heavy client-JS front ends in React and Angular and what-have-you, and, though I get why they are good, I kind of hate them all.

One of the manifestations of my desires has been this very blog, where I set out to try not only some interesting current tools on the Java side, but also challenged myself heavily to use little to no JavaScript. On that front, I was tremendously successful - and, in fact, the only JavaScript on here is the Turbolinks library, which intercepts same-app links and updates the changed parts inline, without the server knowing about the "partial refresh" going on.

Since then, Turbolinks merged with its cousin Stimulus and apotheosized into Hotwire, which is somewhere in between a JavaScript framework and a manifesto. Specifically, it's a manifesto to my liking, so I've been champing at the bit to use it more.

Hotwire Overview

The "Hotwire" name is a cheeky truncation of HTML-over-the-wire, which itself is a neologism for how the web has historically worked: your server sends HTML, and then your browser does stuff with that. It "needs" a new name to set it apart from full-JS apps, which amount to basically sending an application to the browser, having it initialize the app, and then having the app do what would otherwise be the server's job by way of shuttling JSON around.

Turbo is the part that subsumed Turbolinks, and it focuses on enhancing existing HTML and providing a few web components to bring single-page-application niceties to server-rendered apps. The "Drive" part is Turbolinks, so that was familiar to me. What interested me next was Turbo Frames.

Turbo Frames

If you've ever used the XPages Dojo Tab Container's partialRefresh property before, Turbo Frames will be familiar. There are two main ways you can go about using it: making a "frame" that contains some navigable content (say, a form) that will then refresh in-place, or making a lazy-loaded frame that pulls from another URL. The latter is what interested me now, and is what carries similar benefits to the Tab Container. It lets you serve the main page and then defer the complex computation of an inner part without having to write your own JavaScript to do an API call or otherwise populate the section.

In my case, I wanted to do something very similar to the example. I have my main page, then a sidebar that can be potentially complicated to generate. So, I set up a Turbo Frame using this bit of JSP:

<turbo-frame id="links" src="${pageContext.request.contextPath}/links"></turbo-frame>

The only difference from the example, really, is the bit of EL in ${...}, which just makes sure that the final URL adapts to wherever the app is hosted.

The "links" resource there is another MVC controller that renders a different JSP page, truncated like:

<html>
    <head>
        <script type="text/javascript" src="${pageContext.request.contextPath}/webjars/hotwired__turbo/7.0.0-beta.2/dist/turbo.es5-umd.js"></script>
    </head>
    <body>
        <turbo-frame id="links">
            <!-- expensive content here -->
        </turbo-frame>
    </body>
</html>

The <turbo-frame id="links"> on the initiating page matches up with the one in the embedded page to figure out what to extract and render.

One little side note here is my use of WebJars to bring in Turbo. This isn't an NPM-based project, so there's no package.json bringing the dependency in, but I also didn't want to just paste the JS into my project. Fortunately, WebJars does yeoman's work: it makes various JS libraries available in Servlet-friendly Java JAR format, giving you a JAR with the JS from whatever the library is in META-INF/resources. In turn, an at-least-reasonably-modern servlet container will serve files up from there as if they're part of your main app. That way, you can just use a Maven dependency and not have to worry.

A Hitch: 406 Not Acceptable

Edit 2021-01-13: Thanks to a new release of Turbo, this workaround is no longer needed.

When I first put this together, I saw that Turbo was doing its job of fetching from the remote URL, but it was getting a 406 Not Acceptable response from the server. It took me a minute to figure out why - the URL was correct, it was just a normal GET request, and nothing immediately stood out as a problem in the headers.

It turned out that the trouble was in the Accept header. To work with other Turbo components, Frames makes a request with a header like Accept: text/html; turbo-stream, text/html, application/xhtml+xml. That first one - text/html; turbo-stream - is problematic. I'm not sure if it's the presence of a qualifier at all on text/html, the space, or the lack of an = (as in text/html;charset=UTF-8), but Liberty didn't like it.

Since I'm not (yet, at least) using Turbo Streams, I decided to filter this out on the server. Since MVC is built on JAX-RS, I wrote a JAX-RS request filter to find any Accept values of this type and strip them out:

@Provider
@PreMatching
public class TurboStreamAcceptFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        MultivaluedMap<String, String> headers = requestContext.getHeaders();
        if(headers.containsKey(HttpHeaders.ACCEPT)) {
            List<String> cleaned = headers.get(HttpHeaders.ACCEPT).stream()
                .map(accept -> {
                    String[] vals = accept.split(",\\s*"); //$NON-NLS-1$
                    List<String> localClean = Arrays.stream(vals)
                        .filter(val -> val.indexOf(';') < 0)
                        .collect(Collectors.toList());
                    return String.join(", ", localClean); //$NON-NLS-1$
                })
                .collect(Collectors.toList());
            headers.put(HttpHeaders.ACCEPT, cleaned);
        }
    }
}

Since those filters happen before almost anything else, this cleared up the trouble.

Summary

Setting the Accept quirk aside, this was a pleasant success, and I look forward to using this more. I've found the modern Java stack of JAX-RS + CDI + MVC + simple JSP to be a delight, and Hotwire slots perfectly-smoothly into it. I still quite enjoy rendering HTML on the server and the associated perk of not having to duplicate business logic on both sides. Next time I have an app that requires a bit of actual JavaScript, I'll likely throw Stimulus into the mix here.